Baking ambient occlusion: Exporting from Maya into Unity

This is a post I’ve been meaning to write for a long while regarding my experiences baking an ambient occlusion texture onto a model for import into Unity.

First off, baking ambient occlusion is absolutely worth the effort to produce an appealing result. It makes the details in a scene really pop. (Ambient occlusion captures the self-shadowing of nearby surfaces under ambient light.)

With ambient occlusion
MallAmbTex

Without ambient occlusion
MallNoTex

Additionally, by baking the effects of ambient occlusion onto the model, there is no additional overhead to the effect beyond displaying textures. Although I suspect that if you are reading this, I’m already preaching to the converted.

Happily, there are a lot of resources for doing this sort of thing, but the potential pitfalls can vary a lot depending on one’s workflow and approach. The approach I took was to

  • Use Mental Ray to compute ambient occlusion for static objects in the scene.
  • Bake the result to texture.
  • Export the model as an FBX with textures embedded.

So let me give the details and then enumerate the pitfalls, er, challenges, I experienced while working this out.

Baking ambient occlusion

Step #1 is to compute ambient occlusion for your scene and then bake the result to texture in Maya. There are a lot of resources for this online, but the video below by Josh Robinson is my favorite.

Baking Ambient Occlusion in Maya from Josh Robinson on Vimeo.

If you are unfamiliar or rusty with Maya, it’s really worth watching the video to see how things are done with Maya’s UI. Below is a summary of the steps in the video:

  • Add a surface shader to your objects. Right click on the object, assign a new material, and choose a surface shader. The object will now look black.
  • Make sure Mental Ray is enabled. If it is not enabled, objects will be rendered black. This can be set in Window -> Settings/Preferences -> Plug-in Manager. Make sure mayatomr.mll is checked. By the way, if you don’t have the FBX plugin enabled, you should make sure it is also checked.
      Plug-inMenu
  • Set up ambient occlusion shading using Mental Ray (a scripted sketch of these steps follows this list)
    • Add a mib_amb_occlusion shader. Use the Window -> Rendering Editors -> Hypershade dialog.
    • Connect mib_amb_occlusion to the surface shader (middle click and drag from mib_amb_occlusion to the surface shader and select ‘Default’)
    • In the mib_amb_occlusion node’s attributes, set the number of occlusion rays (Samples) to 128
    • Do a test render. You should see ambient occlusion shading.
  • Bake the ambient occlusion effect to texture
    • Under the menu Lighting/Shading -> Batch Bake (mental ray), open the option dialog. Set the file resolution and file type.
    • Under the menu, Window -> Rendering Editors -> Render Settings, make sure the renderer is set to Mental Ray. Use high quality settings for the best looking results.
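
For those who like scripting, here is a minimal maya.cmds sketch of the shader setup above. Treat it as a sketch under assumptions rather than the exact steps from the video: it assumes the Maya 2014-era mental ray plug-in, and the node and attribute names I use (the occlusion node’s samples and outValue attributes, the plug-in name ‘Mayatomr’) may differ in other versions.

    import maya.cmds as cmds
    # Make sure mental ray is loaded (same as checking mayatomr.mll in the
    # Plug-in Manager); the plug-in name can vary by platform and version.
    if not cmds.pluginInfo('Mayatomr', query=True, loaded=True):
        cmds.loadPlugin('Mayatomr')
    # Create the surface shader and a shading group for it.
    aoShader = cmds.shadingNode('surfaceShader', asShader=True, name='aoSurfaceShader')
    aoSG = cmds.sets(renderable=True, noSurfaceShader=True, empty=True, name='aoSurfaceShaderSG')
    cmds.connectAttr(aoShader + '.outColor', aoSG + '.surfaceShader')
    # Create the occlusion node, raise the ray count to 128, and feed its output
    # into the surface shader (the connection otherwise made by dragging in Hypershade).
    aoNode = cmds.shadingNode('mib_amb_occlusion', asTexture=True, name='aoOcclusion')
    cmds.setAttr(aoNode + '.samples', 128)
    cmds.connectAttr(aoNode + '.outValue', aoShader + '.outColor')
    # Assign the shader to whatever objects are currently selected.
    for obj in cmds.ls(selection=True):
        cmds.sets(obj, edit=True, forceElement=aoSG)

After running this with the static objects selected, a test render with Mental Ray should show the occlusion shading, and the Batch Bake step proceeds as described above.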

When things don’t work

This technique relies on your models having good texture coordinates.

If your model remains completely black after baking, your model may not have UV (i.e., texture) coordinates, or they may be set up incorrectly. I ran into this problem with a few old models stored in OBJ format. Setting up good coordinates is beyond the scope of this tutorial (and I am not an expert texture artist myself), but this is something to be aware of.
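
A quick way to spot this case is to ask Maya how many UV coordinates a mesh actually has. The snippet below uses standard maya.cmds calls and prints any selected mesh that has none:

    import maya.cmds as cmds
    # Report selected meshes with no UV coordinates; these will bake out black.
    for mesh in cmds.ls(selection=True, dag=True, type='mesh'):
        if not cmds.polyEvaluate(mesh, uvcoord=True):
            print mesh, "has no UV coordinates"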

Other problems can occur if texture coordinates are duplicated or misaligned. This writeup gives a few examples with solutions using 3DS Max.

The scene which I was shading had a lot of objects. By default, each object had its own texture, which turned out to be problematic for me, since using the same texture resolution for every object was inappropriate. For large scene objects, I needed to increase the resolution to get a better-looking result (a scripted way to bump the resolution is sketched after the images below).

Effect of a low-resolution texture
AmbSmallTex

Effect of a high-resolution texture
AmbBigTex
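
If you use texture bake sets, the resolution can also be raised from script. This is only a hedged sketch: ‘bakeSet1’ is a hypothetical name standing in for whichever bake set the Batch Bake options created for the large objects, and the attribute names assume the standard textureBakeSet node.

    import maya.cmds as cmds
    # 'bakeSet1' is a placeholder; substitute the bake set assigned to the large objects.
    cmds.setAttr('bakeSet1.xResolution', 2048)
    cmds.setAttr('bakeSet1.yResolution', 2048)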

A final problem, which I have not entirely fixed in my own environment, is occasional seams in the baked textures. As there are other resources better suited to troubleshooting such problems in Maya, I am going to move on to the potential gotchas of importing the model into Unity.

Export the result to FBX

Step #2 is to export the result to FBX. So far, I have never run into a problem with this process, and using FBX in Unity has a number of advantages over using Maya’s .MB format:

  • Fewer path problems. Embedded textures are unpacked automatically with good relative paths.
  • The FBX file does not require someone to have Maya installed to load it
  • The FBX file is often smaller

If you can’t find the FBX import/export options, you may need to enable the plug-in manually. Under the menu Window -> Settings/Preferences -> Plug-in Manager, make sure fbxmaya.mll is checked.
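
If you script your exports, the same check can be done programmatically; a small sketch using standard maya.cmds calls:

    import maya.cmds as cmds
    # Load the FBX plug-in if the Plug-in Manager does not already have it checked.
    if not cmds.pluginInfo('fbxmaya', query=True, loaded=True):
        cmds.loadPlugin('fbxmaya')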

It’s very important to embed the texture assets when you export.

EmbedMedia
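
For a scripted export, the FBX plug-in also exposes MEL commands that can be driven from Python. The sketch below is an outline under assumptions rather than the exact dialog settings: the output path is a placeholder, and the available FBX commands can vary with the plug-in version.

    import maya.mel as mel
    # Embed the baked textures in the FBX, then export the scene.
    # The path is a placeholder; point it at your Unity project's Assets folder.
    mel.eval('FBXExportEmbeddedTextures -v true')
    mel.eval('FBXExport -f "C:/path/to/UnityProject/Assets/scene.fbx"')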

Import the FBX into Unity

Step #3 is to import the FBX file into Unity. In general, this is as simple as copying the FBX into your Unity project’s Assets directory.

However, if your textures do not load correctly, you might be running into one of the following:

  • The filenames generated by Maya during the baking process may be too long for Unity to load them. To shorten them, you can specify a shorter prefix in the Mental Ray Baking options (under Texture bake settings), or try shortening the project directory from the menu File -> Project -> Edit Current.  Note that if you repeatedly re-bake the textures, Maya will repeatedly prepend the prefix, leading to names that look like “baked_baked_baked_baked_object1Lighting.tiff”.
  • Multiple objects in the scene may be using the same texture name (this leads to one of the objects using the wrong texture; the result is usually big black splotches in the wrong places). To fix, change the model’s import settings in Unity to use prefixes before material names.
    NameMaterials
  • Maya may be connecting the result of ambient occlusion to “incandescence” instead of color, which Unity ignores. We need it to connect to color instead.
    ShaderNetwork
    This can be fixed manually, but if you have a lot of objects, a script is easier. The following script connects the outColor property of each texture (a ‘file’ node) to the color property of the corresponding lambert material.
    import maya.cmds as cmds
    # Walk every node in the scene and look for file texture nodes.
    allObjects = cmds.ls(l=True)
    for obj in allObjects:
        if cmds.nodeType(obj) == 'file':
            # listConnections returns None when there are no connections.
            connections = cmds.listConnections(obj + ".outColor", d=True, s=True) or []
            print obj, "connects", connections
            for connection in connections:
                if cmds.nodeType(connection) == "lambert":
                    print connection
                    try:
                        # Wire the baked texture into the material's color slot.
                        # (The connection fails harmlessly if it already exists.)
                        cmds.connectAttr(obj + '.outColor', connection + '.color')
                    except RuntimeError:
                        pass

Here’s a demo of the environment shown at the start, running in a browser with Unity’s web plugin (warning: it’s a big environment, the plugin is ~7 MB). The camera can be zoomed in and out with the middle mouse button and rotated with Alt-LeftDrag.


MallEnvThumbnail

And here are some screenshots. Up close, some of the objects still have artifacts (seams) that need fixing, but I’m pleased with how good the scene looks at a distance.

MallEnv1

MallEnv2

If you find this writeup helpful, let me know. Also let me know if you have additional tips, gotchas, and/or alternative approaches. I will add them.

Reflections on making research more practical

During the I3D (aka the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games) dinner, Chris Wyman from NVIDIA Research gave a speech entitled “Bridging the gap: Making research more practical”.

Chris voiced two primary complaints about academia: (1) that academics produce too much research that is of no practical use, and (2) that research that would be relevant to industry is too difficult to publish (*).

First, I really respect that Chris spoke about his own experiences and frustrations so openly (because of the highly political nature of publishing and promotions, academics are often afraid to voice their opinions publicly). He did say, though, that he only felt free to do so because he had left academia to work in industry.

Now, regarding Chris’s first point, it is not the responsibility of academia to do free research for companies. When it happens, sure, companies can really benefit: new product innovations without the risk and cost of doing the research themselves. But industry really shouldn’t expect it. Like everyone else, researchers need to get paid and to perform duties that advance their careers. However, if industry is interested in solving a specific problem, they should be willing to pay for it (if federal funding diminishes, this may be the trend for the future). Possible ethical questions aside, such collaborations can work really well in computer science.

I think the second point is more problematic. Chris criticized reviewers as being too harsh and too trigger-happy with rejections, with the consequence that publishing is too time-consuming and difficult for industry folks. Reviews are intended to catch mistakes, omissions, methodological flaws, and the like. I don’t think we should give this up, nor should industry folks be held to a lower standard.

But the undercurrent of Chris’s complaints was a feeling of disenfranchisement. What do you do when the research community isn’t interested in your research? When papers are rejected on non-technical grounds, often with the feedback of “your work is too incremental”? In Chris’s specific case, he voiced frustration that his papers in high performance rendering were deemed too incremental: for example, they were improving the performance of algorithms which are already real-time. To be honest, such research does not interest me; however, I can believe there might be others who are interested, particularly in industry.

And I can relate to Chris’s complaints on an abstract level. Some of my projects, which in my view are really creative, have encountered their fair share of indifference (“not useful enough”). Cynically, it can be easier to work in a crowded/known area where there is already an established group of people who think the problem is interesting!

Another factor, not discussed at the banquet, is that bias can affect the peer-review process. For example, in a 1982 experiment (with a nice summary here), researchers resubmitted papers that had already been accepted and published in competitive journals, with fictitious names and institutions substituted. The following is copied from the abstract:

The present investigation was an attempt to study the peer-review process directly, in the natural setting of actual journal referee evaluations of submitted manuscripts. As test materials we selected 12 already published research articles by investigators from prestigious and highly productive American psychology departments, one article from each of 12 highly regarded and widely read American psychology journals with high rejection rates (80%) and nonblind refereeing practices.

With fictitious names and institutions substituted for the original ones (e.g., Tri-Valley Center for Human Potential), the altered manuscripts were formally resubmitted to the journals that had originally refereed and published them 18 to 32 months earlier. Of the sample of 38 editors and reviewers, only three (8%) detected the resubmissions. This result allowed nine of the 12 articles to continue through the review process to receive an actual evaluation: eight of the nine were rejected. Sixteen of the 18 referees (89%) recommended against publication and the editors concurred. The grounds for rejection were in many cases described as “serious methodological flaws.” A number of possible interpretations of these data are reviewed and evaluated.

So yeah, we all like to think of ourselves as objective, but….

And for competitive conferences like SIGGRAPH, where only 0-2 papers are accepted per graphics topic area, the subjectivity and capriciousness involved in the review process affects careers, and so it is an emotional topic for academics. Although the SIGGRAPH publication process strives to be as objective as possible (double-blind review), it cannot ever be completely blind, since some people, such as editors and conference organizers, need to know who the authors are. When you are an outsider to the top-tier conference system, it’s hard not to believe that bias creeps into the system.

My response to the uncertainty in the process is simply to remember that non-top-tier conferences also publish good papers. Which brings me back to why small conferences can be so great: in addition to being focused on your subject area, they help build communities and connections between people with common interests, which frankly is impossible at large conferences like SIGGRAPH. And back to Chris’s comments, I3D seems to be a good conference for real-time rendering work. Maybe not animation, though. But that’s ok. Let’s keep our small conferences focused.

(*) Paraphrased in my own words