Stylistic walking dataset


This dataset consists of over 55 examples of stylistic walks (30 fps), preprocessed to loop seamlessly and to have matching foot contacts. Additionally, foot contacts are annotated in accompanying ANN files. For more information regarding this dataset, see my thesis.

Browsing the dataset using a minimap

Blending arbitrary motions together to create new styles

Nine example motions shown side by side.

PACO gestures in TRC motion file format

The Perception Action And Cognition Lab (PACO) at the University of Glasgow has an extensive set of gestural actions in motion capture format. You can get the data in CSM and PTD formats here.

I have also converted the CSM data to a TRC (Track Row Column) format loadable by Motion Builder (download). Motion Builder can load it if it has an appropriate header (example).

TRC is a text-based marker file format and a straightforward substitute for C3D, which, although more efficient, is a binary format that takes effort to read and write. TRC is used by OpenSim, which also provides corresponding documentation.
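To make the format concrete, here is a minimal sketch of a TRC reader in plain Python. This is my own illustration (not OpenSim's code): it assumes the usual tab-separated layout of a short header followed by frame rows, the embedded sample is abbreviated, and the function and variable names are mine.

```python
# Minimal TRC reader sketch. Assumptions: tab-separated fields, header names on
# line 2 with values on line 3, marker names on line 4, coordinate labels on
# line 5, and frame data afterwards. Error handling is omitted for brevity.
def parse_trc(text):
    lines = text.splitlines()
    # Line 2 holds the header field names, line 3 their values.
    header = dict(zip(lines[1].split("\t"), lines[2].split("\t")))
    # Line 4 lists marker names, starting after the "Frame#" and "Time" columns.
    markers = [m for m in lines[3].split("\t")[2:] if m]
    frames = []
    for line in lines[5:]:
        fields = line.split("\t")
        if len(fields) < 2 or not fields[0].strip():
            continue  # skip blank or malformed rows
        coords = [float(x) for x in fields[2:] if x.strip()]
        # Group the flat X, Y, Z values into one (x, y, z) tuple per marker.
        points = [tuple(coords[i:i + 3]) for i in range(0, len(coords), 3)]
        frames.append({"time": float(fields[1]), "points": points})
    return header, markers, frames

# Abbreviated two-marker, two-frame sample (a real file has more header fields).
sample = (
    "PathFileType\t4\t(X/Y/Z)\twalk.trc\n"
    "DataRate\tCameraRate\tNumFrames\tNumMarkers\tUnits\n"
    "30.0\t30.0\t2\t2\tmm\n"
    "Frame#\tTime\tHead\t\t\tHand\t\t\n"
    "\t\tX1\tY1\tZ1\tX2\tY2\tZ2\n"
    "1\t0.000\t1.0\t2.0\t3.0\t4.0\t5.0\t6.0\n"
    "2\t0.033\t1.1\t2.1\t3.1\t4.1\t5.1\t6.1\n"
)
header, markers, frames = parse_trc(sample)
print(header["Units"], markers, frames[0]["points"][0])
```

Being able to read the whole format in a couple dozen lines is exactly the appeal over binary C3D.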

Processing motion data with Motion Builder

In optical motion capture, retro-reflective markers are placed on an actor and recorded by a grid of infrared cameras. The result of this process is typically animations of 3D points, stored in C3D format.

In the left image is an example of 3D point data from a C3D file. On the right is an example of BVH joint data generated from the C3D.

C3D is a binary format which stores animated 3D point data. Using Motion Builder, we can convert this point data to a format (BVH, in this case) which can be used to animate a digital character rigged with a skeleton. This process imports a set of C3D data into Motion Builder and then configures a biped character to fit this data. Notes on using MoBu to convert from C3D to BVH are here.

Raw motion capture data often has artifacts when mapped to a character model, such as self-intersections, floating or sinking feet, and sliding contacts with the floor. Techniques for fixing these types of problems are here. Alternatively, we may want to take an existing motion file and retarget it to a new character.

The above notes describe how to use the features in Motion Builder’s user interface to edit motion data, but it’s also possible to write Python scripts to automate these processes. Below are several example scripts:

  • Output text files of when end effectors are close to the floor. Foot annotations are useful for many automatic blending algorithms.
  • Output channel curves, such as X, Y, Z translation.
  • Clamp toes to the floor and clean up foot sliding. In particular, clamping the feet to the floor whenever they are in contact is important for many automated blending algorithms.
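As a rough illustration of what the first script computes, here is a standalone Python sketch of foot-contact detection. It does not use the MotionBuilder API; it assumes you have already extracted per-frame foot heights, and the threshold value and function names are illustrative.

```python
# Sketch of foot-contact annotation from a track of per-frame foot heights.
# Assumptions: one height sample per frame, heights in centimeters, and an
# illustrative 2 cm contact threshold.
def contact_frames(heights, floor=0.0, height_eps=2.0):
    """Return the frame indices where the foot is within height_eps of the floor."""
    return [i for i, h in enumerate(heights) if h - floor <= height_eps]

def contact_intervals(frames):
    """Collapse sorted frame indices into (start, end) contact intervals."""
    intervals = []
    for f in frames:
        if intervals and f == intervals[-1][1] + 1:
            intervals[-1] = (intervals[-1][0], f)  # extend the current interval
        else:
            intervals.append((f, f))               # start a new interval
    return intervals

# Toy trajectory: the foot starts planted, swings, then plants again.
heights = [0.5, 0.8, 1.2, 6.0, 9.0, 4.0, 1.5, 0.6, 0.4]
print(contact_intervals(contact_frames(heights)))  # two contact intervals
```

A more robust version would also threshold the foot's velocity, since a foot can pass close to the floor mid-swing.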



The courage to create, from Art & Fear by David Bayles and Ted Orland

What keeps artists from creating? What makes people quit? In Art & Fear: Observations on the perils (and rewards) of artmaking (public library), David Bayles and Ted Orland offer explanations, solutions, and encouragement for anyone with an urge to make.

The hurdles to creation aren’t merely about talent or technical skill, but about showing up every day and doing the work. Why is this so hard? Lack of external deadlines, lack of external validation, self-doubt, lack of confidence. Your early efforts aren’t very good. Your work falls short of your heroes’. You fear that “you won’t finish what you started and you fear how people will react even if you do”.

Making art now means working in the face of uncertainty; it means living with doubt and contradiction, doing something no one much cares whether you do, and for which there may be neither audience nor reward. Making the work you want to make means setting aside these doubts so that you may see clearly what you have done, and thereby see where to go next. Making the work you want to make means finding nourishment within the work itself.

To be an artist is to set aside these doubts and get to work: to focus on the present moment; to remember why you wanted to make something in the first place; to focus on how it feels to be engrossed in your work.

But this process is hard. Art is subjective; it can be personal. You might feel like your creations directly reflect you and thus, when your work sucks, you suck. Unhelpful critics (either indifferent or hostile) will happily reinforce this view, but at the end of the day, this perspective does not help you and must be discarded.

Making art can feel dangerous and revealing. Making art is dangerous and revealing. Making art precipitates self-doubt, stirring deep waters that lie between what you know you should be and what you fear you might be.

In a general way, fears about yourself prevent you from doing your best work, while fears about your reception by others prevent you from doing your own work.

So how do we fight self-doubt and keep going? By doing work. Doing lots of work. Doing lots of potentially bad work. Learning from your work.

You make good work by (among other things) making lots of work that isn’t very good, and gradually weeding out the parts that aren’t good, the parts that aren’t yours. It’s called feedback, and it’s the most direct route to learning about your own vision. It’s also called doing your work.

Artists get better by sharpening their skills or by acquiring new ones; they get better by learning to work, and by learning from their work.

The most important aspect of art-making is making art. In other words, showing up every day to practice. David Bayles and Ted Orland offer the following parable to illustrate the power of quantity.

The ceramics teacher announced on opening day that he was dividing the class into two groups. All those on the left side of the studio, he said, would be graded solely on the quantity of work they produced, all those on the right solely on its quality. His procedure was simple: on the final day of class he would bring in his bathroom scales and weigh the work of the “quantity” group: fifty pounds of pots rated an “A”, forty pounds a “B”, and so on. Those being graded on “quality”, however, needed to produce only one pot — albeit a perfect one — to get an “A”. Well, came grading time and a curious fact emerged: the works of highest quality were all produced by the group being graded for quantity. It seems that while the “quantity” group was busily churning out piles of work — and learning from their mistakes — the “quality” group had sat theorizing about perfection, and in the end had little more to show for their efforts than grandiose theories and a pile of dead clay.

So, yes, even if your vision of every project falls short of your dreams for it — each piece is a “test of correspondence between imagination and execution” — the act of creating it moves you forward. Through this process, you will develop new questions and new ideas. Each piece is a reflection of who you were and what you were thinking when you created it.

The work we make, even if unnoticed and undesired by the world, vibrates in perfect harmony to everything we put into it — or withhold from it. In the outside world there may be no reaction to what we do; in our artwork there is nothing but reaction. The breathtakingly wonderful thing about this reaction is its truthfulness. Look at your work and it tells you how it is when you hold back or when you embrace. When you are lazy, your art is lazy; when you hold back, it holds back; when you hesitate, it stands there staring, hands in its pockets. But when you commit, it comes on like blazes.

Everything we make is a snapshot of ourselves as constantly evolving human beings — not a symbol of our worth but a reflection of our thought process at a specific place and specific time.

This book focuses on artists — painters, musicians, sculptors, writers — but the advice and encouragement within it are apt for any creative practice, or any practice-makes-perfect activity. It rings true to me for research and for engineering. I can only imagine it’s true for business or activism. Much of what humans do can only be perfected by doing.

This is a great book to motivate you to get started on a project you’ve wanted to do for a long time but for whatever reasons couldn’t.  This is also a great book for when you feel discouraged: when a risk you took didn’t pay off.  Time to get up. Get going. Difficulties are normal. Difficulties are part of the process. Better to be the man in the arena than to be paralyzed by self-doubt, sitting on the sidelines, and guaranteed to accomplish nothing.

PS – Thanks, Andy, for recommending this excellent book! You know who you are!

The Rise of the Extrovert Ideal, from Quiet by Susan Cain

In Quiet: The power of introverts in a world that can’t stop talking (public library), Susan Cain uses the term Extrovert Ideal to describe the social pressure to be extroverted. Qualities associated with extroversion — enthusiastic, talkative, assertive, and gregarious — are rated positively, whereas traits associated with introversion — quiet, thoughtful, analytic — are typically viewed, if not negatively, then with less excitement. Not to mention that introversion has traditionally been confounded with shyness (which both extroverts and introverts can feel, and which relates to social anxiety) and anti-social behavior. Quiet enumerates the advantages of a reflective approach to life, and it also gives a compelling cultural history of how and why extroversion came to be so valued in the first place.

Cain describes the rise of the Extrovert Ideal as a byproduct of America’s transition from a primarily rural to an urban society. As people started to live increasingly in cities, among strangers, as opposed to in small towns, among a small, stable group of people, the ability to make quick, favorable impressions on others became increasingly important. Additionally, as America became more industrialized and entrepreneurial, the need to ‘sell’ ideas and products to other people became a fundamental skill for people who wanted to gain the most from the new economy. This view was described by Warren Susman, in his book Culture as History, as a transition from a Culture of Character to a Culture of Personality. Character consists of internal traits that anyone can develop and learn, but personality refers to external qualities that are partly determined by the traits you’re born with.

In the Culture of Character, the ideal self was serious, disciplined, and honorable. What counted was not so much the impression one made in public as how one behaved in private.

But when they embraced the Culture of Personality, Americans started to focus on how others perceived them. They became captivated by people who were bold and entertaining.

Susman counted the words that appeared most frequently in the personality-driven advice manuals of the early twentieth century and compared them to the character guides of the nineteenth century. The earlier guides emphasized attributes that anyone could work on improving, described by words like

  • Citizenship
  • Duty
  • Work
  • Golden deeds
  • Honor
  • Reputation
  • Morals
  • Manners
  • Integrity

But the new guides celebrated qualities that were — no matter how easy Dale Carnegie made it sound — trickier to acquire. Either you embodied these qualities or you didn’t:

  • Magnetic
  • Fascinating
  • Stunning
  • Attractive
  • Glowing
  • Dominant
  • Forceful
  • Energetic

The resulting anxiety from the need to constantly market and brand oneself was reflected in self-help books, such as Dale Carnegie’s How to Win Friends and Influence People, and, of course, in advertising, which offered products as easy solutions to assuage this anxiety.

…the new personality-driven ads cast consumers as performers with stage fright from which only the advertiser’s product might rescue them. These ads focused obsessively on the hostile glare of the public spotlight. “ALL AROUND YOU PEOPLE ARE JUDGING YOU SILENTLY,” warned a 1922 ad for Woodbury’s soap. “CRITICAL EYES ARE SIZING YOU UP RIGHT NOW,” advised the Williams Shaving Cream company.

Madison Avenue spoke directly to the anxieties of male salesmen and middle managers. In one ad for Dr. West’s toothbrushes, a prosperous-looking fellow sat behind a desk, his arm cocked confidently behind his hip, asking whether you’ve “EVER TRIED SELLING YOURSELF TO YOU? A FAVORABLE FIRST IMPRESSION IS THE GREATEST SINGLE FACTOR IN BUSINESS OR SOCIAL SUCCESS.” The Williams Shaving Cream ad featured a slick-haired, mustachioed man urging readers to “LET YOUR FACE REFLECT CONFIDENCE, NOT WORRY! IT’S THE ‘LOOK’ OF YOU BY WHICH YOU ARE JUDGED MOST OFTEN.”

And so, we see the birth of familiar commercial institutions such as the cosmetics, self-help, fashion, diet, plastic surgery, and health industries, all built on a solid foundation of people’s insecurities. Parents worried that their quiet children were ‘anti-social’, and in the 1920s the term Inferiority Complex became a catch-all psychological disorder for people who had trouble adjusting to the cultural ideals. People who already fit the mold well could always do better, and people who did not fit the mold at all might be left out altogether. In the 50s, such cultural norms were explicitly stated in educational propaganda films, such as ‘Neat and Clean’ (below) and ‘Social Responsibility’.

Furthermore, when enough people buy into a viewpoint, belief becomes a self-fulfilling reality: in the 1950s, businesses and schools actively sought out extroverted personalities. Cain gives quotes from deans at Harvard and Yale during this time.

University admissions officers looked not for the most exceptional candidates, but for the most extroverted. Harvard’s provost Paul Buck declared in the late 1940s that Harvard should reject the “sensitive, neurotic” type and the “intellectually over-stimulated” in favor of boys of the “healthy extrovert kind.” In 1950, Yale’s president, Alfred Whitney Griswold, declared that the ideal Yalie was not a “beetle-browed, highly specialized intellectual, but a well-rounded man.” Another dean told Whyte that “in screening applications from secondary schools he felt it was only common sense to take into account not only what the college wanted, but what, four years later, corporations’ recruiters would want. ‘They like a pretty gregarious, active type,’ he said. ‘So we find that the best man is the one who’s had an 80 or 85 average in school and plenty of extracurricular activity. We see little use for the “brilliant” introvert.’ ”

And so extroverted traits became prerequisites for success because institutions decided it should be so. Extroversion became part of a ‘winning personality’. Even in fields where introversion is a clear asset, such as engineering, extroverts were preferred, as at IBM.

The scientist’s job was not only to do the research but also to help sell it, and that required a hail-fellow-well-met demeanor. At IBM, a corporation that embodied the ideal of the company man, the sales force gathered each morning to belt out the company anthem, “Ever Onward,” and to harmonize on the “Selling IBM” song, set to the tune of “Singin’ in the Rain.” “Selling IBM,” it began, “we’re selling IBM. What a glorious feeling, the world is our friend.” The ditty built to a stirring close: “We’re always in trim, we work with a vim. We’re selling, just selling, IBM.”

In her book, Cain reminds us that introverted people have contributed a lot to society, especially in the arts, sciences, and activism. But she never says that introversion is better. She argues that society handicaps itself by advocating a single ‘right’ style for interacting with the world.


Baking ambient occlusion: Exporting from Maya into Unity

This is a post I’ve been meaning to write for a long while regarding my experiences baking an ambient occlusion texture onto a model for import into Unity.

First off, baking ambient occlusion is absolutely worth it for producing an appealing result. It makes the details in a scene really pop. (Ambient occlusion captures the self-shadowing of close surfaces under ambient light.)

With ambient occlusion

Without ambient occlusion

Additionally, because the ambient occlusion is baked onto the model, the effect adds no runtime overhead beyond displaying textures. Although I suspect that if you are reading this, I’m already preaching to the converted.

Happily, there are a lot of resources for doing this sort of thing, but the potential pitfalls can vary a lot depending on one’s workflow and approach. The approach I took was to:

  • Use Mental Ray to compute ambient occlusion for static objects in the scene.
  • Bake the result to texture.
  • Export the model as an FBX with textures embedded.

So let me give the details, and then I’ll enumerate the pitfalls (er, challenges) I experienced while working this out.

Baking ambient occlusion

Step #1 is to compute ambient occlusion for your scene and then bake the result to texture in Maya. There are a lot of resources for this online, but the video below by Josh Robinson is my favorite.

Baking Ambient Occlusion in Maya from Josh Robinson on Vimeo.

If you are unfamiliar or rusty with Maya, it’s really worth watching the video to see how things are done with Maya’s UI. Below is a summary of the steps in the video:

  • Add a surface shader to your objects. Right click on the object, assign a new material, and choose a surface shader. The object will now look black.
  • Make sure Mental Ray is enabled. If it is not, objects will render black. This can be set in Window -> Settings/Preferences -> Plug-in Manager; make sure mayatomr.mll is checked. By the way, if you don’t have the FBX plugin enabled, you should make sure it is also checked.
  • Setup ambient occlusion shading using Mental Ray
    • Add a mib_amb_occlusion shader. Use the Window -> Rendering Editors -> Hypershade dialog.
    • Connect mib_amb_occlusion to the surface shader (middle click and drag from mib_amb_occlusion to the surface shader and select ‘Default’)
    • In the surface shader properties, set the number of occlusion rays to 128
    • Do a test render. You should see ambient occlusion shading.
  • Bake the ambient occlusion effect to texture
    • Under the menu Lighting/Shading -> Batch Bake (mental ray), open the option dialog. Set the file resolution and file type.
    • Under the menu, Window -> Rendering Editors -> Render Settings, make sure the renderer is set to Mental Ray. Use high quality settings for the best looking results.

When things don’t work

This technique relies on your models having good texture coordinates.

If your model remains completely black after baking, your model may not have UV (i.e. texture) coordinates, or they may be set up incorrectly. I ran into this problem with a few old models stored in OBJ format. Setting up good coordinates is beyond the scope of this tutorial (and I am not an expert texture artist myself), but this is something to be aware of.

Other problems can occur if texture coordinates are duplicated or misaligned. This writeup gives a few examples with solutions using 3DS Max.

The scene which I was shading had a lot of objects. By default, each object had its own texture, which for me turned out to be problematic since using the same texture resolution for every object was inappropriate. For large scene objects, I needed to increase the resolution to get a better looking result.

Effect of a small resolution texture

Effect of a high resolution texture

A final problem — which I have not entirely fixed in my own environment — is occasional seams in the baked textures. As there are other resources more fit to help troubleshoot such problems in Maya, I am going to move on to the potential gotchas importing the model into Unity.

Export the result to FBX

Step #2 is to export the result to FBX. So far, I have never run into a problem with this process, and using FBX in Unity has a number of advantages over using Maya’s .MB format:

  • Fewer path problems. Embedded textures are unpacked automatically with good relative paths.
  • The FBX file does not require someone to have Maya installed to load it
  • The FBX file is often smaller

If you can’t find the FBX import/export options, you may need to enable the plug-in manually. Open the Plug-in Manager (Window -> Settings/Preferences -> Plug-in Manager) and make sure fbxmaya.mll is checked.

It’s very important to embed the texture assets when you export.


Import the FBX into Unity

Step #3 is to import the FBX file into Unity. In general, this is as simple as copying the FBX into your Unity project’s Asset directory.

However, if your textures do not load correctly, you might be running into one of the following issues:

  • The filenames generated by Maya during the baking process may be too long for Unity to load them. To shorten them, you can specify a shorter prefix in the Mental Ray Baking options (under Texture bake settings), or try shortening the project directory from the menu File -> Project -> Edit Current.  Note that if you repeatedly re-bake the textures, Maya will repeatedly prepend the prefix, leading to names that look like “baked_baked_baked_baked_object1Lighting.tiff”.
  • Multiple objects in the scene may be using the same texture name (this leads to one of the objects using the wrong texture. The result is usually big black splotches in the wrong places). To fix, change the model’s import settings in Unity to use prefixes before material names.
  • Maya may be connecting the result of ambient occlusion to “incandescence” instead of color, which Unity ignores. We need it to connect to color instead. This can be fixed manually, but if you have a lot of objects, a script is easier. The following script connects the outColor property of each texture (a ‘file’ node) to the color property of the corresponding lambert material.
    import maya.cmds as cmds

    allObjects = cmds.ls()  # every node in the scene
    for obj in allObjects:
        if cmds.nodeType(obj) == 'file':
            connections = cmds.listConnections(obj + ".outColor", d=True, s=True)
            print obj, "connects", connections
            for connection in connections or []:  # listConnections may return None
                if cmds.nodeType(connection) == "lambert":
                    print connection
                    cmds.connectAttr(obj + '.outColor', connection + '.color')
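As an aside, the repeated “baked_” prefixes mentioned above can be collapsed with a short one-off script. This is my own sketch, assuming the prefix set in the bake options was “baked_”:

```python
# Collapse runs of a repeated bake prefix (e.g. "baked_baked_baked_") down to a
# single copy. Assumption: the prefix configured in Maya's bake options was
# "baked_"; pass a different prefix otherwise.
import re

def collapse_prefix(name, prefix="baked_"):
    """Reduce any leading run of repeated prefixes to one copy of the prefix."""
    return re.sub(r"^(%s)+" % re.escape(prefix), prefix, name)

print(collapse_prefix("baked_baked_baked_object1Lighting.tiff"))
# → baked_object1Lighting.tiff
```

To apply it to a texture directory, loop over os.listdir and os.rename each file to its collapsed name.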

Here’s a demo of the environment shown at the start, running in a browser with Unity’s web plugin (warning: it’s a big environment, the plugin is ~7 MB). The camera can be zoomed in and out with the middle mouse button and rotated with Alt-LeftDrag.


And here are some screenshots. Up close, some of the objects still have artifacts (seams) which need fixing, but I’m pleased with how good the scene looks at a distance.



If you find this writeup helpful, let me know. Also let me know if you have additional tips, gotchas, and/or alternative approaches; I will add them.

Reflections on making research more practical

During the I3D (aka the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games) dinner, Chris Wyman from NVIDIA Research gave a speech entitled “Bridging the gap: Making research more practical”.

Chris voiced two primary complaints with academics: (1) that academics produce too much research that is of no practical use and (2) that research that would be relevant for industry is too difficult to publish (*).

First, I really respect that Chris spoke about his own experiences and frustrations so openly (because of the highly political nature of publishing and promotions, academics are often afraid to voice their opinions publicly), although he did say he only felt free to do so because he had left academia to work in industry.

Now, regarding Chris’s first point, it is not the responsibility of academia to do free research for companies. When it happens, sure, companies can really benefit: new product innovations without the risk and cost of doing the research themselves. But industry really shouldn’t expect it. Like everyone else, researchers need to get paid and to perform duties which help them advance their careers. However, if industry is interested in solving a specific problem, they should be willing to pay for it (if federal funding diminishes, this may be the trend for the future). Possible ethical questions aside, for computer science such collaborations can work really well.

I think the second point is more problematic. Chris criticized reviewers as being too harsh and too trigger happy with rejections, with the consequence that it’s too time consuming and difficult for industry folks to publish. Reviews are intended to catch mistakes, omissions, methodological flaws, and the like. I don’t think we should give this up, nor that industry folks should be held to a lower standard.

But the undercurrent of Chris’s complaints was a feeling of disenfranchisement. What do you do when the research community isn’t interested in your research? When papers are rejected on non-technical grounds, often with the feedback of “your work is too incremental”? In Chris’s specific case, he voiced frustration that his papers in high performance rendering were deemed too incremental: for example, they were improving the performance of algorithms which are already real-time. To be honest, such research does not interest me; however, I can believe there might be others who are interested, particularly in industry.

And I can relate to Chris’s complaints on an abstract level. Some of my projects, which in my view are really creative, have encountered their fair share of indifference (“not useful enough”). Cynically, it can be easier to work in a crowded/known area where there is already an established group of people who think the problem is interesting!

Another factor, not discussed at the banquet, is that bias can affect the peer-review process. For example, in a 1982 experiment (with a nice summary here), scientists resubmitted papers which had already been accepted and published in competitive journals, with new author names and institutions substituted. The following is copied from the abstract:

The present investigation was an attempt to study the peer-review process directly, in the natural setting of actual journal referee evaluations of submitted manuscripts. As test materials we selected 12 already published research articles by investigators from prestigious and highly productive American psychology departments, one article from each of 12 highly regarded and widely read American psychology journals with high rejection rates (80%) and nonblind refereeing practices.

With fictitious names and institutions substituted for the original ones (e.g., Tri-Valley Center for Human Potential), the altered manuscripts were formally resubmitted to the journals that had originally refereed and published them 18 to 32 months earlier. Of the sample of 38 editors and reviewers, only three (8%) detected the resubmissions. This result allowed nine of the 12 articles to continue through the review process to receive an actual evaluation: eight of the nine were rejected. Sixteen of the 18 referees (89%) recommended against publication and the editors concurred. The grounds for rejection were in many cases described as “serious methodological flaws.” A number of possible interpretations of these data are reviewed and evaluated.

So yeah, we all like to think of ourselves as objective, but….

And for competitive conferences like SIGGRAPH, where only 0-2 papers are accepted per graphics topic area, the subjectivity and capriciousness involved in the review process affects careers, and so it is an emotional topic for academics. Although the SIGGRAPH publication process strives to be as objective as possible (double-blind review), it cannot ever be completely blind, since some people, such as editors and conference organizers, will need to know who the authors are. When you are an outsider to the top-tier conference system, it’s hard to believe that bias doesn’t creep into the system.

My response to the uncertainty in the process is to simply remember that non top-tier conferences also publish good papers. Which brings me back to why small conferences can be so great: in addition to being focused on your subject area, they help build communities and connections between people with common interests, which frankly is impossible at large conferences like SIGGRAPH. And back to Chris’s comments, I3D seems to be a good conference for real-time rendering work. Maybe not animation, though. But that’s ok. Let’s keep our small conferences focused.

(*) Paraphrased in my own words

Procedural gaze to reflect character attitudes

In our MIG 2013 paper “Evaluating perceived trust from procedurally animated gaze”, we investigated how well a simple gaze model conveyed how much a character appeared to trust the viewer. If the character trusts the player a lot, they spend more time looking at the player’s face; otherwise, they spend more time looking away. Check out the video to see a demo of the gaze model and the experiment platform we built in Unity.


In the above work, we found that even with very short animation clips, people naturally inferred attitudes from the character based solely on the proportion of time the character spent looking at the player. Specifically, gaze conveyed trust, interest, admiration, and friendliness to varying degrees.


Above, *, **, and *** represent significance at the 0.05, 0.01, and 0.001 levels, where the null hypothesis corresponds to a regression slope of 0 (i.e. a horizontal line). In other words, the steeper the slope which fits viewer rankings to time spent looking directly at the player, the more significant the result.

Given how straight-forward gaze is to implement, it’s clear that it’s an easy way to add personality to characters. Additionally, in the paper, we show that it’s also straight-forward to tie gaze to character attitudes which might vary dynamically while you play a game — all without needing additional voice acting, motion capture, scripting, or dialog. For example, if a player had a high reputation score with a character’s group, or had spoken with the character multiple times, the character’s body language and voice tone could automatically reflect that. We also show how the gaze can vary probabilistically to look more natural, while still maintaining a desired proportion of time looking in a desired direction.
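As a sketch of the probabilistic idea (my own simplification, not the paper's exact model, with illustrative names and constants): alternate "at face" and "away" intervals with random durations whose means are chosen so the long-run fraction of time on the face matches the desired proportion.

```python
# Sketch of probabilistic gaze scheduling. Assumption: mean interval lengths
# proportional to `trust` (at face) and 1 - trust (away) make the long-run
# fraction of time on the face approach `trust`, while the randomized
# durations avoid a robotic, metronome-like gaze pattern.
import random

def gaze_schedule(trust, total_time, mean_cycle=2.0, rng=random.Random(0)):
    """Return a list of ("face" | "away", duration) pairs covering total_time seconds."""
    assert 0.0 < trust < 1.0  # degenerate extremes would never switch targets
    schedule, t, at_face = [], 0.0, True
    while t < total_time:
        mean = mean_cycle * (trust if at_face else 1.0 - trust)
        d = rng.expovariate(1.0 / mean)  # random duration with the chosen mean
        schedule.append(("face" if at_face else "away", d))
        t += d
        at_face = not at_face
    return schedule

sched = gaze_schedule(trust=0.7, total_time=300.0)
face_time = sum(d for target, d in sched if target == "face")
total = sum(d for _, d in sched)
print(round(face_time / total, 2))  # close to 0.7
```

A game could drive `trust` from a live reputation score, so the character's gaze shifts automatically as the relationship changes.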

Experiments in nudging players

Player modeling can be used to train NPCs and bots, to dynamically customize the gameplay (for example, an enemy’s strategies could change based on play style), and to aid testing and level design. In our 2013 AIIDE paper, we proposed a simple probabilistic method for modeling players that could be used to bias players towards certain behaviors. The underlying assumption is that players tend to act certain ways based on what is available in their environment. Thus, if we know the relationship between player behavior and environment, we can tweak the environment to encourage people to behave in certain ways.

In other words, we can model what players do where and then use this information to nudge player behaviors in desired ways. There may be applications for this beyond testing and data collection — video games are unique in that we have absolute control over the environment we present to players. For example, could we better understand what environmental/game incentives either encourage (or discourage) PVP? What differences are there between free-to-play players and subscribers? What do players tend to do at max level?

For the paper, we specifically looked at a straight-forward application of this idea for collecting player metrics. Such an approach could reduce the number of games playtesters need to run because it would allow them to focus on collecting data only for the metrics which need it most. For our proof of concept, we implemented several dynamically configurable environments in Second Life and collected several very simple behavior metrics: the distances between people standing in either narrow or wide spaces; the timing of lane crossings for slow and fast traffic; and the choice of whether to use a health kit based on health level.

Below are two screenshots from two of our experiment setups (top: a space environment; bottom: an office environment), in which players race around to collect tiles for prizes.



Using our player model, we formulated the question of which game to run next as an optimization problem (an MDP), ran more games, and then updated our player model with the results. Even without running the optimization, one can look at the statistics to see which behaviors occur most frequently in which environments. Even in our straight-forward setup, our assumptions about what players would do were often wrong! We also showed that our optimization-based scheduler did reduce the number of games needed, compared to a schedule which played all scenarios equally. However, there are caveats and limitations to the approach which are worth reading about in the paper.
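To convey the flavor of the scheduling problem, here is a greatly simplified greedy sketch (a stand-in for the paper's MDP formulation; the scenario names and metric values below are illustrative): run next the scenario whose collected metric is currently least certain.

```python
# Greedy playtest-scheduling sketch: pick the scenario whose metric estimate
# has the highest standard error of the mean, i.e. the one that benefits most
# from another game. The scenarios and samples here are made up.
import statistics

def next_scenario(observations):
    """observations maps scenario name -> list of metric samples (>= 2 each)."""
    def sem(samples):
        return statistics.stdev(samples) / len(samples) ** 0.5
    return max(observations, key=lambda s: sem(observations[s]))

obs = {
    "narrow_hallway": [1.1, 1.2, 1.15, 1.12],        # low variance: well measured
    "wide_plaza":     [2.0, 3.5, 1.2, 4.1],          # high variance: run more games here
    "traffic_lane":   [0.9, 1.0, 0.95, 1.05, 0.98],
}
print(next_scenario(obs))  # → wide_plaza
```

The MDP version additionally accounts for the cost of switching environments and for how each game updates the player model, but the greedy rule captures the core intuition.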

We didn’t get a chance to explore these ideas beyond testing, but I always envisioned the approach having potential in large, open-world multiplayer environments, where it’s particularly difficult to work out every glitch or even understand a priori how players will interact with each other and the game (although companies do an amazing job). For example, could this approach help debug aspects of the environment that trap players in walls (e.g. do environments where players become trapped share certain characteristics, or do aspects of the character’s state, such as speed at impact, cause the character to become stuck)? Could it help debug navigation problems for companion characters, who may block and trap the player in certain areas? After all, once a problem is understood well enough to be reproducible, it’s often straight-forward to fix.