In optical motion capture, retro-reflective markers are placed on an actor and recorded by a grid of infrared cameras. The result is typically an animation of 3D points, stored in the C3D format.
Left: an example of 3D point data from a C3D file. Right: an example of BVH joint data generated from the C3D.
C3D is a binary format that stores animated 3D point data. Using MotionBuilder, we can convert this point data to a format (BVH, in this case) that can be used to animate a digital character rigged with a skeleton. The conversion imports a set of C3D data into MotionBuilder and then configures a biped character to fit that data. Notes on using MoBu to convert from C3D to BVH are here.
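To make the target of the conversion concrete, here is a minimal sketch of the BVH text layout itself (a HIERARCHY section of nested joints, then a MOTION section of per-frame channel values). This is an illustrative writer for a simple single-chain skeleton, not MotionBuilder's exporter; the function name and data layout are assumptions for the example.

```python
def write_bvh(path, joints, frames, frame_time=1.0 / 120.0):
    """Write a minimal single-chain BVH file (illustrative only).

    joints: list of (name, (ox, oy, oz)) offsets, root first.
    frames: one list of channel values per frame, in hierarchy order
            (root: 3 translation + 3 rotation, other joints: 3 rotation).
    """
    lines = ["HIERARCHY"]
    indent = ""
    for i, (name, (ox, oy, oz)) in enumerate(joints):
        kind = "ROOT" if i == 0 else "JOINT"
        lines.append(f"{indent}{kind} {name}")
        lines.append(indent + "{")
        indent += "  "
        lines.append(f"{indent}OFFSET {ox:.4f} {oy:.4f} {oz:.4f}")
        if i == 0:
            lines.append(indent + "CHANNELS 6 Xposition Yposition Zposition "
                                  "Zrotation Xrotation Yrotation")
        else:
            lines.append(indent + "CHANNELS 3 Zrotation Xrotation Yrotation")
    # terminate the chain with an End Site, then close all open braces
    lines.append(indent + "End Site")
    lines.append(indent + "{")
    lines.append(indent + "  OFFSET 0.0 0.0 0.0")
    lines.append(indent + "}")
    while indent:
        indent = indent[:-2]
        lines.append(indent + "}")
    lines += ["MOTION", f"Frames: {len(frames)}",
              f"Frame Time: {frame_time:.6f}"]
    for frame in frames:
        lines.append(" ".join(f"{v:.4f}" for v in frame))
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")
```

A real export carries a full biped hierarchy, but the section structure is the same.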
Raw motion capture data often shows artifacts when mapped to a character model, such as self-intersections, floating or sinking feet, and sliding foot contacts on the floor. Techniques for fixing these problems are here. Alternatively, we may want to take an existing motion file and retarget it to a new character.
The above notes describe how to use the features in MotionBuilder's user interface to edit motion data, but it is also possible to write Python scripts that automate these processes. Below are several example scripts:
- ExportContacts.py: outputs text files marking the frames where end effectors are close to the floor. These foot-contact annotations are useful for many automatic blending algorithms.
- PrintCurve.py: outputs channel curves, such as X, Y, Z translation.
- ToesToFloor.py: clamps toes to the floor and cleans up foot sliding. In particular, clamping the feet to the floor whenever they are in contact is important for many automated blending algorithms.
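The core idea behind ExportContacts.py can be sketched without the MotionBuilder API: given per-frame heights of an end effector (sampled from the scene elsewhere), a frame is a contact whenever the effector is within a small threshold of the floor. This is a hedged sketch, not the actual script; the function names and the 2-unit threshold are illustrative assumptions.

```python
def find_contacts(heights, height_thresh=2.0):
    """Return inclusive (start, end) frame ranges where an end effector
    stays within height_thresh of the floor (floor assumed at y = 0)."""
    spans, start = [], None
    for frame, y in enumerate(heights):
        in_contact = y <= height_thresh
        if in_contact and start is None:
            start = frame                      # contact span begins
        elif not in_contact and start is not None:
            spans.append((start, frame - 1))   # contact span ends
            start = None
    if start is not None:                      # span runs to the last frame
        spans.append((start, len(heights) - 1))
    return spans

def export_contacts(path, effector_name, heights, height_thresh=2.0):
    """Write one 'name start end' line per detected contact span."""
    with open(path, "w") as f:
        for start, end in find_contacts(heights, height_thresh):
            f.write(f"{effector_name} {start} {end}\n")
```

In practice a short hysteresis or minimum span length helps suppress one-frame flicker near the threshold.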
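The kind of output PrintCurve.py produces can likewise be sketched in plain Python: a channel curve is a set of keyframes, and printing it means sampling each channel at every frame. The linear interpolation here is an assumption for illustration (MotionBuilder curves support richer interpolation modes), and the function names are invented for the example.

```python
def sample_channel(keys, frame):
    """Linearly interpolate a keyframed channel at an arbitrary frame.
    keys: sorted list of (frame, value) pairs."""
    if frame <= keys[0][0]:
        return keys[0][1]
    for (f0, v0), (f1, v1) in zip(keys, keys[1:]):
        if frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return v0 + t * (v1 - v0)
    return keys[-1][1]

def print_curve(name, channels, frames):
    """Print one row per frame: frame number plus the sampled value of
    each channel (e.g. X, Y, Z translation)."""
    for frame in frames:
        vals = " ".join(f"{sample_channel(keys, frame):.3f}"
                        for keys in channels)
        print(f"{name} {frame} {vals}")
```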
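Finally, a minimal sketch of the ToesToFloor.py idea, again on plain position data rather than the MotionBuilder scene: during each detected contact span, snap the toe's height to the floor and pin its horizontal position to the first contact frame, which removes sliding. The contact threshold and data layout are assumptions; a production tool would also blend the span boundaries to avoid pops.

```python
def clamp_toes_to_floor(positions, floor_y=0.0, contact_thresh=2.0):
    """positions: list of per-frame (x, y, z) toe positions.
    Returns a new list where, during each contact span, y is snapped to
    the floor and x/z are pinned to the span's first frame."""
    fixed = [list(p) for p in positions]
    anchor = None
    for frame, (x, y, z) in enumerate(positions):
        in_contact = (y - floor_y) <= contact_thresh
        if in_contact:
            if anchor is None:
                anchor = (x, z)        # lock horizontal position here
            fixed[frame][0] = anchor[0]
            fixed[frame][1] = floor_y  # clamp to the floor plane
            fixed[frame][2] = anchor[1]
        else:
            anchor = None              # span ended; foot may move again
    return [tuple(p) for p in fixed]
```

In MotionBuilder the same per-frame clamp would be written back through the animation node/FCurve API rather than to a list.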