Salient Poses (high performance command-line tool)
Stuck? Get in Touch!
Table Of Contents
Here's a rough outline of this document - I'd recommend reading it linearly if you haven't seen anything about this algorithm before.
Easy-to-Edit Motion Capture!
Motion capture has become a core component of the animation pipeline in the visual effects industry. Whether it's a live-action blockbuster film or an indie game being developed in a back-alley office, motion capture is likely to be involved. While this technology is awesome - it allows actors to truly embody a fantasy character - it does have its problems.
Say we start off with a mocap animation (here's one that I grabbed from Adobe's Mixamo):
While the animation looks nice, it actually contains lots of keyframes. Let's take a look at them, visualized here as blue outlines. There are so many keyframes; one for every frame. In this case there are 60 per second!
While having all of these keyframes is necessary during recording - we want motion capture to capture the actor's performance precisely - they come with a large memory footprint (problematic for video games) and make the motion hard to change (problematic for motion editors). Here's a picture of just some of the data for the animation above; can you imagine loading all the animations for a protagonist character in a video game (there are sometimes thousands of unique motion clips), or trying to adjust the motion using this data:
To address this problem, I've developed a new algorithm for compressing and editing motion capture. Titled "Salient Poses", this algorithm uses optimal keyframe reduction to simplify motion-capture animation. At a conceptual level, we could say that Salient Poses converts motion capture into hand-crafted keyframe animation.
More precisely, the algorithm works by finding potential sets of important - that is, "salient" - poses. In each set, the choice of poses has been carefully determined so that it most accurately reconstructs the original motion. Once found, we can create a new animation using just these poses. Here's an illustration of one possible optimal set of poses selected for the animation above:
Comparing these poses to the original motion capture, we can already see the benefit: the motion can be expressed with fewer poses. Having fewer poses in the animation means a smaller memory footprint and also means that editors spend less time editing (fewer changes are required). And, furthermore, the data is now much sparser than before:
With all that done, the last step is to create a new motion using only the selected poses. To do this the algorithm performs inbetweening, which is the process of deciding how to transition between the poses to best recreate the original animation. It's hard to describe exactly how the inbetweening works, but you might imagine it as recreating the curve traced by each of the character's joints:
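If you'd like a concrete (if much-simplified) picture of inbetweening, here's a small sketch in Python. It rebuilds one joint's curve by blending linearly between a few kept poses - the actual tool uses a proper curve-fitting technique, so treat this purely as intuition:

```python
# Hypothetical sketch of inbetweening: rebuild a joint's curve from a few
# kept poses by interpolating between them. The real tool fits curves that
# minimise error against the original motion; this is just the intuition.

def inbetween(keyframes, frame):
    """Interpolate a value at `frame` from sorted (frame, value) pairs."""
    # Clamp outside the keyed range.
    if frame <= keyframes[0][0]:
        return keyframes[0][1]
    if frame >= keyframes[-1][0]:
        return keyframes[-1][1]
    # Find the surrounding pair of keyframes and blend between them.
    for (f0, v0), (f1, v1) in zip(keyframes, keyframes[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return v0 + t * (v1 - v0)

# One joint's rotation sampled at three salient poses (frames 0, 30, 60):
keys = [(0, 0.0), (30, 45.0), (60, 10.0)]
curve = [inbetween(keys, f) for f in range(0, 61)]
print(curve[15])  # halfway between frames 0 and 30 -> 22.5
```

The real reconstruction evaluates how well the rebuilt curves match the original per-frame data, which is what makes some sets of poses better than others.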
Before and After
To sum the whole process up, here's a look at the original animation (right side) and the same animation after compression with Salient Poses (left side). In this particular case the compressed animation contains only the seven poses from above, paired with the reconstructed curves.
What's in this project?
This repository presents a high-performance implementation of the Salient Poses algorithm as a command-line tool that I've designed for offline use by a technical artist. The tool is built with C++ and uses OpenCL to distribute some of the heavier number crunching to many parallel computation units.
Since this implementation is designed as a bare-bones, super-fast version, it only operates on a basic CSV animation format. If you provide the tool one of these CSV animations, you'll get back another CSV containing the compressed animation. The resulting CSV contains a collection of curves that can be keyed into standard animation tools like Blender or Maya. To get an idea of the CSV formats, check out some of the examples used for the tests.
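The authoritative definition of the format is the example files themselves, but conceptually each row carries one sampled value per frame. Assuming a hypothetical "frame,joint,x,y,z" layout (a simplification, not necessarily the tool's real columns), a few lines of Python are enough to load an animation:

```python
import csv
import io

# Hypothetical layout: one row per joint per frame ("frame,joint,x,y,z").
# Check the example files used by the tests for the actual format.
data = "0,Hips,0.0,1.0,0.0\n0,Spine,0.0,1.4,0.1\n1,Hips,0.0,1.0,0.05\n"

animation = {}  # frame -> {joint: (x, y, z)}
for frame, joint, x, y, z in csv.reader(io.StringIO(data)):
    animation.setdefault(int(frame), {})[joint] = (float(x), float(y), float(z))

print(len(animation))        # 2 frames loaded
print(animation[0]["Hips"])  # (0.0, 1.0, 0.0)
```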
Please note that this implementation is designed for a specific purpose. If you're looking for something you can use interactively, please check out the Salient Poses - Maya implementation. I'm also planning to work on a Blender implementation in the future. And, if that's not enough, let me know your favorite animation tool - I'd be happy to program it up for you!
Getting set up is simple, provided you've got a bit of scripting experience under your belt (if not, feel free to get in touch and I'm happy to walk you through the process). The steps are:
- ensure you have an OpenCL device (most Intel, AMD, and NVIDIA chips are fine),
- write an exporter and importer for your animation tool (generally a few lines of Python), and
- download or build the tool.
Importer and Exporter
You'll need to write scripts to export animation into the CSV file format used by this tool and then to import the compressed animation back in. I've provided a Python importer/exporter for Maya as an example; see the Maya Importer / Exporter. Again, check out the examples in expected to get a feeling for the format.
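As a rough sketch of what an exporter boils down to, here's a plain-Python version: sample every joint on every frame and write one CSV row per sample. In Maya you'd pull the samples from the scene (e.g. via maya.cmds); here the sampling function is a stub so the sketch runs anywhere, and the "frame,joint,x,y,z" layout is a hypothetical simplification - match the example files instead:

```python
import csv

def sample_joint(joint, frame):
    # Stand-in for querying your animation tool at the given frame.
    # A real exporter would read the joint's transform from the scene.
    return (float(frame), 0.0, 0.0)

joints = ["Hips", "Spine"]
with open("walk.csv", "w", newline="") as handle:
    writer = csv.writer(handle)
    for frame in range(3):  # export frames 0..2
        for joint in joints:
            writer.writerow([frame, joint, *sample_joint(joint, frame)])
```

The importer is the same idea in reverse: read each row, then key the value onto the matching joint at the matching frame.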
Download or Build
Once you get that far, it's time to use the tool. You are welcome to take the executable from the releases page, or otherwise download and build the command-line tool yourself (you'll need CMake and an IDE such as Visual Studio or Xcode). Use the relevant script (setup.bat for Windows, setup.command for Mac, or setup.sh for Linux) in the root folder of the project to perform the IDE setup. If you decide to build the code yourself, make sure you run the tests (on Windows use run-tests.bat, on Mac use run-tests.command, and on Linux use run-tests.sh). If the tests pass, you're good to go!
From there, all you need to do is run the command-line tool: simply click on run-main.command (Mac) or run-main.sh (Linux). A command line, shell, or terminal will open and ask you for some information:
- where to find the exported animation CSV file,
- which frames the compressed animation should start and end at (you can compress just part of the animation if you like),
- the number of keyframes to keep in total,
- any particular keyframes that should be kept no matter what, and finally
- which OpenCL-capable device you would like to use.
Here's an example using one of the test files:
Once the tool has finished, you can find the simplified animation in a new CSV file in the same folder as the animation CSV. Use your importer to load the animation back into your program of choice.
Digging Into the Code
This algorithm works in a few steps, which I like to think of as:
- analysis, building a table that expresses the importance of each pose,
- selection, optimally choosing a set of poses for each level of compression, and
- reconstruction, where we create the new animation from a given set of poses.
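To get a feeling for how analysis and selection fit together, here's a much-simplified, hypothetical stand-in for the Error Table and Selector, working on a toy one-dimensional signal with plain dynamic programming. The real implementation operates on full poses and offloads the heavy parts to OpenCL, but you might picture the computation like this:

```python
# Toy stand-in for the analysis and selection steps. The "error table"
# scores how badly a straight line from sample i to sample j recreates the
# frames in between; a small dynamic program then picks the keyframes that
# minimise the worst segment error.

def error_table(signal):
    n = len(signal)
    err = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            for f in range(i + 1, j):
                t = (f - i) / (j - i)
                approx = signal[i] + t * (signal[j] - signal[i])
                err[i][j] = max(err[i][j], abs(signal[f] - approx))
    return err

def select(signal, n_keys):
    n = len(signal)
    err = error_table(signal)
    # best[k][j]: minimal worst-segment error covering frames 0..j using
    # k keys, with frame j as the last key; back[k][j] is the previous key.
    INF = float("inf")
    best = [[INF] * n for _ in range(n_keys + 1)]
    back = [[0] * n for _ in range(n_keys + 1)]
    best[1][0] = 0.0
    for k in range(2, n_keys + 1):
        for j in range(1, n):
            for i in range(j):
                cost = max(best[k - 1][i], err[i][j])
                if cost < best[k][j]:
                    best[k][j], back[k][j] = cost, i
    # Recover the chosen keyframes from the back-pointers.
    keys, j = [], n - 1
    for k in range(n_keys, 0, -1):
        keys.append(j)
        j = back[k][j]
    return sorted(keys)

signal = [0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0]  # a simple up-then-down motion
print(select(signal, 3))  # keeps the ends plus the peak -> [0, 3, 6]
```

One nice property of this style of selection, and a reason the error table is worth caching, is that once the table exists you can extract an optimal selection for any level of compression without re-analysing the motion.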
If you'd like to peek into the code, start by looking at the Error Table class (this performs the analysis step), then the Selector class (this chooses optimal sets of poses), and also the Interpolator class (this performs reconstruction using a basic curve-fitting technique).
From there, try to find the InterpolatorSet classes, which are important to understand for exporting selections and reconstructed animations. Once you get that far, you'll start to see that most objects (created from these classes) can be exported to and recovered from CSV files. The modular design means that I can create variations of the command-line tool to perform the pipeline while avoiding redundant evaluations of the analysis and selection steps. This is very important for maintaining interactive performance in other implementations.
Let me know how you get on and feel free to ask questions! I'm always happy to write a bit more on the design or, if all else fails, talk it over on chat.
I designed and implemented the Salient Poses algorithm as the core contribution in my doctorate study. If you feel particularly crazy, you can read my thesis:
- Converting Motion Capture Into Editable Keyframe Animation: Fast, Optimal, and Generic Keyframe Selection (2018).
I will also be presenting this work in December at the SIGGRAPH Asia '18 conference. Once published, I will include a link to the paper and presentation here.
Thanks to Adobe's Mixamo for the animation used in the tests.