Simple baselines and RNNs for predicting human motion in tensorflow. Presented at CVPR 17.


This is the code for the paper

Julieta Martinez, Michael J. Black, Javier Romero. On human motion prediction using recurrent neural networks. In CVPR 17.

It can also be found on arXiv.

The code in this repository was written by Julieta Martinez and Javier Romero.


## Get this code and the data

First things first: clone this repo and get the Human3.6M dataset in exponential map format.

```bash
git clone
cd human-motion-prediction
mkdir data
cd data
cd ..
```
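The dataset stores joint rotations as exponential maps (axis-angle vectors whose norm is the rotation angle). As a point of reference, here is a minimal numpy sketch of converting an exponential map to a rotation matrix via Rodrigues' formula; the helper name is ours, not the repo's:

```python
import numpy as np

def expmap_to_rotmat(r):
    """Convert a 3-vector exponential map to a 3x3 rotation matrix
    using Rodrigues' formula. (Illustrative helper, not from the repo.)"""
    theta = np.linalg.norm(r)   # rotation angle is the vector's norm
    if theta < 1e-8:
        return np.eye(3)        # near-zero rotation: identity
    k = r / theta               # unit rotation axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])  # cross-product (skew) matrix of k
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
```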

## Quick demo and visualization

For a quick demo, you can train for a few iterations and visualize the outputs of your model.

To train, run

```bash
python src/ --action walking --seq_length_out 25 --iterations 10000
```

To save some samples of the model, run

```bash
python src/ --action walking --seq_length_out 25 --iterations 10000 --sample --load 10000
```

Finally, to visualize the samples, run

```bash
python src/
```

This should create a visualization similar to the gif under `imgs/`.

## Running average baselines

To reproduce the running average baseline results from our paper, run

```bash
python src/
```
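A running-average baseline needs no learning at all: it predicts every future frame as the mean of the last few observed frames. A minimal numpy sketch of that idea (the function name and signature are our own, not the repo's):

```python
import numpy as np

def running_average_baseline(observed, k, horizon):
    """Zero-learning baseline: predict each of the next `horizon` frames
    as the average of the last `k` observed frames.
    observed: (seq_len, dim) array of past poses. (Illustrative sketch.)"""
    mean_pose = observed[-k:].mean(axis=0)     # average the last k frames
    return np.tile(mean_pose, (horizon, 1))    # repeat it for every future frame
```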

## RNN models

To train and reproduce the results of our models, use the following commands

| model | arguments | training time (gtx 1080) | notes |
| --- | --- | --- | --- |
| Sampling-based loss (SA) | `python src/ --action walking --seq_length_out 25` | 45s / 1000 iters | Realistic long-term motion; loss computed over 1 second. |
| Residual (SA) | `python src/ --residual_velocities --action walking` | 35s / 1000 iters | |
| Residual unsup. (MA) | `python src/ --residual_velocities --learning_rate 0.005 --omit_one_hot` | 65s / 1000 iters | |
| Residual sup. (MA) | `python src/ --residual_velocities --learning_rate 0.005` | 65s / 1000 iters | Best quantitative results. |
| Untied | `python src/ --residual_velocities --learning_rate 0.005 --architecture basic` | 70s / 1000 iters | |
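The `--residual_velocities` flag refers to the paper's residual connection: rather than predicting the next pose directly, the network outputs a velocity (delta) that is added to the previous pose. A minimal numpy sketch of that decoding rule, with names of our choosing:

```python
import numpy as np

def residual_decode(last_pose, deltas):
    """Roll out a residual decoder: pose_t = pose_{t-1} + delta_t,
    where delta_t is the model's per-step output (a velocity).
    last_pose: (dim,) last observed frame.
    deltas: (horizon, dim) per-step model outputs. (Illustrative sketch.)"""
    poses = []
    pose = last_pose
    for delta in deltas:
        pose = pose + delta   # residual connection: add the predicted velocity
        poses.append(pose)
    return np.stack(poses)
```

Because the model only has to learn deviations from the last seen frame, a zero output already reproduces a strong "constant pose" baseline, which is part of why this variant trains well.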

You can substitute the --action walking parameter for any action in

["directions", "discussion", "eating", "greeting", "phoning",
 "posing", "purchases", "sitting", "sittingdown", "smoking",
 "takingphoto", "waiting", "walking", "walkingdog", "walkingtogether"]

or --action all (default) to train on all actions.
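In the supervised multi-action setting, the model is told which action it is generating via a one-hot action indicator appended to each input frame (`--omit_one_hot` disables this). A sketch of that encoding, using the action list above; the helper name is ours:

```python
import numpy as np

ACTIONS = ["directions", "discussion", "eating", "greeting", "phoning",
           "posing", "purchases", "sitting", "sittingdown", "smoking",
           "takingphoto", "waiting", "walking", "walkingdog", "walkingtogether"]

def append_one_hot(frame, action):
    """Append a one-hot action indicator to a pose vector.
    (Illustrative helper, not the repo's actual preprocessing code.)"""
    one_hot = np.zeros(len(ACTIONS))
    one_hot[ACTIONS.index(action)] = 1.0
    return np.concatenate([frame, one_hot])
```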

The code logs the error in Euler angles for each action to tensorboard. You can track progress during training by running `tensorboard --logdir experiments` in a terminal and opening the address tensorboard prints in your browser (occasionally, tensorboard might pick another url).
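Under our reading of the paper, the logged metric is the Euclidean distance between predicted and ground-truth Euler-angle vectors at each future frame. A minimal numpy sketch of that metric (the function name is ours):

```python
import numpy as np

def euler_error(pred, gt):
    """Per-frame Euclidean distance between predicted and ground-truth
    Euler-angle vectors. pred, gt: (horizon, dim) arrays.
    Returns a (horizon,) array of errors. (Illustrative sketch.)"""
    return np.sqrt(((pred - gt) ** 2).sum(axis=1))
```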


If you use our code, please cite our work:

```bibtex
@inproceedings{
  title={On human motion prediction using recurrent neural networks},
  author={Martinez, Julieta and Black, Michael J. and Romero, Javier},
  booktitle={CVPR},
  year={2017}
}
```

The pre-processed Human3.6M dataset and some of our evaluation code (especially under src/) were ported/adapted from SRNN by @asheshjain399.