3d-pose-baseline

This is the code for the paper

Julieta Martinez, Rayat Hossain, Javier Romero, James J. Little. A simple yet effective baseline for 3d human pose estimation. In arXiv, 2017. https://arxiv.org/pdf/1705.03098.pdf.

The code in this repository was mostly written by Julieta Martinez, Rayat Hossain and Javier Romero.

We provide a strong baseline for 3d human pose estimation that also sheds light on the challenges of current approaches. Our model is lightweight, and we strive to make our code transparent, compact, and easy to understand.
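
For orientation, here is a minimal sketch of the kind of architecture the paper describes: a fully-connected network with residual connections, batch normalization, ReLU activations, and dropout that maps 2d joint detections to 3d joint positions. It uses the Keras API and an assumed joint count of 16; it is an illustration only, not the repository's TensorFlow 1.x implementation.

import tensorflow as tf

NUM_JOINTS = 16      # assumed number of joints
LINEAR_SIZE = 1024   # hidden width used in the paper
DROPOUT_RATE = 0.5   # matches the --dropout 0.5 flag used below

def linear_block(x):
    # Linear layer followed by batch norm, ReLU, and dropout.
    x = tf.keras.layers.Dense(LINEAR_SIZE)(x)
    x = tf.keras.layers.BatchNormalization()(x)
    x = tf.keras.layers.ReLU()(x)
    return tf.keras.layers.Dropout(DROPOUT_RATE)(x)

inputs = tf.keras.Input(shape=(NUM_JOINTS * 2,))    # 2d detections
x = linear_block(inputs)
for _ in range(2):                                  # two residual blocks
    shortcut = x
    x = linear_block(x)
    x = linear_block(x)
    x = tf.keras.layers.Add()([x, shortcut])        # residual connection (--residual)
outputs = tf.keras.layers.Dense(NUM_JOINTS * 3)(x)  # 3d joint positions
model = tf.keras.Model(inputs, outputs)
model.summary()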

Dependencies

The code is written in Python and uses TensorFlow.

First of all

  1. Watch our video: https://youtu.be/Hmi3Pd9x1BE
  2. Clone this repository and get the data. We provide the Human3.6M dataset in 3d points, camera parameters to produce ground truth 2d detections, and Stacked Hourglass detections.
git clone git@github.com:una-dinosauria/3d-pose-baseline.git
cd 3d-pose-baseline
mkdir data
cd data
wget https://www.dropbox.com/s/e35qv3n6zlkouki/h36m.zip
unzip h36m.zip
rm h36m.zip
cd ..
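
After these steps, the Human3.6M data should live under data/h36m (created by unzipping h36m.zip inside data/). The small Python check below is an optional convenience, not part of the repository, for confirming the layout before training:

import os

data_dir = os.path.join("data", "h36m")  # created by "unzip h36m.zip" inside data/
if not os.path.isdir(data_dir):
    raise SystemExit("Expected Human3.6M data under data/h36m; re-run the steps above.")
print("Found", data_dir, "containing", len(os.listdir(data_dir)), "entries")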

Quick demo

For a quick demo, you can train for one epoch and visualize the results. To train, run

python src/predict_3dpose.py --camera_frame --residual --batch_norm --dropout 0.5 --max_norm --evaluateActionWise --use_sh --epochs 1

This should take less than 5 minutes to complete on a GTX 1080, and give you around 75 mm of error on the test set.

Now, to visualize the results, simply run

python src/predict_3dpose.py --camera_frame --residual --batch_norm --dropout 0.5 --max_norm --evaluateActionWise --use_sh --epochs 1 --sample --load 24371

This will produce a visualization similar to this:

Visualization example

Training

To train a model with clean 2d detections, run:

python src/predict_3dpose.py --camera_frame --residual --batch_norm --dropout 0.5 --max_norm --evaluateActionWise

This corresponds to the bottom row of Table 2 in the paper: Ours (GT detections) (MA).

To train on Stacked Hourglass detections, run

python src/predict_3dpose.py --camera_frame --residual --batch_norm --dropout 0.5 --max_norm --evaluateActionWise --use_sh

This corresponds to the next-to-last row of Table 2 in the paper: Ours (SH detections) (MA).

On a GTX 1080 GPU, a batch of 64 takes less than 8 ms for a forward+backward pass, and less than 6 ms for a forward-only pass.
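
If you want to reproduce this kind of measurement yourself, a rough timing loop along the lines below can be used. It is a sketch against a stand-in dense network with the same batch size, not the repository's benchmark, and the layer sizes are assumptions:

import time
import numpy as np
import tensorflow as tf

# Stand-in dense network with 1024-unit hidden layers (an assumption for illustration).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32,)),
    tf.keras.layers.Dense(1024, activation="relu"),
    tf.keras.layers.Dense(1024, activation="relu"),
    tf.keras.layers.Dense(48),
])

batch = tf.constant(np.random.randn(64, 32).astype(np.float32))  # batch of 64, as above
model(batch)  # warm-up so one-time tracing cost is not measured

start = time.perf_counter()
for _ in range(100):
    model(batch)  # forward-only pass
elapsed_ms = (time.perf_counter() - start) / 100 * 1000
print(f"forward-only: {elapsed_ms:.2f} ms per batch of 64")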

Citing

If you use our code, please cite our work:

@article{martinez_2017_3dbaseline,
  title={A simple yet effective baseline for 3d human pose estimation},
  author={Martinez, Julieta and Hossain, Rayat and Romero, Javier and Little, James J.},
  journal={arXiv preprint arXiv:1705.03098},
  year={2017}
}

License

MIT
