AMASS: Archive of Motion Capture as Surface Shapes

AMASS is a large database of human motion unifying different optical marker-based motion capture datasets by representing them within a common framework and parameterization. AMASS is readily useful for animation, visualization, and generating training data for deep learning.

Here we provide tools and tutorials to use AMASS in your research projects. More specifically:

  • Following the data splits recommended by AMASS, we provide non-overlapping train/validation/test splits.
  • AMASS uses an extended version of SMPL+H with DMPLs. Here we show how to load different components and visualize a body model with AMASS data.
  • AMASS is also compatible with SMPL and SMPL-X body models. We show how to use the body data from AMASS to animate these models.

Installation

Requirements

Install from this repository for the latest developments:

pip install git+https://github.com/nghorbani/amass

Body Models

AMASS fits a statistical body model to labeled marker-based optical motion capture data. The original paper uses SMPL+H with an extended shape space (16 betas) and DMPLs. Please download each model and place it in the body_models folder of this repository after obtaining the code from GitHub.
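
The SMPL+H pose parameterization used by AMASS concatenates per-joint axis-angle rotations into one vector per frame. As a minimal sketch, assuming the standard SMPL+H layout (52 joints × 3 axis-angle values = 156 parameters: global root orientation, 21 body joints, and 2 × 15 hand joints) and using plain NumPy so that no downloaded model files are needed:

```python
import numpy as np

# A zero pose vector standing in for one frame of AMASS data.
# Layout assumption: 52 joints x 3 axis-angle parameters = 156.
pose = np.zeros(156)

root_orient = pose[:3]      # global root orientation (axis-angle)
pose_body   = pose[3:66]    # 21 body joints x 3
pose_hands  = pose[66:156]  # 15 left + 15 right hand joints, x 3 each

print(root_orient.shape, pose_body.shape, pose_hands.shape)
```

These slices are how the full pose vector is typically split before being fed to the body model's root-orientation, body-pose, and hand-pose inputs.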

Tutorials

We release tools and multiple Jupyter notebooks to demonstrate how to use AMASS to animate the SMPL+H body model.

Furthermore, as promised in the supplementary material of the paper, we release code to produce synthetic mocap using DFaust registrations.

Please refer to tutorials for further details.
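
Each AMASS sequence is distributed as a compressed NumPy .npz archive. As an illustrative sketch of the fields a loaded sequence exposes (the archive below is synthetic, built in memory rather than downloaded, and the 100-frame length is arbitrary):

```python
import io
import numpy as np

# Build a synthetic stand-in for one AMASS sequence (100 frames).
# Field names and shapes follow the AMASS npz layout.
n_frames = 100
buf = io.BytesIO()
np.savez(
    buf,
    poses=np.zeros((n_frames, 156)),  # SMPL+H pose parameters per frame
    betas=np.zeros(16),               # body shape coefficients
    dmpls=np.zeros((n_frames, 8)),    # DMPL soft-tissue dynamics coefficients
    trans=np.zeros((n_frames, 3)),    # global root translation per frame
    gender="neutral",
    mocap_framerate=120.0,
)
buf.seek(0)

data = np.load(buf, allow_pickle=True)
print(sorted(data.files))
print(data["poses"].shape, float(data["mocap_framerate"]))
```

A real sequence is loaded the same way with `np.load("path/to/sequence.npz")`; the notebooks then pass these arrays to the body model to pose and render a mesh.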

Citation

Please cite the following paper if you use this code directly or indirectly in your research/projects:

@inproceedings{AMASS:2019,
  title={AMASS: Archive of Motion Capture as Surface Shapes},
  author={Mahmood, Naureen and Ghorbani, Nima and F. Troje, Nikolaus and Pons-Moll, Gerard and Black, Michael J.},
  booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
  year={2019},
  month = {Oct},
  url = {https://amass.is.tue.mpg.de},
  month_numeric = {10}
}

License

Software Copyright License for non-commercial scientific research purposes. Please read carefully the terms and conditions and any accompanying documentation before you download and/or use the AMASS dataset, and software, (the "Model & Software"). By downloading and/or using the Model & Software (including downloading, cloning, installing, and any other use of this GitHub repository), you acknowledge that you have read these terms and conditions, understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not download and/or use the Model & Software. Any infringement of the terms of this agreement will automatically terminate your rights under this License.

Contact

The code in this repository is developed by Nima Ghorbani.

If you have any questions you can contact us at amass@tuebingen.mpg.de.

For commercial licensing, contact ps-licensing@tue.mpg.de.
