
Realistic Full-Body Tracking from Sparse Observations via Joint-Level Modeling

ByteDance
🤩 Accepted to ICCV 2023

AvatarJLM uses tracking signals of the head and hands to estimate accurate, smooth, and plausible full-body motions.
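To make "joint-level modeling" concrete, below is a minimal, illustrative PyTorch sketch, not the authors' implementation: the head and hand tracking signals are embedded as tokens for their corresponding joints, the remaining joints get learnable tokens, and a transformer lets all joints exchange information before per-joint rotations are regressed. All layer sizes, feature dimensions, and joint indices here are assumptions for illustration.

```python
# Illustrative joint-level modeling sketch (NOT the official AvatarJLM code).
# Assumptions: 22 SMPL body joints; 3 trackers (head + two hands), each
# flattened to an 18-dim feature (e.g., rotation + position + velocities).
import torch
import torch.nn as nn

class JointLevelSketch(nn.Module):
    def __init__(self, num_joints=22, dim=256, tracker_dim=18):
        super().__init__()
        self.embed = nn.Linear(tracker_dim, dim)                        # per-tracker signal embedding
        self.joint_tokens = nn.Parameter(torch.zeros(num_joints, dim))  # learnable tokens for unobserved joints
        self.joint_pos = nn.Parameter(torch.zeros(num_joints, dim))     # per-joint positional embedding
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(dim, 6)                                   # 6D rotation per joint

    def forward(self, trackers, observed_idx):
        # trackers: (B, 3, tracker_dim); observed_idx: joint ids of head/hands
        B = trackers.shape[0]
        tokens = self.joint_tokens.expand(B, -1, -1).clone()
        tokens[:, observed_idx] = self.embed(trackers)  # observed joints take tracker features
        feats = self.encoder(tokens + self.joint_pos)   # all joints attend to one another
        return self.head(feats)                         # (B, num_joints, 6)

model = JointLevelSketch()
rot6d = model(torch.randn(2, 3, 18), torch.tensor([15, 20, 21]))  # SMPL head/wrist indices
print(rot6d.shape)  # torch.Size([2, 22, 6])
```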

📖 For more visual results, please check out our project page


📣 Updates

[09/2023] Testing samples are available.

[09/2023] Training and testing codes are released.

[07/2023] AvatarJLM is accepted to ICCV 2023 🥳!

📁 Data Preparation

AMASS

  1. Please download the datasets from AMASS.
  2. Download the required body models and place them in the ./support_data/body_models directory of this repository. For the SMPL+H body model, please download the AMASS version with DMPL blendshapes from http://mano.is.tue.mpg.de/. You can obtain the dynamic shape blendshapes (DMPLs) from http://smpl.is.tue.mpg.de.
  3. Run ./data/prepare_data.py to preprocess the input data for faster training, as shown below. The data split for training and testing under Protocol 1 in our paper is stored in the ./data/data_split folder (from AvatarPoser).

python ./data/prepare_data.py --protocol [1, 2, 3] --root [path to AMASS]
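For example, to build the Protocol 1 data from a local AMASS copy (the path below is illustrative):

python ./data/prepare_data.py --protocol 1 --root /path/to/AMASS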

Real-Captured Data

  1. Please download our real-captured testing data from Google Drive. The data is preprocessed to the same format as our preprocessed AMASS data.
  2. Unzip the data and place it in the ./data directory of this repository.

🖥️ Requirements
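As a rough sketch only (these package names are assumptions, based on this being a Python codebase built on AvatarPoser, not a verified list), an environment along these lines should be close:

pip install torch numpy human_body_prior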

🚴 Training

python train.py --protocol [1, 2, 3] --task [name of the experiment] 
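For example, to train under Protocol 1 (the experiment name below is illustrative):

python train.py --protocol 1 --task avatarjlm_p1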

🏃‍♀️ Evaluation

python test.py --protocol [1, 2, 3, real] --task [name of the experiment] --checkpoint [path to trained checkpoint] [--vis]
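For example, to evaluate a Protocol 1 model with visualization enabled (the task name and checkpoint path below are illustrative):

python test.py --protocol 1 --task avatarjlm_p1 --checkpoint ./checkpoints/avatarjlm_p1.pth --vis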

🍭 Trained Model

| Protocol | MPJRE | MPJPE | MPJVE | Trained Model |
|------------|-------|-------|-------|---------------|
| 1 | 3.01 | 3.35 | 21.01 | Google Drive |
| 2-CMU-Test | 5.36 | 7.28 | 26.46 | Google Drive |
| 2-BML-Test | 4.65 | 6.22 | 34.45 | Google Drive |
| 2-MPI-Test | 5.85 | 6.47 | 24.13 | Google Drive |
| 3 | 4.25 | 4.92 | 27.04 | Google Drive |
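Following the evaluation conventions of AvatarPoser, MPJRE denotes mean per-joint rotation error (degrees), MPJPE mean per-joint position error (centimeters), and MPJVE mean per-joint velocity error (centimeters per second).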

🤟 Citation

If you find our work useful for your research, please consider citing the paper:

@inproceedings{zheng2023realistic,
  title={Realistic Full-Body Tracking from Sparse Observations via Joint-Level Modeling},
  author={Zheng, Xiaozheng and Su, Zhuo and Wen, Chao and Xue, Zhou and Jin, Xiaojie},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year={2023}
}

🗞️ License

Distributed under the MIT License. See LICENSE for more information.

🙌 Acknowledgements

This project is built on source code shared by AvatarPoser. We thank the authors for their great work!
