python -m venv env
source env/bin/activate
bash install.sh
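To verify the environment is active, a quick optional check (this assumes install.sh installs the project's Python dependencies, including PyTorch):

which python  # should point into ./env/bin/
python -c "import torch; print(torch.__version__)"  # assumes PyTorch was installed by install.sh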
# Step 1: Download data from: https://graphics.tu-bs.de/people-snapshot
# Step 2: Preprocess using our script
python scripts/peoplesnapshot/preprocess_PeopleSnapshot.py --root <PATH_TO_PEOPLESNAPSHOT> --subject male-3-casual
# Step 3: Download SMPL from: https://smpl.is.tue.mpg.de/ and place the model in ./data/SMPLX/smpl/
# └── SMPLX/smpl/
#     ├── SMPL_FEMALE.pkl
#     ├── SMPL_MALE.pkl
#     └── SMPL_NEUTRAL.pkl
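As a quick sanity check for the layout above (a minimal sketch; the path follows Step 3):

for f in SMPL_FEMALE.pkl SMPL_MALE.pkl SMPL_NEUTRAL.pkl; do
    [ -f "./data/SMPLX/smpl/$f" ] || echo "missing: $f"
done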
Quickly learn and animate an avatar with `bash ./bash/run-demo.sh`.
Here we use the in-the-wild video provided by NeuMan as an example:
- create a YAML file specifying the details of the sequence in `./confs/dataset/`. For this example, one is provided at `./confs/dataset/neuman/seattle.yaml`.
- download the data from NeuMan's repo, and run
  cp -r <path-to-neuman-dataset>/seattle/images ./data/custom/seattle/
- run the bash script
  bash scripts/custom/process-sequence.sh ./data/custom/seattle neutral
  to preprocess the images, which
  - uses OpenPose to estimate the 2D keypoints,
  - uses Segment-Anything to segment the scene,
  - uses ROMP to estimate camera and SMPL parameters
- run the bash script
  bash ./bash/run-neuman-demo.sh
  to learn an avatar (a consolidated sketch of these steps follows this list)
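Putting the steps together, here is a minimal end-to-end sketch of the custom-video pipeline (a sketch, not a tested script; it assumes the provided `./confs/dataset/neuman/seattle.yaml` config and that `./data/custom/seattle/` may need to be created first):

set -e  # stop at the first failing step

# 1. Place the NeuMan images in the expected layout
mkdir -p ./data/custom/seattle
cp -r <path-to-neuman-dataset>/seattle/images ./data/custom/seattle/

# 2. Preprocess: OpenPose 2D keypoints, Segment-Anything masks,
#    and ROMP camera/SMPL estimates
bash scripts/custom/process-sequence.sh ./data/custom/seattle neutral

# 3. Learn the avatar
bash ./bash/run-neuman-demo.sh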
You can then animate the learned avatar easily.
We would like to acknowledge the third-party repositories whose code we used in this project. We are grateful to their developers and contributors for their hard work and dedication to the open-source community; without their contributions, our project would not have been possible.
Please also check out our related projects!
@inproceedings{jiang2022instantavatar,
    author    = {Jiang, Tianjian and Chen, Xu and Song, Jie and Hilliges, Otmar},
    title     = {InstantAvatar: Learning Avatars from Monocular Video in 60 Seconds},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023},
}