Official implementation for the paper "Semantics2Hands: Transferring Hand Motion Semantics between Avatars".
If you find our work useful, please cite:

```bibtex
@inproceedings{ye2023semantics,
  title={Semantics2Hands: Transferring Hand Motion Semantics between Avatars},
  author={Ye, Zijie and Jia, Jia and Xing, Junliang},
  booktitle={Proceedings of the 31st ACM International Conference on Multimedia},
  year={2023}
}
```
Create and activate a conda environment:

```bash
conda create -n s2h python=3.8
conda activate s2h
```

The code was tested on Python 3.8, PyTorch 1.13.1, and CUDA 12.1.
Install the packages from `requirements.txt`:

```bash
pip install -r requirements.txt
```
Install PyTorch3D. Following the official PyTorch3D installation instructions, you can install it with:

```bash
conda install -c fvcore -c iopath -c conda-forge fvcore iopath
# Installing from prebuilt wheels is faster than using conda, but the wheel
# URL may differ depending on your CUDA and PyTorch versions.
pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/py38_cu113_pyt1121/download.html
```
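After installation, a quick sanity check (a minimal sketch, assuming everything installed cleanly) confirms that PyTorch and PyTorch3D import and that CUDA is visible:

```python
# Verify that PyTorch and PyTorch3D are importable and CUDA is usable.
import torch
import pytorch3d

print("torch:", torch.__version__)              # tested with 1.13.1
print("built for CUDA:", torch.version.cuda)
print("cuda available:", torch.cuda.is_available())
print("pytorch3d:", pytorch3d.__version__)
```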
The `chumpy` package required by `smplx` is no longer maintained and raises an `ImportError` with recent NumPy versions, because it imports aliases (`numpy.bool`, `numpy.int`, etc.) that NumPy has since removed. Comment out line 11 in `CONDA_DIR/envs/s2h/lib/python3.8/site-packages/chumpy/__init__.py`, where `CONDA_DIR` is the directory where conda is installed:

```python
# Line 11 of "CONDA_DIR/envs/s2h/lib/python3.8/site-packages/chumpy/__init__.py";
# comment out this import:
from numpy import bool, int, float, complex, object, unicode, str, nan, inf
```
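If you prefer not to edit the file by hand, the same patch can be applied programmatically (a sketch; it assumes the offending line is the `from numpy import ...` shown above):

```python
# Patch chumpy/__init__.py in place. importlib.util.find_spec locates the
# file without executing it (importing chumpy directly would raise the very
# ImportError we are trying to fix).
import importlib.util
from pathlib import Path

init_file = Path(importlib.util.find_spec("chumpy").origin)
lines = init_file.read_text().splitlines(keepends=True)
for i, line in enumerate(lines):
    if line.lstrip().startswith("from numpy import bool, int, float"):
        lines[i] = "# " + line  # comment out the removed numpy aliases
init_file.write_text("".join(lines))
print("patched:", init_file)
```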
Install Blender >= 2.82 from: https://www.blender.org/download/.
Download the MANO model files from the MANO website (https://mano.is.tue.mpg.de/). Place `MANO_LEFT.pkl` and `MANO_RIGHT.pkl` in `artifact/smplx/models/mano`.
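To verify that the files are in place, you can try loading them through `smplx` (a minimal sketch; it assumes `artifact/smplx/models` is used as the smplx model root, matching the layout above):

```python
# Sanity check: load both MANO hands through smplx. The model path points at
# the directory that contains the "mano" subfolder with the .pkl files.
import smplx

for is_rhand in (False, True):
    mano = smplx.create("artifact/smplx/models", model_type="mano", is_rhand=is_rhand)
    output = mano()  # forward pass with the default (zero) parameters
    print("is_rhand =", is_rhand, "vertices:", tuple(output.vertices.shape))  # (1, 778, 3)
```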
We use the Mixamo and InterHand2.6M datasets to train our model. You can download the preprocessed data from Google Drive and place the `MixHand` directory inside the `artifact` directory. Alternatively, if you want to preprocess the dataset on your own or use different characters, follow the instructions in DATA.md.
First, download the pretrained model and place it in the `artifact` directory.
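Before running the evaluation commands, it can help to confirm that everything is where they expect (a sketch; the paths are the ones referenced in this README, and checkpoint names may differ on your machine):

```python
# Check that the assets referenced by the commands below are in place.
from pathlib import Path

expected = [
    "artifact/smplx/models/mano/MANO_LEFT.pkl",
    "artifact/smplx/models/mano/MANO_RIGHT.pkl",
    "artifact/MixHand",
    "artifact/ASRN/lightning_logs/version_0/config.yaml",
]
for path in expected:
    print(("ok      " if Path(path).exists() else "MISSING ") + path)
```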
To reproduce quantitative results, run:
```bash
python -m run.train_mixhand test --config artifact/ASRN/lightning_logs/version_0/config.yaml --ckpt_path artifact/ASRN/lightning_logs/version_0/checkpoints/best_mixamo-epoch=58-mixamo_semi_rf=-1.80.ckpt
```
To reproduce qualitative results, run:
```bash
python -m run.visualize_mixhand --ckpt_path artifact/ASRN/lightning_logs/version_0/checkpoints/best_mixamo-epoch=58-mixamo_semi_rf=-1.80.ckpt --config ./artifact/ASRN/lightning_logs/version_0/config.yaml --output_dir artifact/visualization
```
First, uncomment lines 180–183 in `data/combined_motion.py`.
To reproduce quantitative results, run:
```bash
python -m run.train_mixhand test --config artifact/ASRN/lightning_logs/version_0/config.yaml --ckpt_path artifact/ASRN/lightning_logs/version_0/checkpoints/best_mixamo-epoch=58-mixamo_semi_rf=-1.80.ckpt
```
To train ASRN, run:

```bash
python -m run.train_mixhand fit --config config/ASRN.yaml --trainer.default_root_dir artifact/ASRN
```
To train DM, run:

```bash
python -m run.train_dm fit --config config/DM.yaml --trainer.default_root_dir artifact/DM
```
To reproduce quantitative results, run:
```bash
python -m run.train_dm test --config artifact/DM/lightning_logs/version_0/config.yaml --ckpt_path artifact/DM/lightning_logs/version_0/checkpoints/best_mixamo-epoch=93-mixamo_semi_rf=0.00.ckpt
```
The name of the checkpoint file may differ.
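Since the checkpoint filename encodes the epoch and the validation metric, a small helper (a sketch using the directories above) lists what was actually saved:

```python
# List the checkpoints that Lightning saved; pass the printed path to
# --ckpt_path if it differs from the one shown in the command above.
from pathlib import Path

for ckpt in sorted(Path("artifact/DM/lightning_logs/version_0/checkpoints").glob("*.ckpt")):
    print(ckpt)
```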
First, uncomment lines 180–183 in `data/combined_motion.py`.
To reproduce quantitative results, run:
```bash
python -m run.train_dm test --config artifact/DM/lightning_logs/version_0/config.yaml --ckpt_path artifact/DM/lightning_logs/version_0/checkpoints/best_mixamo-epoch=93-mixamo_semi_rf=0.00.ckpt
```
The name of the checkpoint file may differ.
The code for the dataloader, the BVH parser, and the Animation object is based on the SAN repository.
This code is distributed under the MIT license.