Pull the Docker image:
docker pull dongjunku/humaninter:cu11
Please organize your project in the following structure:
ContactGen
├── body_models
│ ├── smplx
│ │ ├── SMPLX_NEUTRAL.npz
│ │ ├── SMPLX_NEUTRAL.pkl
├── datasets
│ ├── chi3d
│ │ ├── train
│ │ │ ├── s02
│ │ │ │ ├── camera_parameters
│ │ │ │ ├── gpp
│ │ │ │ ├── joints3d_25
│ │ │ │ ├── smplx
│ │ │ │ ├── videos
│ │ │ │ ├── interaction_contact_signature.json
│ │ │ ├── s03
│ │ │ ├── s04
│ ├── chi3d_whoisactor.pkl
│ ├── contact_regions.json
│ ├── r_sym_pair.pkl
├── ci3d.py
├── loss.py
├── model.py
├── optimizer.py
├── params.py
├── sample.py
├── test_diffusion.py
├── test_guidenet.py
├── train_diffusion.py
├── train_guidenet.py
├── utils.py
├── visualize.py
You can get the CHI3D dataset here.
You can get SMPL-X here.
You can get contact_regions.json here.
The pretrained models can be downloaded here. After downloading, place checkpoint_diffusion_ci3d and checkpoint_guidenet_ci3d in the ContactGen/ directory.
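Before training, it may help to confirm that the required files ended up in the right places. Below is a minimal sketch of such a check; the path list is taken from the tree above, and `missing_paths` is a hypothetical helper, not part of the repository:

```python
import os

# Paths relative to the ContactGen/ root, taken from the tree above.
EXPECTED = [
    "body_models/smplx/SMPLX_NEUTRAL.npz",
    "datasets/chi3d/train/s02/interaction_contact_signature.json",
    "datasets/chi3d_whoisactor.pkl",
    "datasets/contact_regions.json",
    "datasets/r_sym_pair.pkl",
    "checkpoint_diffusion_ci3d",
    "checkpoint_guidenet_ci3d",
]

def missing_paths(root="."):
    """Return the expected files/directories that are absent under root."""
    return [p for p in EXPECTED if not os.path.exists(os.path.join(root, p))]

if __name__ == "__main__":
    missing = missing_paths()
    if missing:
        print("Missing:", *missing, sep="\n  ")
    else:
        print("Project layout looks complete.")
```

Run it from inside ContactGen/ to list anything still missing.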
The diffusion module should be trained first:
python train_diffusion.py
Next, you can train GuideNet:
python train_guidenet.py
To generate samples, run:
python sample.py
The samples will be saved in the output_diffusion_epoch1000_ci3d directory.
You can visualize a generated sample:
python visualize.py output_diffusion_epoch1000_ci3d/???_human_pred.pkl
You can also visualize the overall diffusion steps using visualize.py.
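If you want to inspect a generated sample programmatically rather than through visualize.py, the *_human_pred.pkl files can be opened with Python's pickle module. This is a minimal sketch; the actual dictionary layout is defined by sample.py, so the code below just enumerates whatever was stored rather than assuming particular keys:

```python
import pickle

def load_sample(path):
    """Load a generated *_human_pred.pkl and print a summary of its contents."""
    with open(path, "rb") as f:
        data = pickle.load(f)
    # The concrete structure is defined by sample.py; here we only
    # enumerate the stored entries and their shapes/types.
    if isinstance(data, dict):
        for key, value in data.items():
            shape = getattr(value, "shape", None)
            print(key, shape if shape is not None else type(value).__name__)
    return data
```

This is handy for checking array shapes before feeding a sample into your own downstream code.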