DECO: Dense Estimation of 3D Human-Scene Contact in the Wild [ICCV 2023 (Oral)]

Code repository for the paper:
DECO: Dense Estimation of 3D Human-Scene Contact in the Wild
Shashank Tripathi, Agniv Chatterjee, Jean-Claude Passy, Hongwei Yi, Dimitrios Tzionas, Michael J. Black
IEEE International Conference on Computer Vision (ICCV), 2023


[Project Page] [Paper] [Video] [Poster] [License] [Contact]

News 🚩

  • [2024/01/31] The DAMON contact labels in SMPL-X format have been released. This is the conversion script.
  • [2023/10/12] The Hugging Face demo has been released.
  • [2023/10/10] The Colab demo has been released. Hugging Face demo coming soon...

Installation and Setup

  1. First, clone the repo. Then we recommend creating a clean conda environment, activating it, and installing torch and torchvision as follows:
git clone https://github.com/sha2nkt/deco.git
cd deco
conda create -n deco python=3.9 -y
conda activate deco
pip install torch==1.13.0+cu117 torchvision==0.14.0+cu117 --extra-index-url https://download.pytorch.org/whl/cu117

Please adjust the CUDA version as required.
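
To quickly confirm that the intended CUDA build was picked up (run inside the deco environment):

import torch
print(torch.__version__)          # expect 1.13.0+cu117, or your chosen build
print(torch.cuda.is_available())  # True if a compatible GPU and driver are present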

  2. Install PyTorch3D. You may refer to PyTorch3D-install for more details; however, our tests show that installing via conda sometimes runs into dependency conflicts, so we recommend installing PyTorch3D from source following the steps below.
git clone https://github.com/facebookresearch/pytorch3d.git
cd pytorch3d
pip install .
cd ..
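
To verify that the PyTorch3D install succeeded, a minimal smoke test:

import torch
from pytorch3d.structures import Meshes
# Build a single-triangle mesh to confirm the package imports and its basic structures work.
verts = torch.zeros(1, 3, 3)
faces = torch.tensor([[[0, 1, 2]]])
print(Meshes(verts=verts, faces=faces).num_verts_per_mesh())  # tensor([3])
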
  3. Install the other dependencies and download the required data:
pip install -r requirements.txt
sh fetch_data.sh
  4. Please download the SMPL (version 1.1.0) and SMPL-X (v1.1) model files into the data folder, and rename the SMPL files to SMPL_FEMALE.pkl, SMPL_MALE.pkl and SMPL_NEUTRAL.pkl. The expected structure of the data folder is shown below:
├── preprocess
├── smpl
│   ├── SMPL_FEMALE.pkl
│   ├── SMPL_MALE.pkl
│   ├── SMPL_NEUTRAL.pkl
│   ├── smpl_neutral_geodesic_dist.npy
│   ├── smpl_neutral_tpose.ply
│   ├── smplpix_vertex_colors.npy
├── smplx
│   ├── SMPLX_FEMALE.npz
│   ├── SMPLX_FEMALE.pkl
│   ├── SMPLX_MALE.npz
│   ├── SMPLX_MALE.pkl
│   ├── SMPLX_NEUTRAL.npz
│   ├── SMPLX_NEUTRAL.pkl
│   ├── smplx_neutral_tpose.ply
├── weights
│   ├── pose_hrnet_w32_256x192.pth
├── J_regressor_extra.npy
├── base_dataset.py
├── mixed_dataset.py
├── smpl_partSegmentation_mapping.pkl
├── smpl_vert_segmentation.json
└── smplx_vert_segmentation.json
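
Once the files are in place, the body models can be sanity-checked with a short snippet (this assumes the smplx pip package is installed; the paths follow the layout above):

import smplx
smpl = smplx.create('data', model_type='smpl', gender='neutral')
smplx_model = smplx.create('data', model_type='smplx', gender='neutral')
print(smpl().vertices.shape)         # (1, 6890, 3)
print(smplx_model().vertices.shape)  # (1, 10475, 3)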

Download the DAMON dataset

⚠️ Register an account on the DECO website, and then use your username and password to log in to the Downloads page.

Follow the instructions on the Downloads page to download the DAMON dataset. The provided metadata in the npz files is described as follows:

  • imgname: relative path to the image file
  • pose: SMPL pose parameters inferred from CLIFF
  • transl: SMPL root translation inferred from CLIFF
  • shape: SMPL shape parameters inferred from CLIFF
  • cam_k: camera intrinsic matrix inferred from CLIFF
  • polygon_2d_contact: 2D contact annotations from HOT
  • contact_label: 3D contact annotations on the SMPL mesh
  • contact_label_smplx: 3D contact annotations on the SMPL-X mesh
  • scene_seg: path to the scene segmentation map from Mask2Former
  • part_seg: path to the body part segmentation map

The ordering of entries is the same for all keys, i.e., index i in every array refers to the same image.
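
For a quick look at the annotations, the npz files can be inspected directly; a minimal sketch (the path matches the conversion example below, and allow_pickle is only needed if some arrays store Python objects):

import numpy as np
data = np.load('datasets/Release_Datasets/damon/hot_dca_trainval.npz', allow_pickle=True)
for key in data.files:
    print(key, data[key].shape)
print(data['imgname'][0])  # index 0 of every array refers to the same image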

Converting DAMON contact labels to SMPL-X format (and back)

To convert contact labels from SMPL to SMPL-X format and vice versa, run the following command:

python reformat_contacts.py \
    --contact_npz datasets/Release_Datasets/damon/hot_dca_trainval.npz \
    --input_type 'smpl'
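
For intuition, the conversion amounts to transferring per-vertex contact labels through a SMPL-to-SMPL-X vertex correspondence. A purely illustrative sketch, assuming a hypothetical precomputed nearest-vertex index map (the actual script uses its own correspondence data):

import numpy as np
# Hypothetical file: index of the closest SMPL vertex for each of the
# 10475 SMPL-X vertices; not shipped under this name in the repo.
smpl_to_smplx_idx = np.load('smpl_to_smplx_nearest_vertex.npy')
contact_smpl = np.zeros(6890)                    # per-vertex SMPL contact labels
contact_smplx = contact_smpl[smpl_to_smplx_idx]  # transferred SMPL-X labels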

Run demo on images

The following command will run DECO on all images in the specified --img_src and save the renderings and colored meshes in --out_dir. The --model_path flag specifies the checkpoint to use. Additionally, the base mesh color and the color of the predicted contact annotations can be set using the --mesh_colour and --annot_colour flags, respectively.

python inference.py \
    --img_src example_images \
    --out_dir demo_out
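
The outputs can then be browsed programmatically; a small sketch (the exact filenames are up to inference.py and may differ):

from pathlib import Path
for path in sorted(Path('demo_out').rglob('*')):
    if path.suffix in {'.png', '.jpg', '.obj', '.ply'}:
        print(path)  # renderings and colored meshes saved by the demo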

Training and Evaluation

We release 3 versions of the DECO model:

  1. DECO-HRNet (best-performing model)
  2. DECO-HRNet w/o context branches
  3. DECO-Swin

All the checkpoints are available in checkpoints. However, please note that versions 2 and 3 were trained solely on the RICH dataset; we recommend using the first DECO version.

Please download the DAMON dataset from the website and place it in datasets/Release_Datasets, following the instructions above.

Evaluation

To run evaluation on the DAMON dataset, please run the following command:

python tester.py --cfg configs/cfg_test.yml

Training

The provided config (cfg_train.yml) is set up to train and evaluate on all three datasets: DAMON, RICH and PROX. To change this, please edit the TRAINING.DATASETS and VALIDATION.DATASETS keys in the config (and adjust TRAINING.DATASET_MIX_PDF as required).
By default, the best checkpoint is stored at checkpoints/Other_Checkpoints. Please run the following command to start training the DECO model:

python train.py --cfg configs/cfg_train.yml
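
Since the config is plain YAML, the dataset-related keys described above can be inspected directly; a minimal sketch, assuming PyYAML is available and the keys nest as TRAINING.DATASETS etc.:

import yaml
with open('configs/cfg_train.yml') as f:
    cfg = yaml.safe_load(f)
print(cfg['TRAINING']['DATASETS'])
print(cfg['VALIDATION']['DATASETS'])
print(cfg['TRAINING']['DATASET_MIX_PDF'])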

Training on custom datasets

To train on other datasets, please follow these steps:

  1. Create an npz file for your dataset, following the structure of the datasets in datasets/Release_Datasets, with the corresponding keys and values (a toy example is sketched after this list).
  2. Create scene segmentation maps, if not already available. We used Mask2Former in our work.
  3. To create the part segmentation maps, refer to this sample script.
  4. Add the dataset name(s) to train.py (these lines), tester.py (these lines) and data/mixed_dataset.py (these lines), according to the body model being used (SMPL/SMPL-X).
  5. Add the path(s) to the dataset npz(s) in common/constants.py (these lines).
  6. Finally, change TRAINING.DATASETS and VALIDATION.DATASETS in the config file and you're good to go!
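
As a starting point for step 1, here is a toy sketch of writing a DAMON-style npz (all values are placeholders; array shapes follow standard SMPL conventions and should be checked against the released npz files):

import numpy as np
n = 2  # number of images in the toy dataset
np.savez(
    'datasets/Release_Datasets/my_dataset/my_dataset_trainval.npz',  # hypothetical path
    imgname=np.array(['images/000001.jpg', 'images/000002.jpg']),
    pose=np.zeros((n, 72)),             # SMPL pose parameters
    transl=np.zeros((n, 3)),            # SMPL root translation
    shape=np.zeros((n, 10)),            # SMPL shape parameters
    cam_k=np.zeros((n, 3, 3)),          # camera intrinsics
    contact_label=np.zeros((n, 6890)),  # per-vertex 3D contact on the SMPL mesh
    scene_seg=np.array(['seg/scene/000001.png', 'seg/scene/000002.png']),
    part_seg=np.array(['seg/part/000001.png', 'seg/part/000002.png']),
)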

Citing

If you find this code useful for your research, please consider citing the following paper:

@InProceedings{tripathi2023deco,
    author    = {Tripathi, Shashank and Chatterjee, Agniv and Passy, Jean-Claude and Yi, Hongwei and Tzionas, Dimitrios and Black, Michael J.},
    title     = {{DECO}: Dense Estimation of {3D} Human-Scene Contact In The Wild},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {8001-8013}
}

License

See LICENSE.

Acknowledgments

We sincerely thank Alpar Cseke for his contributions to DAMON data collection and PHOSA evaluations, Sai K. Dwivedi for facilitating PROX downstream experiments, Xianghui Xie for his generous help with CHORE evaluations, Lea Muller for her help in initiating the contact annotation tool, Chun-Hao P. Huang for RICH discussions and Yixin Chen for details about the HOT paper. We are grateful to Mengqin Xue and Zhenyu Lou for their collaboration in BEHAVE evaluations, Joachim Tesch and Nikos Athanasiou for insightful visualization advice, and Tsvetelina Alexiadis for valuable data collection guidance. Their invaluable contributions enriched this research significantly. We also thank Benjamin Pellkofer for help with the website and IT support. This work was funded by the International Max Planck Research School for Intelligent Systems (IMPRS-IS).

Contact

For technical questions, please create an issue. For other questions, please contact deco@tue.mpg.de.

For commercial licensing, please contact ps-licensing@tue.mpg.de.
