CONTHO: Joint Reconstruction of 3D Human and Object via Contact-Based Refinement Transformer

Hyeongjin Nam*1, Daniel Sungho Jung*1, Gyeongsik Moon2, Kyoung Mu Lee1

1Seoul National University, 2Codec Avatars Lab, Meta
(*Equal contribution)

Python 3.7+ · PyTorch · License: CC BY 4.0 · arXiv

CVPR 2024

CONTHO jointly reconstructs 3D humans and objects by exploiting human-object contact as a key signal for accurate reconstruction. To this end, we integrate "3D human-object reconstruction" and "human-object contact estimation", two tasks that have previously been studied separately, into one unified framework.
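At a high level, the unified framework can be pictured as three stages: an initial human/object reconstruction, contact estimation between the two surfaces, and a contact-based refinement. The sketch below is purely illustrative; the toy vertex lists, the distance threshold, and the midpoint-pulling refinement are our assumptions for exposition, not the repo's actual transformer-based model.

```python
import math

# Illustrative only: "meshes" here are plain lists of 3D vertices; the real
# model operates on parametric human meshes and object templates.

def pairwise_contacts(human_verts, object_verts, thresh=0.05):
    """Mark vertex pairs closer than `thresh` as in contact (assumed test)."""
    contacts = []
    for i, h in enumerate(human_verts):
        for j, o in enumerate(object_verts):
            d = math.dist(h, o)
            if d < thresh:
                contacts.append((i, j, d))
    return contacts

def refine(human_verts, object_verts, contacts, step=0.5):
    """Toy contact-based refinement: pull each contacting pair together."""
    human = [list(v) for v in human_verts]
    obj = [list(v) for v in object_verts]
    for i, j, _ in contacts:
        for k in range(3):
            mid = (human[i][k] + obj[j][k]) / 2
            human[i][k] += step * (mid - human[i][k])
            obj[j][k] += step * (mid - obj[j][k])
    return human, obj

human = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
obj = [(0.02, 0.0, 0.0), (5.0, 5.0, 5.0)]
contacts = pairwise_contacts(human, obj)   # only the near pair is in contact
human2, obj2 = refine(human, obj, contacts)
print(contacts)
print(human2[0], obj2[0])  # contacting vertices pulled toward their midpoint
```

The point of the sketch is the data flow, not the math: contact predictions gate which vertices the refinement stage is allowed to move.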

Installation

  • We recommend using an Anaconda virtual environment with Python >= 3.7.0 and PyTorch >= 1.10.1. Our latest CONTHO model is tested on Python 3.9.13, PyTorch 1.10.1, and CUDA 10.2.
  • Set up the environment
    # Initialize conda environment
    conda create -n contho python=3.9
    conda activate contho

    # Install PyTorch
    conda install pytorch==1.10.1 torchvision==0.11.2 torchaudio==0.10.1 cudatoolkit=10.2 -c pytorch

    # Install all remaining packages
    pip install -r requirements.txt

Quick demo

  • Prepare the base_data from here and place it at ${ROOT}/data/base_data.
  • Download the pre-trained checkpoint from here.
  • Lastly, run
python main/demo.py --gpu 0 --checkpoint {CKPT_PATH}

Data

The data directory must follow the structure below.

${ROOT} 
|-- data  
|   |-- base_data
|   |   |-- annotations
|   |   |-- backbone_models
|   |   |-- human_models
|   |   |-- object_models
|   |-- BEHAVE
|   |   |-- dataset.py
|   |   |-- sequences
|   |   |   |-- Date01_Sub01_backpack_back
|   |   |   |-- Date01_Sub01_backpack_hand
|   |   |   |-- ...
|   |   |   |-- Date07_Sub08_yogamat
|   |-- InterCap
|   |   |-- dataset.py
|   |   |-- sequences
|   |   |   |-- 01
|   |   |   |-- 02
|   |   |   |-- ...
|   |   |   |-- 10
  • Download the Date01~Date07 sequences from the BEHAVE dataset to ${ROOT}/data/BEHAVE/sequences.
    (Option 1) Directly download the BEHAVE dataset from their download page.
    (Option 2) Run the script below.
scripts/download_behave.sh
  • Download RGBD_Images.zip and Res.zip from the InterCap dataset to ${ROOT}/data/InterCap/sequences.
    (Option 1) Directly download the InterCap dataset from their download page.
    (Option 2) Run the script below.
scripts/download_intercap.sh
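Before training, it can help to sanity-check that the tree above is actually in place. A minimal stdlib sketch; the required-path list is transcribed from the directory tree above, and the helper name is ours:

```python
from pathlib import Path

# Paths transcribed from the directory tree above (relative to ${ROOT}).
REQUIRED = [
    "data/base_data/annotations",
    "data/base_data/backbone_models",
    "data/base_data/human_models",
    "data/base_data/object_models",
    "data/BEHAVE/sequences",
    "data/InterCap/sequences",
]

def missing_paths(root):
    """Return the required sub-paths that do not exist under `root`."""
    root = Path(root)
    return [p for p in REQUIRED if not (root / p).is_dir()]

if __name__ == "__main__":
    missing = missing_paths(".")
    if missing:
        print("Missing:", *missing, sep="\n  ")
    else:
        print("Data layout looks good.")
```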

Running CONTHO

Train

To train CONTHO on BEHAVE or InterCap dataset, please run

python main/train.py --gpu 0 --dataset {DATASET}

Test

To evaluate CONTHO on BEHAVE or InterCap dataset, please run

python main/test.py --gpu 0 --dataset {DATASET} --checkpoint {CKPT_PATH}
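Both entry points take --gpu, --dataset, and (for testing) --checkpoint flags, with {DATASET} restricted to the two supported datasets. The argparse setup below is an illustrative guess at how such a CLI is wired, not the repo's actual code; the choices and defaults are our assumptions:

```python
import argparse

def build_parser():
    # Hypothetical sketch of the CONTHO CLI: flag names match the commands
    # shown above, but choices and defaults are assumptions.
    p = argparse.ArgumentParser(description="CONTHO train/test entry point")
    p.add_argument("--gpu", type=str, default="0", help="GPU id to use")
    p.add_argument("--dataset", choices=["BEHAVE", "InterCap"], required=True)
    p.add_argument("--checkpoint", default=None, help="path to a checkpoint")
    return p

args = build_parser().parse_args(["--gpu", "0", "--dataset", "BEHAVE"])
print(args.dataset)  # BEHAVE
```

Restricting --dataset via `choices` makes an unsupported dataset fail fast at parse time instead of deep inside the data loader.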

Results

Here, we report the performance of CONTHO.
CONTHO is a fast and accurate 3D human and object reconstruction framework!

Technical Q&A

  • RuntimeError: Subtraction, the - operator, with a bool tensor is not supported. If you are trying to invert a mask, use the ~ or logical_not() operator instead: Please check reference.
  • bash: scripts/download_behave.sh: Permission denied: Please check reference.
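For the Permission denied error, the usual fix is to restore the script's executable bit with chmod, or to invoke it through bash. A self-contained demonstration on a stand-in script (substitute scripts/download_behave.sh for the demo file):

```shell
# Create a stand-in script to demonstrate the fix.
printf '#!/bin/sh\necho downloading\n' > download_demo.sh

# Restore the executable bit, then run it directly.
chmod +x download_demo.sh
./download_demo.sh            # no longer "Permission denied"

# Alternatively, run it through bash without changing permissions:
bash download_demo.sh
```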

Acknowledgement

We thank:

  • Hand4Whole for 3D human mesh reconstruction.
  • CHORE for training and testing on BEHAVE.
  • InterCap for the dataset download script.
  • DECO for in-the-wild experiment setup.

Reference

@inproceedings{nam2024contho,
  title = {Joint Reconstruction of 3D Human and Object via Contact-Based Refinement Transformer},
  author = {Nam, Hyeongjin and Jung, Daniel Sungho and Moon, Gyeongsik and Lee, Kyoung Mu},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year = {2024}
}

About

[CVPR 2024] This repo is the official PyTorch implementation of Joint Reconstruction of 3D Human and Object via Contact-Based Refinement Transformer.
