This is the official public repository for our ICCV 2019 paper "Delving Deep Into Hybrid Annotations for 3D Human Recovery in the Wild".

Delving Deep Into Hybrid Annotations for 3D Human Recovery in the Wild

Yu Rong, Ziwei Liu, Cheng Li, Kaidi Cao, Chen Change Loy

Part of the code is inspired by PyTorch-CycleGAN and HMR. We thank their authors for their contributions.

Prerequisites

Install 3rd-party packages

Please refer to install.md to install the required packages and code.

Prepare DCT code

  • Clone this repo
git clone git@github.com:penincillin/DCT_ICCV-2019.git
cd DCT_ICCV-2019

Prepare Data and Models

Download the processed datasets, demo images, and pretrained model weights from Google Drive, unzip the archive, and place it in the root directory of DCT_ICCV-2019.

Training

Prepare Real-time Visualization

To visualize the training results and the loss curves in real time, run python -m visdom.server (it listens on port 8097 by default) before training starts, then open http://localhost:8097 in your browser.

Train on All Datasets

Use images as input and all annotations

sh script/train_all_img.sh

Use images and IUV maps as input and all annotations

sh script/train_all_img_iuv.sh

Train on the UP-3D Dataset

Use images as input and all annotations

sh script/train_up3d_img_3d_dp.sh

Use images as input without 3D annotations

sh script/train_up3d_img_dp.sh

Use images and IUV maps as input and all annotations

sh script/train_up3d_img_iuv_3d_dp.sh

Use images and IUV maps as input without 3D annotations

sh script/train_up3d_img_iuv_dp.sh

Evaluation

Evaluate on the UP-3D Dataset

Evaluate models that use images as input

sh script/test_up3d_img.sh

Evaluate models that use images and IUV maps as input

sh script/test_up3d_img_iuv.sh

After running the evaluation code, the results are stored in DCT_ICCV-2019/evaluate_results. To visualize the results, run

sh script/visualize.sh

The generated images are stored in DCT_ICCV-2019/evaluate_results/images.

Run Inference on Other Images

To run the model on your own images, first center-crop each image around the person.
Then update the image list file stored in DCT_ICCV-2019/dct_data/demo/img_list.txt.
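The center-crop step can be sketched as follows. This is an illustrative helper, not part of this repo: the function name, the margin value, and the assumption that you already have a person bounding box (e.g. from a detector) are all ours.

```python
def person_crop_box(box, margin=0.1):
    """Return a square (left, top, right, bottom) crop window centered
    on a person bounding box, expanded by `margin` on each side.

    `box` is (left, top, right, bottom) in pixels. The returned window
    can be passed to PIL's Image.crop.
    """
    left, top, right, bottom = box
    cx, cy = (left + right) / 2.0, (top + bottom) / 2.0
    # Square side = largest box dimension, expanded by the margin.
    half = max(right - left, bottom - top) * (1 + 2 * margin) / 2.0
    return (int(round(cx - half)), int(round(cy - half)),
            int(round(cx + half)), int(round(cy + half)))

# Example: a 100x150 person box becomes a 180x180 square crop window.
print(person_crop_box((50, 100, 150, 250)))  # (10, 85, 190, 265)
```

After cropping, list the paths of the resulting images in DCT_ICCV-2019/dct_data/demo/img_list.txt before running inference.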

To run inference on the pre-processed images, check out the inference branch first.
Remember to commit any changes you have made on the master branch beforehand.

Run the inference code:

sh script/infer.sh

The results are stored in DCT_ICCV-2019/inference_results. To visualize them, run the visualization code:

sh script/visualize.sh

The generated images are stored in DCT_ICCV-2019/inference_results/images.

Citation

Please cite the paper in your publications if it helps your research:

@inproceedings{Rong_2019_ICCV,
  author = {Rong, Yu and Liu, Ziwei and Li, Cheng and Cao, Kaidi and Loy, Chen Change},
  title = {Delving Deep Into Hybrid Annotations for 3D Human Recovery in the Wild},
  booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
  month = {October},
  year = {2019}
}