[Project Page] [Paper] [Video]
To install the necessary dependencies, run the following command:
pip install -r requirements.txt
The code has been tested with Python 3.7 and CUDA 10.1.
Download the pre-trained weights and unzip the files under the checkpoint folder:
checkpoint
├── Garent
├── Rendernet
└── Silnet
Convert the single-view body fitting results and the fashion segmentation results into the format we use by running matlab_code/convert_data.m. This script requires the fashion segmentation results and the single-view 3D body model fitting results. For segmentation, we use a fashion segmentation method; for body model fitting, we use an SMPL fitting method, from which we save the vertices and the camera translation. Please refer to the sample data for the expected format.
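As a reference for what the conversion step consumes, the sketch below saves SMPL fitting results (mesh vertices and camera translation) to a .mat file readable from MATLAB. This is only an illustration: the field names ("vertices", "cam_trans") and the use of scipy are assumptions, not the repository's actual format, which convert_data.m defines.

```python
# Hypothetical sketch: save SMPL fitting results (vertices and camera
# translation) in a .mat file for a MATLAB conversion script.
# Field names are assumed, not taken from the repository.
import numpy as np
from scipy.io import savemat, loadmat

def save_fitting_result(vertices, cam_trans, path):
    """vertices: (6890, 3) SMPL mesh vertices; cam_trans: (3,) camera translation."""
    assert vertices.shape == (6890, 3)  # the SMPL template mesh has 6890 vertices
    assert cam_trans.shape == (3,)
    savemat(path, {"vertices": vertices, "cam_trans": cam_trans})

# Example with dummy data
verts = np.zeros((6890, 3))
trans = np.array([0.0, 0.0, 2.5])
save_fitting_result(verts, trans, "fitting_result.mat")
loaded = loadmat("fitting_result.mat")
print(loaded["vertices"].shape)  # (6890, 3)
```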
To obtain a complete unified UV texture and garment labels from a single image, please run the following command:
python UV_modeling.py
This will generate the complete UV maps and the intermediate results in the UV_model folder.
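To illustrate the idea behind building a UV texture from a single view, the sketch below scatters visible image pixels into a texture atlas using per-pixel UV coordinates. In the actual pipeline those coordinates would come from the fitted body model and the occluded texels would then be completed by the network; here the inputs are random dummy data and the function name is hypothetical.

```python
# Minimal sketch: build a partial UV texture by scattering image colors
# to their UV coordinates. The per-pixel UV assignment would come from
# the fitted body model; here it is random dummy data.
import numpy as np

def scatter_to_uv(image, uv_coords, mask, tex_size=256):
    """image: (H, W, 3); uv_coords: (H, W, 2) in [0, 1]; mask: (H, W) bool."""
    texture = np.zeros((tex_size, tex_size, 3), dtype=image.dtype)
    filled = np.zeros((tex_size, tex_size), dtype=bool)
    ys, xs = np.nonzero(mask)  # foreground pixel locations
    u = np.clip((uv_coords[ys, xs, 0] * (tex_size - 1)).astype(int), 0, tex_size - 1)
    v = np.clip((uv_coords[ys, xs, 1] * (tex_size - 1)).astype(int), 0, tex_size - 1)
    texture[v, u] = image[ys, xs]  # write visible colors into the atlas
    filled[v, u] = True            # texels still False are the occluded part
    return texture, filled

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))
uv = rng.random((64, 64, 2))
mask = np.ones((64, 64), dtype=bool)
tex, filled = scatter_to_uv(img, uv, mask)
```

The unfilled texels correspond to body regions not visible in the input view; completing them is what "complete UV maps" refers to.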
To synthesize the person image in a different body pose, run the following command:
python inference.py
This will generate the synthesized image in the output folder.
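Conceptually, reposing is the inverse of the texture-building step: the target pose provides a new per-pixel UV map, and the completed texture is sampled at those coordinates. The nearest-neighbor lookup below is only a sketch of that idea (the actual method uses a neural renderer), and the function name is hypothetical.

```python
# Sketch: sample a completed UV texture at a target pose's per-pixel UV
# coordinates to produce the reposed appearance (nearest-neighbor lookup).
import numpy as np

def sample_from_uv(texture, uv_coords, mask):
    """texture: (T, T, 3); uv_coords: (H, W, 2) in [0, 1]; mask: (H, W) bool."""
    tex_size = texture.shape[0]
    out = np.zeros(mask.shape + (3,), dtype=texture.dtype)
    ys, xs = np.nonzero(mask)  # target-pose foreground pixels
    u = np.clip((uv_coords[ys, xs, 0] * (tex_size - 1)).astype(int), 0, tex_size - 1)
    v = np.clip((uv_coords[ys, xs, 1] * (tex_size - 1)).astype(int), 0, tex_size - 1)
    out[ys, xs] = texture[v, u]  # look up each pixel's color in the atlas
    return out

tex = np.ones((256, 256, 3)) * 0.5   # dummy uniform texture
uv = np.random.default_rng(1).random((64, 64, 2))
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True            # dummy target-pose silhouette
img = sample_from_uv(tex, uv, mask)
```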
If you find this Model & Software useful in your research, we kindly ask you to cite:
@inproceedings{yoon2021humanani,
  title={Pose-Guided Human Animation from a Single Image in the Wild},
  author={Yoon, Jae Shin and Liu, Lingjie and Golyanik, Vladislav and Sarkar, Kripasindhu and Park, Hyun Soo and Theobalt, Christian},
  booktitle={CVPR},
  year={2021}
}
The code in this repository was implemented by Jae Shin Yoon.