
SHERF: Generalizable Human NeRF from a Single Image

Shoukang Hu*¹, Fangzhou Hong*¹, Liang Pan¹, Haiyi Mei², Lei Yang², Ziwei Liu¹†
¹S-Lab, Nanyang Technological University  ²SenseTime Research
*Equal Contribution  †Corresponding Author
ICCV 2023

SHERF learns a Generalizable Human NeRF to animate 3D humans from a single image.

Figure 1. SHERF is a single-image-based generalizable Human NeRF. With just one inference pass on a single image, SHERF reconstructs a Human NeRF in the canonical space, which can be driven and rendered for novel view and pose synthesis.

📖 For more visual results, check out our project page

This repository contains the official implementation of SHERF: Generalizable Human NeRF from a Single Image.

📣 Updates

[08/2023] Training and inference code for RenderPeople, THuman, HuMMan, and ZJU-MoCap is released.

🖥️ Requirements

NVIDIA GPUs are required for this project. We recommend using Anaconda to manage the Python environment.

    conda create --name sherf python=3.8
    conda activate sherf
    conda install pytorch==1.11.0 torchvision==0.12.0 torchaudio==0.11.0 cudatoolkit=11.3 -c pytorch
    conda install -c fvcore -c iopath -c conda-forge fvcore iopath
    conda install pytorch3d -c pytorch3d
    # or: pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/py38_cu113_pyt1110/download.html
    pip install -r requirements.txt
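A quick sanity check (a minimal sketch; it assumes the sherf environment is activated) to confirm that PyTorch sees the GPU and that pytorch3d imports cleanly:

    # Should print the PyTorch version and True if CUDA is available.
    python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
    # Should print the installed pytorch3d version.
    python -c "import pytorch3d; print(pytorch3d.__version__)"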

Set up Datasets

RenderPeople Dataset

Please download our rendered multi-view images of the RenderPeople dataset from OneDrive.

THuman Dataset

Please follow the instructions of MPS-NeRF to download the THuman dataset. After that, please download our estimated SMPL neutral parameters.

HuMMan Dataset

Please follow the instructions of HuMMan-Recon to download the HuMMan dataset.

ZJU-MoCap Dataset

Please follow the instructions of Neural Body to download the ZJU-MoCap dataset.

Tip: if you want to learn how to render multi-view images, you may refer to XRFeitoria, a rendering toolbox for generating photorealistic synthetic data with ground-truth annotations.
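The exact dataset paths expected by the train_*/eval_* scripts are defined inside those scripts, so check them before running anything. As a purely hypothetical illustration (all directory names and paths below are placeholders, not a layout the scripts require), one way to keep the downloaded datasets together is:

    # Hypothetical layout only; adjust to the paths used in the train_*/eval_* scripts.
    mkdir -p data
    ln -s /path/to/RenderPeople data/RenderPeople
    ln -s /path/to/THuman       data/THuman
    ln -s /path/to/HuMMan       data/HuMMan
    ln -s /path/to/ZJU-MoCap    data/ZJU-MoCap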

🏃‍♀️ Inference

Download Models

The pretrained models and the SMPL model are needed for inference.

The pretrained models can be downloaded from OneDrive and Baidu Pan (extraction code: gu1q).

Register and download the SMPL models here. Put the downloaded models in the folder smpl_models; only the neutral one is needed. The folder structure should look like:

    ./
    ├── ...
    └── assets/
        └── SMPL_NEUTRAL.pkl
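A quick way to confirm the neutral SMPL model is in place (a minimal check, assuming the layout shown above and that it is run from the directory containing assets/):

    # Should list the file; an error means the model is missing or misplaced.
    ls assets/SMPL_NEUTRAL.pkl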
Then switch into the code directory:

    cd sherf

Inference code with RenderPeople dataset

    bash eval_renderpeople_512x512.sh

Inference code with THuman dataset

    bash eval_THuman_512x512.sh

Inference code with HuMMan dataset

    bash eval_HuMMan_640x360.sh

Inference code with ZJU-MoCap dataset

    bash eval_zju_mocap_512x512.sh
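If you want to run all four evaluations back to back, a simple loop over the scripts above works (a convenience sketch; it assumes the corresponding datasets and pretrained models are all set up):

    # Run each evaluation script in turn; stop on the first failure.
    for s in eval_renderpeople_512x512.sh eval_THuman_512x512.sh eval_HuMMan_640x360.sh eval_zju_mocap_512x512.sh; do
        bash "$s" || break
    done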

🚋 Training

    cd sherf

Training code with RenderPeople dataset

    bash train_renderpeople_512x512.sh

Training code with THuman dataset

    bash train_THuman_512x512.sh

Training code with HuMMan dataset

    bash train_HuMMan_640x360.sh

Training code with ZJU-MoCap dataset

    bash train_zju_mocap_512x512.sh

To evaluate a trained checkpoint, add --test_flag True --resume CHECKPOINT to the corresponding command, as in the sketch below.
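For example, to evaluate a trained RenderPeople checkpoint (a hedged sketch: the checkpoint path is a placeholder, and this assumes the shell script forwards extra arguments to the underlying Python entry point; if it does not, edit the command inside the script instead):

    # Hypothetical checkpoint path; replace with your own trained checkpoint.
    bash train_renderpeople_512x512.sh --test_flag True --resume /path/to/checkpoint.pth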

🤟 Citation

If you find the code of this work helpful to your research, please consider citing:

@article{hu2023sherf,
  title={SHERF: Generalizable Human NeRF from a Single Image},
  author={Hu, Shoukang and Hong, Fangzhou and Pan, Liang and Mei, Haiyi and Yang, Lei and Liu, Ziwei},
  journal={arXiv preprint arXiv:2303.12791},
  year={2023}
}

🗞️ License

Distributed under the S-Lab License. See LICENSE for more information.

🙌 Acknowledgements

This study is supported by the Ministry of Education, Singapore, under its MOE AcRF Tier 2 (MOE-T2EP20221-0012), NTU NAP, and under the RIE2020 Industry Alignment Fund – Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s).

This project is built on source code shared by EG3D, MPS-NeRF, and Neural Body.
