SHERF learns a Generalizable Human NeRF to animate 3D humans from a single image.
📖 For more visual results, check out our project page.
This repository contains the official implementation of SHERF: Generalizable Human NeRF from a Single Image.
[08/2023] Training and inference codes for RenderPeople, THuman, HuMMan and ZJU-Mocap are released.
NVIDIA GPUs are required for this project. We recommend using Anaconda to manage the Python environment.
conda create --name sherf python=3.8
conda activate sherf
conda install pytorch==1.11.0 torchvision==0.12.0 torchaudio==0.11.0 cudatoolkit=11.3 -c pytorch
conda install -c fvcore -c iopath -c conda-forge fvcore iopath
conda install pytorch3d -c pytorch3d
# or: pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/py38_cu113_pyt1110/download.html
pip install -r requirements.txt
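As an optional sanity check (not part of the original instructions), you can verify that PyTorch sees your GPU and that pytorch3d imports correctly:
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
python -c "import pytorch3d; print(pytorch3d.__version__)"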
Please download our rendered multi-view images of the RenderPeople dataset from OneDrive.
Please follow the instructions of MPS-NeRF to download the THuman dataset. After that, please download our estimated SMPL Neutral parameters.
Please follow the instructions of HuMMan-Recon to download the HuMMan dataset.
Please follow the instructions of Neural Body to download the ZJU-Mocap dataset.
Tip: If you want to learn how to render multi-view images, you may refer to XRFeitoria, a rendering toolbox for generating photorealistic synthetic data with ground-truth annotations.
The pretrained models and SMPL model are needed for inference.
The pretrained models are available for download on OneDrive and Baidu Pan (pin: gu1q).
Register and download the SMPL models here. Only the neutral model is needed; place it in the assets folder as SMPL_NEUTRAL.pkl. The folder structure should look like
./
├── ...
└── assets/
    └── SMPL_NEUTRAL.pkl
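As a further optional check (not part of the original instructions), you can confirm that the SMPL file unpickles and contains the standard SMPL fields; note that the official SMPL pickle may require the chumpy package to load:
python -c "import pickle; d = pickle.load(open('assets/SMPL_NEUTRAL.pkl', 'rb'), encoding='latin1'); print(sorted(d.keys()))"
Typical SMPL keys include v_template, shapedirs, posedirs, J_regressor, weights, kintree_table, and f.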
cd sherf
bash eval_renderpeople_512x512.sh
bash eval_THuman_512x512.sh
bash eval_HuMMan_640x360.sh
bash eval_zju_mocap_512x512.sh
cd sherf
bash train_renderpeople_512x512.sh
bash train_THuman_512x512.sh
bash train_HuMMan_640x360.sh
bash train_zju_mocap_512x512.sh
If you want to evaluate the trained checkpoints, add --test_flag True --resume CHECKPOINT.
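For example (a hypothetical invocation; the actual entry point, arguments, and checkpoint path depend on the contents of the training scripts above):
# Hypothetical: append the evaluation flags to the python command inside the chosen training script
python train.py --test_flag True --resume ./checkpoints/renderpeople_latest.pth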
If you find the codes of this work helpful to your research, please consider citing:
@article{hu2023sherf,
title={SHERF: Generalizable Human NeRF from a Single Image},
author={Hu, Shoukang and Hong, Fangzhou and Pan, Liang and Mei, Haiyi and Yang, Lei and Liu, Ziwei},
journal={arXiv preprint arXiv:2303.12791},
year={2023}
}
Distributed under the S-Lab License. See LICENSE for more information.
This study is supported by the Ministry of Education, Singapore, under its MOE AcRF Tier 2 (MOE-T2EP20221-0012), NTU NAP, and under the RIE2020 Industry Alignment Fund – Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s).
This project is built on source codes shared by EG3D, MPS-NeRF and Neural Body.