[3DV 2022] The official repository for the paper "Dual-Space NeRF: Learning Animatable Avatars and Scene Lighting in Separate Spaces".

We provide all checkpoints and `X_smpl_vertices` here.
- Clone this repository:

  ```
  git clone https://github.com/zyhbili/Dual-Space-NeRF.git
  ```

- Install the required Python packages:

  ```
  pip install -r requirements.txt
  ```
- Download the SMPL model (neutral) from https://smpl.is.tue.mpg.de/ and set `_C.DATASETS.SMPL_PATH` in `configs/defaults.py`.

- Download and unzip ZJU_Mocap, then set `_C.DATASETS.ZJU_MOCAP_PATH` in `configs/defaults.py`.

- Prepare Human3.6M following Animatable NeRF and set `_C.DATASETS.H36M_PATH` in `configs/defaults.py` (see the sketch after this list).
We take ZJU-MoCap sequence 313 as an example below; config files for the other sequences are provided in `configs/{h36m,zju_mocap}`.
## Train Dual-Space NeRF

```
python3 main.py -c configs/zju_mocap/313.yml --exp 313
```
## Test Dual-Space NeRF

```
python3 test.py -c configs/zju_mocap/313.yml --ckpt [ckpt_path.pth] --exp 313
```
## Novel pose visualization

Download CoreView_313_op3.zip and unzip it into `novelpose_examples/`, then run:

```
python3 novel_pose_vis.py -c configs/zju_mocap/313.yml --ckpt ckpt/313/model_epoch_0000200.pth --exp 313_op3
```

The results are saved into `motion_transfer/313_op3/`.
For a custom pose sequence, prepare the SMPL vertices in the same format as those provided in the ZIP file, then point `novel_pose_dataset.vertices_dir` in `novel_pose_vis.py` to them (see the sketch below).
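As an illustration only, here is a hypothetical way to export per-frame SMPL vertices with the `smplx` package. The file naming and layout expected by `novel_pose_vis.py` (assumed here to be one `(6890, 3)` `.npy` per frame) should be checked against the provided ZIP, and `my_poses.npy` is a made-up input.

```python
# Hypothetical sketch: export one (6890, 3) .npy of SMPL vertices per frame.
# Check CoreView_313_op3.zip for the exact naming/format the repo expects.
import os
import numpy as np
import torch
import smplx

# Path to the neutral SMPL model (folder or .pkl), as downloaded above.
model = smplx.create("/path/to/smpl", model_type="smpl", gender="neutral")

out_dir = "novelpose_examples/my_sequence"  # point novel_pose_dataset.vertices_dir here
os.makedirs(out_dir, exist_ok=True)

poses = np.load("my_poses.npy")   # (T, 72) axis-angle SMPL poses, assumed input
betas = torch.zeros(1, 10)        # shape coefficients (zeros as a placeholder)

for t, pose in enumerate(poses):
    pose = torch.from_numpy(pose).float().unsqueeze(0)      # (1, 72)
    output = model(betas=betas,
                   global_orient=pose[:, :3],                # root rotation
                   body_pose=pose[:, 3:])                    # 23 body joints
    vertices = output.vertices.detach().cpu().numpy()[0]     # (6890, 3)
    np.save(os.path.join(out_dir, f"{t}.npy"), vertices)
```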
## Lighting visualization

```
python3 vis_lighting.py -c configs/zju_mocap/313.yml --ckpt ckpt/313/model_epoch_0000200.pth --exp 313_lighting
```

The results are saved into `lighting_vis/313_lighting/`.
## Citation

```bibtex
@inproceedings{zhi2022dual,
  title     = {Dual-Space NeRF: Learning Animatable Avatars and Scene Lighting in Separate Spaces},
  author    = {Zhi, Yihao and Qian, Shenhan and Yan, Xinhao and Gao, Shenghua},
  booktitle = {International Conference on 3D Vision (3DV)},
  month     = sep,
  year      = {2022},
}
```