KeypointNeRF: Generalizing Image-based Volumetric Avatars using Relative Spatial Encoding of Keypoints
Marko Mihajlovic · Aayush Bansal · Michael Zollhoefer · Siyu Tang · Shunsuke Saito
KeypointNeRF leverages human keypoints to instantly generate a volumetric radiance representation from 2-3 input images, without retraining or fine-tuning. It can represent human faces and full bodies.
- [2022/10/01] We combine ICON with our relative spatial keypoint encoding for fast and convenient monocular reconstruction, without requiring the expensive SMPL feature. More details are here.
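For intuition, below is a minimal NumPy sketch of the relative spatial encoding idea: a query point is described by its depth relative to each 3D keypoint in a camera view, followed by a Fourier encoding. The function name and signature are illustrative only and do not match this repository's API.

```python
import numpy as np

def relative_depth_encoding(query, keypoints, cam_R, cam_t, num_freqs=4):
    """Illustrative sketch: encode a 3D query point by its depth relative
    to each 3D keypoint in one camera view (not the repository's API)."""
    # Depth of the query point in the camera frame (z-coordinate).
    depth_q = (cam_R @ query + cam_t)[2]
    # Depths of all K keypoints in the same camera frame.
    depth_k = (keypoints @ cam_R.T + cam_t)[:, 2]
    rel = depth_q - depth_k                        # (K,) relative depths
    # Fourier (positional) encoding of the relative depths.
    freqs = 2.0 ** np.arange(num_freqs) * np.pi    # (F,)
    angles = rel[:, None] * freqs                  # (K, F)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=1).reshape(-1)
```

Because the encoding is relative rather than absolute, it generalizes across subjects and camera placements, which is what allows feed-forward inference on unseen identities.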
Please install the Python dependencies specified in environment.yml:
conda env create -f environment.yml
conda activate KeypointNeRF
Please see DATA_PREP.md to setup the ZJU-MoCap dataset.
After this step, the data directory follows this structure:
./data/zju_mocap
├── CoreView_313
├── CoreView_315
├── CoreView_377
├── CoreView_386
├── CoreView_387
├── CoreView_390
├── CoreView_392
├── CoreView_393
├── CoreView_394
└── CoreView_396
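Optionally, you can verify that all sequences are in place with a quick sanity check (a hypothetical snippet, not part of the repository):

```python
from pathlib import Path

# Hypothetical sanity check: confirm all ZJU-MoCap sequences exist.
root = Path("./data/zju_mocap")
expected = [313, 315, 377, 386, 387, 390, 392, 393, 394, 396]
missing = [f"CoreView_{i}" for i in expected if not (root / f"CoreView_{i}").is_dir()]
print("All ZJU-MoCap sequences found." if not missing else f"Missing: {missing}")
```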
Execute the train.py script to train the model on the ZJU dataset:
python train.py --config ./configs/zju.json --data_root ./data/zju_mocap
After training, the model checkpoint will be stored under ./EXPERIMENTS/zju/ckpts/last.ckpt, which is equivalent to the one provided here.
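The checkpoint is a standard PyTorch file and can be inspected directly. The sketch below assumes the usual `state_dict` layout, which may differ from the exact format this repository writes:

```python
import torch

# Load the checkpoint on CPU and peek at its contents (assumed layout).
ckpt = torch.load("./EXPERIMENTS/zju/ckpts/last.ckpt", map_location="cpu")
print(list(ckpt.keys()))                   # e.g. ['state_dict', 'epoch', ...]
state_dict = ckpt.get("state_dict", ckpt)  # fall back to a bare state dict
print(f"{len(state_dict)} parameter tensors stored")
```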
To render and evaluate images, execute:
python train.py --config ./configs/zju.json --data_root ./data/zju_mocap --run_val
python eval_zju.py --src_dir ./EXPERIMENTS/zju/images_v3
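For reference, PSNR and SSIM are typically computed as in the scikit-image sketch below; this is only illustrative, and eval_zju.py may apply additional masking or cropping:

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def image_metrics(pred, gt):
    """PSNR/SSIM between two float RGB images in [0, 1] (illustrative
    sketch; eval_zju.py may mask or crop the images differently)."""
    psnr = peak_signal_noise_ratio(gt, pred, data_range=1.0)
    ssim = structural_similarity(gt, pred, data_range=1.0, channel_axis=-1)
    return psnr, ssim
```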
To visualize the dynamic results, execute:
python render_dynamic.py --config ./configs/zju.json --data_root ./data/zju_mocap --model_ckpt ./EXPERIMENTS/zju/ckpts/last.ckpt
(The first three views of an unseen subject are the input to KeypointNeRF; the last image is a rendered novel view)
We compare KeypointNeRF with recent state-of-the-art methods. The evaluation metrics are PSNR and SSIM.
| Models | PSNR ↑ | SSIM ↑ |
|---|---|---|
| pixelNeRF (Yu et al., CVPR'21) | 23.17 | 86.93 |
| PVA (Raj et al., CVPR'21) | 23.15 | 86.63 |
| NHP (Kwon et al., NeurIPS'21) | 24.75 | 90.58 |
| KeypointNeRF* (Mihajlovic et al., ECCV'22) | 25.86 | 91.07 |
(*Note that the results of KeypointNeRF are slightly higher than the numbers reported in the original paper because training views were not shuffled during training.)
Our relative spatial encoding can be used to reconstruct humans from a single image. As an example, we leverage ICON and replace its expensive SDF feature with our relative spatial encoding.
| Models | Chamfer ↓ (cm) | P2S ↓ (cm) |
|---|---|---|
| PIFu (Saito et al., ICCV'19) | 3.573 | 1.483 |
| ICON (Xiu et al., CVPR'22) | 1.424 | 1.351 |
| KeypointICON (Mihajlovic et al., ECCV'22; Xiu et al., CVPR'22) | 1.539 | 1.358 |
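For context, Chamfer and P2S (point-to-surface) distances measure nearest-neighbor reconstruction error. The SciPy sketch below is a simplified, point-based approximation; the actual benchmark may instead measure exact point-to-mesh distances:

```python
from scipy.spatial import cKDTree

def chamfer_and_p2s(pred_pts, gt_pts):
    """Point-based Chamfer and P2S approximation (illustrative sketch;
    the benchmark may use true point-to-mesh distances instead)."""
    d_pred = cKDTree(gt_pts).query(pred_pts)[0]    # predicted -> ground truth
    d_gt = cKDTree(pred_pts).query(gt_pts)[0]      # ground truth -> predicted
    chamfer = 0.5 * (d_pred.mean() + d_gt.mean())  # symmetric average
    p2s = d_pred.mean()                            # predicted points to GT surface
    return chamfer, p2s
```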
Check the benchmark here and find more details here.
If you find our code or paper useful, please consider citing:
@inproceedings{Mihajlovic:ECCV2022,
  title = {{KeypointNeRF}: Generalizing Image-based Volumetric Avatars using Relative Spatial Encoding of Keypoints},
  author = {Mihajlovic, Marko and Bansal, Aayush and Zollhoefer, Michael and Tang, Siyu and Saito, Shunsuke},
  booktitle = {European Conference on Computer Vision},
  year = {2022},
}
CC-BY-NC 4.0. See the LICENSE file.