Official PyTorch implementation of "G-NeRF: Geometry-enhanced Novel View Synthesis from Single-View Images" (CVPR 2024)
Zixiong Huang*, Qi Chen*, Libo Sun, Yifan Yang, Naizhou Wang, Mingkui Tan, Qi Wu
✅ [2024/04/15] Released the inference code.
🔲 We will release the data generation code and training code later.
- CUDA toolkit 11.3 or later.
- Python libraries: see environment.yml for the exact dependencies. You can create and activate the environment with Miniconda3:
cd g_nerf
conda env create -f environment.yml
conda activate gnerf
Our environment uses pytorch==1.13.1+cu116.
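If you run into CUDA mismatches, a quick way to compare your installed PyTorch build against the one we used is a small version check. This helper is not part of the repository; it is a minimal sketch assuming `torch` is importable in your environment:

```python
def parse_torch_version(v):
    """Split a version string like '1.13.1+cu116' into ((1, 13, 1), 'cu116')."""
    core, _, local = v.partition("+")
    return tuple(int(x) for x in core.split(".")[:3]), local

def matches_reference(installed, reference="1.13.1+cu116"):
    """True if the installed torch build matches the reference environment."""
    return parse_torch_version(installed) == parse_torch_version(reference)

# Usage (inside the activated gnerf environment):
#   import torch
#   print(matches_reference(torch.__version__))
```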
Download our pre-trained checkpoints from Hugging Face and put them into the checkpoints directory.
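Before running inference, you can verify that the checkpoints are in place. The expected file names below are taken from the inference command in this README; the helper itself is an illustrative sketch, not part of the repository:

```python
from pathlib import Path

# Expected checkpoint layout (paths taken from the inference command below)
CKPT_DIR = Path("checkpoints/G-NeRF")
REQUIRED = ["network-G_ema-final.pkl", "network-E-final.pkl"]

def missing_checkpoints(ckpt_dir=CKPT_DIR, required=REQUIRED):
    """Return the required checkpoint files that have not been downloaded yet."""
    return [name for name in required if not (ckpt_dir / name).exists()]

if __name__ == "__main__":
    missing = missing_checkpoints()
    if missing:
        print("Missing checkpoints:", ", ".join(missing))
```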
# Generate videos using pre-trained model
python ./g_nerf/gen_videos.py \
--network checkpoints/G-NeRF/network-G_ema-final.pkl \
--id_encoder checkpoints/G-NeRF/network-E-final.pkl \
--id_image samples/66667.jpg \
--outdir results \
--video_out_path results
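To render several identity images in one go, the command above can be assembled programmatically. The flags mirror the invocation shown above; the batch loop over a samples directory is our own illustrative assumption, not a script shipped with the repo:

```python
import subprocess
from pathlib import Path

def build_cmd(id_image, outdir="results"):
    """Assemble the gen_videos.py invocation shown above for one identity image."""
    return [
        "python", "./g_nerf/gen_videos.py",
        "--network", "checkpoints/G-NeRF/network-G_ema-final.pkl",
        "--id_encoder", "checkpoints/G-NeRF/network-E-final.pkl",
        "--id_image", str(id_image),
        "--outdir", outdir,
        "--video_out_path", outdir,
    ]

# Render every sample image in turn (assumes checkpoints are already downloaded)
for img in sorted(Path("samples").glob("*.jpg")):
    subprocess.run(build_cmd(img), check=True)
```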
Training and data generation code: coming soon.
@article{huang2024g,
title={G-NeRF: Geometry-enhanced Novel View Synthesis from Single-View Images},
author={Huang, Zixiong and Chen, Qi and Sun, Libo and Yang, Yifan and Wang, Naizhou and Tan, Mingkui and Wu, Qi},
journal={arXiv preprint arXiv:2404.07474},
year={2024}
}
@inproceedings{huang2024gnerf,
author = {Huang, Zixiong and Chen, Qi and Sun, Libo and Yang, Yifan and Wang, Naizhou and Tan, Mingkui and Wu, Qi},
title = {G-NeRF: Geometry-enhanced Novel View Synthesis from Single-View Images},
booktitle = {IEEE / CVF Computer Vision and Pattern Recognition Conference (CVPR)},
year = {2024}
}
Our code is modified from EG3D. Thanks for their awesome work!