
Generative Multiplane Neural Radiance (GMNR) for 3D-Aware Image Generation (ICCV 2023)

Amandeep Kumar, Ankan Kumar Bhunia, Sanath Narayan, Hisham Cholakkal, Rao Muhammad Anwer, Salman Khan, Ming-Hsuan Yang, Fahad Shahbaz Khan

paper | video | slides

Abstract: We present a method to efficiently generate 3D-aware high-resolution images that are view-consistent across multiple target views. The proposed multiplane neural radiance model, named GMNR, consists of a novel α-guided view-dependent representation (α-VdR) module for learning view-dependent information. The α-VdR module, facilitated by an α-guided pixel sampling technique, computes the view-dependent representation efficiently by learning viewing-direction and position coefficients. Moreover, we propose a view-consistency loss to enforce photometric similarity across multiple views. The GMNR model can generate 3D-aware high-resolution images that are view-consistent across multiple camera poses, while maintaining computational efficiency in terms of both training and inference time. Experiments on three datasets demonstrate the effectiveness of the proposed modules, leading to favorable results in terms of both generation quality and inference time, compared to existing approaches. GMNR generates 3D-aware images of 1024×1024 pixels at 17.6 FPS on a single V100 GPU.

🚀 News

  • September 28, 2023: Released the code for GMNR.
  • July 13, 2023: GMNR accepted at ICCV 2023 🎊

Environment Setup

This code has been tested on Ubuntu 18.04 with CUDA 10.2.

conda env create -f environment.yml
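
Then activate the environment before running any commands. The environment name below is an assumption; check the name: field at the top of environment.yml:

# Environment name "gmnr" is an assumption -- verify with: head -n 1 environment.yml
conda activate gmnr
# Quick sanity check that PyTorch is installed and sees the GPU
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"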

Training

Assume GMNR_ROOT represents the path to this repo:

cd /path/to/this/repo
export GMNR_ROOT=$PWD

Set Up Virtual Environments

We need MTCNN, Deep3DFaceRecon_pytorch, and DeepFace to complete the data processing and evaluation steps.

MTCNN and DeepFace

We provide the conda environment yaml files for MTCNN and DeepFace:

conda env create -f mtcnn_env.yaml      # mtcnn_env
conda env create -f deepface_env.yaml   # deepface
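
These environments are activated per stage rather than all at once. A minimal sketch, assuming the environment names from the comments above and each tool's typical role (MTCNN for face detection during preprocessing, DeepFace for identity metrics during evaluation):

# Assumed mapping of environment to stage -- confirm against the
# preprocessing and evaluation guidelines linked later in this README
conda activate mtcnn_env    # face detection / cropping during data preprocessing
# ... run the preprocessing step ...
conda activate deepface     # identity-similarity metrics during evaluation
# ... run the evaluation step ...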

Deep3DFaceRecon_pytorch

Note: The GMPI authors made modifications to the Deep3DFaceRecon_pytorch code, and we use the same modified version; please use it rather than the original. Follow the official instructions to set up the virtual environment and download the pretrained models. There are two major steps:

  1. Install the required packages and set up the environment: see this link;
  2. Download the required data: see this link.

Assume the code repo is located at Deep3DFaceRecon_PATH:

export Deep3DFaceRecon_PATH=/path/to/Deep3DFaceRecon_pytorch
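
A quick way to confirm the variable points at the right place (the checkpoints/ directory layout is assumed to follow the official Deep3DFaceRecon_pytorch instructions from step 2 above):

# Should list the repo contents, including checkpoints/ once the
# pretrained models from step 2 have been downloaded
ls ${Deep3DFaceRecon_PATH}
ls ${Deep3DFaceRecon_PATH}/checkpoints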

Download StyleGAN2 Checkpoints

Download StyleGAN2's pretrained checkpoints:

mkdir -p ${GMNR_ROOT}/ckpts/stylegan2_pretrained/transfer-learning-source-nets/
cd ${GMNR_ROOT}/ckpts/stylegan2_pretrained
wget -P ./transfer-learning-source-nets https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/transfer-learning-source-nets/ffhq-res256-mirror-paper256-noaug.pkl    # FFHQ256
wget -P ./transfer-learning-source-nets https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/transfer-learning-source-nets/ffhq-res512-mirror-stylegan2-noaug.pkl   # FFHQ512
wget -P ./transfer-learning-source-nets https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/transfer-learning-source-nets/ffhq-res1024-mirror-stylegan2-noaug.pkl  # FFHQ1024
wget https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/afhqcat.pkl    # AFHQCat
wget https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/metfaces.pkl   # MetFaces
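
After the downloads finish, it is worth verifying that the files landed in the expected layout:

# FFHQ checkpoints should sit under transfer-learning-source-nets/;
# the AFHQCat and MetFaces checkpoints sit directly under stylegan2_pretrained/
ls -lh ${GMNR_ROOT}/ckpts/stylegan2_pretrained/transfer-learning-source-nets/
ls -lh ${GMNR_ROOT}/ckpts/stylegan2_pretrained/*.pkl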

Preprocessing and Dataset

For complete installation and dataset preparation instructions, follow the guidelines here.

Train

Run the following command to start training GMNR. Results will be saved in ${GMNR_ROOT}/experiments. We use 8 Tesla V100 GPUs in our experiments and recommend GPUs with 32 GB of memory for training the GMNR model.

python launch.py \
--run_dataset FFHQ1024 \
--nproc_per_node 1 \
--task-type gmnr \
--run-type train \
--master_port 8370

  • run_dataset can be one of ["FFHQ256", "FFHQ512", "FFHQ1024", "AFHQCat", "MetFaces"].
  • Set nproc_per_node to the number of GPUs you want to use; see the multi-GPU example below.
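
For example, to match the 8-GPU setup mentioned above, only nproc_per_node changes (the master port is arbitrary; any free port works):

python launch.py \
--run_dataset FFHQ1024 \
--nproc_per_node 8 \
--task-type gmnr \
--run-type train \
--master_port 8370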

Evaluation

For evaluating FID/KID, identity metrics, depth metrics, and the pose accuracy metric, follow the guidelines here.

Citation

If you find our work helpful, please star🌟 this repo and cite📑 our paper. Thanks for your support!

@article{kumar2023generative,
  title={Generative Multiplane Neural Radiance for 3D-Aware Image Generation},
  author={Kumar, Amandeep and Bhunia, Ankan Kumar and Narayan, Sanath and Cholakkal, Hisham and Anwer, Rao Muhammad and Khan, Salman and Yang, Ming-Hsuan and Khan, Fahad Shahbaz},
  journal={arXiv preprint arXiv:2304.01172},
  year={2023}
}

Acknowledgement

Our code is built on Generative Multiplane Images (GMPI).

Contact

If you have any questions, please create an issue on this repository or contact amandeep.kumar@mbzuai.ac.ae.

