
Neural Rendering in a Room: Amodal 3D Understanding and Free-Viewpoint Rendering for the Closed Scene Composed of Pre-Captured Objects

Bangbang Yang, Yinda Zhang, Yijin Li, Zhaopeng Cui, Sean Fanello, Hujun Bao, Guofeng Zhang. SIGGRAPH 2022 (ACM ToG)

Installation

We have tested the code with PyTorch 1.8.1; newer versions of PyTorch should also work.

conda create -n neural_scene python=3.8
conda activate neural_scene
pip install torch==1.8.1+cu111 torchvision==0.9.1+cu111 torchaudio==0.8.1 -f https://download.pytorch.org/whl/torch_stable.html
pip install -r requirements.txt
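As an optional sanity check (not part of the original instructions), you can confirm that the expected PyTorch version is installed and that CUDA is visible:

# Optional sanity check: should print 1.8.1+cu111 (or your installed version)
# and True on a machine with a working CUDA setup.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"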

Data Preparation

Please follow the data preparation guide to prepare the data folder.

Offline-Stage Training

After putting the necessary data into the data folder, you can run batch_train_nerf.sh to train a NeRF model for each object and for the background. Check data/root_dir in the corresponding config file to make sure the data path is correct; a minimal invocation is sketched below.
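This sketch assumes the script picks up its config files internally; check the script and the config folder for the actual file names before running.

# Sketch of the offline stage. Before running, open the config file(s)
# referenced by the script and confirm data/root_dir points at your data folder.
bash batch_train_nerf.sh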

Online-Stage Optimization

Object Pose Optimization

We provide an example in script/pose_optim.sh for optimizing object poses. You can change the input parameters (including paths and arrangement names) to optimize different scenes; a sketch follows.
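A minimal sketch, assuming the script defines its paths and arrangement name internally (inspect the actual script before running):

# Sketch of the pose-optimization step. Edit the input parameters
# (paths, arrangement names) inside the script to select your scene.
bash script/pose_optim.sh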

Scene Lighting Optimization

Once the object poses have been optimized, we can further optimize the scene lighting by running script/real_scene_light_optim.sh. You may need to change the input parameters and the pose state file (e.g., state_file=debug/xxx/000480.state.ckpt) to match the scenes with optimized poses.
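A minimal sketch, reusing the state-file example above ("xxx" stands in for your own run directory and is not a real path):

# Sketch of the lighting-optimization step. Point state_file at the
# checkpoint produced by pose optimization, e.g.:
#   state_file=debug/xxx/000480.state.ckpt
bash script/real_scene_light_optim.sh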

(Result previews: Pose Optim. and Lighting Optim.)

The pre-trained checkpoints will be uploaded later.

Citation

If you find this work useful, please consider citing:

@article{yang2022_nr_in_a_room,
    title = {Neural Rendering in a Room: Amodal 3D Understanding and Free-Viewpoint Rendering for the Closed Scene Composed of Pre-Captured Objects},
    author = {Yang, Bangbang and Zhang, Yinda and Li, Yijin and Cui, Zhaopeng and Fanello, Sean and Bao, Hujun and Zhang, Guofeng},
    journal = {ACM Trans. Graph.},
    issue_date = {July 2022},
    volume = {41},
    number = {4},
    month = jul,
    year = {2022},
    pages = {101:1--101:10},
    articleno = {101},
    numpages = {10},
    url = {https://doi.org/10.1145/3528223.3530163},
    doi = {10.1145/3528223.3530163},
    publisher = {ACM},
    address = {New York, NY, USA}
}

Acknowledgement

In this project we use (parts of) the implementations of several prior works; we thank the respective authors for open-sourcing their methods.
