# News

- 06/03/2022: We provide instructions for running on custom data here.
- 05/10/2022: To make comparisons on ScanNet easier, we provide all quantitative and qualitative results of the baselines here, including COLMAP, COLMAP*, ACMP, NeRF, UNISURF, NeuS, and VolSDF.
- 05/10/2022: To make it easier for follow-up works to compare with our model, we provide our quantitative and qualitative results, as well as our trained models on ScanNet, here.
- 05/10/2022: We have uploaded our processed ScanNet scene data to Google Drive.
# Neural 3D Scene Reconstruction with the Manhattan-world Assumption

### Project Page | Video | Paper

Haoyu Guo\*, Sida Peng\*, Haotong Lin, Qianqian Wang, Guofeng Zhang, Hujun Bao, Xiaowei Zhou

CVPR 2022 (Oral Presentation)
## Setup

```bash
conda env create -f environment.yml
conda activate manhattan
```
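Optionally, a quick sanity check can catch a broken GPU setup before training. This is a sketch under the assumption that the conda environment provides PyTorch with CUDA support, which this codebase relies on; it is not part of the official instructions.

```bash
# Hypothetical sanity check: confirm PyTorch imports and sees a GPU.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```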
## Data preparation

Download the ScanNet scene data evaluated in the paper from Google Drive and extract it into `data/`. Make sure that the path is consistent with the config file.

We provide instructions for running on custom data here.
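To double-check that the extracted data sits where a config expects it, listing the data directory and searching the config for its data path can help. This is a heuristic sketch: the exact config key holding the data path is not shown in this README, so the `grep` pattern below is only an illustrative guess.

```bash
# List the extracted scenes and look for path-like entries in one config.
ls data/
grep -i "data" configs/scannet/0050.yaml
```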
## Training

```bash
python train_net.py --cfg_file configs/scannet/0050.yaml gpus 0, exp_name scannet_0050
```
## Mesh extraction

```bash
python run.py --type mesh_extract --output_mesh result.obj --cfg_file configs/scannet/0050.yaml gpus 0, exp_name scannet_0050
```
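After extraction, the mesh can be inspected quickly before running the full evaluation. The snippet below is an optional sketch that assumes the `trimesh` package is installed; it is not necessarily a dependency of this repository.

```bash
# Hypothetical check: load the extracted mesh and print vertex/face counts.
python -c "import trimesh; m = trimesh.load('result.obj'); print(m.vertices.shape, m.faces.shape)"
```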
## Evaluation

```bash
python run.py --type evaluate --cfg_file configs/scannet/0050.yaml gpus 0, exp_name scannet_0050
```
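The three commands above generalize to other scenes by swapping the config file and experiment name. Below is a minimal sketch that loops the whole pipeline over several scenes; the scene IDs are placeholders and should be replaced with the config files actually present in `configs/scannet/`.

```bash
#!/usr/bin/env bash
# Train, extract a mesh, and evaluate for each scene (IDs are placeholders).
set -e
for scene in 0050 0084; do
    cfg="configs/scannet/${scene}.yaml"
    exp="scannet_${scene}"
    python train_net.py --cfg_file "$cfg" gpus 0, exp_name "$exp"
    python run.py --type mesh_extract --output_mesh "result_${scene}.obj" \
        --cfg_file "$cfg" gpus 0, exp_name "$exp"
    python run.py --type evaluate --cfg_file "$cfg" gpus 0, exp_name "$exp"
done
```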
## Citation

If you find this code useful for your research, please use the following BibTeX entry.

```bibtex
@inproceedings{guo2022manhattan,
  title={Neural 3D Scene Reconstruction with the Manhattan-world Assumption},
  author={Guo, Haoyu and Peng, Sida and Lin, Haotong and Wang, Qianqian and Zhang, Guofeng and Bao, Hujun and Zhou, Xiaowei},
  booktitle={CVPR},
  year={2022}
}
```
## Acknowledgements

- Thanks to Lior Yariv for her excellent work VolSDF.
- Thanks to Jianfei Guo for his implementation of VolSDF neurecon.
- Thanks to Johannes Schönberger for his excellent work COLMAP.
- Thanks to Shaohui Liu for his customized implementation of COLMAP as a submodule of NerfingMVS.