This is the official implementation of RNb-NeuS: Reflectance and Normal-based Multi-View 3D Reconstruction.
Baptiste Brument*, Robin Bruneau*, Yvain Quéau, Jean Mélou, François Lauze, Jean-Denis Durou, Lilian Calvet
git clone https://github.com/bbrument/RNb-NeuS.git
cd RNb-NeuS
pip install -r requirements.txt
Our data format is inspired by IDR and organized as follows:
CASE_NAME
|-- cameras.npz    # camera parameters
|-- normal
    |-- 000.png    # normal map for each view
    |-- 001.png
    ...
|-- albedo
    |-- 000.png    # albedo for each view (optional)
    |-- 001.png
    ...
|-- mask
    |-- 000.png    # mask for each view
    |-- 001.png
    ...
Several data folders can coexist within a case, for instance one normal folder per normal estimation method. The name of the folder to use must be set in the corresponding .conf file.
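For illustration, the sketch below shows how such a case folder could be loaded in Python. It assumes the IDR-style cameras.npz keys (world_mat_i, scale_mat_i) and normal maps stored as 8-bit RGB PNGs mapping [-1, 1] to [0, 255]; these conventions are assumptions inherited from IDR/NeuS, so check them against the actual data.

import os
import numpy as np
import cv2

def load_case(case_dir):
    """Load cameras, normal maps and masks from a CASE_NAME folder.

    Assumes IDR-style cameras.npz keys ('world_mat_i', 'scale_mat_i') and
    normals encoded as (n + 1) / 2 * 255 -- assumed conventions, verify
    against the actual dataset.
    """
    cameras = np.load(os.path.join(case_dir, "cameras.npz"))
    n_views = len([k for k in cameras.files
                   if k.startswith("world_mat_") and "inv" not in k])

    normals, masks = [], []
    for i in range(n_views):
        img = cv2.imread(os.path.join(case_dir, "normal", f"{i:03d}.png"))
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB).astype(np.float32)
        n = img / 255.0 * 2.0 - 1.0                      # decode RGB to normals
        n /= np.linalg.norm(n, axis=-1, keepdims=True) + 1e-8
        normals.append(n)

        m = cv2.imread(os.path.join(case_dir, "mask", f"{i:03d}.png"),
                       cv2.IMREAD_GRAYSCALE)
        masks.append(m > 127)                            # binarize the mask
    return cameras, normals, masks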
We provide the DiLiGenT-MV data in the format described above, with normals and reflectance maps estimated with SDM-UniPS. Note that the reflectance maps were scaled consistently over all views, and uncertainty masks were generated from 100 normal estimations (see the article for further details).
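The exact criterion used to build the released uncertainty masks is described in the article. Purely as an illustration of the idea, one could flag pixels where repeated normal estimations disagree beyond a hypothetical angular threshold:

import numpy as np

def uncertainty_mask(normal_stack, angle_thresh_deg=15.0):
    """Rough illustration: keep pixels whose normal estimates agree.

    normal_stack: (K, H, W, 3) array of K unit-normal estimations for one view.
    The actual criterion behind the released masks may differ; see the paper.
    """
    mean_n = normal_stack.mean(axis=0)
    mean_n /= np.linalg.norm(mean_n, axis=-1, keepdims=True) + 1e-8
    # per-pixel angle between each estimate and the mean normal
    cosines = np.clip((normal_stack * mean_n[None]).sum(-1), -1.0, 1.0)
    angles = np.degrees(np.arccos(cosines))              # (K, H, W)
    return angles.mean(axis=0) < angle_thresh_deg        # True = reliable pixel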
Train with reflectance
python exp_runner.py --mode train_rnb --conf ./confs/CONF_NAME.conf --case CASE_NAME
Train without reflectance
python exp_runner.py --mode train_rnb --conf ./confs/CONF_NAME.conf --case CASE_NAME --no_albedo
Extract surface
python exp_runner.py --mode validate_mesh --conf ./confs/CONF_NAME.conf --case CASE_NAME --is_continue
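For context, validate_mesh extracts the surface as the zero level set of the learned signed distance function. The sketch below is a generic marching-cubes extraction over a sampled SDF grid, not the repository's exact routine; sdf_fn and the bound/resolution parameters are placeholders:

import numpy as np
import trimesh
from skimage import measure

def extract_mesh(sdf_fn, bound=1.0, resolution=256):
    """Generic sketch: run marching cubes on a dense SDF grid.

    sdf_fn: hypothetical callable mapping (N, 3) points to (N,) signed distances.
    """
    xs = np.linspace(-bound, bound, resolution)
    grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1)  # (R, R, R, 3)
    sdf = sdf_fn(grid.reshape(-1, 3)).reshape(resolution, resolution, resolution)
    spacing = (2 * bound / (resolution - 1),) * 3
    verts, faces, _, _ = measure.marching_cubes(sdf, level=0.0, spacing=spacing)
    verts -= bound                                        # shift back to [-bound, bound]
    return trimesh.Trimesh(verts, faces)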
Additionally, we provide the five meshes of the DiLiGenT-MV dataset reconstructed with our method here.
If you find our code useful for your research, please cite
@inproceedings{Brument24,
title={RNb-NeuS: Reflectance and Normal-based Multi-View 3D Reconstruction},
author={Baptiste Brument and Robin Bruneau and Yvain Quéau and Jean Mélou and François Lauze and Jean-Denis Durou and Lilian Calvet},
booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2024}
}