- SurfaceNet+: An End-to-end 3D Neural Network for Very Sparse Multi-view Stereopsis.
- TPAMI 2020 early access link.
- or the arXiv preprint version.
- Key contributions
- Proposed a Sparse-MVS benchmark (under construction)
- Comprehensive evaluation on multiple datasets, e.g., DTU and Tanks and Temples.
- Proposed a trainable occlusion-aware view selection scheme for volumetric MVS methods, e.g., SurfaceNet [5].
- Analysed the advantages of volumetric methods, e.g., SurfaceNet [5] and SurfaceNet+, over depth-fusion methods, e.g., Gipuma [6], R-MVSNet [7], Point-MVSNet [8], and COLMAP [9], on the Sparse-MVS problem.
Fig.1: Illustration of a very sparse MVS setting using only
Fig.2: Comparison with existing methods on the DTU dataset [10] under different sparse sampling strategies. When Sparsity = 3 and Batchsize = 2, the chosen camera indices are 1,2 / 4,5 / 7,8 / 10,11 / .... SurfaceNet+ consistently outperforms the state-of-the-art methods in all settings, especially in the very sparse scenarios.
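The sampling strategy above can be sketched as follows. This is a minimal illustration, not code from the repository: it assumes cameras are numbered from 1 and that each batch of `batchsize` consecutive cameras starts every `sparsity` indices, matching the 1,2 / 4,5 / 7,8 / 10,11 example for Sparsity = 3 and Batchsize = 2.

```python
def sparse_sample(num_cameras, sparsity, batchsize):
    """Pick `batchsize` consecutive camera indices every `sparsity` cameras.

    Hypothetical helper for illustration; cameras are indexed 1..num_cameras.
    """
    chosen = []
    start = 1
    # Advance the window start by `sparsity` until the batch no longer fits.
    while start + batchsize - 1 <= num_cameras:
        chosen.extend(range(start, start + batchsize))
        start += sparsity
    return chosen

print(sparse_sample(12, 3, 2))  # → [1, 2, 4, 5, 7, 8, 10, 11]
```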
Fig.3: Results on a tank model from the Tanks and Temples 'intermediate' set [23], compared with R-MVSNet [7] and COLMAP [9], demonstrating the high-recall prediction of SurfaceNet+ in the sparse-MVS setting.
If you find SurfaceNet+, the Sparse-MVS benchmark, or SurfaceNet useful in your research, please consider citing:
@article{ji2020surfacenet_plus,
  title={SurfaceNet+: An End-to-end 3D Neural Network for Very Sparse Multi-view Stereopsis},
  author={Ji, Mengqi and Zhang, Jinzhi and Dai, Qionghai and Fang, Lu},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2020},
  publisher={IEEE}
}

@inproceedings{ji2017surfacenet,
  title={SurfaceNet: An End-To-End 3D Neural Network for Multiview Stereopsis},
  author={Ji, Mengqi and Gall, Juergen and Zheng, Haitian and Liu, Yebin and Fang, Lu},
  booktitle={Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
  pages={2307--2315},
  year={2017}
}