VIPNet: A Fast and Accurate Single-View Volumetric Reconstruction by Learning Sparse Implicit Point Guidance
Dong Du, Zhiyi Zhang, Xiaoguang Han, Shuguang Cui, Ligang Liu
Published in 2020 International Conference on 3D Vision (3DV).
This implementation has been tested on Ubuntu 18.04 with Python 3.6.9, CUDA 10.0, and PyTorch 1.2.0. The pre-trained models are provided here.
I apologize for not having had the time to organize these files. Please refer to our paper when using them.
To train/test the code, please install the following external libraries:

- Chamfer Distance: Please build and replace the "cd_dist_so.so" file, which is used to compute the Chamfer distance. The source code and instructions can be found in Pixel2Mesh.
- utils: Please refer to Occupancy Networks to build these libraries, and put them in the "utils" folder.
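For reference, the Chamfer distance between two point sets sums, for each point in one set, the squared distance to its nearest neighbor in the other set (averaged per set), in both directions. A minimal NumPy sketch is shown below; this is only an illustration of the metric, not the compiled CUDA op ("cd_dist_so.so") used by the training code:

```python
import numpy as np

def chamfer_distance(p1, p2):
    """Naive symmetric Chamfer distance between point sets.

    p1: (N, 3) array, p2: (M, 3) array.
    Returns mean nearest-neighbor squared distance in both directions.
    """
    # Pairwise squared distances, shape (N, M).
    d = np.sum((p1[:, None, :] - p2[None, :, :]) ** 2, axis=-1)
    # For each point in p1 its nearest point in p2, and vice versa.
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

For example, two identical point sets give a distance of 0, and a single point shifted by one unit gives 1.0 in each direction (2.0 in total).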
If you find our work helpful, please consider citing:
@inproceedings{du2020vipnet,
title={Vipnet: A fast and accurate single-view volumetric reconstruction by learning sparse implicit point guidance},
author={Du, Dong and Zhang, Zhiyi and Han, Xiaoguang and Cui, Shuguang and Liu, Ligang},
booktitle={2020 International Conference on 3D Vision (3DV)},
pages={553--562},
year={2020},
organization={IEEE}
}
VIPNet is released under the MIT License. See the LICENSE file for more details.