# PSVH-3d-reconstruction

This repository is the implementation of our AAAI 2019 paper:

Deep Single-View 3D Object Reconstruction with Visual Hull Embedding

Hanqing Wang, Jiaolong Yang, Wei Liang, Xin Tong

This work is implemented using TensorFlow.

## Introduction

In this paper, we present an approach that aims to preserve more shape details and improve reconstruction quality. The key idea of our method is to leverage object mask and pose estimation from CNNs to assist the 3D shape learning by constructing a probabilistic single-view visual hull inside the network.
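To make the idea concrete, below is a minimal NumPy sketch (not the paper's code) of how a probabilistic visual hull can be built from a predicted silhouette and camera pose: each voxel center is projected into the image, and the silhouette probability at that pixel becomes the voxel's occupancy prior. The function name, the pinhole camera convention, and the grid parameters are all illustrative assumptions.

```python
# Sketch only: builds a probabilistic visual hull from a soft silhouette.
import numpy as np

def probabilistic_visual_hull(silhouette, K, R, t, grid_size=32, extent=1.0):
    """silhouette: (H, W) per-pixel foreground probabilities from the 2D CNN.
    K: (3, 3) intrinsics; R: (3, 3) rotation; t: (3,) translation from the
    pose CNN. All names and conventions here are illustrative assumptions."""
    H, W = silhouette.shape
    # Regular voxel grid centered at the origin.
    lin = np.linspace(-extent, extent, grid_size)
    xs, ys, zs = np.meshgrid(lin, lin, lin, indexing="ij")
    voxels = np.stack([xs, ys, zs], axis=-1).reshape(-1, 3)   # (N, 3)
    # Project voxel centers into the image plane.
    cam = voxels @ R.T + t                                     # camera coords
    uv = cam @ K.T
    u = uv[:, 0] / uv[:, 2]
    v = uv[:, 1] / uv[:, 2]
    # Sample the silhouette at each projection (nearest pixel); voxels that
    # project outside the image or behind the camera get probability 0.
    ui = np.clip(np.round(u).astype(int), 0, W - 1)
    vi = np.clip(np.round(v).astype(int), 0, H - 1)
    inside = (u >= 0) & (u < W) & (v >= 0) & (v < H) & (cam[:, 2] > 0)
    hull = np.where(inside, silhouette[vi, ui], 0.0)
    return hull.reshape(grid_size, grid_size, grid_size)
```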

Our method works by first predicting a coarse shape as well as the object pose and silhouette using CNNs, followed by a novel 3D refinement CNN that refines the coarse shape using the constructed probabilistic visual hull.
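The refinement stage can be pictured as a small 3D CNN that consumes the coarse volume together with the visual hull volume. The snippet below is a hedged sketch in TensorFlow 1.x (the version listed under Installation), not the paper's actual architecture; the layer widths and output head are assumptions.

```python
# Sketch only: a toy 3D refinement network over coarse shape + visual hull.
import tensorflow as tf

def refine_volume(coarse, hull):
    """coarse, hull: float tensors of shape (batch, D, H, W, 1) holding
    per-voxel occupancy probabilities in [0, 1]. Illustrative design."""
    x = tf.concat([coarse, hull], axis=-1)                  # (B, D, H, W, 2)
    x = tf.layers.conv3d(x, 32, 3, padding="same", activation=tf.nn.relu)
    x = tf.layers.conv3d(x, 32, 3, padding="same", activation=tf.nn.relu)
    logits = tf.layers.conv3d(x, 1, 3, padding="same")      # refined logits
    return tf.nn.sigmoid(logits)                            # refined occupancy
```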

## Examples

## Citation

If you find our work helpful for your research, please cite our paper:

```
@article{wang2018deep,
  title={Deep Single-View 3D Object Reconstruction with Visual Hull Embedding},
  author={Wang, Hanqing and Yang, Jiaolong and Liang, Wei and Tong, Xin},
  journal={arXiv preprint arXiv:1809.03451},
  year={2018}
}
```

## Installation

Install Python and the dependencies:

- python 3.5
- tensorflow 1.12.0
- pillow

If your Python environment is managed via Anaconda/Miniconda, you can install the dependencies with the following command:

```
conda install tensorflow pillow
```
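Since the code targets tensorflow 1.12.0 specifically, you may prefer to pin the version explicitly (whether this exact build is available depends on your conda channels):

```
conda install tensorflow=1.12.0 pillow
```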

The checkpoints of the trained models are available here (426 MB). Extract the files to the root directory of the repository.

## Demo

Run `python run_case.py` to run the examples. The outputs are the reconstruction results before and after the refinement (please refer to our paper for more details). The results are saved in OBJ format; you can use MeshLab for visualization.
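If you want to sanity-check the generated meshes programmatically before opening them in MeshLab, a few lines of Python suffice. The output file name below is hypothetical and depends on what `run_case.py` actually writes:

```python
# Count vertices and faces in an OBJ file (hypothetical output name below).
def obj_stats(path):
    vertices, faces = 0, 0
    with open(path) as f:
        for line in f:
            if line.startswith("v "):    # vertex line
                vertices += 1
            elif line.startswith("f "):  # face line
                faces += 1
    return vertices, faces

print(obj_stats("result_refined.obj"))  # hypothetical file name
```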

## Acknowledgement

## License

PSVH is freely available for non-commercial use and may be redistributed under these conditions. Please see the license file for further details. For a commercial license, please contact the authors.