Code release for "Multi-view Consistency as Supervisory Signal for Learning Shape and Pose Prediction"
# Multi-view Consistency as Supervisory Signal for Learning Shape and Pose Prediction

Shubham Tulsiani, Alexei A. Efros, Jitendra Malik.

Project Page

## Installation

First, you'll need a working implementation of Torch. The subsequent installation steps are:

```shell
# Install the 3D spatial transformer
cd external/stn3d
luarocks make stn3d-scm-1.rockspec

# Additional dependencies (json and matio)
sudo apt-get install libmatio2
luarocks install matio
luarocks install json
```

## Training and Evaluating

To train or evaluate the (trained/downloaded) models, you will first need to download the ShapeNet dataset (v1) and preprocess it to compute renderings and voxelizations. Please see the detailed README files for training or evaluation of models for further instructions.

## Demo and Pre-trained Models

Please check out the interactive notebook, which shows reconstructions using the learned models. You'll need to:

- Install working implementations of Torch and iTorch.
- Download the pre-trained models (1.5GB) and extract them to `cachedir/snapshots/shapenet/`.
- Edit the absolute paths to the Blender executable and the provided `.blend` file in the rendering utility script.
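The model-extraction step above can be sketched as follows. This is only a sketch: the download link is the one in this README, and `mvc_models.tar.gz` is a placeholder name for the downloaded archive, not a file this repository provides.

```shell
# Sketch of the pre-trained model setup (archive name is a placeholder).
# Create the snapshot directory the demo notebook expects.
mkdir -p cachedir/snapshots/shapenet

# After downloading the pre-trained models (link in this README),
# extract them in place, e.g.:
# tar -xzf mvc_models.tar.gz -C cachedir/snapshots/shapenet/
```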

## Citation

If you use this code for your research, please consider citing:

```
@inProceedings{mvcTulsiani18,
  title={Multi-view Consistency as Supervisory Signal for Learning Shape and Pose Prediction},
  author={Shubham Tulsiani and Alexei A. Efros and Jitendra Malik},
  booktitle={Computer Vision and Pattern Recognition (CVPR)},
  year={2018}
}
```