Generalizable Reconstruction (GenRe) and ShapeHD


This repo covers the following three papers. If you find the code useful, please cite the relevant paper(s).

  1. Generalizable Reconstruction (GenRe)
    Learning to Reconstruct Shapes from Unseen Classes
    Xiuming Zhang*, Zhoutong Zhang*, Chengkai Zhang, Joshua B. Tenenbaum, William T. Freeman, and Jiajun Wu
    NeurIPS 2018 (Oral)
    Paper   |   BibTeX   |   Project

    * indicates equal contribution.

  2. ShapeHD
    Learning Shape Priors for Single-View 3D Completion and Reconstruction
    Jiajun Wu*, Chengkai Zhang*, Xiuming Zhang, Zhoutong Zhang, William T. Freeman, and Joshua B. Tenenbaum
    ECCV 2018
    Paper   |   BibTeX   |   Project

  3. MarrNet
    MarrNet: 3D Shape Reconstruction via 2.5D Sketches
    Jiajun Wu*, Yifan Wang*, Tianfan Xue, Xingyuan Sun, William T. Freeman, and Joshua B. Tenenbaum
    NeurIPS 2017
    Paper   |   BibTeX   |   Project

Environment Setup

All code was built and tested on Ubuntu 16.04.5 LTS with Python 3.6, PyTorch 0.4.1, and CUDA 9.0. Versions for other packages can be found in environment.yml.

  1. Clone this repo with

    # cd to the directory you want to work in
    git clone
    cd GenRe-ShapeHD

    The code below assumes you are at the repo root.

  2. Create a conda environment named shaperecon with the necessary dependencies specified in environment.yml. To make sure trimesh is installed correctly, install it after setting up the conda environment.

    conda env create -f environment.yml

    The TensorFlow dependency in environment.yml is for using TensorBoard only. Remove it if you do not want to monitor your training with TensorBoard.

  3. The instructions below assume you have activated this environment and built the CUDA extension with

    source activate shaperecon

Note that because cffi was deprecated starting with PyTorch 1.0, this works only with PyTorch 0.4.1.

Downloading Our Trained Models and Training Data


To download our trained GenRe and ShapeHD models (1 GB in total), run

wget -P downloads/models/
tar -xvf downloads/models/genre_shapehd_models.tar -C downloads/models/
  • GenRe: and
  • ShapeHD: and


This repo comes with a few Pix3D images and ShapeNet renderings, located in downloads/data/test, for testing purposes.

For training, we make available our RGB and 2.5D sketch renderings, paired with their corresponding 3D shapes, for ShapeNet cars, chairs, and airplanes, with each object captured in 20 random views. Note that this .tar is 143 GB.

wget -P downloads/data/
mkdir downloads/data/shapenet/
tar -xvf downloads/data/shapenet_cars_chairs_planes_20views.tar -C downloads/data/shapenet/

New (Oct. 20, 2019)

For training, in addition to the renderings already included in the initial release, we now also release the Mitsuba scene .xml files used to produce these renderings. This download link is a .zip (394 MB) covering the three training classes: cars, chairs, and airplanes. Among other scene parameters, camera poses can now be retrieved from these .xml files, which we hope will be useful for tasks like camera/object pose estimation.
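For illustration, camera parameters can be read from such scene files with a few lines of Python. This is a hypothetical sketch: the element layout below (`<sensor>`/`<transform name="toWorld">`/`<lookat>`) follows the common Mitsuba 0.5 convention, and the released .xml files may organize these elements differently.

```python
import xml.etree.ElementTree as ET

# Hypothetical scene snippet in the standard Mitsuba 0.5 layout;
# the released .xml files may differ in structure.
SCENE = """
<scene version="0.5.0">
  <sensor type="perspective">
    <transform name="toWorld">
      <lookat origin="1.5, 1.0, 2.0" target="0, 0, 0" up="0, 1, 0"/>
    </transform>
  </sensor>
</scene>
"""

def camera_pose(xml_text):
    """Return (origin, target, up) as float tuples from a Mitsuba lookat."""
    root = ET.fromstring(xml_text)
    lookat = root.find("./sensor/transform[@name='toWorld']/lookat")
    parse = lambda s: tuple(float(v) for v in s.split(","))
    return tuple(parse(lookat.get(k)) for k in ("origin", "target", "up"))

origin, target, up = camera_pose(SCENE)
print(origin)  # (1.5, 1.0, 2.0)
```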

For testing, we release the data of the unseen categories shown in Table 1 of the paper. This download link is a .tar (44 GB) consisting of, for each of the unseen classes, the 500 random shapes we used for testing GenRe. Right now, nine classes are included, as we are tracking down the 10th.

Testing with Our Models

We provide .sh wrappers to perform testing for GenRe, ShapeHD, and MarrNet (without the reprojection consistency part).


GenRe

See scripts/

We updated our entire pipeline to support fully differentiable end-to-end finetuning. In our NeurIPS submission, the projection from depth images to spherical maps was not implemented in a differentiable way. As a result of both the pipeline and PyTorch version upgrades, the model performance is slightly different from what was reported in the original paper.

Below we tabulate the original vs. updated Chamfer distances (CD) across different Pix3D classes. The "Original" row is from Table 2 of the paper.

           Chair  Bed    Bookcase  Desk   Sofa   Table  Wardrobe
Updated    .094   .117   .104      .110   .086   .114   .106
Original   .093   .113   .101      .109   .083   .116   .109
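As a quick check on the numbers above, the mean CD across the seven classes shifts by only about 0.001 between the two pipelines, and no single class moves by more than 0.004:

```python
# Chamfer distances from the table above (Pix3D classes).
updated  = [.094, .117, .104, .110, .086, .114, .106]
original = [.093, .113, .101, .109, .083, .116, .109]

mean_updated = sum(updated) / len(updated)
mean_original = sum(original) / len(original)
max_gap = max(abs(u - o) for u, o in zip(updated, original))
print(round(mean_updated, 4), round(mean_original, 4), round(max_gap, 3))
# 0.1044 0.1034 0.004
```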


ShapeHD

See scripts/

After ECCV, we upgraded our entire pipeline and re-trained ShapeHD with it. The models released here are newly trained, producing quantitative results slightly better than what was reported in the ECCV paper. If you use the Pix3D repo to evaluate the model released here, you will get an average CD of 0.122 for the 1,552 untruncated, unoccluded chair images (whose in-plane rotation is below 5°). The average CD on Pix3D chairs reported in the paper was 0.123.
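For reference, Chamfer distance (CD) between two point sets is commonly defined as the average nearest-neighbor distance in both directions. The sketch below shows one common variant for illustration only; the exact evaluation protocol (point sampling, normalization) lives in the Pix3D repo.

```python
import math

def chamfer_distance(a, b):
    """Symmetric Chamfer distance: average nearest-neighbor Euclidean
    distance from a to b, plus from b to a. (One common definition;
    the Pix3D repo defines the exact protocol used for evaluation.)"""
    def one_way(src, dst):
        return sum(min(math.dist(p, q) for q in dst) for p in src) / len(src)
    return one_way(a, b) + one_way(b, a)

# Shifting a two-point set by 0.1 along z gives 0.1 each way, i.e. 0.2 total.
a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
b = [(0.0, 0.0, 0.1), (1.0, 0.0, 0.1)]
print(round(chamfer_distance(a, b), 3))  # 0.2
```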

MarrNet w/o Reprojection Consistency

See scripts/

The architectures in this implementation of MarrNet differ from those presented in the original NeurIPS 2017 paper. For instance, the reprojection consistency is not implemented here, and MarrNet-1, which predicts 2.5D sketches from RGB inputs, is now a U-ResNet rather than its original architecture. That said, the idea remains the same: predicting 2.5D sketches as an intermediate step toward the final 3D voxel predictions.
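Schematically, the pipeline is just a composition of the two stages. The stand-in functions below are placeholders to show the data flow, not the repo's actual modules:

```python
def marrnet(rgb, sketch_net, voxel_net):
    """MarrNet as a two-stage composition: RGB -> 2.5D sketches -> voxels.
    sketch_net and voxel_net are placeholders for the trained networks."""
    sketches = sketch_net(rgb)   # depth, surface normals, silhouette
    return voxel_net(sketches)

# Toy stand-ins illustrating the two-stage flow:
print(marrnet("rgb", lambda x: x + "->2.5d", lambda s: s + "->voxels"))
# rgb->2.5d->voxels
```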

If you want to test with the original MarrNet, see the MarrNet repo for the pretrained models.

Training Your Own Models

This repo allows you to train your own models from scratch, possibly with data different from the training data provided above. You can monitor your training with TensorBoard: include --tensorboard in your training command, and then run

python -m tensorboard.main --logdir="$logdir"/tensorboard

to visualize your losses.


GenRe

Follow these steps to train the GenRe model.

  1. Train the depth estimator with scripts/
  2. Train the spherical inpainting network with scripts/
  3. Train the full model with scripts/
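The spherical inpainting stage in step 2 operates on spherical maps obtained by projecting visible surface points into spherical coordinates. A minimal, illustrative coordinate conversion is sketched below; it is not the repo's actual projection code, which additionally rasterizes the result onto a fixed spherical grid.

```python
import math

def to_spherical(points):
    """Map nonzero 3D points to (inclination, azimuth, radius).
    Illustrative only; the repo's projection also bins these values
    onto a discrete spherical map."""
    out = []
    for x, y, z in points:
        r = math.sqrt(x * x + y * y + z * z)
        theta = math.acos(z / r)   # inclination from the +z axis
        phi = math.atan2(y, x)     # azimuth in the x-y plane
        out.append((theta, phi, r))
    return out

# A point on the +z axis has zero inclination and zero azimuth:
print(to_spherical([(0.0, 0.0, 2.0)])[0])  # (0.0, 0.0, 2.0)
```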


ShapeHD

Follow these steps to train the ShapeHD model.

  1. Train the 2.5D sketch estimator with scripts/
  2. Train the 2.5D-to-3D network with scripts/
  3. Train a 3D-GAN with scripts/
  4. Finetune the 2.5D-to-3D network with perceptual losses provided by the discriminator of the 3D-GAN, using scripts/
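The perceptual loss in step 4 penalizes distance in the discriminator's feature space rather than in raw voxel space. A framework-free schematic, assuming the feature vectors have already been extracted (the actual loss uses activations from the trained 3D-GAN discriminator):

```python
def perceptual_loss(feat_pred, feat_gt):
    """Mean squared difference between two feature vectors (schematic).
    In ShapeHD, both vectors come from the 3D-GAN discriminator applied
    to the predicted and ground-truth voxel grids."""
    assert len(feat_pred) == len(feat_gt)
    return sum((p - g) ** 2 for p, g in zip(feat_pred, feat_gt)) / len(feat_pred)

print(perceptual_loss([0.2, 0.8, 0.5], [0.2, 0.8, 0.5]))  # 0.0
```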

MarrNet w/o Reprojection Consistency

Follow these steps to train the MarrNet model, excluding the reprojection consistency.

  1. Train the 2.5D sketch estimator with scripts/
  2. Train the 2.5D-to-3D network with scripts/
  3. Finetune the 2.5D-to-3D network with scripts/


Please open an issue if you encounter any problem. You will likely get a quicker response than via email.


  • Dec. 28, 2018: Initial release
  • Oct. 20, 2019: Added testing data of the unseen categories, and all .xml scene files used to render training data