Differentiable Volumetric Rendering

This repository contains the code for the paper Differentiable Volumetric Rendering: Learning Implicit 3D Representations without 3D Supervision.

You can find detailed usage instructions for training your own models and using pre-trained models below.

If you find our code or paper useful, please consider citing

@inproceedings{Niemeyer2020DVR,
    title = {Differentiable Volumetric Rendering: Learning Implicit 3D Representations without 3D Supervision},
    author = {Niemeyer, Michael and Mescheder, Lars and Oechsle, Michael and Geiger, Andreas},
    booktitle = {Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
    year = {2020}
}


First you have to make sure that you have all dependencies in place. The simplest way to do so is to use anaconda.

You can create an anaconda environment called dvr using

conda env create -f environment.yaml
conda activate dvr
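
You can verify that the environment is set up correctly by checking that PyTorch is importable and detects your GPU (a quick smoke test, assuming a CUDA-capable machine; the one-liner is not part of the repository):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"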

Next, compile the extension modules. You can do this via

python setup.py build_ext --inplace


You can now test our code on the provided input images in the demo folder. To this end, start the generation process for one of the config files in the configs/demo folder. For example, simply run

python generate.py configs/demo/demo_combined.yaml

This script should create a folder out/demo/demo_combined where the output meshes are stored. The script copies the inputs into the generation/inputs folder and creates the meshes in the generation/meshes folder. Moreover, the script creates a generation/vis folder where both inputs and outputs are copied together for easy comparison.
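
To inspect the generated meshes programmatically, you can load them with trimesh (a minimal sketch; trimesh itself and the exact output filenames are assumptions, so adapt the path to what the demo actually wrote):

import glob
import trimesh

# pick up whatever mesh files the demo wrote into the folder described above
paths = sorted(glob.glob('out/demo/demo_combined/generation/meshes/*'))
mesh = trimesh.load(paths[0])
print(len(paths), 'meshes; first has', len(mesh.vertices), 'vertices and', len(mesh.faces), 'faces')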


To evaluate a pre-trained model or train a new model from scratch, you have to obtain the respective dataset. We use three different datasets in the DVR project:

  1. ShapeNet for 2.5D supervised models (using the Choy et al. renderings as input and our renderings as supervision)
  2. ShapeNet for 2D supervised models (using the Kato et al. renderings)
  3. A subset of the DTU multi-view dataset

You can download our preprocessed data using

bash scripts/download_data.sh

and following the instructions. The sizes of the datasets are 114GB (1), 34GB (2), and 0.5GB (3).

This script should download and unpack the data automatically into the data folder.
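
To sanity-check the download, you can list what was unpacked (a minimal sketch; the exact subdirectory names depend on which datasets you selected):

import os

# report what the download script unpacked into the data folder
for entry in sorted(os.listdir('data')):
    path = os.path.join('data', entry)
    n_files = sum(len(files) for _, _, files in os.walk(path))
    print(f'{entry}: {n_files} files')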


When you have installed all binary dependencies and obtained the preprocessed data, you are ready to run our pre-trained models and train new models from scratch.


To generate meshes using a trained model, use

python generate.py CONFIG.yaml

where you replace CONFIG.yaml with the correct config file.
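
The config files are plain YAML, so you can inspect the options a run will use before starting it (a minimal sketch using PyYAML, which the project itself needs in order to read configs; this snippet is only for inspection and not part of the repository's tooling):

import yaml

# print the options defined in a config file; pick any config from the configs folder
with open('configs/demo/demo_combined.yaml') as f:
    cfg = yaml.safe_load(f)
print(yaml.dump(cfg, default_flow_style=False))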

The easiest way is to use a pre-trained model. You can do this by using one of the config files which are indicated with _pretrained.yaml.

For example, for our 2.5D supervised single-view reconstruction model run

python generate.py configs/single_view_reconstruction/multi_view_supervision/ours_depth_pretrained.yaml

or for our multi-view reconstruction from RGB images and sparse depth maps for the birds object run

python generate.py configs/multi_view_reconstruction/birds/ours_depth_mvs_pretrained.yaml

Our script will automatically download the model checkpoints and run the generation. You can find the outputs in the out/.../pretrained folders.

Please note that the config files *_pretrained.yaml are only for generation, not for training new models: when these configs are used for training, the model will be trained from scratch, but during inference our code will still use the pre-trained model.


For evaluation of the models, we provide the script eval_meshes.py. You can run it using

python eval_meshes.py CONFIG.yaml

The script takes the meshes generated in the previous step and evaluates them using a standardized protocol. The output will be written to .pkl/.csv files in the corresponding generation folder which can be processed using pandas.
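
Since the results are written as .pkl/.csv files, they can be summarized directly with pandas (a minimal sketch; the output filename below is an assumption, so point it at the file the script actually wrote into your generation folder):

import pandas as pd

# path is illustrative; use the .csv the evaluation script produced
df = pd.read_csv('generation/eval_meshes_full.csv')
# average each metric over all evaluated meshes
print(df.mean(numeric_only=True))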


Finally, to train a new network from scratch, run

python train.py CONFIG.yaml

where you replace CONFIG.yaml with the name of the configuration file you want to use.

You can monitor the training process on http://localhost:6006 using tensorboard:

cd OUTPUT_DIR
tensorboard --logdir ./logs

where you replace OUTPUT_DIR with the respective output directory.

For available training options, please take a look at configs/default.yaml.
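
Custom configs only need to contain the options that differ from these defaults; conceptually, your config is merged over configs/default.yaml. The sketch below illustrates that merge and is not the repository's actual loader (which may merge nested options differently):

import yaml

def load_merged(config_path, default_path='configs/default.yaml'):
    # shallow merge: values in the custom config override the defaults
    with open(default_path) as f:
        cfg = yaml.safe_load(f)
    with open(config_path) as f:
        cfg.update(yaml.safe_load(f) or {})
    return cfg

print(load_merged('configs/demo/demo_combined.yaml'))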

Further Information

More Work on Implicit Representations

If you like the DVR project, please check out other works on implicit representations from our group:

Other Relevant Works

Also check out other exciting works on inferring implicit representations without 3D supervision:
