Octree Generating Networks: Efficient Convolutional Architectures for High-resolution 3D Outputs

OGN

Source code accompanying the paper "Octree Generating Networks: Efficient Convolutional Architectures for High-resolution 3D Outputs" by M. Tatarchenko, A. Dosovitskiy and T. Brox. The implementation is based on Caffe, and extends the basic framework by providing layers for octree-specific features.

Build

For compilation instructions, refer to the official or unofficial CMake build guidelines for Caffe. The Makefile build is not supported.

Data

Octrees are stored as text-based serialized std::map containers. The provided utility (tools/ogn_converter) can be used to convert binvox voxel grids into octrees. Three of the datasets used in the paper (ShapeNet-cars, FAUST and BlendSwap) can be downloaded from here. For ShapeNet-all, we used the voxelizations (ftp://cs.stanford.edu/cs/cvgl/ShapeNetVox32.tgz) and the renderings (ftp://cs.stanford.edu/cs/cvgl/ShapeNetRendering.tgz) provided by Choy et al. for their 3D-R2N2 framework.
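Conceptually, an octree like this maps cell keys (a level plus a spatial position) to occupancy values, which is what makes a std::map a natural container for it. The following Python sketch illustrates that idea only; it is not the actual .ot serialization format, and the key layout (level, x, y, z) is an assumption chosen for clarity:

```python
# Illustrative octree-as-map sketch (NOT the actual .ot file format).
# An octree over a cubic voxel grid is represented as a dict mapping
# (level, x, y, z) cell keys to occupancy values; a homogeneous block is
# stored as one cell at a coarse level instead of eight finer cells.

def build_octree(grid, level=0, x=0, y=0, z=0, size=None):
    """Recursively collapse a cubic boolean grid into a sparse cell map."""
    if size is None:
        size = len(grid)
    values = {grid[i][j][k]
              for i in range(x, x + size)
              for j in range(y, y + size)
              for k in range(z, z + size)}
    if len(values) == 1 or size == 1:       # homogeneous block: a single cell
        return {(level, x, y, z): values.pop()}
    cells = {}
    half = size // 2
    for dx in (0, half):                    # mixed block: recurse into octants
        for dy in (0, half):
            for dz in (0, half):
                cells.update(build_octree(grid, level + 1,
                                          x + dx, y + dy, z + dz, half))
    return cells

if __name__ == "__main__":
    n = 4
    # A 4x4x4 grid with a single occupied corner voxel.
    grid = [[[i == 0 and j == 0 and k == 0 for k in range(n)]
             for j in range(n)] for i in range(n)]
    octree = build_octree(grid)
    print(len(octree))  # 15 cells instead of 64 dense voxels
```

The seven octants far from the occupied corner each collapse to a single empty cell at level 1, which is the sparsity the OGN layers exploit at high resolutions.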

Usage

Example models can be downloaded from here. Run one of the scripts (train_known.sh, train_pred.sh or test.sh) from the corresponding experiment folder. You should have the caffe executable in your $PATH.

Visualization

A Python script for visualizing .ot files in Blender is provided. To use it, run

$ blender -P $CAFFE_ROOT/python/rendering/render_model.py your_model.ot

License and Citation

All code is provided for research purposes only and without any warranty. Any commercial use requires our consent. When using the code in your research work, please cite the following paper:

@InProceedings{ogn2017,
  author    = "M. Tatarchenko and A. Dosovitskiy and T. Brox",
  title     = "Octree Generating Networks: Efficient Convolutional Architectures for High-resolution 3D Outputs",
  booktitle = "IEEE International Conference on Computer Vision (ICCV)",
  year      = "2017",
  url       = "http://lmb.informatik.uni-freiburg.de/Publications/2017/TDB17b"
}