Learning Unsupervised Hierarchical Part Decomposition of 3D Objects from a Single RGB Image

This repository contains the code that accompanies our CVPR 2020 paper Learning Unsupervised Hierarchical Part Decomposition of 3D Objects from a Single RGB Image


You can find detailed usage instructions for training your own models and using our pretrained models below.

If you found this work influential or helpful for your research, please consider citing

@inproceedings{Paschalidou2020CVPR,
     title = {Learning Unsupervised Hierarchical Part Decomposition of 3D Objects from a Single RGB Image},
     author = {Paschalidou, Despoina and van Gool, Luc and Geiger, Andreas},
     booktitle = {Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
     year = {2020}
}

Installation & Dependencies

Our codebase has the following dependencies:

For the visualizations, we use simple-3dviz, our easy-to-use library for visualizing 3D data with Python and ModernGL, together with matplotlib for the colormaps. Note that simple-3dviz provides a lightweight and easy-to-use scene viewer based on wxpython. If you wish to use our scripts for visualizing the reconstructed primitives, you will also need to install wxpython.

The simplest way to make sure that you have all dependencies in place is to use conda. You can create a conda environment called hierarchical_primitives using

conda env create -f environment.yaml
conda activate hierarchical_primitives

Next, compile the extension modules. You can do this via

python setup.py build_ext --inplace
pip install -e .
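After installation, you can optionally check that the key packages are importable before running any scripts. The sketch below uses only the standard library; the list of module names passed in is an assumption and should be adjusted to match environment.yaml:

```python
import importlib.util

def check_dependencies(names):
    """Return the subset of module names that cannot be found."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Hypothetical dependency list; replace with the modules from environment.yaml.
missing = check_dependencies(["numpy", "matplotlib"])
if missing:
    print("Missing modules:", ", ".join(missing))
else:
    print("All dependencies found.")
```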


Once you have installed all dependencies, you can start training new models from scratch, evaluate our pre-trained models, and visualize the recovered primitives using one of our pre-trained models.


To visualize the predicted primitives using a trained model, we provide a script. It performs the forward pass and visualizes the predicted primitives using simple-3dviz. To execute it, simply run

python path_to_config_yaml path_to_output_dir --weight_file path_to_weight_file --model_tag MODEL_TAG --from_fit

where the argument --weight_file specifies the path to a trained model and the argument --model_tag defines the model_tag of the input to be reconstructed.
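The command-line interface described above can be sketched with argparse as follows. The argument names mirror the invocation shown, but the help strings, defaults, and overall parser definition are assumptions, not the codebase's actual code:

```python
import argparse

def build_parser():
    # Mirrors the invocation shown above; help texts and defaults are guesses.
    parser = argparse.ArgumentParser(
        description="Visualize primitives predicted by a trained model"
    )
    parser.add_argument("config_file", help="Path to the configuration YAML")
    parser.add_argument("output_directory", help="Directory for the outputs")
    parser.add_argument("--weight_file", default=None,
                        help="Path to a trained model")
    parser.add_argument("--model_tag", default=None,
                        help="Tag of the input to be reconstructed")
    parser.add_argument("--from_fit", action="store_true")
    return parser

args = build_parser().parse_args(
    ["config.yaml", "out/", "--weight_file", "model.pt",
     "--model_tag", "MODEL_TAG", "--from_fit"]
)
print(args.weight_file, args.model_tag, args.from_fit)
```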

Hierarchy Reconstruction


Finally, to train a new network from scratch, we provide a script. To execute it, you need to specify the path to the configuration file you wish to use and the path to the output directory, where the trained models and the training statistics will be saved. Namely, to train a new model from scratch, you simply need to run

python path_to_config_yaml path_to_output_dir

Note that it is also possible to start from a previously trained model by specifying the --weight_file argument, which should contain the path to a previously trained model. Furthermore, by using the arguments --model_tag and --category_tag, you can also train your network on a particular model (e.g. a specific plane, car, human etc.) or a specific object category (e.g. planes, chairs etc.).

Also make sure to update the dataset_directory argument in the provided config file based on the path where your dataset is stored.
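For example, the relevant entry might look like the following hypothetical excerpt; only the dataset_directory key is taken from the text above, and its nesting inside the provided config files may differ:

```yaml
# Hypothetical config excerpt; check the provided config files for the
# actual structure surrounding dataset_directory.
data:
  dataset_directory: "/path/to/your/dataset"
```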


Contributions such as bug fixes, bug reports, suggestions etc. are more than welcome and should be submitted in the form of new issues and/or pull requests on Github.


Our code is released under the MIT license, which practically allows anyone to do anything with it. The full license can be found in the LICENSE file.

Relevant Research

Below we list some papers that are relevant to our work.


  • Neural Parts: Learning Expressive 3D Shape Abstractions with Invertible Neural Networks pdf
  • Learning Unsupervised Hierarchical Part Decomposition of 3D Objects from a Single RGB Image pdf
  • Superquadrics Revisited: Learning 3D Shape Parsing beyond Cuboids pdf blog

By Others:

  • Learning Shape Abstractions by Assembling Volumetric Primitives pdf
  • 3D-PRNN: Generating Shape Primitives with Recurrent Neural Networks pdf
  • Im2Struct: Recovering 3D Shape Structure From a Single RGB Image pdf
  • Learning shape templates with structured implicit functions pdf
  • CvxNet: Learnable Convex Decomposition pdf

Below, we also list some papers that are more closely related to superquadrics:

  • Equal-Distance Sampling of Superellipse Models pdf
  • Revisiting Superquadric Fitting: A Numerically Stable Formulation link
  • Segmentation and Recovery of Superquadric Models using Convolutional Neural Networks pdf

