This is the dataset generation code for ADEPT (Approximate Derenderer, Extended Physics, and Tracking).



Modeling Expectation Violation in Intuitive Physics with Coarse Probabilistic Object Representations

Kevin Smith*, Lingjie Mei*, Shunyu Yao, Jiajun Wu, Elizabeth S. Spelke, Joshua B. Tenenbaum, Tomer Ullman (* indicates equal contribution)


Paper BibTeX Website

For the model, see ADEPT-Model-Release


Requirements

  • Linux
  • Python 3
  • Blender as a Python module
  • Other required modules, listed in requirements.txt

Getting started

  1. Clone this repository

    git clone
    cd ADEPT-Dataset-Release

    Then replace CONTENT_FOLDER in utils.constants and phys_sim/data/builder/ with the absolute path to your directory.

  2. Create a conda environment for ADEPT Dataset Generation, and install the requirements.

    conda create -n adept-dataset
    conda activate adept-dataset
    pip install -r requirements.txt

For installation of Blender as a Python module, see the Blender wiki.
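As a quick sanity check that Blender is importable from your environment's Python, something like the following can be used (a minimal sketch; `check_bpy` is a hypothetical helper, not part of this repository):

```python
def check_bpy():
    """Return a short status string describing whether bpy is importable."""
    try:
        import bpy  # Blender's Python API, available once Blender is built as a module
        return "bpy available, Blender " + bpy.app.version_string
    except ImportError:
        return "bpy not available; install Blender as a Python module first"

print(check_bpy())
```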

You may also try using Blender's bundled Python, by replacing `python *.py --arg1 --value1` with `blender -b --python *.py -- --arg1 --value1`.
  3. (Optional) If you have multiple machines, you may change get_host_id in utils/ to reflect the id of each machine. With that in hand, you may speed up all following processes by passing the --stride N argument, where you have N machines with consecutive ids.

  4. To render ShapeNet objects, please download ShapeNet Core V2 from its official website. Change SHAPE_NET_FOLDER in phys_sim/data/builder/ to the path of the ShapeNet meshes, and run that script.

    To turn them into .blend files, run

    # Single machine
    python3 render/data/builder/ #Map phase
    python3 render/data/builder/ --reduce #Reduce phase
    # Multiple (e.g. 8) machines
    python3 render/data/builder/ --stride 8 #On each machine
    python3 render/data/builder/ --reduce --stride 8 #On a single machine
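The --stride scheduling above can be sketched as follows (a minimal illustration; `cases_for_machine` is a hypothetical stand-in for the real scripts, where get_host_id supplies the machine id):

```python
def cases_for_machine(num_cases, host_id, stride):
    """Machine `host_id` (0-based, consecutive ids across N = `stride`
    machines) processes every `stride`-th case, so all machines together
    cover every case exactly once with no overlap."""
    return list(range(host_id, num_cases, stride))

# With 8 machines, machine 0 handles cases 0, 8, 16, ... and machine 1
# handles cases 1, 9, 17, ...
print(cases_for_machine(20, 0, 8))  # → [0, 8, 16]
```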

Dataset generation

  1. Generate the training set (e.g. with 1000 videos) by running

    # Single machine
    python3 dataset/ --end 1000
    # Multiple (e.g. 8) machines
    python3 dataset/ --end 1000 --stride 8 #On each machine
  2. Generate the human test set by running

    # Single machine
    python3 dataset/human/ --end 1000
    # Multiple (e.g. 8) machines
    python3 dataset/human/ --end 1000 --stride 8 #On each machine


Evaluation

  1. Evaluate the relative accuracy on the human test set. If you have an experiment output folder containing .txt files with the scores of all human test cases, run
    python3 dataset/human/ --summary_folder ${SUMMARY_FOLDER} #Score in SUMMARY_FOLDER/results
    python3 dataset/human/ --summary_folder ${SUMMARY_FOLDER} --output_folder ${OUTPUT_FOLDER} #Custom output folder
    Alternatively, you may compute the relative accuracy from a json file containing a dictionary that maps each case name to its score:
    python3 dataset/human/ --summary_file ${SUMMARY_FILE} --output_folder ${OUTPUT_FOLDER} #Custom output folder
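To illustrate the relative-accuracy idea (a surprising video should receive a higher surprise score than its matched control), here is a minimal sketch; the function, score format, and pairing scheme are hypothetical and not the scripts' actual input format:

```python
def relative_accuracy(scores, pairs):
    """Fraction of (surprise, control) pairs where the surprise case's
    score exceeds its matched control's score.

    scores: dict mapping case name -> surprise score
    pairs:  list of (surprise_case, control_case) name pairs
    """
    correct = sum(1 for s, c in pairs if scores[s] > scores[c])
    return correct / len(pairs)

# Hypothetical scores: two of the three pairs are ranked correctly.
scores = {"s1": 0.9, "c1": 0.2, "s2": 0.4, "c2": 0.7, "s3": 0.8, "c3": 0.1}
print(relative_accuracy(scores, [("s1", "c1"), ("s2", "c2"), ("s3", "c3")]))
```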