Python code and download links to the data of Bill et al., "Hierarchical structure is employed by humans during visual motion perception" (preprint).

This repository allows you to:

  • Generate figures 2, 3, 4 and 5 from the main paper,
  • Collect your own data,
  • Run the full analysis pipeline (if you are willing to dig into the code a bit).

In case of questions, please contact Johannes Bill (

We assume an Ubuntu-based Linux installation. On macOS, you should be able to install sip and pyqt via Homebrew. In the cloned repository, we suggest using a virtual environment with Python 3.6+:

$ python3 -m pip install --user --upgrade pip   # Install pip (if not yet installed)
$ sudo apt-get install python3-venv             # May be needed for environment creation
$ python3.6 -m venv env                         # Create environment with the right python interpreter (must be installed)
$ source env/bin/activate                       # Activate env
$ python3 -m pip install --upgrade pip          # Make sure the local pip is up to date
$ pip3 install wheel                            # Install wheel first
$ pip3 install -r requirements.txt              # Install other required packages
$ deactivate                                    # Deactivate env
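As a quick sanity check that the activated environment uses the intended interpreter, something like the following can be run; a minimal sketch (the 3.6 floor comes from the instructions above, the helper name is ours):

```python
import sys

def check_python_version(minimum=(3, 6)):
    """Return True if the running interpreter meets the minimum version."""
    return sys.version_info[:2] >= minimum

# The repository targets Python 3.6+.
if not check_python_version():
    print("Warning: Python 3.6+ expected; found %d.%d" % sys.version_info[:2])
```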


Always start your session by running source and end it with source. These scripts set up the virtual environment and Python path. Here are some cookbooks.

Plot figures

Re-plotting the figures from the main paper is quick and easy:

$ source
$ cd plot
$ python3   # Plot Figure 2
$ python3   # Plot Figure 3
$ python3   # Plot Figure 4
$ python3   # Plot Figure 5
$ cd ..
$ source

All figures will be saved in ./plot/fig/ as png and pdf.
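The dual-format export can be mimicked with a pattern like the one below; the fig/ output directory matches the repository layout, but the helper name and file stem are illustrative, not taken from the plot scripts:

```python
import os
import matplotlib
matplotlib.use("Agg")          # headless backend, safe on servers
import matplotlib.pyplot as plt

def save_figure(fig, stem, outdir="fig"):
    """Save a figure as both png and pdf, as the plot scripts do."""
    os.makedirs(outdir, exist_ok=True)
    for ext in ("png", "pdf"):
        fig.savefig(os.path.join(outdir, "%s.%s" % (stem, ext)), dpi=300)

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1])
save_figure(fig, "example")
```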

Collect your own data

MOT experiment

This experiment requires Python as well as MATLAB with Psychtoolbox. Please make sure to have at least 2GB of disk space available per participant. Questions on the data collection for the MOT experiment can also be directed to Hrag Pailian (

  1. Generate trials:
  • $ source
  • $ cd rmot/generate_stim
  • Adjust nSubjects=... in file to your needs.
  • Generate trials via $ ./ (This may take a while depending on processor power.)
  • Resulting trials are written to:
    • data/rmot/myexp/trials for the Python data (will be needed for simulations and analyses)
    • data/rmot/myexp/matlab_trials for the data collection with MATLAB
  2. Run the experiment: For each participant n=1,..
  • Copy the content of data/rmot/myexp/matlab_trials/participant_n/ into rmot/matlab_gui/Trials/.
  • $ cd ../matlab_gui
  • Determine the participant's speed via repeated execution of Part_1_Thresholding.m (will prompt for speed on start).
  • Conduct the main experiment via Part_2_Test.m (will prompt for speed and n).
  • Copy the saved responses to data/rmot/myexp/responses/ and rename the file to Response_File_Test_Pn.mat.
  3. Convert the data back to Python format:
  • $ cd ../ana
  • For each participant n=1,.., run
    $ python3 data/myexp/responses/Response_File_Test_Pn.mat.
  • $ cd ../..
  • $ source

Continue with the data analysis (see below).
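For orientation, the MATLAB response files are ordinary .mat archives, so the conversion in step 3 boils down to a scipy round-trip. A hedged sketch -- the variable name "responses" inside the file is an assumption, not the repository's actual key:

```python
import numpy as np
from scipy.io import savemat, loadmat

# Round-trip illustration: MATLAB response files are plain .mat archives.
# The key name "responses" is illustrative, not taken from the repository.
savemat("Response_File_Test_P1.mat", {"responses": np.arange(5.0)})
data = loadmat("Response_File_Test_P1.mat")
print(data["responses"].ravel())
```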

Prediction experiment

This experiment is fully Python-based.

$ source
$ cd pred/gui
$ python3 presets/example_trials/ -f -T 10   # EITHER: try out 10 trials (ca. 2 min)
$ ./ -u 12345                        # OR: run the full experiment (ca. 75 min)
$ cd ../..
$ source

Continue with the data analysis (below).

If you run the full experiment, your data will be stored in /data/pred/myexp/. Please refer to /pred/gui/ for further information -- especially to ensure a stable frame rate before running a full experiment.
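To get a feel for frame-rate stability before committing to a 75-minute session, inter-frame intervals can be measured with a plain timing loop. This stdlib-only sketch is illustrative and is not the check shipped in /pred/gui/:

```python
import time

def measure_frame_intervals(n_frames=100, target_hz=60.0):
    """Busy-wait a fake render loop; return the worst frame interval in ms."""
    target = 1.0 / target_hz
    intervals = []
    last = time.perf_counter()
    for _ in range(n_frames):
        # Stand-in for the real draw-and-flip call of the GUI.
        while time.perf_counter() - last < target:
            pass
        now = time.perf_counter()
        intervals.append(now - last)
        last = now
    return max(intervals) * 1000.0

worst_ms = measure_frame_intervals(n_frames=20)
```

If the worst interval is much larger than the nominal frame period, close background applications before collecting data.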

Data download

The data from the publication can be downloaded here:

For the analyses below, unzip the content of these archives into the directories data/rmot/paper and data/pred/paper, respectively. Then execute steps 1 and 3 (replacing myexp with paper) in the description of Collect your own data >> MOT experiment.
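Unpacking can be scripted with the standard library; the archive and directory names in the demo below are placeholders, not the actual download names:

```python
import os
import zipfile

def unzip_into(archive, destdir):
    """Extract a downloaded data archive into its target directory."""
    os.makedirs(destdir, exist_ok=True)
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(destdir)

# Self-contained demo: build a tiny archive, then extract it the same
# way the real archives would go into data/rmot/paper and data/pred/paper.
with zipfile.ZipFile("demo.zip", "w") as zf:
    zf.writestr("trials/trial_001.txt", "demo")
unzip_into("demo.zip", "demo_out")
```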

Data analysis

Remark: The following description of the data analysis still refers to the 1st version of the manuscript. The data and analyses are generally identical to the 2nd version, but do not yet include the Bayesian model comparison across motion structures and the alternative observer models in the MOT task, presented in Figure 3. An updated description will be provided soon.

Use the following analysis chain to recreate the aggregate data files provided in /data from the raw data in /data/rmot/paper and /data/pred/paper -- or to analyze your own data (see above). The analysis may require some understanding of the Python code, so please do not expect a direct copy-and-paste workflow.

MOT experiment

$ source
$ cd rmot/ana
  1. Set up a data set labels (DSL) file to link human data to simulation data:
  • You can use as a template.
  • Adjust exppath and subjects. Make sure simpath exists.
  • For each participant, create an entry block and enter the participant's ["speed"] (from above 'thresholding').
  • The ["sim"] entries will be filled later.
  2. Set up the file for simulations:
  • You can use as a template.
  • Adjust the import to import from your DSL file and ensure that cfg["global"]["outdir"] exists.
  • Adjust cfg["observe"]["datadir"] to point to the (Python) trials.
  • You may want to reduce reps_per_trial from 25 to 1 to speed up the simulation (optional).
  3. Prepare the simulations in
  • Adjust lines 8-11 to match your DSLs, config, and trial directory.
  4. Run observer models with different motion structure priors on the experiment trials:
  • For each participant and stimulus condition:
    • Adjust lines 6 and 7 in
    • Run $ ./
    • Enter the DSL of the simulation in your DSL file's ["sim"] entry of the respective participant and condition.
    • Warning: The simulations may take a while (we used the HMS cluster).
  • Collect all results via $ python3 (adjust line 7).
  • Copy the created file to the repository's /data/ directory.
  5. Plot the figure:
  • $ cd ../../plot
  • Adjust fname_data= to point to your data in
  • $ python3 # Plot Figure 2
$ cd ..
$ source
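Conceptually, the DSL file from step 1 is a nested mapping from participant to condition to labels. The sketch below only illustrates that idea; apart from the "speed" and "sim" fields mentioned above, every name in it is a placeholder:

```python
# Illustrative structure of a data set labels (DSL) file. Only the
# "speed" and "sim" fields are mentioned in the instructions above;
# the participant and condition names are placeholders.
DSL = {
    "participant_1": {
        "speed": 1.25,   # from the thresholding session
        "sim": {},       # filled in after running the observer models
    },
}

def set_simulation_label(dsl, participant, condition, label):
    """Record the simulation DSL for one participant/condition pair."""
    dsl[participant]["sim"][condition] = label

set_simulation_label(DSL, "participant_1", "hierarchical", "sim_2019_01_01")
```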

Prediction experiment

$ source
$ cd pred/ana
  1. Run Kalman filters with different motion priors on the experiment trials:
  • In file, direct cfg["observe"]["datadir"] to the experiment data.
  • For each participant and stimulus condition:
    • In, enter GROUNDTRUTH= and datadsl=.
    • Run $ python3 config_datarun_MarApr2019
    • Keep track of the data set labels (DSLs) linking experiment and simulation data, in a file similar to
  2. Fit all observer models (for Fig. 3):
  • Update the parameters section in, especially:
    exppath, outFilename, and import from your DSL file.
  • $ python3
  • Copy the outFilename file to the repository's /data/ directory.
  3. Bias-variance analysis (for Fig. 4):
  • Update the parameters section in, especially:
    path_exp, outfname_data, and import from your DSL file.
  • $ python3
  • Copy the outfname_data file to the repository's /data/ directory.
  4. Plot the figures:
  • $ cd ../../plot
  • Adjust fname_data= to point to your data in and
  • $ python3 # Plot Figure 3
  • $ python3 # Plot Figure 4
$ cd ..
$ source
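For readers unfamiliar with the observer models in step 1: a Kalman filter alternates a prediction under a motion prior with an update from the observation. The scalar sketch below illustrates the algorithm only and is not the repository's implementation:

```python
import numpy as np

def kalman_step(x, P, z, F=1.0, Q=0.01, H=1.0, R=0.1):
    """One predict/update cycle of a scalar Kalman filter.

    x, P : prior mean and variance;  z : new observation.
    F, Q : dynamics and process noise (the 'motion prior');
    H, R : observation model and noise.
    """
    # Predict under the motion model.
    x_pred = F * x
    P_pred = F * P * F + Q
    # Update with the observation.
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0
for z in [0.9, 1.1, 1.0]:
    x, P = kalman_step(x, P, z)
```

Different motion priors correspond to different choices of F and Q; the repository's models extend this to structured, multi-dot dynamics.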


List of directories

  • data: Experiment data and simulation/analysis results
  • pckg: Python imports of shared classes and functions
  • plot: Plotting scripts for Figures 2, 3 and 4
  • pred: Simulation and analysis scripts for the prediction task
  • rmot: Simulation and analysis scripts for the rotational MOT task


If the 'Arial' font is not installed already:

$ sudo apt-get install ttf-mscorefonts-installer
$ sudo fc-cache
$ python3 -c "import matplotlib.font_manager; matplotlib.font_manager._rebuild()"

...and if you really want it all: the stars in Figure 3 indicating significance use the font "FreeSans".
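Once the font cache has been rebuilt, Arial can be requested through matplotlib's rcParams with a fallback in case it is still missing; a minimal sketch:

```python
import matplotlib
matplotlib.use("Agg")
from matplotlib import font_manager, rcParams

# Prefer Arial, but fall back gracefully if it is still missing.
rcParams["font.family"] = "sans-serif"
rcParams["font.sans-serif"] = ["Arial", "DejaVu Sans"]

available = {f.name for f in font_manager.fontManager.ttflist}
print("Arial installed:", "Arial" in available)
```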

