HDDM is a Python module that implements hierarchical Bayesian parameter estimation of Drift Diffusion Models (via PyMC).



Authors: Thomas V. Wiecki, Imri Sofer, Michael J. Frank
Web site:
Mailing list:
Copyright: This document has been placed in the public domain.
License: HDDM is released under the BSD 2-clause license.
Version: 0.5beta2


HDDM is a Python toolbox for hierarchical Bayesian parameter estimation of the Drift Diffusion Model (via PyMC). Drift Diffusion Models are widely used in psychology and cognitive neuroscience to study decision making.
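The Drift Diffusion Model describes a decision as noisy evidence accumulation toward one of two boundaries. As a purely illustrative sketch (this is not HDDM code, and HDDM's likelihood is analytic rather than simulation-based), the generative process can be simulated with a simple Euler random walk; the parameter names v (drift rate), a (boundary separation), z (relative starting point), and t (non-decision time) follow the usual DDM convention:

```python
import numpy as np

def simulate_ddm(v, a, z, t, n_trials=1000, dt=1e-3, sigma=1.0, seed=0):
    """Simulate first-passage times of a drift diffusion process.

    v: drift rate, a: boundary separation, z: relative starting point
    (0 < z < 1), t: non-decision time. Returns response times and
    choices (1 = upper boundary, 0 = lower boundary).
    """
    rng = np.random.default_rng(seed)
    rts = np.empty(n_trials)
    choices = np.empty(n_trials, dtype=int)
    for i in range(n_trials):
        x = z * a           # evidence starts between the boundaries
        elapsed = 0.0
        while 0.0 < x < a:  # accumulate noisy evidence until a boundary is hit
            x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            elapsed += dt
        rts[i] = elapsed + t      # add non-decision time
        choices[i] = int(x >= a)  # which boundary was crossed
    return rts, choices

rts, choices = simulate_ddm(v=0.5, a=2.0, z=0.5, t=0.3, n_trials=200)
```

With a positive drift rate, most trials terminate at the upper boundary. HDDM itself evaluates the likelihood with the fast analytic approximations of Navarro & Fuss (2009) instead of simulating trajectories.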


Features

  • Uses hierarchical Bayesian estimation (via PyMC) of DDM parameters to allow simultaneous estimation of subject and group parameters, where individual subjects are assumed to be drawn from a group distribution. HDDM should thus produce better estimates when fewer RT values are measured, compared to methods that use maximum likelihood estimation for individual subjects (e.g., DMAT or fast-dm).
  • Heavily optimized likelihood functions for speed (Navarro & Fuss, 2009).
  • Flexible creation of complex models tailored to specific hypotheses (e.g. estimation of separate drift-rates for different task conditions; or predicted changes in model parameters as a function of other indicators like brain activity).
  • Easy specification of models via a configuration file fosters the exchange of models and research results.
  • Built-in Bayesian hypothesis testing and several convergence and goodness-of-fit diagnostics.
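Bayesian hypothesis testing in this framework typically amounts to comparing posterior samples directly, e.g. asking what fraction of posterior samples favor one condition's drift rate over another's. A minimal, self-contained sketch, using synthetic traces that stand in for real MCMC output (the variable names here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic posterior traces standing in for MCMC output, e.g. the
# drift rate estimated separately for easy vs. hard trials.
v_easy = rng.normal(loc=0.8, scale=0.1, size=5000)
v_hard = rng.normal(loc=0.4, scale=0.1, size=5000)

# Posterior probability that the easy-condition drift rate exceeds
# the hard-condition one: the fraction of samples where it does.
p_greater = (v_easy > v_hard).mean()
print(f"P(v_easy > v_hard) = {p_greater:.3f}")
```

The same sample-counting logic applies to real traces drawn from a fitted model's posterior.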


The following is a minimal Python script to load data, run a model, and examine its parameters and fit.

import hddm

# Load data from csv file into a NumPy structured array
data = hddm.load_csv('simple_difficulty.csv')

# Create an HDDM model object (drift rate 'v' depends on the difficulty condition)
model = hddm.HDDM(data, depends_on={'v':'difficulty'})

# Start MCMC sampling
model.sample(10000, burn=5000)

# Print fitted parameters and other model statistics
model.print_stats()

# Plot posterior distributions and theoretical RT distributions
model.plot_posteriors()
model.plot_posterior_predictive()

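One of the convergence diagnostics commonly used with MCMC output is the Gelman-Rubin R-hat statistic, which compares within-chain and between-chain variance across several independently sampled chains (values near 1 indicate convergence). A self-contained sketch of the computation, using synthetic chains in place of real sampler output:

```python
import numpy as np

def gelman_rubin(chains):
    """Compute the Gelman-Rubin R-hat for a (n_chains, n_samples) array."""
    chains = np.asarray(chains)
    n = chains.shape[1]
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()  # mean within-chain variance
    B = n * chain_means.var(ddof=1)        # between-chain variance
    var_hat = (n - 1) / n * W + B / n      # pooled variance estimate
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(2)

# Four well-mixed chains drawn from the same distribution: R-hat close to 1.
good = rng.normal(0.0, 1.0, size=(4, 2000))
print(gelman_rubin(good))

# Chains stuck around different means: R-hat well above 1.
bad = good + np.arange(4)[:, None]
print(gelman_rubin(bad))
```

In practice one would fit the same model several times and apply this diagnostic to the resulting traces of each parameter.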


Installation

The easiest way to get a scientific Python environment is to download and install Anaconda or the Enthought Python Distribution (EPD), which is free for academic use.

Windows

We recommend using pip to download and install HDDM. The easiest way to install pip is via easy_install. Start the Windows command shell (cmd.exe) and type

easy_install pip

Then install kabuki and HDDM:

pip install kabuki
pip install hddm

Linux (Debian based, such as Ubuntu)

Most of HDDM's dependencies are available from your distribution's package repository; you can install them by typing

apt-get install python python-dev python-numpy python-scipy python-matplotlib cython python-pip gfortran liblapack-dev

which requires sudo rights.

Optional dependencies can be installed via

apt-get install python-wxgtk2.8 python-traitsui

Then install kabuki and HDDM:

sudo pip install pandas
sudo pip install pymc
sudo pip install kabuki
sudo pip install hddm


OS X

We recommend installing the SciPy Superpack maintained by Chris Fonnesbeck.

The Superpack requires you to install XCode, which apparently no longer bundles gcc (which HDDM requires). This repository provides some appropriate installers:

Then install kabuki and HDDM:

sudo pip install kabuki
sudo pip install hddm

How to cite

If HDDM was used in your research, please cite the following:

Wiecki, T. V., Sofer, I. and Frank, M. J. Hierarchical Bayesian estimation of the drift diffusion model: quantitative comparison with maximum likelihood. Program No. 494.13/CCC30. 2012 Neuroscience Meeting Planner. New Orleans, LA: Society for Neuroscience, 2012.

Getting started

Check out the tutorial on how to get started. Further information can be found in howto and the documentation.

Join our low-traffic mailing list.
