Bayesian inference in Python

Date: August 16, 2011
Version: 2.1
Authors: Chris Fonnesbeck, Anand Patil, David Huard, John Salvatier
Web site:
Copyright: This document has been placed in the public domain.
License: PyMC is released under the MIT license.


Purpose

PyMC is a Python module that implements Bayesian statistical models and fitting algorithms, including Markov chain Monte Carlo. Its flexibility and extensibility make it applicable to a large suite of problems. Along with core sampling functionality, PyMC includes methods for summarizing output, plotting, goodness-of-fit and convergence diagnostics.


Features

PyMC provides functionality to make Bayesian analysis as painless as possible. Here is a short list of some of its features:

  • Fits Bayesian statistical models with Markov chain Monte Carlo and other algorithms.
  • Includes a large suite of well-documented statistical distributions.
  • Uses NumPy for numerics wherever possible.
  • Includes a module for modeling Gaussian processes.
  • Sampling loops can be paused and tuned manually, or saved and restarted later.
  • Creates summaries including tables and plots.
  • Traces can be saved to disk as plain text, Python pickles, SQLite or MySQL databases, or HDF5 archives.
  • Several convergence diagnostics are available.
  • Extensible: easily incorporates custom step methods and unusual probability distributions.
  • MCMC loops can be embedded in larger programs, and results can be analyzed with the full power of Python.

What's new in version 2

This second version of PyMC benefits from a major rewrite effort. Substantial improvements in code extensibility, user interface, and raw performance have been achieved. Most notably, the PyMC 2 series provides:

  • New flexible object model and syntax (not backward-compatible).
  • Reduced redundant computations: only relevant log-probability terms are computed, and these are cached.
  • Optimized probability distributions.
  • New adaptive blocked Metropolis step method.
  • Much more!
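The log-probability caching mentioned above can be illustrated with a toy random-walk Metropolis sampler written in plain NumPy. This is a sketch of the general technique under simplified assumptions, not PyMC's actual internals; all names here are hypothetical:

```python
import numpy as np

def metropolis(logp, x0, n_samples=1000, scale=1.0, seed=0):
    """Toy random-walk Metropolis sampler that caches the current
    log-probability so it is recomputed only when a proposal is accepted."""
    rng = np.random.default_rng(seed)
    x = x0
    logp_x = logp(x)  # cached value, reused across rejected proposals
    samples = np.empty(n_samples)
    for i in range(n_samples):
        x_new = x + scale * rng.normal()
        logp_new = logp(x_new)
        # Accept with probability min(1, p(x_new)/p(x))
        if np.log(rng.uniform()) < logp_new - logp_x:
            x, logp_x = x_new, logp_new  # update the cache on acceptance
        samples[i] = x
    return samples

# Sample from a standard normal, whose log-density is -x^2/2 up to a constant
trace = metropolis(lambda x: -0.5 * x**2, x0=0.0, n_samples=5000)
```

In PyMC the same principle is applied per-variable: only the log-probability terms affected by a proposed change are recomputed, and unchanged terms are served from cache.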


Usage

First, define your model in a file, say mymodel.py (with comments, of course!):

# Import relevant modules
import pymc
import numpy as np

# Some data
n = 5*np.ones(4,dtype=int)
x = np.array([-.86,-.3,-.05,.73])

# Priors on unknown parameters
alpha = pymc.Normal('alpha',mu=0,tau=.01)
beta = pymc.Normal('beta',mu=0,tau=.01)

# Arbitrary deterministic function of parameters
@pymc.deterministic
def theta(a=alpha, b=beta):
    """theta = logit^{-1}(a+b)"""
    return pymc.invlogit(a + b*x)

# Binomial likelihood for data
d = pymc.Binomial('d', n=n, p=theta, value=np.array([0., 1., 3., 5.]),
                  observed=True)
Save this file, then from a Python shell (or another file in the same directory), call:

import pymc
import mymodel

S = pymc.MCMC(mymodel, db='pickle')
S.sample(iter=10000, burn=5000, thin=2)

This example runs 10,000 iterations of the MCMC sampler, discards the first 5,000 as burn-in, and thins the remainder by a factor of 2, retaining 2,500 posterior samples. The samples are stored in a Python serialization (pickle) database.
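The bookkeeping behind burn-in and thinning can be sketched in plain NumPy. This is illustrative only; the random array below is a stand-in for an actual trace:

```python
import numpy as np

# A stand-in for a raw chain of iter=10000 draws
raw_chain = np.random.default_rng(0).normal(size=10000)

burn, thin = 5000, 2
# Drop the first `burn` draws, then keep every `thin`-th of the rest
retained = raw_chain[burn::thin]
print(retained.shape)  # (2500,)
```

PyMC applies this reduction for you, so the stored trace contains only the 2,500 retained draws.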


History

PyMC began development in 2003, as an effort to generalize the process of building Metropolis-Hastings samplers, with the aim of making Markov chain Monte Carlo (MCMC) more accessible to non-statisticians (particularly ecologists). The choice to develop PyMC as a Python module, rather than a standalone application, allowed the use of MCMC methods in a larger modeling framework. By 2005, PyMC was reliable enough for version 1.0 to be released to the public. A small group of regular users, most associated with the University of Georgia, provided much of the feedback necessary for the refinement of PyMC to a usable state.

In 2006, David Huard and Anand Patil joined Chris Fonnesbeck on the development team for PyMC 2.0. This iteration of the software strives for more flexibility, better performance and a better end-user experience than any previous version of PyMC.

PyMC 2.1 was released in early 2010. It contains numerous bugfixes and optimizations, as well as a few new features. This user guide is written for version 2.1.

Relationship to other packages

PyMC is one of many general-purpose MCMC packages. The most prominent among them is WinBUGS, which has made MCMC, and with it Bayesian statistics, accessible to a huge user community. Unlike PyMC, WinBUGS is a stand-alone, self-contained application. This can be an attractive feature for users without much programming experience, but others may find it constraining. A related package is JAGS, which provides a more UNIX-like implementation of the BUGS language. Other packages include Hierarchical Bayes Compiler and a number of R packages of varying scope.

It would be difficult to meaningfully benchmark PyMC against these other packages because of the unlimited variety in Bayesian probability models and flavors of the MCMC algorithm. However, it is possible to anticipate how it will perform in broad terms.

PyMC's number-crunching is done using a combination of industry-standard libraries (NumPy and the linear algebra libraries on which it depends) and hand-optimized Fortran routines. For models that are composed of variables valued as large arrays, PyMC will spend most of its time in these fast routines. In that case, it will be roughly as fast as packages written entirely in C and faster than WinBUGS. For finer-grained models containing mostly scalar variables, it will spend most of its time in coordinating Python code. In that case, despite our best efforts at optimization, PyMC will be significantly slower than packages written in C and on par with or slower than WinBUGS. However, as fine-grained models are often small and simple, the total time required for sampling is often quite reasonable despite this poorer performance.

We have chosen to spend time developing PyMC rather than using an existing package primarily because it allows us to build and efficiently fit any model we like within a full-fledged Python environment. We have emphasized extensibility throughout PyMC's design, so if it doesn't meet your needs out of the box chances are you can make it do so with a relatively small amount of code. See the testimonials page on the wiki for reasons why other users have chosen PyMC.

Getting started

This guide provides all the information needed to install PyMC, code a Bayesian statistical model, run the sampler, and save and visualize the results. In addition, it contains a list of the statistical distributions currently available. More examples of usage as well as tutorials are available from the PyMC web site.