mpi4py_map provides a simple map() interface to mpi4py that allows easy parallelization of function evaluations over sequential input.


Provides a map() interface to mpi4py.

License: MIT

Copyright (c) 2012 Thomas Wiecki (thomas.wiecki[at]

Deprecation warning

You probably do not want to use this, as there are much better alternatives such as IPython parallel; I describe my current setup in a blog post.

The code might be instructive on how to use MPI4Py however.


MPI (Message Passing Interface) is a library that allows processes running on different hosts to communicate and exchange information, which makes it ideally suited for parallel computations on a cluster. mpi4py provides a convenient Python wrapper for this library. However, it is still a fairly bare exposition of MPI functions, requiring you to code most of the interprocess communication by hand. Other parallel Python libraries, such as the multiprocessing package (which only allows process management on a single host), have simpler interfaces. For example, you can create a worker pool and use the map() function to easily parallelize evaluation of a function over a sequence (e.g. map(lambda x: x**2, range(500)) squares the list in parallel).

mpi4py_map brings this simple but powerful map() functionality to mpi4py. It takes care of assigning the jobs to the workers, queueing if all workers are busy and cleanly shutting down all workers after the job is complete.


  • Easy, transparent parallelization of your Python code on a PC or cluster.
  • Sequence items do not have to be serializable.



Create a Python file, e.g.:

from mpi4py_map import map
print map(lambda x, y: x**y, [1,2,3,4], 2) # square the sequence

Then call your program with mpirun, e.g.:

mpirun -n 4