
MPI interface for the Julia language


This is a basic Julia wrapper for the Message Passing Interface (MPI), a portable message-passing system. Inspiration is taken from mpi4py, although we generally follow the C rather than the C++ MPI API. (The C++ MPI API is deprecated.)


Unix systems (OSX and Linux)

CMake is used to piece together the MPI wrapper. Currently a shared-library MPI installation for C and Fortran is required (tested with Open MPI and MPICH). To install MPI.jl using the Julia packaging system, run

Pkg.add("MPI")

which will build and install the wrapper into $HOME/.julia/vX.Y/MPI.

Platform specific notes:

  • If you are trying to build on OSX with Homebrew, the necessary Fortran headers are not included in the OpenMPI bottle. To work around this you can build OpenMPI from source: brew install --build-from-source openmpi

Overriding compilers

Currently, MPI.jl relies on CMake to build a few C/Fortran source files needed by the library. Unfortunately, CMake does not follow the PATH variable when determining which compiler to use, which can cause problems if the compiler you want to use does not reside in a standard directory such as /usr/bin. You can override CMake's compiler detection by specifying the environment variables CC, CXX, and FC on the command line. The following example forces the compilation process to use the compilers found in the path:

CC=$(which gcc) CXX=$(which g++) FC=$(which gfortran) julia -e 'Pkg.add("MPI")'

Overriding the auto-detected MPI version

It may be that CMake selects the wrong MPI version, or that CMake fails to correctly detect and configure your MPI implementation. You can override CMake's mechanism by setting the environment variables JULIA_MPI_C_LIBRARIES, JULIA_MPI_Fortran_INCLUDE_PATH, and JULIA_MPI_Fortran_LIBRARIES.

This will set MPI_C_LIBRARIES, MPI_Fortran_INCLUDE_PATH, and MPI_Fortran_LIBRARIES when calling CMake, as described in its FindMPI module. You can set these variables either in your shell startup file or, e.g., via your ~/.juliarc file. Here is an example:

ENV["JULIA_MPI_C_LIBRARIES"] = "-L/opt/local/lib/openmpi-gcc5 -lmpi"
ENV["JULIA_MPI_Fortran_INCLUDE_PATH"] = "-I/opt/local/include"
ENV["JULIA_MPI_Fortran_LIBRARIES"] = "-L/opt/local/lib/openmpi-gcc5 -lmpi_usempif08 -lmpi_mpifh -lmpi"

You can set other configuration variables as well (by adding a JULIA_ prefix); the full list of variables currently supported is



Windows

You need to install the Microsoft MPI runtime on your system (the SDK is not required). Then simply add the MPI.jl package with

Pkg.add("MPI")

If you would like to wrap an MPI function on Windows, keep in mind you may need to add its signature to src/win_mpiconstants.jl.

Usage: MPI-only mode

To run a Julia script with MPI, first make sure that using MPI or import MPI is included at the top of your script. You should then be able to run the MPI job as expected, e.g., with

mpirun -np 3 julia 01-hello.jl
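A script along these lines illustrates the pattern (a sketch; the shipped examples/01-hello.jl may differ in detail):

```julia
# hello.jl - minimal MPI program (sketch)
import MPI

MPI.Init()
comm = MPI.COMM_WORLD
# each rank prints its identity
println("Hello world, I am $(MPI.Comm_rank(comm)) of $(MPI.Comm_size(comm))")
MPI.Barrier(comm)
MPI.Finalize()
```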


In Julia code building on this package, it may happen that you want to run MPI cleanup functions in a finalizer. This makes it impossible to manually call MPI.Finalize(), since the Julia finalizers may run after this call. To solve this, a C atexit hook to run MPI.Finalize() can be set using MPI.finalize_atexit(). It is possible to check if this function was called by checking the global Ref MPI.FINALIZE_ATEXIT.
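A sketch of this pattern, using the names listed in this README:

```julia
import MPI

MPI.Init()
MPI.finalize_atexit()          # register a C atexit hook that calls MPI.Finalize()
@assert MPI.FINALIZE_ATEXIT[]  # the global Ref records that the hook was set

# ... objects whose finalizers perform MPI cleanup can now be collected safely;
# no explicit call to MPI.Finalize() is needed before the process exits
```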

Usage: MPI and Julia parallel constructs together

To make MPI calls from a Julia cluster, use MPIManager, a cluster manager that starts the Julia workers using mpirun.

Currently MPIManager requires Julia 0.7 or later (it relies on the Distributed standard library). It has three modes of operation:

  • Only worker processes execute MPI code. The Julia master process executes outside of and is not part of the MPI cluster. Free bi-directional TCP/IP connectivity is required between all processes.

  • All processes (including Julia master) are part of both the MPI as well as Julia cluster. Free bi-directional TCP/IP connectivity is required between all processes.

  • All processes are part of both the MPI as well as Julia cluster. MPI is used as the transport for Julia messages. This is useful in environments that do not allow TCP/IP connectivity between worker processes.

MPIManager (only workers execute MPI code)

An example is provided in examples/05-juliacman.jl. The Julia master process is NOT part of the MPI cluster. The main script should be launched directly; MPIManager internally calls mpirun to launch the Julia/MPI workers. All workers started via MPIManager will be part of the MPI cluster.

MPIManager(;np=Sys.CPU_THREADS, mpi_cmd=false, launch_timeout=60.0)

If not specified, mpi_cmd defaults to mpirun -np $np. stdout from the launched workers is redirected back to the Julia session calling addprocs via a TCP connection; the workers must therefore be able to freely connect via TCP to the host session. The following lines are typically required on the Julia master process to support both Julia and MPI:

# to import MPIManager
using MPI

# need to also import Distributed to use addprocs()
using Distributed

# specify the number of MPI workers, launch command, etc. (np value is illustrative)
manager = MPIManager(np=4)

# start MPI workers and add them as Julia workers too
addprocs(manager)
To execute code with MPI calls on all workers, use @mpi_do.

@mpi_do manager expr executes expr on all processes that are part of manager.

For example, @mpi_do manager (comm=MPI.COMM_WORLD; println("Hello world, I am $(MPI.Comm_rank(comm)) of $(MPI.Comm_size(comm))")) executes on all MPI workers belonging to manager.

examples/05-juliacman.jl is a simple example of calling MPI functions on all workers, interspersed with Julia parallel methods. cd to the examples directory and run julia 05-juliacman.jl.

A single instance of MPIManager can be used only once to launch MPI workers (via addprocs). To create multiple sets of MPI clusters, use separate, distinct MPIManager objects.

procs(manager::MPIManager) returns a list of Julia pids belonging to manager.
mpiprocs(manager::MPIManager) returns a list of MPI ranks belonging to manager.

The j2mpi and mpi2j fields of MPIManager are associative collections mapping Julia pids to MPI ranks and vice versa.
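Putting the pieces above together, a typical session might look like this (a sketch; the np value is illustrative):

```julia
using MPI          # provides MPIManager and @mpi_do
using Distributed  # provides addprocs and procs

manager = MPIManager(np=4)  # will launch workers via `mpirun -np 4 ...` by default
addprocs(manager)           # start the MPI workers and add them as Julia workers

# run MPI code on every worker belonging to the manager
@mpi_do manager begin
    comm = MPI.COMM_WORLD
    println("rank $(MPI.Comm_rank(comm)) of $(MPI.Comm_size(comm))")
end

procs(manager)     # Julia pids belonging to manager
mpiprocs(manager)  # MPI ranks belonging to manager
```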


MPIManager (TCP/IP transport - all processes execute MPI code)

  • Useful in environments that do not allow TCP connections outside of the cluster
  • An example is in examples/06-cman-transport.jl

mpirun -np 5 julia 06-cman-transport.jl TCP

This launches a total of 5 processes: MPI rank 0 is Julia pid 1, MPI rank 1 is Julia pid 2, and so on.

The program must call MPI.start(TCP_TRANSPORT_ALL) with argument TCP_TRANSPORT_ALL. On MPI rank 0 it returns a manager which can be used with @mpi_do. On the other processes (i.e., the workers) the function does not return.
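The rank-0/worker split described above can be sketched as follows (the real examples/06-cman-transport.jl may differ):

```julia
# sketch of the TCP transport pattern
using MPI

# On MPI rank 0 this returns a manager; on all other ranks it does not return
manager = MPI.start(TCP_TRANSPORT_ALL)

# Only rank 0 reaches this point and drives the workers
@mpi_do manager println("rank $(MPI.Comm_rank(MPI.COMM_WORLD)) reporting")
```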


MPIManager (MPI transport - all processes execute MPI code)

MPI.start must be called with option MPI_TRANSPORT_ALL to use MPI as the transport. mpirun -np 5 julia 06-cman-transport.jl MPI will run the example using MPI as the transport.

Julia MPI-only interface


Julia interfaces to the Fortran versions of the MPI functions. Since the C and Fortran communicators are different, if a C communicator is required (e.g., to interface with a C library), this can be achieved with the Fortran to C communicator conversion:

juliacomm = MPI.COMM_WORLD
ccomm = MPI.CComm(juliacomm)
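The resulting ccomm handle can then be passed to C code, for example via ccall (the library and function names below are hypothetical):

```julia
juliacomm = MPI.COMM_WORLD
ccomm = MPI.CComm(juliacomm)

# hypothetical C function: void solver_init(MPI_Comm comm);
ccall((:solver_init, "libsolver"), Cvoid, (MPI.CComm,), ccomm)
```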

Currently wrapped MPI functions

Convention: MPI_Fun => MPI.Fun

Constants like MPI_SUM are wrapped as MPI.SUM. Note also that arbitrary Julia functions f(x,y) can be passed as reduction operations to the MPI Allreduce and Reduce functions.
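For example, a reduction with both a built-in operator and an arbitrary Julia function might look like this (a sketch using the allocating Allreduce(sendbuf, op, comm) form):

```julia
import MPI

MPI.Init()
comm = MPI.COMM_WORLD

vals = Float64[MPI.Comm_rank(comm)]

# built-in reduction operator
total = MPI.Allreduce(vals, MPI.SUM, comm)

# arbitrary Julia function as the reduction operation
largest = MPI.Allreduce(vals, (x, y) -> max(x, y), comm)

MPI.Finalize()
```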

Administrative functions

Julia Function (assuming import MPI) Fortran Function
MPI.Abort MPI_Abort
MPI.Comm_dup MPI_Comm_dup
MPI.Comm_free MPI_Comm_free
MPI.Comm_get_parent MPI_Comm_get_parent
MPI.Comm_rank MPI_Comm_rank
MPI.Comm_size MPI_Comm_size
MPI.Comm_spawn MPI_Comm_spawn
MPI.Finalize MPI_Finalize
MPI.Finalized MPI_Finalized
MPI.Get_address MPI_Get_address
MPI.Init MPI_Init
MPI.Initialized MPI_Initialized
MPI.Intercomm_merge MPI_Intercomm_merge
MPI.mpitype MPI_Type_create_struct and MPI_Type_commit (see: mpitype note)

mpitype note: This is not strictly a wrapper for MPI_Type_create_struct and MPI_Type_commit; it is also an accessor for previously created types.

Point-to-point communication

Julia Function (assuming import MPI) Fortran Function
MPI.Cancel! MPI_Cancel
MPI.Get_count MPI_Get_count
MPI.Iprobe MPI_Iprobe
MPI.Irecv! MPI_Irecv
MPI.Isend MPI_Isend
MPI.Probe MPI_Probe
MPI.Recv! MPI_Recv
MPI.Send MPI_Send
MPI.Test! MPI_Test
MPI.Testall! MPI_Testall
MPI.Testany! MPI_Testany
MPI.Testsome! MPI_Testsome
MPI.Wait! MPI_Wait
MPI.Waitall! MPI_Waitall
MPI.Waitany! MPI_Waitany
MPI.Waitsome! MPI_Waitsome

Collective communication (assuming import MPI)

| Non-Allocating Julia Function | Allocating Julia Function | Fortran Function | Supports MPI_IN_PLACE |
| --- | --- | --- | --- |
| MPI.Allgather! | MPI.Allgather | MPI_Allgather | via MPI.IN_PLACE |
| MPI.Allgatherv! | MPI.Allgatherv | MPI_Allgatherv | via MPI.IN_PLACE |
| MPI.Allreduce! | MPI.Allreduce | MPI_Allreduce | via MPI.IN_PLACE |
| MPI.Alltoall! | MPI.Alltoall | MPI_Alltoall | via MPI.IN_PLACE |
| MPI.Alltoallv! | MPI.Alltoallv | MPI_Alltoallv | via MPI.IN_PLACE |
| -- | MPI.Barrier | MPI_Barrier | -- |
| MPI.Bcast! | MPI.Bcast! | MPI_Bcast | -- |
| -- | MPI.Exscan | MPI_Exscan | -- |
| MPI.Gather! | MPI.Gather | MPI_Gather | Gather_in_place! |
| MPI.Gatherv! | MPI.Gatherv | MPI_Gatherv | Gatherv_in_place! |
| MPI.Reduce! | MPI.Reduce | MPI_Reduce | Reduce_in_place! |
| MPI.Scan | MPI.Scan | MPI_Scan | missing |
| MPI.Scatter! | MPI.Scatter | MPI_Scatter | Scatter_in_place! |
| MPI.Scatterv! | MPI.Scatterv | MPI_Scatterv | Scatterv_in_place! |

The non-allocating Julia functions map directly to the corresponding MPI operations, after asserting that the size of the output buffer is sufficient to store the result.

The allocating Julia functions allocate an output buffer and then call the non-allocating method.

All-to-all collective communications support in-place operation by passing MPI.IN_PLACE with the same syntax documented by MPI. One-to-all communications support it through the *_in_place! functions, which call the underlying MPI function with the correct arguments on the root and non-root processes.
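As an illustration of the two styles (a sketch; the exact argument order for MPI.IN_PLACE and the *_in_place! signatures are assumptions here, so consult the package tests for the precise forms):

```julia
import MPI

MPI.Init()
comm = MPI.COMM_WORLD

# All-to-all collective: pass MPI.IN_PLACE as the send buffer (MPI-style syntax)
buf = Float64[MPI.Comm_rank(comm)]
MPI.Allreduce!(MPI.IN_PLACE, buf, length(buf), MPI.SUM, comm)  # assumed signature

# One-to-all collective: call the dedicated *_in_place! variant on all ranks
MPI.Reduce_in_place!(buf, length(buf), MPI.SUM, 0, comm)       # assumed signature

MPI.Finalize()
```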

One-sided communication

Julia Function (assuming import MPI) Fortran Function
MPI.Win_create MPI_Win_create
MPI.Win_create_dynamic MPI_Win_create_dynamic
MPI.Win_allocate_shared MPI_Win_allocate_shared
MPI.Win_shared_query MPI_Win_shared_query
MPI.Win_attach MPI_Win_attach
MPI.Win_detach MPI_Win_detach
MPI.Win_fence MPI_Win_fence
MPI.Win_flush MPI_Win_flush
MPI.Win_free MPI_Win_free
MPI.Win_sync MPI_Win_sync
MPI.Win_lock MPI_Win_lock
MPI.Win_unlock MPI_Win_unlock
MPI.Fetch_and_op MPI_Fetch_and_op
MPI.Accumulate MPI_Accumulate
MPI.Get_accumulate MPI_Get_accumulate