MPI interface for the Julia language
This is a basic Julia wrapper for the Message Passing Interface (MPI), a portable message-passing system. Inspiration is taken from mpi4py, although we generally follow the C rather than the C++ MPI API. (The C++ MPI API is deprecated.)
CMake is used to piece together the MPI wrapper. Currently a shared-library MPI installation for C and Fortran is required (tested with Open MPI and MPICH). MPI.jl is installed through the Julia packaging system, which will build and install the wrapper into your Julia package directory.
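The install commands themselves are the standard Pkg calls, shown here as a sketch (the exact invocation may vary with the Julia version):

Pkg.update()       # refresh the package metadata
Pkg.add("MPI")     # fetch MPI.jl and run its CMake-based build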
Usage: MPI-Only mode
To run a Julia script with MPI, first make sure that using MPI or import MPI is included at the top of your script. You should then be able to run the MPI job as expected, e.g., with
mpirun -np 3 julia 01-hello.jl
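For reference, a minimal script of this kind looks roughly as follows (a sketch along the lines of examples/01-hello.jl; the exact contents of that file may differ):

import MPI

MPI.Init()                      # set up the MPI environment
comm = MPI.COMM_WORLD           # communicator containing all ranks
println("Hello world, I am $(MPI.Comm_rank(comm)) of $(MPI.Comm_size(comm))")
MPI.Barrier(comm)               # wait for all ranks before shutting down
MPI.Finalize()                  # tear down MPI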
Usage: MPI and Julia parallel constructs together
Making MPI calls from a Julia cluster requires the use of MPIManager, a cluster manager that starts the Julia workers using mpirun.
Currently MPIManager only works with Julia 0.4. It has three modes of operation:
- Only the worker processes execute MPI code. The Julia master process executes outside of, and is not part of, the MPI cluster. Free bi-directional TCP/IP connectivity is required between all processes.
- All processes (including the Julia master) are part of both the MPI and the Julia cluster. Free bi-directional TCP/IP connectivity is required between all processes.
- All processes are part of both the MPI and the Julia cluster, and MPI is used as the transport for Julia messages. This is useful in environments which do not allow TCP/IP connectivity between worker processes.
MPIManager (only workers execute MPI code)
An example is provided in examples/05-juliacman.jl.
The Julia master process is NOT part of the MPI cluster. The main script should be launched directly; MPIManager internally calls mpirun to launch the Julia/MPI workers.
All the workers started via MPIManager will be part of the MPI cluster.
MPIManager(;np=Sys.CPU_CORES, mpi_cmd=false, launch_timeout=60.0)
If not specified, mpi_cmd defaults to mpirun -np $np.
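For example, a manager for 8 workers with a longer launch timeout could be constructed as follows (a sketch; only the parameters named above are used):

using MPI

# mpi_cmd could also be passed here to customize the launch command;
# its exact form is not shown in this document.
manager = MPIManager(np=8, launch_timeout=120.0)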
STDOUT from the launched workers is redirected back to the Julia session calling addprocs via a TCP connection. Thus the workers must be able to freely connect via TCP to the host session.
The following lines will typically be required on the Julia master process to support both Julia and MPI:
# to import MPIManager
using MPI

# specify number of mpi workers, launch cmd, etc.
manager = MPIManager(np=4)

# start mpi workers and add them as julia workers too.
addprocs(manager)
To execute code with MPI calls on all workers, use @mpi_do.
@mpi_do manager expr executes expr on all processes that are part of manager.
@mpi_do manager (comm=MPI.COMM_WORLD; println("Hello world, I am $(MPI.Comm_rank(comm)) of $(MPI.Comm_size(comm))"))
executes on all MPI workers belonging to manager.
examples/05-juliacman.jl is a simple example of calling MPI functions on all workers interspersed with Julia parallel methods.
cd to the examples directory and run julia 05-juliacman.jl.
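The general shape of such a program (a sketch only, not the actual contents of 05-juliacman.jl) might be:

using MPI

manager = MPIManager(np=4)
addprocs(manager)             # the MPI ranks become Julia workers

# Julia parallel constructs: pmap distributes work over the Julia workers.
println(pmap(x -> x*x, 1:8))

# MPI calls: executed on the MPI workers via @mpi_do.
@mpi_do manager begin
    comm = MPI.COMM_WORLD
    println("Hello from MPI rank $(MPI.Comm_rank(comm)) of $(MPI.Comm_size(comm))")
end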
A single instance of MPIManager can be used only once to launch MPI workers (via addprocs).
To create multiple sets of MPI clusters, use separate, distinct MPIManager objects.
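For instance (a sketch; assumes enough free slots for both launches):

manager_a = MPIManager(np=2)
manager_b = MPIManager(np=2)
addprocs(manager_a)   # first, independent MPI cluster
addprocs(manager_b)   # second, independent MPI cluster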
procs(manager::MPIManager) returns a list of Julia pids belonging to manager.
mpiprocs(manager::MPIManager) returns a list of MPI ranks belonging to manager.
MPIManager also maintains associative collections mapping Julia pids to MPI ranks and vice versa.
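A brief usage sketch of these queries:

julia_pids = procs(manager)      # Julia worker pids managed by this manager
mpi_ranks  = mpiprocs(manager)   # the corresponding MPI ranks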
MPIManager (TCP/IP transport - all processes execute MPI code)
- Useful in environments which do not allow TCP connections outside of the cluster
- An example is in examples/06-cman-transport.jl:
mpirun -np 5 julia 06-cman-transport.jl TCP
This launches a total of 5 processes: MPI rank 0 is Julia pid 1, MPI rank 1 is Julia pid 2, and so on.
The program must call MPI.start with argument TCP_TRANSPORT_ALL.
On MPI rank 0, it returns a manager which can be used with @mpi_do.
On the other processes (i.e., the workers) the function does not return.
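A rough sketch of how such a script is structured (not necessarily identical to examples/06-cman-transport.jl; names follow this document):

using MPI

# MPI.start returns only on MPI rank 0; on the other ranks it runs the
# worker loop and does not return.
manager = MPI.start(TCP_TRANSPORT_ALL)

# From here on we are on rank 0, acting as the Julia master.
@mpi_do manager begin
    comm = MPI.COMM_WORLD
    println("rank $(MPI.Comm_rank(comm)) of $(MPI.Comm_size(comm))")
end

# Orderly shutdown of the worker ranks is elided here; see the example file.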
MPIManager (MPI transport - all processes execute MPI code)
MPI.start must be called with option MPI_TRANSPORT_ALL to use MPI as transport.
mpirun -np 5 julia 06-cman-transport.jl MPI will run the example using MPI as transport.
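Relative to the TCP sketch above, the only change in the script would be the transport argument (again, names follow this document):

manager = MPI.start(MPI_TRANSPORT_ALL)   # Julia messages travel over MPI instead of TCP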