MPI bindings for Rust

The Message Passing Interface (MPI) is a specification for a message-passing style concurrency library. Implementations of MPI are often used to structure parallel computation on High Performance Computing systems. The MPI specification describes bindings for the C programming language (and through it C++) as well as for the Fortran programming language. This library tries to bridge the gap into a more rustic world.


rsmpi requires an implementation of the C language interface that conforms to MPI-3.1 and is currently tested against the mainstream open-source implementations (such as Open MPI and MPICH).

For a reasonable chance of success, any MPI implementation you want to use with rsmpi should satisfy the following assumptions that rsmpi currently makes:

  • The implementation should provide a C compiler wrapper mpicc.
  • mpicc -show should print the full command line that is used to invoke the wrapped C compiler.
  • The result of mpicc -show contains the libraries, library search paths, and header search paths in a format understood by GCC (e.g. -lmpi, -I/usr/local/include, ...).
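As an illustration of how such a GCC-style command line can be consumed, the sketch below splits it into libraries, library search paths, and header search paths. This is a simplified, hypothetical example, not the actual build-probe-mpi code, and the sample command line is invented:

```rust
// Hypothetical sketch: split a GCC-style `mpicc -show` command line into
// libraries (-l), library search paths (-L), and header search paths (-I).
// Not the actual build-probe-mpi implementation.
fn parse_mpicc_show(line: &str) -> (Vec<String>, Vec<String>, Vec<String>) {
    let mut libs = Vec::new();
    let mut lib_dirs = Vec::new();
    let mut inc_dirs = Vec::new();
    for arg in line.split_whitespace() {
        if arg.starts_with("-l") {
            libs.push(arg[2..].to_string());
        } else if arg.starts_with("-L") {
            lib_dirs.push(arg[2..].to_string());
        } else if arg.starts_with("-I") {
            inc_dirs.push(arg[2..].to_string());
        }
    }
    (libs, lib_dirs, inc_dirs)
}

fn main() {
    // A hypothetical `mpicc -show` output line:
    let shown = "gcc -I/usr/local/include -L/usr/local/lib -lmpi";
    let (libs, lib_dirs, inc_dirs) = parse_mpicc_show(shown);
    println!("libs={:?} lib_dirs={:?} inc_dirs={:?}", libs, lib_dirs, inc_dirs);
}
```

A build script can feed the extracted pieces to Cargo (via `cargo:rustc-link-lib` and friends), which is the general role build-probe-mpi plays.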

Since the MPI standard leaves some details of the C API unspecified (e.g. whether to implement certain constants and even functions using preprocessor macros or native C constructs, the details of most types, ...), rsmpi takes a two-step approach to generating functional low-level bindings.

First, it uses a thin static library written in C (see rsmpi.h and rsmpi.c) that captures the underspecified identifiers and re-exports them with a fixed C API. This library is built using the gcc crate.

Second, to generate FFI definitions tailored to each MPI implementation, rsmpi uses rust-bindgen which needs libclang. See the bindgen project page for more information.
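To make the two steps concrete, here is a purely illustrative, self-contained sketch of the pattern; the real shim lives in rsmpi.h/rsmpi.c, and all names and values below are hypothetical:

```rust
// Illustrative sketch only. MPI headers may define identifiers such as
// MPI_COMM_WORLD as preprocessor macros, which bindgen cannot see. A C shim
// can capture each macro behind a real symbol with a fixed name, e.g.:
//
//     /* hypothetical shim, in the spirit of rsmpi.c */
//     const MPI_Comm RSMPI_COMM_WORLD = MPI_COMM_WORLD;
//
// The generated FFI definitions then refer to the fixed name, for example:
//
//     extern "C" {
//         static RSMPI_COMM_WORLD: MPI_Comm;
//     }

#[allow(non_camel_case_types)]
type MPI_Comm = i32; // the real type is implementation-defined

// Stand-in constant so this sketch runs without an MPI installation;
// the value 0 is invented and carries no meaning.
const RSMPI_COMM_WORLD: MPI_Comm = 0;

fn main() {
    // Higher-level code can use the fixed symbol regardless of whether the
    // underlying MPI implementation used a macro or a native C construct.
    println!("world handle: {}", RSMPI_COMM_WORLD);
}
```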

Furthermore, rsmpi uses the libffi crate, which builds the native libffi library and therefore depends on certain build tools. See the libffi project page for more information.


Add the mpi crate as a dependency in your Cargo.toml:

[dependencies]
mpi = "0.5"

Then use it in your program like this:

extern crate mpi;

use mpi::request::WaitGuard;
use mpi::traits::*;

fn main() {
    let universe = mpi::initialize().unwrap();
    let world =;
    let size = world.size();
    let rank = world.rank();

    let next_rank = if rank + 1 < size { rank + 1 } else { 0 };
    let previous_rank = if rank > 0 { rank - 1 } else { size - 1 };

    let msg = vec![rank, 2 * rank, 4 * rank];
    mpi::request::scope(|scope| {
        let _sreq = WaitGuard::from(
                .immediate_send(scope, &msg[..]),
        );
        let (msg, status) = world.any_process().receive_vec();

            "Process {} got message {:?}.\nStatus is: {:?}",
            rank, msg, status
        let x = status.source_rank();
        assert_eq!(x, previous_rank);
        assert_eq!(vec![x, 2 * x, 4 * x], msg);

        let root_rank = 0;
        let root_process = world.process_at_rank(root_rank);

        let mut a;
        if world.rank() == root_rank {
            a = vec![2, 4, 8, 16];
            println!("Root broadcasting value: {:?}.", &a[..]);
        } else {
            a = vec![0; 4];
        root_process.broadcast_into(&mut a[..]);
        println!("Rank {} received value: {:?}.", world.rank(), &a[..]);
        assert_eq!(&a[..], &[2, 4, 8, 16]);
    });


The bindings follow the MPI 3.1 specification.

Currently supported:

  • Groups, Contexts, Communicators:
    • Group and (Intra-)Communicator management from section 6 is mostly complete.
    • no Inter-Communicators
    • no process topologies
  • Point to point communication:
    • standard, buffered, synchronous and ready mode send in blocking and non-blocking variants
    • receive in blocking and non-blocking variants
    • send-receive
    • probe
    • matched probe/receive
  • Collective communication:
    • barrier
    • broadcast
    • (all) gather
    • scatter
    • all to all
    • varying counts operations
    • reductions/scans
    • blocking and non-blocking variants
  • Datatypes: Bridging between Rust types and MPI basic types as well as custom MPI datatypes which can act as views into buffers.

Not supported (yet):

  • Process management
  • One-sided communication (RMA)
  • MPI parallel I/O
  • A million small things


Every public item of rsmpi should at least have a short piece of documentation associated with it. Documentation can be generated via:

cargo doc

Documentation for the latest version of the crate released to is hosted on GitHub Pages.


See files in examples/. These examples also act as integration tests.


Licensed under either of

  • Apache License, Version 2.0 (LICENSE-APACHE)
  • MIT license (LICENSE-MIT)

at your option.


Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.
