StaticMPI

An MPICH-compatible interface for calling MPI from StaticCompiler.jl compile_executable'd standalone Julia executables, building upon the StaticTools.jl approach.

For all purposes other than compiling standalone executables, see MPI.jl instead.

Currently, only MPICH-style implementations of libmpi are supported. A relatively complete low-level interface is provided in the Mpich submodule, while the more Julian top-level implementation is still relatively incomplete and under active development.

Note that the functions herein insert LLVM IR that directly calls functions from libmpi. As such, they will only work when linked against a valid libmpi during compilation. If you want to use them interactively in the REPL (e.g., for debugging), a valid libmpi must first have been dlopened with mode RTLD_GLOBAL. By default, StaticMPI will attempt to automatically dlopen (on init) the libmpi from MPICH_jll.jl. If you want to use a different libmpi interactively, just dlopen it before using StaticMPI:

julia> using Libdl

julia> dlopen("/opt/local/lib/mpich-mp/libmpi", RTLD_GLOBAL)
Ptr{Nothing} @0x00007fd5c4c40f40

julia> using StaticMPI

julia> MPI_Init() == MPI_SUCCESS
true
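
Conversely, if the libmpi provided by MPICH_jll is all you need, no manual dlopen is required, since StaticMPI attempts to load it automatically on init (assuming that automatic load succeeds on your system):

julia> using StaticMPI

julia> MPI_Init() == MPI_SUCCESS
true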

If any of the MPI functions herein are ever called without libmpi having been linked or dlopened one way or another, expect segfaults!
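
If you are ever unsure whether a libmpi has actually been loaded into your interactive session, one purely illustrative sanity check is to search Libdl's list of loaded shared libraries (the exact library name and path will vary by platform and installation):

using StaticMPI, Libdl
# List any MPI-looking shared libraries currently loaded in this session;
# an empty result means the MPI functions here would segfault if called.
filter(contains("mpi"), dllist())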

Examples

Hello World:

julia> using StaticCompiler, StaticTools, StaticMPI

julia> function mpihello(argc, argv)
           MPI_Init(argc, argv)

           comm = MPI_COMM_WORLD
           world_size, world_rank = MPI_Comm_size(comm), MPI_Comm_rank(comm)

           printf((c"Hello from ", world_rank, c" of ", world_size, c" processors!\n"))
           MPI_Finalize()
       end
mpihello (generic function with 1 method)

julia> compile_executable(mpihello, (Int, Ptr{Ptr{UInt8}}), "./";
           cflags=`-lmpi -L/opt/local/lib/mpich-mp/`
           # -lmpi instructs compiler to link against libmpi.so / libmpi.dylib
           # -L/opt/local/lib/mpich-mp/ provides path to my local MPICH installation where libmpi can be found
       )

ld: warning: object file (./mpihello.o) was built for newer OSX version (12.0) than being linked (10.13)
"/Users/me/code/StaticTools.jl/mpihello"

shell> mpiexec -np 4 ./mpihello
Hello from 1 of 4 processors!
Hello from 3 of 4 processors!
Hello from 2 of 4 processors!
Hello from 0 of 4 processors!

Send and receive:

using StaticCompiler, StaticTools, StaticMPI, MPICH_jll
libpath = joinpath(first(splitdir(MPICH_jll.PATH[])), "lib")

# Function to compile
function mpisendrecv(argc, argv)
    MPI_Init(argc, argv) == MPI_SUCCESS || error(c"MPI failed to initialize\n")

    comm = MPI_COMM_WORLD
    world_size, world_rank = MPI_Comm_size(comm), MPI_Comm_rank(comm)
    nworkers = world_size - 1

    if world_rank == 0
        # Gather results on root node
        buffer = mfill(0, nworkers)
        requests = mfill(MPI_REQUEST_NULL, nworkers)
        for i ∈ 1:nworkers
            MPI_Irecv(buffer[i:i], i, 10, MPI_COMM_WORLD, requests[i:i])
            # Note that this [i:i] syntax only works because contiguous indexing
            # of `StaticTools.MallocArray`s returns a view
        end
        MPI_Waitall(requests)
        printf((c"Rank 0 recieved:\n", buffer, c"\n"))
        printdlm(c"results.csv", buffer)
        free(requests), free(buffer)
    else
        # Send results back to root node
        # Best to use malloc'd buffers when Send-ing (and especially Isend-ing),
        # to ensure they aren't freed or unwound before the send actually completes
        x = mfill(world_rank)
        printf((c"Rank ", world_rank, c", sending ", x[], c"\n"))
        MPI_Send(x, 0, 10, comm)
        free(x)
    end
    MPI_Finalize()
end

# Compile it to binary executable
compile_executable(mpisendrecv, (Int, Ptr{Ptr{UInt8}}), "./";
    cflags=`-lmpi -L$libpath -Wl,-rpath,$libpath`
    # -lmpi instructs compiler to link against libmpi.so / libmpi.dylib
    # -L$libpath tells the compiler about the path to libmpi
    # -Wl,-rpath,$libpath tells the linker about the path to libmpi (not needed on all systems)
)

Since this example linked to the libmpi from MPICH_jll, let's also use the mpiexec from MPICH_jll.

shell> $(MPICH_jll.PATH[])/mpiexec -np 6 ./mpisendrecv
Rank 1, sending 1
Rank 2, sending 2
Rank 3, sending 3
Rank 4, sending 4
Rank 5, sending 5
Rank 0 received:
1
2
3
4
5
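
The root rank also wrote the gathered results to results.csv via printdlm. As a purely illustrative check (from an ordinary, non-compiled Julia session), the file can be read back with DelimitedFiles:

using DelimitedFiles
results = readdlm("results.csv", Int)  # expect the worker ranks 1 through 5, one per line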

Since we're compiling to standalone executables, we've used the special non-GC-allocating arrays, strings, and IO from StaticTools.jl throughout the above examples.
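
For reference, here is a brief, non-exhaustive sketch of the StaticTools.jl primitives that appear in the examples above (see StaticTools.jl itself for the full API):

using StaticTools

s = c"results.csv"        # c"..." constructs a statically-sized StaticString
a = mfill(0, 5)           # manually-allocated (malloc'd) MallocArray, filled with zeros
a[2] = 7                  # ordinary indexing; contiguous slices like a[1:3] return views
printf((c"a contains: ", a, c"\n"))  # non-GC-allocating formatted printing
printdlm(s, a)            # write the array contents to a delimited text file
free(a)                   # MallocArrays are not GC-tracked and must be freed manually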
