This guide provides a comprehensive introduction to the Message Passing Interface (MPI) and how to use it in your parallel programming projects. MPI is a widely used message-passing standard that allows multiple processes to communicate and coordinate with one another. It is commonly used in scientific computing, numerical simulations, and other high-performance computing applications. We will cover both mpi4py for Python and Open MPI for C.
- Overview
- Communicator
- Point-to-Point Communication: One-to-One Communication
- Collectives: One-to-Many or Many-to-Many Communication
- Hybrid Systems
On each compute node, run:
pip install mpi4py
To run MPI programs on Windows you also need to download Microsoft MPI (MS-MPI) from https://www.microsoft.com/en-us/download/details.aspx?id=100593
Once it is installed, do not forget to add MS-MPI to your PATH environment variable.
To start MPI programming in C you need to install the MPI toolchain. On Debian/Ubuntu, run the command below to install Open MPI:
sudo apt install openmpi-bin libopenmpi-dev
mpiexec -n numprocesses python -m mpi4py pyfile
Example:
mpiexec -n 2 python -m mpi4py hello_world.py
To compile an MPI program written in C, run the command:
mpicc file_name.c -o file_name
Compiling only translates the program into machine code; to see any output you still have to run the resulting executable with the following command:
mpirun -n numprocesses ./file_name
Depending on your MPI installation you might have to use mpirun instead of mpiexec; in Open MPI the two commands are equivalent.
Example:
mpicc hello_world.c -o hello_world
mpirun -n 2 ./hello_world
This repository provides a starting point for learning about the Message Passing Interface. The examples provided here can be used as a reference for implementing MPI in Python and C. If you have any questions or suggestions, feel free to open an issue or submit a pull request.