mpi4py
mpi4py is available in the following 4 modules:
- python/2.7
- yt
- yt-3.0
- yt-dev
To use mpi4py, load one of these modules, e.g.:
```
module load python
```
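A quick sanity check that the loaded environment actually provides mpi4py (assuming the module puts `python` on your PATH) is to query the MPI standard version it was built against:
```
python -c "from mpi4py import MPI; print(MPI.Get_version())"
```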
Here is a sample "Hello, world!" program written with mpi4py; save it as mpi_hello.py (the name used in the steps below):
```python
#!/usr/bin/env python
from mpi4py import MPI
import sys

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()
name = MPI.Get_processor_name()

# print "Hello, world! I am process %d of %d running on %s" % (rank, size, name)
sys.stdout.write("Hello, world! I am process %d of %d running on %s\n" % (rank, size, name))
```
Note that print is not atomic: output from different ranks may interleave, which is why the script uses sys.stdout.write instead. If you need output in rank order, one option is sketched below.
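One common pattern for ordered output (a sketch, not from the original page) is to gather the messages to rank 0 and print them there:
```python
#!/usr/bin/env python
# Sketch: collect each rank's greeting on rank 0 and print them in rank order,
# so lines from different processes cannot interleave.
from mpi4py import MPI
import sys

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()
name = MPI.Get_processor_name()

msg = "Hello, world! I am process %d of %d running on %s\n" % (rank, size, name)
msgs = comm.gather(msg, root=0)   # list of all messages on rank 0, None elsewhere

if rank == 0:
    for m in msgs:
        sys.stdout.write(m)
```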
Make the Python script executable:
```
chmod +x mpi_hello.py
```
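Before submitting to the queue, you can optionally try a small run interactively (assuming mpirun is available in your environment after loading the module):
```
mpirun -n 4 ./mpi_hello.py
```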
Create a PBS script named mpi4py.pbs, with the following content:
```bash
#!/bin/bash
#PBS -N mpi4py
#PBS -l nodes=4:ppn=16
#PBS -l walltime=0:10:00

module load yt
cd $PBS_O_WORKDIR
mpirun -genv I_MPI_FABRICS shm:ofa -n 64 ./mpi_hello.py
```
Annotations of mpi4py.pbs:
- #PBS -N mpi4py: the job name is mpi4py
- #PBS -l nodes=4:ppn=16: the job will run on 4 nodes (64 cores) in the default normal queue
- if we want to submit the job to the hyper queue instead, replace #PBS -l nodes=4:ppn=16 with the following 2 lines (a complete hyper-queue script is shown after this list):
- #PBS -q hyper
- #PBS -l nodes=2:ppn=32
- module load yt: load one of the modules that provides mpi4py
- cd $PBS_O_WORKDIR: required; PBS starts the script in the home directory on the executing compute node, so we change back to the directory from which the job was submitted
- -genv I_MPI_FABRICS shm:ofa: use shared memory for intra-node communication and OFED (InfiniBand) verbs for inter-node communication
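Putting the two replacement lines together, a hyper-queue version of mpi4py.pbs would look like this (still 64 ranks, now 2 nodes × 32 cores):
```bash
#!/bin/bash
#PBS -N mpi4py
#PBS -q hyper
#PBS -l nodes=2:ppn=32
#PBS -l walltime=0:10:00

module load yt
cd $PBS_O_WORKDIR
mpirun -genv I_MPI_FABRICS shm:ofa -n 64 ./mpi_hello.py
```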
Submit the job:
```
qsub mpi4py.pbs
```
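Once the job is submitted, qstat shows its state; after completion, PBS normally writes the job's standard output to a file named after the job in the submission directory (here something like mpi4py.o&lt;jobid&gt;), which should contain one greeting line per rank, 64 in total:
```
qstat -u $USER          # check the job's state
cat mpi4py.o<jobid>     # stdout of the job; expect 64 "Hello, world!" lines
```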
mpi4py can also be installed (or upgraded) with pip under the yt-dev environment:
```
$ module load yt-dev
$ pip install mpi4py
```
or under the yt-3.0 environment:
```
$ module load yt-3.0
$ pip install mpi4py
```
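To confirm which mpi4py ends up on your path after the pip install, a quick check is to print the package version:
```
python -c "import mpi4py; print(mpi4py.__version__)"
```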