Added hybrid MPI OpenMP job script example.
dajon committed May 23, 2017
1 parent 9b6effd commit 53aa644
Showing 2 changed files with 70 additions and 0 deletions.
18 changes: 18 additions & 0 deletions jobs/examples.rst
@@ -34,7 +34,25 @@ Save it to a file (e.g. run.sh) and submit it with::

$ sbatch run.sh

Example for a hybrid MPI OpenMP job
-----------------------------------

.. literalinclude:: files/slurm-MPI-OMP.sh
:language: bash
:linenos:

Save it to a file (e.g. run.sh) and submit it with::

$ sbatch run.sh

If you want to start more than one MPI rank per node you can
use ``--ntasks-per-node`` in combination with ``--nodes``::

   #SBATCH --nodes=4 --ntasks-per-node=2 --cpus-per-task=8

This will start 2 MPI tasks on each of the 4 nodes, where each
task can use up to 8 threads.
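
To quickly check that the layout matches what you asked for, you can
let every task report where it runs. The following is a minimal
sketch (not part of the example script; it only uses standard Slurm
commands and environment variables)::

   srun -l bash -c 'echo "task ${SLURM_PROCID} on $(hostname) with ${SLURM_CPUS_PER_TASK} cpus"'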

Running many sequential jobs in parallel using job arrays
---------------------------------------------------------

52 changes: 52 additions & 0 deletions jobs/files/slurm-MPI-OMP.sh
@@ -0,0 +1,52 @@
#!/bin/bash

#######################################
# example for a hybrid MPI OpenMP job #
#######################################

#SBATCH --job-name=example

# we ask for 2 MPI tasks with 20 cores each
#SBATCH --ntasks=2
#SBATCH --cpus-per-task=20
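# in total Slurm will allocate 2 x 20 = 40 cores for this job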

# run for five minutes
# d-hh:mm:ss
#SBATCH --time=0-00:05:00

# short partition should do it
#SBATCH --partition short

# 500MB memory per core
# this is a hard limit
#SBATCH --mem-per-cpu=500MB

# turn on all mail notification
#SBATCH --mail-type=ALL

# you may not place bash commands before the last SBATCH directive

# define and create a unique scratch directory
SCRATCH_DIRECTORY=/global/work/${USER}/example/${SLURM_JOBID}
mkdir -p ${SCRATCH_DIRECTORY}
cd ${SCRATCH_DIRECTORY}

# we copy everything we need to the scratch directory
# ${SLURM_SUBMIT_DIR} points to the path where this script was submitted from
cp ${SLURM_SUBMIT_DIR}/my_binary.x ${SCRATCH_DIRECTORY}

# we set OMP_NUM_THREADS to the number of CPU cores per MPI task
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
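
# optional sanity check: record the layout in the job output
# (SLURM_NTASKS and SLURM_JOB_NODELIST are set by Slurm for every job)
echo "running ${SLURM_NTASKS} MPI tasks with ${OMP_NUM_THREADS} OpenMP threads each on ${SLURM_JOB_NODELIST}"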

# we execute the job and time it
# mpirun starts one copy of my_binary.x per MPI task
# (on many Slurm systems srun can be used instead of mpirun)
time mpirun ./my_binary.x > my_output

# after the job is done we copy our output back to $SLURM_SUBMIT_DIR
cp ${SCRATCH_DIRECTORY}/my_output ${SLURM_SUBMIT_DIR}

# we step out of the scratch directory and remove it
cd ${SLURM_SUBMIT_DIR}
rm -rf ${SCRATCH_DIRECTORY}

# happy end
exit 0
