
Commit

update README
Signed-off-by: Sebastien Varrette <sebastien.varrette@uni.lu>
Sebastien Varrette committed Nov 12, 2013
1 parent fdf0277 commit 2587eb0
Showing 2 changed files with 50 additions and 34 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -22,8 +22,8 @@ hopefully more efficient) on the following typical workflows:
* serial (or sequential) tasks having varying durations, run on one node
* serial (or sequential) tasks having varying durations, run on multiple nodes

* MPI run on n processes (ex: HPL) with abstraction of the MPI stack, MPI script, option to compile the code etc.
* MPI run on n process per node (ex: OSU benchs)
* MPI run on n processes (ex: [HPL](https://github.com/ULHPC/tutorials/tree/devel/advanced/HPL)) with abstraction of the MPI stack, MPI script, option to compile the code etc.
* MPI run on n processes per node (ex: [OSU Micro-benchmarks](https://github.com/ULHPC/tutorials/tree/devel/advanced/OSU_MicroBenchmarks))

We propose here two types of contributions:

80 changes: 48 additions & 32 deletions bash/MPI/README.md
@@ -3,7 +3,7 @@

Copyright (c) 2013 [Sebastien Varrette](mailto:<Sebastien.Varrette@uni.lu>) [www](http://varrette.gforge.uni.lu)

Time-stamp: <Dim 2013-11-10 18:50 svarrette>
Time-stamp: <Mar 2013-11-12 13:24 svarrette>

-------------------

@@ -30,7 +30,7 @@ You might also be interested to follow the [tutorial on running HPL](https://git
Connect to your favorite cluster frontend (here: `chaos`)

$> ssh chaos-cluster

Reserve interactively two full nodes, ideally belonging to the same enclosure:

(access-chaos)$> oarsub -I -l enclosure=1/nodes=2,walltime=8
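
Once the reservation is granted, you may want to double-check what you obtained. A quick sanity check (on OAR, `$OAR_NODEFILE` normally lists one line per reserved core, so `uniq -c` reports the number of cores per node):

    $> cat $OAR_NODEFILE | sort | uniq -c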
@@ -63,7 +63,7 @@ Prepare your working directory
$> ln -s ~/git/ULHPC/launcher-scripts/examples/include .
## MPI helloworld
## Basic example: MPI helloworld

Since you cloned the repository, you'll find everything ready to test the MPI
helloworld example in `~/git/ULHPC/launcher-scripts/examples/MPI/helloworld`.
@@ -78,7 +78,8 @@ Now you can check that everything works (in interactive mode), for instance with the

$> module load OpenMPI
$> make
$> mpirun -hostfile $OAR_NODEFILE ./mpi_hello_and_sleep
$> cp mpi_hello_and_sleep mpi_hello_and_sleep.openmpi
$> mpirun -x LD_LIBRARY_PATH -hostfile $OAR_NODEFILE ./mpi_hello_and_sleep
[node 0]: Total Number of processes : 32
[node 0]: Input n = 2
[node 1]: Helloword! I'll now sleep for 2s
@@ -93,12 +94,13 @@ For [MVAPICH2](http://mvapich.cse.ohio-state.edu/overview/mvapich2/):
$> module purge
$> module load MVAPICH2
$> make
$> mpirun -hostfile $OAR_NODEFILE ./mpi_hello_and_sleep
$> cp mpi_hello_and_sleep mpi_hello_and_sleep.mvapich2
$> mpirun -launcher ssh -launcher-exec /usr/bin/oarsh -hostfile $OAR_NODEFILE ./mpi_hello_and_sleep
[Node 0] Total Number of processes : 32
[Node 0] Input n = 1
[Node 0] [Node 1] Helloword! I'll now sleep for 4s
[Node 2] Helloword! I'll now sleep for 4s
[Node 3] Helloword! I'll now sleep for 4s
[Node 0] [Node 1] Helloword! I'll now sleep for 1s
[Node 2] Helloword! I'll now sleep for 1s
[Node 3] Helloword! I'll now sleep for 1s
[...]
Helloword! I'll now sleep for 1s
[node 0]: Elapsed time: 1.000727 s
@@ -110,6 +112,7 @@ For the
$> module purge
$> module load ictce
$> make
$> cp mpi_hello_and_sleep mpi_hello_and_sleep.impi
$> mpirun -hostfile $OAR_NODEFILE ./mpi_hello_and_sleep
[node 0]: Total Number of processes : 32
[node 0]: Input n = 4
@@ -126,32 +129,45 @@ Now that the interactive runs succeeded, it's time to embed the command into a
launcher.
You can obviously add the correct `mpirun` command into a `bash` script (or
`python`/`ruby`/whatever).
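
For instance, a minimal `bash` wrapper along those lines (a sketch only, assuming the OpenMPI build `mpi_hello_and_sleep.openmpi` produced above and a sleep duration of 2s passed as argument) could look like:

    #!/bin/bash -l
    # Minimal hand-written MPI launcher (sketch): load the MPI stack, then
    # start one process per core listed in the OAR node file
    module purge
    module load OpenMPI
    mpirun -x LD_LIBRARY_PATH -hostfile $OAR_NODEFILE ./mpi_hello_and_sleep.openmpi 2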
You can also use the proposed generic MPI launcher :

$> ln -s /home/users/svarrette/git/ULHPC/launcher-scripts/bash/MPI/mpi_launcher.sh .

Simply create a configuration file `mpi_launcher.default.conf` containing (at least) the
definition of the

$> vim mpi_launcher.default.conf
[...]
$> cat mpi_launcher.default.conf
MPI_PROG=mpi_hello_and_sleep
MPI_PROG_ARG=3
$> ./mpi_launcher.sh
/mpi_launcher.sh
overwriting default configuration
=> performing MPI run mpi_hello_and_sleep @ Tue Apr 2 16:45:47 CEST 2013
=> preparing the logfile /tmp//run/mpi_launcher.sh/2013-04-02/435561_results_mpi_hello_and_sleep_16h45m47.log
[node 0]: Total Number of processes : 32
[node 1]: Helloword! I'll now sleep for 3s
[node 2]: Helloword! I'll now sleep for 3s
[...]
[node 25]: Helloword! I'll now sleep for 3s
[node 0]: Elapsed time: 3.000222 s
You can also use the proposed [generic MPI launcher](https://github.com/ULHPC/launcher-scripts/blob/devel/bash/MPI/mpi_launcher.sh):

$> ln -s ~/git/ULHPC/launcher-scripts/bash/MPI/mpi_launcher.sh launcher_mpi_helloworld

You can run again each program as follows:

$> ./launcher_mpi_helloworld --module OpenMPI --exe mpi_hello_and_sleep.openmpi
$> ./launcher_mpi_helloworld --module MVAPICH2 --exe mpi_hello_and_sleep.mvapich2
$> ./launcher_mpi_helloworld --module ictce --exe mpi_hello_and_sleep.impi


The symbolic link approach is quite flexible, as the script allows you to
predefine a set of variables you would normally pass on the command line (run it
with the `--help` option to list them) in a configuration file named `<scriptname>.default.conf`.

For instance, assuming you create a configuration file `launcher_mpi_helloworld.default.conf` as follows:

$> cat launcher_mpi_helloworld.default.conf
# Run MPI Helloworld
NAME=openmpi

MODULE_TO_LOAD=OpenMPI

MPI_PROGstr=mpi_hello_and_sleep.openmpi

You can then run the OpenMPI version far more simply:

$> ./launcher_mpi_helloworld
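
In the same spirit, you could cover the other MPI stacks by creating one symbolic link and one configuration file per stack. For instance (an illustrative setup, the link and file names being arbitrary):

    $> ln -s ~/git/ULHPC/launcher-scripts/bash/MPI/mpi_launcher.sh launcher_mpi_helloworld_mvapich2
    $> cat launcher_mpi_helloworld_mvapich2.default.conf
    # Run MPI Helloworld over MVAPICH2
    NAME=mvapich2
    MODULE_TO_LOAD=MVAPICH2
    MPI_PROGstr=mpi_hello_and_sleep.mvapich2
    $> ./launcher_mpi_helloworld_mvapich2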


You can also exit your reservation to re-run it in passive mode:

$> exit
$> oarsub -l enclosure=1/nodes=2,walltime=8 $WORK/tutorials/MPI/helloworld/mpi_launcher.sh
$> oarsub -l enclosure=1/nodes=2,walltime=8 "$WORK/tutorials/MPI/helloworld/mpi_launcher.sh --args 2"
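
You can then monitor the passive job with the usual OAR commands and inspect its output once it has started (assuming OAR writes its default `OAR.<jobid>.stdout` file in the submission directory; adapt the job id):

    $> oarstat -u $USER
    $> less OAR.<jobid>.stdout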


## Advanced example: [OSU micro-benchmarks](http://mvapich.cse.ohio-state.edu/benchmarks/) and [HPL](http://www.netlib.org/benchmark/hpl/)

You can find more advanced examples using the [generic MPI launcher](https://github.com/ULHPC/launcher-scripts/blob/devel/bash/MPI/mpi_launcher.sh) (or adapted versions) in the following [UL HPC tutorials](https://github.com/ULHPC/tutorials):

* [running the OSU Micro-Benchmarks](https://github.com/ULHPC/tutorials/tree/devel/advanced/OSU_MicroBenchmarks)
* [running HPL](https://github.com/ULHPC/tutorials/tree/devel/advanced/HPL)
