A typical experiment in HPC assumes many things from the environment: an NFS mount point available on the compute nodes, a batch scheduler, applications installed/compiled directly on the host (i.e., without any type of virtualization), among others. In this case, the Popper convention is followed to record the scripts used to compile, install, and run the experiment, as well as to analyze its results. We assume SLURM as the batch scheduler and use `spack` to install the software stack. The experiment consists of the following stages:
- `install`. Installs the dependencies via `spack`. Since `spack` installs dependencies from source, the `install` stage should be executed on a node with the same architecture as the compute nodes where LULESH will run (e.g., on a "head" node of the machine). A sketch of this stage is given after this list.
- `run`. Executes LULESH by sending the job to the SLURM batch scheduler (see the batch-script sketch below).
- `analyze`. Post-processes the results gathered by `mpiP`. Once the experiment finishes, `mpiP` places a text file in the `results/` folder (a text file ending in `.mpiP`) that contains MPI runtime metrics. The `analyze.sh` script (sketched below) launches a Jupyter notebook server (using Docker) that analyzes the output of `mpiP` and generates a graph summarizing MPI statistics. To see an example of how the notebook looks, see here.
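
A minimal sketch of the `install` stage, assuming Spack's built-in `lulesh` and `mpip` packages provide the software stack (the package names and script layout are illustrative, not taken from the original experiment):

```bash
#!/bin/bash
# install.sh -- build the software stack from source with Spack.
# Must run on a node with the same architecture as the compute nodes.
set -e

# Assumes `spack` is already on the PATH of the head node.
spack install mpip    # MPI profiling library that produces .mpiP reports
spack install lulesh  # the LULESH proxy application
```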
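
The `run` stage could then be a SLURM batch script along these lines, assuming the LULESH binary is named `lulesh2.0` and was linked against `mpiP` during installation (the rank count and LULESH flags are also assumptions). `mpiP` reads its options from the `MPIP` environment variable, so `-f` can direct the report to the `results/` folder:

```bash
#!/bin/bash
#SBATCH --job-name=lulesh
#SBATCH --ntasks=27      # LULESH requires a cubic number of MPI ranks

# Write the mpiP report (the .mpiP file) into results/.
export MPIP="-f results"

# -s sets the per-domain problem size; -i caps the iteration count.
srun lulesh2.0 -s 30 -i 1000
```

The stage itself then amounts to submitting this script with `sbatch`.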
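
Finally, a sketch of `analyze.sh`, assuming the stock `jupyter/scipy-notebook` Docker image (the image name and mount point are assumptions; the notebook that parses the `.mpiP` file is not shown):

```bash
#!/bin/bash
# analyze.sh -- serve the analysis notebook with Docker. The results/
# folder (holding the .mpiP report) is mounted into the container so
# the notebook can read it.
docker run --rm -p 8888:8888 \
  -v "$PWD/results":/home/jovyan/work/results \
  jupyter/scipy-notebook
```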