NERSC provides High Performance Computing systems to support research in the Office of Science program offices. Currently, NERSC operates two HPC systems: Perlmutter and Cori. The example below shows the configuration for Cori and Perlmutter; note that we can define a single configuration for both systems. Perlmutter uses Lmod while Cori runs environment-modules. We define local executors and Slurm executors for each system, which are mapped to the qos provided by our Slurm cluster. In order to use the bigmem, xfer, or gpu qos at Cori, we need to specify the escori cluster (i.e., sbatch --clusters=escori).
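A minimal sketch of how such a combined configuration might be laid out is shown below. The hostname patterns and executor names are illustrative assumptions; only the moduletool, qos, and cluster values follow from the description above.

```yaml
# Minimal sketch of a single configuration covering both systems.
# Hostname patterns and executor names are illustrative assumptions.
system:
  perlmutter:
    hostnames: ["login*"]             # assumed hostname pattern
    moduletool: lmod                  # Perlmutter uses Lmod
    executors:
      defaults:
        launcher: sbatch
      local:
        bash:
          shell: bash                 # run tests on the login node via bash
      slurm:
        regular:
          qos: regular                # executor mapped to a Slurm qos
  cori:
    hostnames: ["cori*"]              # assumed hostname pattern
    moduletool: environment-modules   # Cori runs environment-modules
    executors:
      local:
        bash:
          shell: bash
      slurm:
        debug:
          qos: debug
        bigmem:
          qos: bigmem
          cluster: escori             # bigmem/xfer/gpu need 'sbatch --clusters=escori'
```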
NERSC buildtest configuration
Ascent is a training system for Summit at OLCF, which uses IBM Load Sharing Facility (LSF) as its batch scheduler. Ascent has two queues: batch and test. To declare LSF executors, we define them under the lsf section within the executors section.
The default launcher is bsub, which can be defined under defaults. The pollinterval setting will poll LSF jobs every 10 seconds using bjobs. The pollinterval accepts a range between 10 and 300 seconds, as defined in the schema. To avoid polling the scheduler excessively, pick a value that is best suited for your site.
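As a hedged sketch of this layout (the hostname pattern is an assumption; the launcher, pollinterval, and queue names come from the description above):

```yaml
# Sketch of LSF executors on Ascent; hostname pattern is an assumption.
system:
  ascent:
    hostnames: ["login*"]     # assumed hostname pattern
    executors:
      defaults:
        launcher: bsub        # default launcher for LSF jobs
        pollinterval: 10      # poll jobs every 10s via bjobs (schema allows 10-300)
      lsf:
        batch:
          queue: batch        # submit jobs to the 'batch' queue
        test:
          queue: test         # submit jobs to the 'test' queue
```

With this layout, a test that selects the batch executor would be submitted to the batch queue via bsub.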
Ascent buildtest configuration
../../tests/settings/ascent.yml
The Joint Laboratory for System Evaluation (JLSE) provides a testbed of emerging HPC systems. The default scheduler is Cobalt, which is configured in the cobalt section within the executors field.
We set the default launcher to qsub with launcher: qsub, which is inherited by all batch executors. In each Cobalt executor, the queue property specifies the queue name for job submission; for instance, the executor yarrow with queue: yarrow will submit jobs using qsub -q yarrow.
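A minimal sketch follows (the system name key and hostname pattern are assumptions; launcher: qsub and queue: yarrow are taken from the description above):

```yaml
# Sketch of a Cobalt executor for JLSE; only 'yarrow' is named above.
system:
  jlse:
    hostnames: ["jlse*"]    # assumed hostname pattern
    executors:
      defaults:
        launcher: qsub      # inherited by all batch executors
      cobalt:
        yarrow:
          queue: yarrow     # jobs submitted as: qsub -q yarrow
```

Declaring one executor per queue keeps the mapping between executor name and scheduler queue explicit, so a test simply names the executor it needs.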
JLSE buildtest configuration
../../tests/settings/jlse.yml