We recommend that you create a run script for your GEOS-Chem simulation. This is a bash script containing the commands to run GEOS-Chem.
A sample GEOS-Chem run script is provided for you in the GEOS-Chem Classic run directory. You can edit this script as necessary for your own computational system.
Navigate to your run directory, then copy the runScriptSamples/geoschem.run sample run script into the run directory:
cp ./runScriptSamples/geoschem.run .
The geoschem.run script looks like this:
#!/bin/bash
#SBATCH -c 8
#SBATCH -N 1
#SBATCH -t 0-12:00
#SBATCH -p MYQUEUE
#SBATCH --mem=15000
#SBATCH --mail-type=END
###############################################################################
### Sample GEOS-Chem run script for SLURM
### You can increase the number of cores with -c and memory with --mem,
### particularly if you are running at very fine resolution (e.g. nested-grid)
###############################################################################
# Set the proper # of threads for OpenMP
# SLURM_CPUS_PER_TASK ensures this matches the number you set with -c above
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
# Set the stacksize memory to the highest possible limit
ulimit -s unlimited
export OMP_STACKSIZE=500m
# Run GEOS-Chem. The "time" command will return CPU and wall times.
# Stdout and stderr will be directed to the "GC.log" log file
# (you can change the log file name below if you wish)
srun -c $OMP_NUM_THREADS time -p ./gcclassic > GC.log 2>&1
# Exit normally
exit 0
The sample run script contains commands for the SLURM scheduler, which is used on many HPC systems.
Note
If your computer system uses a different scheduler (such as LSF or PBS), then you can replace the SLURM-specific commands with commands for your scheduler. Ask your IT staff for more information.
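For example, a rough PBS equivalent of the SLURM directives above might look like the sketch below. This is only illustrative; directive names and resource syntax differ between PBS flavors (e.g. Torque vs. PBS Pro), so confirm the correct form with your IT staff.
#PBS -l nodes=1:ppn=8
#PBS -l walltime=12:00:00
#PBS -l mem=15gb
#PBS -q MYQUEUE
#PBS -m e
# PBS does not always provide an equivalent of SLURM_CPUS_PER_TASK,
# so set the OpenMP thread count explicitly to match ppn above
export OMP_NUM_THREADS=8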
Important commands in the run script are listed below:
#SBATCH -c 8
Tells SLURM to request 8 computational cores.
#SBATCH -N 1
Tells SLURM to request 1 computational node.
Important
GEOS-Chem Classic uses OpenMP, which is a shared-memory parallelization model. Using OpenMP limits GEOS-Chem Classic to one computational node.
#SBATCH -t 0-12:00
Tells SLURM to request 12 hours of computational time. The format is D-hh:mm (days-hours:minutes).
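For example, to request 2 days and 6 hours of wall time you would write:
#SBATCH -t 2-06:00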
#SBATCH -p MYQUEUE
Tells SLURM to run GEOS-Chem Classic in the computational partition named MYQUEUE. Ask your IT staff for a list of the available partitions on your system.
#SBATCH --mem=15000
Tells SLURM to reserve 15000 MB (15 GB) of memory for the simulation.
#SBATCH --mail-type=END
Tells SLURM to send an email upon completion (successful or unsuccessful) of the simulation.
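On many systems you will also want to tell SLURM where to send that email with the --mail-user option (the address below is just a placeholder):
#SBATCH --mail-user=your.name@example.com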
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
Specifies how many computational cores GEOS-Chem Classic should use. The SLURM_CPUS_PER_TASK environment variable will fill in the number of cores requested (in this example, we used #SBATCH -c 8, which requests 8 cores).
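Note that SLURM_CPUS_PER_TASK is only defined inside a SLURM job. If you also run the script interactively (outside of SLURM), one possible safeguard is to fall back to a fixed thread count, here assumed to be 8:
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-8}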
ulimit -s unlimited
Tells the bash shell to remove any restrictions on stack memory. This is the place in GEOS-Chem's memory where temporary variables (including PRIVATE variables for OpenMP parallel loops) get created.
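Some systems do not allow an unlimited stack. In that case you can inspect the current limit and raise it to the maximum your system permits, for example:
ulimit -s          # show the current soft stack limit
ulimit -s hard     # raise the soft limit to the hard limit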
export OMP_STACKSIZE=500m
Reserves 500 MB of stack memory per thread for the GEOS-Chem executable, which it uses for allocating PRIVATE variables in OpenMP parallel loops.
srun -c $OMP_NUM_THREADS
Tells SLURM to run the GEOS-Chem Classic executable using the number of cores specified in OMP_NUM_THREADS.
time -p ./gcclassic > GC.log 2>&1
Executes the GEOS-Chem Classic executable and redirects the output (both the stdout and stderr streams) to a file named GC.log. The time -p command will print the amount of time (both CPU and wall time) that the simulation took to complete at the end of GC.log.
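When the run finishes, the last few lines of GC.log should therefore contain the timing summary from time -p; the numbers below are only illustrative:
real 41205.33
user 325612.87
sys 1043.21
The real line is the wall-clock time in seconds, and the user line is the CPU time summed over all threads.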