Merge pull request #34 from mathiasbockwoldt/jobscripts
Jobscripts and partitions
bast committed Mar 23, 2018
2 parents fabd07b + 9ac1c09 commit 45af585
Showing 7 changed files with 26 additions and 29 deletions.
2 changes: 1 addition & 1 deletion applications/chemistry/ADF/advanced.rst
@@ -36,7 +36,7 @@ This is a brief introduction to how to create fragments necessary for among othe
**Running with fragments:**

* Download and modify the script for the fragment create run, e.g. this template: Create.TZP.sh (modify ACCOUNT, add the desired atoms, and set the desired basis and functional)
-* Run the create job in the same folder as the one where you want to run your main job(s) (qsub Create.TZP.sh).
+* Run the create job in the same folder as the one where you want to run your main job(s) (sbatch Create.TZP.sh).
* Put the line cp $init/t21.* . in your ADF run script (in your $HOME/bin directory)
* In job.inp, specify the correct file name in the FRAGMENT section, e.g. “H t21.H_tzp”, as sketched below.
* Submit job.inp as usual.
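A rough sketch of the whole workflow (file names are illustrative; H and O are used as example atoms)::

    # 1. In the folder where the main job(s) will run:
    sbatch Create.TZP.sh          # produces fragment files such as t21.H_tzp, t21.O_tzp
    # 2. In the ADF run script in $HOME/bin, copy the fragments to the scratch area:
    cp $init/t21.* .
    # 3. In job.inp, point the FRAGMENT section at the fragment files
    #    (e.g. "H t21.H_tzp"), then submit job.inp as usual.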

15 changes: 6 additions & 9 deletions applications/chemistry/files/job_adf.sh
@@ -5,15 +5,15 @@
#-------------------------------------
# This script asks for a given number of nodes and cores. Stallo has 16 or 20 cores/node,
# so asking for a core count that adds up to both (80, 160, etc.) is our general recommendation; you would
-# then need to use --ntasks instead of -N and --ntasks-per-node (replace both).
+# then need to use --ntasks instead of --nodes and --ntasks-per-node (replace both).
# Runtime for this job is 59 minutes; the syntax is hh:mm:ss.
# Memory is set to the maximum advised for a full node, 1500MB/core - giving a total
# of 30000MB/node and leaving some for the system to use. Memory
# can be specified per core, virtual, or total per job (be careful).
#-------------------------------------
# SLURM-section
#SBATCH --job-name=adf_runex
-#SBATCH -N 2
+#SBATCH --nodes=2
#SBATCH --ntasks-per-node=20
#SBATCH --time=00:59:00
#SBATCH --mem-per-cpu=1500MB # Beware of memory needs; they might be a lot higher if you are running a ZORA basis, for example.
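# (A sketch of the alternative mentioned in the header comment: request a total
#  core count that fits both 16- and 20-core nodes instead of fixing the node
#  count and tasks per node; the value below is only an example.)
##SBATCH --ntasks=80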
@@ -43,11 +43,9 @@ mkdir -p $SCM_TMPDIR
# Preparing and moving input files to tmp:

submitdir=$SLURM_SUBMIT_DIR
-tempdir=$SCM_TMPDIR

cd $submitdir
-cp ${input}.${ext} $tempdir
-cd $tempdir
+cp ${input}.${ext} $SCM_TMPDIR
+cd $SCM_TMPDIR

# If necessary, set SCM_IOBUFFERSIZE
#export SCM_IOBUFFERSIZE=1024 # Or higher if necessary.
@@ -57,7 +55,7 @@ cd $tempdir

# Running the program:

time adf -n $cores < ${input}.${ext} > adf_$input.out

# Cleaning up and moving files back to home/submitdir:
# Make sure to move all essential files specific for the given job/software.
@@ -75,8 +73,7 @@ echo $(ls -ltr)

# ALWAYS clean up after yourself. Please do uncomment the following line
#cd $submitdir
-#rm $tempdir/*
-#rmdir $tempdir
+#rm -r $SCM_TMPDIR/*

echo "Job finished at"
date
4 changes: 2 additions & 2 deletions applications/chemistry/files/job_band.sh
@@ -5,15 +5,15 @@
#-------------------------------------
# This script asks for a given number of nodes and cores. Stallo has 16 or 20 cores/node,
# so asking for a core count that adds up to both (80, 160, etc.) is our general recommendation; you would
-# then need to use --ntasks instead of -N and --ntasks-per-node (replace both).
+# then need to use --ntasks instead of --nodes and --ntasks-per-node (replace both).
# Runtime for this job is 59 minutes; the syntax is hh:mm:ss.
# Memory is set to the maximum advised for a full node, 1500MB/core - giving a total
# of 30000MB/node and leaving some for the system to use. Memory
# can be specified per core, virtual, or total per job (be careful).
#-------------------------------------
# SLURM-section
#SBATCH --job-name=band_runex
-#SBATCH -N 2
+#SBATCH --nodes=2
#SBATCH --ntasks-per-node=20
#SBATCH --time=00:59:00
#SBATCH --mem-per-cpu=1500MB # Beware of memory needs!
2 changes: 1 addition & 1 deletion applications/chemistry/files/job_g09.sh
@@ -12,7 +12,7 @@
#-------------------------------------
# SLURM-section
#SBATCH --job-name=g09_runex
-#SBATCH -N 2
+#SBATCH --nodes=2
#SBATCH --ntasks-per-node=20
#SBATCH --time=00:59:00
#SBATCH --mem-per-cpu=1500MB
4 changes: 2 additions & 2 deletions applications/chemistry/files/job_molcas.sh
@@ -5,15 +5,15 @@
#-------------------------------------
# This script asks for a given number of nodes and cores. Stallo has 16 or 20 cores/node,
# so asking for a core count that adds up to both (80, 160, etc.) is our general recommendation; you would
-# then need to use --ntasks instead of -N and --ntasks-per-node (replace both).
+# then need to use --ntasks instead of --nodes and --ntasks-per-node (replace both).
# Runtime for this job is 59 minutes; the syntax is hh:mm:ss.
# Memory is set to the maximum advised for a full node, 1500MB/core - giving a total
# of 30000MB/node and leaving some for the system to use. Memory
# can be specified per core, virtual, or total per job (be careful).
#-------------------------------------
# SLURM-section
#SBATCH --job-name=molcas_runex
-#SBATCH -N 1
+#SBATCH --nodes=1
#SBATCH --ntasks-per-node=20
##SBATCH --ntasks=20
#SBATCH --time=00:59:00
23 changes: 10 additions & 13 deletions applications/chemistry/files/job_vasp.sh
@@ -12,7 +12,7 @@
#-------------------------------------
# SLURM-section
#SBATCH --job-name=vasp_runex
-#SBATCH -N 2
+#SBATCH --nodes=2
#SBATCH --ntasks-per-node=20
#SBATCH --time=02:00:00
##SBATCH --mem-per-cpu=1500MB
@@ -25,29 +25,27 @@
# Section for defining job variables and settings:

proj=CeO2job # Name of job folder
-input=${proj}/{INCAR,KPOINTS,POTCAR,POSCAR} # Input files from job folder
+input=$(ls ${proj}/{INCAR,KPOINTS,POTCAR,POSCAR}) # Input files from job folder
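# ($(ls ...) expands the brace list above and returns the four input files as one list for the cp below)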

# We load all the default program system settings with module load:

module purge
-module load VASP/5.4.1.plain
+module load VASP/5.4.1.plain-intel-2016a
# You may check other available versions with "module avail VASP"

# Now we create the working directory and temporary scratch for the job(s):
# Necessary variables are defined in the notur and the software modules.

export VASP_WORKDIR=/global/work/$USER/$SLURM_JOB_ID

-mkdir -p /global/work/$USER/$SLURM_JOB_ID
+mkdir -p $VASP_WORKDIR

# Preparing and moving input files to tmp:

submitdir=$SLURM_SUBMIT_DIR
-tempdir=$VASP_WORKDIR

cd $submitdir
-cp $input $tempdir
-cd $tempdir
+cp $input $VASP_WORKDIR
+cd $VASP_WORKDIR

######################################
# Section for running the program and cleaning up:
@@ -59,11 +57,11 @@ time mpirun vasp_std
# Cleaning up and moving files back to home/submitdir:
# Make sure to move all essential files specific for the given job/software.

-cp OUTCAR $submitdir/${input}.OUTCAR
+cp OUTCAR $submitdir/${proj}.OUTCAR
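# Depending on the job, other standard VASP output files may be worth keeping
# as well; a sketch, not part of the original script:
#cp CONTCAR     $submitdir/${proj}.CONTCAR
#cp vasprun.xml $submitdir/${proj}.vasprun.xml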

# It might be a good idea to zip some of the output!
-#gzip $resultszip
-#mv $resultzip.gz $submitdir/
+#gzip OUTCAR
+#mv OUTCAR.gz $submitdir/

# Investigate whether there are other files to keep:
echo $(pwd)
@@ -72,8 +70,7 @@ echo $(ls -ltr)
# ALWAYS clean up after yourself. Please do uncomment the following line
# If we have to, we get really grumpy!
#cd $submitdir
-#rm $tempdir/*
-#rmdir $tempdir
+#rm -r $VASP_WORKDIR/*

echo "Job finished at"
date
5 changes: 4 additions & 1 deletion jobs/partitions.rst
@@ -24,9 +24,12 @@ multinode:
Request this partition if you ask for more resources than you will find on
one node and request walltime longer than 48 hrs.

+highmem:
+Use this partition to access the high-memory nodes with 128 GB of memory. You will have to apply for access by sending us an email explaining why you need these high-memory nodes.
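A job script would then request these nodes explicitly, for example (a minimal sketch)::

    #SBATCH --partition=highmem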

To figure out the walltime limits for the various partitions, type::

-$ sinfo --format="%P %l"
+$ sinfo --format="%P %l" # small L
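Here %P prints the partition name and %l (a lowercase L) its time limit.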

As a service to users who need to submit short jobs for testing and debugging, we have a service called devel.
These jobs have higher priority, with a maximum of 4 hrs of walltime and no option for prolonging the runtime.
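A minimal header for such a test job might look like this, assuming devel is requested as a QOS (check the local documentation for the exact flag)::

    #SBATCH --qos=devel
    #SBATCH --time=00:30:00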
