Merge pull request #405 from njheimbach/master
Typos fixed & example for XCPEngine with SLURM
a3sha2 committed Nov 17, 2020
2 parents 08ac19c + 2f111bd commit 85acd26
62 changes: 56 additions & 6 deletions docs/containers/index.rst
Using xcpEngine with Singularity_
---------------------------------

The easiest way to get started with xcpEngine on an HPC system is
to build a Singularity image from the xcpEngine release on
Docker Hub::

$ singularity build xcpEngine.simg docker://pennbbl/xcpengine:latest
in a ``$TMPDIR`` variable specific to the job. If you want to use
a different temporary directory, be sure that it's accessible from
inside the container and provide the container-bound path to it.
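
For instance, a minimal sketch that binds a host scratch directory and
points ``-i`` at its in-container location (the ``/scratch/$USER`` path
and the ``/data/study`` bind are placeholders to adapt to your system)::

    $ singularity run \
        -B /data/study:/data/study \
        -B /scratch/$USER:/scratch \
        xcpEngine.simg \
        -d /data/study/my_design.dsn \
        -c /data/study/my_cohort_rel_container.csv \
        -o /data/study/output \
        -i /scratch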

.. _Docker:

Using xcpEngine with Docker_
-----------------------------

substituted for ``-v``. Here is an example::
-o /data/study/output \
-i $TMPDIR

Mounting directories in Docker is easier than with Singularity.
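
For reference, a fuller sketch of such a call, with placeholder ``-v``
mounts and assuming ``$TMPDIR`` is set on the host (adapt the paths and
image tag for your setup)::

    $ docker run --rm -it \
        -v /data/study:/data/study \
        -v $TMPDIR:$TMPDIR \
        pennbbl/xcpengine:latest \
        -d /data/study/my_design.dsn \
        -c /data/study/my_cohort_rel_container.csv \
        -o /data/study/output \
        -i $TMPDIR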


Using SGE to parallelize across subjects
----------------------------------------

By running xcpEngine from a container, you lose the ability to submit jobs
to the cluster directly from xcpEngine. Here is a way to split your cohort
file and submit a qsub job for each line. Note that we are using
``my_cohort_rel_container.csv``, which means we don't need to specify
an ``-r`` flag. If your cohort file uses paths relative to the host's
file system you will need to specify ``-r``::

#!/bin/bash
FULL_COHORT=/data/study/my_cohort_rel_container.csv
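
A minimal sketch of such a submission script, assuming an SGE array job
(``#$ -t`` and ``$SGE_TASK_ID``), with the bind path, resource options,
and design file as placeholders (compare the SLURM example in the next
section)::

    #!/bin/bash
    FULL_COHORT=/data/study/my_cohort_rel_container.csv
    NJOBS=$(( $(wc -l < ${FULL_COHORT}) - 1 ))  # one task per subject; line 1 is the header
    HEADER="$(head -n 1 $FULL_COHORT)"
    SIMG=/data/containers/xcpEngine.simg

    if [[ ${NJOBS} -le 0 ]]; then
        exit 0
    fi

    cat << EOF > xcpParallel.sh
    #!/bin/bash
    #$ -t 1-${NJOBS}
    #$ -cwd

    # Build a one-subject cohort file for this array task
    LINE_NUM=\$( expr \$SGE_TASK_ID + 1 )
    LINE=\$(awk "NR==\$LINE_NUM" $FULL_COHORT)
    TEMP_COHORT=${FULL_COHORT}.\${SGE_TASK_ID}.csv
    echo $HEADER > \$TEMP_COHORT
    echo \$LINE >> \$TEMP_COHORT

    singularity run -B /data:/data $SIMG \\
        -d /data/study/my_design.dsn \\
        -c \${TEMP_COHORT} \\
        -o /data/study/output \\
        -i \$TMPDIR
    EOF

    qsub xcpParallel.sh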

You will need to collate group-level outputs after batching subjects with the ``${XCPEDIR}/utils/combineOutput`` script, provided in ``utils``.


Using SLURM to parallelize across subjects
------------------------------------------

By running xcpEngine from a container, you lose the ability to submit
jobs to the cluster directly from xcpEngine. Here is a way to split
your cohort file and submit an sbatch job for each line. Note that we
are using ``my_cohort_rel_host.csv``, which means we need to specify an
``-r`` flag. If your cohort file uses paths relative to the container
you don't need to specify ``-r``::

#!/bin/bash
# Adjust these so they work on your system
FULL_COHORT=/data/study/my_cohort_rel_host.csv
NJOBS=$(( $(wc -l < ${FULL_COHORT}) - 1 ))  # one task per subject; line 1 is the header
HEADER="$(head -n 1 $FULL_COHORT)"
SIMG=/data/containers/xcpEngine.simg
# Memory, CPUs and time depend on the design file and your dataset. Adjust these values accordingly
XCP_MEM=0G
XCP_C=0
XCP_TIME=0:0:0

if [[ ${NJOBS} == 0 ]]; then
exit 0
fi

cat << EOF > xcpParallel.sh
#!/bin/bash -l
#SBATCH --array 1-${NJOBS}
#SBATCH --job-name xcp_engine
#SBATCH --mem $XCP_MEM
#SBATCH -c $XCP_C
#SBATCH --time $XCP_TIME
#SBATCH --workdir /my_working_directory
#SBATCH --output /my_working_directory/logs/slurm-%A_%a.out


LINE_NUM=\$( expr \$SLURM_ARRAY_TASK_ID + 1 )
LINE=\$(awk "NR==\$LINE_NUM" $FULL_COHORT)
TEMP_COHORT=${FULL_COHORT}.\${SLURM_ARRAY_TASK_ID}.csv
echo $HEADER > \$TEMP_COHORT
echo \$LINE >> \$TEMP_COHORT

singularity run -B /home/user/data:/data $SIMG \\
-d /home/user/data/study/my_design.dsn \\
-c /home/user\${TEMP_COHORT} \\
-o /home/user/data/study/output \\
-r /data \\
-i \$TMPDIR

EOF
sbatch xcpParallel.sh
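
Once submitted, the array can be monitored with standard SLURM tools,
for example::

    $ squeue -u $USER
    $ sacct --name=xcp_engine --format=JobID,State,Elapsed,MaxRSS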


Using the bundled software
----------------------------

