DOC: updated parallel engine docs to include default script information

commit 45e3e05d21252b00f797f6709e7116cb4e86f7ad 1 parent 1362bd4
Satrajit Ghosh authored September 22, 2010

Showing 1 changed file with 47 additions and 4 deletions.

51  docs/source/parallel/parallel_process.txt
@@ -142,7 +142,22 @@ By default, ipcluster will generate and submit a job script to launch the engine
 
     $ ipcluster pbs -n 12 -q hpcqueue -s mypbscript.sh
 
-NOTE: ipcluster relies on using PBS job arrays to start the engines. If you specify your own job script without specifying the job array settings, ipcluster will automatically add the job array settings (#PBS -t 1-N) to your script.
+For example, the default autogenerated script looks like::
+
+	#!/bin/sh
+	#PBS -q hpcqueue
+	#PBS -V
+	#PBS -t 1-12
+	#PBS -N ipengine
+	eid=$(($PBS_ARRAYID - 1))
+	ipengine --logfile=ipengine${eid}.log
+
+.. note::
+
+   ipcluster relies on using PBS job arrays to start the
+   engines. If you specify your own job script without specifying the
+   job array settings, ipcluster will automatically add the job array
+   settings (#PBS -t 1-N) to your script.
 
 Additional command line options for this mode can be found by doing::
 
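As the note in this hunk describes, a job script passed with ``-s`` need not contain the array directive, since ipcluster appends ``#PBS -t 1-N`` when it is missing. A minimal custom script consistent with that behavior might look like the sketch below (the queue name and log file name are taken from the doc's own example; this is an illustration, not output from ipcluster):

```shell
#!/bin/sh
# Custom PBS job script for: ipcluster pbs -n 12 -q hpcqueue -s mypbscript.sh
# Deliberately no "#PBS -t" line: ipcluster adds "#PBS -t 1-N" itself.
#PBS -q hpcqueue
#PBS -V
# PBS_ARRAYID is 1-based, so subtract 1 to get a 0-based engine id.
eid=$(($PBS_ARRAYID - 1))
ipengine --logfile=ipengine${eid}.log
```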
@@ -165,7 +180,22 @@ By default, ipcluster will generate and submit a job script to launch the engine
 
     $ ipcluster sge -n 12 -q hpcqueue -s mysgescript.sh
 
-NOTE: ipcluster relies on using SGE job arrays to start the engines. If you specify your own job script without specifying the job array settings, ipcluster will automatically add the job array settings (#$ -t 1-N) to your script.
+For example, the default autogenerated script looks like::
+
+	#$ -q hpcqueue
+	#$ -V
+	#$ -S /bin/sh
+	#$ -t 1-12
+	#$ -N ipengine
+	eid=$(($SGE_TASK_ID - 1))
+	ipengine --logfile=ipengine${eid}.log
+
+.. note::
+
+    ipcluster relies on using SGE job arrays to start the engines. If
+    you specify your own job script without specifying the job array
+    settings, ipcluster will automatically add the job array settings
+    (#$ -t 1-N) to your script.
 
 Additional command line options for this mode can be found by doing::
 
@@ -186,9 +216,22 @@ The above command will launch an LSF job array with 12 tasks using the default q
 
 By default, ipcluster will generate and submit a job script to launch the engines. However, if you need to use your own job script use the -s option:
 
-    $ ipcluster lsf -n 12 -q hpcqueue -s mysgescript.sh
+    $ ipcluster lsf -n 12 -q hpcqueue -s mylsfscript.sh
+
+For example, the default autogenerated script looks like::
+
+	#!/bin/sh
+	#BSUB -q hpcqueue
+	#BSUB -J ipengine[1-12]
+	eid=$(($LSB_JOBINDEX - 1))
+	ipengine --logfile=ipengine${eid}.log
+
+.. note::
 
-NOTE: ipcluster relies on using LSF job arrays to start the engines. If you specify your own job script without specifying the job array settings, ipcluster will automatically add the job array settings (#BSUB -J ipengine[1-N]) to your script.
+   ipcluster relies on using LSF job arrays to start the engines. If you
+   specify your own job script without specifying the job array settings,
+   ipcluster will automatically add the job array settings (#BSUB -J
+   ipengine[1-N]) to your script.
 
 Additional command line options for this mode can be found by doing::
 
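All three autogenerated scripts in this commit derive a zero-based engine id from the scheduler's 1-based array-task variable (``PBS_ARRAYID``, ``SGE_TASK_ID``, or ``LSB_JOBINDEX``). That arithmetic can be sketched outside any scheduler; here the variable is hardcoded purely for illustration, whereas under a real batch system the scheduler sets it per task:

```shell
#!/bin/sh
# Simulate the value PBS would inject for the first task of a 12-task array.
PBS_ARRAYID=1
# The scripts subtract 1 so a 12-task array logs to ipengine0.log .. ipengine11.log.
eid=$(($PBS_ARRAYID - 1))
echo "ipengine${eid}.log"   # prints ipengine0.log
```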
