linear_no_damage
.gitignore
README
assemble.qsub
benchmark.ini
control_980Dog1.ini
control_980Dog2.ini
control_980Dog3.ini
control_980Dog4.ini
control_biotex01Dec2011.ini
control_biotex02Apr2009.ini
control_biotex10Jan2011.ini
control_biotex17Dec2010.ini
control_biotex18Mar2009.ini
control_biotex25May2012.ini
control_biotexNonlin01Dec2011.ini
control_biotexNonlin18Mar2009.ini
control_dbg.ini
control_dog1.ini
control_dog1Nonlin.ini
control_dog2.ini
control_dog2Nonlin.ini
control_dog3.ini
control_dog3Nonlin.ini
control_dog4.ini
control_dog4Nonlin.ini
convertexodus.py
dakota_param_8d.in
dummy_driver
ibrun_par_driver
irodsMetaData.py
launcher.ls.sge
launcher.ranger.sge
makefile
meshTemplateFull2.e
meshTemplateQuarter1.e
meshTemplateQuarter2.e
moonen_setup_driver
mpich2_par_driver
mpich2_setup_driver
parse.in
pre_ini.ini
ranger_dbg.qsub
ranger_pce.qsub
ranger_setup_driver
realization.qsub
rosenbrock.C
runThermalgPC
setupgPC
shamu_par_driver
time_par_driver
yaro.ini

README

------------------- Github procedure -----------------------------
1. You should be pushing and pulling FREQUENTLY. Pushing and pulling often will prevent ‘non-fast-forward’ errors (NFFE). NFFE are very hard to deal with but should not arise unless your history and GitHub are very different. Josh and I were unable to create one, nor have we found useful guides on how to rectify them. So don’t let it happen; push and pull often.

Common commands:
	git push -u	(pushes your local history to GitHub and sets the upstream branch)
	git pull	(updates your local copy with GitHub)
	git commit -a	(commits any changes you have made to tracked files into your local history)
	git add [file/directory]	(tells Git to start tracking these files)
	git diff	(shows any uncommitted differences between your working files and your last commit)
	git status	(lists uncommitted changes to tracked files and identifies untracked files)

*Run git status and git diff to see if you have to commit, push, or pull.
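
A typical end-of-session cycle might look like this (assuming your branch already tracks the GitHub repo):

	git pull		(bring in the latest GitHub history first)
	git status		(check for uncommitted or untracked changes)
	git commit -a		(commit your changes to tracked files)
	git push		(publish your local history to GitHub)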

2. If you try to push and get a ‘non-fast-forward’ merge error, pull first. A merge error just means your history is not the same as GitHub’s. This tends to arise when two people are committing to the same GitHub repo. If the merge error comes from changes on different lines, the pull should succeed on its own, and the resulting document will have both your edits and the GitHub edits. Make sure the newly pulled code doesn’t break anything.

Now push.

3. If the merge error is on the same line, Git will write both versions into the document between conflict markers, as in the example below. Only one version gets to stay. Decide what should stay, make the edit, commit it, and push it.
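
A conflicted file looks roughly like this (the line contents are made up for illustration; the markers are what Git actually writes):

<<<<<<< HEAD
your local version of the line
=======
the version pulled from GitHub
>>>>>>> origin/master

Delete the markers and the version you don’t want, then commit and push.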

-----------------------------------------------------------------

step (1) checkout the repo
step (2) preprocess the needed dakota.in files and set up the directory hierarchy
step (3) run the FEM simulations for each realization on the grid;
         soln.*.out files should appear in realization.* after this step
step (4) post-process statistics for each time step and write out an exodus file
step (5) post-process: collect the statistics into one exodus file
step (6) post-process: convert the exodus file to vtk/matlab

---------------------  RUN ON RANGER ------------------------------

(1) cd $WORK; git clone git@github.com:ImageGuidedTherapyLab/uq_canine_feb07.git uq_canine_feb07
(2) python $DDDAS_SRC/Examples/TreatmentPlanning/gPCWFS.py --config_ini=control_dog1.ini --pre_run=ranger_setup_driver --w_0_healthy
(3) two ways:
    (i)  python gPCWFS.py --run_queue=dog1_0 --execution=qsub   (single qsub per execution)
    (ii) qsub -v WORKDIR=dog1_0 realization.qsub   (single qsub per execution)
(4-5) qsub -v WORKDIR=dog1_0 launcher.ranger.sge   (post-processing and assembly combined in one script)
(6) python convertExodusToMatlab.py --data_set=dog1 --exodus_file=/Full/path/to/your/exodus/file.e

---------------------  ASSEMBLE ON LONESTAR ------------------------------

(4-5) qsub -v WORKDIR=dog1_0 launcher.ls.sge   (post-processing and assembly combined in one script)

--------------------  Fahrenholtz directions updates ------------------------------

1) Check out the repo; use your GitHub password (if that doesn’t work, use your Ranger password)

cd $WORK; git clone git@github.com:ImageGuidedTherapyLab/uq_canine_feb07.git uq_canine_feb07

2) Setup gPC run;

Look in the file ‘control_dog1.ini’ for the setup. Modify it as necessary. Semicolons ‘;’ start comments. I’m pretty certain pound signs ‘#’ start comments, too.

The control_dog1.ini file is split into sections labeled with brackets ‘[ ]’, e.g. the first section is [exec] and [dakota] is the second. The options for each section follow its [ ] header; a sketch of the layout is below.
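
A minimal sketch of the layout (the section names are the ones discussed here; any values shown are placeholders, not the repo defaults):

; a semicolon starts a comment
[exec]
; options controlling execution go here

[dakota]
; UQ options go here, e.g. responselevels (see below)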

In the [dakota] section, set the response levels to what you need.

‘responselevels’ has two response functions (response functions are what our model is solving for and recording): 0 = temperature; 1 = thermal dose (Sam) or 1 = thermal conductivity (Carlos). Response levels look at the probability for a given variable value. The default response level is [(0,[42.0,57.0,90.0]),(1,[0.5,0.8,1.0])]. That means for variable 0 (temperature), the simulation will record the cumulative distribution function (CDF) at each value (42, 57, and 90 degrees Celsius). Each ‘level’ of any kind is recorded in the FEM mesh at the end (the ‘.e’ file).

Set the probability levels to what you need. The probability levels are the complement of the response levels: they record the response functions at a particular value of the CDF. Common probability levels are 0.01, 0.05, 0.5 (the median), 0.95, and 0.99 (i.e. 1%, 5%, etc.). Both options are sketched below.
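
A sketch of how these two options might look in the [dakota] section. The responselevels value is the default quoted above; the probabilitylevels line (its exact spelling and syntax) is my assumption, so check it against the .ini files in the repo:

[dakota]
responselevels = [(0,[42.0,57.0,90.0]),(1,[0.5,0.8,1.0])]
; assumed syntax: CDF values at which to record each response function
probabilitylevels = [(0,[0.01,0.05,0.5,0.95,0.99]),(1,[0.01,0.05,0.5,0.95,0.99])]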

I don’t know what the reliability levels are... The Dakota 5.2 User’s and Reference manuals should tell us.

In the [pre_run] section, the uncommented ‘mesh_files’ option lists two meshes. Number 1, i.e. ‘meshTemplateQuarter1.e’, is coarse. Number 2 is finer. I think using the finer one by itself should be fine.

In the [timestep] section, I don’t know what ‘acquisitiontime’ or ‘nsubstep’ are. I suspect they have something to do with the conversion between simulation time steps and real time. Perhaps ‘acquisitiontime’ is the number of seconds in one time step, and ‘nsubstep’ might be the number of time steps between recorded time steps.

‘powerhistory’ is correct for dog 1. It can be modified for any other experiment. The first array in the power history is the timing in units of time steps (1 time step = 6 seconds, I think). The second array is the power magnitude in Watts. A sketch of the [timestep] section is below.
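
A sketch of the [timestep] section. The option names come from the paragraphs above; every value here is invented for illustration (control_dog1.ini has the real dog-1 values), and the guesses about meaning are flagged in the comments:

[timestep]
acquisitiontime = 6.0	; seconds per time step? (my guess above)
nsubstep = 1		; time steps between recorded steps? (also a guess)
; first array = timing in time steps, second array = power in Watts
powerhistory = [[0,30,120],[0.0,12.0,0.0]]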

In the [initial_condition] section, ‘u_init’ is the initial (background) temperature.

In the [perfusion] section, you make the choices about perfusion. ‘w_0_healthy’ is the only default option (default being what is in the repo). We have to figure out what the other perfusion options are. I have figured out that ‘w_0_healthy_lb’ and ‘w_0_healthy_ub’ are the healthy perfusion’s lower and upper bounds, respectively, for a uniform distribution. ‘w_0_healthy’ is the mean perfusion for healthy tissue for the dog. It’s interesting that there are default values for w_0_healthy even without the upper and lower bounds defined. Likewise, even though there are no default lines for k_0_healthy, mu_a_healthy, and mu_s_healthy, they still work when you try to use them (see the directions below on how to choose gPC-expanded variables). A sketch of the section is below.
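
A sketch of the [perfusion] section with the options named above (the numeric values are placeholders, not the repo defaults):

[perfusion]
; mean healthy-tissue perfusion (placeholder value)
w_0_healthy = 6.0
; lower/upper bounds of the uniform distribution (optional; placeholders)
w_0_healthy_lb = 3.0
w_0_healthy_ub = 9.0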

That’s the last of the sections.

Once your run is prepared using the ‘.ini’ file, use this command to write your run:

python $DDDAS_SRC/Examples/TreatmentPlanning/gPCWFS.py --config_ini=control_dog1.ini --pre_run=ranger_setup_driver --w_0_healthy

‘--w_0_healthy’ is the argument that indicates which input variables get gPC expansions; in this case the simulation will use the healthy perfusion and expand it with gPC. You can use gPC on multiple parameters by adding more parameter arguments, as in the example below.
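
For example, to expand both the healthy perfusion and the healthy thermal conductivity (assuming a --k_0_healthy argument exists, per the k_0_healthy option above; verify with the argument listing below):

python $DDDAS_SRC/Examples/TreatmentPlanning/gPCWFS.py --config_ini=control_dog1.ini --pre_run=ranger_setup_driver --w_0_healthy --k_0_healthy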

You can check the available arguments for more input gPC parameters with this command (the same one without arguments):

python $DDDAS_SRC/Examples/TreatmentPlanning/gPCWFS.py

After you use that command, you should have two new directories with many more subdirectories. The subdirectories are time.000.* (temperature response function), time.001.* (dose response function), realization.* (the realization), and worst.* (permutations of the worst-case scenario; DF thinks the first and last realizations should be the true extremes). The layout looks something like the sketch below.
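
Illustratively (the exact names and counts depend on your run; this layout is inferred from the patterns above):

dog1_0/
	realization.0000/  realization.0001/  ...
	time.000.0000/     time.000.0001/     ...
	time.001.0000/     time.001.0001/     ...
	worst.0000/        worst.0001/        ...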

3) Submit to the queue. Change the directory as needed (dog1_0 is the example here).

qsub -v WORKDIR=dog1_0 realization.qsub

You can ask the supercomputer to e-mail you about its progress. Just open ‘realization.qsub’ and add the following two lines:

#$ -M samuel.fahrenholtz@gmail.com
#$ -m be

(‘-M’ sets the e-mail address)
(‘-m’ indicates when you should get the e-mail; in this case at the beginning and end of the job, hence ‘be’)
