Once TerraFERMA has been installed we still need to build an executable to run. Executables are specific to an options file, so one must be built for each options file. There are several ways of doing this. All use CMake to configure the build, but CMake can be invoked manually, through a script, or through the TerraFERMA simulation harness.
diamond <tfml file name>
Here we assume a tfml file is already prepared, but more information about describing problems in TerraFERMA is available in the documentation section. In particular, worked examples are given in the cookbook, with corresponding completed tfml files in the subdirectories of the tutorials folder of the TerraFERMA source:
git clone https://github.com/TerraFERMA/TerraFERMA.git
Assuming we're in a directory with a suitable tfml file, we can use CMake to perform an out-of-tree build:
mkdir build
cd build
cmake -DOPTIONSFILE=../<tfml file name> $TF_CMAKE_PATH
where the name of the tfml file should be edited appropriately. $TF_CMAKE_PATH should have been set in your environment at the end of the installation process.
Once CMake completes the configuration process, you can build the executable:

make
We also provide a shortcut for running the simulation:

make run
This shortcut has the advantage of automatically rebuilding the executable when changes are made to compile-time options in the tfml file. If recompilation isn't required, it is equivalent to:
./terraferma -vINFO -l ../<tfml file name>
where -v controls the verbosity and -l redirects stdout and stderr to log and error files. The log level used by make run defaults to INFO but can be changed through CMake at configuration time. Similarly, the executable name defaults to terraferma, which can also be changed.
Available log levels are:
- ERROR - only errors will be printed (quietest)
- WARNING - warnings and errors will be printed
- INFO - general information about the simulation (plus warnings and errors)
- DBG - extra information useful for debugging (most verbose)
This log level is also passed to DOLFIN unless specified independently using the -d command line argument.
Some components of PETSc logging can be controlled from the options file using the solver monitors.
In addition, the command line argument -p turns on more verbose PETSc output.
Full documentation of the command line arguments is available by typing:

./terraferma -h
which produces information about the version of the TerraFERMA buckettools library being used as well as the available arguments. For example:
GitHash: cf3da8d6c8fbe5877530637e4c9277dd2517c0ac Tue May 19 16:24:34 2015 +0100
CompileTime: Jun 10 2015 23:40:46
Usage: ./terraferma [options ...] <simulation-file>
Options:
-v <level>, --verbose <level>
  Verbose output, defaults to WARNING (30) if unset. Available options: ERROR (40), WARNING (30), INFO (20), DEBUG (10), DBG (10) or any integer.
-d <level>, --dolfin-verbose <level>
  Verbose DOLFIN output, defaults to match -v if unset. Available options: CRITICAL (50), ERROR (40), WARNING (30), INFO (20), PROGRESS (16), TRACE (13), DEBUG (10), DBG (10) or any integer.
-p, --petsc-info
  Verbose PETSc output.
-l, --log
  Create log (redirects stdout) and error (redirects stderr) files for each process.
-V, --version
  Prints version information then exits.
-h, --help
  Help! Prints this message then exits.
Note that the commands used above to run TerraFERMA assume that all required input files (e.g. mesh files) are referenced in the tfml file with the correct path relative to the executable or are copied into the build directory. This can be a bit of a hassle when using out-of-tree builds like the ones above, but see below for directions on better managing input files using the simulation harness.
With a few exceptions, most features of TerraFERMA are available in parallel (mostly thanks to the parallelism of the underlying FEniCS and PETSc (https://www.mcs.anl.gov/petsc) libraries). To run in parallel, simply follow the instructions above for building the executable, then run with:
mpiexec -np <number of processes> ./terraferma -vINFO -l ../<tfml file name>
where, again, the tfml file name needs to be entered, and now the number of processes requested should also be edited. Note that the -l argument will now produce a log and an error file per process but, other than that, the principal outputs should be unchanged from a serial run.
To simplify the build process described above, a very simple build script, tfbuild, is provided with TerraFERMA. To use it, simply type:
tfbuild <tfml file name>
where a suitable tfml file name needs to be supplied. This automates the initial steps described above for manual builds, creating a build directory (the name of which defaults to build) and calling CMake to configure the build. For this step to work, your environment needs to be set correctly following installation. The default build directory name can be changed, along with other options, at the command line. For more details see the documentation for tfbuild.
Once configuration is complete the instructions become identical to the manual process:
cd build
make
to build and:
./<tfml file basename> -vINFO -l ../<tfml file name>
to run. Note that tfbuild changes the default executable name to the basename of the tfml file, rather than the terraferma default used in the manual builds above.
Similarly, in parallel simply run:
mpiexec -np <number of processes> ./<tfml file basename> -vINFO -l ../<tfml file name>
where the number of processes needs to be set, as well as the executable name (the tfml file basename) and the tfml file name.
The TerraFERMA simulation harness is a tool for managing the builds and runtime output of TerraFERMA simulations. It takes as
input its own options file, with extension
.shml for simulation harness markup language. Similar to a tfml file, a shml file is
written using an xml syntax with rules provided by the simulation harness options schema. Also like a tfml
file, it is edited using diamond.
shml files can be relatively simple if just being used to manage a single run from a single tfml file; however, the simulation harness's real utility is in performing parameter sweeps, where it can edit the base tfml file and generate multiple runs organized by parameter into a directory structure. It can also interrogate the output of multiple runs, collate their data, and produce output and/or test the data. If a simulation has any dependencies, such as mesh generation, these can also be run from the harness, with relevant parameters passed to the dependency. Required input files can also be associated with the input base tfml file; these are then automatically copied, along with any dependency output, to the run directory of the simulation.
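As a rough sketch of what such a file can look like (element names like harness_options, input_file, string_value and values below are illustrative assumptions; simulations, simulation, parameter_sweep and parameter follow the option paths quoted elsewhere in this document, and the authoritative structure is the simulation harness options schema, best browsed by opening a shml file in diamond), a single simulation with a one-parameter sweep might be described along these lines:

```xml
<!-- Hypothetical shml sketch only: consult the simulation harness
     options schema in diamond for the actual required elements. -->
<harness_options>
  <simulations>
    <simulation name="example">
      <input_file>
        <string_value>example.tfml</string_value>  <!-- base tfml edited per run -->
      </input_file>
      <parameter_sweep>
        <parameter name="viscosity">
          <values>1.0 10.0</values>  <!-- one run directory per value -->
        </parameter>
      </parameter_sweep>
    </simulation>
  </simulations>
</harness_options>
```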
For more details of how to use the simulation harness please see the additional tools section. Here we will assume that a
valid shml file is available. We use shml files for all our tests and an increasing number of our benchmarks so an example
should not be far away. There are also examples of using shml files in the cookbook with corresponding
worked examples in the
tutorials directory of the source.
Given a shml file corresponding to a tfml file that you want to run, simply type:
tfsimulationharness --run <shml file name>
This will configure, build and run the simulation(s). Each of these steps can also be invoked individually via the corresponding command line argument. Also, if the shml file contains tests, then
--test can be used to run them.
The simulation harness separates the simulation executable builds and runs into different directories (because there isn't necessarily the same number of builds as runs if parameters only affect runtime options; see the additional tools section).
Output from a run can be found in
<tfml file name>.run/.../run_0 where additional intermediate directories represent levels for any parameters used.
Running simulations in parallel in the simulation harness is a little different from running TerraFERMA directly. Because the shml file may describe multiple simulations, each one can be set to run on a different number of processes. The simplest way of doing this is
to open the shml file in diamond and set the option
/simulations/simulation/number_processes to the number of processes you want
to run that simulation on. This defaults to 1 if not activated. Alternatively, if you are sweeping over a parameter space using
the harness and want different parameter values to run on different numbers of processes use the option
/simulations/simulation/parameter_sweep/parameter/process_scale and provide a list of process scales of equal length to the list
of parameter values. These are scales, not absolute values: they multiply the base number of processes set in the first option, combined with any other scales applied to other parameters (if sweeping over multiple parameters). Once one or both of these options is set, you can
use the simulation harness in exactly the same manner as in serial.
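Putting the two options together, a hedged sketch of the relevant shml fragment might look like the following. The option paths /simulations/simulation/number_processes and /simulations/simulation/parameter_sweep/parameter/process_scale come from the text above; the value element names (integer_value, values) are assumptions, so check the schema in diamond for the real structure:

```xml
<!-- Hypothetical fragment: with a base of 2 processes, the two parameter
     values run on 2*1 = 2 and 2*2 = 4 processes respectively. -->
<simulation name="example">
  <number_processes>
    <integer_value>2</integer_value>  <!-- base number of processes (defaults to 1) -->
  </number_processes>
  <parameter_sweep>
    <parameter name="resolution">
      <values>32 64</values>                <!-- assumed element name -->
      <process_scale>1 2</process_scale>    <!-- same length as the list of values -->
    </parameter>
  </parameter_sweep>
</simulation>
```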