Testing

Cian Wilson edited this page Oct 27, 2016 · 9 revisions


The underlying libraries PETSc and FEniCS provide the majority of TerraFERMA's functionality and are used extensively and successfully in numerous other projects. However, in general, these projects are standalone and, although functionality is shared through their use of the libraries, there is still significant scope for implementational bugs or misuse of the libraries. TerraFERMA aims to share more of the common model infrastructure between models by providing access to a subset of the libraries' functionality through a common interface. This means that testing one model provides some reassurance as to the validity, or at least the correct functionality, of another as they will both now interact with the libraries through a common interface.

Testing, therefore, is an essential component of TerraFERMA's strategy. Without it, each model run through TerraFERMA might as well be written independently, requiring debugging not only of the physical and numerical strategies of the model but also of its implementation. Implementational bugs still exist in TerraFERMA but, once found, they are fixed for all subsequent models (and tested against a representative sample of old ones). This still leaves ample scope for errors in the user input, for example singular equations or erroneous solver strategies, but at this level the user is focussing on the physics of the problem rather than (in the majority of cases) the potential for errors in its implementation.

Since there is still ample scope within this strategy for user errors, tests also act as a repository of example input files on which new models can be based or to which they can be compared. Owing to the hierarchical options tree used in TerraFERMA, such comparison of different models is relatively easy and, thanks to SPuD, subsets of options can be (with a little care) copied between models without unnecessary duplication of any underlying code.

Testing not only helps ensure that changes to TerraFERMA do not adversely affect the results of previous simulations but also that changes to the underlying libraries remain backwards-compatible. PETSc and FEniCS are cutting-edge numerical libraries that are under active development and so frequently change. Creating a common interface to their functionality means that updates to the library interfaces need only be performed in one location, not across multiple independent files as would be the case with standalone models. This insulates users somewhat from changing dependency versions, as TerraFERMA's interface does not generally change and, when it does, old input files can be easily updated. Meanwhile, testing helps ensure that any updates to the underlying libraries (or TerraFERMA's interface to them) that change the results of previous TerraFERMA simulations can be investigated before being released.

Testing therefore helps ensure reproducibility. This is also facilitated by TerraFERMA's design, in which all available options are saved in an input file from which the entire model is built (other runtime input files, such as meshes, may supplement this). In many standalone models, command-line options systems from the libraries do provide some flexibility to users, but they cannot give the ease of reproducibility offered by an input file system, particularly a hierarchical one that guides user input. Library command-line options are also dependent on what has been implemented in the model so, even if a library offers a particular option, it does not mean the model supports it because the correct interface to the library may not have been implemented. Using TerraFERMA's hierarchical options system, only options that are available are offered and the test suites can be used to check how well tested any particular option is.

TerraFERMA is tested through three different sets of input files (.tfmls), those in the tests directory of the main repository, those in a separate benchmarks repository and those in the tutorials directory of the main repository.
Subsets of these are built, run and/or tested by the TerraFERMA buildbot through a common simulation harness that uses its own options file system (.shml) to describe and (optionally) test TerraFERMA simulations.

In the directory/repository of choice it is possible to run all simulations that have an .shml file using the command:

tfsimulationharness --test -r -- '*.shml'

but note that, even when only selecting the short tests (using the -l short option), this will likely take a very long time, so it is normally best to select some subset of the .shml files to test. An individual test can be run using:

tfsimulationharness --test <shml file name>

Tests

The tests directory of the TerraFERMA repository contains the main testing suite. Each subdirectory contains at least one .tfml TerraFERMA input file and one .shml simulation harness input file (plus any additional required input files, e.g. meshes). The former describes the problem and is used to build and run TerraFERMA, the latter describes the test and is used to run the simulation harness.
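The division of labour between the two files can be illustrated with a small, hypothetical regression check of the kind an .shml test encodes: after the harness has built and run the simulation described by the .tfml file, a diagnostic extracted from the output is compared against a stored reference value. The diagnostic name and the values below are invented for illustration and are not from any actual test.

```python
# Hypothetical sketch of the kind of regression check an .shml test
# performs: compare a diagnostic from a finished run against a stored
# reference value within a tolerance.  The names and numbers here are
# invented for illustration.
def check_diagnostic(computed, reference, rel_tol=1.0e-9):
    """Return True if computed agrees with reference to rel_tol."""
    return abs(computed - reference) <= rel_tol * abs(reference)

# e.g. a final root-mean-square velocity from a convection benchmark
reference_vrms = 42.864947  # value recorded from a trusted earlier run
computed_vrms = 42.864947   # value parsed from the new run's output

assert check_diagnostic(computed_vrms, reference_vrms)
```

In the real harness such checks are expressed in the .shml file, so the pass/fail criteria travel with the input files that define the problem.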

To minimize the runtime of the tests, they tend to be quite simple numerical verification tests including:

  • very simple tests that just project from different coefficient types to fields (e.g. projection_*)
  • convergence tests using the method of manufactured solutions for different equations
    • Poisson's equation with different boundary conditions, null-spaces and discretizations (e.g. mms_poisson*)
    • Stokes equations with different discretizations (e.g. mms_stokes*)
    • steady state advection-diffusion equation (e.g. mms_advdiff_excludemass)
    • nonlinear Poisson's equation with different nonlinear solvers (e.g. mms_nonlinear_coupled_poisson)
  • additional tests of specific pieces of functionality
    • adaptive timestepping (e.g. *adaptive_dt_1d)
    • semi-Lagrangian advection (e.g. semilagrangian)
    • expression assignment by region (e.g. region_ids)
    • "variational inequality" bounded nonlinear solvers (e.g. dg_advection_1d_vi)

In addition to these tests of TerraFERMA itself, the diamond_validation test checks that all the test and tutorial (see below) input files are valid against the current schemas.

Tests are run regularly by every TerraFERMA and Dorsal buildbot builder and on Ubuntu after every commit to the master and dolfin-master branches.

Users are encouraged to run the short tests after installation to verify a successful build. This can be done directly in the tests directory using the simulation harness:

tfsimulationharness -l short --test -r -- '*.shml'

or in the build directory using:

make run_shorttests

or in a Dorsal directory using:

run_tftests

As this can still take some time, the first two can be sped up by running tests over multiple processes:

tfsimulationharness -l short -n <number of processes> --test -r -- '*.shml'

or, in the build directory:

THREADS=<number of processes> make run_shorttests

Note that these may use more than the specified number of processes because individual tests may themselves run in parallel.
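The way a harness distributes independent tests over a fixed number of worker processes can be sketched with a simple pool; all file names here are hypothetical, and the worker is a stand-in rather than a real invocation of tfsimulationharness. As noted above, a test that itself runs in parallel would multiply the process count beyond the pool size.

```python
# Sketch of distributing independent tests over worker processes
# (cf. the -n option).  run_test is a stand-in: a real worker would
# spawn the simulation, which may itself use several processes.
from multiprocessing import Pool

def run_test(shml):
    """Stand-in for running the harness on one .shml file."""
    return (shml, True)

tests = ["projection_cg1.shml", "mms_poisson_dirichlet.shml",
         "semilagrangian.shml"]

if __name__ == "__main__":
    with Pool(processes=2) as pool:   # two workers, three tests
        results = pool.map(run_test, tests)
    assert all(ok for _, ok in results)
```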

Benchmarks

We have performed an extensive range of geodynamic benchmarks using TerraFERMA. The input files for these are maintained in their own repository and (unlike the test suite above) only work with the master branch of TerraFERMA. To get the repository, simply clone it to a suitable location:

git clone https://github.com/TerraFERMA/benchmarks.git

The benchmarks considered include:

The full benchmarks as described in the above publications can take a long time to run to completion (e.g. if they run a time-dependent simulation that must reach steady state). We therefore provide a brief overview of each benchmark and the results obtained from a full run in the descriptions folder. The latest compiled version of this is available on figshare:

Shorter versions of the benchmarks, where we, for example, jump to a steady-state solution directly rather than time-stepping towards it, are also available. To list them, see:

tfsimulationharness -l short --just-list -r -- '*.shml'

Some of these are run regularly on the buildbot, for example:

  • Blankenbach et al., 1989
  • Simpson and Spiegelman, 2011

Additionally, all benchmark input files are build-tested by the buildbot on all platforms.

Tutorials

The tutorials directory of the TerraFERMA repository contains example input files referenced by the cookbook and the source files for the cookbook itself (in the manual directory).

While the tutorials are not verification tests themselves, they are build-tested on Ubuntu by the buildbot after every commit. In addition, some are run and regression-tested. This ensures that they stay up to date.
