Add a test case

A test case is composed of an input directory with:

  • the input files required by the executable,
  • a test-settings.yaml file with the settings,
  • any optional python modules for performing additional tests.

Warning

The input directory of a test case shall not contain any of the files created by the execution of the executable or of the additional python modules, otherwise they may badly interfere with the executions done by pytest. In other words: do not run anything in the input directory of a test case; this directory shall only contain input data.

The test-settings.yaml file is used by pytest-executable for several things. When this file is found, pytest-executable will:

  1. create the output directory of the test case and, if needed, its parents,
  2. execute the tests defined in the default test module,
  3. execute the tests defined in the additional test modules,
  4. execute the tests defined in the parent directories.

The parents of an output directory are created such that the path from the directory where pytest is executed to the input directory of the test case is the same, except for the first parent. This way, the directory hierarchy below the first parent is the same in both the input and the output trees.

If test-settings.yaml is empty, then the default settings are used. If --exe-default-settings is not set, the default settings are the builtin ones:

../src/pytest_executable/test-settings.yaml

The following gives a description of the contents of test-settings.yaml.
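For reference, a test-settings.yaml combining all the sections described in the rest of this page could look like the following; the data names and values are placeholders, not defaults:

```yaml
runner:
    nproc: 10
    timeout: 1h 30min
references:
    - output/file
    - '**/*.txt'
tolerances:
    data-name1:
        abs: 1.
        rel: 0.01
marks:
    - slow
```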

Note

If settings other than the ones described below exist in test-settings.yaml, they will be ignored by pytest-executable. This means that you can use test-settings.yaml to store settings for other purposes than pytest-executable.

Runner section

The purpose of this section is to precisely define how to run the executable for each test case. The runner section contains key-value pairs of settings used to replace placeholders in the runner script passed to --exe-runner. For a key to be replaced, the runner script shall contain the key between double curly braces.

For instance, if the test-settings.yaml of a test case contains:

runner:
   nproc: 10

and the runner script passed to --exe-runner contains:

mpirun -np {{nproc}} executable

then this line in the actual script used to run the test case will be:

mpirun -np 10 executable
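The placeholder replacement can be sketched in Python; this is not the plugin's actual implementation, only an illustration of the {{key}} substitution assuming a simple regular-expression scan:

```python
import re

def substitute(script: str, runner_settings: dict) -> str:
    """Replace each {{key}} placeholder with the value of the runner setting."""
    def replace(match: re.Match) -> str:
        return str(runner_settings[match.group(1).strip()])
    return re.sub(r"\{\{([^{}]+)\}\}", replace, script)

print(substitute("mpirun -np {{nproc}} executable", {"nproc": 10}))
# mpirun -np 10 executable
```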

The runner section may also contain the timeout key to set the maximum duration of the execution. When this duration is reached and the execution is not finished, the execution is failed, and so, most likely, are the other tests that rely on the outcome of the executable. If timeout is not set, there is no duration limit. The duration can be expressed with one or more numbers, each followed by its unit and separated by a space, for instance:

runner:
   timeout: 1h 2m 3s

The available units are:

  • y, year, years
  • m, month, months
  • w, week, weeks
  • d, day, days
  • h, hour, hours
  • min, minute, minutes
  • s, second, seconds
  • ms, millis, millisecond, milliseconds

Reference section

The reference files are used to check for regressions on the files created by the executable. Those checks can be done by comparing the files with a tolerance, see yaml-tol. The references section shall contain a list of paths to the files to be compared. A path shall be defined relative to the test case output directory, and it may use any shell pattern such as **, *, or ?, for instance:

references:
   - output/file
   - '**/*.txt'

Note that pytest-executable does not know how to check for regressions on files: you have to implement the tests by yourself. To get the paths to the reference files in a test function, use the fixture regression-path-fixtures.

Tolerances section

A tolerance is used to define how close two data shall be to be considered equal. It can be used when checking for regressions by comparing files, see yaml-ref. To set the tolerances for the data named data-name1 and data-name2:

tolerances:
    data-name1:
        abs: 1.
    data-name2:
        rel: 0.
        abs: 0.

For a given name, if one of the tolerance values is not defined, like the rel one for data-name1, then its value will be set to 0..

Note that pytest-executable does not know how to use a tolerance: you have to implement it yourself in a test. To get the tolerances in a test function, use the tolerances-fixtures.
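For instance, a comparison helper using both tolerances could look like this; the convention chosen here (pass when within either the absolute bound or the relative bound) is an assumption for illustration, since the plugin leaves the comparison logic to your tests:

```python
def within_tolerances(actual: float, reference: float,
                      abs_tol: float = 0.0, rel_tol: float = 0.0) -> bool:
    """Return True when the two values differ by no more than the absolute
    tolerance or the relative tolerance times the reference magnitude."""
    return abs(actual - reference) <= max(abs_tol, rel_tol * abs(reference))

# With abs: 1. as for data-name1 above, a deviation of 0.5 passes.
print(within_tolerances(10.5, 10.0, abs_tol=1.0))
# True
```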

Marks section

A mark is a pytest feature that allows selecting some of the tests to be executed, see mark_usage. This is how to add marks to a test case, for instance the slow and big marks:

marks:
   - slow
   - big

Such a declared mark will be set on all the test functions in the directory of a test case, whether they come from the default test module or from an additional module.

You can also use marks that already exist. In particular, the skip and xfail marks provided by pytest can be used. The skip mark tells pytest to record but not execute the built-in test events of a test case. The xfail mark tells pytest to expect that at least one of the built-in test events will fail.

Marks declaration

The marks defined in all the test cases shall be declared to pytest in order to be used. This is done in the file pytest.ini, which shall be created in the parent folder of the test inputs directory tree, where the pytest command is executed. This file shall have the format:

[pytest]
markers =
    slow: one line explanation of what slow means
    big: one line explanation of what big means
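Once declared, the marks can be used to select a subset of the test cases on the command line with pytest's standard -m option; the inputs directory and runner script below are hypothetical:

```shell
# run only the test cases marked as slow, excluding the big ones
pytest inputs/ --exe-runner runner.sh -m "slow and not big"
```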