From a0fdc99f32b06513e966bba784d7cd3e8e5c53c3 Mon Sep 17 00:00:00 2001 From: Vasileios Karakasis Date: Tue, 9 Mar 2021 23:35:12 +0100 Subject: [PATCH 1/3] Update tutorial examples to use the parameter builtin. Also add an example that shows the `RegressionMixin`. --- docs/regression_test_api.rst | 10 +- docs/tutorial_advanced.rst | 203 ++++++++++-------- docs/tutorial_basics.rst | 49 +++-- docs/tutorial_deps.rst | 99 +++++---- .../advanced/containers/container_test.py | 12 +- tutorials/advanced/makefiles/maketest.py | 18 +- .../advanced/makefiles/maketest_mixin.py | 37 ++++ tutorials/advanced/parameterized/stream.py | 8 +- tutorials/basics/hello/hello2.py | 8 +- tutorials/deps/osu_benchmarks.py | 8 +- 10 files changed, 272 insertions(+), 180 deletions(-) create mode 100644 tutorials/advanced/makefiles/maketest_mixin.py diff --git a/docs/regression_test_api.rst b/docs/regression_test_api.rst index 46f43f6bfa..fb406a100f 100644 --- a/docs/regression_test_api.rst +++ b/docs/regression_test_api.rst @@ -48,7 +48,7 @@ This provides the ReFrame internals with further control over the user's input, In essence, these builtins exert control over the test creation, and they allow adding and/or modifying certain attributes of the regression test. -.. py:function:: reframe.core.pipeline.RegressionTest.parameter(values=None, inherit_params=False, filter_params=None) +.. py:function:: RegressionTest.parameter(values=None, inherit_params=False, filter_params=None) Inserts or modifies a regression test parameter. If a parameter with a matching name is already present in the parameter space of a parent class, the existing parameter values will be combined with those provided by this method following the inheritance behavior set by the arguments ``inherit_params`` and ``filter_params``. 
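The interplay of ``inherit_params`` and ``filter_params`` can be sketched with a small plain-Python model (illustrative only — ``combine_params`` is a hypothetical helper, not ReFrame's internal implementation):

```python
def combine_params(parent_values, new_values,
                   inherit_params=False, filter_params=None):
    """Model of how a child's parameter values may be combined with an
    inherited parameter (hypothetical sketch, not ReFrame code)."""
    if not inherit_params:
        # Without inheritance, the parent's values are discarded.
        return list(new_values)

    inherited = list(parent_values)
    if filter_params is not None:
        # filter_params trims the inherited values before extension.
        inherited = list(filter_params(inherited))

    return inherited + list(new_values)


# A child that inherits ['a', 'b'] and adds 'c' ends up with all three:
print(combine_params(['a', 'b'], ['c'], inherit_params=True))
```

Under this model, leaving ``inherit_params`` at its default would keep only ``['c']``, mirroring the documented behavior of redefining a parent parameter.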
@@ -67,7 +67,7 @@ In essence, these builtins exert control over the test creation, and they allow else: do_other() - One of the most powerful features about these built-in functions is that they store their input information at the class level. + One of the most powerful features about these built-in functions is that they store their input information at the class level. However, a parameter may only be accessed from the class instance and accessing it directly from the class body is disallowed. With this approach, extending or specializing an existing parametrized regression test becomes straightforward, since the test attribute additions and modifications made through built-in functions in the parent class are automatically inherited by the child test. For instance, continuing with the example above, one could override the :func:`__init__` method in the :class:`Foo` regression test as follows: @@ -83,7 +83,7 @@ In essence, these builtins exert control over the test creation, and they allow Note that this built-in parameter function provides an alternative method to parameterize a test to :func:`reframe.core.decorators.parameterized_test`, and the use of both approaches in the same test is currently disallowed. The two main advantages of the built-in :func:`parameter` over the decorated approach reside in the parameter inheritance across classes and the handling of large parameter sets. - As shown in the example above, the parameters declared with the built-in :func:`parameter` are automatically carried over into derived tests through class inheritance, whereas tests using the decorated approach would have to redefine the parameters on every test. + As shown in the example above, the parameters declared with the built-in :func:`parameter` are automatically carried over into derived tests through class inheritance, whereas tests using the decorated approach would have to redefine the parameters on every test. 
Similarly, parameters declared through the built-in :func:`parameter` are regarded as fully independent from each other and ReFrame will automatically generate as many tests as available parameter combinations. This is a major advantage over the decorated approach, where one would have to manually expand the parameter combinations. This is illustrated in the example below, consisting of a case with two parameters, each having two possible values. @@ -114,11 +114,11 @@ In essence, these builtins exert control over the test creation, and they allow This only has an effect if used with ``inherit_params=True``. -.. py:function:: reframe.core.pipeline.RegressionTest.variable(*types, value=None) +.. py:function:: RegressionTest.variable(*types, value=None) Inserts a new regression test variable. Declaring a test variable through the :func:`variable` built-in allows for a more robust test implementation than if the variables were just defined as regular test attributes (e.g. ``self.a = 10``). - Using variables declared through the :func:`variable` built-in guarantees that these regression test variables will not be redeclared by any child class, while also ensuring that any values that may be assigned to such variables comply with its original declaration. + Using variables declared through the :func:`variable` built-in guarantees that these regression test variables will not be redeclared by any child class, while also ensuring that any values that may be assigned to such variables comply with its original declaration. In essence, by using test variables, the user removes any potential test errors that might be caused by accidentally overriding a class attribute. See the example below. diff --git a/docs/tutorial_advanced.rst b/docs/tutorial_advanced.rst index 4e4e2564b7..cdbada6b09 100644 --- a/docs/tutorial_advanced.rst +++ b/docs/tutorial_advanced.rst @@ -28,22 +28,12 @@ Here is the adapted code with the relevant parts highlighted (for simplicity, we .. 
literalinclude:: ../tutorials/advanced/parameterized/stream.py :lines: 6- - :emphasize-lines: 5,7-10,18-19 + :emphasize-lines: 7,10-11,20-21 -A parameterized test needs to be decorated with the :func:`@parameterized_test ` decorator and must define its constructor such as to accept a set of parameters. -In this case, the test takes a single parameter, which is the size of the benchmark's working set in bytes. -Let's explain now the strange syntax of the arguments to the decorator. -The :func:`@parameterized_test ` decorator accepts a variable set of arguments, where each argument is a set of parameters (as an iterable) to be used for instantiating the test. -To better contemplate this, let's decorate this test in an equivalent, but much more verbose way: - -.. code:: python - - @rfm.parameterized_test([524288], [1048576], [2097152], [4194304], - [8388608], [16777216], [33554432], [67108864], - [134217728], [268435456], [536870912]) - -For each of the argument lists passed to the decorator, ReFrame will instantiate a regression test with those arguments. -So in this example, ReFrame will generate automatically 11 tests with different ``num_bytes`` parameters. +Any ordinary ReFrame test becomes a parameterized one if the user defines parameters inside the class body of the test. +This is done using the :py:func:`~reframe.core.pipeline.RegressionTest.parameter` ReFrame built-in function, which accepts the list of parameter values. +For each parameter value, ReFrame will instantiate a different regression test by assigning the corresponding value to an attribute named after the parameter. +So in this example, ReFrame will automatically generate 11 tests with different values for their :attr:`num_bytes` attribute. From this point on, you can adapt the test based on the parameter values, as we do in this case, where we compute the STREAM array sizes, as well as the number of iterations to be performed on each benchmark, and we also compile the code accordingly.
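The expansion into eleven variants can be reproduced in plain Python; the value list below mirrors the tutorial's working-set sizes, and the name construction only illustrates the naming scheme, not ReFrame's actual internals:

```python
# Working-set sizes from 512 KiB (2**19) up to 512 MiB (2**29).
num_bytes_values = [1 << p for p in range(19, 30)]

# One test is generated per value; its name is formed by appending
# a string representation of the parameter value.
test_names = [f'StreamMultiSysTest_{v}' for v in num_bytes_values]

print(len(test_names))    # 11
print(test_names[0])      # StreamMultiSysTest_524288
print(test_names[-1])     # StreamMultiSysTest_536870912
```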
Let's try listing the generated tests: @@ -56,37 +46,38 @@ Let's try listing the generated tests: .. code-block:: none [ReFrame Setup] - version: 3.4-dev2 (rev: f52a96d8) + version: 3.6.0-dev.0+2f8e5b3b command: './bin/reframe -c tutorials/advanced/parameterized/stream.py -l' launched by: user@tresa.local working directory: '/Users/user/Repositories/reframe' - settings file: '/Users/user/Repositories/reframe/tutorials/config/settings.py' + settings file: 'tutorials/config/settings.py' check search path: '/Users/user/Repositories/reframe/tutorials/advanced/parameterized/stream.py' stage directory: '/Users/user/Repositories/reframe/stage' output directory: '/Users/user/Repositories/reframe/output' [List of matched checks] - - StreamMultiSysTest_524288 (found in '/Users/user/Repositories/reframe/tutorials/advanced/parameterized/stream.py') - - StreamMultiSysTest_8388608 (found in '/Users/user/Repositories/reframe/tutorials/advanced/parameterized/stream.py') - StreamMultiSysTest_2097152 (found in '/Users/user/Repositories/reframe/tutorials/advanced/parameterized/stream.py') - - StreamMultiSysTest_33554432 (found in '/Users/user/Repositories/reframe/tutorials/advanced/parameterized/stream.py') - - StreamMultiSysTest_268435456 (found in '/Users/user/Repositories/reframe/tutorials/advanced/parameterized/stream.py') - - StreamMultiSysTest_134217728 (found in '/Users/user/Repositories/reframe/tutorials/advanced/parameterized/stream.py') - StreamMultiSysTest_67108864 (found in '/Users/user/Repositories/reframe/tutorials/advanced/parameterized/stream.py') - - StreamMultiSysTest_16777216 (found in '/Users/user/Repositories/reframe/tutorials/advanced/parameterized/stream.py') + - StreamMultiSysTest_1048576 (found in '/Users/user/Repositories/reframe/tutorials/advanced/parameterized/stream.py') - StreamMultiSysTest_536870912 (found in '/Users/user/Repositories/reframe/tutorials/advanced/parameterized/stream.py') - StreamMultiSysTest_4194304 (found in 
'/Users/user/Repositories/reframe/tutorials/advanced/parameterized/stream.py') - - StreamMultiSysTest_1048576 (found in '/Users/user/Repositories/reframe/tutorials/advanced/parameterized/stream.py') + - StreamMultiSysTest_33554432 (found in '/Users/user/Repositories/reframe/tutorials/advanced/parameterized/stream.py') + - StreamMultiSysTest_8388608 (found in '/Users/user/Repositories/reframe/tutorials/advanced/parameterized/stream.py') + - StreamMultiSysTest_268435456 (found in '/Users/user/Repositories/reframe/tutorials/advanced/parameterized/stream.py') + - StreamMultiSysTest_16777216 (found in '/Users/user/Repositories/reframe/tutorials/advanced/parameterized/stream.py') + - StreamMultiSysTest_524288 (found in '/Users/user/Repositories/reframe/tutorials/advanced/parameterized/stream.py') + - StreamMultiSysTest_134217728 (found in '/Users/user/Repositories/reframe/tutorials/advanced/parameterized/stream.py') Found 11 check(s) - Log file(s) saved in: '/var/folders/h7/k7cgrdl13r996m4dmsvjq7v80000gp/T/rfm-kk15vaow.log' + + Log file(s) saved in: '/var/folders/h7/k7cgrdl13r996m4dmsvjq7v80000gp/T/rfm-s_ty1l50.log' ReFrame generates 11 tests from the single parameterized test that we have written and names them by appending a string representation of the parameter value. Test parameterization in ReFrame is very powerful since you can parameterize your tests on anything and you can create complex parameterization spaces. A common pattern is to parameterize a test on the environment module that loads a software in order to test different versions of it. -For this reason, ReFrame offers the :func:`reframe.utility.find_modules` function, which allows you to parameterize test on the available modules for a given programming environment and partition combination. 
-The following will create a test for each ``GROMACS`` module found on the software stack associated with a system partition and programming environment (toolchain): +For this reason, ReFrame offers the :func:`~reframe.utility.find_modules` function, which allows you to parameterize a test on the available modules for a given programming environment and partition combination. +The following example will create a test for each ``GROMACS`` module found on the software stack associated with a system partition and programming environment (toolchain): .. code:: python @@ -94,19 +85,15 @@ The following will create a test for each ``GROMACS`` module found on the softwa import reframe.utility as util - @rfm.parameterized_test(*util.find_modules('GROMACS')) - class MyTest(rfm.RunOnlyRegressionTest): - def __init__(self, s, e, m): - self.descr = f'GROMACS test ({s}, {e}, {m})' + @rfm.simple_test + class MyTest(rfm.RegressionTest): + module_info = parameter(util.find_modules('GROMACS')) + + def __init__(self): + s, e, m = self.module_info self.valid_systems = [s] self.valid_prog_environs = [e] self.modules = [m] - ... - - -.. note:: - - ReFrame 3.4 extends further the test parameterization concept by introducing the more powerful :func:`parameter` class directive, which allows you to have hierarchies of parameterized tests and expand or reduce the parameterization space dynamically. @@ -148,10 +135,10 @@ Let's have a look at the test itself: .. literalinclude:: ../tutorials/advanced/makefiles/maketest.py :lines: 6-22 - :emphasize-lines: 11,13-14 + :emphasize-lines: 13,15-16 First, if you're using any build system other than ``SingleSource``, you must set the :attr:`executable` attribute of the test, because ReFrame cannot know what is the actual executable to be run. -We then set the build system to :class:`Make ` and set the preprocessor flags as we would do with the :class:`SingleSource` build system. 
+We then set the build system to :class:`~reframe.core.buildsystems.Make` and set the preprocessor flags as we would do with the :class:`SingleSource` build system. Let's inspect the build script generated by ReFrame: @@ -177,7 +164,7 @@ Let's inspect the build script generated by ReFrame: The compiler variables (``CC``, ``CXX`` etc.) are set based on the corresponding values specified in the `configuration `__ of the current environment. -We can instruct the build system to ignore the default values from the environment by setting its :attr:`flags_from_environ ` attribute to false: +We can instruct the build system to ignore the default values from the environment by setting its :attr:`~reframe.core.buildsystems.Make.flags_from_environ` attribute to false: .. code-block:: python @@ -190,8 +177,8 @@ In this case, ``make`` will be invoked as follows: make -j 1 CPPFLAGS="-DELEM_TYPE=float" Notice that the ``-j 1`` option is always generated. -We can increase the build concurrency by setting the :attr:`max_concurrency ` attribute. -Finally, we may even use a custom Makefile by setting the :attr:`Make ` attribute: +We can increase the build concurrency by setting the :attr:`~reframe.core.buildsystems.Make.max_concurrency` attribute. +Finally, we may even use a custom Makefile by setting the :attr:`~reframe.core.buildsystems.Make.makefile` attribute: .. code-block:: python @@ -200,7 +187,7 @@ Finally, we may even use a custom Makefile by setting the :attr:`Make ` refers to a directory. +ReFrame could automatically figure out the correct build system if :attr:`~reframe.core.pipeline.RegressionTest.sourcepath` refers to a directory. ReFrame will inspect the directory and it will first try to determine whether this is a CMake or Autotools-based project. More details on ReFrame's build systems can be found `here `__.
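The way these pieces combine into the final ``make`` invocation can be sketched as follows (a plain-Python illustration with hypothetical names — not ReFrame's actual build-system code):

```python
def make_command(cppflags=None, max_concurrency=1, makefile=None):
    # The -j option is always emitted; 1 is the default concurrency.
    cmd = ['make', '-j', str(max_concurrency)]
    if makefile is not None:
        # A custom Makefile is selected with -f.
        cmd += ['-f', makefile]
    if cppflags is not None:
        cmd.append(f'CPPFLAGS="{cppflags}"')
    return ' '.join(cmd)


print(make_command(cppflags='-DELEM_TYPE=float'))
# make -j 1 CPPFLAGS="-DELEM_TYPE=float"
```

The first call reproduces the invocation shown in the text above; raising ``max_concurrency`` or passing a ``makefile`` changes only the corresponding options.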
@@ -211,7 +198,7 @@ Retrieving the source code from a Git repository It might be the case that a regression test needs to clone its source code from a remote repository. This can be achieved in two ways with ReFrame. -One way is to set the :attr:`sourcesdir` attribute to :class:`None` and explicitly clone a repository using the :attr:`prebuild_cmds `: +One way is to set the :attr:`sourcesdir` attribute to :class:`None` and explicitly clone a repository using the :attr:`~reframe.core.pipeline.RegressionTest.prebuild_cmds`: .. code-block:: python @@ -237,9 +224,9 @@ Adding a configuration step before compiling the code It is often the case that a configuration step is needed before compiling a code with ``make``. To address this kind of projects, ReFrame aims to offer specific abstractions for "configure-make" style of build systems. -It supports `CMake-based `__ projects through the :class:`CMake ` build system, as well as `Autotools-based `__ projects through the :class:`Autotools ` build system. +It supports `CMake-based `__ projects through the :class:`~reframe.core.buildsystems.CMake` build system, as well as `Autotools-based `__ projects through the :class:`~reframe.core.buildsystems.Autotools` build system. -For other build systems, you can achieve the same effect using the :class:`Make ` build system and the :attr:`prebuild_cmds ` for performing the configuration step. +For other build systems, you can achieve the same effect using the :class:`~reframe.core.buildsystems.Make` build system and the :attr:`~reframe.core.pipeline.RegressionTest.prebuild_cmds` for performing the configuration step. The following code snippet will configure a code with ``./custom_configure`` before invoking ``make``: .. code-block:: python @@ -273,9 +260,9 @@ Here is the full regression test: :lines: 6- :emphasize-lines: 6 -There is nothing special for this test compared to those presented so far except that it derives from the :class:`RunOnlyRegressionTest `. 
+There is nothing special for this test compared to those presented so far except that it derives from the :class:`~reframe.core.pipeline.RunOnlyRegressionTest`. Run-only regression tests may also have resources, as for instance a pre-compiled executable or some input data. -These resources may reside under the ``src/`` directory or under any directory specified in the :attr:`sourcesdir ` attribute. +These resources may reside under the ``src/`` directory or under any directory specified in the :attr:`~reframe.core.pipeline.RegressionTest.sourcesdir` attribute. These resources will be copied to the stage directory at the beginning of the run phase. @@ -283,7 +270,7 @@ Writing a Compile-Only Regression Test -------------------------------------- ReFrame provides the option to write compile-only tests which consist only of a compilation phase without a specified executable. -This kind of tests must derive from the :class:`CompileOnlyRegressionTest ` class provided by the framework. +This kind of tests must derive from the :class:`~reframe.core.pipeline.CompileOnlyRegressionTest` class provided by the framework. The following test is a compile-only version of the :class:`MakefileTest` presented `previously <#more-on-building-tests>`__ which checks that no warnings are issued by the compiler: .. code-block:: console @@ -292,13 +279,59 @@ The following test is a compile-only version of the :class:`MakefileTest` presen .. literalinclude:: ../tutorials/advanced/makefiles/maketest.py - :lines: 25-33 + :lines: 27-35 :emphasize-lines: 2 -What is worth noting here is that the standard output and standard error of the test, which are accessible through the :attr:`stdout ` and :attr:`stderr ` attributes, correspond now to the standard output and error of the compilation command. 
+What is worth noting here is that the standard output and standard error of the test, which are accessible through the :attr:`~reframe.core.pipeline.RegressionTest.stdout` and :attr:`~reframe.core.pipeline.RegressionTest.stderr` attributes, now correspond to the standard output and error of the compilation command. Therefore, sanity checking can be done in exactly the same way as with a normal test. +Grouping parameter packs +------------------------ + +.. versionadded:: 3.4.2 + + +In the dot product example shown above, we had two independent tests that defined the same :attr:`elem_type` parameter. +Moreover, the two tests cannot have a parent-child relationship, since one of them is a run-only test and the other is a compile-only one. +ReFrame offers the :class:`~reframe.core.pipeline.RegressionMixin` class that allows you to group parameters and variables that are meant to be reused across otherwise unrelated tests. +In the example below, we create an :class:`ElemTypeParam` mixin that holds the definition of the :attr:`elem_type` parameter and is inherited by both concrete test classes: + +.. literalinclude:: ../tutorials/advanced/makefiles/maketest_mixin.py + :lines: 6- + :emphasize-lines: 5-6,10,25 + + +Notice how the parameters are expanded in each of the individual tests: + +.. code-block:: console + + ./bin/reframe -c tutorials/advanced/makefiles/maketest_mixin.py -l + + ..
code-block:: none + + [ReFrame Setup] + version: 3.6.0-dev.0+2f8e5b3b + command: './bin/reframe -c tutorials/advanced/makefiles/maketest_mixin.py -l' + launched by: user@tresa.local + working directory: '/Users/user/Repositories/reframe' + settings file: 'tutorials/config/settings.py' + check search path: '/Users/user/Repositories/reframe/tutorials/advanced/makefiles/maketest_mixin.py' + stage directory: '/Users/user/Repositories/reframe/stage' + output directory: '/Users/user/Repositories/reframe/output' + + [List of matched checks] + - MakeOnlyTest_double (found in '/Users/user/Repositories/reframe/tutorials/advanced/makefiles/maketest_mixin.py') + - MakeOnlyTest_float (found in '/Users/user/Repositories/reframe/tutorials/advanced/makefiles/maketest_mixin.py') + - MakefileTest_double (found in '/Users/user/Repositories/reframe/tutorials/advanced/makefiles/maketest_mixin.py') + - MakefileTest_float (found in '/Users/user/Repositories/reframe/tutorials/advanced/makefiles/maketest_mixin.py') + Found 4 check(s) + + Log file(s) saved in: '/var/folders/h7/k7cgrdl13r996m4dmsvjq7v80000gp/T/rfm-e384bvkd.log' + + + Applying a Sanity Function Iteratively -------------------------------------- @@ -327,31 +360,31 @@ Here is the corresponding regression test: :emphasize-lines: 12- First, we extract all the generated random numbers from the output. -What we want to do is to apply iteratively the :func:`assert_bounded ` sanity function for each number. -The problem here is that we cannot simply iterate over the ``numbers`` list, because that would trigger prematurely the evaluation of the :func:`extractall `. +What we want to do is to apply iteratively the :func:`~reframe.utility.sanity.assert_bounded` sanity function for each number. +The problem here is that we cannot simply iterate over the ``numbers`` list, because that would trigger prematurely the evaluation of the :func:`~reframe.utility.sanity.extractall`. We want to defer also the iteration. 
-This can be achieved by using the :func:`map() ` ReFrame sanity function, which is a replacement of Python's built-in :py:func:`map` function and does exactly what we want: it applies a function on all the elements of an iterable and returns another iterable with the transformed elements. -Passing the result of the :py:func:`map` function to the :func:`all ` sanity function ensures that all the elements lie between the desired bounds. +This can be achieved by using the :func:`~reframe.utility.sanity.map` ReFrame sanity function, which is a replacement of Python's built-in :py:func:`map` function and does exactly what we want: it applies a function on all the elements of an iterable and returns another iterable with the transformed elements. +Passing the result of the :py:func:`map` function to the :func:`~reframe.utility.sanity.all` sanity function ensures that all the elements lie between the desired bounds. There is still a small complication that needs to be addressed. -As a direct replacement of the built-in :py:func:`all` function, ReFrame's :func:`all() ` sanity function returns :class:`True` for empty iterables, which is not what we want. +As a direct replacement of the built-in :py:func:`all` function, ReFrame's :func:`~reframe.utility.sanity.all` sanity function returns :class:`True` for empty iterables, which is not what we want. So we must make sure that all 100 numbers are generated. -This is achieved by the ``sn.assert_eq(sn.count(numbers), 100)`` statement, which uses the :func:`count() ` sanity function for counting the generated numbers. +This is achieved by the ``sn.assert_eq(sn.count(numbers), 100)`` statement, which uses the :func:`~reframe.utility.sanity.count` sanity function for counting the generated numbers. Finally, we need to combine these two conditions to a single deferred expression that will be assigned to the test's :attr:`sanity_patterns`. -We accomplish this by using again the :func:`all() ` sanity function. 
+We accomplish this by using the :func:`~reframe.utility.sanity.all` sanity function once more. For more information about how exactly sanity functions work and how their execution is deferred, please refer to :doc:`deferrables`. .. note:: .. versionadded:: 2.13 - ReFrame offers also the :func:`allx() ` sanity function which, conversely to the builtin :func:`all()` function, will return :class:`False` if its iterable argument is empty. + ReFrame also offers the :func:`~reframe.utility.sanity.allx` sanity function which, unlike the built-in :func:`all()` function, will return :class:`False` if its iterable argument is empty. Customizing the Test Job Script ------------------------------- It is often the case that we need to run some commands before or after the parallel launch of our executable. -This can be easily achieved by using the :attr:`prerun_cmds ` and :attr:`postrun_cmds ` attributes of a ReFrame test. +This can be easily achieved by using the :attr:`~reframe.core.pipeline.RegressionTest.prerun_cmds` and :attr:`~reframe.core.pipeline.RegressionTest.postrun_cmds` attributes of a ReFrame test. The following example is a slightly modified version of the random numbers test presented `above <#applying-a-sanity-function-iteratively>`__. The lower and upper limits for the random numbers are now set inside a helper shell script in ``limits.sh`` located in the test's resources, which we need to source before running our tests.
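The relative placement of these commands in the generated run script can be modelled in plain Python (an illustrative sketch — ``build_run_script`` and the module name are hypothetical, not ReFrame's actual script generator):

```python
def build_run_script(env_load_cmds, prerun_cmds, launch_line, postrun_cmds):
    # prerun_cmds follow the environment setup and precede the
    # parallel launch; postrun_cmds come after it.
    return env_load_cmds + prerun_cmds + [launch_line] + postrun_cmds


script = build_run_script(
    env_load_cmds=['module load cray-mpich'],      # hypothetical module
    prerun_cmds=['source limits.sh'],
    launch_line='srun ./random_numbers.sh',
    postrun_cmds=['echo FINISHED'],
)
print('\n'.join(script))
```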
-Then the commands specified in :attr:`prerun_cmds ` follow, while those specified in the :attr:`postrun_cmds ` come after the launch of the parallel job. +Then the commands specified in :attr:`~reframe.core.pipeline.RegressionTest.prerun_cmds` follow, while those specified in the :attr:`~reframe.core.pipeline.RegressionTest.postrun_cmds` come after the launch of the parallel job. The parallel launch itself consists of three parts: #. The parallel launcher program (e.g., ``srun``, ``mpirun`` etc.) with its options, -#. the regression test executable as specified in the :attr:`executable ` attribute and -#. the options to be passed to the executable as specified in the :attr:`executable_opts ` attribute. +#. the regression test executable as specified in the :attr:`~reframe.core.pipeline.RegressionTest.executable` attribute and +#. the options to be passed to the executable as specified in the :attr:`~reframe.core.pipeline.RegressionTest.executable_opts` attribute. Adding job scheduler options per test @@ -483,7 +516,7 @@ Let's see how we can rewrite the :class:`MemoryLimitTest` using the ``memory`` r :lines: 26-38 :emphasize-lines: 11-13 -The extra resources that the test needs to obtain through its scheduler are specified in the :attr:`extra_resources ` attribute, which is a dictionary with the resource names as its keys and another dictionary assigning values to the resource placeholders as its values. +The extra resources that the test needs to obtain through its scheduler are specified in the :attr:`~reframe.core.pipeline.RegressionTest.extra_resources` attribute, which is a dictionary with the resource names as its keys and another dictionary assigning values to the resource placeholders as its values. As you can see, this syntax is completely scheduler-agnostic. If the requested resource is not defined for the current partition, it will be simply ignored. 
@@ -503,7 +536,7 @@ ReFrame gives the ability to do that and we will see some examples in this section. The most common case is to pass arguments to the launcher command that you cannot normally pass as job options. The ``--cpu-bind`` of ``srun`` is such an example. -Inside a ReFrame test, you can access the parallel launcher through the :attr:`launcher ` of the job descriptor. +Inside a ReFrame test, you can access the parallel launcher through the :attr:`~reframe.core.schedulers.Job.launcher` attribute of the job descriptor. This object handles all the details of how the parallel launch command will be emitted. In the following test we run a CPU affinity test using `this `__ utility and we will pin the threads using the ``--cpu-bind`` option: @@ -541,7 +574,7 @@ This can be achieved with the following pipeline hook: self.job.launcher = LauncherWrapper(self.job.launcher, 'ddt', ['--offline']) -The :class:`LauncherWrapper ` is a pseudo-launcher that wraps another one and allows you to prepend anything to it. +The :class:`~reframe.core.launchers.LauncherWrapper` is a pseudo-launcher that wraps another one and allows you to prepend anything to it. In this case the resulting parallel launch command, if the current partition uses native Slurm, will be ``ddt --offline srun [OPTIONS]``. @@ -569,8 +602,8 @@ The trick here is to replace the parallel launcher with the local one, which pra self.job.launcher = getlauncher('local')() -The :func:`getlauncher ` function takes the `registered `__ name of a launcher and returns the class that implements it. -You then instantiate the launcher and assign to the :attr:`launcher` attribute of the job descriptor. +The :func:`~reframe.core.backends.getlauncher` function takes the `registered `__ name of a launcher and returns the class that implements it. +You then instantiate the launcher and assign it to the :attr:`~reframe.core.schedulers.Job.launcher` attribute of the job descriptor.
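The lookup that :func:`getlauncher` performs can be modelled as a simple name-to-class registry (plain-Python sketch; the classes and registry below are hypothetical stand-ins, not ReFrame's actual backend machinery):

```python
class LocalLauncher:
    def run_command(self, job):
        return ''     # the local launcher emits no parallel launch prefix


class SrunLauncher:
    def run_command(self, job):
        return 'srun'


# Name -> class registry, analogous to the registered launcher backends.
_launchers = {'local': LocalLauncher, 'srun': SrunLauncher}


def getlauncher(name):
    return _launchers[name]    # returns the class, not an instance


# As in the snippet above: instantiate it, then assign it to the job.
launcher = getlauncher('local')()
print(repr(launcher.run_command(job=None)))    # ''
```

Returning the class rather than an instance is what makes the trailing ``()`` in ``getlauncher('local')()`` necessary.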
An alternative to this approach would be to define your own custom parallel launcher and register it with the framework. You could then use it as the scheduler of a system partition in the configuration, but this approach is less test-specific. @@ -578,9 +611,9 @@ You could then use it as the scheduler of a system partition in the configuratio Adding more parallel launch commands ==================================== -ReFrame uses a parallel launcher by default for anything defined explicitly or implicitly in the :attr:`executable ` test attribute. +ReFrame uses a parallel launcher by default for anything defined explicitly or implicitly in the :attr:`~reframe.core.pipeline.RegressionTest.executable` test attribute. But what if we want to generate multiple parallel launch commands? -One straightforward solution is to hardcode the parallel launch command inside the :attr:`prerun_cmds ` or :attr:`postrun_cmds `, but this is not so portable. +One straightforward solution is to hardcode the parallel launch command inside the :attr:`~reframe.core.pipeline.RegressionTest.prerun_cmds` or :attr:`~reframe.core.pipeline.RegressionTest.postrun_cmds`, but this is not so portable. The best way is to ask ReFrame to emit the parallel launch command for you. The following is a simple test for demonstration purposes that runs the ``hostname`` command several times using a parallel launcher. It resembles a scaling test, except that all happens inside a single ReFrame test, instead of launching multiple instances of a parameterized test. @@ -595,7 +628,7 @@ It resembles a scaling test, except that all happens inside a single ReFrame tes :emphasize-lines: 17-23 The additional parallel launch commands are inserted in either the :attr:`prerun_cmds` or :attr:`postrun_cmds` lists. -To retrieve the actual parallel launch command for the current partition that the test is running on, you can use the :func:`run_command ` method of the launcher object. 
+To retrieve the actual parallel launch command for the current partition that the test is running on, you can use the :func:`~reframe.core.launchers.Launcher.run_command` method of the launcher object.
 Let's see what the generated job script looks like:
 
 .. code-block:: none
 
@@ -628,10 +661,10 @@ Flexible Regression Tests
 
 .. versionadded:: 2.15
 
-ReFrame can automatically set the number of tasks of a particular test, if its :attr:`num_tasks ` attribute is set to a negative value or zero.
+ReFrame can automatically set the number of tasks of a particular test, if its :attr:`~reframe.core.pipeline.RegressionTest.num_tasks` attribute is set to a negative value or zero.
 In ReFrame's terminology, such tests are called *flexible*.
 Negative values indicate the minimum number of tasks that are acceptable for this test (a value of ``-4`` indicates that at least ``4`` tasks are required).
-A zero value indicates the default minimum number of tasks which is equal to :attr:`num_tasks_per_node `.
+A zero value indicates the default minimum number of tasks which is equal to :attr:`~reframe.core.pipeline.RegressionTest.num_tasks_per_node`.
 By default, ReFrame will spawn such a test on all the idle nodes of the current system partition, but this behavior can be adjusted with the |--flex-alloc-nodes|_ command-line option.
 Flexible tests are very useful for diagnostic tests, e.g., tests for checking the health of a whole set of nodes.
@@ -647,12 +680,12 @@ The test will verify that all the nodes print the expected host name:
    :lines: 6-
    :emphasize-lines: 11-16
 
-The first thing to notice in this test is that :attr:`num_tasks ` is set to zero.
+The first thing to notice in this test is that :attr:`~reframe.core.pipeline.RegressionTest.num_tasks` is set to zero.
 This is a requirement for flexible tests.
 The sanity check of this test simply counts the host names printed and verifies that they are as many as expected.
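In a flexible test, :attr:`num_tasks` gets its real value only during the run phase, so any value read while the sanity expression is being built would be ``0``. A plain-Python analogue shows why the lookup must be deferred; this illustrates deferred evaluation only and is not the real sanity-function machinery:

```python
class Check:
    num_tasks = 0                     # flexible test: real value not known yet


check = Check()

eager = check.num_tasks               # snapshot taken when the expression is built


def deferred():
    # Evaluated only when called, i.e. after the framework set the value
    return check.num_tasks


check.num_tasks = 16                  # the run phase fills in the actual task count
print(eager, deferred())              # 0 16
```

The eager read keeps the stale ``0``, while the deferred read sees the final value; this is the role the deferred sanity functions play.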
-Notice, however, that the sanity check does not use :attr:`num_tasks` directly, but rather access the attribute through the :func:`sn.getattr() ` sanity function, which is a replacement for the :func:`getattr` builtin.
+Notice, however, that the sanity check does not use :attr:`num_tasks` directly, but rather accesses the attribute through the :func:`~reframe.utility.sanity.getattr` sanity function, which is a replacement for the :func:`getattr` builtin.
 The reason for that is that at the time the sanity check expression is created, :attr:`num_tasks` is ``0`` and it will only be set to its actual value during the run phase.
-Consequently, we need to defer the attribute retrieval, thus we use the :func:`sn.getattr() ` sanity function instead of accessing it directly
+Consequently, we need to defer the attribute retrieval; thus, we use the :func:`~reframe.utility.sanity.getattr` sanity function instead of accessing it directly.
 
 .. |--flex-alloc-nodes| replace:: :attr:`--flex-alloc-nodes`
 
@@ -690,18 +723,18 @@ The following parameterized test, will create two tests, one for each of the sup
 
 .. literalinclude:: ../tutorials/advanced/containers/container_test.py
    :lines: 6-
-   :emphasize-lines: 11-16
+   :emphasize-lines: 14-19
 
-A container-based test can be written as :class:`RunOnlyRegressionTest ` that sets the :attr:`container_platform ` attribute.
+A container-based test can be written as a :class:`~reframe.core.pipeline.RunOnlyRegressionTest` that sets the :attr:`~reframe.core.pipeline.RegressionTest.container_platform` attribute.
 This attribute accepts a string that corresponds to the name of the container platform that will be used to run the container for this test.
 If such a platform is not `configured `__ for the current system, the test will fail.
-As soon as the container platform to be used is defined, you need to specify the container image to use by setting the :attr:`image `.
+As soon as the container platform to be used is defined, you need to specify the container image to use by setting the :attr:`~reframe.core.containers.ContainerPlatform.image`.
 In the ``Singularity`` test variant, we add the ``docker://`` prefix to the image name, in order to instruct ``Singularity`` to pull the image from `DockerHub `__.
-The default command that the container runs can be overwritten by setting the :attr:`command ` attribute of the container platform.
+The default command that the container runs can be overridden by setting the :attr:`~reframe.core.containers.ContainerPlatform.command` attribute of the container platform.
 
-The :attr:`image ` is the only mandatory attribute for container-based checks.
-It is important to note that the :attr:`executable ` and :attr:`executable_opts ` attributes of the actual test are ignored in case of container-based tests.
+The :attr:`~reframe.core.containers.ContainerPlatform.image` is the only mandatory attribute for container-based checks.
+It is important to note that the :attr:`~reframe.core.pipeline.RegressionTest.executable` and :attr:`~reframe.core.pipeline.RegressionTest.executable_opts` attributes of the actual test are ignored in the case of container-based tests.
 
 ReFrame will run the container according to the given platform as follows:
 
@@ -721,10 +754,10 @@ In the ``Sarus`` case, ReFrame will prepend the following command in order to pu
 
     sarus pull ubuntu:18.04
 
-This is the default behavior of ReFrame, which can be changed if pulling the image is not desired by setting the :attr:`pull_image ` attribute to :class:`False`.
+This is the default behavior of ReFrame, which can be changed, if pulling the image is not desired, by setting the :attr:`~reframe.core.containers.ContainerPlatform.pull_image` attribute to :class:`False`.
 By default, ReFrame will mount the stage directory of the test under ``/rfm_workdir`` inside the container.
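The way the implicit stage-directory mount combines with user-supplied mount points can be sketched as follows. The ``-v src:dst`` syntax below is a Docker-style placeholder chosen purely for illustration, and ``volume_flags`` is a hypothetical helper; the actual flags each container platform emits differ:

```python
def volume_flags(stagedir, mount_points):
    """Combine the implicit stage-dir mount with extra mount points.

    `mount_points` is a list of (source, destination) tuples, mirroring
    the shape of the test attribute of the same name.
    """
    # The stage directory is always mounted first, under /rfm_workdir
    mounts = [(stagedir, '/rfm_workdir')] + list(mount_points)
    return [f'-v {src}:{dst}' for src, dst in mounts]


flags = volume_flags('/path/to/stagedir',
                     [('/path/to/host/dir1', '/one'),
                      ('/path/to/host/dir2', '/two')])
print(flags[0])   # -v /path/to/stagedir:/rfm_workdir
```

Because the stage directory is always in this list, anything the container copies into ``/rfm_workdir`` survives after the container exits.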
Once the commands are executed, the container is stopped and ReFrame goes on with the sanity and performance checks.
-Besides the stage directory, additional mount points can be specified through the :attr:`mount_points ` attribute:
+Besides the stage directory, additional mount points can be specified through the :attr:`~reframe.core.pipeline.RegressionTest.container_platform.mount_points` attribute:
 
 .. code-block:: python
 
@@ -735,7 +768,7 @@ Besides the stage directory, additional mount points can be specified through th
 The container filesystem is ephemeral; therefore, ReFrame mounts the stage directory under ``/rfm_workdir`` inside the container where the user can copy artifacts as needed.
 These artifacts will therefore be available inside the stage directory after the container execution finishes.
 This is very useful if the artifacts are needed for the sanity or performance checks.
-If the copy is not performed by the default container command, the user can override this command by settings the :attr:`command ` attribute such as to include the appropriate copy commands.
+If the copy is not performed by the default container command, the user can override this command by setting the :attr:`~reframe.core.containers.ContainerPlatform.command` attribute so as to include the appropriate copy commands.
 In the current test, the output of the ``cat /etc/os-release`` command is available both in the standard output as well as in the ``release.txt`` file, since we have used the command:
 
 .. code-block:: bash
diff --git a/docs/tutorial_basics.rst b/docs/tutorial_basics.rst
index 79439d747d..b4836b6387 100644
--- a/docs/tutorial_basics.rst
+++ b/docs/tutorial_basics.rst
@@ -232,11 +232,11 @@ ReFrame allows you to avoid this in several ways but the most compact is to defi
    :lines: 6-
 
 
-This is exactly the same test as the ``hello1.py`` except that it is decorated with the :func:`@parameterized_test ` decorator instead of the :func:`@simple_test `.
-Also the constructor of the test now takes an argument.
-The :func:`@parameterized_test <>` decorator instructs ReFrame to instantiate a test class with different parameters.
-In this case the test will be instantiated for both C and C++ and then we use the ``lang`` parameter directly as the extension of the source file.
-Let's run now the test:
+This is exactly the same test as ``hello1.py`` except that it defines the ``lang`` parameter to denote the programming language to be used by the test.
+The :py:func:`~reframe.core.pipeline.RegressionTest.parameter` ReFrame built-in defines a new parameter for the test and will cause multiple instantiations of the test, each one setting the :attr:`lang` attribute to the actual parameter value.
+In this example, two tests will be created, one with ``lang='c'`` and another with ``lang='cpp'``.
+The parameter is available as an attribute of the test class and, in this example, we use it to set the extension of the source file.
+Let's run the test now:
 
 .. code-block:: console
 
 
@@ -246,7 +246,7 @@ Let's run now the test:
 
 ..
code-block:: none [ReFrame Setup] - version: 3.3-dev0 (rev: 5d246bff) + version: 3.6.0-dev.0+a3d0b0cd command: './bin/reframe -c tutorials/basics/hello/hello2.py -r' launched by: user@tresa.local working directory: '/Users/user/Repositories/reframe' @@ -256,7 +256,7 @@ Let's run now the test: output directory: '/Users/user/Repositories/reframe/output' [==========] Running 2 check(s) - [==========] Started on Mon Oct 12 18:24:31 2020 + [==========] Started on Tue Mar 9 23:25:22 2021 [----------] started processing HelloMultiLangTest_c (HelloMultiLangTest_c) [ RUN ] HelloMultiLangTest_c on generic:default using builtin @@ -264,15 +264,16 @@ Let's run now the test: [----------] started processing HelloMultiLangTest_cpp (HelloMultiLangTest_cpp) [ RUN ] HelloMultiLangTest_cpp on generic:default using builtin - [ FAIL ] (1/2) HelloMultiLangTest_cpp on generic:default using builtin [compile: 0.001s run: n/a total: 0.009s] + [ FAIL ] (1/2) HelloMultiLangTest_cpp on generic:default using builtin [compile: 0.006s run: n/a total: 0.023s] + ==> test failed during 'compile': test staged in '/Users/user/Repositories/reframe/stage/generic/default/builtin/HelloMultiLangTest_cpp' [----------] finished processing HelloMultiLangTest_cpp (HelloMultiLangTest_cpp) [----------] waiting for spawned checks to finish - [ OK ] (2/2) HelloMultiLangTest_c on generic:default using builtin [compile: 0.254s run: 0.286s total: 0.555s] + [ OK ] (2/2) HelloMultiLangTest_c on generic:default using builtin [compile: 0.981s run: 0.468s total: 1.475s] [----------] all spawned checks have finished - [ FAILED ] Ran 2 test case(s) from 2 check(s) (1 failure(s)) - [==========] Finished on Mon Oct 12 18:24:32 2020 + [ FAILED ] Ran 2/2 test case(s) from 2 check(s) (1 failure(s)) + [==========] Finished on Tue Mar 9 23:25:23 2021 ============================================================================== SUMMARY OF FAILURES @@ -284,12 +285,14 @@ Let's run now the test: * Stage directory: 
/Users/user/Repositories/reframe/stage/generic/default/builtin/HelloMultiLangTest_cpp * Node list: None * Job type: local (id=None) + * Dependencies (conceptual): [] + * Dependencies (actual): [] * Maintainers: [] * Failing phase: compile - * Rerun with '-n HelloMultiLangTest_cpp -p builtin --system generic:default' + * Rerun with '-n HelloMultiLangTest_cpp -p builtin --system generic:default -r' * Reason: build system error: I do not know how to compile a C++ program ------------------------------------------------------------------------------ - Log file(s) saved in: '/var/folders/h7/k7cgrdl13r996m4dmsvjq7v80000gp/T/rfm-lbpo8oan.log' + Log file(s) saved in: '/var/folders/h7/k7cgrdl13r996m4dmsvjq7v80000gp/T/rfm-wemvsvs2.log' Oops! The C++ test has failed. @@ -356,8 +359,8 @@ Let's now rerun our "Hello, World!" tests: .. code-block:: none [ReFrame Setup] - version: 3.3-dev0 (rev: 5d246bff) - command: './bin/reframe -C tutorials/config/settings.py -c tutorials/basics/hello/hello2.py -r' + version: 3.6.0-dev.0+a3d0b0cd + command: './bin/reframe -C tutorials/config/mysettings.py -c tutorials/basics/hello/hello2.py -r' launched by: user@tresa.local working directory: '/Users/user/Repositories/reframe' settings file: 'tutorials/config/settings.py' @@ -366,7 +369,7 @@ Let's now rerun our "Hello, World!" tests: output directory: '/Users/user/Repositories/reframe/output' [==========] Running 2 check(s) - [==========] Started on Mon Oct 12 18:28:48 2020 + [==========] Started on Tue Mar 9 23:28:00 2021 [----------] started processing HelloMultiLangTest_c (HelloMultiLangTest_c) [ RUN ] HelloMultiLangTest_c on catalina:default using gnu @@ -379,15 +382,15 @@ Let's now rerun our "Hello, World!" 
tests: [----------] finished processing HelloMultiLangTest_cpp (HelloMultiLangTest_cpp) [----------] waiting for spawned checks to finish - [ OK ] (1/4) HelloMultiLangTest_cpp on catalina:default using gnu [compile: 1.077s run: 1.475s total: 2.566s] - [ OK ] (2/4) HelloMultiLangTest_c on catalina:default using gnu [compile: 4.128s run: 2.860s total: 7.004s] - [ OK ] (3/4) HelloMultiLangTest_c on catalina:default using clang [compile: 0.241s run: 2.741s total: 2.998s] - [ OK ] (4/4) HelloMultiLangTest_cpp on catalina:default using clang [compile: 1.399s run: 0.356s total: 1.770s] + [ OK ] (1/4) HelloMultiLangTest_cpp on catalina:default using gnu [compile: 0.768s run: 1.115s total: 1.909s] + [ OK ] (2/4) HelloMultiLangTest_c on catalina:default using gnu [compile: 0.600s run: 2.230s total: 2.857s] + [ OK ] (3/4) HelloMultiLangTest_c on catalina:default using clang [compile: 0.238s run: 2.129s total: 2.393s] + [ OK ] (4/4) HelloMultiLangTest_cpp on catalina:default using clang [compile: 1.006s run: 0.427s total: 1.456s] [----------] all spawned checks have finished - [ PASSED ] Ran 4 test case(s) from 2 check(s) (0 failure(s)) - [==========] Finished on Mon Oct 12 18:28:56 2020 - Log file(s) saved in: '/var/folders/h7/k7cgrdl13r996m4dmsvjq7v80000gp/T/rfm-a_dt6nro.log' + [ PASSED ] Ran 4/4 test case(s) from 2 check(s) (0 failure(s)) + [==========] Finished on Tue Mar 9 23:28:03 2021 + Log file(s) saved in: '/var/folders/h7/k7cgrdl13r996m4dmsvjq7v80000gp/T/rfm-dnubkvfi.log' Notice how the same tests are now tried with both the ``gnu`` and ``clang`` programming environments, without having to touch them at all! diff --git a/docs/tutorial_deps.rst b/docs/tutorial_deps.rst index e26cf93c91..881419df1f 100644 --- a/docs/tutorial_deps.rst +++ b/docs/tutorial_deps.rst @@ -19,18 +19,18 @@ We first create a basic run-only test, that fetches the benchmarks: .. 
literalinclude:: ../tutorials/deps/osu_benchmarks.py - :lines: 110-122 + :lines: 112-124 This test doesn't need any specific programming environment, so we simply pick the ``builtin`` environment in the ``login`` partition. The build tests would then copy the benchmark code and build it for the different programming environments: .. literalinclude:: ../tutorials/deps/osu_benchmarks.py - :lines: 91-107 + :lines: 93-109 The only new thing that comes in with the :class:`OSUBuildTest` test is the following line: .. literalinclude:: ../tutorials/deps/osu_benchmarks.py - :lines: 97 + :lines: 99 Here we tell ReFrame that this test depends on a test named :class:`OSUDownloadTest`. This test may or may not be defined in the same test file; all ReFrame needs is the test name. @@ -46,7 +46,7 @@ The next step for the :class:`OSUBuildTest` is to set its :attr:`sourcesdir` to This is achieved with the following specially decorated function: .. literalinclude:: ../tutorials/deps/osu_benchmarks.py - :lines: 102-107 + :lines: 104-109 The :func:`@require_deps ` decorator binds each argument of the decorated function to the corresponding target dependency. In order for the binding to work correctly the function arguments must be named after the target dependencies. @@ -99,11 +99,15 @@ ReFrame will make sure to properly sort the tests and execute them. Here is the output when running the OSU tests with the asynchronous execution policy: +.. code-block:: console + + ./bin/reframe -c tutorials/deps/osu_benchmarks.py -r + .. 
code-block:: none [ReFrame Setup] - version: 3.4-dev2 (rev: 56c6c237) - command: './bin/reframe -C tutorials/config/settings.py -c tutorials/deps/osu_benchmarks.py -r' + version: 3.6.0-dev.0+4de0fee1 + command: './bin/reframe -c tutorials/deps/osu_benchmarks.py -r' launched by: user@daint101 working directory: '/users/user/Devel/reframe' settings file: 'tutorials/config/settings.py' @@ -112,7 +116,7 @@ Here is the output when running the OSU tests with the asynchronous execution po output directory: '/users/user/Devel/reframe/output' [==========] Running 8 check(s) - [==========] Started on Mon Jan 25 19:34:09 2021 + [==========] Started on Wed Mar 10 20:53:56 2021 [----------] started processing OSUDownloadTest (OSU benchmarks download sources) [ RUN ] OSUDownloadTest on daint:login using builtin @@ -182,33 +186,34 @@ Here is the output when running the OSU tests with the asynchronous execution po [----------] finished processing OSUAllreduceTest_16 (OSU Allreduce test) [----------] waiting for spawned checks to finish - [ OK ] ( 1/22) OSUDownloadTest on daint:login using builtin [compile: 0.006s run: 1.272s total: 1.349s] - [ OK ] ( 2/22) OSUBuildTest on daint:gpu using gnu [compile: 21.474s run: 0.043s total: 86.844s] - [ OK ] ( 3/22) OSUBuildTest on daint:gpu using pgi [compile: 27.948s run: 58.876s total: 86.842s] - [ OK ] ( 4/22) OSUAllreduceTest_2 on daint:gpu using pgi [compile: 0.007s run: 20.752s total: 36.777s] - [ OK ] ( 5/22) OSUAllreduceTest_8 on daint:gpu using gnu [compile: 0.007s run: 28.699s total: 36.779s] - [ OK ] ( 6/22) OSUAllreduceTest_16 on daint:gpu using gnu [compile: 0.006s run: 34.055s total: 36.785s] - [ OK ] ( 7/22) OSUBuildTest on daint:gpu using intel [compile: 37.314s run: 58.469s total: 123.772s] - [ OK ] ( 8/22) OSULatencyTest on daint:gpu using pgi [compile: 0.009s run: 29.095s total: 56.517s] - [ OK ] ( 9/22) OSUAllreduceTest_2 on daint:gpu using gnu [compile: 0.006s run: 37.876s total: 56.534s] - [ OK ] (10/22) 
OSUAllreduceTest_4 on daint:gpu using pgi [compile: 0.007s run: 45.804s total: 56.563s] - [ OK ] (11/22) OSUAllreduceTest_16 on daint:gpu using pgi [compile: 0.007s run: 56.553s total: 56.580s] - [ OK ] (12/22) OSULatencyTest on daint:gpu using gnu [compile: 0.009s run: 27.131s total: 57.330s] - [ OK ] (13/22) OSUAllreduceTest_8 on daint:gpu using pgi [compile: 0.007s run: 51.868s total: 57.292s] - [ OK ] (14/22) OSUAllreduceTest_4 on daint:gpu using gnu [compile: 0.007s run: 44.443s total: 57.803s] - [ OK ] (15/22) OSUBandwidthTest on daint:gpu using pgi [compile: 0.015s run: 75.905s total: 97.177s] - [ OK ] (16/22) OSUBandwidthTest on daint:gpu using gnu [compile: 0.019s run: 82.091s total: 106.348s] - [ OK ] (17/22) OSUAllreduceTest_16 on daint:gpu using intel [compile: 0.006s run: 89.678s total: 89.699s] - [ OK ] (18/22) OSUAllreduceTest_4 on daint:gpu using intel [compile: 0.006s run: 113.071s total: 121.153s] - [ OK ] (19/22) OSUAllreduceTest_2 on daint:gpu using intel [compile: 0.006s run: 110.686s total: 121.408s] - [ OK ] (20/22) OSUAllreduceTest_8 on daint:gpu using intel [compile: 0.006s run: 119.416s total: 122.079s] - [ OK ] (21/22) OSULatencyTest on daint:gpu using intel [compile: 0.008s run: 133.892s total: 149.776s] - [ OK ] (22/22) OSUBandwidthTest on daint:gpu using intel [compile: 0.006s run: 170.584s total: 183.903s] + [ OK ] ( 1/22) OSUDownloadTest on daint:login using builtin [compile: 0.007s run: 2.033s total: 2.078s] + [ OK ] ( 2/22) OSUBuildTest on daint:gpu using gnu [compile: 20.531s run: 0.039s total: 83.089s] + [ OK ] ( 3/22) OSUBuildTest on daint:gpu using pgi [compile: 27.193s run: 55.871s total: 83.082s] + [ OK ] ( 4/22) OSUAllreduceTest_16 on daint:gpu using gnu [compile: 0.007s run: 30.713s total: 33.470s] + [ OK ] ( 5/22) OSUBuildTest on daint:gpu using intel [compile: 35.256s run: 54.218s total: 116.712s] + [ OK ] ( 6/22) OSULatencyTest on daint:gpu using pgi [compile: 0.011s run: 23.738s total: 51.190s] + [ OK ] ( 7/22) 
OSUAllreduceTest_2 on daint:gpu using gnu [compile: 0.008s run: 31.879s total: 51.187s] + [ OK ] ( 8/22) OSUAllreduceTest_4 on daint:gpu using gnu [compile: 0.006s run: 37.447s total: 51.194s] + [ OK ] ( 9/22) OSUAllreduceTest_8 on daint:gpu using gnu [compile: 0.007s run: 42.914s total: 51.202s] + [ OK ] (10/22) OSUAllreduceTest_16 on daint:gpu using pgi [compile: 0.006s run: 51.172s total: 51.197s] + [ OK ] (11/22) OSULatencyTest on daint:gpu using gnu [compile: 0.007s run: 21.500s total: 51.730s] + [ OK ] (12/22) OSUAllreduceTest_2 on daint:gpu using pgi [compile: 0.007s run: 35.083s total: 51.700s] + [ OK ] (13/22) OSUAllreduceTest_8 on daint:gpu using pgi [compile: 0.007s run: 46.187s total: 51.681s] + [ OK ] (14/22) OSUAllreduceTest_4 on daint:gpu using pgi [compile: 0.007s run: 41.060s total: 52.030s] + [ OK ] (15/22) OSUAllreduceTest_2 on daint:gpu using intel [compile: 0.008s run: 27.401s total: 35.900s] + [ OK ] (16/22) OSUBandwidthTest on daint:gpu using gnu [compile: 0.008s run: 82.553s total: 107.334s] + [ OK ] (17/22) OSUBandwidthTest on daint:gpu using pgi [compile: 0.009s run: 87.559s total: 109.613s] + [ OK ] (18/22) OSUAllreduceTest_16 on daint:gpu using intel [compile: 0.006s run: 99.899s total: 99.924s] + [ OK ] (19/22) OSUBandwidthTest on daint:gpu using intel [compile: 0.007s run: 116.771s total: 128.125s] + [ OK ] (20/22) OSULatencyTest on daint:gpu using intel [compile: 0.008s run: 114.236s total: 128.398s] + [ OK ] (21/22) OSUAllreduceTest_8 on daint:gpu using intel [compile: 0.008s run: 125.541s total: 128.387s] + [ OK ] (22/22) OSUAllreduceTest_4 on daint:gpu using intel [compile: 0.007s run: 123.079s total: 128.651s] [----------] all spawned checks have finished - [ PASSED ] Ran 22 test case(s) from 8 check(s) (0 failure(s)) - [==========] Finished on Mon Jan 25 19:39:18 2021 - Log file(s) saved in: '/tmp/rfm-g1a6axrf.log' + [ PASSED ] Ran 22/22 test case(s) from 8 check(s) (0 failure(s)) + [==========] Finished on Wed Mar 10 20:58:03 
2021 + Log file(s) saved in: '/tmp/rfm-q0gd9y6v.log' + Before starting running the tests, ReFrame topologically sorts them based on their dependencies and schedules them for running using the selected execution policy. With the serial execution policy, ReFrame simply executes the tests to completion as they "arrive," since the tests are already topologically sorted. @@ -261,8 +266,8 @@ As a result, its immediate dependency :class:`OSUBuildTest` will be skipped, whi .. code-block:: none [ReFrame Setup] - version: 3.5-dev0 (rev: 93948510) - command: './bin/reframe -C tutorials/config/settings.py --system=daint:gpu -c tutorials/deps/osu_benchmarks.py -l' + version: 3.6.0-dev.0+4de0fee1 + command: './bin/reframe -c tutorials/deps/osu_benchmarks.py --system=daint:gpu -n OSULatencyTest -l' launched by: user@daint101 working directory: '/users/user/Devel/reframe' settings file: 'tutorials/config/settings.py' @@ -275,31 +280,33 @@ As a result, its immediate dependency :class:`OSUBuildTest` will be skipped, whi ./bin/reframe: could not resolve dependency: ('OSUBuildTest', 'daint:gpu', 'pgi') -> 'OSUDownloadTest' ./bin/reframe: skipping all dependent test cases - ('OSUBuildTest', 'daint:gpu', 'intel') - - ('OSUBuildTest', 'daint:gpu', 'pgi') - - ('OSUAllreduceTest_4', 'daint:gpu', 'intel') - ('OSUAllreduceTest_2', 'daint:gpu', 'intel') + - ('OSUBuildTest', 'daint:gpu', 'pgi') - ('OSULatencyTest', 'daint:gpu', 'pgi') - - ('OSUBandwidthTest', 'daint:gpu', 'pgi') - ('OSUAllreduceTest_8', 'daint:gpu', 'intel') + - ('OSUAllreduceTest_4', 'daint:gpu', 'pgi') + - ('OSULatencyTest', 'daint:gpu', 'intel') + - ('OSUAllreduceTest_4', 'daint:gpu', 'intel') + - ('OSUAllreduceTest_8', 'daint:gpu', 'pgi') - ('OSUAllreduceTest_16', 'daint:gpu', 'pgi') + - ('OSUAllreduceTest_16', 'daint:gpu', 'intel') + - ('OSUBandwidthTest', 'daint:gpu', 'pgi') - ('OSUBuildTest', 'daint:gpu', 'gnu') - ('OSUBandwidthTest', 'daint:gpu', 'intel') - - ('OSULatencyTest', 'daint:gpu', 'intel') - - 
('OSUAllreduceTest_16', 'daint:gpu', 'intel') - - ('OSUAllreduceTest_8', 'daint:gpu', 'pgi') - - ('OSULatencyTest', 'daint:gpu', 'gnu') - ('OSUBandwidthTest', 'daint:gpu', 'gnu') - ('OSUAllreduceTest_2', 'daint:gpu', 'pgi') - - ('OSUAllreduceTest_4', 'daint:gpu', 'pgi') - - ('OSUAllreduceTest_8', 'daint:gpu', 'gnu') - - ('OSUAllreduceTest_4', 'daint:gpu', 'gnu') - - ('OSUAllreduceTest_2', 'daint:gpu', 'gnu') - ('OSUAllreduceTest_16', 'daint:gpu', 'gnu') + - ('OSUAllreduceTest_2', 'daint:gpu', 'gnu') + - ('OSULatencyTest', 'daint:gpu', 'gnu') + - ('OSUAllreduceTest_4', 'daint:gpu', 'gnu') + - ('OSUAllreduceTest_8', 'daint:gpu', 'gnu') [List of matched checks] Found 0 check(s) - Log file(s) saved in: '/tmp/rfm-hjit66h2.log' + + Log file(s) saved in: '/tmp/rfm-6cxeil6h.log' + Listing Dependencies diff --git a/tutorials/advanced/containers/container_test.py b/tutorials/advanced/containers/container_test.py index d47bef17ac..1885690cbf 100644 --- a/tutorials/advanced/containers/container_test.py +++ b/tutorials/advanced/containers/container_test.py @@ -7,14 +7,16 @@ import reframe.utility.sanity as sn -@rfm.parameterized_test(['Sarus'], ['Singularity']) +@rfm.simple_test class ContainerTest(rfm.RunOnlyRegressionTest): - def __init__(self, platform): - self.descr = f'Run commands inside a container using {platform}' + platform = parameter(['Sarus', 'Singularity']) + + def __init__(self): + self.descr = f'Run commands inside a container using {self.platform}' self.valid_systems = ['daint:gpu'] self.valid_prog_environs = ['builtin'] - image_prefix = 'docker://' if platform == 'Singularity' else '' - self.container_platform = platform + image_prefix = 'docker://' if self.platform == 'Singularity' else '' + self.container_platform = self.platform self.container_platform.image = f'{image_prefix}ubuntu:18.04' self.container_platform.command = ( "bash -c 'cat /etc/os-release | tee /rfm_workdir/release.txt'" diff --git a/tutorials/advanced/makefiles/maketest.py 
b/tutorials/advanced/makefiles/maketest.py index 49707235b8..2360ea1089 100644 --- a/tutorials/advanced/makefiles/maketest.py +++ b/tutorials/advanced/makefiles/maketest.py @@ -7,27 +7,31 @@ import reframe.utility.sanity as sn -@rfm.parameterized_test(['float'], ['double']) +@rfm.simple_test class MakefileTest(rfm.RegressionTest): - def __init__(self, elem_type): + elem_type = parameter(['float', 'double']) + + def __init__(self): self.descr = 'Test demonstrating use of Makefiles' self.valid_systems = ['*'] self.valid_prog_environs = ['clang', 'gnu'] self.executable = './dotprod' self.executable_opts = ['100000'] self.build_system = 'Make' - self.build_system.cppflags = [f'-DELEM_TYPE={elem_type}'] + self.build_system.cppflags = [f'-DELEM_TYPE={self.elem_type}'] self.sanity_patterns = sn.assert_found( - rf'Result \({elem_type}\):', self.stdout + rf'Result \({self.elem_type}\):', self.stdout ) -@rfm.parameterized_test(['float'], ['double']) +@rfm.simple_test class MakeOnlyTest(rfm.CompileOnlyRegressionTest): - def __init__(self, elem_type): + elem_type = parameter(['float', 'double']) + + def __init__(self): self.descr = 'Test demonstrating use of Makefiles' self.valid_systems = ['*'] self.valid_prog_environs = ['clang', 'gnu'] self.build_system = 'Make' - self.build_system.cppflags = [f'-DELEM_TYPE={elem_type}'] + self.build_system.cppflags = [f'-DELEM_TYPE={self.elem_type}'] self.sanity_patterns = sn.assert_not_found(r'warning', self.stdout) diff --git a/tutorials/advanced/makefiles/maketest_mixin.py b/tutorials/advanced/makefiles/maketest_mixin.py new file mode 100644 index 0000000000..17685b97dd --- /dev/null +++ b/tutorials/advanced/makefiles/maketest_mixin.py @@ -0,0 +1,37 @@ +# Copyright 2016-2021 Swiss National Supercomputing Centre (CSCS/ETH Zurich) +# ReFrame Project Developers. See the top-level LICENSE file for details. 
+# +# SPDX-License-Identifier: BSD-3-Clause + +import reframe as rfm +import reframe.utility.sanity as sn + + +class ElemTypeParam(rfm.RegressionMixin): + elem_type = parameter(['float', 'double']) + + +@rfm.simple_test +class MakefileTest(rfm.RegressionTest, ElemTypeParam): + def __init__(self): + self.descr = 'Test demonstrating use of Makefiles' + self.valid_systems = ['*'] + self.valid_prog_environs = ['clang', 'gnu'] + self.executable = './dotprod' + self.executable_opts = ['100000'] + self.build_system = 'Make' + self.build_system.cppflags = [f'-DELEM_TYPE={self.elem_type}'] + self.sanity_patterns = sn.assert_found( + rf'Result \({self.elem_type}\):', self.stdout + ) + + +@rfm.simple_test +class MakeOnlyTest(rfm.CompileOnlyRegressionTest, ElemTypeParam): + def __init__(self): + self.descr = 'Test demonstrating use of Makefiles' + self.valid_systems = ['*'] + self.valid_prog_environs = ['clang', 'gnu'] + self.build_system = 'Make' + self.build_system.cppflags = [f'-DELEM_TYPE={self.elem_type}'] + self.sanity_patterns = sn.assert_not_found(r'warning', self.stdout) diff --git a/tutorials/advanced/parameterized/stream.py b/tutorials/advanced/parameterized/stream.py index b32ca3e6ed..11c3bae702 100644 --- a/tutorials/advanced/parameterized/stream.py +++ b/tutorials/advanced/parameterized/stream.py @@ -7,10 +7,12 @@ import reframe.utility.sanity as sn -@rfm.parameterized_test(*([1 << pow] for pow in range(19, 30))) +@rfm.simple_test class StreamMultiSysTest(rfm.RegressionTest): - def __init__(self, num_bytes): - array_size = (num_bytes >> 3) // 3 + num_bytes = parameter(1 << pow for pow in range(19, 30)) + + def __init__(self): + array_size = (self.num_bytes >> 3) // 3 ntimes = 100*1024*1024 // array_size self.descr = f'STREAM test (array size: {array_size}, ntimes: {ntimes})' # noqa: E501 self.valid_systems = ['*'] diff --git a/tutorials/basics/hello/hello2.py b/tutorials/basics/hello/hello2.py index 9d3674d4ef..7d2b353e4d 100644 --- 
a/tutorials/basics/hello/hello2.py +++ b/tutorials/basics/hello/hello2.py @@ -7,10 +7,12 @@ import reframe.utility.sanity as sn -@rfm.parameterized_test(['c'], ['cpp']) +@rfm.simple_test class HelloMultiLangTest(rfm.RegressionTest): - def __init__(self, lang): + lang = parameter(['c', 'cpp']) + + def __init__(self): self.valid_systems = ['*'] self.valid_prog_environs = ['*'] - self.sourcepath = f'hello.{lang}' + self.sourcepath = f'hello.{self.lang}' self.sanity_patterns = sn.assert_found(r'Hello, World\!', self.stdout) diff --git a/tutorials/deps/osu_benchmarks.py b/tutorials/deps/osu_benchmarks.py index edb3bfe775..2467cf20c7 100644 --- a/tutorials/deps/osu_benchmarks.py +++ b/tutorials/deps/osu_benchmarks.py @@ -66,9 +66,11 @@ def set_executable(self, OSUBuildTest): self.executable_opts = ['-x', '100', '-i', '1000'] -@rfm.parameterized_test(*([1 << i] for i in range(1, 5))) +@rfm.simple_test class OSUAllreduceTest(OSUBenchmarkTestBase): - def __init__(self, num_tasks): + mpi_tasks = parameter(1 << i for i in range(1, 5)) + + def __init__(self): super().__init__() self.descr = 'OSU Allreduce test' self.perf_patterns = { @@ -77,7 +79,7 @@ def __init__(self, num_tasks): self.reference = { '*': {'latency': (0, None, None, 'us')} } - self.num_tasks = num_tasks + self.num_tasks = self.mpi_tasks @rfm.require_deps def set_executable(self, OSUBuildTest): From 169bafca90d328c249f82b2148a289b85d3e2e60 Mon Sep 17 00:00:00 2001 From: Vasileios Karakasis Date: Wed, 10 Mar 2021 23:56:34 +0100 Subject: [PATCH 2/3] Fix tutorial checks. 
--- docs/tutorial_advanced.rst | 8 ++++---- tutorials/advanced/makefiles/maketest_mixin.py | 4 ++-- 2 files changed, 6 insertions(+), 6 deletions(-) diff --git a/docs/tutorial_advanced.rst b/docs/tutorial_advanced.rst index cdbada6b09..aea50fbf4d 100644 --- a/docs/tutorial_advanced.rst +++ b/docs/tutorial_advanced.rst @@ -322,10 +322,10 @@ Notice how the parameters are expanded in each of the individual tests: output directory: '/Users/user/Repositories/reframe/output' [List of matched checks] - - MakeOnlyTest_double (found in '/Users/user/Repositories/reframe/tutorials/advanced/makefiles/maketest_mixin.py') - - MakeOnlyTest_float (found in '/Users/user/Repositories/reframe/tutorials/advanced/makefiles/maketest_mixin.py') - - MakefileTest_double (found in '/Users/user/Repositories/reframe/tutorials/advanced/makefiles/maketest_mixin.py') - - MakefileTest_float (found in '/Users/user/Repositories/reframe/tutorials/advanced/makefiles/maketest_mixin.py') + - MakeOnlyTestAlt_double (found in '/Users/user/Repositories/reframe/tutorials/advanced/makefiles/maketest_mixin.py') + - MakeOnlyTestAlt_float (found in '/Users/user/Repositories/reframe/tutorials/advanced/makefiles/maketest_mixin.py') + - MakefileTestAlt_double (found in '/Users/user/Repositories/reframe/tutorials/advanced/makefiles/maketest_mixin.py') + - MakefileTestAlt_float (found in '/Users/user/Repositories/reframe/tutorials/advanced/makefiles/maketest_mixin.py') Found 4 check(s) Log file(s) saved in: '/var/folders/h7/k7cgrdl13r996m4dmsvjq7v80000gp/T/rfm-e384bvkd.log' diff --git a/tutorials/advanced/makefiles/maketest_mixin.py b/tutorials/advanced/makefiles/maketest_mixin.py index 17685b97dd..ec9dc590e1 100644 --- a/tutorials/advanced/makefiles/maketest_mixin.py +++ b/tutorials/advanced/makefiles/maketest_mixin.py @@ -12,7 +12,7 @@ class ElemTypeParam(rfm.RegressionMixin): @rfm.simple_test -class MakefileTest(rfm.RegressionTest, ElemTypeParam): +class MakefileTestAlt(rfm.RegressionTest, ElemTypeParam): def 
__init__(self): self.descr = 'Test demonstrating use of Makefiles' self.valid_systems = ['*'] @@ -27,7 +27,7 @@ def __init__(self): @rfm.simple_test -class MakeOnlyTest(rfm.CompileOnlyRegressionTest, ElemTypeParam): +class MakeOnlyTestAlt(rfm.CompileOnlyRegressionTest, ElemTypeParam): def __init__(self): self.descr = 'Test demonstrating use of Makefiles' self.valid_systems = ['*'] From f11482afd7b66f41b59ec7edb930f3edd8ebfd47 Mon Sep 17 00:00:00 2001 From: Vasileios Karakasis Date: Fri, 12 Mar 2021 15:40:29 +0100 Subject: [PATCH 3/3] Address PR comments + improve links in the tutorial_basics --- docs/tutorial_advanced.rst | 18 ++++++++--------- docs/tutorial_basics.rst | 41 +++++++++++++++++++------------------- docs/tutorial_deps.rst | 2 +- 3 files changed, 30 insertions(+), 31 deletions(-) diff --git a/docs/tutorial_advanced.rst b/docs/tutorial_advanced.rst index aea50fbf4d..c120b42113 100644 --- a/docs/tutorial_advanced.rst +++ b/docs/tutorial_advanced.rst @@ -134,7 +134,7 @@ Let's have a look at the test itself: .. literalinclude:: ../tutorials/advanced/makefiles/maketest.py - :lines: 6-22 + :lines: 6-24 :emphasize-lines: 13,15-16 First, if you're using any build system other than ``SingleSource``, you must set the :attr:`executable` attribute of the test, because ReFrame cannot know what is the actual executable to be run. @@ -163,7 +163,7 @@ Let's inspect the build script generated by ReFrame: make -j 1 CPPFLAGS="-DELEM_TYPE=float" -The compiler variables (``CC``, ``CXX`` etc.) are set based on the corresponding values specified in the `coniguration `__ of the current environment. +The compiler variables (``CC``, ``CXX`` etc.) are set based on the corresponding values specified in the `configuration `__ of the current environment. We can instruct the build system to ignore the default values from the environment by setting its :attr:`~reframe.core.buildsystems.Make.flags_from_environ` attribute to false: .. 
code-block:: python @@ -211,7 +211,7 @@ Alternatively, we can retrieve specifically a Git repository by assigning its UR self.sourcesdir = 'https://github.com/me/myrepo' -ReFrame will attempt to clone this repository inside the stage directory by executing ``git clone <repo> .`` and will then procede with the build procedure as usual. +ReFrame will attempt to clone this repository inside the stage directory by executing ``git clone <repo> .`` and will then proceed with the build procedure as usual. .. note:: ReFrame recognizes only URLs in the :attr:`sourcesdir` attribute and requires passwordless access to the repository. @@ -279,7 +279,7 @@ The following test is a compile-only version of the :class:`MakefileTest` presen .. literalinclude:: ../tutorials/advanced/makefiles/maketest.py - :lines: 27-35 + :lines: 27-37 :emphasize-lines: 2 What is worth noting here is that the standard output and standard error of the test, which are accessible through the :attr:`~reframe.core.pipeline.RegressionTest.stdout` and :attr:`~reframe.core.pipeline.RegressionTest.stderr` attributes, correspond now to the standard output and error of the compilation command. @@ -293,9 +293,9 @@ Grouping parameter packs In the dot product example shown above, we had two independent tests that defined the same :attr:`elem_type` parameter. -And the two tests cannot have a parent-child relationship, since one of them is a run-only tests and the other is a compile-only one. -ReFrame offers the :class:`~reframe.core.pipeline.RegressionMixin` class that allows you to group parameters and variables that are meant to be reused over otherwise unrelated tests. -In the example below, we create an :class:`ElemTypeParam` mixin that holds the definition of the :attr:`elem_type` parameter is inherited by both the concrete test classes: +And the two tests cannot have a parent-child relationship, since one of them is a run-only test and the other is a compile-only one. 
+ReFrame offers the :class:`~reframe.core.pipeline.RegressionMixin` class that allows you to group parameters and other `builtins `__ that are meant to be reused over otherwise unrelated tests. +In the example below, we create an :class:`ElemTypeParam` mixin that holds the definition of the :attr:`elem_type` parameter, which is inherited by both concrete test classes: .. literalinclude:: ../tutorials/advanced/makefiles/maketest_mixin.py :lines: 6- @@ -371,7 +371,7 @@ As a direct replacement of the built-in :py:func:`all` function, ReFrame's :func So we must make sure that all 100 numbers are generated. This is achieved by the ``sn.assert_eq(sn.count(numbers), 100)`` statement, which uses the :func:`~reframe.utility.sanity.count` sanity function for counting the generated numbers. Finally, we need to combine these two conditions to a single deferred expression that will be assigned to the test's :attr:`sanity_patterns`. -We accomplish this by using again the :func:`~reframe.utility.sanity.all` sanity function. +We accomplish this by using the :func:`~reframe.utility.sanity.all` sanity function. For more information about how exactly sanity functions work and how their execution is deferred, please refer to :doc:`deferrables`. @@ -460,7 +460,7 @@ Here is the test: :emphasize-lines: 16-18 Each ReFrame test has an associated `run job descriptor `__ which represents the scheduler job that will be used to run this test. -This object has an :attr:`options` attributes, which can be used to pass arbitrary options to the scheduler. +This object has an :attr:`options` attribute, which can be used to pass arbitrary options to the scheduler. The job descriptor is initialized by the framework during the `setup `__ pipeline phase. For this reason, we cannot directly set the job options inside the test constructor and we have to use a pipeline hook that runs before running (i.e., submitting the test). 
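The mixin-based tests above rely on ReFrame collecting `parameter` declarations from every class in a test's inheritance chain and generating one test variant per value (hence the `MakefileTestAlt_float` / `MakefileTestAlt_double` names in the listing). As an illustrative aside, the idea can be sketched in plain Python; this is *not* ReFrame's actual machinery, and `_Param`, `expand`, and the class names are hypothetical stand-ins:

```python
import itertools

class _Param:
    # Hypothetical stand-in for ReFrame's ``parameter()`` builtin.
    def __init__(self, values):
        self.values = list(values)

def expand(cls):
    # Collect _Param declarations across the MRO so that parameters
    # defined in mixins/base classes are inherited, then emit one
    # subclass per point of the combined parameter space.
    space = {}
    for klass in reversed(cls.__mro__):
        for name, value in vars(klass).items():
            if isinstance(value, _Param):
                space[name] = value.values

    variants = []
    for combo in itertools.product(*space.values()):
        attrs = dict(zip(space, combo))           # concrete values shadow _Param
        suffix = '_'.join(str(v) for v in combo)
        variants.append(type(f'{cls.__name__}_{suffix}', (cls,), attrs))
    return variants

class ElemTypeParam:                      # plays the role of rfm.RegressionMixin
    elem_type = _Param(['float', 'double'])

class MakefileTestAlt(ElemTypeParam):     # inherits the parameter from the mixin
    pass

print([t.__name__ for t in expand(MakefileTestAlt)])
# -> ['MakefileTestAlt_float', 'MakefileTestAlt_double']
```

Because the parameter lives on the mixin, any number of otherwise unrelated test classes can inherit it and each will expand into its own set of variants, which is exactly the pattern `maketest_mixin.py` demonstrates.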
diff --git a/docs/tutorial_basics.rst b/docs/tutorial_basics.rst index b4836b6387..4766c9c757 100644 --- a/docs/tutorial_basics.rst +++ b/docs/tutorial_basics.rst @@ -54,27 +54,26 @@ And here is the ReFrame version of it: :lines: 6- -Regression tests in ReFrame are specially decorated classes that ultimately derive from :class:`RegressionTest `. +Regression tests in ReFrame are specially decorated classes that ultimately derive from :class:`~reframe.core.pipeline.RegressionTest`. The :func:`@simple_test ` decorator registers a test class with ReFrame and makes it available to the framework. -The test parameters are essentially attributes of the test class and are usually defined in the test class constructor (:func:`__init__` function). -Each test must always set the :attr:`valid_systems ` and :attr:`valid_prog_environs ` attributes. +The test variables are essentially attributes of the test class and can be defined either in the test constructor (:func:`__init__` function) or in the class body using the :func:`~reframe.core.pipeline.RegressionTest.variable` ReFrame builtin. +Each test must always set the :attr:`~reframe.core.pipeline.RegressionTest.valid_systems` and :attr:`~reframe.core.pipeline.RegressionTest.valid_prog_environs` attributes. These define the systems and/or system partitions that this test is allowed to run on, as well as the programming environments that it is valid for. A programming environment is essentially a compiler toolchain. We will see later on in the tutorial how a programming environment can be defined. The generic configuration of ReFrame assumes a single programming environment named ``builtin`` which comprises a C compiler that can be invoked with ``cc``. In this particular test we set both these attributes to ``['*']``, essentially allowing this test to run everywhere. -Each regression test must always define the :attr:`sanity_patterns ` attribute. 
+A ReFrame test must either define an executable to execute or a source file (or source code) to be compiled. +In this example, it is enough to define the source file of our hello program. +ReFrame knows the executable that was produced and will use that to run the test. + +Finally, each regression test must always define the :attr:`~reframe.core.pipeline.RegressionTest.sanity_patterns` attribute. This is a `lazily evaluated `__ expression that asserts the sanity of the test. In this particular case, we ask ReFrame to check for the desired phrase in the test's standard output. Note that ReFrame does not determine the success of a test by its exit code. The assessment of success is responsibility of the test itself. -Finally, a test must either define an executable to execute or a source file (or source code) to be compiled. -In this example, it is enough to define the source file of our hello program. -ReFrame knows the executable that was produced and will use that to run the test. - - Before running the test let's inspect the directory structure surrounding it: .. code-block:: none @@ -85,7 +84,7 @@ Before running the test let's inspect the directory structure surrounding it: └── hello.c Our test is ``hello1.py`` and its resources, i.e., the ``hello.c`` source file, are located inside the ``src/`` subdirectory. -If not specified otherwise, the :attr:`sourcepath ` attribute is always resolved relative to ``src/``. +If not specified otherwise, the :attr:`~reframe.core.pipeline.RegressionTest.sourcepath` attribute is always resolved relative to ``src/``. There is full flexibility in organizing the tests. Multiple tests may be defined in a single file or they may be split in multiple files. Similarly, several tests may share the same resources directory or they can simply have their own. @@ -435,7 +434,7 @@ Build systems take also care of interactions with the programming environment if Compilation flags are a property of the build system. 
If not explicitly specified, ReFrame will try to pick the correct build system (e.g., CMake, Autotools etc.) by inspecting the test resources, but in cases as the one presented here where we need to set the compilation flags, we need to specify a build system explicitly. In this example, we instruct ReFrame to compile a single source file using the ``-std=c++11 -Wall`` compilation flags. -Finally, we set the arguments to be passed to the generated executable in :attr:`executable_opts `. +Finally, we set the arguments to be passed to the generated executable in :attr:`~reframe.core.pipeline.RegressionTest.executable_opts`. .. code-block:: console @@ -629,16 +628,16 @@ In the test below, we highlight the lines that introduce new concepts. :emphasize-lines: 10-12,17-20,23-32 First of all, notice that we restrict the programming environments to ``gnu`` only, since this test requires OpenMP, which our installation of Clang does not have. -The next thing to notice is the :attr:`prebuild_cmds ` attribute, which provides a list of commands to be executed before the build step. +The next thing to notice is the :attr:`~reframe.core.pipeline.RegressionTest.prebuild_cmds` attribute, which provides a list of commands to be executed before the build step. These commands will be executed from the test's stage directory. In this case, we just fetch the source code of the benchmark. For running the benchmark, we need to set the OpenMP number of threads and pin them to the right CPUs through the ``OMP_NUM_THREADS`` and ``OMP_PLACES`` environment variables. -You can set environment variables in a ReFrame test through the :attr:`variables ` dictionary. +You can set environment variables in a ReFrame test through the :attr:`~reframe.core.pipeline.RegressionTest.variables` dictionary. -What makes a ReFrame test a performance test is the definition of the :attr:`perf_patterns ` attribute. 
+What makes a ReFrame test a performance test is the definition of the :attr:`~reframe.core.pipeline.RegressionTest.perf_patterns` attribute. This is a dictionary where the keys are *performance variables* and the values are lazily evaluated expressions for extracting the performance variable values from the test's output. -In this example, we extract four performance variables, namely the memory bandwidth values for each of the "Copy", "Scale", "Add" and "Triad" sub-benchmarks of STREAM and we do so by using the :func:`extractsingle ` sanity function. -For each of the sub-benchmarks we extract the "Best Rate MB/s" column of the output (see below) and wee convert that to a float. +In this example, we extract four performance variables, namely the memory bandwidth values for each of the "Copy", "Scale", "Add" and "Triad" sub-benchmarks of STREAM and we do so by using the :func:`~reframe.utility.sanity.extractsingle` sanity function. +For each of the sub-benchmarks we extract the "Best Rate MB/s" column of the output (see below) and we convert that to a float. .. code-block:: none @@ -754,7 +753,7 @@ Examining the performance logs ReFrame has a powerful mechanism for logging its activities as well as performance data. It supports different types of log channels and it can send data simultaneously in any number of them. -For example, performance data might be logged in files and the same time being send to Syslog or to a centralized log management server. +For example, performance data might be logged in files and at the same time be sent to Syslog or to a centralized log management server. By default (i.e., starting off from the builtin configuration file), ReFrame sends performance data to files per test under the ``perflogs/`` directory: .. code-block:: none @@ -795,7 +794,7 @@ Porting The Tests to an HPC cluster It's now time to port our tests to an HPC cluster. Obviously, HPC clusters are much more complex than our laptop or PC. 
-Usually there are many more compilers, the user environment is handled in a different way, and the way to launch the tests varies significantly, since you have to go through a workload manager in order to acces the actual compute nodes. +Usually there are many more compilers, the user environment is handled in a different way, and the way to launch the tests varies significantly, since you have to go through a workload manager in order to access the actual compute nodes. Besides that, there might be multiple types of compute nodes that we would like to run our tests on, but each type might be accessed in a different way. It is already apparent that porting even an as simple as a "Hello, World" test to such a system is not that straightforward. As we shall see in this section, ReFrame makes that pretty easy. @@ -1062,7 +1061,7 @@ Adapting a test to new systems and programming environments ----------------------------------------------------------- Unless a test is rather generic, you will need to make some adaptations for the system that you port it to. -In this case, we will adapt the STREAM benchmark so as to run it with multiple compiler and adjust its execution parameters based on the target architecture of each partition. +In this case, we will adapt the STREAM benchmark so as to run it with multiple compilers and adjust its execution based on the target architecture of each partition. Let's see and comment the changes: .. code-block:: console @@ -1082,11 +1081,11 @@ Based on the system ReFrame runs on and the supported environments of the tests, During its execution, a test case goes through the *regression test pipeline*, which is a series of well defined phases. Users can attach arbitrary functions to run before or after any pipeline stage and this is exactly what the :func:`setflags` function is. We instruct ReFrame to run this function before the test enters the ``compile`` stage and set accordingly the compilation flags. 
-The system partition and the programming environment of the currently running test case are available to a ReFrame test through the :attr:`current_partition ` and :attr:`current_environ ` attributes respectively. +The system partition and the programming environment of the currently running test case are available to a ReFrame test through the :attr:`~reframe.core.pipeline.RegressionTest.current_partition` and :attr:`~reframe.core.pipeline.RegressionTest.current_environ` attributes respectively. These attributes, however, are only set after the first stage (``setup``) of the pipeline is executed, so we can't use them inside the test's constructor. We do exactly the same for setting the ``OMP_NUM_THREADS`` environment variables depending on the system partition we are running on, by attaching the :func:`set_num_threads` pipeline hook to the ``run`` phase of the test. -In that same hook we also set the :attr:`num_cpus_per_task ` attribute of the test, so as to instruct the backend job scheduler to properly assign CPU cores to the test. +In that same hook we also set the :attr:`~reframe.core.pipeline.RegressionTest.num_cpus_per_task` attribute of the test, so as to instruct the backend job scheduler to properly assign CPU cores to the test. In ReFrame tests you can set a series of task allocation attributes that will be used by the backend schedulers to emit the right job submission script. The section :ref:`scheduler_options` of the :doc:`regression_test_api` summarizes these attributes and the actual backend scheduler options that they correspond to. diff --git a/docs/tutorial_deps.rst b/docs/tutorial_deps.rst index 881419df1f..b699c4a22b 100644 --- a/docs/tutorial_deps.rst +++ b/docs/tutorial_deps.rst @@ -66,7 +66,7 @@ Here is the relevant part: First, since we will have multiple similar benchmarks, we move all the common functionality to the :class:`OSUBenchmarkTestBase` base class. 
Again nothing new here; we are going to use two nodes for the benchmark and we set :attr:`sourcesdir ` to ``None``, since none of the benchmark tests will use any additional resources. -As done previously, we define the dependencies with the the following line: +As done previously, we define the dependencies with the following line: .. literalinclude:: ../tutorials/deps/osu_benchmarks.py :lines: 23
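Several of the doc passages touched by this series refer to ``sanity_patterns`` and ``perf_patterns`` as *lazily evaluated* expressions built from sanity functions such as ``sn.assert_eq`` and ``sn.count``. As a closing illustrative aside, the core idea can be sketched in a few lines of plain Python; this is *not* ReFrame's real implementation (see the :doc:`deferrables` page for that), and the ``Deferred``/``deferrable`` names are hypothetical:

```python
# Minimal sketch of a lazily evaluated ("deferred") expression, in the
# spirit of ReFrame's sanity functions. It only illustrates why an
# expression like sn.assert_eq(sn.count(numbers), 100) can be built in
# the test body and evaluated much later, after the test has run.

class Deferred:
    def __init__(self, fn, *args):
        self._fn, self._args = fn, args

    def evaluate(self):
        # Evaluate any nested deferred arguments first, innermost out.
        args = [a.evaluate() if isinstance(a, Deferred) else a
                for a in self._args]
        return self._fn(*args)

def deferrable(fn):
    # Make ``fn`` return a Deferred node instead of computing eagerly.
    def build(*args):
        return Deferred(fn, *args)
    return build

count = deferrable(lambda seq: len(list(seq)))
assert_eq = deferrable(lambda a, b: a == b)

numbers = [0.1, 0.5, 0.9]            # stands in for values extracted from stdout
expr = assert_eq(count(numbers), 3)  # nothing is computed at this point
print(expr.evaluate())               # -> True
```

Constructing the expression tree eagerly but evaluating it only on demand is what lets the framework defer sanity and performance checking until the test's output actually exists.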