diff --git a/docs/dependencies.rst b/docs/dependencies.rst
index 29ee052794..006f819790 100644
--- a/docs/dependencies.rst
+++ b/docs/dependencies.rst
@@ -13,18 +13,18 @@ This can be expressed inside :class:`T1` using the :func:`depends_on` method:
 
    @rfm.simple_test
    class T0(rfm.RegressionTest):
-       def __init__(self):
-           ...
-           self.valid_systems = ['P0', 'P1']
-           self.valid_prog_environs = ['E0', 'E1']
+       ...
+       valid_systems = ['P0', 'P1']
+       valid_prog_environs = ['E0', 'E1']
 
 
    @rfm.simple_test
    class T1(rfm.RegressionTest):
+       ...
+       valid_systems = ['P0', 'P1']
+       valid_prog_environs = ['E0', 'E1']
+
        def __init__(self):
-           ...
-           self.valid_systems = ['P0', 'P1']
-           self.valid_prog_environs = ['E0', 'E1']
            self.depends_on('T0')
 
 Conceptually, this dependency can be viewed at the test level as follows:
diff --git a/docs/regression_test_api.rst b/docs/regression_test_api.rst
index 63d870244e..2523d63531 100644
--- a/docs/regression_test_api.rst
+++ b/docs/regression_test_api.rst
@@ -70,7 +70,7 @@ In essence, these builtins exert control over the test creation, and they allow
 
    Inserts or modifies a regression test parameter.
    If a parameter with a matching name is already present in the parameter space of a parent class, the existing parameter values will be combined with those provided by this method following the inheritance behavior set by the arguments ``inherit_params`` and ``filter_params``.
    Instead, if no parameter with a matching name exists in any of the parent parameter spaces, a new regression test parameter is created.
 
-   A regression test can be parametrized as follows:
+   A regression test can be parameterized as follows:
 
    .. code:: python
 
@@ -78,49 +78,28 @@ In essence, these builtins exert control over the test creation, and they allow
 
          variant = parameter(['A', 'B'])
          # print(variant)  # Error: a parameter may only be accessed from the class instance.
-         def __init__(self):
+         @rfm.run_after('init')
+         def do_something(self):
              if self.variant == 'A':
                  do_this()
              else:
                  do_other()
 
-   One of the most powerful features about these built-in functions is that they store their input information at the class level.
+   One of the most powerful features of these built-in functions is that they store their input information at the class level.
    However, a parameter may only be accessed from the class instance and accessing it directly from the class body is disallowed.
-   With this approach, extending or specializing an existing parametrized regression test becomes straightforward, since the test attribute additions and modifications made through built-in functions in the parent class are automatically inherited by the child test.
-   For instance, continuing with the example above, one could override the :func:`__init__` method in the :class:`Foo` regression test as follows:
+   With this approach, extending or specializing an existing parameterized regression test becomes straightforward, since the test attribute additions and modifications made through built-in functions in the parent class are automatically inherited by the child test.
+   For instance, continuing with the example above, one could override the :func:`do_something` hook in the :class:`Foo` regression test as follows:
 
    .. code:: python
 
       class Bar(Foo):
-         def __init__(self):
+         @rfm.run_after('init')
+         def do_something(self):
              if self.variant == 'A':
                  override_this()
             else:
                  override_other()
 
-   Note that this built-in parameter function provides an alternative method to parameterize a test to :func:`reframe.core.decorators.parameterized_test`, and the use of both approaches in the same test is currently disallowed.
-   The two main advantages of the built-in :func:`parameter` over the decorated approach reside in the parameter inheritance across classes and the handling of large parameter sets.
-   As shown in the example above, the parameters declared with the built-in :func:`parameter` are automatically carried over into derived tests through class inheritance, whereas tests using the decorated approach would have to redefine the parameters on every test.
-   Similarly, parameters declared through the built-in :func:`parameter` are regarded as fully independent from each other and ReFrame will automatically generate as many tests as available parameter combinations. This is a major advantage over the decorated approach, where one would have to manually expand the parameter combinations.
-   This is illustrated in the example below, consisting of a case with two parameters, each having two possible values.
-
-   .. code:: python
-
-      # Parameterized test with two parameters (p0 = ['a', 'b'] and p1 = ['x', 'y'])
-      @rfm.parameterized_test(['a','x'], ['a','y'], ['b','x'], ['b', 'y'])
-      class Foo(rfm.RegressionTest):
-          def __init__(self, p0, p1):
-              do_something(p0, p1)
-
-      # This is easier to write with the parameter built-in.
-      @rfm.simple_test
-      class Bar(rfm.RegressionTest):
-          p0 = parameter(['a', 'b'])
-          p1 = parameter(['x', 'y'])
-
-          def __init__(self):
-              do_something(self.p0, self.p1)
-
    :param values: A list containing the parameter values.
       If no values are passed when creating a new parameter, the parameter is considered as *declared* but not *defined* (i.e. an abstract parameter).
@@ -136,7 +115,7 @@ In essence, these builtins exert control over the test creation, and they allow
 
    Inserts a new regression test variable.
    Declaring a test variable through the :func:`variable` built-in allows for a more robust test implementation than if the variables were just defined as regular test attributes (e.g. ``self.a = 10``).
    Using variables declared through the :func:`variable` built-in guarantees that these regression test variables will not be redeclared by any child class, while also ensuring that any values that may be assigned to such variables comply with its original declaration.
-   In essence, by using test variables, the user removes any potential test errors that might be caused by accidentally overriding a class attribute. See the example below.
+   In essence, declaring test variables with the :func:`variable` built-in removes any potential test errors that might be caused by accidentally overriding a class attribute. See the example below.
 
    .. code:: python
 
@@ -145,27 +124,42 @@ In essence, these builtins exert control over the test creation, and they allow
 
          my_var = variable(int, value=8)
          not_a_var = my_var - 4
 
-         def __init__(self):
+         @rfm.run_after('init')
+         def access_vars(self):
              print(self.my_var)  # prints 8.
              # self.my_var = 'override'  # Error: my_var must be an int!
              self.not_a_var = 'override'  # However, this would work. Dangerous!
              self.my_var = 10  # tests may also assign values the standard way
 
-   The argument ``value`` in the :func:`variable` built-in sets the default value for the variable.
-   Note that a variable may be accesed directly from the class body as long as its value was previously assigned in the same class body.
-   As mentioned above, a variable may not be declared more than once, but its default value can be updated by simply assigning it a new value directly in the class body. However, a variable may only be acted upon once in the same class body.
+   Here, the argument ``value`` in the :func:`variable` built-in sets the default value for the variable.
+   This value may be accessed directly from the class body, as long as it was previously assigned either in the same class body or in the class body of a parent class.
+   This behavior extends the standard Python data model, where a regular class attribute from a parent class is never available in the class body of a child class.
+   Hence, using the :func:`variable` built-in enables us to directly use or modify any variables that may have been declared upstream in the class inheritance chain, without altering their original value at the parent class level.
 
    .. code:: python
 
       class Bar(Foo):
+         print(my_var)  # prints 8
+         # print(not_a_var)  # This is standard Python and raises a NameError
+
+         # Since my_var is available, we can also update its value:
          my_var = 4
-         # my_var = 'override'  # Error again!
-         # my_var = 8  # Error: Double action on `my_var` is not allowed.
 
-         def __init__(self):
-             print(self.my_var)  # prints 4.
+         # Bar inherits the full declaration of my_var with the original type-checking.
+         # my_var = 'override'  # Wrong type error again!
+
+         @rfm.run_after('init')
+         def access_vars(self):
+             print(self.my_var)  # prints 4
+             print(self.not_a_var)  # prints 4
+
+      print(Foo.my_var)  # prints 8
+      print(Bar.my_var)  # prints 4
+
+
-   Here, the class :class:`Bar` inherits the variables from :class:`Foo` and can see that ``my_var`` has already been declared in the parent class. Therefore, the value of ``my_var`` is updated ensuring that the new value complies to the original variable declaration.
+   Here, :class:`Bar` inherits the variables from :class:`Foo` and can see that ``my_var`` has already been declared in the parent class. Therefore, the value of ``my_var`` is updated, ensuring that the new value complies with the original variable declaration.
+   However, the value of ``my_var`` at :class:`Foo` remains unchanged.
 
    These examples above assumed that a default value can be provided to the variables in the bases tests, but that might not always be the case.
    For example, when writing a test library, one might want to leave some variables undefined and force the user to set these when using the test.
@@ -177,9 +171,11 @@ In essence, these builtins exert control over the test creation, and they allow
 
       class EchoBaseTest(rfm.RunOnlyRegressionTest):
          what = variable(str)
 
-         def __init__(self):
-             self.valid_systems = ['*']
-             self.valid_prog_environs = ['PrgEnv-gnu']
+         valid_systems = ['*']
+         valid_prog_environs = ['PrgEnv-gnu']
+
+         @rfm.run_before('run')
+         def set_exec_and_sanity(self):
              self.executable = f'echo {self.what}'
              self.sanity_patterns = sn.assert_found(fr'{self.what}')
 
@@ -190,14 +186,14 @@ In essence, these builtins exert control over the test creation, and they allow
 
          what = 'Hello'
 
-      # A parametrized test with type-checking
+      # A parameterized test with type-checking
       @rfm.simple_test
       class FoodTest(EchoBaseTest):
          param = parameter(['Bacon', 'Eggs'])
 
-         def __init__(self):
+         @rfm.run_after('init')
+         def set_vars_with_params(self):
              self.what = self.param
-             super().__init__()
 
    Similarly to a variable with a value already assigned to it, the value of a required variable may be set either directly in the class body, on the :func:`__init__` method, or in any other hook before it is referenced.
@@ -210,7 +206,7 @@ In essence, these builtins exert control over the test creation, and they allow
 
          what = required
 
-   Running the above test will cause the :func:`__init__` method from :class:`EchoBaseTest` to throw an error indicating that the variable ``what`` has not been set.
+   Running the above test will cause the :func:`set_exec_and_sanity` hook from :class:`EchoBaseTest` to throw an error indicating that the variable ``what`` has not been set.
 
    :param types: the supported types for the variable.
    :param value: the default value assigned to the variable. If no value is provided, the variable is set as ``required``.
diff --git a/docs/tutorial_advanced.rst b/docs/tutorial_advanced.rst
index dc436b1f9f..a553082f47 100644
--- a/docs/tutorial_advanced.rst
+++ b/docs/tutorial_advanced.rst
@@ -28,7 +28,7 @@ Here is the adapted code with the relevant parts highlighted (for simplicity, we
 
 .. literalinclude:: ../tutorials/advanced/parameterized/stream.py
    :lines: 6-
-   :emphasize-lines: 7,10-11,20-21
+   :emphasize-lines: 7-9,44-51,55-56
 
 Any ordinary ReFrame test becomes a parameterized one if the user defines parameters inside the class body of the test.
 This is done using the :py:func:`~reframe.core.pipeline.RegressionTest.parameter` ReFrame built-in function, which accepts the list of parameter values.
@@ -89,7 +89,8 @@ The following example will create a test for each ``GROMACS`` module found on th
 
    class MyTest(rfm.RegressionTest):
        module_info = parameter(util.find_modules('GROMACS'))
 
-       def __init__(self):
+       @rfm.run_after('init')
+       def process_module_info(self):
            s, e, m = self.module_info
            self.valid_systems = [s]
            self.valid_prog_environs = [e]
@@ -134,8 +135,8 @@ Let's have a look at the test itself:
 
 .. literalinclude:: ../tutorials/advanced/makefiles/maketest.py
-   :lines: 6-24
-   :emphasize-lines: 13,15-16
+   :lines: 6-29
+   :emphasize-lines: 18,22-24
 
 First, if you're using any build system other than ``SingleSource``, you must set the :attr:`executable` attribute of the test, because ReFrame cannot know what is the actual executable to be run.
 We then set the build system to :class:`~reframe.core.buildsystems.Make` and set the preprocessor flags as we would do with the :class:`SingleSource` build system.
@@ -279,7 +280,7 @@ The following test is a compile-only version of the :class:`MakefileTest` presen
 
 .. literalinclude:: ../tutorials/advanced/makefiles/maketest.py
-   :lines: 27-37
+   :lines: 32-
    :emphasize-lines: 2
 
 What is worth noting here is that the standard output and standard error of the test, which are accessible through the :attr:`~reframe.core.pipeline.RegressionTest.stdout` and :attr:`~reframe.core.pipeline.RegressionTest.stderr` attributes, correspond now to the standard output and error of the compilation command.
@@ -299,7 +300,7 @@ In the example below, we create an :class:`ElemTypeParam` mixin that holds the d
 
 .. literalinclude:: ../tutorials/advanced/makefiles/maketest_mixin.py
    :lines: 6-
-   :emphasize-lines: 5-6,10,25
+   :emphasize-lines: 5-6,10,30
 
 Notice how the parameters are expanded in each of the individual tests:
@@ -398,7 +399,7 @@ Here is the modified test file:
 
 .. literalinclude:: ../tutorials/advanced/random/prepostrun.py
    :lines: 6-
-   :emphasize-lines: 11-12,17,20
+   :emphasize-lines: 10-11,19,22
 
 The :attr:`prerun_cmds` and :attr:`postrun_cmds` are lists of commands to be emitted in the generated job script before and after the parallel launch of the executable.
 Obviously, the working directory for these commands is that of the job script itself, which is the stage directory of the test.
@@ -457,7 +458,7 @@ Here is the test:
 
 .. literalinclude:: ../tutorials/advanced/jobopts/eatmemory.py
    :lines: 6-23
-   :emphasize-lines: 16-18
+   :emphasize-lines: 12-14
 
 Each ReFrame test has an associated `run job descriptor `__ which represents the scheduler job that will be used to run this test.
 This object has an :attr:`options` attribute, which can be used to pass arbitrary options to the scheduler.
@@ -513,8 +514,8 @@ Let's see how we can rewrite the :class:`MemoryLimitTest` using the ``memory`` r
 
 .. literalinclude:: ../tutorials/advanced/jobopts/eatmemory.py
-   :lines: 26-38
-   :emphasize-lines: 11-13
+   :lines: 28-
+   :emphasize-lines: 7-9
 
 The extra resources that the test needs to obtain through its scheduler are specified in the :attr:`~reframe.core.pipeline.RegressionTest.extra_resources` attribute, which is a dictionary with the resource names as its keys and another dictionary assigning values to the resource placeholders as its values.
 As you can see, this syntax is completely scheduler-agnostic.
@@ -566,8 +567,7 @@ This can be achieved with the following pipeline hook:
 
    from reframe.core.launchers import LauncherWrapper
 
    class DebuggerTest(rfm.RunOnlyRegressionTest):
-       def __init__(self):
-           ...
+       ...
 
        @rfm.run_before('run')
        def set_launcher(self):
@@ -592,10 +592,9 @@ The trick here is to replace the parallel launcher with the local one, which pra
 
    class CustomLauncherTest(rfm.RunOnlyRegressionTest):
-       def __init__(self):
-           ...
-           self.executable = 'custom_scheduler'
-           self.executable_opts = [...]
+       ...
+       executable = 'custom_scheduler'
+       executable_opts = [...]
 
        @rfm.run_before('run')
        def replace_launcher(self):
@@ -625,7 +624,7 @@ It resembles a scaling test, except that all happens inside a single ReFrame tes
 
 .. literalinclude:: ../tutorials/advanced/multilaunch/multilaunch.py
    :lines: 6-
-   :emphasize-lines: 17-23
+   :emphasize-lines: 12-19
 
 The additional parallel launch commands are inserted in either the :attr:`prerun_cmds` or :attr:`postrun_cmds` lists.
 To retrieve the actual parallel launch command for the current partition that the test is running on, you can use the :func:`~reframe.core.launchers.Launcher.run_command` method of the launcher object.
@@ -678,7 +677,7 @@ The test will verify that all the nodes print the expected host name:
 
 .. literalinclude:: ../tutorials/advanced/flexnodes/flextest.py
    :lines: 6-
-   :emphasize-lines: 11-16
+   :emphasize-lines: 10-
 
 The first thing to notice in this test is that :attr:`~reframe.core.pipeline.RegressionTest.num_tasks` is set to zero.
 This is a requirement for flexible tests.
@@ -723,7 +722,7 @@ The following parameterized test, will create two tests, one for each of the sup
 
 .. literalinclude:: ../tutorials/advanced/containers/container_test.py
    :lines: 6-
-   :emphasize-lines: 14-19
+   :emphasize-lines: 16-22
 
 A container-based test can be written as :class:`~reframe.core.pipeline.RunOnlyRegressionTest` that sets the :attr:`~reframe.core.pipeline.RegressionTest.container_platform` attribute.
 This attribute accepts a string that corresponds to the name of the container platform that will be used to run the container for this test.
@@ -779,13 +778,8 @@ In the current test, the output of the ``cat /etc/os-release`` is available both
 and ``/rfm_workdir`` corresponds to the stage directory on the host system.
 Therefore, the ``release.txt`` file can now be used in the subsequent sanity checks:
 
-.. code-block:: python
-
-   os_release_pattern = r'18.04.\d+ LTS \(Bionic Beaver\)'
-   self.sanity_patterns = sn.all([
-       sn.assert_found(os_release_pattern, 'release.txt'),
-       sn.assert_found(os_release_pattern, self.stdout)
-   ])
+.. literalinclude:: ../tutorials/advanced/containers/container_test.py
+   :lines: 15-17
 
 For a complete list of the available attributes of a specific container platform, please have a look at the :ref:`container-platforms` section of the :doc:`regression_test_api` guide.
@@ -798,8 +792,10 @@ Writing reusable tests
 
 .. versionadded:: 3.5.0
 
 So far, all the examples shown above were tight to a particular system or configuration, which makes reusing these tests in other systems not straightforward.
-However, the introduction of the :py:func:`~reframe.core.pipeline.RegressionTest.parameter` and :py:func:`~reframe.core.pipeline.RegressionTest.variable` ReFrame built-ins solves this problem, eliminating the need to specify any of the test variables in the :func:`__init__` method.
-Hence, these parameters and variables can be treated as simple class attributes, which allows us to leverage Python's class inheritance and write more modular tests.
+However, the introduction of the :py:func:`~reframe.core.pipeline.RegressionTest.parameter` and :py:func:`~reframe.core.pipeline.RegressionTest.variable` ReFrame built-ins solves this problem, eliminating the need to specify any of the test variables in the :func:`__init__` method and simplifying code reuse.
+Hence, readers who are not familiar with these built-in functions are encouraged to read their basic use examples (see :py:func:`~reframe.core.pipeline.RegressionTest.parameter` and :py:func:`~reframe.core.pipeline.RegressionTest.variable`) before delving any deeper into this tutorial.
+
+In essence, parameters and variables can be treated as simple class attributes, which allows us to leverage Python's class inheritance and write more modular tests.
 For simplicity, we illustrate this concept with the above :class:`ContainerTest` example, where the goal here is to re-write this test as a library that users can simply import from and derive their tests without having to rewrite the bulk of the test.
 Also, for illustrative purposes, we parameterize this library test on a few different image tags (the above example just used ``ubuntu:18.04``) and throw the container commands into a separate bash script just to create some source files.
 Thus, removing all the system and configuration specific variables, and moving as many assignments as possible into the class body, the system agnostic library test looks as follows:
@@ -816,7 +812,7 @@ Thus, removing all the system and configuration specific variables, and moving a
 
 Note that the class :class:`ContainerBase` is not decorated since it does not specify the required variables ``valid_systems`` and ``valid_prog_environs``, and it declares the ``platform`` parameter without any defined values assigned.
 Hence, the user can simply derive from this test and specialize it to use the desired container platforms.
 Since the parameters are defined directly in the class body, the user is also free to override or extend any of the other parameters in a derived test.
-In this example, we have parametrized the base test to run with the ``ubuntu:18.04`` and ``ubuntu:20.04`` images, but these values from ``dist`` (and also the ``dist_name`` variable) could be modified by the derived class if needed.
+In this example, we have parameterized the base test to run with the ``ubuntu:18.04`` and ``ubuntu:20.04`` images, but these values from ``dist`` (and also the ``dist_name`` variable) could be modified by the derived class if needed.
 On the other hand, the rest of the test depends on the values from the test parameters, and a parameter is only assigned a specific value after the class has been instantiated.
 Thus, the rest of the test is expressed as hooks, without the need to write anything in the :func:`__init__` method.
diff --git a/docs/tutorial_basics.rst b/docs/tutorial_basics.rst
index e9fe705606..71921ff412 100644
--- a/docs/tutorial_basics.rst
+++ b/docs/tutorial_basics.rst
@@ -56,7 +56,7 @@ And here is the ReFrame version of it:
 
 Regression tests in ReFrame are specially decorated classes that ultimately derive from :class:`~reframe.core.pipeline.RegressionTest`.
 The :func:`@simple_test ` decorator registers a test class with ReFrame and makes it available to the framework.
-The test variables are essentially attributes of the test class and can be defined either in the test constructor (:func:`__init__` function) or the class body using the :func:`~reframe.core.pipeline.RegressionTest.variable` ReFrame builtin.
+The test variables are essentially attributes of the test class and can be defined directly in the class body.
 Each test must always set the :attr:`~reframe.core.pipeline.RegressionTest.valid_systems` and :attr:`~reframe.core.pipeline.RegressionTest.valid_prog_environs` attributes.
 These define the systems and/or system partitions that this test is allowed to run on, as well as the programming environments that it is valid for.
 A programming environment is essentially a compiler toolchain.
@@ -67,12 +67,14 @@ In this particular test we set both these attributes to ``['*']``, essentially a
 
 A ReFrame test must either define an executable to execute or a source file (or source code) to be compiled.
 In this example, it is enough to define the source file of our hello program.
 ReFrame knows the executable that was produced and will use that to run the test.
+In this example, we redirect the executable's output into a file by defining the optional variable :attr:`~reframe.core.pipeline.RegressionTest.executable_opts`.
+This output redirection is not strictly necessary; it is done here only to keep this first example as intuitive as possible.
 
 Finally, each regression test must always define the :attr:`~reframe.core.pipeline.RegressionTest.sanity_patterns` attribute.
 This is a `lazily evaluated `__ expression that asserts the sanity of the test.
-In this particular case, we ask ReFrame to check for the desired phrase in the test's standard output.
+In this particular case, we ask ReFrame to check that the executable has produced the desired phrase in the output file ``hello.out``.
 Note that ReFrame does not determine the success of a test by its exit code.
-The assessment of success is responsibility of the test itself.
+Instead, the assessment of success is the responsibility of the test itself.
 
 Before running the test let's inspect the directory structure surrounding it:
@@ -231,10 +233,32 @@ ReFrame allows you to avoid this in several ways but the most compact is to defi
    :lines: 6-
 
-This is exactly the same test as the ``hello1.py`` except that it defines the ``lang`` parameter to denote the programming language to be used by the test.
-The :py:func:`~reframe.core.pipeline.RegressionTest.parameter` ReFrame built-in defines a new parameter for the test and will cause multiple instantiations of the test, each one setting the :attr:`lang` attribute to the actual parameter value.
-In this example, two tests will be created, one with ``lang='c'`` and another with ``lang='cpp'``.
-The parameter is available as an attribute of the test class and, in this example, we use it to set the extension of the source file.
+This test extends the ``hello1.py`` test by defining the ``lang`` parameter with the :py:func:`~reframe.core.pipeline.RegressionTest.parameter` built-in.
+This parameter will cause as many instantiations of the test as there are parameter values, each one setting the :attr:`lang` attribute to a single value.
+Hence, this example will create two test instances, one with ``lang='c'`` and another with ``lang='cpp'``.
+The parameter is available as an attribute of the test instance and, in this example, we use it to set the extension of the source file.
+However, at the class level, a test parameter holds all of its possible values, and it is only assigned a single value after the class is instantiated.
+Therefore, the variable ``sourcepath``, which depends on this parameter, also needs to be set after the class instantiation.
+The simplest way to do this would be to move the ``sourcepath`` assignment into the :func:`__init__` method as shown in the code snippet below, but this has some disadvantages when writing larger tests.
+
+.. code-block:: python
+
+   def __init__(self):
+       self.sourcepath = f'hello.{self.lang}'
+
+For example, when writing a base class for a test with a large amount of code in the :func:`__init__` method, a derived class may want to do a partial override of the code in this function.
+This would force us to understand the full implementation of the base class's :func:`__init__`, even though we may only be interested in overriding a small part of it.
+Doable, but not ideal.
+Instead, through pipeline hooks, ReFrame provides a mechanism to attach independent functions that execute at a given point of the pipeline, before the data they set is required by the test.
+This is exactly what we want to do here, and we know that the test sources are needed to compile the code.
+Hence, we move the ``sourcepath`` assignment into a pre-compile hook.
+
+.. literalinclude:: ../tutorials/basics/hello/hello2.py
+   :lines: 19-
+
+The use of hooks is covered in more detail later on, but for now, let's just think of them as a way to defer the execution of a function to a given stage of the test's pipeline.
+By using hooks, any user could now derive from this class and attach other hooks (for example, adding some compiler flags) without having to worry about overriding the base method that sets the ``sourcepath`` variable.
+
 
 Let's run the test now:
 
 .. code-block:: console
@@ -427,33 +451,35 @@ Here is the corresponding ReFrame test, where the new concepts introduced are hi
 
 .. literalinclude:: ../tutorials/basics/hellomp/hellomp1.py
    :lines: 6-
-   :emphasize-lines: 11-13
-
+   :emphasize-lines: 10-10, 13-18
 
-In order to compile applications using ``std::thread`` with GCC and Clang, the ``-pthread`` option has to be passed to the compiler.
-Since the above option might not be valid for other compilers, we use pipeline hooks to differentiate based on the programming environment as follows:
-
-.. code-block:: python
-
-   @rfm.run_before('compile')
-   def set_threading_flags(self):
-       environ = self.current_environ.name
-       if environ in {'clang', 'gnu'}:
-           self.build_system.cxxflags += ['-pthread']
+ReFrame delegates the compilation of a test to a :attr:`~reframe.core.pipeline.RegressionTest.build_system`, which is an abstraction of the steps needed to compile the test.
+Build systems also take care of interactions with the programming environment, if necessary.
+Compilation flags are a property of the build system.
+If not explicitly specified, ReFrame will try to pick the correct build system (e.g., CMake, Autotools etc.) by inspecting the test resources, but in cases such as the one presented here, where we need to set the compilation flags, we need to specify a build system explicitly.
+In this example, we instruct ReFrame to compile a single source file using the ``-std=c++11 -pthread -Wall`` compilation flags.
+However, the ``-pthread`` flag is only needed to compile applications using ``std::thread`` with the GCC and Clang compilers.
+Hence, since this flag may not be valid for other compilers, we need to include it only in the tests that use either GCC or Clang.
+Similarly to the ``lang`` parameter in the previous example, the information regarding which compiler is being used is only available after the class is instantiated (after completion of the ``setup`` pipeline stage), so we also defer the addition of this optional compiler flag with a pipeline hook.
+In this case, we set the :func:`set_compile_flags` hook to run before the ReFrame pipeline stage ``compile``.
 
 .. note::
    The pipeline hooks, as well as the regression test pipeline itself, are covered in more detail later on in the tutorial.
-ReFrame delegates the compilation of a test to a *build system*, which is an abstraction of the steps needed to compile the test.
-Build systems take also care of interactions with the programming environment if necessary.
-Compilation flags are a property of the build system.
-If not explicitly specified, ReFrame will try to pick the correct build system (e.g., CMake, Autotools etc.) by inspecting the test resources, but in cases as the one presented here where we need to set the compilation flags, we need to specify a build system explicitly.
-In this example, we instruct ReFrame to compile a single source file using the ``-std=c++11 -pthread -Wall`` compilation flags.
-Finally, we set the arguments to be passed to the generated executable in :attr:`executable_opts `.
+In this example, the generated executable takes a single argument which sets the number of threads that will be used.
+As seen in the previous examples, executable options are defined with the :attr:`executable_opts ` variable, and here it is set to ``'16'``.
+Also, the reader may notice that this example no longer redirects the standard output of the executable into a file as the previous examples did.
+Instead, simply to keep the :attr:`executable_opts ` simple, we use ReFrame's internal mechanism to process the standard output of the executable.
+Similarly to the parameters and the compiler settings, the output of a test is private to each of the instances of the :class:`HelloThreadedTest` class.
+So, instead of inspecting an external file to evaluate the sanity of the test, we can just point our sanity function to the attribute that contains the test's standard output.
+This output is stored under :attr:`self.stdout` and is populated only after the executable has run.
+Therefore, we can set the :attr:`~reframe.core.pipeline.RegressionTest.sanity_patterns` with the :func:`set_sanity_patterns` pipeline hook that is scheduled to run before the ``sanity`` pipeline stage.
+Again, pipeline stages will be covered detail further on, so for now, just think of this ``sanity`` stage as a step that occurs after the test's executable is run. +Let's run the test now: .. code-block:: console @@ -531,7 +557,7 @@ So far, we have seen only a ``grep``-like search for a string in the output, but In fact, you can practically do almost any operation in the output and process it as you would like before assessing the test's sanity. The syntax feels also quite natural since it is fully integrated in Python. -In the following we extend the sanity checking of the multithreaded "Hello, World!", such that not only the output pattern we are looking for is more restrictive, but also we check that all the threads produce a greetings line. +In the following we extend the sanity checking of the multithreaded "Hello, World!", such that not only the output pattern we are looking for is more restrictive, but also we check that all the threads produce a greetings line. See the highlighted lines in the modified version of the ``set_sanity_patterns`` pipeline hook. .. code-block:: console @@ -540,7 +566,7 @@ In the following we extend the sanity checking of the multithreaded "Hello, Worl .. literalinclude:: ../tutorials/basics/hellomp/hellomp2.py :lines: 6- - :emphasize-lines: 14-16 + :emphasize-lines: 22-24 The sanity checking is straightforward. We find all the matches of the required pattern, we count them and finally we check their number. @@ -626,7 +652,7 @@ To fix this test, we need to compile with ``-DSYNC_MESSAGES``, which will synchr .. literalinclude:: ../tutorials/basics/hellomp/hellomp3.py :lines: 6- - :emphasize-lines: 13 + :emphasize-lines: 15 Writing A Performance Test @@ -643,7 +669,7 @@ In the test below, we highlight the lines that introduce new concepts. .. 
literalinclude:: ../tutorials/basics/stream/stream1.py :lines: 6- - :emphasize-lines: 10-12,17-20,23-32 + :emphasize-lines: 9-11,14-17,29-40 First of all, notice that we restrict the programming environments to ``gnu`` only, since this test requires OpenMP, which our installation of Clang does not have. The next thing to notice is the :attr:`~reframe.core.pipeline.RegressionTest.prebuild_cmds` attribute, which provides a list of commands to be executed before the build step. @@ -736,7 +762,7 @@ In the following example, we set the reference values for all the STREAM sub-ben .. literalinclude:: ../tutorials/basics/stream/stream2.py :lines: 6- - :emphasize-lines: 33- + :emphasize-lines: 18-25 The performance reference tuple consists of the reference value, the lower and upper thresholds expressed as fractional numbers relative to the reference value, and the unit of measurement. @@ -1088,16 +1114,16 @@ Let's go through the changes: .. literalinclude:: ../tutorials/basics/stream/stream3.py :lines: 6- - :emphasize-lines: 9,37- + :emphasize-lines: 8, 27-41, 46-56 First of all, we need to add the new programming environments to the list of supported ones. The problem now is that each compiler uses its own flag for enabling OpenMP, so we need to differentiate the test's behavior based on the programming environment. -For this reason, we define the flags for each compiler in a separate dictionary (``self.flags``) and we set them in the :func:`setflags` pipeline hook. +For this reason, we define the flags for each compiler in a separate dictionary (the ``flags`` variable) and we set them in the :func:`set_compiler_flags` pipeline hook. We first saw pipeline hooks in the multithreaded "Hello, World!" example; now we explain them in more detail. When ReFrame loads a test file, it instantiates all the tests it finds in it.
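The per-environment flag selection that the ``set_compiler_flags`` hook performs is just a dictionary lookup with a safe default. The following standalone sketch (plain Python, no ReFrame; the environment names are taken from the test above) mirrors that logic:

```python
# Flags per programming environment, as in the test's `flags` variable.
flags = {
    'cray':  ['-fopenmp', '-O3', '-Wall'],
    'gnu':   ['-fopenmp', '-O3', '-Wall'],
    'intel': ['-qopenmp', '-O3', '-Wall'],
    'pgi':   ['-mp', '-O3'],
}


def cflags_for(environ_name):
    # Unknown environments fall back to no extra flags, mirroring
    # `self.build_system.cflags = self.flags.get(environ, [])` in the hook.
    return flags.get(environ_name, [])


assert cflags_for('intel') == ['-qopenmp', '-O3', '-Wall']
assert cflags_for('clang') == []   # not in the dict -> empty flag list
```

In the real test, ``environ_name`` comes from ``self.current_environ.name``, which is why the hook must run no earlier than the ``setup`` stage.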
Based on the system ReFrame runs on and the supported environments of the tests, it will generate different test cases for each system partition and environment combination, and it will finally send the test cases for execution. During its execution, a test case goes through the *regression test pipeline*, which is a series of well-defined phases. -Users can attach arbitrary functions to run before or after any pipeline stage and this is exactly what the :func:`setflags` function is. +Users can attach arbitrary functions to run before or after any pipeline stage and this is exactly what the :func:`set_compiler_flags` function is. We instruct ReFrame to run this function before the test enters the ``compile`` stage and to set the compilation flags accordingly. The system partition and the programming environment of the currently running test case are available to a ReFrame test through the :attr:`~reframe.core.pipeline.RegressionTest.current_partition` and :attr:`~reframe.core.pipeline.RegressionTest.current_environ` attributes, respectively. These attributes, however, are only set after the first stage (``setup``) of the pipeline is executed, so we can't use them inside the test's constructor. diff --git a/docs/tutorial_deps.rst b/docs/tutorial_deps.rst index b699c4a22b..033543a68a 100644 --- a/docs/tutorial_deps.rst +++ b/docs/tutorial_deps.rst @@ -19,18 +19,18 @@ We first create a basic run-only test that fetches the benchmarks: .. literalinclude:: ../tutorials/deps/osu_benchmarks.py - :lines: 112-124 + :lines: 130- This test doesn't need any specific programming environment, so we simply pick the ``builtin`` environment in the ``login`` partition. The build tests would then copy the benchmark code and build it for the different programming environments: ..
literalinclude:: ../tutorials/deps/osu_benchmarks.py - :lines: 93-109 + :lines: 103-128 -The only new thing that comes in with the :class:`OSUBuildTest` test is the following line: +The only new thing that comes in with the :class:`OSUBuildTest` test is the following: .. literalinclude:: ../tutorials/deps/osu_benchmarks.py - :lines: 99 + :lines: 110-112 Here we tell ReFrame that this test depends on a test named :class:`OSUDownloadTest`. This test may or may not be defined in the same test file; all ReFrame needs is the test name. @@ -46,7 +46,7 @@ The next step for the :class:`OSUBuildTest` is to set its :attr:`sourcesdir` to This is achieved with the following specially decorated function: .. literalinclude:: ../tutorials/deps/osu_benchmarks.py - :lines: 104-109 + :lines: 114-119 The :func:`@require_deps ` decorator binds each argument of the decorated function to the corresponding target dependency. In order for the binding to work correctly the function arguments must be named after the target dependencies. @@ -62,14 +62,14 @@ For the next test we need to use the OSU benchmark binaries that we just built, Here is the relevant part: .. literalinclude:: ../tutorials/deps/osu_benchmarks.py - :lines: 13-44 + :lines: 13-50 First, since we will have multiple similar benchmarks, we move all the common functionality to the :class:`OSUBenchmarkTestBase` base class. Again nothing new here; we are going to use two nodes for the benchmark and we set :attr:`sourcesdir ` to ``None``, since none of the benchmark tests will use any additional resources. -As done previously, we define the dependencies with the following line: +As done previously, we define the dependencies with the following: .. literalinclude:: ../tutorials/deps/osu_benchmarks.py - :lines: 23 + :lines: 22-24 Here we tell ReFrame that this test depends on a test named :class:`OSUBuildTest` "by environment." 
This means that the test cases of this test will only depend on the test cases of the :class:`OSUBuildTest` that use the same environment; @@ -87,7 +87,7 @@ The :class:`OSUAllreduceTest` shown below is similar to the other two, except th It is essentially a scalability test that runs the ``osu_allreduce`` executable created by the :class:`OSUBuildTest` for 2, 4, 8 and 16 nodes. .. literalinclude:: ../tutorials/deps/osu_benchmarks.py - :lines: 69-88 + :lines: 76-100 The full set of OSU example tests is shown below: diff --git a/docs/tutorial_tips_tricks.rst b/docs/tutorial_tips_tricks.rst index c8a63377c6..585d7143ff 100644 --- a/docs/tutorial_tips_tricks.rst +++ b/docs/tutorial_tips_tricks.rst @@ -117,18 +117,24 @@ Trying to use the standard :func:`print` function here would be of li .. code-block:: python - :emphasize-lines: 11 + :emphasize-lines: 15-17 import reframe as rfm import reframe.utility.sanity as sn - @rfm.parameterized_test(['c'], ['cpp']) + @rfm.simple_test class HelloMultiLangTest(rfm.RegressionTest): - def __init__(self, lang): - self.valid_systems = ['*'] - self.valid_prog_environs = ['*'] - self.sourcepath = f'hello.{lang}' + lang = parameter(['c', 'cpp']) + valid_systems = ['*'] + valid_prog_environs = ['*'] + + @rfm.run_before('compile') + def set_sourcepath(self): + self.sourcepath = f'hello.{self.lang}' + + @rfm.run_before('sanity') + def set_sanity_patterns(self): self.sanity_patterns = sn.assert_found(r'Hello, World\!', sn.print(self.stdout)) @@ -327,12 +333,11 @@ Assume you have a test that loads a ``gromacs`` module: .. code-block:: python class GromacsTest(rfm.RunOnlyRegressionTest): - def __init__(self): - ... - self.modules = ['gromacs'] + ... + modules = ['gromacs'] -This test would the default version of the module in the system, but you might want to test another version, before making that new one the default.
+This test would use the default version of the module in the system, but you might want to test another version, before making that new one the default. You can ask ReFrame to temporarily replace the ``gromacs`` module with another one as follows: diff --git a/reframe/utility/__init__.py b/reframe/utility/__init__.py index e3a6a1061f..cb048c5e19 100644 --- a/reframe/utility/__init__.py +++ b/reframe/utility/__init__.py @@ -573,7 +573,8 @@ def find_modules(substr, environ_mapping=None): class MyTest(rfm.RegressionTest): module_info = parameter(find_modules('netcdf')) - def __init__(self): + @rfm.run_after('init') + def apply_module_info(self): s, e, m = self.module_info self.valid_systems = [s] self.valid_prog_environs = [e] @@ -598,7 +599,8 @@ def __init__(self): class MyTest(rfm.RegressionTest): module_info = parameter(my_find_modules('GROMACS')) - def __init__(self): + @rfm.run_after('init') + def apply_module_info(self): s, e, m = self.module_info self.valid_systems = [s] self.valid_prog_environs = [e] diff --git a/tutorials/advanced/affinity/affinity.py b/tutorials/advanced/affinity/affinity.py index d60f080ceb..635e4d345c 100644 --- a/tutorials/advanced/affinity/affinity.py +++ b/tutorials/advanced/affinity/affinity.py @@ -9,15 +9,20 @@ @rfm.simple_test class AffinityTest(rfm.RegressionTest): - def __init__(self): - self.valid_systems = ['daint:gpu', 'daint:mc'] - self.valid_prog_environs = ['*'] - self.sourcesdir = 'https://github.com/vkarak/affinity.git' - self.build_system = 'Make' + valid_systems = ['daint:gpu', 'daint:mc'] + valid_prog_environs = ['*'] + sourcesdir = 'https://github.com/vkarak/affinity.git' + build_system = 'Make' + executable = './affinity' + + @rfm.run_before('compile') + def set_build_system_options(self): self.build_system.options = ['OPENMP=1'] - self.executable = './affinity' - self.sanity_patterns = sn.assert_found(r'CPU affinity', self.stdout) @rfm.run_before('run') def set_cpu_binding(self): self.job.launcher.options = 
['--cpu-bind=cores'] + + @rfm.run_before('sanity') + def set_sanity_patterns(self): + self.sanity_patterns = sn.assert_found(r'CPU affinity', self.stdout) diff --git a/tutorials/advanced/containers/container_test.py b/tutorials/advanced/containers/container_test.py index 1885690cbf..4813af10e9 100644 --- a/tutorials/advanced/containers/container_test.py +++ b/tutorials/advanced/containers/container_test.py @@ -10,19 +10,18 @@ @rfm.simple_test class ContainerTest(rfm.RunOnlyRegressionTest): platform = parameter(['Sarus', 'Singularity']) + valid_systems = ['daint:gpu'] + valid_prog_environs = ['builtin'] - def __init__(self): + os_release_pattern = r'18.04.\d+ LTS \(Bionic Beaver\)' + sanity_patterns = sn.assert_found(os_release_pattern, 'release.txt') + + @rfm.run_before('run') + def set_container_variables(self): self.descr = f'Run commands inside a container using {self.platform}' - self.valid_systems = ['daint:gpu'] - self.valid_prog_environs = ['builtin'] image_prefix = 'docker://' if self.platform == 'Singularity' else '' self.container_platform = self.platform self.container_platform.image = f'{image_prefix}ubuntu:18.04' self.container_platform.command = ( "bash -c 'cat /etc/os-release | tee /rfm_workdir/release.txt'" ) - os_release_pattern = r'18.04.\d+ LTS \(Bionic Beaver\)' - self.sanity_patterns = sn.all([ - sn.assert_found(os_release_pattern, 'release.txt'), - sn.assert_found(os_release_pattern, self.stdout) - ]) diff --git a/tutorials/advanced/flexnodes/flextest.py b/tutorials/advanced/flexnodes/flextest.py index 65acb3e353..ac22f3dd54 100644 --- a/tutorials/advanced/flexnodes/flextest.py +++ b/tutorials/advanced/flexnodes/flextest.py @@ -9,12 +9,14 @@ @rfm.simple_test class HostnameCheck(rfm.RunOnlyRegressionTest): - def __init__(self): - self.valid_systems = ['daint:gpu', 'daint:mc'] - self.valid_prog_environs = ['cray'] - self.executable = 'hostname' - self.num_tasks = 0 - self.num_tasks_per_node = 1 + valid_systems = ['daint:gpu', 'daint:mc'] + 
valid_prog_environs = ['cray'] + executable = 'hostname' + num_tasks = 0 + num_tasks_per_node = 1 + + @rfm.run_before('sanity') + def set_sanity_patterns(self): self.sanity_patterns = sn.assert_eq( sn.getattr(self, 'num_tasks'), sn.count(sn.findall(r'^nid\d+$', self.stdout)) diff --git a/tutorials/advanced/jobopts/eatmemory.py b/tutorials/advanced/jobopts/eatmemory.py index 70a64ca4d4..d3b3b6fac7 100644 --- a/tutorials/advanced/jobopts/eatmemory.py +++ b/tutorials/advanced/jobopts/eatmemory.py @@ -9,30 +9,34 @@ @rfm.simple_test class MemoryLimitTest(rfm.RegressionTest): - def __init__(self): - self.valid_systems = ['daint:gpu', 'daint:mc'] - self.valid_prog_environs = ['gnu'] - self.sourcepath = 'eatmemory.c' - self.executable_opts = ['2000M'] - self.sanity_patterns = sn.assert_found( - r'(exceeded memory limit)|(Out Of Memory)', self.stderr - ) + valid_systems = ['daint:gpu', 'daint:mc'] + valid_prog_environs = ['gnu'] + sourcepath = 'eatmemory.c' + executable_opts = ['2000M'] @rfm.run_before('run') def set_memory_limit(self): self.job.options = ['--mem=1000'] + @rfm.run_before('sanity') + def set_sanity_patterns(self): + self.sanity_patterns = sn.assert_found( + r'(exceeded memory limit)|(Out Of Memory)', self.stderr + ) + @rfm.simple_test class MemoryLimitWithResourcesTest(rfm.RegressionTest): - def __init__(self): - self.valid_systems = ['daint:gpu', 'daint:mc'] - self.valid_prog_environs = ['gnu'] - self.sourcepath = 'eatmemory.c' - self.executable_opts = ['2000M'] + valid_systems = ['daint:gpu', 'daint:mc'] + valid_prog_environs = ['gnu'] + sourcepath = 'eatmemory.c' + executable_opts = ['2000M'] + extra_resources = { + 'memory': {'size': '1000'} + } + + @rfm.run_before('sanity') + def set_sanity_patterns(self): self.sanity_patterns = sn.assert_found( r'(exceeded memory limit)|(Out Of Memory)', self.stderr ) - self.extra_resources = { - 'memory': {'size': '1000'} - } diff --git a/tutorials/advanced/makefiles/maketest.py 
b/tutorials/advanced/makefiles/maketest.py index 2360ea1089..90e9939d41 100644 --- a/tutorials/advanced/makefiles/maketest.py +++ b/tutorials/advanced/makefiles/maketest.py @@ -11,14 +11,19 @@ class MakefileTest(rfm.RegressionTest): elem_type = parameter(['float', 'double']) - def __init__(self): - self.descr = 'Test demonstrating use of Makefiles' - self.valid_systems = ['*'] - self.valid_prog_environs = ['clang', 'gnu'] - self.executable = './dotprod' - self.executable_opts = ['100000'] - self.build_system = 'Make' + descr = 'Test demonstrating use of Makefiles' + valid_systems = ['*'] + valid_prog_environs = ['clang', 'gnu'] + executable = './dotprod' + executable_opts = ['100000'] + build_system = 'Make' + + @rfm.run_before('compile') + def set_compiler_flags(self): self.build_system.cppflags = [f'-DELEM_TYPE={self.elem_type}'] + + @rfm.run_before('sanity') + def set_sanity_patterns(self): self.sanity_patterns = sn.assert_found( rf'Result \({self.elem_type}\):', self.stdout ) @@ -27,11 +32,15 @@ def __init__(self): @rfm.simple_test class MakeOnlyTest(rfm.CompileOnlyRegressionTest): elem_type = parameter(['float', 'double']) + descr = 'Test demonstrating use of Makefiles' + valid_systems = ['*'] + valid_prog_environs = ['clang', 'gnu'] + build_system = 'Make' - def __init__(self): - self.descr = 'Test demonstrating use of Makefiles' - self.valid_systems = ['*'] - self.valid_prog_environs = ['clang', 'gnu'] - self.build_system = 'Make' + @rfm.run_before('compile') + def set_compiler_flags(self): self.build_system.cppflags = [f'-DELEM_TYPE={self.elem_type}'] + + @rfm.run_before('sanity') + def set_sanity_patterns(self): self.sanity_patterns = sn.assert_not_found(r'warning', self.stdout) diff --git a/tutorials/advanced/makefiles/maketest_mixin.py b/tutorials/advanced/makefiles/maketest_mixin.py index ec9dc590e1..e1a570a3fa 100644 --- a/tutorials/advanced/makefiles/maketest_mixin.py +++ b/tutorials/advanced/makefiles/maketest_mixin.py @@ -13,14 +13,19 @@ class 
ElemTypeParam(rfm.RegressionMixin): @rfm.simple_test class MakefileTestAlt(rfm.RegressionTest, ElemTypeParam): - def __init__(self): - self.descr = 'Test demonstrating use of Makefiles' - self.valid_systems = ['*'] - self.valid_prog_environs = ['clang', 'gnu'] - self.executable = './dotprod' - self.executable_opts = ['100000'] - self.build_system = 'Make' + descr = 'Test demonstrating use of Makefiles' + valid_systems = ['*'] + valid_prog_environs = ['clang', 'gnu'] + executable = './dotprod' + executable_opts = ['100000'] + build_system = 'Make' + + @rfm.run_before('compile') + def set_compiler_flags(self): self.build_system.cppflags = [f'-DELEM_TYPE={self.elem_type}'] + + @rfm.run_before('sanity') + def set_sanity_patterns(self): self.sanity_patterns = sn.assert_found( rf'Result \({self.elem_type}\):', self.stdout ) @@ -28,10 +33,15 @@ def __init__(self): @rfm.simple_test class MakeOnlyTestAlt(rfm.CompileOnlyRegressionTest, ElemTypeParam): - def __init__(self): - self.descr = 'Test demonstrating use of Makefiles' - self.valid_systems = ['*'] - self.valid_prog_environs = ['clang', 'gnu'] - self.build_system = 'Make' + descr = 'Test demonstrating use of Makefiles' + valid_systems = ['*'] + valid_prog_environs = ['clang', 'gnu'] + build_system = 'Make' + + @rfm.run_before('compile') + def set_compiler_flags(self): self.build_system.cppflags = [f'-DELEM_TYPE={self.elem_type}'] + + @rfm.run_before('sanity') + def set_sanity_patterns(self): self.sanity_patterns = sn.assert_not_found(r'warning', self.stdout) diff --git a/tutorials/advanced/multilaunch/multilaunch.py b/tutorials/advanced/multilaunch/multilaunch.py index 576885adab..a5a8269a98 100644 --- a/tutorials/advanced/multilaunch/multilaunch.py +++ b/tutorials/advanced/multilaunch/multilaunch.py @@ -9,15 +9,11 @@ @rfm.simple_test class MultiLaunchTest(rfm.RunOnlyRegressionTest): - def __init__(self): - self.valid_systems = ['daint:gpu', 'daint:mc'] - self.valid_prog_environs = ['builtin'] - self.executable = 
'hostname' - self.num_tasks = 4 - self.num_tasks_per_node = 1 - self.sanity_patterns = sn.assert_eq( - sn.count(sn.extractall(r'^nid\d+', self.stdout)), 10 - ) + valid_systems = ['daint:gpu', 'daint:mc'] + valid_prog_environs = ['builtin'] + executable = 'hostname' + num_tasks = 4 + num_tasks_per_node = 1 @rfm.run_before('run') def pre_launch(self): @@ -26,3 +22,9 @@ def pre_launch(self): f'{cmd} -n {n} {self.executable}' for n in range(1, self.num_tasks) ] + + @rfm.run_before('sanity') + def set_sanity_patterns(self): + self.sanity_patterns = sn.assert_eq( + sn.count(sn.extractall(r'^nid\d+', self.stdout)), 10 + ) diff --git a/tutorials/advanced/parameterized/stream.py b/tutorials/advanced/parameterized/stream.py index 11c3bae702..8e0db229d3 100644 --- a/tutorials/advanced/parameterized/stream.py +++ b/tutorials/advanced/parameterized/stream.py @@ -10,50 +10,55 @@ @rfm.simple_test class StreamMultiSysTest(rfm.RegressionTest): num_bytes = parameter(1 << pow for pow in range(19, 30)) + array_size = variable(int) + ntimes = variable(int) - def __init__(self): - array_size = (self.num_bytes >> 3) // 3 - ntimes = 100*1024*1024 // array_size - self.descr = f'STREAM test (array size: {array_size}, ntimes: {ntimes})' # noqa: E501 - self.valid_systems = ['*'] - self.valid_prog_environs = ['cray', 'gnu', 'intel', 'pgi'] - self.prebuild_cmds = [ - 'wget http://www.cs.virginia.edu/stream/FTP/Code/stream.c', - ] - self.sourcepath = 'stream.c' - self.build_system = 'SingleSource' - self.build_system.cppflags = [f'-DSTREAM_ARRAY_SIZE={array_size}', - f'-DNTIMES={ntimes}'] - self.sanity_patterns = sn.assert_found(r'Solution Validates', - self.stdout) - self.perf_patterns = { - 'Triad': sn.extractsingle(r'Triad:\s+(\S+)\s+.*', - self.stdout, 1, float), + valid_systems = ['*'] + valid_prog_environs = ['cray', 'gnu', 'intel', 'pgi'] + prebuild_cmds = [ + 'wget http://www.cs.virginia.edu/stream/FTP/Code/stream.c', + ] + build_system = 'SingleSource' + sourcepath = 'stream.c' + 
variables = { + 'OMP_NUM_THREADS': '4', + 'OMP_PLACES': 'cores' + } + reference = { + '*': { + 'Triad': (0, None, None, 'MB/s'), } + } - # Flags per programming environment - self.flags = { - 'cray': ['-fopenmp', '-O3', '-Wall'], - 'gnu': ['-fopenmp', '-O3', '-Wall'], - 'intel': ['-qopenmp', '-O3', '-Wall'], - 'pgi': ['-mp', '-O3'] - } + # Flags per programming environment + flags = variable(dict, value={ + 'cray': ['-fopenmp', '-O3', '-Wall'], + 'gnu': ['-fopenmp', '-O3', '-Wall'], + 'intel': ['-qopenmp', '-O3', '-Wall'], + 'pgi': ['-mp', '-O3'] + }) - # Number of cores for each system - self.cores = { - 'catalina:default': 4, - 'daint:gpu': 12, - 'daint:mc': 36, - 'daint:login': 10 - } - self.reference = { - '*': { - 'Triad': (0, None, None, 'MB/s'), - } - } + # Number of cores for each system + cores = variable(dict, value={ + 'catalina:default': 4, + 'daint:gpu': 12, + 'daint:mc': 36, + 'daint:login': 10 + }) + + @rfm.run_after('init') + def set_variables(self): + self.array_size = (self.num_bytes >> 3) // 3 + self.ntimes = 100*1024*1024 // self.array_size + self.descr = ( + f'STREAM test (array size: {self.array_size}, ' + f'ntimes: {self.ntimes})' + ) @rfm.run_before('compile') - def setflags(self): + def set_compiler_flags(self): + self.build_system.cppflags = [f'-DSTREAM_ARRAY_SIZE={self.array_size}', + f'-DNTIMES={self.ntimes}'] environ = self.current_environ.name self.build_system.cflags = self.flags.get(environ, []) @@ -65,3 +70,15 @@ def set_num_threads(self): 'OMP_NUM_THREADS': str(num_threads), 'OMP_PLACES': 'cores' } + + @rfm.run_before('sanity') + def set_sanity_patterns(self): + self.sanity_patterns = sn.assert_found(r'Solution Validates', + self.stdout) + + @rfm.run_before('performance') + def set_perf_patterns(self): + self.perf_patterns = { + 'Triad': sn.extractsingle(r'Triad:\s+(\S+)\s+.*', + self.stdout, 1, float), + } diff --git a/tutorials/advanced/random/prepostrun.py b/tutorials/advanced/random/prepostrun.py index 1c4ea6f488..951702c1be 
100644 --- a/tutorials/advanced/random/prepostrun.py +++ b/tutorials/advanced/random/prepostrun.py @@ -9,13 +9,15 @@ @rfm.simple_test class PrepostRunTest(rfm.RunOnlyRegressionTest): - def __init__(self): - self.descr = 'Pre- and post-run demo test' - self.valid_systems = ['*'] - self.valid_prog_environs = ['*'] - self.prerun_cmds = ['source limits.sh'] - self.postrun_cmds = ['echo FINISHED'] - self.executable = './random_numbers.sh' + descr = 'Pre- and post-run demo test' + valid_systems = ['*'] + valid_prog_environs = ['*'] + prerun_cmds = ['source limits.sh'] + postrun_cmds = ['echo FINISHED'] + executable = './random_numbers.sh' + + @rfm.run_before('sanity') + def set_sanity_patterns(self): numbers = sn.extractall( r'Random: (?P<number>\S+)', self.stdout, 'number', float ) diff --git a/tutorials/advanced/random/randint.py b/tutorials/advanced/random/randint.py index e9b8ed6fe1..10d6c0b482 100644 --- a/tutorials/advanced/random/randint.py +++ b/tutorials/advanced/random/randint.py @@ -9,11 +9,13 @@ @rfm.simple_test class DeferredIterationTest(rfm.RunOnlyRegressionTest): - def __init__(self): - self.descr = 'Apply a sanity function iteratively' - self.valid_systems = ['*'] - self.valid_prog_environs = ['*'] - self.executable = './random_numbers.sh' + descr = 'Apply a sanity function iteratively' + valid_systems = ['*'] + valid_prog_environs = ['*'] + executable = './random_numbers.sh' + + @rfm.run_before('sanity') + def set_sanity_patterns(self): numbers = sn.extractall( r'Random: (?P<number>\S+)', self.stdout, 'number', float ) diff --git a/tutorials/advanced/runonly/echorand.py b/tutorials/advanced/runonly/echorand.py index 0bbc9ff3db..caeb4b40e3 100644 --- a/tutorials/advanced/runonly/echorand.py +++ b/tutorials/advanced/runonly/echorand.py @@ -9,20 +9,22 @@ @rfm.simple_test class EchoRandTest(rfm.RunOnlyRegressionTest): - def __init__(self): - self.descr = 'A simple test that echoes a random number' - self.valid_systems = ['*'] - self.valid_prog_environs = ['*'] - lower = 90
- upper = 100 - self.executable = 'echo' - self.executable_opts = [ - 'Random: ', - f'$((RANDOM%({upper}+1-{lower})+{lower}))' - ] + descr = 'A simple test that echoes a random number' + valid_systems = ['*'] + valid_prog_environs = ['*'] + lower = variable(int, value=90) + upper = variable(int, value=100) + executable = 'echo' + executable_opts = [ + 'Random: ', + f'$((RANDOM%({upper}+1-{lower})+{lower}))' + ] + + @rfm.run_before('sanity') + def set_sanity_patterns(self): self.sanity_patterns = sn.assert_bounded( sn.extractsingle( r'Random: (?P<number>\S+)', self.stdout, 'number', float ), - lower, upper + self.lower, self.upper ) diff --git a/tutorials/basics/hello/hello1.py b/tutorials/basics/hello/hello1.py index 919e4877a4..c18d8c9a9d 100644 --- a/tutorials/basics/hello/hello1.py +++ b/tutorials/basics/hello/hello1.py @@ -9,8 +9,8 @@ @rfm.simple_test class HelloTest(rfm.RegressionTest): - def __init__(self): - self.valid_systems = ['*'] - self.valid_prog_environs = ['*'] - self.sourcepath = 'hello.c' - self.sanity_patterns = sn.assert_found(r'Hello, World\!', self.stdout) + valid_systems = ['*'] + valid_prog_environs = ['*'] + sourcepath = 'hello.c' + executable_opts = ['> hello.out'] + sanity_patterns = sn.assert_found(r'Hello, World\!', 'hello.out') diff --git a/tutorials/basics/hello/hello2.py b/tutorials/basics/hello/hello2.py index 7d2b353e4d..55d1ec3358 100644 --- a/tutorials/basics/hello/hello2.py +++ b/tutorials/basics/hello/hello2.py @@ -11,8 +11,11 @@ class HelloMultiLangTest(rfm.RegressionTest): lang = parameter(['c', 'cpp']) - def __init__(self): - self.valid_systems = ['*'] - self.valid_prog_environs = ['*'] + valid_systems = ['*'] + valid_prog_environs = ['*'] + executable_opts = ['> hello.out'] + sanity_patterns = sn.assert_found(r'Hello, World\!', 'hello.out') + + @rfm.run_before('compile') + def set_sourcepath(self): self.sourcepath = f'hello.{self.lang}' - self.sanity_patterns = sn.assert_found(r'Hello, World\!', self.stdout) diff --git
a/tutorials/basics/hellomp/hellomp1.py b/tutorials/basics/hellomp/hellomp1.py index 9925a9ed22..d39dc5fdab 100644 --- a/tutorials/basics/hellomp/hellomp1.py +++ b/tutorials/basics/hellomp/hellomp1.py @@ -9,17 +9,19 @@ @rfm.simple_test class HelloThreadedTest(rfm.RegressionTest): - def __init__(self): - self.valid_systems = ['*'] - self.valid_prog_environs = ['*'] - self.sourcepath = 'hello_threads.cpp' - self.build_system = 'SingleSource' - self.build_system.cxxflags = ['-std=c++11', '-Wall'] - self.executable_opts = ['16'] - self.sanity_patterns = sn.assert_found(r'Hello, World\!', self.stdout) + valid_systems = ['*'] + valid_prog_environs = ['*'] + sourcepath = 'hello_threads.cpp' + build_system = 'SingleSource' + executable_opts = ['16'] @rfm.run_before('compile') - def set_threading_flags(self): + def set_compilation_flags(self): + self.build_system.cxxflags = ['-std=c++11', '-Wall'] environ = self.current_environ.name if environ in {'clang', 'gnu'}: self.build_system.cxxflags += ['-pthread'] + + @rfm.run_before('sanity') + def set_sanity_patterns(self): + self.sanity_patterns = sn.assert_found(r'Hello, World\!', self.stdout) diff --git a/tutorials/basics/hellomp/hellomp2.py b/tutorials/basics/hellomp/hellomp2.py index 48524d8b28..8fab6daa77 100644 --- a/tutorials/basics/hellomp/hellomp2.py +++ b/tutorials/basics/hellomp/hellomp2.py @@ -9,19 +9,21 @@ @rfm.simple_test class HelloThreadedExtendedTest(rfm.RegressionTest): - def __init__(self): - self.valid_systems = ['*'] - self.valid_prog_environs = ['*'] - self.sourcepath = 'hello_threads.cpp' - self.executable_opts = ['16'] - self.build_system = 'SingleSource' - self.build_system.cxxflags = ['-std=c++11', '-Wall'] - num_messages = sn.len(sn.findall(r'\[\s?\d+\] Hello, World\!', - self.stdout)) - self.sanity_patterns = sn.assert_eq(num_messages, 16) + valid_systems = ['*'] + valid_prog_environs = ['*'] + sourcepath = 'hello_threads.cpp' + build_system = 'SingleSource' + executable_opts = ['16'] 
@rfm.run_before('compile') - def set_threading_flags(self): + def set_compilation_flags(self): + self.build_system.cxxflags = ['-std=c++11', '-Wall'] environ = self.current_environ.name if environ in {'clang', 'gnu'}: self.build_system.cxxflags += ['-pthread'] + + @rfm.run_before('sanity') + def set_sanity_patterns(self): + num_messages = sn.len(sn.findall(r'\[\s?\d+\] Hello, World\!', + self.stdout)) + self.sanity_patterns = sn.assert_eq(num_messages, 16) diff --git a/tutorials/basics/hellomp/hellomp3.py b/tutorials/basics/hellomp/hellomp3.py index c6441545b6..2fb53c5763 100644 --- a/tutorials/basics/hellomp/hellomp3.py +++ b/tutorials/basics/hellomp/hellomp3.py @@ -9,20 +9,22 @@ @rfm.simple_test class HelloThreadedExtended2Test(rfm.RegressionTest): - def __init__(self): - self.valid_systems = ['*'] - self.valid_prog_environs = ['*'] - self.sourcepath = 'hello_threads.cpp' - self.executable_opts = ['16'] - self.build_system = 'SingleSource' - self.build_system.cppflags = ['-DSYNC_MESSAGES'] - self.build_system.cxxflags = ['-std=c++11', '-Wall'] - num_messages = sn.len(sn.findall(r'\[\s?\d+\] Hello, World\!', - self.stdout)) - self.sanity_patterns = sn.assert_eq(num_messages, 16) + valid_systems = ['*'] + valid_prog_environs = ['*'] + sourcepath = 'hello_threads.cpp' + build_system = 'SingleSource' + executable_opts = ['16'] @rfm.run_before('compile') - def set_threading_flags(self): + def set_compilation_flags(self): + self.build_system.cppflags = ['-DSYNC_MESSAGES'] + self.build_system.cxxflags = ['-std=c++11', '-Wall'] environ = self.current_environ.name if environ in {'clang', 'gnu'}: self.build_system.cxxflags += ['-pthread'] + + @rfm.run_before('sanity') + def set_sanity_patterns(self): + num_messages = sn.len(sn.findall(r'\[\s?\d+\] Hello, World\!', + self.stdout)) + self.sanity_patterns = sn.assert_eq(num_messages, 16) diff --git a/tutorials/basics/stream/stream1.py b/tutorials/basics/stream/stream1.py index dd8985aa0f..30a0434026 100644 --- 
--- a/tutorials/basics/stream/stream1.py
+++ b/tutorials/basics/stream/stream1.py
@@ -9,22 +9,30 @@
 
 @rfm.simple_test
 class StreamTest(rfm.RegressionTest):
-    def __init__(self):
-        self.valid_systems = ['*']
-        self.valid_prog_environs = ['gnu']
-        self.prebuild_cmds = [
-            'wget http://www.cs.virginia.edu/stream/FTP/Code/stream.c',
-        ]
-        self.build_system = 'SingleSource'
-        self.sourcepath = 'stream.c'
+    valid_systems = ['*']
+    valid_prog_environs = ['gnu']
+    prebuild_cmds = [
+        'wget http://www.cs.virginia.edu/stream/FTP/Code/stream.c',
+    ]
+    build_system = 'SingleSource'
+    sourcepath = 'stream.c'
+    variables = {
+        'OMP_NUM_THREADS': '4',
+        'OMP_PLACES': 'cores'
+    }
+
+    @rfm.run_before('compile')
+    def set_compiler_flags(self):
         self.build_system.cppflags = ['-DSTREAM_ARRAY_SIZE=$((1 << 25))']
         self.build_system.cflags = ['-fopenmp', '-O3', '-Wall']
-        self.variables = {
-            'OMP_NUM_THREADS': '4',
-            'OMP_PLACES': 'cores'
-        }
+
+    @rfm.run_before('sanity')
+    def set_sanity_patterns(self):
         self.sanity_patterns = sn.assert_found(r'Solution Validates',
                                                self.stdout)
+
+    @rfm.run_before('performance')
+    def set_perf_patterns(self):
         self.perf_patterns = {
             'Copy': sn.extractsingle(r'Copy:\s+(\S+)\s+.*',
                                      self.stdout, 1, float),
diff --git a/tutorials/basics/stream/stream2.py b/tutorials/basics/stream/stream2.py
index 725dae58ff..68ff25743c 100644
--- a/tutorials/basics/stream/stream2.py
+++ b/tutorials/basics/stream/stream2.py
@@ -9,22 +9,38 @@
 
 @rfm.simple_test
 class StreamWithRefTest(rfm.RegressionTest):
-    def __init__(self):
-        self.valid_systems = ['*']
-        self.valid_prog_environs = ['gnu']
-        self.prebuild_cmds = [
-            'wget http://www.cs.virginia.edu/stream/FTP/Code/stream.c',
-        ]
-        self.build_system = 'SingleSource'
-        self.sourcepath = 'stream.c'
+    valid_systems = ['*']
+    valid_prog_environs = ['gnu']
+    prebuild_cmds = [
+        'wget http://www.cs.virginia.edu/stream/FTP/Code/stream.c',
+    ]
+    build_system = 'SingleSource'
+    sourcepath = 'stream.c'
+    variables = {
+        'OMP_NUM_THREADS': '4',
+        'OMP_PLACES': 'cores'
+    }
+    reference = {
+        'catalina': {
+            'Copy': (25200, -0.05, 0.05, 'MB/s'),
+            'Scale': (16800, -0.05, 0.05, 'MB/s'),
+            'Add': (18500, -0.05, 0.05, 'MB/s'),
+            'Triad': (18800, -0.05, 0.05, 'MB/s')
+        }
+    }
+
+    @rfm.run_before('compile')
+    def set_compiler_flags(self):
         self.build_system.cppflags = ['-DSTREAM_ARRAY_SIZE=$((1 << 25))']
         self.build_system.cflags = ['-fopenmp', '-O3', '-Wall']
-        self.variables = {
-            'OMP_NUM_THREADS': '4',
-            'OMP_PLACES': 'cores'
-        }
+
+    @rfm.run_before('sanity')
+    def set_sanity_patterns(self):
         self.sanity_patterns = sn.assert_found(r'Solution Validates',
                                                self.stdout)
+
+    @rfm.run_before('performance')
+    def set_perf_patterns(self):
         self.perf_patterns = {
             'Copy': sn.extractsingle(r'Copy:\s+(\S+)\s+.*',
                                      self.stdout, 1, float),
@@ -35,11 +51,3 @@ def __init__(self):
             'Triad': sn.extractsingle(r'Triad:\s+(\S+)\s+.*',
                                       self.stdout, 1, float)
         }
-        self.reference = {
-            'catalina': {
-                'Copy': (25200, -0.05, 0.05, 'MB/s'),
-                'Scale': (16800, -0.05, 0.05, 'MB/s'),
-                'Add': (18500, -0.05, 0.05, 'MB/s'),
-                'Triad': (18800, -0.05, 0.05, 'MB/s')
-            }
-        }
diff --git a/tutorials/basics/stream/stream3.py b/tutorials/basics/stream/stream3.py
index 291c174017..333350459b 100644
--- a/tutorials/basics/stream/stream3.py
+++ b/tutorials/basics/stream/stream3.py
@@ -9,54 +9,45 @@
 
 @rfm.simple_test
 class StreamMultiSysTest(rfm.RegressionTest):
-    def __init__(self):
-        self.valid_systems = ['*']
-        self.valid_prog_environs = ['cray', 'gnu', 'intel', 'pgi']
-        self.prebuild_cmds = [
-            'wget http://www.cs.virginia.edu/stream/FTP/Code/stream.c',
-        ]
-        self.build_system = 'SingleSource'
-        self.sourcepath = 'stream.c'
-        self.build_system.cppflags = ['-DSTREAM_ARRAY_SIZE=$((1 << 25))']
-        self.sanity_patterns = sn.assert_found(r'Solution Validates',
-                                               self.stdout)
-        self.perf_patterns = {
-            'Copy': sn.extractsingle(r'Copy:\s+(\S+)\s+.*',
-                                     self.stdout, 1, float),
-            'Scale': sn.extractsingle(r'Scale:\s+(\S+)\s+.*',
-                                      self.stdout, 1, float),
-            'Add': sn.extractsingle(r'Add:\s+(\S+)\s+.*',
-                                    self.stdout, 1, float),
-            'Triad': sn.extractsingle(r'Triad:\s+(\S+)\s+.*',
-                                      self.stdout, 1, float)
-        }
-        self.reference = {
-            'catalina': {
-                'Copy': (25200, -0.05, 0.05, 'MB/s'),
-                'Scale': (16800, -0.05, 0.05, 'MB/s'),
-                'Add': (18500, -0.05, 0.05, 'MB/s'),
-                'Triad': (18800, -0.05, 0.05, 'MB/s')
-            }
+    valid_systems = ['*']
+    valid_prog_environs = ['cray', 'gnu', 'intel', 'pgi']
+    prebuild_cmds = [
+        'wget http://www.cs.virginia.edu/stream/FTP/Code/stream.c',
+    ]
+    build_system = 'SingleSource'
+    sourcepath = 'stream.c'
+    variables = {
+        'OMP_NUM_THREADS': '4',
+        'OMP_PLACES': 'cores'
+    }
+    reference = {
+        'catalina': {
+            'Copy': (25200, -0.05, 0.05, 'MB/s'),
+            'Scale': (16800, -0.05, 0.05, 'MB/s'),
+            'Add': (18500, -0.05, 0.05, 'MB/s'),
+            'Triad': (18800, -0.05, 0.05, 'MB/s')
         }
+    }
 
-        # Flags per programming environment
-        self.flags = {
-            'cray': ['-fopenmp', '-O3', '-Wall'],
-            'gnu': ['-fopenmp', '-O3', '-Wall'],
-            'intel': ['-qopenmp', '-O3', '-Wall'],
-            'pgi': ['-mp', '-O3']
-        }
+    # Flags per programming environment
+    flags = variable(dict, value={
+        'cray': ['-fopenmp', '-O3', '-Wall'],
+        'gnu': ['-fopenmp', '-O3', '-Wall'],
+        'intel': ['-qopenmp', '-O3', '-Wall'],
+        'pgi': ['-mp', '-O3']
+    })
 
-        # Number of cores for each system
-        self.cores = {
-            'catalina:default': 4,
-            'daint:gpu': 12,
-            'daint:mc': 36,
-            'daint:login': 10
-        }
+    # Number of cores for each system
+    cores = variable(dict, value={
+        'catalina:default': 4,
+        'daint:gpu': 12,
+        'daint:mc': 36,
+        'daint:login': 10
+    })
 
     @rfm.run_before('compile')
-    def setflags(self):
+    def set_compiler_flags(self):
+        self.build_system.cppflags = ['-DSTREAM_ARRAY_SIZE=$((1 << 25))']
         environ = self.current_environ.name
         self.build_system.cflags = self.flags.get(environ, [])
 
@@ -68,3 +59,21 @@ def set_num_threads(self):
             'OMP_NUM_THREADS': str(num_threads),
             'OMP_PLACES': 'cores'
         }
+
+    @rfm.run_before('sanity')
+    def set_sanity_patterns(self):
+        self.sanity_patterns = sn.assert_found(r'Solution Validates',
+                                               self.stdout)
+
+    @rfm.run_before('performance')
+    def set_perf_patterns(self):
+        self.perf_patterns = {
+            'Copy': sn.extractsingle(r'Copy:\s+(\S+)\s+.*',
+                                     self.stdout, 1, float),
+            'Scale': sn.extractsingle(r'Scale:\s+(\S+)\s+.*',
+                                      self.stdout, 1, float),
+            'Add': sn.extractsingle(r'Add:\s+(\S+)\s+.*',
+                                    self.stdout, 1, float),
+            'Triad': sn.extractsingle(r'Triad:\s+(\S+)\s+.*',
+                                      self.stdout, 1, float)
+        }
diff --git a/tutorials/deps/osu_benchmarks.py b/tutorials/deps/osu_benchmarks.py
index 2467cf20c7..d6de228637 100644
--- a/tutorials/deps/osu_benchmarks.py
+++ b/tutorials/deps/osu_benchmarks.py
@@ -13,27 +13,27 @@
 class OSUBenchmarkTestBase(rfm.RunOnlyRegressionTest):
     '''Base class of OSU benchmarks runtime tests'''
 
-    def __init__(self):
-        self.valid_systems = ['daint:gpu']
-        self.valid_prog_environs = ['gnu', 'pgi', 'intel']
-        self.sourcesdir = None
-        self.num_tasks = 2
-        self.num_tasks_per_node = 1
-        self.sanity_patterns = sn.assert_found(r'^8', self.stdout)
+    valid_systems = ['daint:gpu']
+    valid_prog_environs = ['gnu', 'pgi', 'intel']
+    sourcesdir = None
+    num_tasks = 2
+    num_tasks_per_node = 1
+
+    @rfm.run_after('init')
+    def set_dependencies(self):
         self.depends_on('OSUBuildTest', udeps.by_env)
 
+    @rfm.run_before('sanity')
+    def set_sanity_patterns(self):
+        self.sanity_patterns = sn.assert_found(r'^8', self.stdout)
+
 
 @rfm.simple_test
 class OSULatencyTest(OSUBenchmarkTestBase):
-    def __init__(self):
-        super().__init__()
-        self.descr = 'OSU latency test'
-        self.perf_patterns = {
-            'latency': sn.extractsingle(r'^8\s+(\S+)', self.stdout, 1, float)
-        }
-        self.reference = {
-            '*': {'latency': (0, None, None, 'us')}
-        }
+    descr = 'OSU latency test'
+    reference = {
+        '*': {'latency': (0, None, None, 'us')}
+    }
 
     @rfm.require_deps
     def set_executable(self, OSUBuildTest):
@@ -43,19 +43,19 @@ def set_executable(self, OSUBuildTest):
         )
         self.executable_opts = ['-x', '100', '-i', '1000']
 
+    @rfm.run_before('performance')
+    def set_perf_patterns(self):
+        self.perf_patterns = {
+            'latency': sn.extractsingle(r'^8\s+(\S+)', self.stdout, 1, float)
+        }
+
 
 @rfm.simple_test
 class OSUBandwidthTest(OSUBenchmarkTestBase):
-    def __init__(self):
-        super().__init__()
-        self.descr = 'OSU bandwidth test'
-        self.perf_patterns = {
-            'bandwidth': sn.extractsingle(r'^4194304\s+(\S+)',
-                                          self.stdout, 1, float)
-        }
-        self.reference = {
-            '*': {'bandwidth': (0, None, None, 'MB/s')}
-        }
+    descr = 'OSU bandwidth test'
+    reference = {
+        '*': {'bandwidth': (0, None, None, 'MB/s')}
+    }
 
     @rfm.require_deps
     def set_executable(self, OSUBuildTest):
@@ -65,20 +65,24 @@ def set_executable(self, OSUBuildTest):
         )
         self.executable_opts = ['-x', '100', '-i', '1000']
 
+    @rfm.run_before('performance')
+    def set_perf_patterns(self):
+        self.perf_patterns = {
+            'bandwidth': sn.extractsingle(r'^4194304\s+(\S+)',
+                                          self.stdout, 1, float)
+        }
+
 
 @rfm.simple_test
 class OSUAllreduceTest(OSUBenchmarkTestBase):
     mpi_tasks = parameter(1 << i for i in range(1, 5))
+    descr = 'OSU Allreduce test'
+    reference = {
+        '*': {'latency': (0, None, None, 'us')}
+    }
 
-    def __init__(self):
-        super().__init__()
-        self.descr = 'OSU Allreduce test'
-        self.perf_patterns = {
-            'latency': sn.extractsingle(r'^8\s+(\S+)', self.stdout, 1, float)
-        }
-        self.reference = {
-            '*': {'latency': (0, None, None, 'us')}
-        }
+    @rfm.run_after('init')
+    def set_num_tasks(self):
         self.num_tasks = self.mpi_tasks
 
     @rfm.require_deps
@@ -89,17 +93,23 @@ def set_executable(self, OSUBuildTest):
         )
         self.executable_opts = ['-m', '8', '-x', '1000', '-i', '20000']
 
+    @rfm.run_before('performance')
+    def set_perf_patterns(self):
+        self.perf_patterns = {
+            'latency': sn.extractsingle(r'^8\s+(\S+)', self.stdout, 1, float)
+        }
+
 
 @rfm.simple_test
 class OSUBuildTest(rfm.CompileOnlyRegressionTest):
-    def __init__(self):
-        self.descr = 'OSU benchmarks build test'
-        self.valid_systems = ['daint:gpu']
-        self.valid_prog_environs = ['gnu', 'pgi', 'intel']
+    descr = 'OSU benchmarks build test'
+    valid_systems = ['daint:gpu']
+    valid_prog_environs = ['gnu', 'pgi', 'intel']
+    build_system = 'Autotools'
+
+    @rfm.run_after('init')
+    def inject_dependencies(self):
         self.depends_on('OSUDownloadTest', udeps.fully)
-        self.build_system = 'Autotools'
-        self.build_system.max_concurrency = 8
-        self.sanity_patterns = sn.assert_not_found('error', self.stderr)
 
     @rfm.require_deps
     def set_sourcedir(self, OSUDownloadTest):
@@ -108,18 +118,28 @@ def set_sourcedir(self, OSUDownloadTest):
             'osu-micro-benchmarks-5.6.2'
         )
 
+    @rfm.run_before('compile')
+    def set_build_system_attrs(self):
+        self.build_system.max_concurrency = 8
+
+    @rfm.run_before('sanity')
+    def set_sanity_patterns(self):
+        self.sanity_patterns = sn.assert_not_found('error', self.stderr)
+
 
 @rfm.simple_test
 class OSUDownloadTest(rfm.RunOnlyRegressionTest):
-    def __init__(self):
-        self.descr = 'OSU benchmarks download sources'
-        self.valid_systems = ['daint:login']
-        self.valid_prog_environs = ['builtin']
-        self.executable = 'wget'
-        self.executable_opts = [
-            'http://mvapich.cse.ohio-state.edu/download/mvapich/osu-micro-benchmarks-5.6.2.tar.gz'  # noqa: E501
-        ]
-        self.postrun_cmds = [
-            'tar xzf osu-micro-benchmarks-5.6.2.tar.gz'
-        ]
+    descr = 'OSU benchmarks download sources'
+    valid_systems = ['daint:login']
+    valid_prog_environs = ['builtin']
+    executable = 'wget'
+    executable_opts = [
+        'http://mvapich.cse.ohio-state.edu/download/mvapich/osu-micro-benchmarks-5.6.2.tar.gz'  # noqa: E501
+    ]
+    postrun_cmds = [
+        'tar xzf osu-micro-benchmarks-5.6.2.tar.gz'
+    ]
+
+    @rfm.run_before('sanity')
+    def set_sanity_patterns(self):
         self.sanity_patterns = sn.assert_not_found('error', self.stderr)
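
The diffs above migrate tests from constructor-heavy `__init__` bodies to declarative class attributes plus pipeline hooks such as `rfm.run_before('compile')` and `rfm.run_after('init')`. A minimal sketch of how such phase-tagged hooks can be implemented in plain Python is shown below; this is an illustration of the pattern only, not ReFrame's actual implementation, and the names `MiniPipeline`, `MyTest` and the `_phase` attribute are invented for the example:

```python
def run_before(phase):
    """Tag a method so a toy pipeline runs it when reaching `phase`.

    Illustrative stand-in for ReFrame's hook decorators; the `_phase`
    function attribute is an invented detail of this sketch.
    """
    def deco(fn):
        fn._phase = phase
        return fn
    return deco


class MiniPipeline:
    """Toy pipeline: runs hooks registered for each phase, in order."""

    phases = ('compile', 'run', 'sanity', 'performance')

    def _hooks_for(self, phase):
        # Scan the class (including parents, via dir()) for tagged methods.
        for name in dir(type(self)):
            fn = getattr(type(self), name)
            if callable(fn) and getattr(fn, '_phase', None) == phase:
                yield getattr(self, name)   # bound method

    def execute(self):
        executed = []
        for phase in self.phases:
            for hook in self._hooks_for(phase):
                hook()
            executed.append(phase)
        return executed


class MyTest(MiniPipeline):
    cflags = None   # set lazily by a hook, as in the diffs above

    @run_before('compile')
    def set_compiler_flags(self):
        self.cflags = ['-fopenmp', '-O3']


t = MyTest()
t.execute()
print(t.cflags)   # prints ['-fopenmp', '-O3']
```

Because the decorator stores its metadata on the function object at class-definition time, a subclass that overrides `set_compiler_flags` automatically replaces the hook, which mirrors how the hook-based style in these diffs keeps specialization straightforward.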