Binary file added docs/_static/img/gitlab-ci.png
8 changes: 8 additions & 0 deletions docs/manpage.rst
@@ -147,6 +147,14 @@ There are currently two actions that can be performed on tests: (a) list the tes
An action must always be specified.


.. option:: --ci-generate=FILE

Do not run the tests, but generate a Gitlab `child pipeline <https://docs.gitlab.com/ee/ci/parent_child_pipelines.html>`__ specification in ``FILE``.
You can then set up your Gitlab CI to use the generated file, so that every test runs as a separate Gitlab job while respecting test dependencies.
For more information, see :ref:`generate-ci-pipeline`.

.. versionadded:: 3.4.1

.. option:: -l, --list

List selected tests.
59 changes: 59 additions & 0 deletions docs/tutorial_tips_tricks.rst
@@ -367,6 +367,8 @@ This option is useful when you combine it with the various test filtering option
For example, you might want to rerun only the failed tests or just a specific test in a dependency chain.
Let's see an artificial example that uses the following test dependency graph.

.. _fig-deps-complex:

.. figure:: _static/img/deps-complex.svg
:align: center

@@ -477,3 +479,60 @@ If we tried to run :class:`T6` without restoring the session, we would have to r

[ PASSED ] Ran 5 test case(s) from 5 check(s) (0 failure(s))
[==========] Finished on Thu Jan 21 14:32:09 2021


.. _generate-ci-pipeline:

Integrating into a CI pipeline
------------------------------

.. versionadded:: 3.4.1

Instead of running your tests, you can ask ReFrame to generate a `child pipeline <https://docs.gitlab.com/ee/ci/parent_child_pipelines.html>`__ specification for the Gitlab CI.
This will spawn a separate CI job for each ReFrame test, respecting test dependencies.
You could run all of your tests in a single job of your Gitlab pipeline, but you would then not take advantage of the parallelism across different CI jobs.
Having a separate CI job per test also makes it easier to spot failing tests.

Once you have set up a `runner <https://docs.gitlab.com/ee/ci/quick_start/>`__ for your repository, it is fairly straightforward to have ReFrame generate the necessary CI steps automatically.
The following is an example of a ``.gitlab-ci.yml`` file that does exactly that:

.. code-block:: yaml

stages:
- generate
- test

generate-pipeline:
stage: generate
script:
- reframe --ci-generate=${CI_PROJECT_DIR}/pipeline.yml -c ${CI_PROJECT_DIR}/path/to/tests
artifacts:
paths:
- ${CI_PROJECT_DIR}/pipeline.yml

test-jobs:
stage: test
trigger:
include:
- artifact: pipeline.yml
job: generate-pipeline
strategy: depend


It defines two stages.
The first one, called ``generate``, calls ReFrame to generate the pipeline specification for the desired tests.
All the usual `test selection options <manpage.html#test-filtering>`__ can be used to select specific tests.
ReFrame will process them as usual, but instead of running the selected tests, it will generate the correct steps for running each test individually as a Gitlab job.
The generated CI pipeline file is then passed to the second stage as an artifact and we are done!
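
Based on the generator code in this change, the ``pipeline.yml`` produced for a simple two-test dependency chain would look roughly as follows; the test names and check path are illustrative only:

.. code-block:: yaml

   cache:
     key: ${CI_COMMIT_REF_SLUG}
     paths:
       - rfm-stage/${CI_COMMIT_SHORT_SHA}
   stages:
     - rfm-stage-0
     - rfm-stage-1
   T0:
     stage: rfm-stage-0
     script:
       - reframe --prefix=rfm-stage/${CI_COMMIT_SHORT_SHA} -c path/to/tests -R --report-file=rfm-report-0.json -n T0 -r
     artifacts:
       paths:
         - rfm-report-0.json
     needs: []
   T1:
     stage: rfm-stage-1
     script:
       - reframe --prefix=rfm-stage/${CI_COMMIT_SHORT_SHA} -c path/to/tests -R --report-file=rfm-report-1.json --restore-session=rfm-report-0.json -n T1 -r
     artifacts:
       paths:
         - rfm-report-1.json
     needs:
       - T0

Each test runs in a stage corresponding to its level in the dependency graph, and the ``needs`` keyword lets Gitlab start a job as soon as its dependencies have finished.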

The following figure shows one part of the automatically generated pipeline for the test graph depicted `above <#fig-deps-complex>`__.

.. figure:: _static/img/gitlab-ci.png
:align: center

:sub:`Snapshot of a Gitlab pipeline generated automatically by ReFrame.`


.. note::

The ReFrame executable must be available in the Gitlab runner that will run the CI jobs.
7 changes: 1 addition & 6 deletions reframe/core/exceptions.py
@@ -308,13 +308,8 @@ def is_severe(exc_type, exc_value, tb):
'''Check if exception is a severe one.'''

soft_errors = (ReframeError,
ConnectionError,
FileExistsError,
FileNotFoundError,
IsADirectoryError,
OSError,
KeyboardInterrupt,
NotADirectoryError,
PermissionError,
TimeoutError)
if isinstance(exc_value, soft_errors):
return False
72 changes: 72 additions & 0 deletions reframe/frontend/ci.py
@@ -0,0 +1,72 @@
# Copyright 2016-2020 Swiss National Supercomputing Centre (CSCS/ETH Zurich)
# ReFrame Project Developers. See the top-level LICENSE file for details.
#
# SPDX-License-Identifier: BSD-3-Clause

import sys
import yaml

import reframe.core.exceptions as errors
import reframe.core.runtime as runtime


def _emit_gitlab_pipeline(testcases):
config = runtime.runtime().site_config

# Collect the necessary ReFrame invariants
program = 'reframe'
prefix = 'rfm-stage/${CI_COMMIT_SHORT_SHA}'
checkpath = config.get('general/0/check_search_path')
recurse = config.get('general/0/check_search_recursive')

def rfm_command(testcase):
if config.filename != '<builtin>':
config_opt = f'-C {config.filename}'
else:
config_opt = ''

report_file = f'rfm-report-{testcase.level}.json'
if testcase.level:
restore_file = f'rfm-report-{testcase.level - 1}.json'
else:
restore_file = None

return ' '.join([
program,
f'--prefix={prefix}', config_opt,
f'{" ".join("-c " + c for c in checkpath)}',
f'-R' if recurse else '',
f'--report-file={report_file}',
f'--restore-session={restore_file}' if restore_file else '',
'-n', testcase.check.name, '-r'
])

max_level = 0 # We need the maximum level to generate the stages section
json = {
'cache': {
'key': '${CI_COMMIT_REF_SLUG}',
'paths': ['rfm-stage/${CI_COMMIT_SHORT_SHA}']
},
'stages': []
}
for tc in testcases:
json[f'{tc.check.name}'] = {
'stage': f'rfm-stage-{tc.level}',
'script': [rfm_command(tc)],
'artifacts': {
'paths': [f'rfm-report-{tc.level}.json']
},
'needs': [t.check.name for t in tc.deps]
}
max_level = max(max_level, tc.level)

json['stages'] = [f'rfm-stage-{m}' for m in range(max_level+1)]
return json


def emit_pipeline(fp, testcases, backend='gitlab'):
if backend != 'gitlab':
raise errors.ReframeError(f'unknown CI backend {backend!r}')

yaml.dump(_emit_gitlab_pipeline(testcases), stream=fp,
indent=2, sort_keys=False, width=sys.maxsize)
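
The job structure emitted by ``_emit_gitlab_pipeline`` above can be modelled with a small standalone sketch; ``Case`` below is a hypothetical stand-in for ReFrame's internal test case objects, and the ``script`` command is simplified, but it shows how dependency levels map onto pipeline stages and ``needs`` entries:

```python
from collections import namedtuple

# Hypothetical stand-in for ReFrame's internal test case objects
Case = namedtuple('Case', ['name', 'level', 'deps'])


def emit_jobs(testcases):
    '''Build a Gitlab-pipeline-like dict with one job per test case.'''
    pipeline = {'stages': []}
    max_level = 0
    for tc in testcases:
        pipeline[tc.name] = {
            # Jobs are grouped into stages by dependency level
            'stage': f'rfm-stage-{tc.level}',
            'script': [f'reframe -n {tc.name} -r'],
            # `needs` lets Gitlab start a job once its dependencies finish
            'needs': [d.name for d in tc.deps],
        }
        max_level = max(max_level, tc.level)

    # One pipeline stage per dependency level, in order
    pipeline['stages'] = [f'rfm-stage-{m}' for m in range(max_level + 1)]
    return pipeline


t0 = Case('T0', 0, [])
t1 = Case('T1', 1, [t0])
jobs = emit_jobs([t0, t1])
```

Dumping such a dict with ``yaml.dump`` yields a valid child-pipeline specification, which is what ``emit_pipeline`` writes to the output file.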
28 changes: 25 additions & 3 deletions reframe/frontend/cli.py
@@ -21,6 +21,7 @@
import reframe.core.runtime as runtime
import reframe.core.warnings as warnings
import reframe.frontend.argparse as argparse
import reframe.frontend.ci as ci
import reframe.frontend.dependencies as dependencies
import reframe.frontend.filters as filters
import reframe.frontend.runreport as runreport
@@ -119,7 +120,7 @@ def list_checks(testcases, printer, detailed=False):
printer.info(
'\n'.join(format_check(c, deps[c.name], detailed) for c in checks)
)
printer.info(f'Found {len(checks)} check(s)')
printer.info(f'Found {len(checks)} check(s)\n')


def logfiles_message():
@@ -272,6 +273,11 @@ def main():
'-r', '--run', action='store_true',
help='Run the selected checks'
)
action_options.add_argument(
'--ci-generate', action='store', metavar='FILE',
help=('Generate into FILE a Gitlab CI pipeline '
'for the selected tests and exit'),
)

# Run options
run_options.add_argument(
@@ -334,6 +340,8 @@
'--disable-hook', action='append', metavar='NAME', dest='hooks',
default=[], help='Disable a pipeline hook for this run'
)

# Environment options
env_options.add_argument(
'-M', '--map-module', action='append', metavar='MAPPING',
dest='module_mappings', default=[],
@@ -795,9 +803,23 @@ def _case_failed(t):
list_checks(testcases, printer, options.list_detailed)
sys.exit(0)

if options.ci_generate:
list_checks(testcases, printer)
printer.info('[Generate CI]')
with open(options.ci_generate, 'wt') as fp:
ci.emit_pipeline(fp, testcases)

printer.info(
f' Gitlab pipeline generated successfully '
f'in {options.ci_generate!r}.\n'
)
sys.exit(0)

if not options.run:
printer.error(f"No action specified. Please specify `-l'/`-L' for "
f"listing or `-r' for running. "
printer.error("No action option specified. Available options:\n"
" - `-l'/`-L' for listing\n"
" - `-r' for running\n"
" - `--ci-generate' for generating a CI pipeline\n"
f"Try `{argparser.prog} -h' for more options.")
sys.exit(1)

17 changes: 13 additions & 4 deletions reframe/frontend/dependencies.py
@@ -170,7 +170,8 @@ def validate_deps(graph):
if n in path:
cycle_str = '->'.join(path + [n])
raise DependencyError(
'found cyclic dependency between tests: ' + cycle_str)
'found cyclic dependency between tests: ' + cycle_str
)

if n not in visited:
unvisited.append((n, node))
@@ -212,6 +213,7 @@ def toposort(graph, is_subgraph=False):
'''
test_deps = _reduce_deps(graph)
visited = util.OrderedSet()
levels = {}

def retrieve(d, key, default):
try:
@@ -229,9 +231,15 @@ def visit(node, path):
path.add(node)

# Do a DFS visit of all the adjacent nodes
for adj in retrieve(test_deps, node, []):
if adj not in visited:
visit(adj, path)
adjacent = retrieve(test_deps, node, [])
for u in adjacent:
if u not in visited:
visit(u, path)

if adjacent:
levels[node] = max(levels[u] for u in adjacent) + 1
else:
levels[node] = 0

path.pop()
visited.add(node)
@@ -243,6 +251,7 @@ def visit(node, path):
# Index test cases by test name
cases_by_name = {}
for c in graph.keys():
c.level = levels[c.check.name]
try:
cases_by_name[c.check.name].append(c)
except KeyError:
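
The level assignment added to ``toposort`` above can be illustrated with a small standalone function; the graph below is hypothetical, and cycle detection is omitted since the real code validates the graph separately:

```python
def assign_levels(deps):
    '''Assign each node a level: 0 for nodes with no dependencies,
    otherwise one more than the maximum level of its dependencies.
    `deps` maps each node to the list of nodes it depends on.'''
    levels = {}

    def visit(node):
        if node in levels:
            return

        # DFS: resolve all dependencies before assigning this node's level
        adjacent = deps.get(node, [])
        for u in adjacent:
            visit(u)

        levels[node] = max((levels[u] for u in adjacent), default=-1) + 1

    for node in deps:
        visit(node)

    return levels


# 't2' depends on 't1', which depends on 't0'; 't3' is independent
levels = assign_levels({'t0': [], 't1': ['t0'], 't2': ['t1'], 't3': []})
```

These levels are exactly what the CI generator uses to place each test into a pipeline stage, so that a test's stage always comes after the stages of its dependencies.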
3 changes: 3 additions & 0 deletions reframe/frontend/executors/__init__.py
@@ -44,6 +44,9 @@ def __init__(self, check, partition, environ):
# Incoming dependencies
self.in_degree = 0

# Level in the dependency chain
self.level = 0

def __iter__(self):
# Allow unpacking a test case with a single liner:
# c, p, e = case
6 changes: 4 additions & 2 deletions requirements.txt
@@ -1,10 +1,12 @@
argcomplete==1.12.2
coverage==5.3
importlib_metadata==2.0.0
jsonschema==3.2.0
pytest==6.2.0
pytest-forked==1.3.0
pytest-parallel==0.1.0
coverage==5.3
PyYAML==5.3.1
requests==2.25.1
setuptools==50.3.0
wcwidth==0.2.5
argcomplete==1.12.2
#+pygelf%pygelf==0.3.6
32 changes: 32 additions & 0 deletions unittests/test_ci.py
@@ -0,0 +1,32 @@
# Copyright 2016-2021 Swiss National Supercomputing Centre (CSCS/ETH Zurich)
# ReFrame Project Developers. See the top-level LICENSE file for details.
#
# SPDX-License-Identifier: BSD-3-Clause


import io
import requests

import reframe.frontend.ci as ci
import reframe.frontend.dependencies as dependencies
import reframe.frontend.executors as executors
from reframe.frontend.loader import RegressionCheckLoader


def test_ci_gitlab_pipeline():
loader = RegressionCheckLoader([
'unittests/resources/checks_unlisted/deps_complex.py'
])
cases = dependencies.toposort(
dependencies.build_deps(
executors.generate_testcases(loader.load_all())
)[0]
)
with io.StringIO() as fp:
ci.emit_pipeline(fp, cases)
yaml = fp.getvalue()

response = requests.post('https://gitlab.com/api/v4/ci/lint',
data={'content': yaml})
assert response.ok
assert response.json()['status'] == 'valid'
21 changes: 21 additions & 0 deletions unittests/test_dependencies.py
@@ -804,6 +804,18 @@ def test_toposort(make_test, exec_ctx):
cases = dependencies.toposort(deps)
assert_topological_order(cases, deps)

# Assert the level assignment
cases_by_level = {}
for c in cases:
cases_by_level.setdefault(c.level, set())
cases_by_level[c.level].add(c.check.name)

assert cases_by_level[0] == {'t0', 't5'}
assert cases_by_level[1] == {'t1', 't6', 't7'}
assert cases_by_level[2] == {'t2', 't8'}
assert cases_by_level[3] == {'t3'}
assert cases_by_level[4] == {'t4'}


def test_toposort_subgraph(make_test, exec_ctx):
#
@@ -836,3 +848,12 @@ )
)
cases = dependencies.toposort(partial_deps, is_subgraph=True)
assert_topological_order(cases, partial_deps)

# Assert the level assignment
cases_by_level = {}
for c in cases:
cases_by_level.setdefault(c.level, set())
cases_by_level[c.level].add(c.check.name)

assert cases_by_level[1] == {'t3'}
assert cases_by_level[2] == {'t4'}