
ci[cartesian]: Thread safe parallel stencil tests #1849

Merged
FlorianDeconinck merged 2 commits into GridTools:main from romanc:romanc/cartesian-thread-safe-parallel-tests on Feb 7, 2025
Conversation

@romanc
Contributor

@romanc romanc commented Feb 6, 2025

Description

To avoid repeating boilerplate code in testing, `StencilTestSuite` provides a convenient interface to test gtscript stencils.

Within that `StencilTestSuite` base class, generating the stencil is separated from running & validating the stencil code. Each derived test class ends up with two tests: one for stencil generation and a second one that tests the implementation by running the generated code with defined inputs and expected outputs.

The base class was written such that the implementation test would re-use the generated stencil code from the first test. This introduces an implicit test order dependency. To save time and avoid unnecessary test failure outputs, failing to generate the stencil code would automatically skip the implementation/validation test.

Running tests in parallel (with `xdist`) breaks the expected test execution order (in the default configuration). This leads to automatically skipped validation tests when the stencil code wasn't generated yet. On CI, we only run with 2 threads, so usually only a couple of tests were skipped. Locally, I was running with 16 threads and got ~30 skipped validation tests.

This PR proposes to address the issue by setting an `xdist_group` mark on the generation/implementation tests that belong together. In combination with `--dist loadgroup`, this keeps the expected order where necessary. Only tests with `xdist_group` markers are affected by `--dist loadgroup`; tests without that marker are distributed normally, as in `--dist load` mode (the default so far). By grouping by `cls_name` and backend, we keep maximal parallelization, grouping only the two tests that depend on each other.

Further reading: see the `--dist` section in the `pytest-xdist` documentation.
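As an illustration of the grouping pattern (a sketch, not the exact code in this PR; the suite and backend names are hypothetical), both tests of a pair receive the same `xdist_group` name derived from the test class and backend:

```python
import pytest


def group_name(cls_name: str, backend: str) -> str:
    # One group per (test class, backend) pair: only the two tests that
    # depend on each other are pinned to the same xdist worker, in order.
    return f"{cls_name}-{backend}"


# Hypothetical test pair; under `--dist loadgroup` both land on one worker.
@pytest.mark.xdist_group(name=group_name("TestHorizontalDiffusion", "numpy"))
def test_generation():
    pass  # would generate and cache the stencil


@pytest.mark.xdist_group(name=group_name("TestHorizontalDiffusion", "numpy"))
def test_implementation():
    pass  # would run the cached stencil against reference data
```

Running with something like `pytest -n auto --dist loadgroup` then keeps each marked pair ordered, while unmarked tests are still distributed freely as with `--dist load`.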

Requirements

  • All fixes and/or new features come with corresponding tests.
    Existing tests are still green. No more skipped tests \o/ Works as expected locally
  • Important design decisions have been documented in the appropriate ADR inside the docs/development/ADRs/ folder.
    N/A

@romanc romanc force-pushed the romanc/cartesian-thread-safe-parallel-tests branch from 6900575 to 8e8e497 Compare February 6, 2025 14:21
Contributor Author

@romanc romanc left a comment


some details inline

```diff
         d, generation_strategy_factories
     ),
-    implementations=[],
+    implementation=None,
```
Contributor Author


This might have been different in the past. The way we cache implementations now, there's only ever max one implementation per test context.

Comment on lines -437 to +451

```diff
-The generated implementations are cached in a :class:`utils.ImplementationsDB`
-instance, to avoid duplication of (potentially expensive) compilations.
+The generated implementation is cached in the test context, to avoid duplication
+of (potentially expensive) compilation.
+Note: This caching introduces a dependency between tests, which is captured by an
+`xdist_group` marker in combination with `--dist loadgroup` to ensure safe parallel
+test execution.
```
Contributor Author


This comment was out of date. There's no utils.ImplementationDB (anymore).

Comment on lines -464 to +478

```diff
-test["implementations"].append(implementation)
+assert test["implementation"] is None
+test["implementation"] = implementation
```
Contributor Author


Assert our assumption that we only ever cache one implementation per test context.

Comment on lines -591 to +614

```diff
-        implementation_list = test["implementations"]
-        if not implementation_list:
-            pytest.skip(
-                "Cannot perform validation tests, since there are no valid implementations."
-            )
-        for implementation in implementation_list:
-            if not isinstance(implementation, StencilObject):
-                raise RuntimeError("Wrong function got from implementations_db cache!")
+        implementation = test["implementation"]
+        assert (
+            implementation is not None
+        ), "Stencil not yet generated. Did you attempt to run stencil tests in parallel?"
+        assert isinstance(implementation, StencilObject)

-            cls._run_test_implementation(parameters_dict, implementation)
+        cls._run_test_implementation(parameters_dict, implementation)
```
Contributor Author


Simplified since we no longer have an array of implementations. Assert that the stencil code has been generated; if not, fail instead of skip. This leads to more errors in case code generation fails. In my opinion, the best way to handle this is to get rid of the idea of having two tests (one for codegen and one for validation) per class. We could achieve the same level of parallelization with less glue code if we had just one test per class (codegen and validation inside the same test).
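A hypothetical sketch of that suggestion, folding generation and validation into a single test body per class (the callables and names below are illustrative, not the suite's actual API):

```python
def run_stencil_case(generate, validate, parameters):
    """Generate the stencil and validate it inside one test.

    `generate` and `validate` are stand-ins for the suite's codegen and
    validation steps; combining them removes the implicit ordering
    dependency, so no xdist_group glue would be needed.
    """
    implementation = generate()
    # Fail (not skip) if code generation did not produce a stencil object.
    assert implementation is not None, "stencil code generation failed"
    return validate(implementation, parameters)
```

Each test class would then expose one test per backend calling something like `run_stencil_case`, keeping the same level of parallelism across (class, backend) pairs.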

@romanc romanc marked this pull request as ready for review February 6, 2025 14:34
@romanc
Contributor Author

romanc commented Feb 6, 2025

/cc @egparedes @havogt FYI

The previous one was from when I thought we had to run these tests
on one thread only.
Contributor

@FlorianDeconinck FlorianDeconinck left a comment


LGTM.

Contributor

@havogt havogt left a comment


lgtm

@FlorianDeconinck FlorianDeconinck merged commit 4b566d7 into GridTools:main Feb 7, 2025
30 checks passed
@romanc romanc deleted the romanc/cartesian-thread-safe-parallel-tests branch February 8, 2025 12:17
stubbiali pushed a commit to stubbiali/gt4py that referenced this pull request Aug 19, 2025

3 participants