Add docs for test_resumed_cancelled_handle_compute #6905
base: main
Conversation
I also added a minimal example using the WSM itself. It does not cover exactly the same ground; in particular, the scheduler interaction is not simulated in this minimal example, which is why I would prefer keeping both. The WSM test is, however, much more straightforward.

@pytest.mark.parametrize("fail", [True, False])
def test_executing_cancelled_fetch_executing(ws, fail):
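For readers unfamiliar with the pattern, the parametrized test above runs once per parameter value. A minimal, self-contained sketch of the mechanism (the `outcome` helper is hypothetical and only stands in for running a task):

```python
import pytest

def outcome(fail: bool) -> str:
    # Hypothetical stand-in for executing a task: it either errs or
    # finishes depending on the flag, mirroring the `fail` parameter above.
    return "erred" if fail else "finished"

# parametrize generates one test case per value, so both the failing and
# the succeeding branch are covered by a single test definition.
@pytest.mark.parametrize("fail", [True, False])
def test_outcome(fail):
    assert outcome(fail) == ("erred" if fail else "finished")
```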
TODO: This actually fails when using the ws_with_running_task fixture because we're not releasing a resource properly. Will try again after #6699.
Could you move this test to test_cancelled_state.py?
I'd rather have tests grouped by what they test, not by how they test it.
I already added a -m workerstate marker to pytest to run all WorkerState tests throughout the whole test suite (#6706) for this purpose.
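As a sketch of how such a marker attaches (the test body is illustrative, and the `workerstate` marker name is assumed to be registered in the project's pytest configuration to avoid warnings):

```python
import pytest

# Marking a test lets `pytest -m workerstate` select it no matter which
# module it lives in, which is what allows grouping tests by what they
# test while still running all WorkerState tests in one pass.
@pytest.mark.workerstate
def test_ws_example():
    assert True

# The decorator records the mark on the function object itself.
mark_names = [m.name for m in test_ws_example.pytestmark]
```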
Unit Test Results
See the test report for an extended history of previous test failures. This is useful for diagnosing flaky tests.
15 files ±0   15 suites ±0   6h 30m 36s ⏱️ +3m 45s
For more details on these failures and errors, see this check.
Results for commit 9af7fca. ± Comparison against base commit 61fca1c.
assert len(instructions) == 1
assert instructions[0] == Execute(key="x", stimulus_id="s1")
Suggested change:
-assert len(instructions) == 1
-assert instructions[0] == Execute(key="x", stimulus_id="s1")
+assert instructions == [Execute(key="x", stimulus_id="s1")]
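The suggested style compares the whole list in one expression. A self-contained illustration with a stand-in dataclass (this `Execute` is a mimic for the purpose of the example, not the real distributed class):

```python
from dataclasses import dataclass

@dataclass
class Execute:
    # Illustrative stand-in for the Execute instruction; dataclasses
    # compare field-by-field, which is what makes list equality work.
    key: str
    stimulus_id: str

instructions = [Execute(key="x", stimulus_id="s1")]

# One assertion checks length and contents together, and on failure
# pytest prints both lists side by side instead of failing piecemeal.
assert instructions == [Execute(key="x", stimulus_id="s1")]
```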
assert len(instructions) == 1
assert isinstance(instructions[0], TaskErredMsg)
Suggested change:
-assert len(instructions) == 1
-assert isinstance(instructions[0], TaskErredMsg)
+assert instructions == [TaskErredMsg.match(key="x", stimulus_id="s6")]
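The `.match(...)` helper used in the suggestion presumably compares equal to any message carrying the given attributes. A rough sketch of that idea with a stand-in class (the real `TaskErredMsg` in distributed has more fields; this mimic and its matcher are only illustrative):

```python
from dataclasses import dataclass

@dataclass
class TaskErredMsg:
    # Stand-in for distributed's TaskErredMsg; illustrative only.
    key: str
    stimulus_id: str

    @classmethod
    def match(cls, **kwargs):
        # Hypothetical matcher: compares equal to any instance whose
        # listed attributes have the given values.
        class _Matcher:
            def __eq__(self, other):
                return isinstance(other, cls) and all(
                    getattr(other, k) == v for k, v in kwargs.items()
                )
        return _Matcher()

msgs = [TaskErredMsg(key="x", stimulus_id="s6")]
# The dataclass returns NotImplemented when compared to a foreign type,
# so Python falls back to the matcher's __eq__, which checks only the
# named attributes.
assert msgs == [TaskErredMsg.match(key="x", stimulus_id="s6")]
```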
assert len(instructions) == 1
assert isinstance(instructions[0], TaskFinishedMsg)
Suggested change:
-assert len(instructions) == 1
-assert isinstance(instructions[0], TaskFinishedMsg)
+assert instructions == [TaskFinishedMsg.match(key="x", stimulus_id="s6")]
cosmetic
Shouldn't the new test also factor in the …
Came up during review of #6699