TST/BUG: fix array API test skip decorators #19018
Conversation
Co-authored-by: Tyler Reddy <tyler.je.reddy@gmail.com>
@tylerjereddy CI is green, how does this look?
```python
if xp.__name__ == backend:
    pytest.skip(reason=reason)
return func(*args, xp, **kwargs)
return func(*args, **kwargs)
```
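For context, a minimal sketch of what a `functools.wraps`-based skip decorator in the style of this diff can look like. This is an assumption-laden illustration, not the code under review: it assumes the test receives the array namespace as an `xp` argument, and the skip-reason string is made up.

```python
import functools

import pytest


def skip_if_array_api_backend(backend):
    # Hypothetical sketch of a skip decorator in the style of the diff
    # above; assumes tests receive the array namespace as ``xp``.
    def wrapper(func):
        @functools.wraps(func)  # preserves the original signature for pytest
        def wrapped(*args, xp, **kwargs):
            if xp.__name__ == backend:
                pytest.skip(reason=f"do not run with array API backend: {backend}")
            return func(*args, xp=xp, **kwargs)
        return wrapped
    return wrapper
```

Because `functools.wraps` sets `__wrapped__`, `inspect.signature` (which pytest uses to resolve fixtures and parametrized arguments) sees the original test's parameters rather than `(*args, **kwargs)`.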
If I do this to test on this branch:
```diff
--- a/scipy/cluster/tests/test_vq.py
+++ b/scipy/cluster/tests/test_vq.py
@@ -282,7 +282,8 @@ class TestKMean:
     @array_api_compatible
     @skip_if_array_api_backend('numpy.array_api')
-    def test_kmeans2_rank1_2(self, xp):
+    @pytest.mark.parametrize("junk", [1, 2])
+    def test_kmeans2_rank1_2(self, xp, junk):
         data = xp.asarray(TESTDATA_2D)
         data1 = data[:, 0]
         kmeans2(data1, xp.asarray(2), iter=1)
```
and run:

```
python dev.py test -j 32 -b all -- -k "test_kmeans2_rank1_2"
```

I get `no tests ran in 18.76s`, which doesn't seem quite right--shouldn't we still run for torch here? This is particularly complex, because it looks like class `TestKMean` also has a class-level GPU skip. It also seems to me like we should report the skip(s) with reason(s) rather than "no tests ran."
If I delete the class-level skip for `TestKMean` in this branch and on `main`, the behavior in this branch is indeed superior to a cherry-pick of the same diff onto `main`, where the original problem of breaking `parametrize` returns:

```
TypeError: TestKMean.test_kmeans2_rank1_2() missing 1 required positional argument: 'junk'
```
So, this is a bit long-winded, but I think it may be important to consider three things here:
- This is probably a solid improvement (I'm slightly biased, obviously)
- It isn't the full story yet, and we should probably open an issue for the behavior of class-level + function-level custom decorators we add, though I'd also be inclined to avoid the class-level ones as much as possible...
- This is getting pretty complex to review and test locally already--I hate to say it, but I wonder if we're going to need tests for our new test infrastructure, for sanity reasons (this could maybe be discussed in the issue). It would still be messy because some devices aren't available on CI. Obviously, it would be nice if this infra could be maintained externally at the community level, but that's perhaps farther out...
> I get `no tests ran in 18.76s`, which doesn't seem quite right--shouldn't we still run for torch here?
I suspect that this is down to my use of `if torch.cuda.is_available():`. I hoped that this would not skip pytorch-cpu, but it seems it does. I don't know enough about torch to say how to skip only on GPU, then...
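One possibility (an untested sketch, not code from this branch): rather than asking whether a GPU exists at all, check which device torch would actually allocate a tensor on, since `torch.cuda.is_available()` is True on any machine with a usable GPU even when the tests run on CPU tensors. The decorator shape here mirrors the diff above but is otherwise an assumption.

```python
import functools

import pytest


def skip_if_array_api_gpu(func):
    # Hypothetical sketch: instead of ``torch.cuda.is_available()``,
    # inspect the device of a freshly created tensor, i.e. torch's
    # current default device, and skip only when that is a GPU.
    @functools.wraps(func)
    def wrapped(*args, xp, **kwargs):
        if xp.__name__ == "torch":
            import torch
            if torch.empty(0).device.type == "cuda":
                pytest.skip(reason="do not run on GPU")
        return func(*args, xp=xp, **kwargs)
    return wrapped
```

With this check, pytorch-cpu runs would not be skipped even on a machine that has CUDA installed, unless the default device has been switched to a GPU (e.g. via `torch.set_default_device`).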
Hmm, maybe not--I still get `no tests ran` with `-b numpy` rather than `-b all`. I'm still not confident in `if torch.cuda.is_available():` though.
I think we can probably agree this is getting confusing though.
The decorator of this branch in combination with the class-level skip is completely broken: `python dev.py test -b all -t scipy/cluster/tests/test_vq.py::TestKMean` gives

```
========================= no tests ran in 0.04s =========================
ERROR: not found: /home/lucas/dev/myscipy/build-install/lib/python3.10/site-packages/scipy/cluster/tests/test_vq.py::TestKMean
(no name '/home/lucas/dev/myscipy/build-install/lib/python3.10/site-packages/scipy/cluster/tests/test_vq.py::TestKMean' in any of [<Module test_vq.py>])
```
Yeah, I was getting a bit worried that we'll end up going in circles because we'll fix N behaviors and cause just one other thing to break instead, so that's why the "testing the tests" idea is suggested, but ugh.
For what it's worth, class-level skips with `skip_if_array_api_backend` are currently completely broken on `main` too, giving the same error as above. It seems that the decorators (other than the faulty GPU skip on `main`) are written to only wrap functions, not classes. Can we modify these decorators to wrap classes too, or would a separate decorator be needed?
```python
    if torch.cuda.is_available():
        pytest.skip(reason=reason)
    return func(*args, **kwargs)
return wrapped
```
`python dev.py test -j 32 -b all -s cluster` does seem a bit more sensible here vs. `main`, with 50 tests skipped on this branch vs. 389 on `main`.
Strangely, while I get 389 skips on `main`, I only get 32 skips here (with 110 passes). Could another dev run this so we can see which number is the odd one out?

Edit: the difference may be that I am not running PyTorch GPU.
I'll leave this open for a bit if that's "ok"; I just want perhaps one other core dev to read my comment about the complexities above, to make sure I'm not misunderstanding, etc., and so folks are on the same page.
I think it would be sensible for now to remove the class-level skips. More testing of PyTorch CPU/GPU is needed to check whether this is doing what we want it to, but if we can get that working and remove the class-level skips, then I think that this will be looking like a decent improvement.
[skip cirrus] [skip circle]
Just pushed a commit where the …
```diff
@@ -180,23 +228,3 @@ def check_fpu_mode(request):
 SCIPY_HYPOTHESIS_PROFILE = os.environ.get("SCIPY_HYPOTHESIS_PROFILE",
                                           "deterministic")
 hypothesis.settings.load_profile(SCIPY_HYPOTHESIS_PROFILE)
-
-
-def skip_if_array_api_backend(backend):
```
This function is moved above the `hypothesis` stuff from #18927 so that the array API stuff is not split on either side.
Noting that this PR is related to #18668 (comment) from @rgommers, which is concerned with the fact that …

Perhaps we want to have a decorator which skips every backend apart from …

Some notes from @tupui about …

It may be an improvement for …

These changes could be considered in this PR, but it might be better to leave them for a follow-up.
I've already found this to be a hindrance: it makes it harder to make changes and then run all tests with a single test command. Imho this isn't worth doing for the small-ish gains in test runtime; there are better ways of speeding up the test suite. In addition, it's extra boilerplate, so I'd still prefer to simply remove this decorator.
Agreed to do this in a separate PR.
@tylerjereddy want to hit the green button if you are happy with this?
Yeah, I think it is a step in the right direction.
Looks like a good improvement to me 👍 thanks @lucascolley (will let Tyler merge)
Reference PR

Fixes decorators which were introduced in #18668.

What does this implement/fix?

- `functools.wraps` is introduced to `skip_if_array_api_backend` so that it can be used as a decorator together with `pytest.mark.parametrize`.
- `skip_if_array_api_gpu` is rewritten in a similar style to `skip_if_array_api_backend` so that it functions properly. Previously, when used with `python dev.py test -b all`, decorated tests would be skipped on every backend, not just those which use the GPU.

Additional information
The discussion leading up to this PR occurred in the comments of #19005.

If there is a CI fail due to `# type: ignore[misc]`, perhaps this can be removed now?

done.