
ci: deactivate a randomly failing answer test #3068

Merged

Conversation

neutrinoceros
Member

PR Summary

This one answer test has been failing randomly in all PRs since the release of numpy 1.20.0 a week ago.
I suggest deactivating it, at least temporarily (I added a comment in the code to explain the situation; see the sketch below).
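
For illustration, a minimal sketch of what the deactivation could look like, assuming a pytest-style test; the test name, decorator, and framework here are hypothetical stand-ins, not yt's actual answer-testing machinery:

```python
import pytest

# Hypothetical stand-in for the affected answer test; the real test lives
# in yt's answer-testing suite and may use a different framework.
@pytest.mark.skip(
    reason="Answer changed with numpy 1.20.0; failing randomly in CI. "
    "Re-enable once yt-project/yt#3069 is resolved."
)
def test_flaky_answer():
    ...
```

The key point is recording the tracking issue in the skip reason, so the deactivated test stays discoverable.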

@neutrinoceros added the bug, infrastructure, and tests: running tests labels and removed the infrastructure label on Feb 8, 2021
@brittonsmith
Member

I'm worried that this will result in the test never getting fixed. There should probably at least be an issue (maybe even a blocking one) so it doesn't get forgotten.

@neutrinoceros
Member Author

Fair point; here you go: #3069

@munkm
Member

munkm commented Feb 9, 2021

OK, I've added the 4.0 label to that issue so we make sure to address it before the release.

I do worry about deactivating this test -- we might introduce other errors while it's off. That said, until this is fixed we might attribute any new failure to the known random one anyway, so I'm really not sure of the best approach here.

@neutrinoceros
Member Author

Right now it seems to me the test is mostly catching external errors outside of our control 🤷🏻‍♂️, but I agree; I'm not happy with the solution either.

@cphyc
Member

All my PRs are failing randomly because of this bug... I am definitely in favour of having it silenced for the moment until we figure out a solution. Given that an issue has been filed, I think it is safe to merge this one.

@cphyc
Member

cphyc commented Feb 10, 2021

Also, @munkm, the bug is actually a pixel-wise difference in an answer test, and both the old and new answers seem sensible to me. I guess the issue is that we can't have both answers at the same time, unless we pin a particular version of numpy, can we?
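
One conceivable way to keep both answers, as a pure sketch (the keys and hash values below are made-up placeholders, and this is not how yt's answer store actually works): record one expected answer per numpy era and select at runtime.

```python
import numpy as np
from packaging.version import Version

# Hypothetical per-numpy-era answers; placeholder values, not real hashes.
EXPECTED = {
    "old": "old-answer-hash",  # produced by numpy < 1.20
    "new": "new-answer-hash",  # produced by numpy >= 1.20
}

def expected_answer():
    """Select the stored answer matching the installed numpy."""
    era = "new" if Version(np.__version__) >= Version("1.20") else "old"
    return EXPECTED[era]
```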

@neutrinoceros
Member Author

We could maybe adjust the RMS tolerance, but I don't think it's a priority right now.
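
For reference, a minimal sketch of what an RMS-tolerance image comparison could look like; the helper names and the tolerance value are assumptions for illustration, not yt's actual answer-testing API:

```python
import numpy as np

def rms_difference(a, b):
    """Root-mean-square difference between two image arrays."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return np.sqrt(np.mean((a - b) ** 2))

# Hypothetical tolerance: loose enough to absorb numpy-1.20 pixel drift,
# tight enough to still catch genuine regressions.
RMS_TOL = 1e-3

def assert_images_close(actual, expected, tol=RMS_TOL):
    assert rms_difference(actual, expected) <= tol
```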

@munkm
Member

munkm commented Feb 10, 2021

> I guess the issue is that we can't have both answers at the same time, unless we pin a particular version of numpy, can we?

Yeah, I don't think we can have both? Or rather, I don't think it's worth the hassle of figuring out answers for both versions right now. I will merge this, and hopefully we can reactivate the test soon.

@munkm munkm merged commit 5df2629 into yt-project:main Feb 10, 2021
@neutrinoceros neutrinoceros deleted the deactivate_randomly_failing_test branch February 10, 2021 20:39