Bug: stats trim_mean #2729
Conversation
assert_equal(stats.trim_mean([5,4,3,1,2,0], 2/6.), 2.5)

# check axis argument
a = np.random.randint(20, size=(5, 6, 4, 7))
Same as in the levene PR: there is no random seed, but the test has to pass for any possible random draw.
That's true for almost all tests. The problem is that if the test doesn't pass and someone reports it, we don't know the seed, so the failure is hard to reproduce. Testing without seeds is for while the code is being developed.
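As an aside, a minimal sketch of what pinning the seed would look like for the array in the diff above (the seed value here is arbitrary, not from the PR):

```python
# Sketch: seeding the global RNG so a reported failure can be replayed.
# The shape mirrors the test array in the diff; the seed value is arbitrary.
import numpy as np

np.random.seed(1234)  # any failure report can now cite this seed
a = np.random.randint(20, size=(5, 6, 4, 7))
print(a.shape)  # (5, 6, 4, 7)
```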
Yes, but if there is a bug and the test fails, it is hard to find out which random permutation caused the failure.
So the question is about pre-empting problems when trying to reproduce test failures (for instance occurring on hardware which you do not have access to).
If you want to test several permutations, you can add a loop around the test.
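A sketch of such a loop, with seeded draws so each iteration stays reproducible; the invariant checked here (trimming nothing reduces to the plain mean) is a stand-in for the real assertions, not the PR's actual test:

```python
# Sketch: looping over several seeded draws, so multiple "permutations" are
# exercised while each individual failure stays reproducible from its seed.
import numpy as np
from scipy import stats

for seed in range(5):
    np.random.seed(seed)
    a = np.random.randint(20, size=(5, 6, 4, 7))
    # placeholder invariant: trimming 0% must equal the plain mean
    np.testing.assert_allclose(stats.trim_mean(a, 0.0, axis=0),
                               np.mean(a, axis=0))
```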
Changed.
However, the tests for stats.distributions are real random tests, and we introduced the seeds there to avoid random failures that are supposed to show up with some small probability. Reproducing test failures wasn't an argument for random seeds at the time.
In the cases here the test result is deterministic (it always passes), and something very weird would have to be going on on a platform to cause an error (for example, NaNs being introduced somewhere). Even in that case, I think it would still be useful to test different "permutations" of the input, and not just one fixed draw.
But that's not the purpose of the test here.
Thanks Josef, merged with minor tweaks in 6bb29aa
Fixes the bug for >1d arrays and adds an axis argument. Closes #2554.
Note: the existing unit tests used a sorted array, which didn't hit the sorting bug, so I reshuffled the numbers.
I tested this outside of the scipy source and then copied it into the source files.
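For illustration, a minimal sketch (reusing the values from the updated test above) of why the shuffled input matters: trim_mean has to sort along the axis before cutting the tails, so an already-sorted array cannot expose a missing sort.

```python
# Sketch: the shuffled test input from this PR. trim_mean sorts along the
# axis before trimming; a pre-sorted array would pass even if the sort were
# missing, so the shuffled order is what actually exercises the fix.
from scipy import stats

shuffled = [5, 4, 3, 1, 2, 0]
# trimming 2/6 from each tail keeps the sorted middle [2, 3] -> mean 2.5
print(stats.trim_mean(shuffled, 2/6.))  # 2.5
```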