Make testing more robust especially for random objects #1268
Is your feature request related to a problem? Please describe.
A lot of test failures seem to arise from random objects (as well as from low-level math issues in MKL and/or Cython).
Describe the solution you'd like
There are several options at hand.
stick to pytest and be creative
A possible fix may be to pre-generate random data that the tests then point at, or to fix the seeds of the random number generators (see the sketch below).
Pros: fast (?)
Cons: technical debt.
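As a concrete (but hypothetical) illustration of the fixed-seed route, the sketch below pins NumPy's legacy global RNG with an autouse pytest fixture; the fixture name, the seed value, and the test itself are invented for illustration and are not taken from QuTiP's suite.

```python
# A minimal sketch of the "fix the seeds" approach, assuming the code
# under test draws from NumPy's legacy global RNG.  All names and the
# seed value here are illustrative, not QuTiP conventions.
import numpy as np
import pytest

@pytest.fixture(autouse=True)
def fixed_seed():
    # Reseed before every test in this module, so random draws are
    # deterministic and any failure is reproducible.
    np.random.seed(42)

def test_random_matrix_is_reproducible():
    first = np.random.rand(4, 4)
    np.random.seed(42)  # rewind the stream to the fixture's seed
    second = np.random.rand(4, 4)
    assert np.allclose(first, second)
```

The technical debt is the magic constant: a single seed exercises only one path through the input space, and any pre-generated data stored in the repository goes stale as the code evolves.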
pytest-randomly plugin
pytest-randomly is a pytest plugin that addresses this kind of issue: it allows one to control `random.seed`, rather than `numpy.random.seed` (a short example follows below).
Pros: pytest plugin, supports doctests.
Cons: not super popular, not designed for NumPy.
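For a flavour of what the plugin buys, here is a hedged sketch; the test body is invented. pytest-randomly calls `random.seed()` before each test and reports the seed used, so a failing run can be replayed by passing the seed back, e.g. `pytest --randomly-seed=1234`.

```python
# Illustrative only: with pytest-randomly installed, random.seed() is
# called before each test, so the draws below are reproducible for a
# given --randomly-seed value.
import random

def test_random_values_are_in_range():
    # Deterministic per seed: a failure can be reproduced exactly by
    # re-running pytest with the seed printed in the session header.
    values = [random.random() for _ in range(100)]
    assert all(0.0 <= v < 1.0 for v in values)
```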
property-based testing with Hypothesis
Hypothesis is a library that, allegedly, aims to change the way tests are designed: one should go from testing a single instance to designing a test that applies to a whole domain of instances (property-based testing). It is not entirely clear to me right now; a concrete sketch follows below.
It contains various randomness-related features, including a seed function.
Pros: sounds powerful and clever, popular and growing, well documented, more robust even beyond these randomness problems.
Cons: radical change of testing framework (?), steep learning curve (?), overkill (?).
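To make "property-based" concrete, here is a sketch of what such a test could look like; the property (normalising a non-zero vector yields unit norm) and all names are invented for illustration, not taken from QuTiP's suite.

```python
# A sketch of a Hypothesis property-based test: instead of one
# hand-picked vector, Hypothesis generates many and shrinks any failing
# case to a minimal counterexample.
import numpy as np
from hypothesis import given, seed
from hypothesis import strategies as st

complex_amplitudes = st.lists(
    st.complex_numbers(max_magnitude=1e3, allow_nan=False, allow_infinity=False),
    min_size=1,
    max_size=8,
)

@seed(1234)  # optional: pin the generation for reproducibility
@given(complex_amplitudes)
def test_normalised_vector_has_unit_norm(amplitudes):
    vec = np.array(amplitudes, dtype=complex)
    norm = np.linalg.norm(vec)
    if norm == 0:  # Hypothesis may well generate the all-zero vector
        return
    assert np.isclose(np.linalg.norm(vec / norm), 1.0)
```

Hypothesis also keeps a local database of failing examples, so a failure found once is replayed on subsequent runs rather than disappearing with the next random draw.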
I also admit that I used nose until recently, and ran tests written with nose in mind under pytest, without taking advantage of pytest's full power.
Comments

"Assigned" this just to stimulate discussion...

I think those are certainly both solvable problems, and point 1 in particular is just a general improvement of usability. The second point is about designing the tests well, which again is certainly doable, but will take a while (it takes long enough just to refactor them, let alone totally rewrite large chunks of them!).

Hi @nathanshammah, could this issue perhaps be broken into sub-tasks in some way, to make starting work on it more feasible? I can also see it has been labelled a "good first issue", but it seems to me the definition of done with respect to the entire issue (i.e., what would a PR, or a set of PRs, that successfully addresses the problem entail?) could be clarified a bit further, and a breakdown into sub-tasks (which might itself result organically from further discussion) might help in that direction. Regarding possible approaches for handling randomness: I have to admit I've started looking into QuTiP only very recently, and I'm yet to familiarise myself with its more intricate details, run the full set of tests, and investigate what kinds of test failures occur. In the meantime, I'll share some (what I think is) relevant experience with handling randomness in tests: