ERF: Loosen pytest-seed and add a file for collection broken seeds #80
base: master
Conversation
@@ -0,0 +1,2 @@
+# This file is a collection of pytest-seeds that led to at least one of the
+# tests failing, and it provides a way to spot faulty functionality.
I would add a clear specification of how to store the seeds. So, we should first find a broken seed so that we can add an example here. Anyway, we should explain that a failing test can be reproduced like this:

pytest --seed=N mdp/test/test_XXX.py -k test_YYY

The values for N, XXX, and YYY can be read out of the pytest report after a failed test run.
Here is an example of a failing seed (from: 664d688):
pytest --seed=2133340156 mdp/test/test_GSFANode.py -k test_basic_GSFA_edge_dict
(note that to get a failure for this specific test you need to comment out the @pytest.mark.skip decorator in the file)
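For illustration, a minimal sketch of what such a guarded test looks like; the test name and skip reason here are hypothetical stand-ins, not the actual contents of mdp/test/test_GSFANode.py. Commenting out the decorator re-enables the test so the failing seed can be replayed:

```python
import pytest

# Hypothetical sketch: while this marker is present, pytest skips the test.
# Commenting out the decorator lets the test run, so a seed that triggers
# the failure can be reproduced with `pytest --seed=N ... -k test_name`.
@pytest.mark.skip(reason="known intermittent failure for some seeds")
def test_basic_GSFA_edge_dict_sketch():
    assert 1 + 1 == 2
```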
So maybe it is enough if we just add the above line to this file. When we discover other failing seeds/tests, we can add the lines to reproduce them here.
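As background, the reason a recorded seed makes a failure reproducible is that the test data is drawn from a seeded random generator, so the same seed yields the same data on every run. A stdlib-only sketch, in which the data-generation function is a hypothetical stand-in for whatever the seeded tests actually draw (the seed value is the one from the example above):

```python
import random

def generate_test_data(seed):
    # Hypothetical stand-in for the random data a seeded test would draw.
    # A generator initialized with the same seed yields identical values.
    rng = random.Random(seed)
    return [rng.random() for _ in range(5)]

# Replaying the recorded seed reproduces the exact same data, so a
# seed-dependent test failure can be triggered deterministically.
assert generate_test_data(2133340156) == generate_test_data(2133340156)
```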
Do you think extending the section on testing in the docs is a better place for this, or should it at least be considered additionally?
Yes, good idea! But in the developers guide, not in the install/testing section. That is not something users should care about.
See the development guide entry that describes how to handle failed tests.
The PR is an implementation of this suggestion.