Update test documentation #542
Conversation
Codecov Report
@@ Coverage Diff @@
## master #542 +/- ##
==========================================
- Coverage 98.07% 97.76% -0.31%
==========================================
Files 46 46
Lines 3795 3809 +14
==========================================
+ Hits 3722 3724 +2
- Misses 73 85 +12
Continue to review full report at Codecov.
The changes look okay... I'll have to take a look at why the doc build is failing.
Well, I've taken a look! It looks like it's going to work :)
The coverage decrease is probably the result of #543 getting merged in the background. Nothing to worry about.
The main remaining task is to add some information about fixtures that describes what they are and how they work, and provide some straightforward, minimal examples.
A few suggestions...
docs/development/testing_guide.rst
    (True, True),
    (False, True),
]
@pytest.mark.parametrize("truth_value, expected", truth_values_and_expected_results)
I think I actually prefer sticking the contents of this list right here rather than creating that lengthy variable name; unless that one is used somewhere else (which seems unlikely), it makes for less jumping through code looking for variables.
Also, I found a Stack Overflow answer on improving the way we previously did this kind of thing:
https://stackoverflow.com/questions/36749796/how-to-run-all-pytest-tests-even-if-some-of-them-fail
But it appears pytest-expect is unmaintained.
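As a sketch of the inlined style suggested above (the test function and parameters here are purely illustrative, not taken from the PR):

```python
import pytest

# Hypothetical parametrized test with the parameter list written directly in
# the decorator rather than bound to a lengthy module-level variable name.
@pytest.mark.parametrize("value, expected", [
    (3, 3),
    (-3, 3),
    (0, 0),
])
def test_abs(value, expected):
    assert abs(value) == expected
```

pytest collects this as three independent test cases, so a failure in one parameter set does not prevent the others from running.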
docs/development/testing_guide.rst
def test_trigonometry(input_tuple):
    plasmapy.utils.run_test(input_tuple)

This parametrized function will check that ``sin(0) == cos(π/2)` and
Missed a ` here and it broke everything afterwards!
It also seems not too clear from this description what the comparison mechanism for floats using this function is, or what accuracy we can expect... I think the reference to sin(0) == cos(π/2) has to go, as it's not what we're checking if the code underneath this uses absolute and relative tolerances.
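To illustrate the float-comparison point (this uses the standard library's `math.isclose`, not necessarily whatever `plasmapy.utils.run_test` does internally):

```python
import math

# cos(π/2) is not exactly zero in floating point, so a strict equality
# check against sin(0) fails:
assert math.sin(0) == 0.0
assert math.cos(math.pi / 2) != 0.0  # it is roughly 6.1e-17

# With an absolute tolerance the two values agree, which is the kind of
# check a tolerance-based comparison performs:
assert math.isclose(math.sin(0), math.cos(math.pi / 2), abs_tol=1e-12)
```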
docs/development/testing_guide.rst
expected outcome (which may be the returned `object`, a warning, or an
exception).

.. TODO: The following code still needs to be tested!!!!!
So what you're saying is... we need tests for our testing documentation? I guess this was an inevitability :D
Apart from proofreading, the only thing remaining is a discussion on fixtures. Given that I only understand the basics, I may give up for the moment and create an issue instead!
Looks good now! I'm not quite sure we need to document fixtures too much; pytest's documentation should be the place for that. Perhaps we could improve their docs a bit instead.
Or we could document them here once we actually start to use them.
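For reference, a minimal fixture of the kind being discussed might look like this (the fixture name and data are hypothetical; the pytest docs cover the details):

```python
import pytest

# pytest injects the fixture's return value into any test function that
# lists the fixture name as an argument.
@pytest.fixture
def sample_data():
    return [1, 2, 3]

def test_sum(sample_data):
    assert sum(sample_data) == 6
```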
@StanczakDominik - The fixtures section now contains a few sentences that refer to the pytest docs and say that we should use fixtures for more complex tests. I've made all the changes I've been wanting to make, so this is ready for review. Does anyone have any final suggestions?

I've come to appreciate squash merges for cases like this (thanks to @StanczakDominik and a few other people who brought this up in the last few months), and I plan to do that here. There isn't anything in the individual commits that is worth saving, and bunching them together will lead to a cleaner history. I generally prefer to do squash merges for my PRs myself, since that gives me a chance to write a detailed commit message with contextual information.
Looks good, guess I'll merge it now!
Woo! 🎉
Oh right, now I remember what I wanted to say.
If you ever get into a "PR is ready but waiting for review" situation and want to customize the eventual commit message, you could do a … Also note that e.g. pandas doesn't bother with commit descriptions. That might be a good idea given that the PR is linked. But yeah, the gist of what I wanted to say is: I'll wait for you to craft your commit message. :D
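For reference, one way to do the squash locally and write the commit message yourself, rather than through GitHub's "Squash and merge" button (the branch name here is hypothetical):

```shell
# Squash all commits from a feature branch into a single commit on master,
# then write a detailed commit message for it (branch name is hypothetical).
git checkout master
git merge --squash feature-branch
git commit -m "Update test documentation (#542)"  # or omit -m to open the editor
```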
Our test practices have been evolving and improving over the last several months, and this documentation aims to update our development guide to reflect those changes.
I am not entirely familiar with all of the aspects of our testing, so suggestions and contributions from others would be much appreciated!
Cross-references: #153, #197, #198, #246, #249, #278, #291, #490