Evaluate our use of `plan` and `done-testing` #85
Comments
I am a bit of a pragmatist with regard to `plan` vs `done-testing`; my views are:
With 2, maybe, if you're certain that you haven't made any mistakes when modifying the JSON. I think it's more accurate to say it's what you did do (for better or worse) rather than what you wanted to do. For 3, now that you mention it, I can see the value of that.
For 2 there is also the caveat that it depends on what the actual situation is. If the JSON used is a huge thing shared with other pieces of software and things could get mixed up, that's one thing; but if we're dealing with a simple file used only for the test at hand, it's a much simpler kettle of fish. For 3, yup, #88 is a good example of what I mean. It's nothing world-changing, but it's a mite nicer than the alternative.
As I am not very fond of telling people how to do things, nor am I in any sort of position to do so(!), I would like to have a discussion about how we should go forward with the use of `plan` and `done-testing` in tests, and hopefully we can decide on a standard way of using them.

In my opinion:
1. We should not include `done-testing`. `plan` already results in `done-testing` running. If `done-testing` is left in and somebody removes `plan` (e.g. they remove it while modifying a test, but forget to put it back in), Travis will not catch it and will report success. It might not happen, but it is a possibility. The Perl 6 documentation also recommends removing it: https://docs.perl6.org/language/testing#Test_plans
2. `plan` should use a hardcoded value. Similar to the above: if someone accidentally removes a test and it is missed in a review, Travis will not catch it and will report success. Again, it might not happen in future, but it has happened in the past. (And that is the story of how I ended up on this repository!)
3. `subtest`s should also each have a `plan`, as without one a subtest behaves similarly to one where `done-testing` had been used.

TL;DR: Give Travis the capacity to catch out the mistakes we make.
Does anyone have any disagreements with my opinions, or any suggestions otherwise? 🙂