
Evaluate our use of plan and done-testing #85

Closed · m-dango opened this issue Feb 1, 2017 · 3 comments

m-dango (Member) commented Feb 1, 2017

As I am not very fond of telling people how to do things, nor am I in any position to do so(!), I would like to open a discussion about how we should go forward with the use of plan and done-testing in tests, in the hope that we can settle on a standard way of using them.

In my opinion:

  1. We should not include done-testing.
    plan already results in done-testing running. If done-testing is left in and somebody later removes plan (e.g. they take it out while modifying a test and forget to put it back), Travis will not catch the missing plan and will still report success. It may never happen, but it is a possibility. The Perl 6 documentation also recommends removing done-testing when a plan is given: https://docs.perl6.org/language/testing#Test_plans

  2. plan should use a hardcoded value.
    Similar to the above. If someone accidentally removes a test and it is missed in review, Travis will not catch it and will report success. Again, it might not happen in the future, but it has happened in the past (and that is the story of how I ended up on this repository!).

  3. subtests should also each have a plan, as a subtest without one behaves as if done-testing had been used. (The sketch after this comment illustrates all three points.)

TL;DR: give Travis the capacity to catch the mistakes we make.

Does anyone have any disagreements with my opinions, or any suggestions otherwise? 🙂
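A minimal sketch of what points 1–3 could look like in practice; the file and its tests are invented purely for illustration and are not taken from the repository:

```perl6
use v6;
use Test;

# Hardcoded top-level plan: two plain tests plus one subtest (points 1 and 2).
plan 3;

is 'hello'.uc, 'HELLO', 'uppercases a word';
is 'HELLO'.lc, 'hello', 'lowercases a word';

subtest 'empty strings', {
    # The subtest carries its own plan as well (point 3).
    plan 2;
    is ''.uc, '', 'uppercasing an empty string';
    is ''.lc, '', 'lowercasing an empty string';
};

# No done-testing here: the plan already closes the run, and if a test were
# deleted by accident the count mismatch would make Travis fail the build.
```

If either top-level test were removed without updating plan 3, the run would end with a plan-mismatch diagnostic and a failing exit status, which is exactly the safety net described above.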

yanick (Contributor) commented Feb 2, 2017

I am a bit of a pragmatist with regard to plan vs done-testing; my views are:

  1. if you know the number of tests, put it in the plan. If you have a huge test file where funny things like a non-deterministic number of tests can happen, a stated test count will also help you. If the test file has only a handful of tests, done-testing is already good enough; a specific test plan is a cherry on top.

  2. If the number of tests is driven by a JSON file, plan 0+@testcases is better than a hardcoded plan 13, because the former always reflects what you want to do, while the latter will bite you in the beep fairly often, unless you have a better short-term memory than I do. ;-) (There is a sketch of this style after this list.)

  3. done-testing can be nice to have when there are a bunch of utility functions at the end of the file, as it's used as a visual marker of "tests are done now, you can stop reading".

  4. for subtests, rule 1 applies with s/file/subtest/.

  5. considering that in most cases plan and/or done-testing will work just fine, using either one or both is, as far as I'm concerned, a question of personal style. I acknowledge that if someone removes a test wholesale, done-testing will not complain, and that could be an argument for fairly large test files. But for the test files we have here, it's (in my opinion) inconsequential.

  6. I should probably specify that while I am not convinced of the supremacy of a test plan over done-testing, I don't have any problem at all with somebody who cares more about it revisiting and uniformizing the test cases later on. I have enough of an opinion to do what I think makes sense to me, but I'm flexible enough to let peeps tweak the given code to satisfy their own personal itch. :-)
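A sketch of the data-driven style described in points 2 and 3 above; the file name cases.json, the input/expected/description keys, the normalise helper, and the use of JSON::Fast are all assumptions made for illustration:

```perl6
use v6;
use Test;
use JSON::Fast;   # assumed available; any JSON module providing from-json would do

# Hypothetical case file: each entry is a hash of input, expected, description.
my @cases = from-json('cases.json'.IO.slurp);

# The plan is derived from the data, so adding or removing a case in the JSON
# keeps the plan in sync instead of requiring a manual count.
plan +@cases;

for @cases -> %case {
    is normalise(%case<input>), %case<expected>, %case<description>;
}

# Redundant once a plan is set, but handy as a visual "tests end here" marker
# when helper routines live below it.
done-testing;

# -- helpers --
sub normalise(Str $s) { $s.trim.lc }
```

Here done-testing adds nothing to the TAP output once plan has run, but it marks where the tests stop and the utility routines begin, which is the role described in point 3.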

m-dango (Member, author) commented Feb 4, 2017

For 2, maybe, if you're certain that you haven't made any mistakes when modifying the JSON. I think it's more accurate to say the plan reflects what you did do (for better or worse) rather than what you wanted to do.

For 3, now that you mention it, I can see the value of done-testing in some cases, particularly with the changes I'm making re: #87

yanick (Contributor) commented Feb 5, 2017

For 2 there is also the caveat that it depends on what the actual situation is. If the JSON is a huge thing used by other pieces of software and things could get mixed up, yeah, that's one thing. But if we're dealing with a simple file used only for the test at hand, it's a much simpler kettle of fish.

For 3, yup, #88 is a good example of what I mean. It's nothing world-changing, but it's a mite nicer than a # ----------- line.

m-dango closed this as completed Mar 16, 2017