I see in the buster-test.js file that tests are randomly ordered each time a test case is run. Perhaps there is a reason for this, but with a simple browser test page it means the tests in the report are reordered on every run. When running a long list of tests repeatedly, the eye cannot use physical location on the page or screen to check when one test function changes from red to green. It is more work to look through the list and see what is passing and failing each time, because the list reorders each time.
If it is necessary to randomly order the tests when running them, would it be possible to restore a constant order in the report?
Hmm, I haven't really been bothered by this because I typically use the "dots" reporter on the command line (the clue being that it doesn't show the test names for passing tests). I can see how the random reporting can be confusing.
Random test order is enforced to help you avoid writing tests that implicitly depend on each other (e.g. by depending on state from previous tests).
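As a framework-agnostic illustration (hypothetical names, not Buster's actual API), here are two tests that share mutable state and therefore implicitly depend on execution order: both pass in source order, but reversing the order makes one fail even though nothing is broken.

```javascript
// Each suite instance shares a mutable counter between its "tests".
function makeSuite() {
    var counter = 0;
    return {
        testStartsAtZero: function () {
            return counter === 0; // only true if testIncrement has not run yet
        },
        testIncrement: function () {
            counter += 1;
            return counter === 1; // only true on the first increment
        }
    };
}

// Source order: both pass.
var s1 = makeSuite();
console.log(s1.testStartsAtZero(), s1.testIncrement()); // true true

// Reversed order: testStartsAtZero now fails spuriously.
var s2 = makeSuite();
console.log(s2.testIncrement(), s2.testStartsAtZero()); // true false
```

Random ordering surfaces this kind of hidden coupling instead of letting it pass silently in the one order the author happened to write.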
Random test order and non-random reporting could be done, I guess, but not without sacrificing the "live" reporting. Another approach is to allow configuration of random runs (which I've resisted in the past, but now see a good enough argument for). Which one's better?
I think random test ordering, live reporting, and non-random reporting are possible. The test cases and suites could be interleaved in order in the HTML of the visible report as they are completed.
It might be difficult to implement so that the reporter retains the order of the tests and test cases as they were added in source (and in fact some browsers may not even retain that order in the objects containing the tests) but I think it would be reasonably easy to output things in an alphabetized order in the reporter.
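The alphabetized approach could be sketched roughly like this (a hypothetical illustration with made-up names, not Buster's reporter code): the reporter collects completed results and sorts them by name before rendering, so the displayed order is stable across runs even though execution order is random.

```javascript
// Sort completed test results alphabetically by name for stable display.
// Does not mutate the input array (execution order may still be useful).
function sortedResults(results) {
    return results.slice().sort(function (a, b) {
        return a.name < b.name ? -1 : a.name > b.name ? 1 : 0;
    });
}

var completed = [
    { name: "zebra test", passed: true },
    { name: "apple test", passed: false }
];
console.log(sortedResults(completed));
// → "apple test" first, "zebra test" second, regardless of run order
```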
Of course, you're right. I'm currently deep down in code trying to position and reflow text in a terminal using ANSI escape sequences. The mere thought of non-linear printing gives me the shivers right now :)
Ok, we'll do alphabetic printing in the HTML report then, good idea.
By alphabetizing, I suppose there is loss of information. If tests fail when run in one order but not in another, then there is no indication of the order they were run in for any particular report. This would make debugging more difficult.
It may be best just to close this ticket as "wontfix".
There are (unfortunately) already some issues related to potential test ordering bugs; Buster currently provides no way of re-running a given ordering. One suggested fix to that is to return a seed value of some sort that can be used to re-run with the same ordering. However, I feel that the problem is somewhat overstated - with random ordering, wouldn't you catch an ordering issue roughly as you introduced it? Thus being able to fix it promptly.
Would adding a run number of sorts to the test name help? "the thing does the thingamagiggy (2)"
"the thing does the thingamagiggy (2)"
Even though random ordering is causing a bit of inconvenience, I think the gain (stronger test suites) is worth it.
I can see random ordering of tests as a benefit worth having, but it can also make failing tests hard to track down. For example, in a large test suite there might be one bad ordering out of hundreds of permutations. If this bad ordering occasionally turns up in the continuous integration loop, someone else in the company may complain that you are committing failing code. The difficulty of finding the actual problem, combined with the occasional recurring test failure, could damage one's reputation for doing a good job. Having an optional seed to make the order repeatable would help with this. The seed used (whether specified or generated) could be included in the report, so that after a failed integration email arrives, the seed could be used locally to reproduce the problem and fix it once and for all.
Random ordering means it could take arbitrarily many test runs to hit a given ordering bug - so no, you would not necessarily trigger the error as you introduce it. And without a seed, you would not be able to verify the fix either.
We'll implement the seed.
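A seeded ordering of this sort could be sketched as follows (a hypothetical illustration, not Buster's actual implementation): a tiny linear congruential generator drives a Fisher-Yates shuffle, so the same seed always reproduces the same test order.

```javascript
// Deterministic pseudo-random generator seeded with a 32-bit integer.
function seededRandom(seed) {
    var state = seed >>> 0;
    return function () {
        // LCG step (Numerical Recipes constants), kept in 32-bit range
        state = (state * 1664525 + 1013904223) >>> 0;
        return state / 4294967296; // uniform in [0, 1)
    };
}

// Fisher-Yates shuffle using the seeded generator; returns a new array.
function shuffle(items, seed) {
    var rand = seededRandom(seed);
    var result = items.slice();
    for (var i = result.length - 1; i > 0; i--) {
        var j = Math.floor(rand() * (i + 1));
        var tmp = result[i];
        result[i] = result[j];
        result[j] = tmp;
    }
    return result;
}

// The same seed always yields the same ordering, so printing the seed
// in the report lets a failing run be reproduced exactly.
var order1 = shuffle(["a", "b", "c", "d"], 42);
var order2 = shuffle(["a", "b", "c", "d"], 42);
// order1 and order2 are identical
```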
Hi Christian :)
Was there a fix for this at all? There are many reasons for both arguments, I think a configurable switch would cater to both. I'm also baffled as to how the specs get out of order when none are async?
It would be great if the random ordering could be disabled (or the seeding implemented). This is probably the most irritating "feature" in Buster compared to other test frameworks. I have hundreds of tests in my current project, and the fact that it fails roughly 1 out of 50 runs in our CI environment, with no way to repeat the failure, is time-consuming.
-S/--random-seed was implemented in Buster 0.7