
RFC 28: setup({single_test: true}) opt-in


Single-page tests are simple testharness.js tests that just do assertions and finally call done(). They reduce boilerplate, as no test() or async_test() is needed. Callbacks also need not be wrapped with step_func(), as any error simply fails the test. See simple sync test and simple async test for real examples.

Unfortunately, the rules for single-page tests are subtle and many tests accidentally enter this mode. This leads to less consistent results cross-browser, see FileAPI example. There are ~130 intentional single-page tests and ~640 tests that accidentally trigger the mode in one or more browsers.

Introduce setup({single_test: true}) as a required opt-in for single-page tests.

Example usage:

  setup({single_test: true});
  addEventListener('someevent', () => {
    // Any assert failure or uncaught error fails the single test.
    assert_true(someCondition);
    done();
  });

The changes will be rolled out as follows:

  • Support setup({single_test: true}) as an additional trigger for the single-page test mode.
  • Add setup({single_test: true}) to the ~130 tests already identified.
  • Handle any remaining tests that reference the global done but use neither setup({explicit_done: true}) nor setup({single_test: true}).
  • Update any of the ~640 accidental single-page tests that would regress or fail in a less clear way in the final step.
  • Wait for the above changes to roll into vendor repos and for additional tests there to be updated.
  • Make it an error to call done() or make any assertion before any test has been defined.
  • Remove the pre-existing triggers for single-page tests, leaving only setup({single_test: true}).
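The step of finding tests that reference the global done() but use neither opt-in could be approximated with a check along these lines (a rough pattern-matching sketch, not the real audit tooling, which would need to handle scripts more carefully):

```javascript
// Heuristic: a test source needs attention if it calls done() but opts
// into neither explicit_done nor single_test. Pattern-matching only.
function needsOptIn(source) {
  const callsDone = /\bdone\s*\(/.test(source);
  const hasOptIn = /explicit_done\s*:\s*true/.test(source) ||
                   /single_test\s*:\s*true/.test(source);
  return callsDone && !hasOptIn;
}
```

For example, `needsOptIn("assert_true(x); done();")` flags the test, while a source containing `setup({single_test: true})` does not.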

The current rules for single-page tests are a bit subtle. They are triggered by one of the following happening before any test is explicitly defined:

  • Using any assert method
  • Calling the global done()
  • An uncaught exception or unhandled rejection occurs
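These triggers can be modeled as a tiny state machine (a toy model for illustration, not the actual harness code): any of the three events flips the harness into single-page mode, but only if no test has been defined yet.

```javascript
// Toy model of the current implicit opt-in.
function createHarness() {
  return {testsDefined: 0, singlePageMode: false};
}
// trigger is one of 'assert', 'done', 'error'
function maybeEnterSinglePageMode(harness, trigger) {
  if (harness.testsDefined === 0) harness.singlePageMode = true;
  return harness.singlePageMode;
}

const h1 = createHarness();
maybeEnterSinglePageMode(h1, 'error');  // uncaught error before any test

const h2 = createHarness();
h2.testsDefined = 1;                    // a test() was defined first
maybeEnterSinglePageMode(h2, 'done');   // does not enter the mode
```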

A problem with the current setup is that uncaught errors are common in tests not intended to be single-page tests, and manifest as a bogus failing subtest; see FileAPI example. In fact, this appears to be much more common than real single-page tests.

Another issue is that in dedicated and shared workers, done() has to be called, and the generated done() for any.js tests is called even if the test script failed. This explains some of the results seen in the following.
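The worker behavior can be illustrated with a toy model of the generated any.js wrapper (a sketch of the described behavior, not the actual generated code):

```javascript
// Toy model: the generated wrapper calls done() unconditionally, so the
// single-page test can "complete" even when the test script itself threw.
function runGeneratedWrapper(testScript, harness) {
  try {
    testScript();
  } catch (e) {
    harness.errors.push(String(e));
  }
  harness.doneCalled = true;  // generated done(): runs even after a failure
  return harness;
}

const workerHarness = runGeneratedWrapper(() => {
  throw new Error('unsupported API');
}, {errors: [], doneCalled: false});
```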

Survey of existing tests

A testharness.js change was prepared to find which conditions were hit: commit · runs / results · script · raw stats.


  browser           assert   done   error
  chrome-dev            20     93      16
  chrome-stable         20     92      72
  edge-canary           18     94      69
  edge-dev              18     94       8
  firefox-nightly       22     88     305
  firefox-stable        22     88     360
  safari-preview        16     85     557
  safari-stable         16     85     646
  all                   24    104     648

Most of these tests will both assert something and eventually call done(). The categorization depends on details (timing) of how this was measured and so isn't very interesting, but:

  • All 24 "assert" cases were checked and confirmed to be async tests with a done() somewhere in the test.
  • The "done" cases are sync tests or async tests where all asserts happen right before the done() call.

The "error" case is usually hit when the test tries to use some API that isn't supported in the browser under test. That's why there are more of these in stable browsers. Most of these tests aren't intended to be single-page tests.

A few tests hit the "assert" or "done" condition in one browser but the "error" condition in another; test-analyser-minimum.html is a good example of a test that could be impacted by a change.


In summary:

  • There aren't very many single-page tests right now
  • The accidents far outnumber the intentional uses


Single-page tests are intended to have as little boilerplate as possible, based on feedback from Gecko developers familiar with Mochitest; see test example. By requiring both setup({single_test: true}) and done() even for sync tests where SimpleTest.js requires neither, some Gecko engineers might prefer to use Mochitest instead of WPT. However, a single sync test can also be written by just wrapping the code in test().
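For comparison, a boilerplate-light sync test without the single-page mode just wraps the body in test(). The stand-ins below mimic the harness so the sketch runs standalone; real tests get these globals from testharness.js.

```javascript
// Stand-in for testharness.js's test(): runs the body, records a result.
function test(fn, name) {
  try {
    fn();
    return {name, status: 'PASS'};
  } catch (e) {
    return {name, status: 'FAIL', message: String(e)};
  }
}
function assert_equals(actual, expected) {
  if (actual !== expected) throw new Error(`${actual} !== ${expected}`);
}

// One extra wrapper line compared to a single-page test, but no opt-in
// and no done() needed.
const result = test(() => {
  assert_equals(2 ** 10, 1024);
}, 'exponentiation works');
```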

Updates to testharness.js haven't always been made together with the test updates in all browser engine repos. If this is still the case, the testharness.js changes have to be made first and synced downstream before any test changes are made.

It's difficult to identify with certainty all single-page tests that exist and need to be updated. They will be identified by custom test runs as above and by searching for uses of done() in the repo. Finally, results before and after the changes need to be carefully compared.

The new failure mode of tests that were accidentally single-page tests should be no less obvious than before. Some tests may need to be updated to ensure this.

Alternatives considered

Just fix the tests

The accidental single-page tests could be fixed, often by wrapping setup code in setup(). However, without at least removing the "error" trigger, such tests just look like they have differently named subtests, which is sometimes OK. That makes any tooling to identify the problem from results noisy, and such tests would continue to accumulate.
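Wrapping risky top-level code in setup() can be modeled like this (a stand-in for the real setup(); in the actual harness, an exception during setup is reported as a harness error rather than as a subtest):

```javascript
const pageHarness = {status: 'OK', subtests: []};

// Stand-in: an exception inside setup() becomes a harness error, not a
// bogus failing subtest.
function setup(fn) {
  try {
    fn();
  } catch (e) {
    pageHarness.status = 'ERROR';
  }
}

setup(() => {
  // e.g. feature detection that throws in browsers without the API
  throw new Error('API not supported');
});
```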

Remove only the "error" trigger

jgraham proposed Don't assume file_is_test when there's a top-level non-assert error (runs / results). The FileAPI example turned into a timeout instead, still masking the problem.

foolip tried Remove the error_handler trigger for single-page tests (runs / results). The FileAPI example is still no good; now there's a bogus passing subtest instead. As mentioned above, this is because done() is called in generated code.

With additional tweaking one can probably ensure that an uncaught error is treated as a harness error, solving most of the problem. However, some minor warts would remain:

  • A test would have to reach an assert or done() before it's known to be a single-page test.
  • Therefore, one couldn't rely on an exception to fail the test, with behavior changing after the first assert. This is subtle, and to avoid a harness error, correct setup() wrapping would be needed.

Note: not all harness errors can be avoided, but they are often a sign of a test problem, and so avoiding them where possible helps highlight those problems.

Also remove the "assert" trigger

This would leave done() as the only opt-in, and the behavior would not change on the first assert. However, any failing assert would turn into a harness error, which is no good. Fixing it with an early opt-in amounts to this proposal.

Use single_test() as the opt-in

This would allow giving a name and other properties to the single test. However, it doesn't suggest in the same way as setup({single_test: true}) that this is a special mode that changes the rules of testharness.js.
