RFC #16 - PRECONDITION_FAILED subtest result (status) #16

Merged (9 commits) on Sep 19, 2019
New file: rfcs/not_implemented.md (72 additions, 0 deletions)
# RFC #16: 'PRECONDITION_FAILED' subtest result (status)

## Summary

Allow a new subtest result, distinct from the `FAIL` and `MISSING` results,
produced by asserting a precondition requirement for a subtest; the precondition
must be met for the test to produce any meaningful result.

## Details

### Background

Some specs, such as [WebCryptoAPI](https://www.w3.org/TR/WebCryptoAPI/#scope-algorithms),
focus on the discovery of the set of algorithms a user agent implements, but do not mandate
that any *particular* algorithm is implemented. As a result, it is perfectly spec-compliant to
omit any (or even **all**!) implementations.

With respect to testing, there are two options for results when enumerating the
set of implementations:

- Consider a lack of implementation a `FAIL`
- Omit results for known non-implementations (`MISSING` on wpt.fyi)

The current behaviour of [some WebCryptoAPI tests](https://wpt.fyi/results/WebCryptoAPI/encrypt_decrypt/aes_ctr.https.worker.html) is actually a mixture: some subtests
are missing, and some are "failing".

### Proposal

Add a subtest result of `PRECONDITION_FAILED`: a result for subtests which
assert an unmet precondition, and thus do not run the remainder of the test.

Add an `assert_precondition(condition, description)` helper function, which concludes
the subtest with a `PRECONDITION_FAILED` result when `condition` is not truthy.

> __assert_precondition(condition, description)__
>
> Concludes the test with `PRECONDITION_FAILED` status if `condition` is not truthy.
> Used for skipping subtests that will not produce a meaningful result without the condition,
> e.g. optional features that are not implemented by the user agent.

#### Example Usage

    promise_test(function(test) {
      return subtle.generateKey(algorithm, extractable, usages)
        .then(
          function(result) { ... },
          function(err) {
            if (isUnsupported(err)) { // "Unsupported" case determined ad-hoc
              assert_precondition(false, algorithm + ' not implemented');
            } else {
              assert_unreached("Threw an unexpected error: " + err.toString());
            }
          });
    });
**Review comment (Contributor):**

Is it intentional that the execution continues after the skip() call in this example?

**Review comment (Member):**

As a point of reference, mocha does throw in its `this.skip()`:
https://github.com/mochajs/mocha/blob/186ca3657b4d3e0c0a602a500653a695f4e08930/lib/runnable.js#L138

We could do that with a specific error type, or we could treat this as `t.done()` and keep going but ignore anything the test does. The difference, of course, is that it's unlikely there's stuff after `t.done()` but very likely there's stuff after `t.skip()`.

@lukebjerring WDYT?

**Review comment (Contributor, author):**

Not intentional (the prototype was tweaked to fix that).

@foolip I would expect `t.skip` to be the "end of the road" as far as the test content is concerned, since "ignoring" further execution would be more computationally expensive and would mean extra code in the testharness.
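
Below is a minimal sketch of the "specific error type" approach discussed in the thread above. The names `PreconditionFailedError` and the shown body of `assert_precondition` are illustrative assumptions, not the actual testharness.js implementation, and the harness-side handling is only indicated in a comment:

    // Hypothetical helper: throwing ends the subtest body immediately,
    // so no code after an assert_precondition() call runs.
    function PreconditionFailedError(message) {
      this.message = message;
    }
    PreconditionFailedError.prototype = Object.create(Error.prototype);
    PreconditionFailedError.prototype.name = 'PreconditionFailedError';

    function assert_precondition(condition, description) {
      if (!condition) {
        throw new PreconditionFailedError(description);
      }
    }

    // The harness's catch-all error handling would then need to recognise
    // this error type and record PRECONDITION_FAILED instead of FAIL.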


## Advantages

For spec-compliant omission of an implementation, it allows a result that is distinct
from what the other outcomes imply:

- `FAIL` implies an incorrect implementation
- `MISSING` implies a test was not run at all
- `PRECONDITION_FAILED` implies a test was not applicable

It also allows for more expressive tests, letting authors be explicit about their
expectations.

## Disadvantages

- An extra outcome to consider
- Will involve viral changes to logging and to log consumers
- Possible skewing of metrics from the point at which tests change to use this status