
Test builder: introduce CLI options for specific sub-tasks#419

Merged
rossberg merged 6 commits into WebAssembly:master from bnjbvr:better-testing
Feb 23, 2017

Conversation

@bnjbvr
Member

@bnjbvr bnjbvr commented Feb 7, 2017

This should allow us to build only interesting subsets of the tests, for different purposes:

  • a browser vendor can build the JS and HTML tests only.
  • for the website, we'll be able to build the front page (in a single HTML page) only.

I need to make sure this interacts well with at least SpiderMonkey and the WebAssembly website gh-pages before asking for review. If other browser vendors could confirm this fits their workflows too, that would be great.

- build the pure JS tests with --js out_dir
- build the WPT tests with --html out_dir
- build the front page with --front out_dir
To determine that a file contains a testharness Web Platform test case, the
runner looks for an instance of a <script src=/resources/testharness.js>. So we
use this URI when generating the WPT test cases. In this mode, the testharness
files don't need to be generated, since the WPT server serves them.

The harness files aren't wanted in the pure JS environment either, since the
JS shell has to provide a shim mimicking the testharness.js APIs.
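As a concrete illustration, such a shell-side adapter might look roughly like this. This is a minimal sketch: the function names follow the testharness.js API, but the result bookkeeping is hypothetical, not the adapter any particular engine actually ships.

```javascript
// Minimal sketch of a testharness.js shim for a JS shell.
// Results are collected locally instead of reported to a WPT server.
const results = [];

function assert_true(actual, description) {
  if (actual !== true)
    throw new Error('assert_true: ' + (description || 'expected true'));
}

function assert_equals(actual, expected, description) {
  if (actual !== expected)
    throw new Error('assert_equals: ' + (description || actual + ' !== ' + expected));
}

function test(func, name) {
  try {
    func();
    results.push({ name: name, status: 'PASS' });
  } catch (e) {
    results.push({ name: name, status: 'FAIL', message: e.message });
  }
}

// Example usage:
test(() => assert_equals(1 + 1, 2), 'arithmetic');
test(() => assert_true(false, 'deliberate failure'), 'failing case');
```

With a shim like this in place, the JS files emitted by --js can run unmodified in a shell, since they only call the testharness.js entry points.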
@bnjbvr
Member Author

bnjbvr commented Feb 16, 2017

(not a work-in-progress anymore)

So let me recap: the script has been specialized for different separate objectives, instead of generating everything at once:

  • build.py --js /tmp/out-dir will convert all the wast tests to JS in the out-dir and copy the pure JS tests there. The testharness is not included, since shells that want to run these tests need an adapter polyfilling the testharness.js APIs (assert_equals, etc.).

  • build.py --html /mozilla/testing/web-platform/mozilla/tests/wasm will do the same as above, as well as generating one Web Platform test case per JS test case (so one for each WAST test -- those in core/, and one for each pure JS test -- those in js-api, and one for each HTML test -- those in html). These tests are ready to be used in a WPT environment; somebody who'd want to run them would just need to update the WPT Manifest.json before running. The test harness is not included, since the WPT server will just... serve it. This has been tested within Gecko, and it works like a charm.

  • build.py --front /website/docs/test allows us to generate only the one-click test page for the website; see for instance this page for the current output of this command. The PR adding support for this in the website repo is Add a front page with all the tests website#73, which depends on this PR.

@mtrofin and/or @rossberg-chromium (or anybody else who'd like to): can you please have a look? Happy to answer any questions.

@bnjbvr bnjbvr changed the title [WIP] Test builder: introduce CLI options for specific sub-tasks Test builder: introduce CLI options for specific sub-tasks Feb 16, 2017
automatically generated.

The WPT harness has a way to generate an expectation file specifying which
tests are known to be failing, and which ones are known to be passing. However,
to use this feature, it must have a unique identifier for each combination of
(file, test). To handle this, we generate a unique test number for each call to
test() in the wast harness.
def convert_one_wast_file(inputs):
    wast_file, js_file = inputs
    print('Compiling {} to JS...'.format(wast_file))
    return run(WASM_EXEC, wast_file, '-h', '-o', js_file)
Member

Why -h?

Member Author

The output JS file is intended to be used in both a pure JS environment (shell) and the WPT environment; the WPT environment redefines all the testing primitives, as you know.

Member

Er, sorry, never mind, I was confusing -h with --help. :)

test/build.py Outdated
- for wast_file in glob.glob(os.path.join(WAST_DIR, '*.wast')):
+ for wast_file in glob.glob(os.path.join(WAST_TESTS_DIR, '*.wast')):
      # Don't try to compile tests that are supposed to fail.
      if 'fail.wast' in wast_file:
Member

The runner in test/core checks for .fail., probably should be in sync.

Member Author

I see that, but this script doesn't call the run.py script in test/core. Are you suggesting we use it, or that we implement specific logic checking that the test does fail? (The latter doesn't seem too valuable, because we can't check specifically whether the failure happens at compile, instantiation, or run time.)
Or something else?

Member

Much simpler: only suggesting to use the same condition to identify failure tests, i.e., test for the same substring.

      assert_true(e instanceof WebAssembly.CompileError, "expected invalid failure:");
    }
-   }, "A wast module that should be invalid or malformed.");
+   }, testNum() + "A wast module that should be invalid or malformed.");
Member

@rossberg rossberg Feb 20, 2017

Instead of doing this at every call site to test, why not do it inside test (or define a wrapper)?

Member Author

Good idea, thanks.
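The suggested wrapper can be sketched like this. The stand-ins for test() and testNum() are included only so the snippet runs outside a WPT page; uniqueTest is the name the PR ends up using, the rest is illustrative.

```javascript
// Stand-ins for the WPT test() function and the counter, for illustration.
const ran = [];
function test(func, name) { func(); ran.push(name); }

let count = 1;
function testNum() { return (count++) + ' '; }

// The wrapper itself: call sites pass only the description, and the
// unique prefix is added in one place instead of at every call site.
function uniqueTest(func, desc) {
  test(func, testNum() + desc);
}

uniqueTest(() => {}, 'module successfully instantiated');
uniqueTest(() => {}, 'module successfully instantiated');
// ran: ['1 module successfully instantiated', '2 module successfully instantiated']
```

Two tests with the same description now register under distinct names, which is what the expectation-file generator needs.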

Member

@rossberg rossberg left a comment

Thanks, lgtm, only a couple of small suggestions.

@bnjbvr
Member Author

bnjbvr commented Feb 20, 2017

Thanks for the review, I've updated the PR.

@rossberg
Member

Thanks. I'll give @mtrofin a chance to comment too before merging.

@mtrofin
Contributor

mtrofin commented Feb 21, 2017

Apologies for only getting to this just now.

I only have a nit comment (a typo).

If you don't mind, I'd like our colleague Eric Holk to take a peek too - he has integrated this into our tree & CQ and is, by now, way more familiar with it all than I am. I pointed him this way.

Thanks!

@bnjbvr
Member Author

bnjbvr commented Feb 21, 2017

@mtrofin I haven't found your comment about the typo; could it be that it was not submitted? Thanks for the review!

      let instanciated = err === null;
      assert_true(instanciated, err);
-   }, testNum() + "module successfully instanciated");
+   }, "module successfully instanciated");
Contributor

s/instanciated/instantiated

'use strict';

// WPT's assert_throw uses a list of predefined, hardcoded known errors. Since
let testNum = (function() {

Is this function to provide the stable test id (#415)? If we inserted a new test in the middle somewhere, this would lead to changing all the numbers afterwards, right? It'd be nice if we had an ID that was robust against that, although that probably leads to hardcoding all the IDs, which gets ugly. This function, as is, is definitely a helpful improvement though.

Member Author

It doesn't really provide a stable test identifier for all the test cases, but it does for all the ones that are automatically generated from the wast core tests. The js-api and HTML tests still manage their ids themselves.

This is needed by the WPT harness. The wptrunner can automatically generate expectation files, which encode whether a single test is known to pass or fail when run in a specific browser. For this to work, the (filename, test_id) pair must be unique, which makes sense. That is why I've added the unique test id.
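The counter described here can be sketched as a simple closure. This is illustrative only; the prefix format is an assumption, not necessarily what the generated harness emits.

```javascript
// Hypothetical sketch of the per-file test counter: each call yields a
// fresh prefix, so every (file, test name) pair stays unique for the
// expectation-file generator.
let testNum = (function() {
  let count = 1;
  return function() {
    return '#' + (count++) + ' ';
  };
})();

const first = testNum();   // "#1 "
const second = testNum();  // "#2 "
```

Because the counter resets per generated file, ids are stable across runs as long as the order of tests within a file doesn't change.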


Sounds good to me.


  function assert_soft_invalid(bytes) {
-   test(() => {
+   uniqueTest(() => {

What about switching the order of the arguments to uniqueTest? I'd find it easier to read if the test name came before the code for the test.

Member Author

I liked the consistency with the WPT test function, which takes its arguments in the same order. I don't have a strong opinion, though, so I'm fine with changing it if you strongly prefer.


Consistency with WPT test seems like a good enough reason. I prefer having the name first, but I think consistency is a stronger argument than my personal preferences, so I'd leave this as is.

@bnjbvr
Member Author

bnjbvr commented Feb 23, 2017

Thanks for the reviews @eholk @mtrofin and @rossberg-chromium! I've addressed all your comments, so this might be ready for merge.

@rossberg rossberg merged commit 1610579 into WebAssembly:master Feb 23, 2017