
IDLHarness subtests "member names are unique" are misleading (always pass) and don't actually test anything on the browser #52659


Open
clopez opened this issue May 19, 2025 · 5 comments

Comments

@clopez
Contributor

clopez commented May 19, 2025

One of the idlharness subtests that runs on mixin interfaces is named member names are unique.

As far as I can tell, what this subtest seems to do is compare the IDL fragments of a mixin interface to check that the members defined inside that mixin are unique, i.e. not also defined on the other IDL interfaces that include it.

To perform that check, however, it simply parses the IDL files in the WPT interfaces directory; it does not actually test anything in the browser.

So this is really a test for the IDL files, not a test for the browser.

For example, given this IDL fragment:

interface mixin NavigatorBadge {
  Promise<undefined> setAppBadge(
    optional [EnforceRange] unsigned long long contents
  );
  Promise<undefined> clearAppBadge();
};

Navigator includes NavigatorBadge;

The code would happily create a test named Navigator includes NavigatorBadge: member names are unique, which simply passes if setAppBadge and clearAppBadge are not defined in any of the other IDL files that mix into Navigator.

This means that whether this subtest passes does not depend at all on the browser running the test. It does not even check whether setAppBadge and clearAppBadge are defined on navigator.

In practice, these subtests are always going to pass (assuming the IDL files are correct).

And this causes misleading results like this one:

a 17/22 pass rate on the badging/idlharness.https.any.html test in browsers that don't implement this interface at all (wpewebkit, ladybird, etc.)

So I think we should either:

  • Assert that the browser actually defines the checked members on the interface,
    (or)
  • Move these tests to be Python tests or some other kind of unit tests related to WPT itself, not to the browser.
@bkardell
Contributor

Should these use assert_precondition somehow, @atjn?

@clopez
Contributor Author

clopez commented May 20, 2025

And the same happens for the subtests idl_test setup and idl_test validation.

These two also always pass, because they test the IDL files and not features of the browser.

See for example how in accelerometer this adds 2 passing subtests for browsers that clearly do not implement that interface/spec.

@atjn
Contributor

atjn commented May 21, 2025

@bkardell Yes, if you want to go with option 1 and assert whether the browser actually implements the interface, then I think you should use precondition_failed to test it. It makes it clear that the test is not really a failure; it is just flagging that an optional feature is missing.

But it sounds to me like option 2 is the only real solution. Even if you add an extra check for the implementation in each browser, the tests that scan the IDL interfaces are still not performing any browser-specific checks, and should IMO be moved somewhere else where it is clear that they are testing WPT and not the browsers.

@bkardell
Contributor

@atjn Does option 2 mean moving those tests to Python tests, or some other kind of unit tests related to WPT itself rather than to the browser?

@atjn
Contributor

atjn commented May 22, 2025

@bkardell Yes. I am basing that assertion on the description provided by clopez; I do not have a lot of experience with idlharness myself. But it sounds correct to me :)
