
[WebCryptoAPI] use `assert_precondition` to avoid spurious failures #20176

Closed
wants to merge 1 commit

Conversation

@foolip (Member) commented Nov 8, 2019

No description provided.

```diff
@@ -65,6 +65,8 @@ function run_test(algorithmNames, slowTest) {
     assert_goodCryptoKey(result, algorithm, extractable, usages, "secret");
   }
 }, function(err) {
+  var supported = !err.toString().includes('not supported');
```

@foolip (Author, Member) commented Nov 8, 2019

This is testing the error message which isn't defined by the spec. Unfortunately per https://w3c.github.io/webcrypto/#SubtleCrypto-Exceptions (and algorithms) it looks like the DOMException type to check for would be "NotSupportedError" but Chromium actually uses "OperationError", so I think Chromium would need to change...
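For comparison, the two detection strategies can be sketched as below. This is a hypothetical illustration: neither helper is part of testharness.js, and both function names are invented.

```javascript
// Spec-based check: per the WebCrypto exceptions table, the DOMException
// name for an unsupported algorithm would be "NotSupportedError".
// Chromium rejects AES-192 with "OperationError", so this check would
// not fire there.
function isNotSupportedByName(err) {
  return err != null && err.name === "NotSupportedError";
}

// Message-based check, as in the diff above: fragile, because the
// message text is implementation-specific and not defined by the spec.
function isNotSupportedByMessage(err) {
  return String(err).includes("not supported");
}
```

The name-based check is the only spec-grounded option, which is why the comment above argues Chromium would need to change its exception type for this to work.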

@jimsch (Contributor) commented Nov 8, 2019

Are you seeing this for an algorithm in its entirety or are you seeing this for a specific size in an algorithm. For example, if the FOO algorithm is not supported at all, then NotSupportedError is correct. However, there are discussions what the correct error would be if you do not support the key size of 192 for the AES algorithms. In this case OperationError might be considered to be a legal answer.

@foolip (Author, Member) commented Nov 8, 2019

The case being hit in the test is AES-192 being explicitly not supported in Chromium (it has a dedicated error message) but the DOMException type is “OperationError”.

If there’s no per-spec way to detect this case as being unsupported (as opposed to supported but broken) then these tests can only be allowed to fail in the usual way, just like they currently are.

@jimsch (Contributor) commented Nov 8, 2019

That is correct. Several of us argued that not supported was better but Chromium was the hold out. I don't think this is going to be fixable.

@foolip (Author, Member) commented Nov 8, 2019

What does a straight reading of the spec say should happen if you don’t support AES-192 but some other sizes of AES? Is it left undefined, or does it say “OperationError”?

@jimsch (Contributor) commented Nov 9, 2019

So I just walked the spec again looking for anything that provides guidance, but the real answer is that even though we knew Chrome was doing this, it was ignored.

In section 27.4 Operations:

GenerateKey - in step 4 it says that if the key generation step fails, throw an OperationError. It could be argued that this is what Chrome is doing.

ImportKey - for option 'raw' of step 2, there is no error if the key length is 192. For the second option, it would appear that the best answer returned would be DataError, on the assumption that the second clause of step 5 is ignored for non-support. There are no steps for ImportKey that would return an OperationError.

I would expect the same issue to appear for the import/export functions as well. Looking at the code, it looks like it does.

@foolip (Author, Member) commented Nov 9, 2019

Could the spec be updated with an explicit step at some central place used by all methods that says what to do if the key length is not supported?

Also, what’s the background for AES-192 specifically not being supported?

@jimsch (Contributor) commented Nov 9, 2019

I could easily write an update that would do that, but I would have absolutely no idea how to go about getting it integrated into the document. The original working group has been closed for 3 years, and the follow-on group appears to have closed this year. I am not a part of the W3C in any way. (I own a winery and was an invited expert.) The person to ask about this is probably @wseltzer, but even that is a guess on my part. If written, this could also be applied to the various RSA algorithms, as not everybody is going to allow support for all modulus lengths.

Going purely on memory, as I cannot find anything in my mail archive or in a quick search of the minutes: Chrome believed that if you thought 128 bits was not enough, then you should just jump all the way to 256 bits. Nobody really needed the intermediate value, even if NIST had defined it for use in some of the profiles that were defined.

@foolip (Author, Member) commented Nov 14, 2019

Oh... there's no way to update the spec anymore? That's going to be a problem sooner or later, and right now actually.

@sideshowbarker @plehegar I see you both in https://github.com/w3c/webcrypto/graphs/contributors, do you know the status of this?

@foolip (Member, Author) commented Nov 8, 2019

@stephenmcgruer if you come across these tests in triage of Chrome-specific failures, this would be the change to drive to a conclusion. I'm going to leave it sitting here for now.

@jimsch sorry for the noise, this is web-platform-tests/wpt-pr-bot#69 acting up again. Feel free to ignore this PR.

@stephenmcgruer (Contributor) commented Nov 27, 2019

> @stephenmcgruer if you come across these tests in triage of Chrome-specific failures, this would be the change to drive to a conclusion. I'm going to leave it sitting here for now.

I did come across these! I actually started working on adding an assert_precondition myself yesterday, and then memory tickled and I recalled this email.

I'm going to try and find someone internal at Google to point to this thread.

@ericroman920 commented Nov 27, 2019

Thanks for the discussion here around making Chrome pass the tests!

To add some background:

The conclusion from the WG is that implementations are required to model unsupported parameters (e.g. key size) as an OperationError: https://www.w3.org/Bugs/Public/show_bug.cgi?id=27598. So in practice I think that is the right error for UAs to be giving, and for the test to check for.

There was much debate about what it means to have a spec-compliant implementation of Web Crypto with regard to supported algorithms and their parameters: the tradeoff between predictability for web developers and implementability. UAs may be constrained by policy around what cryptographic primitives they can, or want to, expose, especially if backed by hardware or leveraging the platform's crypto stack or a TLS library.

Moreover, supported algorithms may change as algorithms become weak/compromised and are removed/disabled from the underlying stacks. While wholly unsatisfying for compliance testing, the spec intentionally doesn't guarantee any particular supported profile of algorithms: https://www.w3.org/TR/WebCryptoAPI/#concepts-underlying-implementation.

Support for 192-bit AES keys is just one example of this conflict around supported parameters between implementations. The same dilemma applies to RSA key sizes: the spec is non-specific about what RSA modulus may be used, and the minimum and maximum modulus may be constrained by implementations. In Chrome's case the range is currently 256 bits to 16384 bits (though keys that small offer no security, and keys that big are unbearably slow).

In the past, Chrome used to throw a NotSupportedError, as was suggested on this thread. However, that was determined to be non-compliant, so it was changed to OperationError: https://bugs.chromium.org/p/chromium/issues/detail?id=441877.

As for why Chrome doesn't support AES-192: support for 192-bit keys was removed from the underlying crypto library because it is considered a legacy feature and is not needed by TLS. Other than these web platform tests, there hasn't been user demand for 192-bit AES support in Chrome's Web Crypto implementation, so resolving this theoretical incompatibility is unappealing: https://bugs.chromium.org/p/chromium/issues/detail?id=533699.

@foolip (Member, Author) commented Nov 28, 2019

Thanks @ericroman920!

Is OperationError used for any other error condition? My assumption was that it is, and that would make it impossible to distinguish "doesn't support at all" from "supported but the test found a bug".
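The ambiguity being asked about can be made concrete with a small sketch. The exception names below come from the WebCrypto spec's exceptions table; the triage labels and the function itself are invented for illustration.

```javascript
// Classify a generateKey rejection by DOMException name alone.
// "OperationError" is the problem case discussed in this thread:
// it can mean either "key length deliberately unsupported"
// (Chromium's AES-192 behavior) or "supported but the operation
// genuinely failed" (a real bug), and the name cannot distinguish
// the two.
function triageRejection(err) {
  switch (err.name) {
    case "NotSupportedError":
      return "unsupported";        // unambiguously not supported
    case "OperationError":
      return "unsupported-or-bug"; // ambiguous: cannot tell which
    default:
      return "bug";                // e.g. TypeError: a test failure
  }
}
```

If the middle case exists, a test harness cannot safely treat the rejection as a precondition failure, which is what this PR was trying to do.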

@gsnedders gsnedders closed this Jan 24, 2020
@gsnedders gsnedders deleted the foolip/assert_unreached-WebCryptoAPI branch Jan 24, 2020
@gsnedders gsnedders restored the foolip/assert_unreached-WebCryptoAPI branch Jan 24, 2020
@Hexcles Hexcles reopened this Jan 24, 2020
@stephenmcgruer (Contributor) commented Feb 17, 2020

Coming back to this as it is still the biggest source of Chrome-specific failures in WPT. https://www.w3.org/Bugs/Public/show_bug.cgi?id=27598 points to w3c/webcrypto#62. In that issue it seems clear to me that (at the time) the spec authors considered supporting AES-GCM but not supporting all key sizes to be a straight violation of the spec. This did not apply to RSA, since the RSA algorithms apparently have a generic 'if the operation fails, throw an OperationError' step that doesn't exist for AES-GCM.

My understanding at this point is that Chrome will not implement 192-bit AES-GCM. That leaves us with two options:

  1. Consider Chrome a spec-violating browser, and close this PR. The Chrome-specific failures would remain, but that would be the correct course of action - our job is not to cover up incompatibility.
  2. Change the spec, possibly by adding a similar generic get-out clause for AES-GCM (or a specific one for key size).

I would much prefer option #2, but I see from @ericroman920 that from their perspective the WebCrypto spec is now 'closed'. I think I just have to repeat @foolip's call to action here - to ask @sideshowbarker and @plehegar their opinion on whether the WebCrypto spec could still be modified, and what effort that would take.

@wpt-pr-bot wpt-pr-bot requested a deployment to wpt-preview-20176 Feb 17, 2020 Abandoned
@sideshowbarker (Member) commented Feb 18, 2020

> I think I just have to repeat @foolip's call to action here - to ask @sideshowbarker and @plehegar their opinion on whether the WebCrypto spec could still be modified,

For certain cases, current W3C process provides for producing an Amended Recommendation:

https://www.w3.org/2019/Process-20190301/#rec-amended

> An Amended Recommendation is a Recommendation that is amended to include substantive changes that do not add new features, and is produced by the W3C at a time when the Recommendation does not fit within the charter of any active Working Group. Since the W3C team rather than a Working Group moves it through the Process, there are implications regarding the scope of Royalty-Free IPR licenses granted under the W3C Patent Policy.

Because we no longer have a Web Crypto working group — and because we also have no other working group that’s currently chartered with a scope that could include updating the Web Crypto spec — then unless we were to charter a new working group to work on this, producing an Amended Recommendation would be the only option available (as far as W3C process goes).

All that said, I don’t know whether the change being discussed here — “generic get-out clause for AES-GCM (or a specific one for key size)” — would be considered part of the class of what the W3C process document describes as “substantive changes that do not add new features”.

I had thought the Amended Recommendation option was mostly created for the case of outright bugs in a spec — errata — but maybe the language in the W3C process document was chosen so that the Amended Recommendation option could be used in a broader set of cases (e.g., a case like the one under discussion here).

what effort that would take

Publishing an Amended Recommendation would take a significant level of effort. It’d need several months to make its way through the necessary W3C process steps, and it’d require somebody to put some number of hours over those months into the work of taking it through those steps.

But that would be the fastest way to get an updated spec published at the W3C. The other option — chartering a new group to produce an updated spec — would take significantly longer.

@jimsch (Contributor) commented Feb 18, 2020

Completely random data point - there is an effort going around someplace to get some new algorithms added to the WebCrypto specification. w3c/webcrypto#233 says that a presentation was made to TAG about this. Not being in the W3C, I cannot make any statements about what this may or may not mean.

@foolip (Member, Author) commented Feb 18, 2020

Thanks @sideshowbarker!

I've come across a similar limbo before. Do specs generally stay in this limbo until there are enough pressing issues to spin up a new WG, or is there a place where specs can go into maintenance mode?

@sideshowbarker (Member) commented Feb 18, 2020

> Do specs generally stay in this limbo until there are enough pressing issues to spin up a new WG

Basically, yes — unless the work ends up moving back to the WICG (which has happened in a few cases) or in other W3C community groups.

> or is there a place where specs can go into maintenance mode?

I think @plehegar and others have been working on some process refinements for handling “maintenance mode” specs, but I don’t know how soon those refinements will be rolled out (and I’m not very up to date on the current details around how they’ll end up being handled).

@foolip (Member, Author) commented Feb 18, 2020

Thanks @sideshowbarker, sounds like right now we'd be facing a bunch of work if we did want to update the spec, with any of the available options.

@stephenmcgruer unless there's a use case for web developers for distinguishing between a specific key size not being supported and a whole algorithm not being supported, it seems hard to justify a spec+impl fix to help us distinguish them. Maybe we just annotate these failures and look at what's left? If we keep finding cases like this maybe we need a "won't fix" status for these annotations to get them out of the metrics.

@stephenmcgruer (Contributor) commented Feb 18, 2020

Thanks all for your input, really appreciate it :)

> it seems hard to justify a spec+impl fix to help us distinguish them

From a practical viewpoint, I agree. I have two main concerns, but I think for now they're just food for thought:

  1. Chrome is knowingly violating a standard, with no plan to either change Chrome or seeing if the standard could be changed. This case is mostly-harmless, but we agree to standardize the web for a reason :)
  2. The spec fix here seems (to inexperienced me!) to be quite minor, but we're saying it would take huge amounts of effort to do because of 'process'. It's very possible there are good reasons for that, but in general the concept concerns me. Process is meant to help us, and we should be cautious of process starting to stop progress.

I repeat, the above are food for thought and I'm not saying either are a reason to revisit this decision.

> Maybe we just annotate these failures and look at what's left? If we keep finding cases like this maybe we need a "won't fix" status for these annotations to get them out of the metrics.

I think "won't fix" is a valid triage state, and it should be possible for us to look at the metrics without such cases, but I also think it's important to keep tracking the entire set of browser-specific failures. It's important to both enable forward progress of fixing the things we can agree to fix, whilst also keeping an eye on the actual level of the problem. Basically, I agree, with some postulating about how this case still damages the web ;)

@foolip (Member, Author) commented Feb 18, 2020

> The spec fix here seems (to inexperienced me!) to be quite minor, but we're saying it would take huge amounts of effort to do because of 'process'. It's very possible there are good reasons for that, but in general the concept concerns me. Process is meant to help us, and we should be cautious of process starting to stop progress.

While there might be good reasons that have led to this situation, I'm definitely not going to claim it's a good situation; it's very frustrating when one can't make changes to old specs. Sooner or later there will be a more important reason to change it, and then it'll be frustrating again.

But in each individual case it's almost never worth addressing the problem; instead it's more expedient to route around it. Hard to point fingers when everyone just wants to get stuff done and things like this never bubble to the top of priorities.

@sideshowbarker (Member) commented Feb 19, 2020

> The spec fix here seems (to inexperienced me!) to be quite minor, but we're saying it would take huge amounts of effort to do because of 'process'. It's very possible there are good reasons for that, but in general the concept concerns me. Process is meant to help us, and we should be cautious of process starting to stop progress.

> While there might be good reasons that have led to this situation, I'm definitely not going to claim it's a good situation; it's very frustrating when one can't make changes to old specs.

To be clear on the possible options: In order to define new requirements or to alter existing ones, it’s not strictly necessary to change an old spec; instead the requirements can be defined in a new, different spec that normatively supersedes the old spec. And that new spec doesn’t necessarily need to be published by the W3C — it just needs some level of agreement among implementors and just needs (eventually) some implementations, and it can become a de facto standard.

The Speech API is an existence proof of a spec that got implemented in multiple UAs despite never being published at the W3C or WHATWG. The WebAssembly spec is another example of such a spec. We eventually chartered a W3C working group to publish that spec — but it’s important to remember that WebAssembly had already been implemented and shipped in (I think) every major browser engine before that working group was ever created.

So if the goal is to publish a spec somewhere, then the effort that would take is relatively modest. But publishing a “Rec-track” deliverable at the W3C necessarily requires a lot more effort. @plehegar and others have been working on ways to mitigate the process overhead for cases like the one being discussed here — but no matter what, getting a document published by the W3C is always going to require a significant amount of time and effort.

> Sooner or later there will be a more important reason to change it, and then it'll be frustrating again.

> But in each individual case it's almost never worth addressing the problem; instead it's more expedient to route around it. Hard to point fingers when everyone just wants to get stuff done and things like this never bubble to the top of priorities.

Yeah, knowing what I know about process, and given the choice between voluntarily launching into something that’s going to introduce a lot of process overhead, or instead finding a way to expediently route around it, I suggest strongly considering the “find a way to expediently route around it” option.

@foolip (Member, Author) commented May 26, 2020

I'm going to close this since there isn't a way to detect support of AES-192 separately from it being supported but not working properly. Also, web-platform-tests/rfcs#48 renamed the asserts; assert_implements_optional would be the thing to use if this could be detected.
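For reference, a rough sketch of how assert_implements_optional could be used if unsupported key lengths were detectable by exception name. Everything below is a local stand-in: the real testharness.js marks a failing assert_implements_optional as PRECONDITION_FAILED rather than throwing a plain error, and handleRejection is an invented helper.

```javascript
// Minimal stand-in for testharness.js's assert_implements_optional
// (post web-platform-tests/rfcs#48): abort as "precondition failed"
// instead of failing the test.
class OptionalFeatureUnsupportedError extends Error {}

function assert_implements_optional(condition, description) {
  if (!condition) throw new OptionalFeatureUnsupportedError(description);
}

// Hypothetical rejection handler: treat a NotSupportedError rejection
// as an unsupported optional feature, and anything else as a real bug.
// Per this thread, this does not work for Chromium's AES-192 case,
// which rejects with OperationError instead.
function handleRejection(err) {
  assert_implements_optional(
    err.name !== "NotSupportedError",
    "algorithm/key length is an optional feature"
  );
  throw err; // any other rejection is a genuine test failure
}
```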

@foolip foolip closed this May 26, 2020
@foolip foolip deleted the foolip/assert_unreached-WebCryptoAPI branch May 26, 2020