Check all hashes #118

Closed
jb-wisemo opened this issue Nov 16, 2022 · 2 comments

Comments

@jb-wisemo

The current specification states that the user agent should use a built-in priority list of "best" hash algorithms and only check the hashes using the strongest of the algorithms provided in the integrity attribute.

It would be simpler (and potentially more secure) to check every provided hash whose algorithm the user agent implements. Any non-matching hash means the resource is corrupted. If some hashes match and others do not, then either the resource is itself proof of a preimage collision in the matching hash(es), or the attribute contains incorrect hash values (an authoring error).
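Roughly, the difference looks like this (a sketch only; `digest`, the entry shape, and the algorithm ordering are placeholders, not the spec's actual algorithm text):

```ts
import { createHash } from "node:crypto";

type HashEntry = { algo: "sha256" | "sha384" | "sha512"; value: string };

// Placeholder digest helper (SRI digests are base64-encoded).
function digest(algo: string, data: Uint8Array): string {
  return createHash(algo).update(data).digest("base64");
}

// Current behaviour (roughly): keep only the entries that use the strongest
// listed algorithm, then accept the resource if ANY of them matches.
// Assumes at least one entry.
function matchesStrongestOnly(entries: HashEntry[], data: Uint8Array): boolean {
  const order = ["sha256", "sha384", "sha512"]; // higher index = stronger
  const strongest = entries.reduce((a, b) =>
    order.indexOf(b.algo) > order.indexOf(a.algo) ? b : a
  ).algo;
  return entries
    .filter((e) => e.algo === strongest)
    .some((e) => digest(e.algo, data) === e.value);
}

// Proposed behaviour: EVERY hash whose algorithm the user agent implements
// must match; any mismatch fails the load.
function matchesAll(entries: HashEntry[], data: Uint8Array): boolean {
  return entries.every((e) => digest(e.algo, data) === e.value);
}
```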

HTML validators that load subresources should warn about unimplemented hash algorithms, but not fail the document completely. As already specified, ordinary user agents should ignore such hashes, treating them as being for the benefit of other user agents or user agent versions.

@mozfreddyb
Collaborator

We can't check all hashes. We explicitly do the logical-or so that ambiguous resources may still match.
E.g., you're rolling out a new version of your library to the CDN but keep the hashes for current + next in the integrity metadata to ensure that inconsistencies won't lead to SRI errors.
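Roughly (placeholder digests, just to illustrate the rollover case):

```ts
import { createHash } from "node:crypto";

// Made-up digests for the current and the next version of the library;
// real values would be base64 SHA-384 digests of each file.
const integrityMetadata = [
  "sha384-<digest-of-current-version>",
  "sha384-<digest-of-next-version>",
];

// Logical OR across the listed digests: whichever file the CDN serves
// during the rollout still passes the integrity check.
function passesSri(data: Uint8Array): boolean {
  const actual =
    "sha384-" + createHash("sha384").update(data).digest("base64");
  return integrityMetadata.some((expected) => expected === actual);
}

// Under a "check all hashes" rule, no single file could match both digests,
// so every load during the rollout window would fail.
```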

I'm also skeptical about people feeling the need to protect themselves against collisions by using two variants of sha2 in their metadata... I'm leaning towards closing this issue.

@jb-wisemo
Author

The main point of checking all author-supplied hashes is to remove the ambiguity of having a browser-specific priority list. It also helps HTML authors detect wrong hashes during web site testing. For example, suppose the author supports two browser versions, one of which checks the sha256 value and the other the sha512 value, and one of the two values is wrong: only one of those browsers would ever notice. A newer browser that checks both values would fail immediately on the staging/development web server, alerting the author to the mistake as soon as they test their HTML.

The scenario of permitting multiple file versions on a CDN without versioned download URLs can easily be handled by an additional rule: if multiple hash values are provided for the same algorithm, any one of them is accepted for that algorithm. For example, an integrity attribute specifying two SHA512 hashes but only one SHA256 hash would match only if both acceptable versions (with different SHA512 hashes) had that same SHA256 hash, which is probably an HTML authoring mistake, detected as described above.
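As a sketch (my wording of the rule, not spec text; the digest helper is just for illustration):

```ts
import { createHash } from "node:crypto";

type Entry = { algo: "sha256" | "sha384" | "sha512"; value: string };

function digestOf(algo: string, data: Uint8Array): string {
  return createHash(algo).update(data).digest("base64");
}

// Proposed rule: OR within one algorithm (any of its listed digests may
// match), AND across algorithms (every listed algorithm must have a match).
function matchesProposed(entries: Entry[], data: Uint8Array): boolean {
  const byAlgo = new Map<string, string[]>();
  for (const { algo, value } of entries) {
    byAlgo.set(algo, [...(byAlgo.get(algo) ?? []), value]);
  }
  for (const [algo, values] of byAlgo) {
    const actual = digestOf(algo, data);
    if (!values.some((v) => v === actual)) {
      return false; // no acceptable digest for this algorithm
    }
  }
  return true;
}
```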

Anyway, using multiple hash algorithms only provides a security benefit in the special case where a future attack on one algorithm does not also defeat the combination. That apparently held in the past when MD5 and SHA1 were both broken, because the prepackaged attacks each supplied a single already-found magic prefix per algorithm, not one that satisfied both algorithms at once.
