It's been widely discussed that SXG certificates enable off-path attacks: an attacker who has stolen an SXG private key and minted SXG content can serve it to a client from any server they control, such as a phishing site, and the browser will trust it. This opens up impersonation possibilities that are not available to an attacker who merely compromises a TLS certificate: that attacker still needs to somehow get a client that is already connecting to a site (via DNS and TCP/IP) to connect to a server under their control to fetch the content, which usually requires DNS poisoning or an on-path position between the victim and the server.
Because SXG introduces a new off-path attack vector, it's worth considering ways to mitigate it in the specification. We found ourselves in the same situation for the Secondary Certificates project (https://tools.ietf.org/html/draft-bishop-httpbis-http2-additional-certs-05), which lets a server with specially-issued secondary certificates serve content for multiple sites over the same connection, and therefore enables similar off-path attacks if such a certificate is compromised.
The mitigation discussed in Secondary Certificates to prevent arbitrary off-path domain hijacking was a simple one: we proposed that for a certificate to be used as a secondary certificate, it must carry an additional extension called "Required Domain". To accept a certificate with a "Required Domain" extension, the server must have previously served, on the same connection, a certificate that covers the required domain. This has the nice property that a compromised secondary certificate can only be used to hijack traffic if the attacker also controls the required domain.
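For concreteness, the acceptance rule might be sketched like this (the function and parameter names are hypothetical, not from the draft, and real domain matching would follow the certificate name-matching rules):

```python
def accept_secondary_cert(required_domain: str,
                          domains_served_on_connection: set[str]) -> bool:
    """Hypothetical sketch: accept a certificate carrying a
    "Required Domain" extension only if a certificate covering that
    domain was already served on this connection."""
    return required_domain in domains_served_on_connection

# A stolen secondary cert requiring "cdn.example" is useless on a
# connection that has only authenticated "attacker.example":
assert not accept_secondary_cert("cdn.example", {"attacker.example"})
assert accept_secondary_cert("cdn.example",
                             {"publisher.example", "cdn.example"})
```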
I'd like to suggest that this mechanism be considered here for SXG certificates. In order to serve an SXG from a cache, the certificate would need to have that cache's domain in its set of "Required Domains". This would drastically reduce the capabilities of an attacker who steals an SXG cert key, mints SXGs and serves them from an arbitrary phishing domain.
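Applied to SXG, the client-side check when fetching an SXG from a distributor could look something like the following (again, names are illustrative only; "Required Domains" as a set-valued extension is part of this proposal, not the current spec):

```python
def may_serve_sxg(cache_domain: str, required_domains: set[str]) -> bool:
    """Hypothetical sketch: trust an SXG served by cache_domain only if
    that domain appears in the signing certificate's proposed
    "Required Domains" set."""
    return cache_domain in required_domains

# A stolen SXG key can no longer be exercised from an arbitrary
# phishing domain:
assert may_serve_sxg("cache.example", {"cache.example"})
assert not may_serve_sxg("phishing.example", {"cache.example"})
```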
We've talked about ideas in this direction, although not exactly this one. While they improve security, they also exacerbate @ekr's concern that the feature increases large organizations' control over the internet: for example, we don't want publisher.example to be able to declare that only Google is allowed to cache their SXGs.
@martinthomson, will your document discuss how Mozilla feels about mitigating stolen private keys vs mis-issued certificates? IIUC, this suggestion doesn't help with mis-issued certificates.
It seems like there are two questions here, actually:
1. Should it be possible for publishers to constrain the origins where their content can be cached?
2. Should certificates contain that constraint so that stolen keys are not a compromise threat?
Obviously, the answer to (2) is "no" if the answer to (1) is "no", but that's not a complete answer. It seems like what's really at stake is whether we think of an SXG as a portable object or as a cooperative venture between the publisher and the cache.
A useful variant may be to allow certificates to grant all origins permission to serve their SXGs, but also provide a mechanism to restrict them. That would enable use of the SXG primitive for decentralized web use cases, while also allowing publishers to restrict distribution.
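One way to model that variant (purely illustrative): treat the absence of the restriction as a grant to all origins, and an explicit set as an allowlist.

```python
from typing import Optional

def may_distribute(origin: str,
                   allowed_origins: Optional[set[str]]) -> bool:
    """Hypothetical sketch: None means the certificate grants all
    origins (the decentralized-web case); a set restricts distribution
    to the listed origins."""
    return allowed_origins is None or origin in allowed_origins

assert may_distribute("anycache.example", None)  # unrestricted cert
assert not may_distribute("phishing.example", {"cache.example"})
```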