Should a SXG document be considered SecureContext or not? #388
Could you explain why you don't believe that the level of security is as high? What are the properties that you think aren't met?
As discussed in #376, if there is a buggy SXG, chances are high that attackers will use it as much as possible.
Thanks! I was expecting you to say something about the key protection (which is similar to TLS session resumption, delegated credentials, secondary certs, and a host of other network-level things which may be invisible to the client), so that's a very different direction! Would this same logic apply to resources a client caches on disk, unrelated to SXG? That is, I'm trying to unpack what the property is that the HTTP disk cache provides (as we seem to be comfortable with SecureContext there), but that SXGs do not. Similarly, I'm trying to understand a bit more about where "SecureContext" is reflective of the integrity of the transport, versus when it becomes about the 'security' of the content. For example, whether we'd deny SecureContext for certain CSP policies. I had always imagined SecureContext to be about the transport-level security properties and integrity.
I think there is a difference between the two.
A cache entry isn’t necessarily under the control of the publisher, is it?
Especially once the buggy entry is cached, only a Clear-Site-Data activity
would flush it, correct? Or is there some other element of control?
I’m trying to understand this concern more by trying to map to the world we
have, because I’m not sure it’s clear the property that is both missing and
critical enough to SecureContext to deny it for SXG, when we have much
stronger signals of origin authenticity than TLS itself.
For example, in the world we have, we allow a site served over TLS to be
treated as SecureContext, even though it may have been served by a stale
CDN that doesn’t support Flushing. A site that “could” flush their CDN, but
doesn’t, doesn’t seem fundamentally different from an SXG that “could” have
a JS flush check, but doesn’t. Should we deny SecureContext to known CDN
ASes, since we don’t know whether the content is fresh?
Understandably, there’s a tradeoff for the priority of the constituencies,
but it’s not clear why the freshness would impact SecureContext, or where
that threshold is.
Alternatively, it may be that “freshness” isn’t the essential property, but
“voluntary distribution agreement,” since only CDNs with a private key
associated with that domain can serve that content. If that’s the case, it
seems alternative designs may exist to address the “relationship” property.
Right, in such a case, there is a clear understanding that both parties are working together. With SXG, anybody can distribute the content.
@youennf Thanks! I'm wanting to make sure I've got the problem well-framed enough to explore solutions. I mention this because a liveness check is so critically disruptive to the privacy and performance properties of SXG that it seems like it would be a significant step back for a number of use cases. It sounds like your primary concern is "a bug could be introduced in the content shipped in an SXG"; is that a fair (although grossly oversimplified) summary? If it is, could you help me unpack a bit more the type of "bugs" that would be concerning? Naively, I would get the impression that this is only concerned with scriptable content, but perhaps it's seen as generalizing to other types; for example, would an SXG of a CSS file be problematic? What about a PNG file? The concern - of a bug - sounds very different from the properties that SecureContext is meant to assert or guarantee. The client doesn't know about the relationship between the CDN and the Origin, for example, so at best it merely seems to be an assumption that bugs "could" be fixed, not necessarily that they're prevented, purged, or otherwise managed. Given that SXGs (for scriptable content) have the ability to hotfix 'bugs', it seems similar to the status quo. I think it might be useful to compare with OCSP for checking certificate revocation. No UA has ever denied SecureContext or deferred processing content if an OCSP check does not succeed, even though OCSP is (effectively) a "liveness check" for the TLS certificate. The closest we got was Opera rendering the content but degrading the UI, and they moved away from that. A liveness check for SXG would seem functionally identical to an OCSP request. Is there some property different for SXGs versus TLS certs worth also capturing here?
I am mostly concerned about navigation loads. Once you have a document, other mechanisms like SRI can be used if need be. IIUIC, there were performance issues that make things difficult with OCSP checks.
In the case where signed exchanges are fetched same-origin, this is business as usual, probably no need for additional checks. Consequences with APIs like payment API might be bad.
That might be feasible for some APIs but would add quite a bit of complexity.
That is something that would be good to better understand. Maybe the use cases require different solutions or complementary solutions.
Sorry, now I'm even more confused in trying to understand the principle or goal you're trying to capture by not treating SXGs as SecureContext. If it was about the transport properties, such as TLS as it's used today, it would seem like it would matter equally regardless of content. You mentioned being concerned about bugs in the content, which seem possible there as well. However, the later parts of your reply leave me wondering whether the goal is to restrict access to certain APIs, using SecureContext as the mechanism for that restriction.
I don't think the data supports this conclusion, as practiced today. But I'm also concerned that it's a bit at odds with the goals of improving distribution for Web developers and for users in emerging markets.
While not the issue for this thread, it would be useful if data could be shared on that. In many cases, we've seen HTTPS be faster for users, whether through enabling new protocols (H/2 or QUIC) or through avoiding network (mis)management.
I fear that may be overlooking significant use cases. One which we've heard from a number of developers is the idea of effectively prefetching or preloading content in the background, to enable quick and efficient rendering. Using prefetch or preload, as they exist today, reveals the user, just as much as liveness checks would. If this was aggregated across several domains, the act of that prefetching may reveal information about the content the user is viewing on the Distributor - for example, by observing liveness fetches to the Publisher. Specifying a liveness check would introduce that same privacy risk, making it unlikely for Distributors to serve that content (versus, say, self-hosting it, as some may do today). This is why I'm so keen to understand the security properties we're talking about, and how we quantify them, in order to see if there are alternative, less disruptive solutions which both help developers and help keep the platform consistent.
Fwiw, I'd like to avoid this too, unless we find it's really the only desirable path. As far as I know (at least in most cases), security state is determined upon navigation / document creation; introducing a new intermediate state between non-SecureContext and SecureContext, and allowing transitions between them, seems to open up another complex problem space, possibly too complex. I agree with @sleevi that we should nail down the security properties first. I also agree that the primary concern, i.e. bugs, seems to be something different from the property that SecureContext is meant to guarantee. Let me also /cc @mikewest regarding SecureContext.
Sure, the point is that HTTPS's initial objective was probably to get security right while being reasonably efficient. Follow-up efforts made it even better.
Let me try an example. A web site has a security issue related to a particular resource. The web site is using things like proxy-revalidate so it only needs to care about client caches. In a world without signed content, this might be good enough. After the client cache is empty, an attacker makes the client download a signed copy of the faulty resource. The ping to the server will not trigger any cache data clearing/reload. According to https://wicg.github.io/webpackage/loading.html, the signed content is not added to the HTTP cache (as some kind of protection?). Let's say now the above web site is using a service worker and the Cache API. The faulty resource will be stored opportunistically by the service worker (its cache was cleared) and will poison the web site persistently.
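To make the last step of that scenario concrete, here is a toy sketch (not the real Service Worker API; `ToyCache` and `handleFetch` are illustrative names) of the common cache-first pattern, showing how a validly signed but buggy resource, once stored, keeps being served even after the origin ships a fix:

```javascript
// Toy model of a cache-first service worker. Real service workers use
// the Cache API (caches.open/cache.match/cache.put); this stand-in just
// demonstrates the persistence property being discussed.
class ToyCache {
  constructor() { this.store = new Map(); }
  match(url) { return this.store.get(url); }
  put(url, response) { this.store.set(url, response); }
}

// Cache-first fetch handling, as many service workers implement it.
function handleFetch(cache, url, network) {
  const cached = cache.match(url);
  if (cached) return cached;   // a poisoned entry wins indefinitely
  const fresh = network(url);
  cache.put(url, fresh);       // opportunistic caching
  return fresh;
}
```

Under this model, once the attacker has caused the signed-but-faulty copy to be cached, the origin's fixed version is never fetched for that URL.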
SecureContext is about the trust you can have in a given document. We can also look at the HTTPS state of the resource. Let's say a content provider web site has a bad TLS setup.
Note that self-hosting also causes some privacy issues, since the distributor might know all the newspaper articles a user is reading, and not only the first one, if the user is navigating to the content provider. Agreed on the principle that the privacy implications need to be cautiously evaluated. Some flexibility can also be left to user agents in the way they implement liveness checks.
The reasoning for this isn't a security protection; it's about the privacy aspects. Those privacy aspects may be addressed/addressable in the context of double-keyed caches; however, since such work is not normatively specified right now, and there are UAs that don't double-key, we took an approach that maximizes privacy (in many of the design elements). This same focus on privacy is why liveness checks are deeply concerning; as shown with OCSP, liveness checks fundamentally harm efforts to protect user privacy. The design goal has been to ensure that SXG not only does not introduce any privacy issues relative to the status quo, but also takes opportunities to improve it, where they exist. @jyasskin Would it make sense to capture this in the draft privacy considerations or as an explainer, perhaps? Namely, to capture some of the explicit design goals (for privacy and security) that contributed to the current design? It doesn't quite feel right in the spec, but it seems like it'd be useful context for folks reading to understand "Why X, not Y?"
Can you please explain how this would be? This only seems like it would be possible if the Publisher actively collaborated with the Distributor, by providing Distributor-specific SXGs in which all outbound links (e.g. to other articles of the Publisher) instead explicitly specify Distributor SXGs. In such an 'active collaboration' model, it's unclear whether this is a change from the status quo - the Publisher could do this via Pings or back channels, right?
I suspect we may have differing understandings of the processing model for SXGs as proposed, and of the privacy properties we're trying to achieve. It sounds as if your focus is on UA-provided information to the Distributor, such as credentials or other headers. However, a big concern on our end has been both the Publisher learning about the Distributor, and those on the network learning about activities on the Distributor. The problem with liveness checks is that they undermine privacy in ways similar to XS-Search, even in a credential-less fetch. With a liveness check, credentialed or not, a network observer would be able to determine that a user is reading a given Publisher's content, even when it was served by a Distributor. Similarly, on the publisher side, a publisher that received a liveness check would learn that a user is reading its content, and potentially which Distributor served it. As we've seen across the Web ecosystem, privacy-conscious distributors are concerned about these sorts of side channels - to both network observers and to less privacy-conscious publishers - and so take steps, such as rehosting content same-origin, to prevent them. SXGs are a means of achieving those privacy-preserving properties, while allowing meaningful and accurate attribution to users. Hopefully this captures more clearly why liveness checks are fundamentally hostile to user privacy, and why they've been an important design consideration throughout. One would expect that the first steps a privacy-conscious browser or extension would take would be to disable them (or disable SXGs), both of which would result in even worse consequences for the ecosystem, especially around privacy and authenticity of content. I'm hoping we can find alternative solutions that achieve that same goal, which is a critical use case here.
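As a toy sketch of that network-observer side channel (the hostname `publisher.example` and the function name are illustrative, not from any spec): an observer who can see destination hostnames, e.g. via DNS or TLS SNI, learns of the visit from the mere presence of a liveness fetch, regardless of whether credentials are attached.

```javascript
// Toy model: what a passive network observer can conclude from seeing
// the destination of a single, credential-less liveness fetch.
function observerInference(observedHosts, publisherHost) {
  // The mere presence of the hostname in the traffic leaks the visit,
  // even though the fetch itself carried no cookies or credentials.
  return observedHosts.includes(publisherHost)
    ? `user is reading ${publisherHost} content`
    : 'no signal';
}
```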
We've tried very carefully to avoid overloaded terms like 'trust', which can mean varying things to recipients. In the context of the objectives that the SecureContexts spec sets out, our view has been that it affords an appropriate level of confidentiality, integrity, and authenticity. Much of the threat model and considerations address this in the context of whether or not an origin has been authenticated. As it relates to SXGs, I hope we've got agreement that the integrity and authenticity properties have been sufficiently maintained and are equivalent to those of TLS. The trade-offs with respect to confidentiality/privacy are noted in the SXG spec's privacy considerations. I think we want to be careful about 'trust', because a definition that also includes "trust that there were no bugs", or "trust that the content is honest or accurate", or more broadly, "users can trust this", requires a lot of unpacking and carries differing expectations. Much like TLS doesn't and shouldn't guarantee that the content is 'trustworthy' - merely that it was delivered over a connection with C/I/A properties - we have tried to avoid introducing those more subjective and problematic elements of trust.
Below are some more thoughts related to discussions we had with Jeffrey, Kouhei, and Yoav during IETF 104.
True. Currently, I think the benefits of a liveness check outweigh its drawbacks, in the context of a browser.
I think SecureContext should be granted provided a liveness check was done on the domain and passed no more than 30 days ago. The reason is that the minimum RGP ("Redemption Grace Period") according to ICANN - called a "quarantine" period by most registrars - is 30 days. Only past the 30 days is there a possibility that the domain has been acquired by an unrelated third party.
I’m not sure it’s clear why the ICANN policies would relate at all to the
liveness check. That seems largely orthogonal? In the event of a domain
registration change, the SXG certificate will have been revoked, or can be
by the new holder, by virtue of the existing rules for certificates (e.g.
the Baseline Requirements, Section 4.9.1.1).
The reason the liveness check should relate to the ICANN policies is that within the grace period, we can, security-wise, know 100% that the content is trusted and comes from the domain owner. Beyond that, the domain could potentially be held by a new owner, and since automated CAs don't check whois for the domain expiration date, there will be no revocation unless the new domain owner explicitly requests it. They might not even be aware of the existence of the old cert. A TLS certificate for a domain that has changed owners is of limited use to the old owner, as they would need to somehow capture or redirect traffic to their TLS server, and the (at worst) 60 days left on the certificate also help mitigate this issue. An SXG certificate, however, is worse, as the signed content can be distributed by anyone, which creates a security hazard: CDNs probably will not do any further validation on content signed by the original domain owner while the SXG signature is still valid.
I don’t believe we can state “100%”, unless we’re limiting the threat model
to ONLY this specific attack. Note also that the ICANN policies only apply
to a subset of TLDs (gTLDs), so it also does not provide a margin there.
However, it does seem as if the concern is a misalignment between the
lifetime of the assertion and the lifetime of the domain registration,
which does seem as if it is a new concern (compared to those raised earlier
on the thread). I am curious whether a reduction in the certificate
lifetime itself (e.g. from 90 days to 30) would be sufficient to mitigate
that concern, as an alternative to liveness checks.
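One way to frame that alternative: if the maximum assertion lifetime is L days and the redemption grace period is G days, the worst-case window during which a new registrant coexists with a still-valid old assertion is max(0, L − G). A toy sketch (the function name and numbers are illustrative; 90-day certificates give the 60-day overlap mentioned earlier in the thread, while 30-day certificates would eliminate it):

```javascript
// Worst case: the assertion was issued the day the registration lapsed,
// so it remains valid this many days into the new owner's tenure.
function staleWindowDays(lifetimeDays, gracePeriodDays = 30) {
  return Math.max(0, lifetimeDays - gracePeriodDays);
}
```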
Say that liveness checks as described in #376 are implemented and passing for a given SXG. It seems that the current document could be granted SecureContext.
Let's say that liveness checks are not passing.
It seems that the level of security is not as high, which would mean that SecureContext should not be granted. Such variation may actually break content, so it might be better not to render the SXG content to the user, and instead render the content fetched from the actual web site.
A consequence is that while the liveness checks can be done in parallel with processing of the SXG (subresource loading, parsing...), they should be validated before the first page rendering and any JavaScript execution.
For privacy/security purposes, even subresource loading should probably be postponed until these checks are done.
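The ordering in the last two paragraphs could be sketched as follows (a hedged sketch; `checkLiveness` and `parseSxg` are hypothetical names, not spec terms): the check is kicked off immediately, parsing proceeds in parallel, but rendering is gated on the check resolving.

```javascript
// Toy model of the proposed processing order: the liveness check runs
// concurrently with SXG parsing, but first render / JS execution waits
// for the check, falling back to the real origin if it fails.
async function loadSignedExchange(sxg, checkLiveness, parseSxg) {
  const livenessPromise = checkLiveness(sxg.origin); // fired immediately
  const doc = await parseSxg(sxg);                   // proceeds in parallel
  const alive = await livenessPromise;               // rendering waits here
  if (!alive) {
    // Do not render the SXG; fetch from the actual web site instead.
    return { render: false, fallbackTo: sxg.origin };
  }
  return { render: true, document: doc };
}
```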