The server hosting the page should be able to set Client Hint restrictions that apply to anything that runs on hosted pages #37
Comments
This is interesting. We could use Feature Policy for that. It would be, in some ways, the inverse of how we used it for CH delegation (but somewhat in line with how it's used elsewhere).
Feature Policy allows for that. Marking this as a "feature request", since this doesn't have to be a core part of UA-CH.
I'd say that it makes sense to have feature/privacy parity between the headers and the JavaScript API. Access to the information should be governed the same way, by Feature Policy.
In terms of headers, you can set a Feature Policy to "none" to disallow use even from your own origin, for example by sending a response whose Feature-Policy header denies the hint outright. I do agree, though, that these restrictions should have parity between JS and CH.
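As a sketch of that idea (the `ch-ua-arch` policy name is an assumption borrowed from the Chromium client-hint policy naming, not confirmed in this thread), a response could deny a hint even to the page's own origin:

```http
Feature-Policy: ch-ua-arch 'none'
```

With such a header, neither first-party nor embedded content would be allowed to use the Architecture hint on that page.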
I suspect that barring UA information from 3P iframes without explicit 1P delegation would not add much from a privacy perspective (as that usage is accounted for), but it will result in more developer pain when we want 3P developers to adopt the new APIs.
Maybe some "pain" will lead to more "educated" use of the new APIs. Looking at the history that made the User-Agent develop to its current state, the reckless use of poor regular expressions in server-side code could have been avoided if it had been made clear that the User-Agent was part of a bigger policy system. Today, server-side code relying on the User-Agent has pretty much settled on scalable solutions, but it is not hard to find ugly JavaScript code making use of string matches on `navigator.userAgent`.
I don't understand why it's better for privacy to have a bigger fingerprinting surface in JS APIs with fewer control mechanisms than a smaller surface with more mechanisms in HTTP headers...? Is the assumption that any party with access to JS APIs on a random page is a good player? At the least, can the explainer elaborate and be more clear about this section?
I'd like to give this issue a friendly bump. What is the reason that the JS API is protected neither by Feature-Policy/Permissions-Policy nor by any user permission, while hints transmitted through headers are gated both by opt-in and by Feature-Policy/Permissions-Policy? If the intention is to ship Chrome with the JS API completely open (even for third parties!), I think it will undermine the initial motivation for this whole project: privacy. High-entropy information is suddenly available to anyone... As a minimum, I'd expect Feature-Policy/Permissions-Policy to explicitly delegate access to specific hints, plus a user permission dialog (like we have for the geolocation API) so the user stays in charge of their own privacy. (This might deserve its own issue, I think.)
The assumption is that the browser needs to be able to account for collection of entropy, regardless of its vector. For JS APIs (e.g. the legacy `navigator.userAgent`), that collection is active and can be accounted for. That situation is different from request headers, where we want to prevent passive content from being able to collect similar information.
High-entropy information is already available to everyone that's able to run active content. The goal of this project was to limit passive entropy from being sent to everyone by default. Clamping down on active entropy (e.g. through a Privacy Budget) would be a separate project.
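The active/passive distinction above can be illustrated with a minimal sketch. `getHighEntropyValues()` is the promise-based accessor from the UA-CH proposal; the mock `navigator` object below is entirely hypothetical, standing in for a real browser so the snippet is self-contained:

```javascript
// Sketch of the "active entropy" point: any script that can run on the
// page can call the UA-CH JS API; no header opt-in gates the call.
async function collectActiveEntropy(nav) {
  // getHighEntropyValues() returns a promise for the requested hints.
  return nav.userAgentData.getHighEntropyValues(['architecture', 'uaFullVersion']);
}

// Hypothetical mock standing in for a real browser's navigator:
const mockNavigator = {
  userAgentData: {
    getHighEntropyValues: async (hints) => {
      const values = { architecture: 'x86', uaFullVersion: '84.0.4147.21' };
      return Object.fromEntries(hints.map((h) => [h, values[h]]));
    },
  },
};

collectActiveEntropy(mockNavigator).then((v) => console.log(v));
```

Passive entropy, by contrast, rides along on every request header whether or not any script runs, which is why the headers are the part gated by opt-in.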
I do understand the challenge of aligning all related projects in the team, but my concern still stands: why address passive fingerprinting by adding more active, ungated entropy? It just seems strange. I think it's naive to assume the information provided by the new APIs will not be used in functionality that normally raises privacy concerns.
To be clear: all of the information from these APIs is currently available in most browsers' User-Agent string (which is available in JS and unrestricted) as well, with the expectation that information that isn't normally available is left blank. There's no new entropy being revealed that wasn't freely available before.
Totally get the message @amtunlimited, but that is only true until the User-Agent freeze. So this just feels like a lost opportunity to improve privacy.
All new features that can be detected will provide some level of new entropy, even if that entropy duplicates existing information, and with the userAgent freeze there is another level of entropy introduced by comparing the two sources.
I don't think there is any way of escaping that this introduces new and unique entropy. But why introduce it if it is already available? Doesn't this provide a new comparison fingerprint, and isn't it unavoidable that some navigator.userAgent strings will match and some won't? For example, I'm on beta: userAgent returns 84.0.0.0, but uaFullVersion returns 84.0.4147.21. So, if a script compares these, it can infer that my userAgent is frozen or spoofed.
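The comparison described above can be sketched as a small check. The helper name and the regex are hypothetical, for illustration only; a real fingerprinter would be more thorough:

```javascript
// Hypothetical helper: flag a frozen/spoofed UA by comparing the version
// embedded in the userAgent string against the uaFullVersion hint.
function looksFrozenOrSpoofed(userAgent, uaFullVersion) {
  const match = userAgent.match(/Chrome\/([\d.]+)/);
  if (!match || !uaFullVersion) return false; // not enough signal to decide
  return match[1] !== uaFullVersion;
}

// The frozen-beta example from the comment above:
console.log(looksFrozenOrSpoofed(
  'Mozilla/5.0 (X11; Linux x86_64) Chrome/84.0.0.0 Safari/537.36',
  '84.0.4147.21'
)); // → true (the two versions disagree)
```

The point is that the mere existence of two version sources creates a comparison bit that didn't exist when there was only one.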
I believe that server owners should be able to add extra restrictions, beyond the baseline, on what scripts on the page are able to access. This would allow them to define a level of privacy for their users in line with their interests, and allow sites interested in increasing their users' privacy to do so explicitly, potentially in a way the browser can signal to users. It could also become the basis of an additional plug-in to the proposals around the Privacy Budget.
Let's take an example in which I am the owner of a server hosting a privacy-conscious website. If I don't believe anything on my site needs access to the user's Architecture via a Client Hint, I should be able not to accept it in my server headers, and also to tell the browser that anything on the page requesting that information is doing so against my wishes, thus restricting access to the on-page object. If I want to run ads, but make this information unavailable to any ad that might seek to violate users' privacy outside my control, this seems an ideal mechanism.
I would think that the ideal mechanism for this would be for the browser to read `Accept-CH` and translate that policy to the `navigator` object. I see there is a discussion of potentially using Feature Policy for this, and that might be an acceptable (if potentially repetitive) approach, but it would mean that it would need equal detail for being able to accept or restrict specific Client Hints. In that situation it would be especially useful if Feature Policy could set Client Hint allowances in detail and have them filter down to default settings for iframes, and to have that level of detailed control available on iframes themselves (so a page could allow UA at the page level, but disallow it for a particular iframe).
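That page-level-allow, frame-level-deny idea could look something like the following (the `ch-ua` policy name is an assumption; the `allow` attribute syntax is Feature Policy's existing mechanism):

```html
<!-- Page response header (hypothetical policy name):
     Feature-Policy: ch-ua 'self'
     ...allows UA hints for the page itself, then a specific
     third-party frame is denied them via the allow attribute: -->
<iframe src="https://ads.example" allow="ch-ua 'none'"></iframe>
```

This mirrors how Feature Policy already cascades from document headers down to iframe `allow` attributes for features like geolocation.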