
The server hosting the page should be able to set Client Hint restrictions that apply to anything that runs on hosted pages #37

Open
AramZS opened this issue Jan 17, 2020 · 12 comments
Labels
enhancement New feature or request

Comments

@AramZS

AramZS commented Jan 17, 2020

I believe that server owners should be able to add extra restrictions, beyond the baseline, on what scripts on the page are able to access. This would allow them to define a level of privacy for their users in line with their interests, and it would let sites interested in increasing their users' privacy do so explicitly, potentially in a way the browser can signal to users. It could also become the basis of an additional plug-in to the proposals around the privacy budget.

Let's take an example in which I am the owner of a server hosting a privacy-conscious website. If I don't believe anything on my site needs access to the user's architecture via a Client Hint, I should be able to decline it in my server headers and also tell the browser that anything on the page requesting that information is doing so against my wishes, thus restricting access to the on-page object. If I want to run ads, but make this information unavailable to any ad that might seek to violate users' privacy outside my control, this seems an ideal mechanism.

I would think that the ideal mechanism for this would be for the browser to read Accept-CH and translate that policy to the navigator object. I see there is a discussion of potentially using Feature Policy for this, and that might be an acceptable (if potentially repetitive) approach, but it would mean Feature Policy would need equally fine-grained control over accepting or restricting specific Client Hints.

In that situation it would be especially useful if Feature Policy could set Client Hint allowances in detail and have them filter down to default settings for iframes, and to have that level of detailed control available on iframes themselves (so a site could allow UA at the page level but disallow it for a particular iframe).
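To make the idea concrete, here is a minimal sketch of the server-side opt-in/opt-out being proposed. This is hypothetical: the `privacyHeaders` helper is made up, and the feature names (`ch-ua-arch`, `ch-ua-model`) and `'none'` syntax are illustrative, following the Client Hints and Feature Policy drafts rather than any shipped API.

```javascript
// Hypothetical helper: build the response headers a privacy-conscious
// server might send. Header values are illustrative, not normative.
function privacyHeaders(acceptedHints, deniedFeatures) {
  return {
    // Opt in to only the hints this site actually needs.
    'Accept-CH': acceptedHints.join(', '),
    // Tell the browser that nothing on the page -- first- or
    // third-party -- may request the denied features.
    'Feature-Policy': deniedFeatures
      .map((feature) => `${feature} 'none'`)
      .join('; '),
  };
}

const headers = privacyHeaders(
  ['Sec-CH-UA', 'Sec-CH-UA-Mobile'],
  ['ch-ua-arch', 'ch-ua-model']
);
// headers['Feature-Policy'] is "ch-ua-arch 'none'; ch-ua-model 'none'"
```

The key point of the sketch is that the same deny-list would also gate the on-page `navigator` object, not just the request headers.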

@yoavweiss
Collaborator

> I believe that server owners should be able to add extra restrictions beyond the baseline on what scripts on the page are able to access [...]
>
> Let's take an example in which I am the owner of a server hosting a privacy-conscious website. If I don't believe anything on my site needs access to the user's architecture via a Client Hint, I should be able to decline it in my server headers [...]

This is interesting. We could use Feature Policy for that. It would be, in some ways, the inverse of how we used it for CH delegation (but somewhat in line with how it's used elsewhere).

> In that situation it would be especially useful if Feature Policy could set Client Hint allowances in detail and have them filter down to default settings for iframes, and to have that level of detailed control available on iframes themselves (so a site could allow UA at the page level but disallow it for a particular iframe).

Feature Policy allows for that.

Marking this as a "feature request", since this doesn't have to be a core part of UA-CH.
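As a rough illustration of the inheritance being discussed, here is a toy model (not the spec's actual algorithm) of how a page-level policy and a per-iframe `allow` attribute could combine: a frame only receives a hint if the page's policy allows it and the frame itself isn't denied. The function and data shapes are invented for this sketch.

```javascript
// Toy model of Feature Policy delegation: a frame gets a feature
// only if the page allows it AND the frame's allow attribute
// doesn't deny it. Feature names are illustrative.
function frameAllowed(feature, pagePolicy, frameAllowAttr) {
  const pageAllows = pagePolicy[feature] !== "'none'";
  const frameDenied = frameAllowAttr[feature] === "'none'";
  return pageAllows && !frameDenied;
}

// Page allows ch-ua everywhere...
const pagePolicy = { 'ch-ua': '*' };
// ...but this particular ad iframe is denied it.
const adFrame = { 'ch-ua': "'none'" };
const contentFrame = {};

frameAllowed('ch-ua', pagePolicy, adFrame);      // false
frameAllowed('ch-ua', pagePolicy, contentFrame); // true
```

This mirrors the "allow UA at the page level, disallow it for one iframe" case from the original request.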

@jonarnes

I'd say that it makes sense to have feature- and privacy-parity between the headers and the JavaScript API. Access to the information should be governed the same way, by Feature Policy.
Arguments for this solution include:

  • The same opt-in mechanism for the same information
  • Control over both passive and active fingerprinting

@amtunlimited
Contributor

> Let's take an example in which I am the owner of a server hosting a privacy-conscious website. If I don't believe anything on my site needs access to the user's architecture via a Client Hint, I should be able to decline it in my server headers [...]

In terms of headers, you can set a Feature Policy to 'none' to disallow use even by yourself. So, for example, if you send a response with the header Feature-Policy: ch-arch 'none', then no first- or third-party subresource will get the sec-ch-ua-arch header, even if they send an equivalent Accept-CH response.

I do agree though that these restrictions should have parity between JS and CH.

@yoavweiss
Collaborator

> I do agree though that these restrictions should have parity between JS and CH.

I suspect that barring UA information from 3P iframes without explicit 1P delegation would not add much from a privacy perspective (as that usage is accounted for), but it would result in more developer pain when we want 3P developers to adopt the new APIs.

@jonarnes

Maybe some "pain" will lead to more "educated" use of the new APIs. Looking at the history that brought the User-Agent string to its current state, the reckless use of poor regular expressions in server-side code could have been avoided if it had been clear that the User-Agent was part of a bigger policy system. Today, server-side code relying on the User-Agent has pretty much settled on scalable solutions, but it is not hard to find ugly JavaScript doing string matches on navigator.userAgent. I can easily see the same thing happening with getUserAgent(). Restricting usage of getUserAgent() with feature policies will not solve that problem, but it would definitely lead to more educated use, IMO.

@jonarnes

jonarnes commented Feb 7, 2020

> I suspect that barring UA information from 3P iframes without explicit 1P delegation would not add much from a privacy perspective (as that usage is accounted for)

I don't understand why it's better for privacy to have a bigger fingerprinting surface in JS APIs with fewer control mechanisms than a smaller surface with more mechanisms in HTTP headers.

Is the assumption that any party with access to JS APIs on a random page is a good player?

At least, could the explainer elaborate and be clearer about this section:

> Top-level sites a user visits frequently (or installs!) might get more granular data than cross-origin, nested sites, for example. We could conceivably even inject a permission prompt between the site's request and the Promise's resolution, if we decided that was a reasonable approach.

@jonarnes

jonarnes commented Jun 3, 2020

I'd like to give this issue a friendly bump.

Why is the JS API protected by neither Feature-Policy/Permissions-Policy nor any user permission, while hints transmitted through headers are gated by both an opt-in and Feature-Policy/Permissions-Policy?

If the intention is to ship Chrome with the JS API completely open (even to third parties!), I think it will undermine the initial motivation for this whole project: privacy. High-entropy information would suddenly be available to anyone...

As a minimum, I'd expect Feature-Policy/Permissions-Policy to explicitly delegate access to specific hints, plus a user permission dialog (like the one for the geolocation API) so users can be in charge of their own privacy.

(this might deserve its own issue, I think)

@yoavweiss
Collaborator

> I don't understand why it's better for privacy to have a bigger fingerprinting surface in JS APIs with fewer control mechanisms than a smaller surface with more mechanisms in HTTP headers.
>
> Is the assumption that any party with access to JS APIs on a random page is a good player?

The assumption is that the browser needs to be able to account for collection of entropy, regardless of its vector.

For JS APIs (e.g. the legacy navigator.userAgent, the new navigator.userAgentData, window.innerWidth, window.devicePixelRatio, and many others that can expose fingerprintable entropy), the browser needs to be able to:

  1. Know that they are collecting that information.
  2. Intervene if the pattern or quantity of collection looks like it could be used for persistent tracking.

From that perspective, the userAgentData API is no different from the others. And while it can make sense to give the top-level origin access to those entropy-laden APIs, it doesn't necessarily make sense to do that by default.

That situation is different from request headers, where we want to prevent passive content from being able to collect similar information.

> If the intention is to ship Chrome with the JS API completely open (even to third parties!), I think it will undermine the initial motivation for this whole project: privacy. High-entropy information would suddenly be available to anyone...

High entropy information is already available to everyone that's able to run active content. The goal of this project was to limit passive entropy from being sent to everyone by default. Clamping down on active entropy (e.g. through a Privacy Budget) would be a separate project.
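To illustrate the "active entropy is already available" point: any script that can execute can combine widely available JS surfaces into a fingerprint-like key. The sketch below uses an invented `entropyKey` helper over a navigator-like object so it can run anywhere; the specific values are made up, and in a real page the fields would come from `navigator` and `window` directly.

```javascript
// Illustration of active fingerprinting entropy: combine several
// long-available JS surfaces into a single key. The object shape
// mimics browser globals; the sample values are invented.
function entropyKey(env) {
  return [
    env.userAgent,          // navigator.userAgent
    env.platform,           // navigator.platform
    env.innerWidth,         // window.innerWidth
    env.innerHeight,        // window.innerHeight
    env.devicePixelRatio,   // window.devicePixelRatio
    env.hardwareConcurrency // navigator.hardwareConcurrency
  ].join('|');
}

const key = entropyKey({
  userAgent: 'Mozilla/5.0 (X11; Linux x86_64) ...',
  platform: 'Linux x86_64',
  innerWidth: 1920,
  innerHeight: 937,
  devicePixelRatio: 1,
  hardwareConcurrency: 8,
});
```

This is why the argument above treats accounting for collection patterns, rather than blocking any single API, as the meaningful defense.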

@jonarnes

jonarnes commented Jun 3, 2020

I do understand the challenge of aligning all the related projects in the team, but my concern still stands: why address passive fingerprinting by adding more active, un-gated entropy? It just seems strange. I think it's naive to assume the information provided by the new APIs will not be used in functionality that normally raises privacy concerns.
Even if there is a long-term idea of Privacy Budgets etc., I can easily imagine the community embracing the new API, making it difficult to gate at a later stage, and we'll have another "user-agent freeze" situation.

@amtunlimited
Contributor

> adding more active un-gated entropy

To be clear: all of the information from these APIs is currently available in most browsers' User-Agent string (which is available in JS and unrestricted) as well, with the expectation that information that isn't normally available will be left blank. There's no new entropy being revealed that wasn't freely available before.

@jonarnes

jonarnes commented Jun 4, 2020

Totally get the message, @amtunlimited, but that is only true until the user-agent freeze. So this just feels like a lost opportunity to improve privacy.

@abrahamjuliot

abrahamjuliot commented Jun 14, 2020

@amtunlimited

> There's no new entropy being revealed that wasn't freely available before

Every new feature that can be detected provides some level of new entropy, even if that entropy duplicates existing information; and with the userAgent freeze, yet another level of entropy is introduced by comparing the two surfaces:

```js
if (!navigator.userAgentData) {
  // user doesn't have this feature
} else {
  // detect whether userAgent is frozen by comparing
  // getHighEntropyValues() with navigator.platform/userAgent
}
```

I don't think there is any way of escaping that this introduces new and unique entropy. But why introduce it if the information is already available? Doesn't this provide a new comparison fingerprint, and isn't it unavoidable that some navigator.userAgent strings will match and some won't? For example, I'm on beta: userAgent returns 84.0.0.0, but uaFullVersion returns 84.0.4147.21. So if a script compares these, it can infer that my userAgent is frozen or spoofed.
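The comparison fingerprint described above can be sketched as follows. The `looksFrozen` helper, its regex, and the sample strings are all illustrative; a real script would pull the full version from `navigator.userAgentData.getHighEntropyValues(['uaFullVersion'])`.

```javascript
// Sketch of the comparison fingerprint: if the version baked into
// the UA string disagrees with uaFullVersion, the UA string was
// probably frozen or spoofed. Regex and values are illustrative.
function looksFrozen(userAgent, uaFullVersion) {
  const match = userAgent.match(/Chrome\/([\d.]+)/);
  if (!match) return false; // can't tell from this UA string
  return match[1] !== uaFullVersion;
}

// Frozen UA reports 84.0.0.0 while the hint carries the real version:
looksFrozen(
  'Mozilla/5.0 ... Chrome/84.0.0.0 Safari/537.36',
  '84.0.4147.21'
); // true
```

The mismatch itself is the new bit of entropy: whether a given browser's two surfaces agree is observable either way.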
