
Make Privacy Sandbox API unavailable for users that have opted out #335

Open · av-sherman opened this issue Aug 5, 2022 · 8 comments

@av-sherman

Chrome advises that ad tech check whether the relevant API is available before using it, following the approaches documented here.

However, if a user opts out of the Privacy Sandbox trials (e.g., by visiting chrome://settings/privacySandbox), the sandbox APIs still appear exposed to JavaScript.
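For concreteness, the detection Chrome's docs describe amounts to testing whether the API surface exists, roughly as in the sketch below; the problem raised here is that such checks still pass for opted-out users:

```js
// Sketch of the availability check Chrome's docs suggest. The issue:
// these properties remain defined even when the user has opted out via
// chrome://settings/privacySandbox, so the check still passes.
const fledgeAvailable =
  'joinAdInterestGroup' in navigator && 'runAdAuction' in navigator;

if (fledgeAvailable) {
  // Any FLEDGE-specific work started here may be wasted for an
  // opted-out user.
}
```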

This presents a challenge, particularly for FLEDGE, as significant resources and latency can be incurred when attempting to call FLEDGE's runAdAuction. For example, if sellers believe FLEDGE is eligible to run, they may do the following (sketched in code after this list):

  • Call out to buyers with this information, in order to fill perBuyerSignals
    • Buyers may, in turn, spend additional resources/compute in order to generate their signals
  • Run additional server-side FLEDGE-specific code (e.g., to generate an AuctionConfig in preparation for the FLEDGE auction)
  • Call the runAdAuction API, releasing the thread on the client and potentially incurring unbounded latency before rendering an available contextual ad (i.e., while waiting for runAdAuction to return, in competition with other tasks on the page)
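A rough sketch of that flow: the helper names (fetchPerBuyerSignals, renderContextualAd) are invented for illustration, and the AuctionConfig fields follow the FLEDGE explainer of the time; only navigator.runAdAuction is the real API.

```js
// Illustrative seller-side flow showing where resources and latency go.
async function trySellAdSlot(buyers) {
  // 1. Call out to buyers, who spend compute generating their signals.
  const perBuyerSignals = await fetchPerBuyerSignals(buyers); // hypothetical helper

  // 2. Run FLEDGE-specific code to assemble the AuctionConfig.
  const auctionConfig = {
    seller: 'https://seller.example',
    decisionLogicUrl: 'https://seller.example/decision-logic.js',
    interestGroupBuyers: buyers,
    perBuyerSignals,
  };

  // 3. Start the on-device auction; this can take unbounded time while
  // competing with other tasks on the page.
  const result = await navigator.runAdAuction(auctionConfig);

  if (result === null) {
    renderContextualAd(); // hypothetical fallback
    return;
  }
  // ...otherwise render the winning FLEDGE ad.
}
```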

In contrast, if access to an API is disabled via a permissions policy, callers can check whether the feature is blocked before calling the relevant API.
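With a permissions policy, the short-circuit looks roughly like this (assuming Chrome's document.featurePolicy surface and the run-ad-auction feature name):

```js
// If a permissions policy blocks the feature, callers can detect that
// up front and skip all FLEDGE-specific work.
if (document.featurePolicy &&
    !document.featurePolicy.allowsFeature('run-ad-auction')) {
  // The auction would be blocked; render a contextual ad directly.
}
```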

Ask:

  • Ideally, Chrome would provide a ‘user has opted out of sandbox APIs’ signal that ad tech and websites can use to decide whether to run any sandbox-specific setup or function calls
  • Alternatively, Chrome could simply make the API unavailable, as it is for users who are not participating in the origin trials, so that no calls against the API are possible
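To make the ask concrete: navigator.privacySandboxEnabled below is purely hypothetical, invented here to illustrate the first option. Under the second option, the feature-detection sketch above would simply fail for opted-out users because navigator.runAdAuction would be undefined.

```js
// Hypothetical signal -- no such property exists or has been proposed
// under this name.
if (navigator.privacySandboxEnabled === false) {
  // Skip all sandbox-specific setup and API calls.
}
```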
@michaelkleber
Collaborator

Unfortunately, we have seen many historical cases where web sites react to user choices like "turn off third-party cookies" by pressuring the user to change their settings, sometimes including blocking website access unless they do. So revealing the user's setting in the way you suggest turns out to be harmful to people's ability to choose and to browser privacy goals.

@dmarti
Contributor

dmarti commented Aug 5, 2022

Yes, and a detectable opt in/out would also add to the browser's fingerprinting surface.

@thegreatfatzby
Contributor

@michaelkleber sorry, very late to this thread, but I'd like to better understand why it isn't acceptable for the publisher to choose to block access to their content if a particular monetization path isn't available. If the publisher wants to make a choice to deny access to their content, and they lose all their viewers, they can course-correct and try a different monetization path (which may include a walled garden that has its own impact on consumers), but on the face of it that seems a valid choice for the publisher to ask the consumer to make w/r/t content they own.

@michaelkleber
Collaborator

Web sites are welcome to allow or deny access to their content, including charging for it. But the browser's job is to be the user's agent. So browsers often let the user make choices, and certainly should prefer those choices to be free from coercion.

@thegreatfatzby
Contributor

thegreatfatzby commented Jun 1, 2023

Thanks for the fast response!

I'm still trying to connect some dots here: when we say "user's agent" and "free from coercion", what do we mean?

  • A publisher defining what it wants as part of a transaction for content it pays to produce, whether it's money or data or nothing, doesn't seem on its own a threat or use of force.
  • The user's agent, the browser, giving the user choices and even detailing consequences of those makes sense, but what agency is the agent giving the user here?

I think when we say "harmful to people's ability to choose and to browser privacy goals" it sounds like we're implicitly saying "harmful to people's ability to choose and to browser privacy goals while keeping content available under certain terms".

If a publisher believes the monetization through FLEDGE isn't sufficient, is their only option to require a paywall on any page view from the Chrome browser?

Providing user configuration options with privacy-centric defaults, even aggressively privacy-centric defaults, would balance the interests and allow for an "evolving conversation" that users and publishers can have to determine the direction(s) people want to go; the dot I can't yet connect is how allowing that would be coercive.

@michaelkleber
Collaborator

This is not a specific choice about one API, it's a general way that browsers prefer to implement choices for the user.

When a user takes a browser action like "use incognito mode", the browser could of course have some signal of "This User Is In Incognito Mode", which broadcasts the user's choice to anyone who cares. Browsers have instead chosen to interpret the user's choice as a preference expressed to the browser but not revealed to every website they visit, as best as possible — so a user in incognito mode should look like a regular user who has just started using this browser and does not have any prior history or stored data. The browser is acting on the user's behalf here, and that generally should involve revealing user settings to websites only when necessary.

Specifically with regard to the Protected Audience API (now renamed from FLEDGE), many visitors to a website might well have zero Interest Groups stored on their browser. This could be because they have not visited many commercially-relevant websites recently, or because they have cleared some data, or because they are in incognito mode, or because they have chosen to turn off the API using a browser-provided control. There is no reason that the browser needs to reveal any of these situations to websites the user visits. Indeed, one goal of these new ads-related APIs is explicitly to not reveal ad-related information to the sites you visit, so the argument against doing so here is even stronger than in the general case.

@thegreatfatzby
Contributor

thegreatfatzby commented Jun 11, 2023

Had to meditate on this one quite a bit; the "Incognito Mode" point made me think a lot, as I certainly agree that having a signal for Incognito Mode would seem odd. I don't think I'd like a world where the publisher can say "add yourself to at least 4 Interest Groups to see this content!!!" It has caused my brain to segfault trying to put words to my reactions.

First, one point of clarity, when you say "one goal of these new ads-related APIs is explicitly to not reveal ad-related information to the sites you visit": I wasn't asking for more information to flow from embedded to embedder; I (and I think the original comment) was talking about a browser-level setting that the consumer would be able to use to express their preferences. I do see how exposing those settings (via an X-PS-Settings header or some such) would add to the fingerprinting surface of a request; I guess I'm not sure how much it would add, given there wouldn't be too many degrees of freedom, and it seems like fingerprinting-surface reduction is being worked on elsewhere. That said, you could allow the "conversation" to occur all in the browser by allowing the configurations, letting the publisher put in some meta tag that defines acceptable settings and blocks the content with some configurable message, and having the browser mediate.

(Also, as to why I'm pushing for "conversation" and what I mean: I think we're generally framing this as improving data protections, which is true but I think incomplete; it would be more precise, nerdy, grandiose, and fun to say that what we're actually doing is "a) improving society's Privacy-ContentQuality-Monetization-Competitiveness Pareto Curve, and b) choosing a point on it for people". I think (a), the work to push the curve out and enable more of each, is great and why I'm excited to work on this; I think if we choose (b), the point on the curve, we're making a mistake. Giving consumers and publishers tools to let the market choose (b) is more "pro-ecosystem", and will let the balance of power in the industry move in a more organic, less directed, way, which I think has value for all participants.)

Specifically on the Incognito Mode case, I think there's daylight between the two. In the case of Incognito Mode, the user makes a choice by clicking a button, which gives her user agent an instruction that has certain behaviors associated with it, different from the default behaviors. With Privacy Sandbox (PAAPI et al.) we're changing the default behaviors.

Finally, specifically for the original ask, which is about preserving resources when they aren't useful due to opt-out: could you maintain the API, but have the code inside Chromium just know to do nothing, so that it "silently saves resources"?

@michaelkleber
Collaborator

> Giving consumers and publishers tools to let the market choose (b) is more "pro-ecosystem", and will let the balance of power in the industry move in a more organic, less directed, way

What you are asking for sounds like moving the balance of power away from the person using the browser, which is pretty contrary to the goal of the browser as a user agent.

> Finally, specifically for the original ask, which is about preserving resources when they aren't useful due to opt-out: could you maintain the API, but have the code inside Chromium just know to do nothing, so that it "silently saves resources"?

Yes indeed — that is what we do when it doesn't cause an undesirable leak of private information.
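From the caller's side, that means a null auction result is indistinguishable across causes. A sketch (renderContextualAd is an invented helper):

```js
async function runAuctionWithFallback(auctionConfig) {
  // null here could mean: the user opted out, incognito mode, cleared
  // data, zero interest groups, or simply no winning bid. The browser
  // never reveals which, so the user's setting does not leak.
  const result = await navigator.runAdAuction(auctionConfig);
  if (result === null) {
    renderContextualAd(); // hypothetical fallback
    return;
  }
  // Otherwise render the winning ad (e.g., in a fenced frame).
}
```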
