how useful is private state token for k-anonymity abuse mitigation? #484

Open
npdoty opened this issue Mar 15, 2023 · 2 comments

Comments

npdoty commented Mar 15, 2023

k-anonymity here is intended, I believe, to provide the protection that an ad can't be microtargeted to an individual or very small group. That is, a company can't tag a customer by name/identifier and then send ads to them elsewhere of the style "Nick Doty, don't you want to buy this item that you had in your cart?" (The privacy protection would of course also apply to ads that don't reveal in their content that they are microtargeted, but that could still use that identifier to learn something about any user who clicks on the ad.)

https://github.com/WICG/turtledove/blob/main/FLEDGE_k_anonymity_server.md proposes using private state tokens (privacy pass tokens) where the blind signature confirms that each user has a Google account.
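For concreteness, here is a toy sketch of the gating described in that proposal. This is not the actual server design: `redeem_token`, the per-user pseudonym, and the threshold value are all stand-ins for the blind-signature machinery and the server's real counting logic.

```python
# Toy model of a token-gated k-anonymity count. The real server uses
# Private State Token issuance/redemption; here redeem_token() is a
# stand-in that just treats the token bytes as proof of an account.

from collections import defaultdict
from typing import Optional

K = 50                                            # hypothetical threshold

_joins: dict[str, set[str]] = defaultdict(set)    # set key -> pseudonyms counted

def redeem_token(token: bytes) -> Optional[str]:
    """Placeholder for Private State Token redemption.
    Returns a per-account pseudonym if the token is valid, else None."""
    return token.decode() if token else None

def record_join(set_key: str, token: bytes) -> None:
    """Count a join toward set_key only for requests carrying a valid token."""
    pseudonym = redeem_token(token)
    if pseudonym is not None:
        _joins[set_key].add(pseudonym)            # each account counts at most once

def is_over_threshold(set_key: str) -> bool:
    """The set becomes usable once at least K distinct accounts have joined."""
    return len(_joins[set_key]) >= K
```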

Does this provide much protection? If an attacker controls more than k accounts (50, for example, not a large burden), then they can remove the protection altogether for n users. I'm not sure what order of magnitude n is, but it seems to be much, much larger than 1, since it's at least the number of interest groups that any valid user can join.
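A back-of-envelope illustration of that attack, with an assumed k of 50 (the account names and counts here are obviously hypothetical):

```python
# An attacker who controls at least k accounts can push any set over the
# threshold on their own. k and the account count are assumed values.

K = 50
attacker_accounts = [f"attacker-{i}" for i in range(K)]

targeted_set = set()                     # accounts counted for one ad tuple
for account in attacker_accounts:
    targeted_set.add(account)            # one "join" per controlled account

print(len(targeted_set) >= K)            # True: threshold cleared with zero real users
# The same K accounts can be replayed for every tuple the attacker wants to
# unlock, so the marginal cost per targeted victim after this fixed setup is
# essentially zero.
```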

Does the protection require a minimum number of ads that won auctions and were shown to users? Or just a minimum number of users in the interest group before the interest group can be used in an auction? It seems like the latter, which makes the attack cheaper; the attacker doesn't have to pay to show k ads in order to display the ad to the targeted individual (an attack that will seemingly always be possible even if the k-anonymity guarantee is kept).

dmdabbs (Contributor) commented Mar 15, 2023

This bit provides more color re. the k-anonymity bar:

FLEDGE has the additional requirement that the tuple of the interest group owner, bidding script URL, and rendered creative must be k-anonymous for an ad to be shown

This lags the spec/proposal since the ad size is now part of the tuple IIRC.
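For concreteness, one way to picture that tuple as a k-anonymity set key (the hashing and field layout here are illustrative only, not the server's actual key format):

```python
# Illustration only: hashing the tuple that must be k-anonymous before an
# ad can be shown. The real key format used by the k-anonymity server may
# differ; the point is what goes into the tuple.

import hashlib

def k_anon_key(owner: str, bidding_script_url: str, render_url: str,
               ad_size: str) -> str:
    parts = [owner, bidding_script_url, render_url, ad_size]
    return hashlib.sha256("\n".join(parts).encode()).hexdigest()

key = k_anon_key(
    owner="https://dsp.example",
    bidding_script_url="https://dsp.example/bid.js",
    render_url="https://cdn.example/creative-123",
    ad_size="300x250",          # the newer addition to the tuple mentioned above
)
```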

michaelkleber (Collaborator) commented

Hi Nick,

k-anonymity here is intended, I believe, to provide the protection that an ad can't be microtargeted to an individual or very small group.

The motivation for k-anonymity is less focused on microtargeting and more on having some backstop during the interim phase where FLEDGE allows event-level reporting. The event-level report says "Ad X was shown on the page with URL Y", and if ad X is being shown to only a single user, then this is giving out one page of that user's browsing history. If ad X that's being shown to me is also genuinely being shown to at least 50 different people, then we're giving an observer much less information about what pages I personally visited. (The k-anon threshold is indeed about "ads that won auctions and are shown to users", btw.)
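A sketch of that gating, with invented function names (Chrome's real auction logic is more involved): because the threshold applies to the ad that would actually win and render, an under-threshold creative never produces an event-level "ad X shown on page Y" report at all.

```python
# Sketch: only over-threshold creatives are eligible to win and render,
# so event-level reports only ever describe ads seen by >= k users.

def run_auction(candidates, is_over_threshold):
    """candidates: list of (bid, k_anon_key, render_url) tuples."""
    eligible = [c for c in candidates if is_over_threshold(c[1])]
    if not eligible:
        return None                      # no ad shown, no report emitted
    winner = max(eligible, key=lambda c: c[0])
    return winner[2]                     # only an over-threshold ad can render

shown = run_auction(
    [(2.5, "under-threshold-key", "https://cdn.example/ad-a"),
     (1.0, "over-threshold-key",  "https://cdn.example/ad-b")],
    is_over_threshold=lambda key: key == "over-threshold-key",
)
# shown == "https://cdn.example/ad-b": the higher bid loses because its
# creative has not yet been seen by >= k users.
```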

You're completely correct that an attacker could circumvent this protection: they could readily create 100 accounts and show the ad to each of them to get it over the threshold. The DP noise we're implementing on the over-threshold value doesn't offer any protection from this attack. The k-anon here may be a speed bump but not a substantial barrier.
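A toy model of why the noise doesn't help here, assuming Laplace noise added to the count before comparing against k (the actual mechanism and parameters may differ):

```python
# Toy model: even with noise on the count, 100 attacker-controlled joins
# clear a threshold of 50 essentially every time. The Laplace noise and
# its scale are assumptions, not the server's real parameters.

import random

K = 50
attacker_joins = 100

def noisy_over_threshold(count: int, scale: float = 5.0) -> bool:
    # Laplace noise generated as the difference of two exponentials
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return count + noise >= K

passes = sum(noisy_over_threshold(attacker_joins) for _ in range(10_000))
print(passes / 10_000)   # ~1.0: the noise is calibrated for privacy, not abuse resistance
```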

Of course k-anonymity is an inherently flawed privacy tool, for many well-known reasons. Suppose we succeeded at making a single ad not pick out a single person. It seems very likely that, say, a set of 3 or 4 FLEDGE-targeted ads would act as a probably-unique fingerprint, even without any malicious ad tech involved.
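A rough order-of-magnitude calculation of that fingerprinting risk, under strong simplifying assumptions (independent, uniformly random 50-user audiences drawn from a million users; real audiences are correlated, so treat this only as a gesture at the effect):

```python
# Expected number of *other* users who share all of my ad audiences.

k, n_users, n_ads = 50, 1_000_000, 3     # assumed values

# Given that I am in an ad's k-user audience, the chance a specific other
# user is also in it is roughly (k - 1) / (n_users - 1).
p_share_one = (k - 1) / (n_users - 1)

expected_others_sharing_all = (n_users - 1) * p_share_one ** n_ads
print(expected_others_sharing_all)   # ~1e-7: the 3-ad combination is almost surely unique to me
```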

Fundamentally this is not about k-anon, but rather a demonstration that FLEDGE cannot deliver on our privacy goals until we remove event-level reporting.
