Gate Timestamps behind existing permission prompts #64

Closed
pes10k opened this issue Mar 10, 2019 · 27 comments
Labels
privacy-tracker Group bringing to attention of Privacy, or tracked by the Privacy Group but not needing response.

Comments

@pes10k

pes10k commented Mar 10, 2019

Following up from #20

The privacy risks associated with these high-res timestamps are documented in the draft, in some of the issues (e.g. #56), and in a long list of research papers (happy to provide citations, but they're already linked elsewhere in the repo).

The usefulness of the timestamps is also well documented in #56, and it seems there are cases where they would be very useful to web users.

However, those cases seem to be rare, not what users will encounter on the vast majority of pages. And the most compelling examples of the usefulness of these timestamps are already gated behind permission prompts (the Fullscreen API for games, USB inputs and WebVR for VR uses, etc.).

I suggest making these timestamps available only when the user has approved one of the following permissions (using the permissions in Chrome currently):

  • Fullscreen
  • MIDI
  • USB Devices

Other relevant permissions could be added to the list. But this seems like a way of blocking the privacy-violating uses of these timestamps (at least in the common case) while still making them available in the cases where they're likely to be useful.

@yoavweiss
Contributor

The often-cited paper, Fantastic Timers and Where to Find Them, suggests that performance.now() is far from being the only high-precision timer in town. If we look at Table 1 in that paper, we can see that timers are also available using SharedArrayBuffers, MessageChannels, postMessage, BroadcastChannel, setTimeout and finally CSS animations. I'm sure that list is not exhaustive.

While some of those are of lower resolution than performance.now() in some browsers or platforms, the paper also shows how coarse-grained timers with sharp edges can be used to interpolate higher-resolution ones (a sketch of the technique follows).
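
For concreteness, here is a minimal sketch of that clock-edge interpolation technique, assuming a coarse clock with a hypothetical 100 ms resolution; all names and values are invented for illustration, not taken from the paper:

```js
// Interpolating a finer duration out of a deliberately coarse clock by
// spinning a counter between tick edges.
const RES_MS = 100;                                       // assumed coarse resolution
const coarseNow = () => Math.floor(Date.now() / RES_MS);  // tick counter

// Calibration: how many spin-loop iterations fit into one coarse tick?
function incrementsPerTick() {
  const t = coarseNow();
  while (coarseNow() === t) {}           // align to a tick edge
  const edge = coarseNow();
  let count = 0;
  while (coarseNow() === edge) count++;  // spin across one full tick
  return count;
}

// Measurement: start at a tick edge, run the operation, then count how much
// of the final tick is left over to recover a sub-tick fraction.
function measure(operation, perTick) {
  const t = coarseNow();
  while (coarseNow() === t) {}           // align to a tick edge
  const start = coarseNow();
  operation();
  const end = coarseNow();
  let leftover = 0;
  while (coarseNow() === end) leftover++; // count out the rest of the tick
  return ((end - start) + (1 - leftover / perTick)) * RES_MS;
}
```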

Since blocking all those timers behind a user permission doesn't seem tenable, browsers have chosen a different path to tackle this issue:

  • Clamping/fuzzing timers - performance.now() specifically is clamped, and potentially fuzzed, in all browser implementations, to the extent each vendor considers safe for its users.
  • Site isolation is a major effort on that front, creating process barriers between cross-origin documents.
  • Out-of-renderer CORS implementation - performing CORS checks outside of the renderer enables browsers to prevent reading non-CORS-safe resources into the renderer.
  • Cross-Origin Read Blocking prevents resources which don't fit their destination from being read into the renderer process, avoiding no-CORS access to normally CORS-protected resources.
  • Cross-Origin Resource Policy is a way for sensitive resources to instruct the browser that they should not be loaded into other renderer processes.
  • Cross-Origin Opener Policy enables developers to tell the browser that opener access is not desired, so that browsers without site isolation can place such top-level sites in a different process.
  • And finally, there are discussions about creating a CORS-only mode or a "credential-less subresources" mode, where subresources loaded into the page are protected by default (the sketch after this list shows the shape such opt-ins later took).
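
As a hedged aside: the following opt-ins postdate this thread, but they show the rough shape that this "protected by default" direction eventually took in shipping browsers, as cross-origin isolation:

```js
// A document wanting sharper timers or SharedArrayBuffer serves:
//
//   Cross-Origin-Opener-Policy: same-origin
//   Cross-Origin-Embedder-Policy: require-corp
//
// and its cross-origin subresources must themselves opt in, e.g. with:
//
//   Cross-Origin-Resource-Policy: cross-origin
//
// Scripts can then feature-detect the isolated state:
if (self.crossOriginIsolated) {
  // Process-isolated from cross-origin content; UAs may grant finer timer
  // resolution and re-enable SharedArrayBuffer here.
  const buf = new SharedArrayBuffer(64);
}
```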

If you haven't yet, I suggest you read @creis' excellent document on Long-Term Web Browser Mitigations for Spectre.

With all the above said, browsers are free to clamp performance.now() values as they see fit, so one can imagine a browser which provides that timer with very coarse timing granularity, but increases its resolution given user permission for the features you mentioned, or other signals (e.g. for sites which are installed to homescreen by the user).
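
A minimal sketch of that idea, with the function name, trust signal, and resolution values all invented for illustration:

```js
// Hypothetical UA-side clamp: round the true timestamp down to the current
// resolution, relaxing it when the UA has a trust signal (a granted
// permission, installed-to-homescreen, etc.).
function clampedNow(trueNowMs, hasTrustSignal) {
  const resolutionMs = hasTrustSignal ? 0.005 : 100;  // illustrative values
  return Math.floor(trueNowMs / resolutionMs) * resolutionMs;
}

// clampedNow(1234.5678, false) === 1200      (very coarse by default)
// clampedNow(1234.5678, true)  ≈   1234.565  (finer once trusted, still clamped)
```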

Given the fast-moving pace of this area, I doubt any of those implementation decisions should be baked into the spec.

@pes10k
Author

pes10k commented Mar 11, 2019

@yoavweiss Thank you for your comments above. I was familiar with many of these efforts, but not all, so I'm grateful for the links. Thanks much!

(e.g. for sites which are installed to homescreen by the user).

This is an interesting idea too (although it could become its own fingerprinting target). First party vs. third party, etc., could also be useful signals.

With all the above said, browsers are free to clamp performance.now() values as they see fit, so one can imagine a browser which provides that timer with very coarse timing granularity, but increases its resolution given user permission for the features you mentioned, or other signals

Given the fast-moving pace of this area, I doubt any of those implementation decisions should be baked into the spec.

I take your point here, but I disagree. The current practice of proposing and adopting standards that include clear privacy risks, and then relying on browsers to nerf / muck with those standards, has been a catastrophe for browser privacy. It's bad for web compat, it bakes in developer expectations that this privacy-risky functionality will be around forever, and it sets a precedent for future standards.

We'll be in a much better place privacy-wise if standards address privacy directly in their normative parts, even if some vendors still go further than the standard. Saying "let's standardize the functionality, but things are moving too fast to deal with mitigations" seems likely only to increase the privacy headache down the road.

Would you / other authors be receptive to talking with PING about the kinds of signals / situations where performance.now() could be heavily clamped?

@noncombatant

noncombatant commented Mar 14, 2019

I've opened our PARTIAL, INCOMPLETE enumeration of precise-enough clocks: https://bugs.chromium.org/p/chromium/issues/detail?id=798795

The sheer number and variety of these clocks means to me that (a) we can never coarsen them all without breaking legitimate functionality and web compatibility; and (b) we can't create a permission prompt that would defend against attack while still being meaningful.

See also the Attenuating Clocks section of the Post-Spectre Threat Model Re-Think: https://chromium.googlesource.com/chromium/src/+/master/docs/security/side-channel-threat-model.md#attenuating-clocks

I think our goal going forward should be that an origin cannot time another without explicit opt-in from the timee (such as with the Timing-Allow-Origin header or a similar mechanism).
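
That opt-in already exists for Resource Timing; a short sketch of how it behaves (URLs illustrative):

```js
// Unless the cross-origin response carries an opt-in such as:
//
//   Timing-Allow-Origin: https://site.example
//
// the browser zeroes the detailed timestamps for that resource.
const [entry] = performance.getEntriesByName('https://cdn.example/app.js');
if (entry && entry.requestStart === 0 && entry.responseStart === 0) {
  // Timing-Allow-Origin check failed: only coarse startTime / duration
  // style information is exposed for this cross-origin fetch.
}
```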

@tomrittervg

(a) we can never coarsen them all without breaking legitimate functionality and web compatibility;

FWIW, when Firefox deployed our 2ms clamp (later reduced to 1ms + jitter), it applied to every explicit clock exposed by the web platform - not just performance.now. (If we missed one, it's a bug.)

Additionally, to try to hurt implicit timers, we integrated Fuzzyfox, but have not had occasion to try enabling it (or to do much performance/compatibility testing on it beyond some initial manual-QA efforts).

In general, I agree with you though: with arbitrary amplification, you can't coarsen timers enough to get to a point where you're defeating a medium-bandwidth attack while also maintaining web compatibility.
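
A simplified sketch of that "clamp + jitter" approach (this is not Firefox's exact algorithm, and `seededHash` is an assumed keyed PRF returning values in [0, 1)):

```js
// Each 1 ms window gets a secret, deterministic midpoint. Times before the
// midpoint report the window's start; times after it report the window's
// end. The reported clock stays monotonic, but an attacker can no longer
// locate the window edges precisely.
const RES_MS = 1;
function jitteredNow(trueNowMs, seededHash) {
  const w = Math.floor(trueNowMs / RES_MS);       // window index
  const midpoint = (w + seededHash(w)) * RES_MS;  // secret edge in [w, w+1)
  return trueNowMs < midpoint ? w * RES_MS : (w + 1) * RES_MS;
}
```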

@psnyde2

psnyde2 commented Mar 15, 2019

@tomrittervg
Thanks much for the added details!

with arbitrary amplification, you can't coarsen timers enough to get to a point where you're defeating a medium-bandwidth attack while also maintaining web compatibility.

What I take away from your message is that standardizing privacy-risky endpoints (in this case, extremely high-res timers), and then relying on non-standardized, vendor-dependent mitigations, is a losing strategy, or at least one with quickly declining marginal returns.

If I'm reading you right, that seems to suggest we should limit the number of such timers in the web API, and the situations where they're available, no? Which is my point / goal / suggestion with this issue. :)

@igrigorik
Member

[noncombatant]: The sheer number and variety of these clocks means to me that (a) we can never coarsen them all without breaking legitimate functionality and web compatibility; and (b) we can't create a permission prompt that would defend against attack while still being meaningful.

[tomrittervg] In general, I agree with you though: with arbitrary amplification, you can't coarsen timers enough to get to a point where you're defeating a medium-bandwidth attack while also maintaining web compatibility.

@psnyde2 I strongly agree with both points made by Chris. Yes, there are differences in the resolution that different platforms expose, and those will continue to evolve based on capabilities of each architecture, the underlying hardware, etc.

Per Chris's point, there isn't a meaningful defense or "access to time" prompt that we can present to the user. Based on this, speaking on behalf of Chrome, our position on this proposal is "no". I won't speak on @tomrittervg's behalf, but given that FF and Safari clock resolutions are the same as Date.now(), presenting a prompt is meaningless. Plus there is the web compatibility angle…

@pes10k
Author

pes10k commented Apr 22, 2019

@igrigorik I think there is a misunderstanding. The proposal is not to create additional prompts, but to gate these additional timing sources behind existing prompts, since those existing prompts map 1-to-1 to the stated use cases for the timing information. Again, there is no suggestion of additional "date" or "access to time" prompts.

Point taken that there are other troublesome timing sources present, but the goal here is to avoid adding additional technical / privacy debt that will need to be paid down later. When you realize you're in a privacy hole, the first thing to do is to stop digging.

@igrigorik
Member

@snyderp what existing prompts are you referring to?

@pes10k
Author

pes10k commented Apr 23, 2019 via email

@plehegar plehegar added the privacy-tracker Group bringing to attention of Privacy, or tracked by the Privacy Group but not needing response. label Apr 23, 2019
@rniwa

rniwa commented Apr 23, 2019

I don't think the proposed change makes much sense because:

  1. There are many 1ms timers on the Web, like Date.now, setTimeout, workers, etc...
  2. Ordinary people don't understand the implications of granting access to high-precision time measurements, let alone that fullscreen may result in higher-resolution time measurements.

@pes10k
Author

pes10k commented Apr 23, 2019

  1. Again, the goal here is to avoid adding additional timing sources that will need to be dealt with later. Previous standards / decisions / implementations have already created a mess of "privacy-related technical debt". Please don't add more.
  2. The goal is not to have an implicit way of asking users "do you want the page to use timers". The goal is to keep the additional timers out of the common case and limit them to cases where they'll be useful (i.e. the use cases motivating their addition).

@rniwa

rniwa commented Apr 23, 2019

Avoid adding additional timing sources that will need to be dealt with later

performance.now is already an integrated part of the web platform. It has been there long enough that removing it would break web compatibility. And in WebKit's implementation, for example, the precision of performance.now is no better than Date.now, so there is no added precision.
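
One can check this empirically; a small probe, with results varying by engine, platform, and isolation state:

```js
// Estimate the resolution a browser actually exposes for performance.now()
// by collecting the smallest observable step between distinct readings.
function probeResolution(samples = 100) {
  const deltas = [];
  let last = performance.now();
  while (deltas.length < samples) {
    const t = performance.now();
    if (t !== last) {
      deltas.push(t - last);
      last = t;
    }
  }
  return Math.min(...deltas); // smallest observed step, in ms
}
// At the time of this thread, roughly 1 ms in WebKit and Gecko, per above.
```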

The goal is not to have an implicit way of asking users "do you want the page to use timers".

Because users don't expect that using fullscreen on a given website may pose a new security or privacy threat, it would not be okay to have fullscreen grant any security- or privacy-sensitive API, even if performance.now were a real security surface.

@pes10k
Author

pes10k commented Apr 23, 2019

1a. We're discussing standardization here / what should exist in the platform / where things should move, not just codifying what exists in one or two implementations.

1b. WebKit and Chromium are not the only games in town…

1c. Standards also act as a signal for where the web should move in the future / what developers should target / etc.

  2. Users don't expect that visiting a website will give third-party code access to high-resolution timers either. The goal is to keep things as private as possible by default. I'm not sure I understand the argument here… Reduce privileges to what's likely to actually be needed for the task at hand, rather than granting everything all the time and letting the user sort it out.

@rniwa

rniwa commented Apr 23, 2019

What I'm saying is that restricting access to performance.now doesn't accomplish anything. If you're concerned about high-precision timing measurements, you'd need to reduce the precision everywhere. Allowing it while fullscreen is in use is not okay.

@pes10k
Author

pes10k commented Apr 23, 2019

I think we're at a disagreement, so I don't want to turn this into an argument. But to state the case one last time, and to try to avoid any misunderstanding, the motivations behind the proposal (and objections to the standard as is) are:

  1. Mirroring privacy attack surface that already exists is very, very bad, since it makes it difficult to clean things up going forward. Restricting access to performance.now has the concrete accomplishment / benefit of removing one more place where the web expects high-res timers to exist, and so prevents the fixing-privacy-without-breaking-sites problem from getting even worse.

  2. The attitude that "there are already similar / identical privacy risks on the web, so what's the harm in adding another" is one (of many) ways we got the privacy catastrophe that is the modern web; it can't continue if the web is going to get better.

  3. I believe the proposal's claim that there are useful purposes for high-res timers. It also seems clear that the web has a demonstrably terrible habit of offering functionality in the common case (i.e. to everyone) when it's really only needed in tiny niche cases. Gating timers behind related permission prompts seems like the correct way to square this circle: let apps that need them use them (after they've paid the price of a prompt) and keep common-case scripts / frames / etc. from accessing them.

  4. "Allowing it while fullscreen is used is not okay" does not make any sense to me; if the argument is that "full screen apps" / "vr apps" etc need high res timers to work well, then it seems like obvious to tie these together. If the standard's motivations are wrong, and these kinds of apps dont need this functionality, then why is it being proposed at all?

@igrigorik
Member

Peter, thanks for the feedback.

I would like to suggest that we step back from the "don't be cavalier" and invoking "privacy catastrophe" framing, as neither of those is helpful in the technical discussion where the current disagreement lies.

Per the discussion above, we (a) don't believe it's possible to eliminate, coarsen, or jitter all explicit and implicit clocks in the platform (note); (b) enforce Timing-Allow-Origin opt-ins on cross-origin resources, which control access to high-res timestamps for such resources; (c) don't believe there is a meaningful prompt that can be presented to the user to ask for access to a clock, and we disagree with the proposed strategy of gating access to a clock behind other permission prompts (it's unintuitive and unexpected for both the user and the developer, and it blocks valid use cases that may not otherwise require a prompt); and (d) note that browsers have already adopted various strategies to mitigate #56, ranging from coarsening clocks to the equivalent of Date.now() to platform-level re-architecture, e.g. site isolation.

As a result, we disagree both with the motivation for and proposed method of the proposal.

@noncombatant

I agree with @igrigorik, and will add: we'll also be able to get rid of a lot of cross-origin timing side-channels by getting the Open Web Platform to a point where clients don't send credentials in requests for a page's sub-resources unless the sub-resource origin has opted in to being called from the main page's origin (if different). One version of that idea is whatwg/html#4175, although one can imagine others. Like Site Isolation, it's a stronger defense than hacking on clocks, and less bad for the OWP's overall utility. That's the direction we should be going.

@pes10k
Author

pes10k commented Apr 25, 2019

I appreciate the above input, and I appreciate that my suggestion may not be the best way to address this. But I strongly think it's necessary to include normative mitigations or reduce the availability of the timers. It's just not a defense of any standard to say "the standard calls for X, but it's not a problem because browsers can just not do X" (i.e. Safari and Firefox, Date.now, etc.).

Are there other patterns / strategies that could be standardized (i.e. in normative text) that would keep the functionality out of the common path and restrict it to only the cases where it's likely needed? Even things like requiring a user gesture in the frame, making it unavailable to third-party frames, requiring aliasing with Date.now() in 3p frames, etc. Anything would be better for privacy (and future privacy + web compat work) than the current "available to everyone, all the time".
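
As a purely hypothetical sketch of the last of those suggestions, "aliasing with Date.now() in 3p frames" could look like this from the UA's side:

```js
// In a cross-origin frame, answer performance.now() with no more precision
// than Date.now() already provides (~1 ms); elsewhere, behave as usual.
// `isThirdPartyFrame` stands in for a check the UA would make internally.
function aliasedNow(isThirdPartyFrame) {
  return isThirdPartyFrame
    ? Date.now() - performance.timeOrigin  // coarse, >= 1 ms granularity
    : performance.now();                   // whatever the UA otherwise allows
}
```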

@noncombatant Re: whatwg/html#4175, it's a neat idea! But it wouldn't address a large number of the history-sniffing and related privacy concerns, plus trackers are likely to opt in anyway, no?

@igrigorik just to summarize the above in reply to your a-d notes

a) you might be right here, but reducing the number of clocks, especially high-res ones, makes it easier to deal with the existing ones.
b) this doesn't address the privacy concern. It allows well-behaved sites to protect themselves, but the privacy concern comes from the fact that users don't (and shouldn't) fully trust any website.
c) I appreciate that we disagree on this.
d) these are non-normative solutions. I.e., we know the standard introduces privacy risks, but others can deal with fixing it. It's a standards anti-pattern. The solution to a risk introduced by a standard should be in the standard (i.e. normative).

@pes10k
Author

pes10k commented Apr 25, 2019

TL;DR: I'm happy to close this issue, since it's tied to my specific proposal, and open another, broader issue describing the problem instead of a specific solution, if that'd be better for the org.

But the current pattern (not limited to this spec) of:

  1. Adding privacy-sensitive functionality (that has limited use cases)
  2. Making it available to all code / domains
  3. Motivating it as "it's no worse than what already exists", and
    4a. leaving it to implementors to deal with the privacy harm (non-normative mitigations) OR
    4b. saying future work will solve the current harm

is the anti-pattern that makes privacy improvements such a web-compat problem today (and came up as a pattern to avoid at the most recent AC meeting).

@yoavweiss
Contributor

@snyderp I suspect the source of misunderstanding is in the threat model you think high-resolution timers pose. With Spectre and Meltdown, timers can be used to read arbitrary process memory. That means that limiting access to them in third-party contexts or behind user interaction won't help much, as first-party sites pose a similar threat. This is also why browsers chose to protect against them by augmenting process boundaries (site isolation, CORP, COWP/COOP, CORB, CORP-P and probably other future things that start with CO).

reducing the number of clocks, especially high-res ones, makes it easier to deal with the existing ones.

How? How would barring access to explicit clocks help in mitigating the implicit ones?

this doesn't address the privacy concern. It allows well-behaved sites to protect themselves, but the privacy concern comes from the fact that users don't (and shouldn't) fully trust any website.

Can you elaborate on that? How would limiting one explicit timer help establish user trust? Again, I think this comes down to misunderstandings around the threat model.

d) these are non-normative solutions. I.e., we know the standard introduces privacy risks, but others can deal with fixing it. It's a standards anti-pattern. The solution to a risk introduced by a standard should be in the standard (i.e. normative).

To reiterate previous points - this standard does not introduce new risks, as there are plenty of timers in the platform. You keep repeating that we should block access to this specific one, but we'd need a bit more reasoning as to how that would help users and their privacy and security.

TL;DR: I'm happy to close this issue, since it's tied to my specific proposal, and open another, broader issue describing the problem instead of a specific solution

I think it will be helpful if you could clearly point out the threat model that you think timers pose in general, and this one in particular. From my perspective, you can keep it in this issue.

is the anti-pattern that makes privacy improvements such a web-compat problem today

Can you point out specific examples where A exposed information, B followed it, we were then able to get rid of the information exposure in A, but not able to get rid of it in B?

came up as a pattern to avoid at the most recent AC meeting

I'd appreciate it if you could point me to the minutes.

@tomrittervg

I'm told that the current thinking of the committee is to add language that permits UAs to gate access but does not require them to. Implementers are going to do whatever they need to do to protect users. If we want to add something to the spec, I would suggest something broadly worded, to encompass as many as possible of the actions that have been taken, including:

  • Resolution reduction
  • Jitter (Fuzzyfox on a single time source)
  • Detecting too many calls to the API and thereafter throttling calls by enabling resolution reduction/jitter (see the sketch after this list)
  • Full on Fuzzyfox (on all browser timing sources)
  • Disabling mitigations after certain conditions are met (such as those required for SAB (re-)enablement, or other somewhat-related permissions like this issue proposed)
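
A sketch of the third item above, with names and thresholds invented for illustration:

```js
// UA-side wrapper that counts clock reads and degrades resolution once a
// caller exceeds a budget within a time window.
function makeThrottledClock(rawNow, budget = 10000, windowMs = 1000, coarseMs = 100) {
  let calls = 0;
  let windowStart = rawNow();
  let throttled = false;
  return function now() {
    const t = rawNow();
    if (t - windowStart > windowMs) { calls = 0; windowStart = t; }
    if (++calls > budget) throttled = true;  // tripwire: suspiciously many reads
    return throttled ? Math.floor(t / coarseMs) * coarseMs : t;
  };
}

// const now = makeThrottledClock(() => performance.now());
```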

@pes10k
Author

pes10k commented Apr 26, 2019

@yoavweiss

@snyderp I suspect the source of misunderstanding is in the threat model you think high-resolution timers pose. With Spectre and Meltdown, timers can be used to read arbitrary process memory. That means that limiting access to them in third-party contexts or behind user interaction won't help much, as first-party sites pose a similar threat. This is also why browsers chose to protect against them by augmenting process boundaries (site isolation, CORP, COWP/COOP, CORB, CORP-P and probably other future things that start with CO).

For attacks, I imagine we're thinking of 100% the same kinds of attacks (though, narrowly for this concern, I'm not considering things like Spectre etc.). So, attacks like this:
https://www.usenix.org/system/files/conference/usenixsecurity17/sec17-kohlbrenner.pdf

Environment learning attacks like
https://www.usenix.org/system/files/conference/woot14/woot14-ho.pdf

The related work / threat models discussed in the Fantastic Timers paper and the one below:
https://arxiv.org/pdf/1708.06774

etc etc etc…

How? How would barring access to explicit clocks help in mitigating the implicit ones?

Can you elaborate on that? How would limiting one explicit timer help establish user trust? Again, I think this comes down to misunderstandings around the threat model.

To reiterate previous points - this standard does not introduce new risks, as there are plenty of timers in the platform. You keep repeating that we should block access to this specific one, but we'd need a bit more reasoning as to how that would help users and their privacy and security.

This is the webcompat issue being discussed. If we break benign code paths that depend on implicit timers, that's breakage that's likely a-ok; that's authors relying on unmade promises. If the platform makes promises saying "it's okay to rely on high-resolution time", and that then has to get coarsened / removed later on and breaks high-res-dependent benign code, that's a real problem.

So, again, not promising / building APIs that provide high-res timers preserves web-compat privacy paths going forward. It preserves possibility space, whereas further committing to high-res timers removes that possibility space.

Similarly, there are practical differences between explicit and implicit timing sources. If a vendor, for example, sees a page doing strange / suspicious things to build timing signals, that's a signal to impose new interventions (or even force-close a worker / frame). If, though, the platform says "it is okay to rely on high-res timers" through explicit sources, that makes it much more difficult for us to protect users.

E.g. browsers implementing the normative sections of this standard makes it much more difficult to get to a Fuzzyfox-like privacy-preserving web without breaking existing code that has been built to rely on the promises in the proposal.

Can you point out specific examples where A exposed information, B followed it, we were then able to get rid of the information exposure in A, but not able to get rid of it in B?

I'm sorry but I'm having difficulty parsing this. Not trying to be difficult, but can you rephrase?

I'd appreciate it if you could point me to the minutes.

https://www.w3.org/2019/04/09-ac-minutes.html#item04 (if you do a search for my name or Chaals or normative you can get to the stubs of the conversation during the panel discussion)

The meta-point in this issue is that this functionality seems likely to be user-serving on only a very, very small number of websites (e.g. in previous research I was involved in, "Most Websites Don't Need to Vibrate", our testers couldn't find a single site that needed the Vibration API for user-serving purposes).

As high-res timers (implicit and explicit) have well-known privacy issues, it would be good to explore ways of making these timers less available (implicit and explicit), and to figure out ways to limit their use to when they'll be used for things users care about (instead of turning users into debugging / feedback tools for websites, etc.).

@pes10k
Author

pes10k commented Apr 26, 2019

@tomrittervg

Thank you for this! I think many of these are very promising ideas, but, tbh, adding them in non-normative sections of the document doesn't help the situation; it just leads to web compatibility problems.

If there is a floor (not a ceiling) of mitigation that the group thinks would be useful, then the best course of action is to add it as normative text. If there aren't agreed-upon, successful / practical mitigations, then that's a reason to keep working on the standard until those mitigations exist (or the functionality requiring mitigation no longer exists).

Put differently (sincerely, w/o snark), what is gained by standardizing high resolution timers, if a suggested mitigation is "resolution reduction / aliasing with low resolution timers"?

@yoavweiss
Contributor

For attacks, I imagine we're thinking of 100% the same kinds of attacks (though, narrowly for this concern, I'm not considering things like Spectre etc.). So, attacks like this:
https://www.usenix.org/system/files/conference/usenixsecurity17/sec17-kohlbrenner.pdf

Environment learning attacks like
https://www.usenix.org/system/files/conference/woot14/woot14-ho.pdf

The related work / threat models discussed in the Fantastic Timers paper and the one below:
https://arxiv.org/pdf/1708.06774

OK, thanks. I ask because you suggested that permissions or user gestures may be helpful in preventing attacks here, as well as limiting the API in third-party contexts.
Reading through the attacks, all seem relevant when the user is browsing a malicious first-party site, enabling that site to exfiltrate security- and privacy-related data.

  • Resolution reduction
  • Jitter (Fuzzyfox on a single time source)
  • Detecting too many calls to the API and thereafter throttling calls by enabling resolution reduction/jitter
  • Full on Fuzzyfox (on all browser timing sources)
  • Disabling mitigations after certain conditions are met (such as those required for SAB (re-)enablement, or other somewhat-related permissions like this issue proposed)

@tomrittervg - That's a great list of mitigations. I'm happy to add those as part of the Security & Privacy section.

If there is a floor (not a ceiling) of mitigation that the group thinks would be useful, then the best course of action is to add it as normative text. If there aren't agreed-upon, successful / practical mitigations, then that's a reason to keep working on the standard until those mitigations exist (or the functionality requiring mitigation no longer exists).

As we've seen in the past, today's mitigations could be insufficient tomorrow. Similarly, some of today's mitigations may become unnecessary as browsers evolve (e.g. out-of-process iframes) or as content opts in to restricting its own access to third-party resources. In the past, as a group, we were reluctant to set temporary mitigations in stone as part of the normative spec language. This has not changed AFAIK.

Put differently (sincerely, w/o snark), what is gained by standardizing high resolution timers, if a suggested mitigation is "resolution reduction / aliasing with low resolution timers"?

  • Monotonically increasing timers
  • Accessible from workers
  • Relative to a well-known time origin
  • Even if mitigated to the same level as today's low-resolution timers, they will be able to be augmented as soon as those mitigations are no longer necessary (see the sketch below).
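
A short sketch of why those properties matter even at coarse resolution:

```js
// Date.now() can jump backwards (NTP sync, manual clock changes);
// performance.now() never does, and is also available inside workers.
function doWork() { for (let i = 0; i < 1e6; i++) {} }  // placeholder workload

const t0 = performance.now();
doWork();
const elapsedMs = performance.now() - t0;  // never negative, even if the wall
                                           // clock is adjusted mid-measurement

// An absolute timestamp can still be recovered against the well-known origin:
const absoluteMs = performance.timeOrigin + t0;
```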

@pes10k
Author

pes10k commented May 6, 2019

OK, thanks. I ask because you suggested that permissions or user gestures may be helpful in preventing attacks here, as well as limiting the API in third-party contexts.
Reading through the attacks, all seem relevant when the user is browsing a malicious first-party site, enabling that site to exfiltrate security- and privacy-related data.

I may misunderstand, but all the attacks above seem equally executable in a 3p frame as in the 1p. Which ones seemed like they'd require being the 1p / top level frame?

As we've seen in the past, today's mitigations could be insufficient tomorrow…

Even if mitigated to the same level as today's low-resolution timers, they will be able to be augmented as soon as those mitigations are no longer necessary.

Would it suffice, then, to modify the standard to be low-resolution now, and to revise it later, once a cross-browser / implementation-independent / normative-text solution was in place to mitigate the privacy concern?

@yoavweiss
Contributor

I may misunderstand, but all the attacks above seem equally executable in a 3p frame as in the 1p.

This is exactly what I was saying. Earlier you suggested we may want to limit access to this API in 3P contexts (e.g. "requiring aliasing with Date.now() in 3p frames"). I was trying to understand why you thought that made sense. I guess we agree that it doesn't.

Would it suffice, then, to modify the standard to be low-resolution now, and to revise it later, once a cross-browser / implementation-independent / normative-text solution was in place to mitigate the privacy concern?

So far we've seen zero interest in doing that on this thread from all three independent browser engine vendors. The spec does not specify a particular resolution, and implementations are required to make the resolution as fine as possible while not fine enough to attack their users. Where that line is drawn will vary based on browser architecture, CPU architecture, the current attack landscape, and probably other factors.

I understand that is not the outcome you were hoping for. Nevertheless, that's the decision the group has reached. I'm therefore closing this issue, and the related #20.

@change007chan

change007chan commented Jun 29, 2019

#204
