Normatively mitigate privacy concerns related to poses outside presenting (e.g., magic window) #9

NellWaliczek opened this issue Sep 12, 2018 · 7 comments

Comments

@NellWaliczek

From @ddorwin on June 27, 2017 19:27

In general, WebVR presentation (exclusive session) requires a user gesture. However, magic window (non-exclusive) sessions do not currently require a user gesture. Providing pose data without requiring a user gesture or a clear indication that such data is being provided presents privacy concerns, especially for mobile clients where the user is always holding the device.

While external desktop HMDs may appear to pose less concern, there are potential issues for them as well, including:

  • Browsing in VR - VR user agents need to apply similar considerations if they choose to expose magic window in the in-headset browser.
  • There may be privacy concerns about access to high-frequency pose data even if the user is not wearing or touching the headset. See, for example, https://crbug.com/421691.

In addition, some future use cases/capabilities, such as Tango-style 6-DoF or "punchthrough" magic window in a VR browser, may enable the application to derive a lot more information, including data that might enable page-wide gaze tracking.

Since requiring a gesture for magic window would break a number of use cases, we need to consider other mitigations, require some, and allow user agents flexibility to implement others.

Examples include:

  • Follow the Generic Sensor API’s security and privacy considerations (#249)
  • Only expose WebVR on secure origins (#249)
  • Only provide poses to the focused frame. (Likely implied by #249.)
  • Only provide poses for frames that are same origin to the top level document.
  • Throttle frequency and/or provide reduced precision (e.g., a low-pass filter). A sketch of this approach follows the list.
    • This could be applied to:
      • All tracking-only/non-presenting/non-gesture instances
      • OR just instances in unfocused or non-same-origin frames, though this could create unexpected behavior for users.
  • Use permissions or other mechanisms to allow users to opt-out of tracking for individual sites or entirely.
    • Integration with the Permissions API should enable this option for user agents.
  • Use indicators or other UI to inform users that device tracking is in use.
  • Attempt to ensure that the use of pose data is legitimate.
    • For example, by ensuring it results in changes that are clearly visible to the user.
    • It may be difficult to ensure this since a malicious app could try to make subtle visual changes, but some ideas include:
      • Requiring that the UA only process frames for visible, non-obscured output areas of a specific size or percentage of the window.
      • Requiring that the frame changes with pose changes.
      • Throttling frequency and/or precision as appropriate.
    • Note: Such mitigations may require an explicit link between VRSession.requestFrame() and an output area, such as is proposed for magic window in #237.
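
To make the throttling/precision mitigation above concrete, here is a minimal TypeScript sketch of how a user agent might degrade pose data before handing it to a non-presenting consumer. The `Pose` shape, the thresholds, and the function names are illustrative assumptions only, not part of any spec or implementation.

```ts
// Illustrative only: how a user agent might quantize and rate-limit pose data
// before handing it to a non-presenting (magic window) consumer. The Pose shape,
// the thresholds, and the function names are hypothetical, not part of any spec.

interface Pose {
  position: [number, number, number];              // meters
  orientation: [number, number, number, number];   // unit quaternion (x, y, z, w)
  timestamp: number;                                // milliseconds
}

const MIN_INTERVAL_MS = 1000 / 30;  // throttle non-presenting poses to ~30 Hz
const POSITION_STEP_M = 0.01;       // quantize position to 1 cm
const ORIENTATION_STEP = 0.01;      // quantize quaternion components

let lastDeliveredTimestamp = -Infinity;

function quantize(value: number, step: number): number {
  return Math.round(value / step) * step;
}

// Returns a degraded pose, or null if this sample is dropped by the throttle.
function filterPoseForNonPresentingUse(pose: Pose): Pose | null {
  if (pose.timestamp - lastDeliveredTimestamp < MIN_INTERVAL_MS) {
    return null;
  }
  lastDeliveredTimestamp = pose.timestamp;
  return {
    position: [
      quantize(pose.position[0], POSITION_STEP_M),
      quantize(pose.position[1], POSITION_STEP_M),
      quantize(pose.position[2], POSITION_STEP_M),
    ],
    orientation: [
      quantize(pose.orientation[0], ORIENTATION_STEP),
      quantize(pose.orientation[1], ORIENTATION_STEP),
      quantize(pose.orientation[2], ORIENTATION_STEP),
      quantize(pose.orientation[3], ORIENTATION_STEP),
    ],
    timestamp: pose.timestamp,
  };
}
```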

We may also want to consider allowing the application to request specific ranges of accuracy. This would allow applications to ensure consistent resolution/frequency for all frames and for the user agent to make more intelligent decisions about whether to require permission, display indicators, etc. Similarly, it might make sense to require the page to request, though not necessarily be given, capabilities such as 6-DoF and "punchthrough."
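
As a purely hypothetical illustration of the "request specific ranges of accuracy" idea, such a declaration might look something like the following. None of these names exist in the WebXR Device API; the point is only that an explicit request lets the UA decide whether to prompt, throttle, or deny.

```ts
// Hypothetical request shape for the "declare the accuracy you need" idea above.
// Nothing here exists in the WebXR Device API.

interface PoseAccuracyRequest {
  maxFrequencyHz: number;        // highest pose rate the page actually needs
  positionPrecisionM?: number;   // coarsest acceptable position precision (6-DoF only)
  degreesOfFreedom: 3 | 6;       // orientation-only vs. full positional tracking
  punchthrough?: boolean;        // "punchthrough" capability must be asked for explicitly
}

// A 360 video viewer only needs orientation at a modest rate, so the UA might
// grant it without a prompt (possibly with an indicator).
const videoViewerRequest: PoseAccuracyRequest = {
  maxFrequencyHz: 30,
  degreesOfFreedom: 3,
};

// An AR measuring tool needs 6-DoF and fine position data, so the UA might
// require an explicit permission prompt before granting it.
const measuringToolRequest: PoseAccuracyRequest = {
  maxFrequencyHz: 60,
  positionPrecisionM: 0.001,
  degreesOfFreedom: 6,
};
```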

Copied from original issue: immersive-web/webxr#250

@NellWaliczek

From @toji on July 24, 2018 16:57

Further information about this topic in #77 and #217.

@NellWaliczek

Closing in favor of the work being done in the privacy-and-security repo. When the explainer from that repo is complete, there will be a task to address the findings cohesively.

@NellWaliczek

From @ddorwin on September 12, 2018 23:17

@NellWaliczek Do we have any reference to this issue or its contents in that repo? Maybe this should be in some list of deliverables? Otherwise, I worry that we'll lose some of the information in this and related issues.

Also, there should be at least one open issue to address privacy in WebXR Device API. Maybe we don't iterate on the text in that issue, but it is an issue that must be addressed for the spec to be complete.

@NellWaliczek

Ah, my mistake. I could have sworn I read a topic where folks were discussing 6DOF (and even 3DOF!) position data as needing to be addressed by that repo's explainer. But now I can't find where I read that. I wonder if it was on a call instead? Either way, I'll reopen this issue and migrate it to the privacy-and-security repo for management there.
And yes, good idea to open a general issue for addressing the findings from the privacy-and-security repo. I'll get that filed in just a minute.

@ddorwin commented Sep 13, 2018

The original post referenced frame focus in regards to providing poses. We may also want to consider whether frame focus is required to request the creation of certain types of sessions, for example in cases where user activation is required (i.e., immersive-web/hit-test#27). Can an unfocused frame that previously had user activation request an AR session or an immersive VR session?
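
A rough sketch of that gating question, assuming `document.hasFocus()` and the User Activation API as the signals a UA (or a defensive page) might consult; whether these are the right signals, and whether the answer should differ by session type, is exactly what needs to be decided.

```ts
// Should an unfocused frame, or one whose user activation has lapsed, be able to
// create an AR or immersive VR session? document.hasFocus() and
// navigator.userActivation do exist in browsers today, but using them as the
// gate here is purely illustrative.

type SessionMode = 'inline' | 'immersive-vr' | 'immersive-ar';

function mayRequestSession(mode: SessionMode): boolean {
  if (mode === 'inline') {
    // Magic-window-style sessions: the mitigations discussed in this issue
    // (throttling, focus checks, indicators) would apply instead of a gesture.
    return true;
  }
  // Immersive AR/VR: require the frame to be focused *and* to have a current
  // user activation, so a background frame cannot silently enter AR/VR.
  const focused = document.hasFocus();
  const activated = (navigator as any).userActivation?.isActive === true;
  return focused && activated;
}
```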

@blairmacintyre

> In general, WebVR presentation (exclusive session) requires a user gesture. However, magic window (non-exclusive) sessions do not currently require a user gesture. Providing pose data without requiring a user gesture or a clear indication that such data is being provided presents privacy concerns, especially for mobile clients where the user is always holding the device.

Having a user gesture be all that was required to start receiving WebVR data was always a massive privacy hole, since there was no way for the user to verify that the element they interacted with had anything to do with WebVR.

That said, my hope is that we will end up with a combination of UA-based user permission (via some mechanism in the UA that obtains informed consent from the user before sampling sensors and so on) and perhaps also some form of gesture in certain cases. For example, arriving at a page while already in AR/VR mode may not require a gesture or a permission prompt, assuming the UA already granted permission to follow the link "in AR/VR".
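
As one possible shape for that combination, the sketch below models consent with the Permissions API and the 'xr-spatial-tracking' permission name (not every UA exposes it to permissions.query), plus a click as the gesture before any pose sampling starts. It is illustrative, not a proposal.

```ts
// Consent (UA-mediated, modeled via the Permissions API) combined with a user
// gesture before starting any pose sampling. The permission name and the exact
// flow are assumptions for illustration.

async function startTrackingAfterConsentAndGesture(button: HTMLButtonElement) {
  let state: PermissionState = 'prompt';
  try {
    const status = await navigator.permissions.query({
      name: 'xr-spatial-tracking' as PermissionName,
    });
    state = status.state;
  } catch {
    // This UA does not expose the permission to query(); it can still prompt
    // at session-request time below.
  }
  if (state === 'denied') {
    return; // the user has opted out of tracking for this site
  }

  button.addEventListener('click', async () => {
    // The click supplies the gesture; the UA can layer its own prompt here.
    const session = await (navigator as any).xr?.requestSession('inline');
    // ...begin rendering via session.requestAnimationFrame(...)
  }, { once: true });
}
```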

> Since requiring a gesture for magic window would break a number of use cases, we need to consider other mitigations, require some, and allow user agents flexibility to implement others.

What use cases do you imagine here?

@ddorwin commented Sep 15, 2018

For VR presentation, you also have to put on the headset or, if you are already in the headset, you will be taken to an immersive experience that you can exit, which stops access to the data. User safety might be a bigger issue in the latter case.

immersive-web/webxr#394 might address some of this, especially related to magic window.

Use cases affected by requiring a gesture for magic window include the following (a sketch of the first case follows the list):

  • Frictionlessly looking around a 360 video or similar experience by just moving a mobile device (and even having that movement indicate that such navigation is possible).
    • For example, you're navigating around a video site and arrive at a new page. (This could be mitigated by preserving the gesture from previous pages.)
    • Or scrolling through an article and encountering such a video or experience.
  • Galleries. For example, a page with multiple videos or experiences to choose from. Today, they could each respond to movement.
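
For concreteness, the frictionless 360 case might look roughly like the sketch below under current WebXR naming (an inline session requested on page load with no gesture, plus a 'local' reference space requested as an optional feature). Whether a UA should allow this without a gesture, consent, or throttling is exactly the open question in this thread.

```ts
// Magic-window 360 viewing with no user gesture: the session is requested as
// soon as the page loads. WebXR naming is used here; the thread itself predates
// the finalized API, so treat this as an illustration of the use case only.

async function startMagicWindow360(applyOrientation: (q: DOMPointReadOnly) => void) {
  const xr = (navigator as any).xr;
  if (!xr || !(await xr.isSessionSupported('inline'))) return;

  // No user gesture involved: this runs on page load.
  const session = await xr.requestSession('inline', { optionalFeatures: ['local'] });
  const refSpace = await session.requestReferenceSpace('local');

  session.requestAnimationFrame(function onFrame(_time: number, frame: any) {
    const pose = frame.getViewerPose(refSpace);
    if (pose) {
      // Rotate the 360 video as the user moves the phone.
      applyOrientation(pose.transform.orientation);
    }
    session.requestAnimationFrame(onFrame);
  });
}
```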
