Address site access to real-world geometry data #4
Thoughts and considerations from our team:
An additional privacy concern with world geometry data applies to AR headsets that continuously improve their map of world geometry during system use, so that the user does not have to manually rescan their environment each time they enter a new app or return to an existing one. On such devices, some geometry data available to the app will have been captured outside the current app session, perhaps in previous device sessions. Processed world geometry can thus create novel privacy concerns that go beyond those of raw camera data, as it can pull in historical perception of the environment:
From an ad-targeting perspective, historical world geometry data may also allow a page to:
Thanks Alex, that's a really good point. I can think of a couple of possible mitigations, WDYT?
I've actually got an unfinished blog post (I was delaying it till after we updated the APIs of WebXR Viewer to be more like the WebXR proposal) that discusses this. In the WebXR Viewer, we do two things by default:
The problem is that separating this historical data from the "new" data is very difficult. Even on ARKit/ARCore (which just expose planes right now), those planes get merged. So if I have a large, flat main floor in my house, it may eventually get merged into one large plane. It's not really feasible to expose only part of that to the viewer: the underlying system is constantly updating, and without knowing all the geometry of the space (e.g., walls I haven't scanned), I can't determine visibility.

I don't think we want to just use a distance cutoff (e.g., some arbitrary value like a few meters), because that might prevent seeing across a large room. If the far side of the room is already known to the platform, then when I "see it for the first time" in my web view it may not update or change, so I may have no way of knowing that I'm actually seeing it now.

It might be that when we have platforms like HoloLens and Magic Leap, which really do full 3D reconstruction, we can divide the space up into something akin to "rooms", and when you enter a room you get access to all the knowledge of that room. But that only really works if the space is unchanged and has been fully mapped. I don't have a good answer for this beyond that; I'll eventually publish the blog post, but I agree this is a big issue.
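As a thought experiment, here is a minimal sketch of what session-scoped filtering could look like, assuming (optimistically) that the platform stamped each plane with a last-updated time. All types and names are hypothetical, and the plane-merging problem above is exactly why real systems may not preserve such timestamps:

```ts
// Hypothetical sketch: expose only geometry observed during the current
// session. Assumes the platform stamps each plane with the time it was
// last updated. Merged planes defeat this, since a single update would
// re-expose the whole merged plane, historical parts included.

interface Plane {
  id: string;
  vertices: Float32Array;   // boundary polygon, xyz triples in world space
  lastUpdatedTime: number;  // platform timestamp in ms (assumed to exist)
}

function filterToSessionGeometry(
  planes: Plane[],
  sessionStartTime: number
): Plane[] {
  return planes.filter((p) => p.lastUpdatedTime >= sessionStartTime);
}
```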
User agents may also wish to limit the distance and location of hit tests. For example, a hit test could potentially expose whether the user has been in another part of a plaza, in the building across the street, or even on the floor below, if "historical" data is included in the result from the platform. (Limiting the distance could affect use cases, such as placing art on the buildings around a plaza.) The problem could be exacerbated if the application is able to access information about other locations, such as by passing a ray that doesn't originate from the user's current location.
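To make the distance-limiting idea concrete, here is a hedged sketch of a UA-side cap on hit-test results. The types and the 10-meter value are illustrative assumptions, not the WebXR hit-test API:

```ts
// Illustrative UA-side policy: drop hit-test results beyond a fixed
// distance from the viewer, so geometry far from the user (across the
// street, the floor below) is never surfaced to the page.

interface Vec3 { x: number; y: number; z: number; }
interface HitResult { point: Vec3; }

const MAX_HIT_DISTANCE_METERS = 10; // arbitrary policy knob

function distance(a: Vec3, b: Vec3): number {
  return Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);
}

function clampHitResults(
  viewerPosition: Vec3,
  results: HitResult[]
): HitResult[] {
  return results.filter(
    (r) => distance(viewerPosition, r.point) <= MAX_HIT_DISTANCE_METERS
  );
}
```

As noted above, any fixed cap trades off against legitimate long-range use cases like placing art on the buildings around a plaza.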
Excellent points.
I'm not sure how best to deal with this; it might be the kind of thing that makes it into "best practices" or "things to think about".
Right, among very many other things! It would be interesting to push some of the platforms to be able to expose "only the parts of the space that have been seen by the user while this app is running" (although they would likely see the browser as "an app", which would be awkward).
This should be pushed over to the hitTest discussion; we've been talking about expressing hit tests as relative to known coordinate systems (e.g., screen, head, controller, finger, hand, etc.). Perhaps this is an argument for only allowing the relative offset to be orientation, not position, forcing the hit test ray to originate from one of these coordinate frames.
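A rough sketch of the orientation-only idea, under the assumption that the UA resolves the ray origin from a tracked frame (head, controller, hand) and the page supplies only a direction. All names are illustrative, not a proposed IDL:

```ts
// The page controls only the direction; the origin is pinned by the UA
// to the current position of a tracked coordinate frame, so the page
// cannot probe geometry from an arbitrary point in space.

interface Vec3 { x: number; y: number; z: number; }
interface Ray { origin: Vec3; direction: Vec3; }

function makeConstrainedRay(
  framePosition: Vec3,     // resolved by the UA from a tracked frame
  requestedDirection: Vec3 // the only input the page controls
): Ray {
  const len = Math.hypot(
    requestedDirection.x,
    requestedDirection.y,
    requestedDirection.z
  );
  if (len === 0) throw new Error("direction must be non-zero");
  return {
    origin: framePosition,
    direction: {
      x: requestedDirection.x / len,
      y: requestedDirection.y / len,
      z: requestedDirection.z / len,
    },
  };
}
```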
A big threat vector, as noted earlier, is the ability to use world geometry data to do facial modeling/recognition of people in view of the HMD's sensors. This could allow a threat actor to track a user or their associates in a way similar to normal facial recognition from cameras. The tracking of movement could also allow for variations of gait tracking, as well as other nasty biometric tricks. This has serious implications beyond just the user wearing the headset, since it could also expose the people around them, allowing profiling of who a user associates with and who is near them at any given time. As mentioned earlier, combined with historical data this could enable some interesting profiling mechanisms. Perhaps a mechanism that allows things that look like faces/people to be sensed only with an explicit opt-in, and skewed/blurred to deny facial recognition data otherwise, would be feasible?
There was a comment over in #5 about this too. If we are giving camera images to apps, being able to remove or cover them will be critical; I don't think blurring/distorting will be sufficient, since enough frames will allow reconstruction and recognition. Regarding doing facial recognition from world geometry, this is another argument in favor of lower permission levels not getting the highest-fidelity data. If a user gives permission to have video frames sent to an app, then by all means send in high-fidelity geometry too. But if not, then perhaps we suggest sticking to the kind of detail HoloLens and other real-time SLAM systems create and provide right now. Even if a HoloLens picks up a stationary person's shape, the face ends up being only a few polygons.
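A minimal sketch of tying geometry fidelity to permission level. The naive vertex subsampling below is a crude stand-in for a real mesh-simplification algorithm, and every name here is hypothetical:

```ts
// Without camera permission, the UA hands the page a coarsened mesh so
// that a face only ever resolves to a few polygons, comparable to what
// HoloLens-class SLAM systems expose today.

type PermissionLevel = "camera" | "geometry-only";

interface Mesh { vertices: Float32Array; } // xyz triples (point-cloud style)

function geometryForPermission(mesh: Mesh, level: PermissionLevel): Mesh {
  if (level === "camera") return mesh; // camera granted: full fidelity

  // Keep roughly 1 in 16 points; illustrative decimation only.
  const kept: number[] = [];
  for (let i = 0; i + 2 < mesh.vertices.length; i += 3 * 16) {
    kept.push(mesh.vertices[i], mesh.vertices[i + 1], mesh.vertices[i + 2]);
  }
  return { vertices: Float32Array.from(kept) };
}
```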
Alright, fair enough. I'm more worried about the level of detail from the new Kinect v4 sensor, though; I don't have access to one to test.
Create the initial file for the explainer. This PR addresses two repo issues:
- Structure of the overall document (immersive-web#2)
- Accessing real-world geometry data (immersive-web#4)
I've updated the explainer to include the comments above; closing this issue. Please re-open (or create a new issue) if you have further ideas on this topic or want edits to the explainer.
An explainer should outline user privacy and security concerns (particularly threat vectors) when sites have access to real-world geometry, and should additionally explore approaches to mitigating those concerns.