
Address confusion regarding XRFrameOfReference types #396

Closed
NellWaliczek opened this issue Sep 14, 2018 · 3 comments

@NellWaliczek (Member)

A few weeks back, @toji and I discovered that we didn't have the same understanding of the definitions of the "eye-level", "stage", and "head-model" XRFrameOfReference types. We did a poll on one of the weekly calls, and it sounded like a handful of others had various interpretations as well.
This issue tracks the need to reach agreement on the definitions of these types. It also covers clarifying the explainer/spec text to reflect the expected behaviors on various devices, such as 3DOF/6DOF devices or those which might need to emulate the floor offset. It is also related to issue #389, filed by @Artyom17.

@NellWaliczek NellWaliczek added this to the TPAC 2018 milestone Sep 14, 2018
@RafaelCintron

Thank you for bringing this up, @NellWaliczek.

Reading through the current crop of frames of reference, I was confused by how they're partially defined in terms of each other.

For head-model, it says: An XRFrameOfReference with a frame of reference type of "head-model" describes a coordinate system identical to an eye-level frame of reference, but where the device is always located at the origin.

For eye-level, it says: Describes a coordinate system with an origin that corresponds to the first device pose acquired by the XRSession after the "head-model" frame of reference is created.

Does this mean that you need to make a head-model frame of reference before you make an eye-level frame of reference?
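For context, here is a minimal sketch of how these frames of reference were requested under the explainer of that era; the `requestFrameOfReference` and `getDevicePose` names appear in the 2018 drafts, but treat the exact signatures as assumptions:

```js
// Sketch based on the 2018-era WebXR explainer; exact signatures may differ.
// Each frame of reference is requested independently, which is why defining
// "eye-level" in terms of when "head-model" is created reads as circular.
const eyeLevel = await xrSession.requestFrameOfReference("eye-level");
const headModel = await xrSession.requestFrameOfReference("head-model");

// Poses are then queried against whichever frame of reference you hold:
xrSession.requestAnimationFrame((time, xrFrame) => {
  const pose = xrFrame.getDevicePose(eyeLevel);
});
```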

@NellWaliczek NellWaliczek self-assigned this Sep 18, 2018
@lincolnfrog commented Sep 18, 2018

As per the conversation at the F2F, maybe we can simplify this by splitting the role of frame of reference types into roughly three different concerns:

  1. Getting a view matrix
    Since the main use case of these seems to be generating a view matrix, we could just have a method XRSession.getViewMatrix(frameOfReferenceType, offsetTransform). The type parameter would likely not need to include "stage", since the stage bounds could be separated out and things like emulated height could be included in the offsetTransform (after having been queried separately - see below). We would then likely need just two options for frameOfReferenceType, if we make the "world" behavior (where tracking accuracy is best in the immediate vicinity of the headset) the standard and require ubiquitous anchoring of all virtual content (with emulated anchors for 3DOF or outside-in systems). The two options would basically be "head" and "world", where "head" has the translation zeroed out and "world" does not; a sketch of this shape follows the list. Note: we might need a third setting for whether a neck model should be used, since we agreed we want to avoid people hacking the view matrix post-hoc to remove translations.

  2. Stage bounds / emulated height
    Just move these to XRSession as well: XRSession.getStageBounds() and XRSession.getEmulatedHeight(). The emulated height could then optionally be passed into getViewMatrix() as an offset transform to differentiate between seated and standing modes.

  3. Feature detection
    In order to determine whether the user's system supports 3DOF/6DOF/etc., we would make that something you query independently on the session and/or request as part of the session.
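A rough sketch of the API shape this proposal implies; getViewMatrix(), getStageBounds(), and getEmulatedHeight() are the methods proposed above (not shipped API), and makeTranslationY() is a hypothetical helper:

```js
// Sketch of the proposed split; all names are the commenter's proposal.

// 1. View matrices come straight from the session:
const worldView = xrSession.getViewMatrix("world"); // full 6DOF pose
const headView = xrSession.getViewMatrix("head");   // translation zeroed out

// 2. Stage bounds and emulated height are queried separately...
const bounds = xrSession.getStageBounds();
const height = xrSession.getEmulatedHeight();

// ...and the height can be folded back in as an offset transform to get a
// standing-scale view on a device that only tracks at eye level.
// makeTranslationY() is a hypothetical helper producing a translation matrix.
const standingView = xrSession.getViewMatrix("world", makeTranslationY(height));
```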

@cwilso cwilso added the agenda Request discussion in the next telecon/FTF label Sep 27, 2018
@NellWaliczek NellWaliczek removed the agenda Request discussion in the next telecon/FTF label Oct 10, 2018
NellWaliczek added a commit that referenced this issue Oct 12, 2018 (#409)

Addresses most of the confusion and concerns discussed at the Sept '18 F2F regarding tracking systems, the purpose of frames of reference, and their relationship to coordinate systems. It also addresses issue #396, issue #389, issue #367, and issue #355, and supersedes PR #358. This change does not address issue #384 and issue #403, though it will impact the approach to solving them as well. It's also worth pointing out that, because we haven't officially agreed on whether or not XRAnchor should be part of the core of WebXR 1.0, there is relatively little reference to the concept in this new document as of yet.

The frame of reference types are now: `XRBoundedFrameOfReference`, `XRUnboundedFrameOfReference`, and `XRStationaryFrameOfReference`. The latter has three subtypes: `floor-level`, `eye-level`, and `position-disabled`. The unified rendering path is now supported even when an XR device is not present, by allowing `getDevicePose()` and `getInputPose()` to accept a null frame of reference.
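A sketch of what requesting the new types might look like; only the type and subtype names come from the PR text, and the request call and option shape are assumptions extrapolated from it:

```js
// Assumed request API; type/subtype names are from the PR description.
const stationary = await xrSession.requestFrameOfReference("stationary", {
  subtype: "floor-level" // or "eye-level", "position-disabled"
});
const bounded = await xrSession.requestFrameOfReference("bounded");
const unbounded = await xrSession.requestFrameOfReference("unbounded");

// Per the PR, the unified rendering path works without an XR device because
// getDevicePose() and getInputPose() now accept a null frame of reference:
const pose = xrFrame.getDevicePose(null);
```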
@NellWaliczek (Member, Author)

Fixed by #409
