WebXRCamera's projection matrix is incorrect #8944

Closed
ryantrem opened this issue Sep 11, 2020 · 6 comments

ryantrem (Contributor) commented Sep 11, 2020

WebXRCamera has a set of "rig cameras" that represent the views/eyes of the XR device. The world and projection matrices of those views are copied over to the rig cameras, but no projection matrix is assigned to the WebXRCamera itself. Since the WebXRCamera is the active scene camera, and many APIs use the active scene camera by default, some of those APIs (that depend on the projection matrix) don't work correctly. For example, all the picking related APIs (scene.pick, scene.createPickingRay, etc.) produce unexpected results. If you explicitly pass in a rig camera to those APIs, they work as expected since the correct projection matrix is then used, but I think the behavior should be correct with the default camera when the active camera is the WebXRCamera.

I created a Playground example where tapping on the screen (for a mobile device) uses scene.createPickingRay at screen coordinate 0,0 and places a box along the ray. It should show up in the upper left corner of the display. This works correctly when the rig camera is explicitly passed in to createPickingRay, but does not if the default (WebXRCamera) is used.
With WebXRCamera: https://playground.babylonjs.com/#AC8XPN#25
With WebXRCamera.rigCameras[0]: https://playground.babylonjs.com/#AC8XPN#28
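
For reference, a condensed sketch of what the playground does (hypothetical code, not the exact playground source; it assumes an xrCamera obtained from the default XR experience helper):

```javascript
// Hypothetical condensed repro: on tap, cast a picking ray at screen
// coordinate (0, 0) and drop a small box 2 units along it.
scene.onPointerDown = function () {
    // Passing null uses the active camera (the WebXRCamera) and gives a wrong ray;
    // passing xrCamera.rigCameras[0] uses the XR-provided projection and works.
    var ray = scene.createPickingRay(0, 0, BABYLON.Matrix.Identity(), null /* or xrCamera.rigCameras[0] */);
    var box = BABYLON.MeshBuilder.CreateBox("box", { size: 0.1 }, scene);
    box.position = ray.origin.add(ray.direction.scale(2));
};
```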

deltakosh added this to the 4.2 milestone Sep 11, 2020
RaananW (Member) commented Sep 14, 2020

The projection matrix we are using is the one provided by the XR host itself.

As the main camera (the parent of both rig cameras) has no projection matrix defined, we are calculating it on our own. I would assume that, due to incorrectly set parameters (fov?), our calculation of the projection matrix is wrong.

The simplest solution (one that should work out of the box) is to set the main camera's projection matrix to be the first eye's projection matrix. This won't work in split-screen emulation, but it should work in an immersive session. I will submit a PR; waiting for your feedback.
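
A minimal user-land sketch of that idea (my own approximation, not the actual PR; it assumes xrCamera is the WebXRCamera from the default XR experience helper):

```javascript
// Sketch only: mirror the first eye's XR-provided projection matrix onto the
// parent WebXRCamera each frame, so APIs that default to the active camera
// (scene.pick, scene.createPickingRay, ...) use the correct projection.
scene.onBeforeRenderObservable.add(function () {
    if (xrCamera.rigCameras.length > 0) {
        var eyeProjection = xrCamera.rigCameras[0].getProjectionMatrix();
        // freezeProjectionMatrix overrides the camera's own projection calculation.
        xrCamera.freezeProjectionMatrix(eyeProjection.clone());
    }
});
```

As noted above, mirroring only one eye matches the immersive case; split-screen emulation would still show a mismatch for the second eye.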

ryantrem (Contributor, Author) commented:

I think the solution you submitted in your PR should work for us. I'm not really sure what you mean above by this:

As the main camera (the parent of both rig cameras) has no projection matrix defined, we are calculating it on our own. I would assume that, due to incorrectly set parameters (fov?), our calculation of the projection matrix is wrong.

Each XR view provides an FOV, and I think these values should be correct; they seem to be set correctly on the rig cameras. What do you mean when you say "we are calculating it on our own"?

RaananW (Member) commented Sep 14, 2020

What I mean is that even though the two eyes get their projection matrices straight from the XR view pose (and we use those for both eyes), the main camera does not. The main camera (the WebXRCamera class) has its projection matrix calculated by the framework itself, using the default getProjectionMatrix path and the camera's parameters, rather than taking it from XR.
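
To illustrate the two code paths being described (hypothetical snippet; xrView stands for a WebXR XRView from the viewer pose, and the actual framework internals differ):

```javascript
// The rig cameras take their projection straight from WebXR:
var eyeProjection = BABYLON.Matrix.FromArray(xrView.projectionMatrix);
// The parent WebXRCamera falls back to Babylon's default perspective math,
// built from fov/minZ/maxZ and the aspect ratio:
var parentProjection = BABYLON.Matrix.PerspectiveFovLH(
    xrCamera.fov, engine.getAspectRatio(xrCamera), xrCamera.minZ, xrCamera.maxZ);
// The two generally differ (XR frustums are usually asymmetric), which is why
// picking against the parent camera gave wrong results.
```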

ryantrem (Contributor, Author) commented:

I see - so might it be better to make WebXRCamera return fov, minZ, and maxZ according to the values in the XRView(s), so that the correct projection matrix would be calculated? I guess if these are public properties of WebXRCamera and we return wrong values for them, that's not great in case something else uses them.

RaananW (Member) commented Sep 15, 2020

Those values exist neither on XRView nor on XRViewerPose.

We ask for a specific base layer with those parameters when initializing the scene, so the values do pass correctly to XR. The purpose of my PR was partly to unblock you before investigating further.
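
For context, this is roughly how such parameters reach XR at initialization time in the WebXR API itself (plain WebXR rather than Babylon internals; xrSession, gl, and xrCamera are assumed to exist):

```javascript
// Near/far are handed to XR through the session's render state, together with
// the base layer; from that point on, XR owns the per-view projection matrices.
xrSession.updateRenderState({
    baseLayer: new XRWebGLLayer(xrSession, gl),
    depthNear: xrCamera.minZ,
    depthFar: xrCamera.maxZ,
});
```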

As it is not recommended to change FOV in XR (and the value cannot be changed after the scene has been initialized), the recommendation has always been not to use these values directly and instead to use the information provided directly by XR (namely the projection matrix). This hasn't changed even after the PR was merged - in certain cases, and due to limitations not set by us, these values can be incorrect, especially if actively changed by the user.

I guess we could decompose the PM (should be mathematically possible, right?), but I don't see the reason behind it ATM.
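
For completeness, a sketch of what such a decomposition could look like for a column-major, WebGL-style perspective matrix with clip z in [-1, 1] (my own approximation; XR frustums are often asymmetric, so the recovered fov would only be approximate):

```javascript
// Recover an approximate vertical fov and near/far planes from a column-major
// perspective projection matrix (as provided by XRView.projectionMatrix).
function decomposePerspective(m) {
    var fovY = 2 * Math.atan(1 / m[5]);  // m[5] = 1 / tan(fovY / 2) for a symmetric frustum
    var near = m[14] / (m[10] - 1);      // from m[10] = (far + near) / (near - far)
    var far = m[14] / (m[10] + 1);       //  and m[14] = 2 * far * near / (near - far)
    return { fovY: fovY, near: near, far: far };
}
```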

RaananW (Member) commented Oct 5, 2020

Closing this issue. Using the first camera's projection matrix is the best solution. Apart from changing the way we calculate the projection matrix (which we won't), I don't see a different way of getting the information from the data we do have.

RaananW closed this as completed Oct 5, 2020