WebXRCamera's projection matrix is incorrect #8944
Comments
The projection matrix we are using is the one provided by the XR host itself. As the main camera (the parent of both rig cameras) has no projection matrix defined, we are calculating it on our own. I would assume that, due to incorrectly set parameters (fov?), our calculation of the projection matrix is wrong. The simplest solution (which should work out of the box) is to set the main camera's projection matrix to be the first eye's projection matrix. This won't work in a split-screen emulation, but it should work in an immersive session. I will submit a PR; waiting for your feedback.
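For clarity, here is a minimal user-land sketch of that idea (not the PR itself; `syncXRCameraProjection` and the per-frame hookup are illustrative, and it assumes the default XR experience helper):

```ts
import { Scene, WebXRDefaultExperience } from "@babylonjs/core";

// Hypothetical workaround mirroring the idea above: every frame, copy the
// first rig camera's XR-provided projection matrix onto the parent
// WebXRCamera, so APIs that read the active camera's projection matrix
// behave correctly in an immersive session.
function syncXRCameraProjection(scene: Scene, xr: WebXRDefaultExperience): void {
    scene.onBeforeRenderObservable.add(() => {
        const xrCamera = xr.baseExperience.camera;
        if (xrCamera.rigCameras.length > 0) {
            // freezeProjectionMatrix makes the camera return this matrix
            // instead of recomputing one from its fov/aspect parameters.
            xrCamera.freezeProjectionMatrix(xrCamera.rigCameras[0].getProjectionMatrix());
        }
    });
}
```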
I think the solution you submitted in your PR should work for us. I'm not really sure what you mean above by this:

> I would assume that, due to incorrectly set parameters (fov?), our calculation of the projection matrix is wrong.
Each XR view provides an FOV, and I think these should be correct; they seem to be correctly set in the rig cameras. What do you mean when you say "we are calculating it on our own"?
What I mean by that is that even though the two eyes have projection matrices taken straight from the XR viewer pose's views (and we use those for both rig cameras), the main camera does not. The main camera (the `WebXRCamera` class) has its projection matrix calculated by the framework itself, using the default `getProjectionMatrix` function and the camera's own parameters, not the one from XR.
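As an aside, this is roughly where those per-eye matrices come from in raw WebXR (a sketch that assumes `xrSession` and `referenceSpace` were set up elsewhere and that WebXR type definitions are available):

```ts
// Log each eye's XR-provided projection matrix for one frame. Each XRView
// in the viewer pose carries a ready-made projectionMatrix (a column-major
// 4x4 Float32Array) supplied by the XR device; no fov-based computation is
// needed on our side.
function logXRProjectionMatrices(xrSession: XRSession, referenceSpace: XRReferenceSpace): void {
    xrSession.requestAnimationFrame((_time, frame) => {
        const pose = frame.getViewerPose(referenceSpace);
        if (pose) {
            for (const view of pose.views) {
                console.log(view.eye, view.projectionMatrix);
            }
        }
    });
}
```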
I see, so it might be better to make it so the […]
Those values do not exist on `XRView` or `XRViewerPose`. We ask for a specific base layer with those parameters when initializing the scene, so the values do pass correctly to XR. The purpose of my PR was partly to unblock you before investigating further. As it is not recommended to change the FOV in XR (and the value cannot be changed after the scene was initialized), it was always recommended not to use these values directly and instead to use the information provided directly by XR (namely the projection matrix). This hasn't changed even after the PR was merged - in certain cases, and due to limitations not set by us, these values can be incorrect, especially if actively changed by the user. I guess we could decompose the projection matrix (it should be mathematically possible, right?), but I don't see the reason to do that at the moment.
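For reference, a sketch of what such a decomposition could look like (illustrative only; `approximateVerticalFov` is a made-up name, and since XR frusta are usually asymmetric, a single fov value is only an approximation):

```ts
// For a standard symmetric perspective matrix (column-major, as WebXR
// provides it), element [5] is the [1][1] entry, which equals
// 1 / tan(verticalFov / 2). Inverting that recovers the vertical fov.
// XR projection matrices are often asymmetric, so treat the result as
// an approximation rather than an exact camera parameter.
function approximateVerticalFov(projectionMatrix: Float32Array): number {
    return 2 * Math.atan(1 / projectionMatrix[5]);
}
```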
Closing this issue. Using the first camera's projection matrix is the best solution. Apart from changing the way we calculate the projection matrix (which we won't), I don't see a different way of getting the information from the data we do have. |
`WebXRCamera` has a set of "rig cameras" that represent the views/eyes of the XR device. The world and projection matrices of those views are copied over to the rig cameras, but no projection matrix is assigned to the `WebXRCamera` itself. Since the `WebXRCamera` is the active scene camera, and many APIs use the active scene camera by default, some of those APIs (the ones that depend on the projection matrix) don't work correctly. For example, the picking-related APIs (`scene.pick`, `scene.createPickingRay`, etc.) produce unexpected results. If you explicitly pass a rig camera to those APIs, they work as expected since the correct projection matrix is then used, but I think the behavior should be correct with the default camera when the active camera is the `WebXRCamera`.

I created a Playground example where tapping on the screen (on a mobile device) uses `scene.createPickingRay` at screen coordinate (0, 0) and places a box along the ray. It should show up in the upper left corner of the display. This works correctly when the rig camera is explicitly passed in to `createPickingRay`, but does not if the default (`WebXRCamera`) is used.

With `WebXRCamera`: https://playground.babylonjs.com/#AC8XPN#25

With `WebXRCamera.rigCameras[0]`: https://playground.babylonjs.com/#AC8XPN#28
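A rough sketch of the kind of repro code those Playgrounds contain (not the actual Playground source; the function name and box size are assumptions):

```ts
import { Camera, Matrix, MeshBuilder, Scene } from "@babylonjs/core";

// Create a picking ray at screen coordinate (0, 0) and place a small box
// a fixed distance along it. Passing no camera makes createPickingRay fall
// back to the active camera (the WebXRCamera in an XR session), which is
// where the incorrect projection matrix shows up; passing
// xrCamera.rigCameras[0] explicitly gives the expected result.
function placeBoxAlongPickingRay(scene: Scene, camera: Camera | null = null): void {
    const ray = scene.createPickingRay(0, 0, Matrix.Identity(), camera);
    const box = MeshBuilder.CreateBox("marker", { size: 0.1 }, scene);
    box.position = ray.origin.add(ray.direction.scale(2));
}
```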