Allow dynamic frame timing #1233

Closed
cabanier opened this issue Oct 12, 2021 · 3 comments · Fixed by immersive-web/layers#270
Comments

@cabanier
Member

OpenXR and Oculus' VRAPI support moving the start of the frame to get improved tracking.
/tpac What could we add to WebXR to enable this?

@Squareys
Contributor

This is super cool! I was considering using XRFrame.predictedDisplayTime to delay the start of rendering based on measured performance (something the spec explicitly warns against). A WebXR app could only attempt a naïve implementation anyway, since compositor timing is not available: only CPU time can be measured, and in some cases GPU time as well (depending on a WebGL extension). But even that attempt would fail because of the requirements for sampling the input poses:

The getViewerPose(referenceSpace) method provides the pose of the viewer relative to referenceSpace as an XRViewerPose, at the XRFrame's time.

This effectively means that the input pose has to be sampled at the time the requestAnimationFrame() callback is invoked by the UA, instead of "as late as possible to reduce latency" (e.g. when getViewerPose() is called). Frame sync could therefore help with hand/joint tracking and controller input as well.
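To make the naïve attempt concrete, here is a sketch of what such a delay would look like (xrReferenceSpace, estimatedRenderMs, and renderScene are hypothetical app-side names). The busy-wait gains nothing, because the prediction behind the pose was already made before the callback started:

```js
// Naïve sketch: busy-wait inside the rAF callback to start rendering later
// in the frame. This does not improve the pose: the UA already produced the
// frame's prediction before invoking the callback, so getViewerPose()
// returns the same result no matter how long we wait here.
function onXRFrame(time, frame) {
  frame.session.requestAnimationFrame(onXRFrame);

  const estimatedRenderMs = 4; // hypothetical app-measured CPU render time
  const target = frame.predictedDisplayTime - estimatedRenderMs;
  while (performance.now() < target) { /* spin */ }

  const pose = frame.getViewerPose(xrReferenceSpace);
  if (pose) renderScene(pose); // hypothetical app draw function
}
```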

I would not have been surprised if you had written that Oculus Browser already did this, btw; judging by the blog post, the app/developer would only notice if the app has widely fluctuating performance, right?
From the blog post I couldn't tell whether there are any downsides to enabling frame sync (apart from the obvious case where app performance fluctuates unpredictably). Would it be possible to simply loosen the spec (if needed) so that implementations could enable it implicitly?

@cabanier
Member Author

This is super cool! I was considering using XRFrame.predictedDisplayTime to delay the start of rendering based on measured performance (something the spec explicitly warns against).

Yes, you will have a bad time if you try to do that :-)

A WebXR app could only attempt a naïve implementation anyway, since compositor timing is not available: only CPU time can be measured, and in some cases GPU time as well (depending on a WebGL extension). But even that attempt would fail because of the requirements for sampling the input poses:

The getViewerPose(referenceSpace) method provides the pose of the viewer relative to referenceSpace as an XRViewerPose, at the XRFrame's time.

That's actually incorrect :-\
All poses are reported relative to predictedDisplayTime. @toji, should I file an issue against the spec?

This effectively means that the input pose has to be sampled at the time the requestAnimationFrame() callback is invoked by the UA, instead of "as late as possible to reduce latency" (e.g. when getViewerPose() is called). Frame sync could therefore help with hand/joint tracking and controller input as well.

Indeed. Frame sync will help because the rAF callback can start closer to the predictedDisplayTime, which gets you a better predicted pose. (Of course, this is only the case if the frame takes less time than it is allotted.)
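For example, a small sketch (assuming the predictedDisplayTime attribute from the WebXR Layers spec) that logs how much headroom each callback has; with frame sync, that gap should shrink toward the app's actual frame cost:

```js
// Log the gap between the start of each rAF callback and the predicted
// display time. Both timestamps are DOMHighResTimeStamps relative to the
// document's time origin, so they can be subtracted directly.
function onXRFrame(time, frame) {
  frame.session.requestAnimationFrame(onXRFrame);

  const headroomMs = frame.predictedDisplayTime - time;
  console.log(`headroom this frame: ${headroomMs.toFixed(2)} ms`);

  // ...render as usual...
}
```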

I would not have been surprised if you had written that Oculus Browser already did this, btw; judging by the blog post, the app/developer would only notice if the app has widely fluctuating performance, right? From the blog post I couldn't tell whether there are any downsides to enabling frame sync (apart from the obvious case where app performance fluctuates unpredictably). Would it be possible to simply loosen the spec (if needed) so that implementations could enable it implicitly?

We had a conversation at TPAC about this.
The WebXR Layers spec can already support this, but we should make it more explicit. I'm planning on adding a non-normative note and updating our implementation in the near future.

So, to get automatic support for this you should:

  • switch to WebXR Layers
  • do your game logic when the rAF callback starts
  • draw your scene with WebGL and defer asking for the WebXR textures as long as possible (a sketch follows below)
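A minimal sketch of that loop, assuming a projection layer created through XRWebGLBinding (xrReferenceSpace, updateSimulation, and drawScene are hypothetical app-side names):

```js
// One-time setup: route rendering through a WebXR Layers projection layer.
const binding = new XRWebGLBinding(session, gl);
const projLayer = binding.createProjectionLayer({ textureType: 'texture' });
session.updateRenderState({ layers: [projLayer] });
const framebuffer = gl.createFramebuffer();

function onXRFrame(time, frame) {
  frame.session.requestAnimationFrame(onXRFrame);

  // 1. Run game logic first, before touching any layer textures.
  updateSimulation(time); // hypothetical app function

  const pose = frame.getViewerPose(xrReferenceSpace);
  if (!pose) return;

  gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
  for (const view of pose.views) {
    // 2. Defer asking for the WebXR texture as long as possible.
    const subImage = binding.getViewSubImage(projLayer, view);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                            gl.TEXTURE_2D, subImage.colorTexture, 0);
    const vp = subImage.viewport;
    gl.viewport(vp.x, vp.y, vp.width, vp.height);

    // 3. Draw the scene for this view.
    drawScene(view, subImage); // hypothetical app function
  }
}
```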

@cabanier
Member Author

See also https://www.youtube.com/watch?v=PpIXjrO7yrk around 7 minutes in.

After our discussion at TPAC, we concluded that we can already implement this. Core WebXR might benefit from this as well, but the WebXR Layers model already works with this scheme.
I will update the layers spec with a note and implement it as an experiment in the browser.
