Behavior of window.requestAnimationFrame and window.requestIdleCallback when presenting #225
Short and sweet: When we're in WebVR presentation mode on devices without an external display, should the page continue pumping the window rAF and rIC loops?
For context: in Chrome, at least, when we kick into VR mode on mobile we stop the main page from running its normal compositing process for performance reasons. This has a natural side effect of preventing rAF and rIC from running unless we take explicit (and slightly hacky) steps to avoid that. It may make sense, though, to treat this like any other time we hide the page (switching tabs, etc.) and suspend rAF, simply because the page isn't actually meant to be updating and rAF is meant to be tied to page animation. (I'm not as sure about the intended behavior for rIC.)
That would create a behavioral disparity with desktop, though, which WILL have the page visible while presenting and does want to keep rAF running. That could create a situation where developers test code primarily on one platform and are surprised by the behavior of the others.
Would love opinions on this from other vendors!
As noted in this Chromium commit, there may be a race condition between the window rAF loop and the start of WebVR presentation. Whatever the outcome of this issue, we should ensure the behavior is clearly defined and that there is a reliable path for authors.
Quick update: This was discussed on the WebVR implementors call today and the general consensus is that if the page isn't visible it shouldn't be pumping rAF. (Didn't discuss rIC as much, because not everyone implements it.)
In the future exceptions would probably need to be made for DOM content that is displayed in VR via a layer or similar mechanism, but at that point it's reasonable to expect that the DOM compositor will have to be running to generate that content anyway, and rAF will naturally run as a result. Some questions came up about what framerate the rAF should run at in that scenario (monitor vs. HMD?) but if the in-VR layer was able to be updated asynchronously from headtracked rendering on the quad it wouldn't be a user comfort issue.
Also, as a follow-up from the WebVR implementers call, we wish to avoid any artifacts related to video framerate if parts of the page are mirrored into the headset.
For example, when 24fps video is rendered in an HTMLVideoElement, it is temporally upsampled to the 2d monitor's 60hz framerate. This is commonly done by repeating some frames, causing variability in frame duration as the 24hz video catches up to the 60hz output rate. Effectively, half of the video frames are presented for 2 vsync cycles and the other half for 3 vsync cycles (the familiar 3:2 pulldown cadence).
If a piece of the page containing such a video is then sampled into a VR layer at 60hz, it will be upsampled again, since a VR display typically runs at 90hz, compounding the cadence irregularity.
If we were instead upsampling the video directly from 24hz to 90hz, each frame would be shown for 3 or 4 vsync cycles, a much more uniform cadence.
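The cadences described above can be sketched with a small helper (purely illustrative, not part of any browser API): it computes how many vsync cycles each source frame is held for when naively upsampling from srcHz to dstHz.

```javascript
// Illustrative helper (not a real API): number of vsync cycles each
// source frame is displayed for when naively upsampling srcHz -> dstHz.
function cadence(srcHz, dstHz, frames) {
  const ratio = dstHz / srcHz;
  const counts = [];
  for (let i = 0; i < frames; i++) {
    // Frame i occupies the vsync slots in [i * ratio, (i + 1) * ratio).
    counts.push(Math.floor((i + 1) * ratio) - Math.floor(i * ratio));
  }
  return counts;
}

console.log(cadence(24, 60, 4)); // [ 2, 3, 2, 3 ] - 3:2 pulldown on a 60hz monitor
console.log(cadence(60, 90, 4)); // [ 1, 2, 1, 2 ] - 60hz page capture on a 90hz HMD
console.log(cadence(24, 90, 4)); // [ 3, 4, 4, 4 ] - 24hz upsampled directly to 90hz
```

Sampling the already-irregular 60hz cadence at 90hz compounds both patterns, which is why direct 24hz-to-90hz upsampling is the more uniform option.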
The video frame uniformity issues would also apply to CSS animations and WebGL canvases that will be projected onto "floating quad in space" layers or onto textures within the VR presentation.
I believe we should always prioritize the VR experience when a VR presentation is active.
The key question is whether the effective framerate of the 2d page should bump up to the HMD's framerate when parts of it are captured for VR presentation, or whether it should remain at 60hz.
This might not be a simple task for browser implementers, and could slow adoption for the WebVR 2.0 API if required.
If video frame uniformity is the issue affecting users the most, perhaps we should first implement a specialized video VR layer, or optimized functions for decoding video directly to WebGL textures.
Regarding "requestIdleCallback", there may be issues with running it out of sync with the normal XR framerate: effectively there would be "idle" work happening during parts of an XR frame where it is not ideal, such as when the VR compositor is performing the hard-realtime task of compositing asynchronously transformed / warped layers at vblank.
For many (most?) tracked XR headsets, the "requestIdleCallback" would ideally occur immediately after this vblank interval (lower-level APIs often signal this by blocking until that ideal time).
One reason that XR sites themselves would want to use "requestIdleCallback" is to move non-rendering tasks out of "requestAnimationFrame" so that the XR runtime can optimize the start of "requestAnimationFrame" based on prior frame times. The goal of such systems is to move rendering as late as possible so that reprojection has the least work to do (a smaller delta between the actual frame display time and the predicted pose). If idle work such as managing game state, updating physics, and playing audio also occurs within "requestAnimationFrame", and there is no longer a "submit" call as in WebVR, the time actually needed for the rendering phase of the frame cannot be inferred.
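The scheduling idea above can be sketched as follows. Everything here is hypothetical (predictRafStart is not a real API); it only illustrates how a runtime could use prior frame render times to start the rAF callback as late as possible, freeing the earlier part of the frame for idle work:

```javascript
// Hypothetical sketch (not a real API): choose how long after vblank the
// runtime can delay the rAF callback so rendering still finishes just
// before the next vblank, leaving the earlier part of the frame idle.
function predictRafStart(vblankTime, frameIntervalMs, recentRenderMs, marginMs = 1) {
  // Budget for the worst recent render time plus a safety margin.
  const worstRender = Math.max(...recentRenderMs);
  const start = vblankTime + frameIntervalMs - worstRender - marginMs;
  // Never schedule before the vblank itself.
  return Math.max(start, vblankTime);
}

// With an ~11.1ms frame (90hz) and recent renders around 4ms, the runtime
// could delay rAF by roughly 6ms and hand that window to idle callbacks.
const start = predictRafStart(1000, 11.1, [3.8, 4.0, 4.2]);
console.log(start - 1000); // delay after vblank, in ms
```

This is exactly the inference that breaks if game state, physics, and audio also run inside "requestAnimationFrame": the recent render times no longer reflect rendering alone, so the budget can't be computed.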
One solution is to have a "requestIdleCallback" added to the XRFrame, triggered some time in the frame after the "requestAnimationFrame". Another would be to re-introduce the "submit" call on the XRWebGLLayer and have it return a promise that resolves when the rest of the per-frame processing can continue.
Synchronizing window raf to XRSession raf would preclude any possibility of multiple simultaneous XRSessions for separate devices with un-aligned frames or varying frame rates.
Another option: sync window.requestIdleCallback to the XRSession but leave window.requestAnimationFrame unaltered.