Need an animation timestamp #347

Closed
klausw opened this issue Apr 24, 2018 · 10 comments · Fixed by #381

Comments

klausw (Contributor) commented Apr 24, 2018

The explainer's rAF callback section says:

(Timestamp is given for compatibility with window.requestAnimationFrame(). Reserved for future use and will be 0 until that time.)

Why is that, and how are applications supposed to get an accurate target timestamp for animations that matches the expected time when the frame is going to be displayed? That's important for smooth animations.

Is there a field in the frame data that's supposed to be used for this?

The CubeSea sample uses wall-clock time from performance.now(), which seems like a hack and is also inaccurate. You shouldn't base animations on the time your rendering happens to start. If your render loop starts late but still finishes in time for the next VSync equivalent, you're supposed to use that VSync's time, not the late start time. See also immersive-web/webxr-samples#13.
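To make the distinction concrete, here is a minimal sketch, assuming the session's rAF callback received a frame-aligned timestamp (which is exactly what this issue is asking for); xrSession, drawCube and rotationFor are placeholders for an active session and the app's own rendering/animation code:

// Sketch only: assumes xrSession.requestAnimationFrame passes a frame-aligned
// timestamp; drawCube/rotationFor are placeholder application functions.

// Fragile: the animation phase depends on when this particular callback
// happened to start running, so a late start makes the cube jump ahead.
function onXRFrameUsingNow(timestamp, xrFrame) {
  drawCube(rotationFor(performance.now()));
  xrSession.requestAnimationFrame(onXRFrameUsingNow);
}

// Preferred: the animation phase tracks the frame the browser is targeting, so
// a callback that starts late but still makes the VSync renders the same pose.
function onXRFrame(timestamp, xrFrame) {
  drawCube(rotationFor(timestamp));
  xrSession.requestAnimationFrame(onXRFrame);
}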

Artyom17 commented May 1, 2018

This sounds right. This is what we currently use for WebVR animation: the timing of the VSync, rather than the current time. The problem with performance.now() is that the actual rAF callback execution time may fluctuate depending on multiple factors (CPU load, threads running in parallel, other services/apps being active), so the animation will look inconsistently jittery if we don't use the VSync timestamp.

@darktears (Contributor)

And as @Artyom17 said, it's definitely hard to measure how long it will take to display a frame on the HMD after we get it from the application. It's not just a swapBuffer; there is a lot involved in the VR runtime and the GPU rendering pipeline before the frame reaches the HMD screen. Usually the device pose you get to render the current frame (using getDevicePose) is predicted information that takes into account some of the latency of the rendering pipeline (usually that prediction aligns with the next VSync, when that frame will be displayed).

performance.now() is probably not a good approach because there is no way the WebXR app can predict what's going on in the VR runtime and the GPU pipeline. Doing estimation based on previous readings may just make things worse.

klausw (Contributor, Author) commented Jun 4, 2018

So far everyone who chimed in on the bug seems to be in favor of providing an animation time - what was the rationale for leaving it out of the spec, and would it be possible to revisit this?

I think this is important; I don't know how applications are supposed to do smooth animations without hardcoding their own assumptions about frame timing as workarounds.

toji (Member) commented Jul 10, 2018

So upon revisiting this issue at the prompting of @mrdoob and others, I have a proposal that may get some pushback but which I feel is probably necessary for platform consistency. We have a rAF-shaped API, so it should behave like window.requestAnimationFrame, and pass along the same timestamp that window.rAF does. More specifically, as MDN puts it:

The callback method is passed a single argument, a DOMHighResTimeStamp, which indicates the current time when callbacks queued by requestAnimationFrame() begin to fire. Multiple callbacks in a single frame, therefore, each receive the same timestamp even though time has passed during the computation of every previous callback's workload.

(That's what the spec says should happen too, but it's split across multiple files and not exactly intuitive to link to.)

As explained earlier in this thread ⬆️, performance.now() isn't exactly great for steady animation, and this timestamp is actually worse, because the value it delivers can be made even more stale by previously scheduled callbacks running first. However, by using the same value we enable developers to predictably track timing when switching between the various rAF-like callbacks, and we set a precedent for future, similar APIs. And, crucially, it frees us from needing to pick a single, canonical frame value that works for everyone, because if our past conversations are any indication, it's likely there isn't one.
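To illustrate the window.requestAnimationFrame behavior being mirrored here (runnable as-is in an ordinary page):

// Both callbacks were queued before the frame fired, so they receive the
// identical timestamp even though the first one burns ~5 ms of work.
window.requestAnimationFrame((t1) => {
  const start = performance.now();
  while (performance.now() - start < 5) { /* simulate work */ }
  console.log('first callback:', t1);
});
window.requestAnimationFrame((t2) => {
  console.log('second callback:', t2); // same value as t1
});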

Instead, we should pack XR-relevant timing data into the XRFrame object and expose several values that have concrete, real-world uses and that we're confident we can get across platforms. I haven't vetted this against any real-world APIs, but I'm guessing we could expose something like the following?

// WARNING! NOT ACTUALLY BASED ON API RESEARCH!
interface XRFrameTiming {
  readonly attribute DOMHighResTimeStamp frameStart;
  readonly attribute DOMHighResTimeStamp targetFrameDuration;
  readonly attribute unsigned long droppedFrames;
};

partial interface XRFrame {
  readonly attribute XRFrameTiming timing;
};

What I want to be careful about is not getting too hung up on the contents of a potential XRFrameTiming, and definitely not going overboard and filling it with values that either aren't actionable by the developer or aren't reasonable to deliver across platforms (and keep in mind that means magic window too). As such, I feel it would be prudent for us to determine the XRSession.rAF timestamp behavior now to ensure platform consistency, and then iterate on the additional timing information we want to expose as a non-blocking task.
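To give a rough sense of how an app might consume the sketch above (hypothetical, since none of these members are vetted; scene.update/scene.draw and xrSession are placeholder application code):

// Hypothetical consumer of the proposed XRFrame.timing sketch above.
function onXRFrame(timestamp, xrFrame) {
  const timing = xrFrame.timing;
  if (timing.droppedFrames > 0) {
    console.warn(`Dropped ${timing.droppedFrames} frame(s); consider reducing scene cost.`);
  }
  scene.update(timing.frameStart, timing.targetFrameDuration);
  scene.draw(xrFrame);
  xrSession.requestAnimationFrame(onXRFrame);
}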

@blairmacintyre (Contributor)

I like this, thanks @toji. I'm especially in favor of having the rAF parameter behave as you describe.

klausw (Contributor, Author) commented Jul 10, 2018

I'm fine with @toji's suggested approach, and consistency with window.rAF is a strong argument. We can do fancier timing parameters such as expected-time-on-screen separately if needed, but for simple cases with steady timing it's equivalent to use a start-of-animation time, as long as it differs from the display time by a consistent offset.

@Artyom17

I like this idea, thanks, @toji.

DRx3D commented Jul 10, 2018 via email

toji (Member) commented Jul 17, 2018

Since there seems to be general agreement on this, I've put up a pull request to add it to the explainer: #381

@DRx3D: Fortunately window.requestAnimationFrame() has already defined sensible behavior for us regarding callbacks queued within callbacks. When callback processing starts, the callback queue is locked, and any new callbacks added after that point are pushed onto the next frame's queue. The timestamp is only the same for all callbacks in a single queue.
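For example, with window.requestAnimationFrame today:

// A callback scheduled from inside a running callback lands on the next
// frame's queue and therefore receives a later timestamp.
window.requestAnimationFrame((t1) => {
  window.requestAnimationFrame((t2) => {
    console.log(t2 > t1); // true: the nested callback ran a frame later
  });
});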

I'm not aware of any limits on how many callbacks you can queue, but browsers have long had mechanisms to notify users that "This page is taking too long to respond" (read: JavaScript hasn't returned control to the browser for a long time) and let them close the page in response. As such, the deterrent is more of a social one than an explicit limit: abuse the animation queues and your page becomes unresponsive, and users leave.

@kearwood (Contributor)

I'm in favor of @toji's proposal for XRFrameTiming, with some additional text to explicitly define what frameStart and targetFrameDuration represent. In particular, I would expect frameStart to represent the time the frame is expected to be presented (which could vary from frame to frame), while targetFrameDuration would be expected to be more constant and could be used to drive physics and gameplay.

Also worth considering is how these values respond when the headset dynamically changes frame rate (e.g., when the Oculus Rift drops from 90Hz to 45Hz with Asynchronous Space Warp).
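As a sketch of how that could be handled under the proposal (assuming this reading of frameStart and targetFrameDuration; stepPhysics and xrSession are placeholders for the app's fixed-step update and an active session):

// Hypothetical fixed-step update driven by the proposed timing fields: real
// elapsed time between frameStart values is accumulated, so if the headset
// drops to half rate the loop simply takes two physics steps per rendered frame.
let accumulatorMs = 0;
let lastFrameStart = 0;

function onXRFrame(timestamp, xrFrame) {
  const { frameStart, targetFrameDuration } = xrFrame.timing;
  if (lastFrameStart !== 0) {
    accumulatorMs += frameStart - lastFrameStart;
  }
  lastFrameStart = frameStart;
  while (accumulatorMs >= targetFrameDuration) {
    stepPhysics(targetFrameDuration); // placeholder for the app's physics/gameplay update
    accumulatorMs -= targetFrameDuration;
  }
  xrSession.requestAnimationFrame(onXRFrame);
}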
