Add a low latency mode for OffscreenCanvas #2659
Since OffscreenCanvases do not need to synchronize their graphics updates with the rest of the DOM, there is an opportunity to provide a low-latency rendering path that would allow the OffscreenCanvas to commit content into a memory buffer that is directly scanned out to the display. This behavior should not be the default because it may result in tearing artifacts.
The reduction in latency would be a great UX improvement for painting apps that allow the user to draw using a stylus or touch interface. In such applications, ~50 ms of latency is enough to interfere with the user's hand-eye coordination, making the application difficult to use.
Since not all graphics hardware provides the features necessary for implementing a low-latency path (e.g. hardware overlay buffers), support may be device-dependent.
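Because support can be absent, a page would presumably need to feature-detect it. A minimal sketch, assuming (hypothetically) that the user agent reflects the granted attribute back through getContextAttributes():

```ts
// Hedged sketch: assumes the UA reflects whether the hint was honored.
// Neither 'lowLatency' nor this reflection behavior is settled spec text.
const probe = new OffscreenCanvas(1, 1).getContext('2d', { lowLatency: true });
const lowLatencySupported =
  (probe as any)?.getContextAttributes?.()?.lowLatency === true;
```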
Commit processing model:
In ‘lowLatency’ mode the OffscreenCanvas has a render buffer, which is where draw operations are rasterized. Calling commit() will immediately copy the contents of the render buffer to the scan-out buffer. There is no waiting for vsync and no overdraw mitigation. In the case of 2d contexts, it will be possible to track the bounding box of the portions of the canvas that have changed since the previous commit, and only update that sub-region.
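To make the model concrete, here is a hedged sketch of what creating and committing such a canvas might look like. The `lowLatency` attribute name and the shape of `commit()` are taken from this proposal, not from settled spec text, so treat every name below as an assumption.

```ts
// Hypothetical usage of the commit model above. 'lowLatency' and commit()
// are this proposal's names (assumptions), not a shipped API.
const offscreen = new OffscreenCanvas(1024, 768);
const ctx = offscreen.getContext('2d', { lowLatency: true });

function drawStrokeSegment(points: Array<{ x: number; y: number }>): void {
  if (!ctx) return;
  ctx.beginPath();
  for (const p of points) ctx.lineTo(p.x, p.y);
  ctx.stroke();
  // Per the model above: immediately copy the render buffer to the
  // scan-out buffer. No vsync wait, no overdraw mitigation, may tear.
  (ctx as any).commit?.();
}
```

For a 2d context, an implementation could limit that copy to the bounding box of whatever was drawn since the previous commit(), per the sub-region update described above.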
I don't know this part of the code well, but AFAICT, in the Chromium code base we currently only have implementations for low-latency rendering buffers on two platforms: on macOS we use IOSurfaces, and on ChromeOS there is something called a native pixmap.
I have a drawing app that also allows panning and zooming. When drawing, I want as low latency as possible, ideally by writing directly to a single-buffered hardware overlay. When panning and zooming, I want atomic updates to prevent tearing. Can the behavior of a canvas be changed at run-time or only when created?
If it's only possible when created, is there a way to share GL resources (VBO, FBO, etc.) across contexts to enable switching from a single-buffered canvas to a composited canvas?
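For illustration only, this is the kind of run-time switch the question is asking about; nothing like `presentationMode` exists in this proposal, it is a hypothetical name:

```ts
// Purely hypothetical: a run-time toggle between presentation modes.
interface SwitchableOffscreenCanvas extends OffscreenCanvas {
  presentationMode: 'lowLatency' | 'composited'; // hypothetical property
}

function onStrokeStart(canvas: SwitchableOffscreenCanvas): void {
  canvas.presentationMode = 'lowLatency'; // direct scan-out, may tear
}

function onPanOrZoomStart(canvas: SwitchableOffscreenCanvas): void {
  canvas.presentationMode = 'composited'; // atomic, vsynced updates
}
```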
If I understand this correctly, I feel like it might be a mistake to do this.
My understanding is that people want to be able to render directly to the screen the way a native app can. I think that's a great goal. What I don't think is a great goal is discarding the rest of the browser. If I understand this proposal, you can't use any HTML with this API. Once you opt in, your app is 100% responsible for rendering everything.
I'd prefer a solution that doesn't throw away the rest of the platform.
Ideas: you opt in to direct rendering, you get a callback to render, and the DOM is then rendered on top of your render. You're required to render every frame in this case. If there are no renderable elements in the DOM, you get the same result, but at least you don't have to throw away the entire platform to achieve the low latency. And of course if the DOM is rendered you can't call
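A sketch of that callback idea, with every name hypothetical (there is no such API; this just restates the comment's proposal in code):

```ts
// Hypothetical opt-in, by analogy with requestAnimationFrame: once a page
// opts in to direct rendering it must produce every frame itself, and the
// browser composites any renderable DOM on top of the callback's output.
declare function requestDirectRender(
  callback: (now: DOMHighResTimeStamp) => void
): void; // hypothetical opt-in

declare function drawScene(now: DOMHighResTimeStamp): void; // app code

requestDirectRender(function frame(now) {
  drawScene(now);             // app is responsible for the whole frame
  requestDirectRender(frame); // required to render every frame
});
```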