Enable creation of VideoFrame from relevant WebGPU objects. #83
That sounds reasonable to me. We may still need to copy it if we can't guarantee that the buffer won't be mutated.
Triage note: marked 'extension', as the proposal introduces a new constructor.
@tidoust elaborates on a use case for this here: essentially, providing frames created by WebGPU to the encoder. I'm not sure there's any performance cost today beyond some memory usage when going through an OffscreenCanvas, but a direct constructor would certainly be more convenient. We should at least confirm there's no performance cost and see what it takes to wire this up. Tagging as P1 to at least investigate that.
WebGPU tries to keep its interop points with other systems at a minimum. There are a number of gotchas when trying to design interop of a WebGPU-internal object like GPUBuffer with other APIs:
Once you design all of these into a system, you're going to end up with something quite similar to what you get now with OffscreenCanvas. Canvas/OffscreenCanvas are already carefully designed as one of the interop points, and solve all of those: the GPUTexture you get from it is specially allocated for interop, it's a texture in one of a few formats so it has a clearly defined encoding, and it has alpha/colorspace information. Data can be pulled out of a WebGPU OffscreenCanvas at any time, so it also has the synchronization.

There should theoretically be no significant memory or performance cost to going through OffscreenCanvas, as long as you're okay with the pixel formats it provides: the canvas and its GPUTexture use the same image memory. So I think it comes down to a few particular use cases that might not fit well in the OffscreenCanvas interface:
Thanks @kainino0x! I think you've summarized it well. If we expect there to be no performance cost to going through canvas, there isn't a good reason to duplicate effort at this time, given the use cases we're aware of. I think almost everything can be done with canvas. As you note, the only thing that can't be done is keeping a non-RGBA buffer (or set of planar buffers) on the GPU all the way through to the hardware encoder. Today, developers can use
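For completeness, here is a minimal sketch of feeding a canvas-sourced frame into a WebCodecs encoder, which is the workflow under discussion. It is browser-only; the codec string, dimensions, and `offscreenCanvas` variable are illustrative placeholders.

```javascript
// Encode frames produced via the OffscreenCanvas path with WebCodecs.
const encoder = new VideoEncoder({
  output: (chunk, metadata) => {
    // Consume each EncodedVideoChunk (e.g. mux or transmit it).
  },
  error: (e) => console.error(e),
});
encoder.configure({ codec: 'vp8', width: 1280, height: 720 });

// `offscreenCanvas` is assumed to have been rendered to by WebGPU already.
const frame = new VideoFrame(offscreenCanvas, { timestamp: 0 });
encoder.encode(frame);
frame.close(); // Release the frame's resources promptly.
```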
It's not really addressed, but since we have an OffscreenCanvas path, we can postpone direct interop with WebGPU textures and buffers until we have a clear signal from web developers to justify the spec and implementation work.
Back when the spec was still a Google Doc, Jianhui @ Intel wrote:
I think the plan is to one day have a VideoFrame constructor that takes a GPUBuffer (or some TBD type). Perhaps this is a strong enough hint to use that buffer as the backing? @sandersdan
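To make the idea concrete: such a constructor might mirror the existing `VideoFrame(BufferSource, init)` form, but accept a GPUBuffer. This is purely speculative; no such constructor exists in WebCodecs today, and every name and option below is hypothetical.

```javascript
// HYPOTHETICAL — sketched only to illustrate the proposal above.
// A GPUBuffer-backed VideoFrame, analogous to today's BufferSource
// constructor, could let a planar (e.g. NV12) frame stay on the GPU
// all the way to the hardware encoder.
const frame = new VideoFrame(gpuBuffer /* a GPUBuffer */, {
  format: 'NV12',      // planar formats are the case canvas can't cover
  codedWidth: 1280,
  codedHeight: 720,
  timestamp: 0,
});
```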