
Enable creation of VideoFrame from relevant WebGPU objects. #83

Closed
chcunningham opened this issue Oct 2, 2020 · 8 comments
Labels: extension (Interface changes that extend without breaking.), p1

Comments

@chcunningham
Collaborator

Back when the spec was still a Google Doc, Jianhui @ Intel wrote:

Add a hint to create a VideoFrame backed by a GPU memory buffer or a system memory buffer. Supporting GPU-memory-buffer-backed VideoFrame creation avoids a frame copy and is more power efficient for hardware encoders.

I think the plan is to one day have a VideoFrame constructor that takes a GpuBuffer (or some TBD type). Perhaps this is a strong enough hint to use that buffer as the backing? @sandersdan
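For illustration only, a rough TypeScript sketch of what such a constructor could look like; no GPUBuffer-accepting VideoFrame constructor exists in the spec today, and every name and option below is an assumption about a possible shape rather than a proposal:

```ts
// Hypothetical, not part of WebCodecs or WebGPU: a VideoFrame constructor
// overload that accepts a GPUBuffer plus the layout metadata a raw buffer
// cannot carry on its own (format, dimensions, timestamp).
declare const gpuBuffer: GPUBuffer; // assumed to hold tightly packed RGBA pixels

// const frame = new VideoFrame(gpuBuffer, {
//   format: 'RGBA',      // pixel layout must be supplied explicitly
//   codedWidth: 1920,
//   codedHeight: 1080,
//   timestamp: 0,        // microseconds
//   // Some copy/ownership rule would be needed so the app can't mutate
//   // the buffer while the frame is alive (see the next comment).
// });
```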

@sandersdan
Contributor

That sounds reasonable to me. We may still need to copy it if we can't guarantee that the buffer won't be mutated.

@chcunningham added the extension label (Interface changes that extend without breaking.) on May 12, 2021
@chcunningham
Collaborator Author

Triage note: marked 'extension', as the proposal introduces a new constructor.

@dalecurtis changed the title from "Hint for video frame gpu backing" to "Enable creation of VideoFrame from relevant WebGPU objects." on Apr 4, 2023
@dalecurtis
Contributor

E.g., GPUBuffer and GPUTexture.

@dalecurtis
Contributor

@tidoust elaborates on a use case for this here:
https://lists.w3.org/Archives/Public/public-media-wg/2023Apr/0001.html

Essentially providing frames to the encoder that were created by WebGPU. I'm not sure there's any performance cost aside from some memory usage when going through an OffscreenCanvas today, but it'd certainly be more convenient.

We should at least confirm there's no performance cost and see what it takes to wire this up. Tagging as P1 to at least investigate that.
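For reference, a minimal TypeScript sketch of the OffscreenCanvas path that exists today; `device`, `encoder`, `frameIndex`, and `renderInto` are placeholders for state the application would set up elsewhere:

```ts
declare const device: GPUDevice;     // from navigator.gpu.requestAdapter()/requestDevice()
declare const encoder: VideoEncoder; // configured WebCodecs encoder
declare const frameIndex: number;
declare function renderInto(target: GPUTexture): void; // app-specific render pass

// Render with WebGPU into an OffscreenCanvas, then wrap the canvas contents
// in a VideoFrame and hand it to the encoder.
const canvas = new OffscreenCanvas(1920, 1080);
const context = canvas.getContext('webgpu') as GPUCanvasContext;
context.configure({ device, format: navigator.gpu.getPreferredCanvasFormat() });

renderInto(context.getCurrentTexture()); // draws into the canvas's backing texture

const frame = new VideoFrame(canvas, { timestamp: frameIndex * 33_333 }); // microseconds
encoder.encode(frame);
frame.close();
```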

@kainino0x

WebGPU tries to keep its interop points with other systems at a minimum. There are a number of gotchas when trying to design interop of a WebGPU-internal object like GPUBuffer with other APIs:

  • The WebGPU implementation may need to know before creating the GPUBuffer that it needs to be accessible from the outside world.
  • It only holds raw data; you have to supplement it with the pixel layout, channel encoding, alpha/colorspace information, etc.
  • Careful synchronization is necessary in implementations because the application can go and start modifying the GPUBuffer as soon as they're done creating a VideoFrame from it.

Once you design all of these into a system, you're going to end up with something quite similar to what you get now with OffscreenCanvas. Canvas/OffscreenCanvas are already carefully designed as one of the interop points, and solve all of those: the GPUTexture you get from it is specially allocated for interop, it's a texture in one of a few formats so it has clearly defined encoding, and it has alpha/colorspace information. Data can be pulled out of a WebGPU OffscreenCanvas at any time, so it also has the synchronization.

There should theoretically be no significant memory or performance cost to going through OffscreenCanvas, as long as you're okay with the pixel formats it provides. The canvas and its GPUTexture use the same image memory.

So I think it comes down to a few particular use cases that might not fit well in the OffscreenCanvas interface:

  • Are there additional formats and encodings that can't reasonably be supported through OffscreenCanvas? Maybe YUV or multiplanar?
  • Is GPUBuffer support particularly useful?
  • Probably others.

@dalecurtis
Contributor

Thanks @kainino0x! I think you've summarized it well. If we expect there to be no performance cost to going through canvas, there's not a good reason to duplicate effort at this time given the use cases that we're aware of.

I think almost everything can be done with canvas. As you note, the only thing that can't be done is keeping a non-RGBA buffer (or set of planar buffers) on the GPU all the way through to the hardware encoder. Today, developers can use GPUBuffer.mapAsync() to construct a CPU-backed VideoFrame, but that path incurs a performance cost.
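A minimal sketch of that CPU fallback, assuming the pixels have already been copied into a mappable buffer (for example via copyTextureToBuffer) and are tightly packed RGBA; `readbackBuffer` and the fixed dimensions are assumptions:

```ts
declare const readbackBuffer: GPUBuffer; // created with GPUBufferUsage.MAP_READ | GPUBufferUsage.COPY_DST
                                         // and filled via copyTextureToBuffer / copyBufferToBuffer

// Map the buffer, copy the pixels to the CPU, and build a CPU-backed VideoFrame.
await readbackBuffer.mapAsync(GPUMapMode.READ);
const pixels = readbackBuffer.getMappedRange().slice(0); // copy out before unmapping
readbackBuffer.unmap();

const frame = new VideoFrame(pixels, {
  format: 'RGBA',
  codedWidth: 1920,
  codedHeight: 1080,
  timestamp: 0, // microseconds
});
```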

@aboba
Collaborator

aboba commented May 2, 2024

@Djuffin @padenot Has this Issue been addressed? If so, can we close it?

@Djuffin
Contributor

Djuffin commented May 9, 2024

It's not really addressed, but since we have an OffscreenCanvas path we can postpone direct interop with WebGPU textures and buffers until we have a clear signal from web developers to justify the spec and implementation work.

@Djuffin closed this as not planned on May 9, 2024