Allow reading back from canvases after present (#2905)
* Allow reading back from canvases after present

This allows drawImage/toDataURL/etc. to see the canvas contents
presented in the previous frame, as long as getCurrentTexture (or
configure/unconfigure) hasn't been called yet this frame.

alphaMode (née compositingAlphaMode) now affects using the canvas as an
image source (drawImage/etc.) as well as compositing, so that the
observed contents don't change on a frame boundary.

As a weird aside (necessary to fully define the image source behavior), this
change defines super-luminant values as being in the extended color space (i.e.
once un-premultiplied). This definition emerges naturally, but it's also
weird.

Fixes #2743
Fixes #1847
Fixes a leftover bit from #2373 (placeholder canvases)

* nit

* Remove the "cancel present" behavior of destroy()

This fixes several problems:
- Unnecessary complexity in how currentTexture logic works.
- Errors in the previous commit, where this was just not fully handled.
- Using the "destroyed" state on the content process (minor issue).

* nits

* nit
kainino0x committed May 26, 2022
1 parent 52ec46c commit b085e00
Showing 2 changed files with 183 additions and 102 deletions.
explainer/index.bs: 30 changes (17 additions & 13 deletions)
@@ -1040,15 +1040,17 @@ even though context state does not persist across loss/restoration.)
In order to access a canvas, an app gets a `GPUTexture` from the `GPUCanvasContext`
and then writes to it, as it would with a normal `GPUTexture`.

-### Swap Chains ### {#canvas-output-swap-chains}
+### Canvas Configuration ### {#canvas-output-swap-chains}

Canvas `GPUTexture`s are vended in a very structured way:

-- `canvas.getContext('gpupresent')` provides a `GPUCanvasContext`.
-- `GPUCanvasContext.configureSwapChain({ device, format, usage })` provides a `GPUSwapChain`,
-  invalidating any previous swapchains, attaching the canvas to the provided device, and
-  setting the `GPUTextureFormat` and `GPUTextureUsage` for vended textures.
-- `GPUSwapChain.getCurrentTexture()` provides a `GPUTexture`.
+- `canvas.getContext('webgpu')` provides a `GPUCanvasContext`.
+- `GPUCanvasContext.configure({ device, format, usage })` modifies the current configuration,
+  invalidating any previous texture object, attaching the canvas to the provided device,
+  and setting options for vended textures and canvas behavior.
+- Resizing the canvas also invalidates previous texture objects.
+- `GPUCanvasContext.getCurrentTexture()` provides a `GPUTexture`.
+- `GPUCanvasContext.unconfigure()` returns the context to its initial, unconfigured state.
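The vending rules in the bullets above can be sketched in JavaScript. Since `navigator.gpu` and real canvas contexts only exist in a browser, this sketch substitutes a minimal stub context; the stub's class name, internal fields, and behavior are illustrative assumptions modeled on the described rules, not spec algorithm text.

```javascript
// Hypothetical stub of a GPUCanvasContext, illustrating the vending rules:
// configure() invalidates previously vended textures and records options,
// getCurrentTexture() vends a texture, unconfigure() resets the context.
class StubCanvasContext {
  constructor() {
    this.configuration = null;   // null = initial, unconfigured state
    this.currentTexture = null;  // texture vended for the current frame
  }
  configure({ device, format, usage }) {
    // Reconfiguring invalidates any previously vended texture object.
    if (this.currentTexture) this.currentTexture.valid = false;
    this.configuration = { device, format, usage };
    this.currentTexture = null;
  }
  getCurrentTexture() {
    if (!this.configuration) throw new Error('context is not configured');
    if (!this.currentTexture) {
      this.currentTexture = { format: this.configuration.format, valid: true };
    }
    return this.currentTexture;
  }
  unconfigure() {
    // Return to the initial, unconfigured state.
    if (this.currentTexture) this.currentTexture.valid = false;
    this.configuration = null;
    this.currentTexture = null;
  }
}

const ctx = new StubCanvasContext();
ctx.configure({ device: 'stub-device', format: 'bgra8unorm', usage: 0x10 });
const tex1 = ctx.getCurrentTexture();
ctx.configure({ device: 'stub-device', format: 'rgba8unorm', usage: 0x10 });
console.log(tex1.valid);                     // false: reconfiguring invalidated it
console.log(ctx.getCurrentTexture().format); // 'rgba8unorm'
```

In a real page the same flow would be `canvas.getContext('webgpu')` followed by `configure()` with a `GPUDevice`; the stub exists only so the invalidation rules can be demonstrated outside a browser.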

This structure provides maximal compatibility with optimized paths in native graphics APIs.
In these, typically, a platform-specific "surface" object can produce an API object called a
@@ -1057,14 +1059,16 @@ into.

### Current Texture ### {#canvas-output-current-texture}

-A `GPUSwapChain` provides a "current texture" via `getCurrentTexture()`.
+A `GPUCanvasContext` provides a "current texture" via `getCurrentTexture()`.
For <{canvas}> elements, this returns a texture for the *current frame*:

-- On `getCurrentTexture()`, `[[currentTexture]]` is created if it doesn't exist, then returned.
-- During the "[=Update the rendering=]" step, the browser compositor takes ownership of the
-  `[[currentTexture]]` for display, and that internal slot is cleared for the next frame.
+- On `getCurrentTexture()`, a new `[[drawingBuffer]]` is created if one doesn't exist for the
+  current frame, wrapped in a `GPUTexture`, and returned.
+- During the "[=Update the rendering=]" step, the `[[drawingBuffer]]` becomes readonly. Then, it is
+  shared by the browser compositor (for display) and the page's canvas (readable using
+  drawImage/toDataURL/etc.)
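The per-frame lifecycle described above, together with the readback behavior this commit introduces (drawImage/toDataURL can still observe the buffer presented in the previous frame), can be modeled as a small state machine. The class and method names (`DrawingBufferModel`, `updateTheRendering`, `readBack`) are invented for illustration; this is a sketch of the described semantics, not the spec's algorithms.

```javascript
// Models the [[drawingBuffer]] lifecycle: getCurrentTexture() creates a
// drawing buffer for the current frame if none exists; "update the rendering"
// makes that buffer readonly and shares it between the compositor and the
// canvas, so readback still sees it until a new buffer replaces it.
class DrawingBufferModel {
  constructor() {
    this.drawingBuffer = null;    // buffer for the current frame, if any
    this.presentedBuffer = null;  // readonly buffer shared with the compositor
  }
  getCurrentTexture() {
    if (!this.drawingBuffer) {
      this.drawingBuffer = { contents: 'transparent black', readonly: false };
    }
    return this.drawingBuffer; // same texture until the next frame
  }
  updateTheRendering() {
    if (this.drawingBuffer) {
      this.drawingBuffer.readonly = true;
      this.presentedBuffer = this.drawingBuffer; // compositor + readback share it
      this.drawingBuffer = null;                 // next frame starts fresh
    }
  }
  readBack() {
    // drawImage/toDataURL observe the current frame's buffer if one exists,
    // otherwise the contents presented in the previous frame.
    const buf = this.drawingBuffer ?? this.presentedBuffer;
    return buf ? buf.contents : 'transparent black';
  }
}

const model = new DrawingBufferModel();
model.getCurrentTexture().contents = 'frame 1';
model.updateTheRendering();
console.log(model.readBack()); // 'frame 1': still readable after present
model.getCurrentTexture();     // a new frame's buffer replaces what readback sees
console.log(model.readBack()); // 'transparent black'
```

This mirrors the commit's summary: readback sees the previously presented contents only as long as `getCurrentTexture()` (or configure/unconfigure) has not yet been called this frame.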

-### `getSwapChainPreferredFormat()` ### {#canvas-output-preferred-format}
+### `getPreferredCanvasFormat()` ### {#canvas-output-preferred-format}

Due to framebuffer hardware differences, different devices have different preferred byte layouts
for display surfaces.
@@ -1089,8 +1093,8 @@ As today with WebGL, user agents can make their own decisions about how to expose
capabilities, e.g. choosing the capabilities of the initial, primary, or most-capable display.

In the future, an event might be provided that allows applications to detect when a canvas moves
-to a display with different properties so they can call `getSwapChainPreferredFormat()` and
-`configureSwapChain()` again.
+to a display with different properties so they can call `getPreferredCanvasFormat()` and
+`configure()` again.

#### Multiple Adapters #### {#canvas-output-multiple-adapters}

