Create sketch.webidl #47
Conversation
    void present();
};

interface WebGPURenderingContext : WebGPUSwapChain {
To be able to do canvas.getContext('webgpu')?
Yes
Do we really need it? It's desirable (for Yandex Maps JS API at least) to be able to "drive" several canvases from one "instance" of an engine (so we can create shaders and some other resources once, and also not run into the limitation on the number of contexts). That calls for the ability to create several swap chains, one for each canvas element. And there are some definitions for creating them in the sketch. Doesn't that introduce duplication?
I'm not sure I understand: in this sketch the device is created out of thin air, and multiple canvases can be driven from the same device through their swap chains. A canvas rendering context is the only way to get a swap chain in this sketch.
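To illustrate, here is a hedged sketch of what "multiple canvases driven from one device" would look like under this PR's IDL. `createDevice()` and `renderFrameTo()` are made-up stand-ins (the sketch doesn't specify how the device comes into existence); the context/swap-chain calls follow the sketch.

```js
// Hypothetical: however the device is obtained in the final API.
const device = createDevice();

// Each canvas's rendering context *is* its swap chain in this sketch.
const swapChainA = canvasA.getContext('webgpu');
const swapChainB = canvasB.getContext('webgpu');
swapChainA.configure({ /* WebGPUSwapChainDescriptor */ });
swapChainB.configure({ /* WebGPUSwapChainDescriptor */ });

// Shaders, pipelines, and buffers are created once on the shared device,
// then rendered into whichever canvas's texture is acquired this frame.
renderFrameTo(device, swapChainA.getNextTexture());
renderFrameTo(device, swapChainB.getNextTexture());
swapChainA.present();
swapChainB.present();
```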
Oh, I think I misunderstood the sketch at first. For some reason I missed that WebGPURenderingContext is just a canvas-specific swap chain (am I right?) and thought of it as a wrapper around a swap chain and a device.

Rafael made a comment in the hangouts discussion about constructors being the preferable way to create entities for a Web API. I'd say we could have something like WebGPUCanvasSwapChain and be able to create it with new:

const swapChain = new WebGPUCanvasSwapChain(canvas); // BTW, do we need a device here?
renderFrameTo(swapChain.getNextTexture());
> WebGPURenderingContext is just a canvas-specific swap chain

Yes, that's right.

> constructors being the preferable way to create entities

I think this is one of the few places where this isn't true. A canvas can only have one RenderingContext. getContext is fallible and can return the same object multiple times. For example:

cvs.getContext('webgl') === cvs.getContext('webgl'); // true: the same object is returned
cvs.getContext('2d');                                // claims the canvas for 2D
cvs.getContext('webgl') === null;                    // true: 'webgl' is no longer available
Would it be too difficult to add something like getWebGPUSwapChain to HTMLCanvasElement? I find calling it Context really confusing :(
design/sketch.webidl
Outdated
    void setVertexBuffers(u32 startSlot, sequence<WebGPUBuffer> buffers, sequence<u32> offsets);

    void draw(u32 vertexCount, u32 instanceCount, u32 firstVertex, u32 firstInstance);
    void drawIndexed(u32 indexCount, u32 instanceCount, u32 firstIndex, u32 firstInstance);
Minor thing, but let's not forget baseIndex (or firstVertex, as in Vulkan).
nit: The term used in D3D12 and Vulkan is baseVertex
Done
design/sketch.webidl
Outdated
    WebGPUCompareFunctionEnum compare;
    WebGPUStencilOperationEnum stencilFailOp;
    WebGPUStencilOperationEnum depthFailOp;
    WebGPUStencilOperationEnum passFailOp;
stencilPassOp?
Done
    WebGPUPipelineLayout createPipelineLayout(WebGPUPipelineLayoutDescriptor descriptor);
    WebGPUBindGroup createBindGroup(WebGPUBindGroupDescriptor descriptor);

    WebGPUBlendState createBlendState(WebGPUBlendStateDescriptor descriptor);
It's desirable to be able to set blend state on per-color-attachment basis.
This is already the case with the following:

dictionary WebGPURenderPipelineDescriptor : WebGPUPipelineDescriptorBase {
    // ...
    sequence<WebGPUBlendState> blendState;
    // ...
};

That said, we'll have to discuss whether we want to support independent per-attachment blend states by default, given that it is an optional Vulkan feature.
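A hedged sketch of how per-attachment blending would look with the sequence above. The descriptor fields besides `blendState` are elided, and the contents of the blend state descriptors are illustrative, not taken from the IDL.

```js
// One blend state per color attachment: index i in the sequence
// applies to color attachment i of the render pass.
const opaque   = device.createBlendState({ /* blending disabled */ });
const additive = device.createBlendState({ /* src ONE, dst ONE, op ADD */ });

const pipeline = device.createRenderPipeline({
  // ...
  blendState: [opaque, additive],
  // ...
});
```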
// ****************************************************************************

typedef u32 WebGPULoadOpEnum;
interface WebGPULoadOp {
DONT_CARE?
I'd lean towards a defer: we don't want to introduce uninitialized reads in WebGPU.
This is not an obvious question to dismiss. Given that all memory is (conceptually) initialized on allocation by WebGPU, it's not a security concern for the contents of previous rendering to show up; it's a portability concern. And the cost is not being able to optimize the case where the user knows they are rendering a full-screen quad (for example), so they want neither clearing nor loading of the contents.
The punctuation made this unclear. I was suggesting we defer the discussion and also gave our PoV.
> I'd lean towards a defer: we don't want to introduce uninitialized reads in WebGPU.

I agree. However, a small point: can an application directly read from the current draw attachment? That would introduce a feedback loop, and we would probably want to eliminate such situations anyway. There's a possibility to read that data "indirectly", e.g. via blending, but we can validate against that too (I can't imagine a well-behaved app that would try to draw with blending into an attachment with a DONT_CARE load op).
typedef u32 WebGPUStoreOpEnum;
interface WebGPUStoreOp {
    const u32 STORE = 0;
DISCARD or DONT_CARE?
ditto: defer?
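For context, a hedged sketch of where the load/store ops discussed above would sit in a render pass. The attachment field names here are illustrative, not from this PR's IDL, and DONT_CARE variants are shown only as the deferred options under discussion.

```js
const renderPass = {
  colorAttachments: [{
    texture: swapChain.getNextTexture(),
    // CLEAR: portable, initializes the attachment every frame.
    // LOAD: keeps previous contents (costs bandwidth on tilers).
    // A DONT_CARE load op (deferred) would skip both, for cases
    // where a full-screen draw overwrites every pixel anyway.
    loadOp: WebGPULoadOp.CLEAR,
    // STORE keeps the results; a DISCARD/DONT_CARE store op
    // (deferred) would let tilers drop e.g. transient MSAA or
    // depth data after the pass.
    storeOp: WebGPUStoreOp.STORE,
  }],
};
```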
    const u32 DST_ALPHA = 8;
    const u32 ONE_MINUS_DST_ALPHA = 9;
    const u32 SRC_ALPHA_SATURATED = 10;
    const u32 BLEND_COLOR = 11;
OpenGL, Vulkan, and Metal (and I assume D3D12 too) have a separate enum for constant alpha. At a glance it seems that BLEND_COLOR alone isn't enough, since with a separate constant-alpha enum one can use BLEND_ALPHA as the blending factor for RGB to get a (blendA, blendA, blendA) factor.
I didn't see one in D3D12
// Texture
typedef u32 WebGPUTextureDimensionEnum;
interface WebGPUTextureDimension {
    const u32 e1D = 0;
TEXTURE_nD? It'd be a bit strange to have Hungarian notation in one place :)
Yeah, it's weird. Let's defer this; I'm sure we'll have a bunch of discussions about naming.
Nice work! A few questions/nits from my side (I don't expect them to be addressed immediately)
    const u32 e1D = 0;
    const u32 e2D = 1;
    const u32 e3D = 2;
    // TODO other dimensions (cube, arrays)
D3D12 and Vulkan only use these three variants. Only views use specific variants for cubes, arrays, and cube arrays.
That's not the case for Metal, though: it wants to know whether a texture is a cubemap or not. Let's defer.
    void copyBufferToTexture();
    void copyTextureToBuffer();
    void copyTextureToTexture();
    void blit();
IIRC only Vulkan (and GL) support blitting (including scaling etc). I guess this needs to be emulated on other backends?
Metal supports blitting too: https://developer.apple.com/documentation/metal/mtlblitcommandencoder.
UPD: But it doesn't support scaling, which I had missed.
IDK if we want this but this is a placeholder list of other operations we want to support.
    void blit();

    // Allowed in both compute and render passes
    void setPushConstants(WebGPUShaderStageFlags stage,
How do you define push constants? In Vulkan these are associated with the pipeline layout, but they're not included in the pipeline layout descriptor as far as I can see. Also, mapping push constants to D3D12 is somewhat non-trivial in the case of overlapping push-constant usage across multiple shader stages.
For simplicity, there's a small budget, like 16*4 bytes, allowed on every stage separately. The D3D12 code just needs to update two spots in the root signature instead of just one.
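A hedged sketch of the per-stage budget described above (16 * 4 bytes, each stage set separately). The `setPushConstants` signature is from the sketch; the stage-flag constant names and the data layout are assumptions.

```js
// 16 u32-sized constants = 64 bytes, within the assumed per-stage budget.
const data = new Float32Array(16);
data.set([/* e.g. a 4x4 transform matrix */]);

// The same byte range is set independently per stage; on D3D12 this
// would map to updating two root-signature spots instead of one.
pass.setPushConstants(WebGPUShaderStage.VERTEX,   0, 16, data.buffer);
pass.setPushConstants(WebGPUShaderStage.FRAGMENT, 0, 16, data.buffer);
```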
                          u32 offset,
                          u32 count,
                          ArrayBuffer data);
    void setBindGroup(u32 index, WebGPUBindGroup bindGroup);
Are these associated with a pipeline layout? A pipeline layout changes once a pipeline changes? What happens to the currently bound bind groups? Will the user have to re-set them, assuming they are outdated?
They are associated with a WebGPUBindGroupLayout. At least in NXT there is a mechanism like Vulkan's "descriptor set inheritance". The currently bound bind group at that index is unbound. I'm not sure what you mean by "Will the user have to re-set".
Yes, I mean the inheritance properties from Vulkan. Setting pipeline layouts is not exposed in the current API, and changing the pipeline layout will affect the bound descriptor sets depending on the compatibility of the new and previous pipeline layouts. As it's not really exposed, is the user supposed to re-set all binding points on a pipeline layout change (which is required for D3D12)?
                          u32 count,
                          ArrayBuffer data);
    void setBindGroup(u32 index, WebGPUBindGroup bindGroup);
    void setPipeline((WebGPUComputePipeline or WebGPURenderPipeline) pipeline);
A single binding slot shared by both pipeline types, or one for each pipeline (meaning: will setting a compute pipeline influence the graphics pipeline)?
Setting a compute pipeline in a renderpass is invalid and vice-versa.
Ok, so it's only possible to do it inside of a pass? I mean that Vulkan has two different binding slots which don't interfere with each other, in contrast to D3D12, where changing the compute pipeline requires rebinding the graphics pipeline.
WebGPU will probably have explicit begin/endCompute and begin/endRender, so I guess we can have the bind point set to null at the beginning of passes.
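A hedged sketch of what that would mean in practice. The begin/end method names are speculative (as the comment above says); only `setPipeline` comes from the sketch.

```js
cmd.beginRenderPass(renderPassDescriptor);
cmd.setPipeline(renderPipeline);   // valid: a render pipeline in a render pass
cmd.draw(3, 1, 0, 0);
cmd.endRenderPass();               // pipeline bind point resets to null here

cmd.beginComputePass();
cmd.setPipeline(computePipeline);  // must be set again; nothing is inherited
cmd.dispatch(64, 1, 1);
cmd.endComputePass();
```

With the bind point nulled at pass boundaries, the compute and render pipelines can never observe each other, which sidesteps the Vulkan-vs-D3D12 binding-slot difference raised above.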
// ****************************************************************************

// Fence
interface WebGPUFence {
In which case are fences needed at the current state? Allowing users to reset/reuse resources in use (binding groups for example)? How can we access the fence status on the CPU side?
They are needed when you want to throttle the CPU against GPU execution, but that's it. You can use WebGPUFence::getValue to get its status (a super-set of Vulkan's boolean status).
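A hedged sketch of that throttling pattern, using the monotonically increasing fence value described above. `createFence` and `getValue` are from the sketch; how the fence gets signaled on the queue is left as an assumption, since the sketch doesn't show it here.

```js
const MAX_FRAMES_IN_FLIGHT = 2;
const fence = device.createFence({ /* WebGPUFenceDescriptor */ });
let frame = 0;

function renderFrame() {
  // Don't let the CPU record more than MAX_FRAMES_IN_FLIGHT frames
  // ahead of what the GPU has completed.
  while (fence.getValue() + MAX_FRAMES_IN_FLIGHT < frame) {
    /* wait or yield until the GPU catches up */
  }
  frame++;
  // ... record and submit this frame's work, then arrange for the
  // queue to signal the fence with the value `frame` (assumed API) ...
}
```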
interface WebGPUSwapChain {
    void configure(WebGPUSwapChainDescriptor descriptor);
    WebGPUTexture getNextTexture();
Can multiple textures be accessed without calling present in between?
Not in the sketch; it has Metal-like semantics.
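That is, the cycle is one texture per frame, roughly as sketched below. `getNextTexture` and `present` are from the sketch; `renderInto` is a made-up helper.

```js
function frame() {
  const texture = swapChain.getNextTexture(); // acquire this frame's texture
  renderInto(texture);                        // assumed helper: record + submit
  swapChain.present();                        // must present before the next
                                              // getNextTexture() call
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
```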
    WebGPUCommandEncoder createCommandEncoder(WebGPUCommandEncoderDescriptor descriptor);
    WebGPUFence createFence(WebGPUFenceDescriptor descriptor);

    WebGPUQueue getQueue();
This will always return the same queue object?
In the sketch yes.
Thanks for the comments. Almost all of them are points of discussion, which is the point of the sketch, so we can defer discussions on them. WDYT?
Don't have enough time for a full review, but here are some cursory comments
// Buffer
typedef u32 WebGPUBufferUsageFlags;
interface WebGPUBufferUsage {
We haven't had consensus about whether or not resources should have usages. I don't think we should put this in the IDL until we have consensus.
I'm fairly convinced this is necessary; see my investigation into various hardware in this comment: #37 (comment)
Likewise, it is something that can be discussed after the IDL has landed, so if you have more concerns, let's defer?
};

typedef u32 WebGPUTextureUsageFlags;
interface WebGPUTextureUsage {
Ditto.
interface WebGPU {
    static WebGPUExtensions getExtensions();
    static WebGPUFeatures getFeatures();
    static WebGPULimits getLimits();
Exposing device-specific limits encourages fingerprinting. Instead, we should require that any implementation support a certain minimum set of limits.
In terms of fingerprinting, how is getLimits any different from most of the queries through getParameter (especially those prefixed with MAX)? Perhaps we need to revisit the discussion from before, because I don't think there was clear consensus on this (performance vs. portability/fingerprinting concerns).
> Thanks for the comments. Almost all of them are points of discussion, which is the point of the sketch, so we can defer discussions on them. WDYT?
I agree. Let's get this landed so we can start collaborating on changes. If people prefer the PR-comment format for the first round of discussions, we can keep commenting here after it lands.
Landing as per the discussions in the meeting.
Yep. And then just move some discussions into issues.