
Create sketch.webidl #47

Merged
merged 2 commits into master on Feb 14, 2018

Conversation

@Kangz (Contributor) commented Feb 13, 2018

No description provided.

void present();
};

interface WebGPURenderingContext : WebGPUSwapChain {

To be able to call canvas.getContext('webgpu')?

Contributor Author

Yes

@dmikis Feb 15, 2018

Do we really need it? It's desirable (for the Yandex Maps JS API at least) to be able to "drive" several canvases from one "instance" of an engine (so we can create shaders and some other resources once, and also not run into the limitation on the number of contexts). That calls for the ability to create several swap chains, one for each canvas element. And there are some definitions for creating them in the sketch. Doesn't that introduce duplication?

Contributor Author

I'm not sure I understand, in this sketch the device is created out of thin air, and multiple canvases can be driven from the same device through their swapchain. A canvas rendering context is the only way to get a swapchain in this sketch.
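To make that model concrete, here is a pseudocode sketch against the proposed (non-final) API in this PR; getWebGPUDevice and renderFrameTo are hypothetical stand-ins, not part of the sketch:

```js
// Pseudocode only: one device drives two canvases through their swap chains.
const device = getWebGPUDevice();            // hypothetical; the sketch creates the device "out of thin air"
const chainA = canvasA.getContext('webgpu'); // a WebGPURenderingContext is a WebGPUSwapChain
const chainB = canvasB.getContext('webgpu');
chainA.configure(descriptor);                // WebGPUSwapChainDescriptor
chainB.configure(descriptor);
// Shaders, pipelines, and other resources are created once on `device` and reused:
renderFrameTo(device, chainA.getNextTexture());
renderFrameTo(device, chainB.getNextTexture());
chainA.present();
chainB.present();
```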


Oh, I think I misunderstood the sketch at first. For some reason I missed that WebGPURenderingContext is just a canvas-specific swap chain (am I right?) and thought of it as a wrapper around both a swap chain and a device.

Rafael made a comment in the hangouts discussion about constructors being the preferable way to create entities for a Web API. I'd suggest having something like WebGPUCanvasSwapChain and being able to create it with new:

const swapChain = new WebGPUCanvasSwapChain(canvas); // BTW, do we need device here?

renderFrameTo(swapChain.getNextTexture());

@kainino0x (Contributor) Feb 16, 2018

WebGPURenderingContext is just a canvas-specific swap chain

Yes, that's right.

constructors being preferable way to create entities

I think this is one of the few places where this isn't true. A canvas can only have one RenderingContext. getContext is fallible and can return the same object multiple times. For example:

cvs.getContext('webgl') === cvs.getContext('webgl') // true: the same object is returned
cvs.getContext('2d')                                // null: a 'webgl' context was already created on this canvas
cvs.getContext('webgl') === null                    // false: the existing 'webgl' context is still returned
                                                    // (it would be null had '2d' been requested first)

@dmikis Feb 19, 2018

Would it be too difficult to add something like getWebGPUSwapChain to HTMLCanvasElement? I find calling it a Context really confusing :(

void setVertexBuffers(u32 startSlot, sequence<WebGPUBuffer> buffers, sequence<u32> offsets);

void draw(u32 vertexCount, u32 instanceCount, u32 firstVertex, u32 firstInstance);
void drawIndexed(u32 indexCount, u32 instanceCount, u32 firstIndex, u32 firstInstance);

Minor thing, but let's not forget baseIndex (or firstVertex, as in Vulkan).


nit: the term used in D3D12 and Vulkan is baseVertex

Contributor Author

Done
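For reference, a possible post-rename signature (illustrative only; the argument order is not settled here):

```webidl
void drawIndexed(u32 indexCount, u32 instanceCount, u32 firstIndex,
                 u32 firstInstance, u32 baseVertex);
```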

WebGPUCompareFunctionEnum compare;
WebGPUStencilOperationEnum stencilFailOp;
WebGPUStencilOperationEnum depthFailOp;
WebGPUStencilOperationEnum passFailOp;

stencilPassOp?

Contributor Author

Done

WebGPUPipelineLayout createPipelineLayout(WebGPUPipelineLayoutDescriptor descriptor);
WebGPUBindGroup createBindGroup(WebGPUBindGroupDescriptor descriptor);

WebGPUBlendState createBlendState(WebGPUBlendStateDescriptor descriptor);

It's desirable to be able to set blend state on per-color-attachment basis.

Contributor Author

This is already the case with the following:

dictionary WebGPURenderPipelineDescriptor : WebGPUPipelineDescriptorBase{
    // ...
    sequence<WebGPUBlendState> blendState;
    // ...
};

That said, we'll have to discuss whether we want to support it by default, given that it is an optional Vulkan feature.

// ****************************************************************************

typedef u32 WebGPULoadOpEnum;
interface WebGPULoadOp {

DONT_CARE?

Contributor Author

I'd lean for a defer: we don't want to introduce uninitialized reads in WebGPU.

Contributor

This is not an obvious question to dismiss. Given that all memory is (conceptually) initialized on allocation by WebGPU, having the contents of previous rendering show up is not a security concern; it's a portability concern. And the cost is not being able to optimize the case where the user knows they are rendering a full-screen quad (for example), so they don't want either clearing or storing the contents.

Contributor Author

The punctuation made this unclear: I was suggesting we defer the discussion, and also giving our point of view.


I'd lean for a defer: we don't want to introduce uninitialized reads in WebGPU.

I agree. However, a small point: can an application directly read from the current draw attachment? That would introduce a feedback loop, and we would probably want to eliminate such situations anyway. There's a possibility of reading that data "indirectly", e.g. via blending, but we can validate against that too (I can't imagine a well-behaved app trying to draw with blending into an attachment with a DONT_CARE load op).
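To make the trade-off concrete, this is the shape of the enum under discussion (constant values are illustrative only; DONT_CARE is shown purely as the deferred option):

```webidl
typedef u32 WebGPULoadOpEnum;
interface WebGPULoadOp {
    const u32 CLEAR = 0;        // attachment is cleared to a specified value
    const u32 LOAD = 1;         // previous contents are preserved
    // const u32 DONT_CARE = 2; // deferred: skips both clearing and loading, but
    //                          // makes previous contents observable (a portability concern)
};
```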


typedef u32 WebGPUStoreOpEnum;
interface WebGPUStoreOp {
const u32 STORE = 0;

DISCARD or DONT_CARE?

Contributor Author

ditto: defer?

const u32 DST_ALPHA = 8;
const u32 ONE_MINUS_DST_ALPHA = 9;
const u32 SRC_ALPHA_SATURATED = 10;
const u32 BLEND_COLOR = 11;

OpenGL, Vulkan, and Metal (and I assume D3D12 too) have a separate enum value for the constant alpha. At a glance it seems that BLEND_COLOR alone isn't enough, since one can use BLEND_ALPHA as the blending factor for RGB to get a (blendA, blendA, blendA) factor.

Contributor Author

I didn't see one in D3D12.
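If a constant-alpha factor were added later, it could look like the following (hypothetical additions continuing the sketch's numbering, mirroring Vulkan's CONSTANT_ALPHA factors and Metal's blendAlpha; not in the sketch):

```webidl
// Hypothetical, for illustration only:
const u32 BLEND_ALPHA = 12;
const u32 ONE_MINUS_BLEND_ALPHA = 13;
```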

// Texture
typedef u32 WebGPUTextureDimensionEnum;
interface WebGPUTextureDimension {
const u32 e1D = 0;

TEXTURE_nD? It'd be a bit strange to have Hungarian notation in only one place :)

Contributor Author

Yeah, it's weird. Let's defer this; I'm sure we'll have a bunch of discussions about naming.

@msiglreith left a comment

Nice work! A few questions/nits from my side (I don't expect them to be addressed immediately)

const u32 e1D = 0;
const u32 e2D = 1;
const u32 e3D = 2;
// TODO other dimensions (cube, arrays)


D3D12 and Vulkan only use these three variants; only views use specific variants for cubes, arrays, and cube arrays.

Contributor Author

That's not the case for Metal, though: it wants to know whether a texture is a cubemap or not. Let's defer.

void copyBufferToTexture();
void copyTextureToBuffer();
void copyTextureToTexture();
void blit();


IIRC only Vulkan (and GL) support blitting (including scaling etc). I guess this needs to be emulated on other backends?

@dmikis Feb 14, 2018

Metal supports blitting too: https://developer.apple.com/documentation/metal/mtlblitcommandencoder.

UPD: But it doesn't support scaling; I missed that.

Contributor Author

IDK if we want blit(), but this is a placeholder list of other operations we want to support.

void blit();

// Allowed in both compute and render passes
void setPushConstants(WebGPUShaderStageFlags stage,


How do you define push constants? In Vulkan they are associated with the pipeline layout, but they're not included in the pipeline layout descriptor as far as I can see. Also, mapping push constants to D3D12 is somewhat non-trivial in the case of overlapping push constant usage across multiple shader stages.

Contributor Author

For simplicity there's a small amount, like 16*4 bytes, allowed for every stage separately. The D3D12 backend just needs to update two spots in the root signature instead of one.
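A pseudocode illustration of that per-stage budget, using the sketch's setPushConstants signature (the flag names WebGPUShaderStage.VERTEX/FRAGMENT and the `pass` object are assumptions, not from the sketch):

```js
// 16 * 4 bytes = sixteen u32/f32 values, available to each stage separately.
const constants = new Float32Array(16);
pass.setPushConstants(WebGPUShaderStage.VERTEX,   0, 16, constants.buffer);
pass.setPushConstants(WebGPUShaderStage.FRAGMENT, 0, 16, constants.buffer);
// On D3D12 each stage's range maps to its own root-signature slot.
```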

u32 offset,
u32 count,
ArrayBuffer data);
void setBindGroup(u32 index, WebGPUBindGroup bindGroup);


Are these associated with a pipeline layout? Is the pipeline layout changed once a pipeline changes? What happens to the currently bound bind groups? Will the user have to re-set them, assuming they are outdated?

Contributor Author

They are associated with a WebGPUBindGroupLayout. At least in NXT there is a mechanism like Vulkan's "descriptor set inheritance", and the currently bound bind group at that index is unbound. I'm not sure what you mean by "Will the user have to re-set".


Yes, I mean the inheritance properties from Vulkan. Setting pipeline layouts is not exposed in the current API, and changing the pipeline layout will affect the bound descriptor sets depending on the compatibility of the new and previous pipeline layouts. As it's not really exposed, is the user supposed to reset all binding points on a pipeline layout change (which is required for D3D12)?

u32 count,
ArrayBuffer data);
void setBindGroup(u32 index, WebGPUBindGroup bindGroup);
void setPipeline((WebGPUComputePipeline or WebGPURenderPipeline) pipeline);


A single binding slot shared by both pipeline types, or one for each (meaning: will setting a compute pipeline influence the graphics pipeline)?

Contributor Author

Setting a compute pipeline in a renderpass is invalid and vice-versa.


OK, so it's only possible to do it inside of a pass? I mean that Vulkan has two different binding slots which don't interfere with each other, in contrast to D3D12, where changing the compute pipeline requires rebinding the graphics pipeline.

Contributor Author

WebGPU will probably have explicit begin/endCompute and begin/endRender, so I guess we can have the bind point set to null at the beginning of passes.

// ****************************************************************************

// Fence
interface WebGPUFence {


In which cases are fences needed in the current state? Allowing users to reset/reuse resources in use (bind groups, for example)? How can we access the fence status on the CPU side?

Contributor Author

They are needed when you want to throttle the CPU against the GPU execution, but that's it. You can use WebGPUFence::getValue to get its status (a superset of Vulkan's boolean status).
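A pseudocode sketch of such CPU throttling, assuming the fence's value increases as submitted work completes; `encodeAndSubmitFrame` and `signalFence` are hypothetical stand-ins for however submission and signaling are spelled:

```js
// Keep at most MAX_FRAMES_IN_FLIGHT frames of work queued ahead of the GPU.
const MAX_FRAMES_IN_FLIGHT = 2;
let submitted = 0;
function frame() {
  if (submitted - fence.getValue() >= MAX_FRAMES_IN_FLIGHT) {
    // GPU is too far behind; skip this tick instead of queuing more work.
    requestAnimationFrame(frame);
    return;
  }
  submitted++;
  encodeAndSubmitFrame(queue);          // hypothetical helper
  signalFence(queue, fence, submitted); // hypothetical: fence reaches `submitted` when the frame finishes
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
```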


interface WebGPUSwapChain {
void configure(WebGPUSwapChainDescriptor descriptor);
WebGPUTexture getNextTexture();


Can multiple textures be accessed without calling present in between?

Contributor Author

Not in the sketch, it's Metal-like semantics.

WebGPUCommandEncoder createCommandEncoder(WebGPUCommandEncoderDescriptor descriptor);
WebGPUFence createFence(WebGPUFenceDescriptor descriptor);

WebGPUQueue getQueue();


This will always return the same queue object?

Contributor Author

In the sketch yes.

@Kangz commented Feb 14, 2018

Thanks for the comments. Almost all of them are points of discussion, which is the point of the sketch, so can we defer discussing them? WDYT?

@litherum (Contributor) left a comment

Don't have enough time for a full review, but here are some cursory comments


// Buffer
typedef u32 WebGPUBufferUsageFlags;
interface WebGPUBufferUsage {
Contributor

We haven't had consensus about whether or not resources should have usages. I don't think we should put this in the IDL until we have consensus.

Contributor Author

I'm fairly convinced this is necessary; see my investigation into various hardware in this comment: #37 (comment)

Likewise, it's something that can be discussed after the IDL lands, so if you have more concerns let's defer?

};

typedef u32 WebGPUTextureUsageFlags;
interface WebGPUTextureUsage {
Contributor

Ditto.

interface WebGPU {
static WebGPUExtensions getExtensions();
static WebGPUFeatures getFeatures();
static WebGPULimits getLimits();
Contributor

Exposing device-specific limits encourages fingerprinting. Instead, we should require that a certain set of limits be supported by any implementation.

Contributor

In terms of fingerprinting, how is getLimits any different from most of the queries through getParameter (especially those prefixed with MAX)? Perhaps we need to revisit the discussion from before because I don't think there was clear consensus on this (performance vs. portability/fingerprinting concerns).

@kainino0x (Contributor) left a comment

Thanks for the comments. Almost all of them are points of discussion, which is the point of the sketch, so can we defer discussing them? WDYT?

I agree. Let's get this landed so we can start collaborating on changes. If people prefer the PR-comment format for the first round of discussions, we can keep commenting here after it lands.

@Kangz commented Feb 14, 2018

Landing as per the discussions in the meeting.

@Kangz Kangz merged commit 8ffe46f into master Feb 14, 2018
@dmikis commented Feb 15, 2018

If people prefer the PR-comment format for the first round of discussions, we can keep commenting here after it lands.

Yep. And then just move some discussions into issues.


7 participants