Do not require vertexInput in GPURenderPipelineDescriptor #378
Conversation
I find it not very useful (given that most render pipelines will have vertex input) but not harmful either :) So I'll abstain from the vote.
It's helpful for the fullscreen-triangle (FS tri) trick seen in postprocessing shaders (https://web.archive.org/web/20140719063725/http://www.altdev.co/2011/08/08/interesting-vertex-shader-trick/). +1 from me.
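For readers unfamiliar with the trick, here is a minimal sketch of a fullscreen-triangle vertex shader that needs no vertex input at all. It is written in today's WGSL (which postdates this 2019 thread; the API of the time took GLSL/SPIR-V), so treat it as an illustration rather than what contributors would have written then:

```js
// The vertex shader derives positions from the built-in vertex index,
// so the render pipeline needs no vertex buffers bound at all.
const module = device.createShaderModule({
  code: /* wgsl */ `
    @vertex
    fn vs_main(@builtin(vertex_index) i : u32) -> @builtin(position) vec4f {
      // Three indices produce one oversized triangle covering the viewport.
      var pos = array<vec2f, 3>(
        vec2f(-1.0, -1.0), vec2f(3.0, -1.0), vec2f(-1.0, 3.0));
      return vec4f(pos[i], 0.0, 1.0);
    }
  `,
});
```

A single `pass.draw(3)` then covers the screen, with no vertex buffers and hence nothing useful to put in a vertex input declaration.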
@magcius if you know this trick, you are unlikely to be confused or burdened by specifying the empty vertex input declaration. It's not the point of the PR to support anything new; the point, as I understand it, is to make the tutorials a tiny bit nicer.
2019-08-26 meeting resolution: merge.
Hm, if we do this then I think it also makes sense to change [code snippet not preserved in the archived thread] to [code snippet not preserved] (and we need to do the same elsewhere), and change [code snippet not preserved] to [code snippet not preserved].
@kainino0x I've added your suggestions to this PR. Thanks!
Following the WebGPU spec change at gpuweb/gpuweb#378, the vertexInput descriptor in GPURenderPipelineDescriptor is no longer required.

Bug: 877147
Change-Id: Ifdd0ba7ff6af2b8648ad45d029e5ef7b8484f8a8
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/1776023
Commit-Queue: François Beaufort <beaufort.francois@gmail.com>
Commit-Queue: Corentin Wallez <cwallez@chromium.org>
Reviewed-by: Corentin Wallez <cwallez@chromium.org>
Cr-Commit-Position: refs/heads/master@{#691978}
* Add a component type for GPUBGLBinding compatibility (#384)

  In shaders there are several texture types for each dimensionality, depending on their component type. It can be either float, uint, or sint, with maybe depth/stencil in the future if WebGPU allows reading such textures. The component type of a GPUTextureView's format must match the component type of its binding in the shader module. This is for several reasons:

  - Vulkan requires the following: "The Sampled Type of an OpTypeImage declaration must match the numeric format of the corresponding resource in type and signedness, as shown in the SPIR-V Sampled Type column of the Interpretation of Numeric Format table, or the values obtained by reading or sampling from this image are undefined."
  - It is also required in OpenGL for texture units to be complete: a uint or sint texture unit used with a non-nearest sampler is incomplete and returns black texels.

  Similar constraints must exist in other APIs. To encode this compatibility constraint, a new member is added to GPUBindGroupLayoutBinding: a new enum, GPUTextureComponentType, that gives the component type of the texture.

* Make GPUBGLBinding.textureDimension default to 2d

  This is the most common case and avoids having an optional dictionary member with no default value (but that still requires a value for texture bindings).

* unfinished createBindGroupLayout algorithm
* draft of BindGroupLayout details
* draft of BindGroupLayout details
* polish before PR
* fix typo
* replace u32/i32/u64 with normal int types or specific typedefs (#423)
* Do not require vertexInput in GPURenderPipelineDescriptor (#378)
* Add a default for GPURenderPassColorAttachmentDescriptor.storeOp (#376). Supersedes #268.
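As a sketch of the API shape these commits describe, here is what declaring such a binding could look like in the 2019-era JavaScript API (the flat GPUBindGroupLayoutBinding with a `bindings` array; the modern spec has since replaced this with `entries` and nested `texture: { sampleType }` descriptors), so take the member names as period assumptions rather than current API:

```js
// Declare a sampled-texture binding whose shader-side component type is
// 'uint'; per the constraint quoted above, the bound texture view's format
// must have a matching component type.
const bgl = device.createBindGroupLayout({
  bindings: [{
    binding: 0,
    visibility: GPUShaderStage.FRAGMENT,
    type: 'sampled-texture',
    textureDimension: '2d',        // now defaults to '2d', so optional
    textureComponentType: 'uint',  // the new enum added by #384
  }],
});
```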
* Initial spec for GPUDevice.createBuffer (#419)
* Start writing spec for device/adapter, introduce internal objects (#422)
* Move validation rules out of algorithm body and better describe GPUBindGroupLayout internal slots
* Include limits for dynamic offset buffers
* Rename 'dynamic' boolean to 'hasDynamicOffsets'
* Fix indentation for ci bot
* More indentation errors
* Fix var typos
* Fix method definition
* Fix enum references
* Missing </dfn> tag
* Missing </dfn> tag
* Remove bad [= =]
* Fix old constant name
* Half-formed new validation rule structure for createBindGroupLayout
* An interface -> the interface
* Remove old 'layout binding' reference
* fix device lost validation reference
* Fix 'dynamic' typo
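The first commit above adds the initial GPUDevice.createBuffer spec; a minimal usage sketch of that call (this size-plus-usage-bitmask shape has remained stable into today's API):

```js
// Allocate a 256-byte buffer usable as a uniform buffer and as the
// destination of copy operations.
const uniformBuffer = device.createBuffer({
  size: 256,                                        // size in bytes
  usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST,
});
```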
In the case of a vertex shader with no input, vertexInput doesn't make sense. It would be great if we didn't require it in GPURenderPipelineDescriptor. I believe these "do not require X" PRs will improve the learning phase for developers interested in WebGPU:

* vertexStage
* fragmentStage
* vertexInput
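For context, a before/after sketch of what this change means in practice. It is not taken from the PR: the descriptor members (vertexStage, fragmentStage, colorStates, vertexInput) follow the 2019-era draft spec, and `shaderModule` / `pipelineLayout` are assumed to exist:

```js
// Before this change: even a pipeline with no vertex attributes had to
// carry an empty vertexInput declaration.
const pipelineBefore = device.createRenderPipeline({
  layout: pipelineLayout,
  vertexStage: { module: shaderModule, entryPoint: 'vs_main' },
  fragmentStage: { module: shaderModule, entryPoint: 'fs_main' },
  primitiveTopology: 'triangle-list',
  colorStates: [{ format: 'bgra8unorm' }],
  vertexInput: { vertexBuffers: [] },  // boilerplate meaning "no inputs"
});

// After this change: vertexInput can simply be omitted.
const pipelineAfter = device.createRenderPipeline({
  layout: pipelineLayout,
  vertexStage: { module: shaderModule, entryPoint: 'vs_main' },
  fragmentStage: { module: shaderModule, entryPoint: 'fs_main' },
  primitiveTopology: 'triangle-list',
  colorStates: [{ format: 'bgra8unorm' }],
});
```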