Validation of whole shader modules #974
Comments
Don't we know which device features are available at
To go further, I think we should force some things to be checked at
We briefly touched on that in the last WGSL call: it would be nice if you could have entry points that use specific features. I.e. the same module having entry points for subgroup operations and entry points without them (as a fallback). In this case, shader module creation ("compilation") is totally fine, but at pipeline creation we'd compare the list of features required by the entry points with what the device supports.
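The pipeline-creation check described above can be sketched as a simple set comparison. This is a hypothetical illustration, not the spec's actual mechanism: the `requiredFeatures` list per entry point and all names here are made up for the example.

```javascript
// Hypothetical sketch: at pipeline creation, compare the features an entry
// point requires (recorded when the module was "compiled") against the
// features the device supports. Data shapes are illustrative only.
function checkEntryPointFeatures(entryPoint, deviceFeatures) {
  const missing = entryPoint.requiredFeatures.filter(
    (f) => !deviceFeatures.has(f)
  );
  if (missing.length > 0) {
    throw new Error(
      `Entry point "${entryPoint.name}" requires unsupported features: ` +
        missing.join(", ")
    );
  }
}

// One module may hold a subgroup-based entry point and a fallback; only
// the entry point actually selected at pipeline creation is checked.
const moduleEntryPoints = [
  { name: "mainSubgroups", requiredFeatures: ["subgroups"] },
  { name: "mainFallback", requiredFeatures: [] },
];
const supported = new Set(); // a device without subgroup support

checkEntryPointFeatures(moduleEntryPoints[1], supported); // fallback: OK
// checkEntryPointFeatures(moduleEntryPoints[0], supported); // would throw
```

The point is that module creation succeeds unconditionally here; the error surfaces only for the pipeline that actually uses the feature-dependent entry point.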
Discussed at the 2020-08-11 meeting. Resolution: accept this and move it into Needs Specification.
There's a possible complication here, which is how much cross-stage work WebGPU and drivers are going to do, which might affect your "portable performance" goal. Some drivers (basically NVIDIA) are a lot more aggressive about cross-stage optimizations, so they might notice that a vertex shader export isn't used in a pixel shader, and DCE all the way back up. This makes the compilation stage kind of pointless: it does the bare minimum error checking required by the standard, but defers most, if not all, compilation work to PSO creation, with options to do more shader compilation and shader replacement in the background on special driver threads. Other drivers don't have this implemented; e.g. AMD's official driver does not do that many cross-stage DCE optimizations to the best of my knowledge, so games that have only been tested on NVIDIA drivers might run more poorly on AMD. There is -- possibly! -- an opportunity for middleware like WebGPU to apply cross-stage optimizations, but it's a tricky tradeoff, and it's always difficult to do serious performance analysis on a black-box driver.
@magcius I agree with the input data you referenced (i.e. a lot of driver work happens at linking), but not with the "This makes the compilation stage kind of pointless" statement. Please note that this issue doesn't mention performance: it's focused on correctness. From that perspective, we can parse, load, and validate everything in a module before any linking is done. This is work that we have to do anyway, and deferring it to later brings no benefits and only confuses users. The example I provided on the call is a user having a shader module that is not used in any pipeline. If the class (1) or (2) work is deferred to linking by the implementation, the user will not get any errors, but a different implementation will throw an error.
#1241 fixes this partially and there's the notion of "spurious errors". Can this be closed? |
This is spawned from #889, related to #644
The issue is split into 3 sections: problem statement, investigation, and proposal. Please consider these parts to be separate when reviewing.
Problem
The main question is: what can we validate at `createShaderModule` ("compiling"), and what should we put in the spec, versus what we validate at `createXxxPipeline` ("linking")? The question has 2 sides:
Investigation
I can think of splitting all the shader work into 3 stages.
(1) Things we can do at compilation for the module as a whole:
- e.g. `texture_sample` ([wgsl] Adding textures to the spec. #909) with the wrong number of texture coordinates

(2) Things we can do at compilation for any function:
For entry points specifically, we can detect binding collisions (#889) here.
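As a rough illustration of the per-entry-point binding-collision check referenced above (#889), the logic reduces to spotting duplicate (group, binding) pairs among the resources an entry point uses. This is a hypothetical sketch; the resource record shape is invented for the example and does not reflect any implementation.

```javascript
// Hypothetical sketch: two resources used by the same entry point must
// not share a (group, binding) pair. Runs per entry point, so it fits in
// class (2): checkable at module compilation, no pipeline needed.
function findBindingCollisions(resources) {
  const seen = new Map();
  const collisions = [];
  for (const r of resources) {
    const key = `${r.group}:${r.binding}`;
    if (seen.has(key)) {
      collisions.push([seen.get(key), r]); // record the clashing pair
    } else {
      seen.set(key, r);
    }
  }
  return collisions;
}

// Example: "tex" and "buf" both claim group 0, binding 0.
const resources = [
  { name: "tex", group: 0, binding: 0 },
  { name: "buf", group: 0, binding: 0 },
  { name: "samp", group: 0, binding: 1 },
];
```

An implementation that runs this eagerly at `createShaderModule` can report the collision before any pipeline is created, which is the behavior argued for below.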
(3) Things that we have to do at link time:
Proposal
We should check for shader validity early (i.e. (1) and (2) at shader compilation time) and report errors eagerly. Letting invalid code be dead weight is only going to serve as a footgun, and I don't see how it would make our implementors' lives easier either.
The only things we have to validate at linking (e.g. `createRenderPipeline`) are basically the interface matching rules, plus device features.

Whether or not we allow implementations to defer the work to linking is something we can compromise on. However, forcing implementations to be silent at shader compilation time is something I feel strongly against.