Proposal: Run all index buffers through a compute shader validator #117
Comments
To avoid compromising on performance you could introduce immutable read-only index buffers and indirect draw buffers (indirect draw would need a fixed offset).
@devshgraphicsprogramming I think we can validate without any extra info (like the read-only index buffers), in theory:
I'm inclined to propose that the WebGPU MVP doesn't support index buffers modified on the GPU, since this is quite a headache, but eventually we can do that.
It's possible for me to upload a buffer that contains an unused index that's too high, or to upload a single buffer that holds both Uint8 and Uint16 indices and select between them with a buffer offset when binding the index buffer. I have done both of these before. So "the max range of an index buffer" isn't a thing; it's "the max index range of a draw call". WebGL always keeps a shadow copy of the buffer on the CPU, which is how it can validate this.
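The point above can be illustrated with a small Python sketch (function names and the byte-level layout are my own, purely for illustration): the same byte buffer yields completely different "maximum index" answers depending on the offset and index format that a particular draw call binds, so validation has to happen per draw call, not per buffer.

```python
import struct

def max_index_of_draw(buffer_bytes, offset, index_format, count):
    """Return the largest index referenced by one hypothetical draw call.

    index_format is 'uint8' or 'uint16'; offset and count describe the
    slice of the buffer this particular draw actually reads.
    """
    size, fmt = {'uint8': (1, 'B'), 'uint16': (2, 'H')}[index_format]
    raw = buffer_bytes[offset:offset + count * size]
    return max(struct.unpack(f'<{count}{fmt}', raw))

# One buffer holding 4 Uint8 indices followed by 4 Uint16 indices.
buf = bytes([0, 1, 2, 250]) + struct.pack('<4H', 0, 1, 2, 40000)

# The same buffer, two different draws, two different answers:
print(max_index_of_draw(buf, offset=0, index_format='uint8', count=4))   # 250
print(max_index_of_draw(buf, offset=4, index_format='uint16', count=4))  # 40000
```

Whether 250 or 40000 is out of range depends entirely on the vertex buffers bound for that draw, which is exactly why "the max range of an index buffer" is not a meaningful property of the buffer alone.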
Keeping a CPU copy of the buffer and validating it in cases where it's populated by the GPU makes that GPU population rather useless...
I was advocating for a native compute shader to do the validation and correction such that the data never has to do a GPU-CPU roundtrip and also the CPU is not involved in this at all!
Then I'm not quite sure what WebGPU brings to the table over WebGL 2.0, or how it can do desktop-class graphics in a browser.
You have to enforce [native] API alignment rules, so you cannot, e.g., specify an index buffer offset that is not a multiple of the index type's size. Furthermore, the validation compute shader would have to run just before the buffer is used as an index buffer, so it can check against the actual index type and range used by the draw.
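A minimal sketch of the binding-time checks described above (the function name and rules shown are illustrative; real native APIs have additional constraints):

```python
INDEX_SIZES = {'uint16': 2, 'uint32': 4}  # common native index formats

def check_index_binding(offset, index_format, buffer_size, count):
    """Reject index-buffer bindings that violate typical native
    alignment and bounds rules (hypothetical validator sketch)."""
    size = INDEX_SIZES[index_format]
    if offset % size != 0:
        raise ValueError('offset must be a multiple of the index type size')
    if offset + count * size > buffer_size:
        raise ValueError('draw would read past the end of the index buffer')
    return True

# A well-aligned, in-bounds binding passes:
assert check_index_binding(offset=4, index_format='uint16',
                           buffer_size=64, count=6)
```

Note that these checks are cheap and static; the expensive part is validating the index *values* themselves, which is what the compute-shader pass is for.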
Yes and that is awful.
My idea was to generate a copy of the buffer range with appropriately clamped indices.
I'm talking about the MVP (Minimum Viable Product) here: basically, a usable demo. In an actual 1.0 release we'll absolutely need to support GPU-generated indices, there is no question here. Sorry about being unclear!
In that case I can rest easy.
Closing. The group decided to do 1) CPU validation of the index range for …
This is for security, let me explain why.
Why Normal Draws are insecure
The GPU hardware consumes an index buffer, which usually sits in device-local memory; that index buffer tells the hardware which offsets in the vertex buffers to fetch as input data.
This stage is not programmable, hence it's impossible to prevent out-of-bounds vertex data from being fetched.
In WebGL this is not a problem, because the intermediate layer actually checks each index buffer for out-of-range indices and modifies them while rendering.
However, if you plan to allow SSBOs/UAVs and using their outputs as index buffers, then checking index buffer validity on the CPU is no longer possible (unless you want to pull the index buffer data back from device-local memory before EVERY draw, completely killing performance by a factor of >100x).
What I propose
A compute shader can validate the index buffer, create a copy containing only valid indices, and use that copy for drawing (GPU-to-GPU validation).
That way the data stays in device-local memory and no unnecessary performance tradeoff happens (unmodified buffers need not be revalidated).
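The proposed pass can be sketched as follows. This is plain Python standing in for a compute shader (one "invocation" per index; the clamp-to-last-vertex policy is just one possible sanitization strategy, chosen here for illustration):

```python
def validate_indices_gpu_style(indices, vertex_count):
    """Emulate the proposed validation pass: each invocation copies one
    index into the output buffer, clamping any value that would cause an
    out-of-bounds vertex fetch. A real implementation would run as a
    compute shader, so the data never leaves device-local memory."""
    clamp_to = vertex_count - 1
    return [i if i < vertex_count else clamp_to for i in indices]

src = [0, 1, 2, 2, 99999, 3]   # GPU-generated, possibly hostile indices
validated = validate_indices_gpu_style(src, vertex_count=100)
print(validated)               # [0, 1, 2, 2, 99, 3]
```

The draw call then consumes `validated` instead of `src`, so the non-programmable vertex-fetch stage can never read out of bounds, and no GPU-to-CPU roundtrip ever occurs.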
P.S. Similar applies to Indirect Draw Buffers