
Position: Data races and Lacking Synchronisation for GPU Memory Objects are not a robustness concern* #119

Closed
devshgraphicsprogramming opened this issue Nov 16, 2018 · 5 comments


devshgraphicsprogramming commented Nov 16, 2018

With properly handled out-of-bounds reads and writes, data races on memory in UAVs (SSBOs), Textures (Images), Vertex and Constant Buffers cannot have unwanted side-effects outside the memory ranges belonging to the sandboxed WebGPU process.
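For illustration, here is a minimal TypeScript model of the kind of robust access this relies on (not any real API; `BoundBuffer`, `robustLoad` and `robustStore` are made-up names): every load or store is clamped or discarded against the bound range, so even racy or out-of-bounds accesses stay inside the binding.

```ts
// Illustrative sketch only: a toy model of robust buffer access,
// not WebGPU's actual mechanism.
interface BoundBuffer {
  data: Uint32Array; // backing storage owned by the sandboxed process
  offset: number;    // first element visible through the binding
  length: number;    // number of elements visible through the binding
}

function robustLoad(b: BoundBuffer, index: number): number {
  // An out-of-range read returns a defined value instead of touching
  // memory outside the binding.
  if (index < 0 || index >= b.length) return 0;
  return b.data[b.offset + index];
}

function robustStore(b: BoundBuffer, index: number, value: number): void {
  // An out-of-range write is discarded, so a data race between shader
  // invocations can only corrupt data inside the binding, never outside it.
  if (index < 0 || index >= b.length) return;
  b.data[b.offset + index] = value;
}
```

Under this model two invocations racing on the same element produce an unpredictable value, but the damage stays inside the binding, which is the robustness property the argument relies on.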

This is because Index and Indirect Draw Buffers are the only buffer bindings where the data is fetched by fixed-function hardware based on an indirection value read from those buffers, with no possibility of a clamp to make sure only the valid range is accessed.
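A sketch of that indirection (again just a TypeScript model; `fetchVertex` is a made-up name standing in for the fixed-function vertex fetch): the index value itself comes from GPU-writable memory and is then used as an address, so nothing in the shader can clamp it.

```ts
// Illustrative model of the fixed-function indexed vertex fetch.
function fetchVertex(
  indexBuffer: Uint32Array,   // may have been written by the GPU since the last validation
  vertexBuffer: Float32Array, // addressed *through* the index value
  i: number
): number {
  const vertexIndex = indexBuffer[i]; // indirection step, performed by hardware
  // If vertexIndex is garbage, this is an out-of-range fetch on real
  // hardware; in this toy model it simply yields undefined.
  return vertexBuffer[vertexIndex];
}
```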

Correct me if I am wrong.

devshgraphicsprogramming changed the title from "Position: Data races and Lacking Synchronisation is not a robustness concern*" to "Position: Data races and Lacking Synchronisation are not a robustness concern*" on Nov 16, 2018
devshgraphicsprogramming changed the title from "Position: Data races and Lacking Synchronisation are not a robustness concern*" to "Position: Data races and Lacking Synchronisation for GPU Memory Objects are not a robustness concern*" on Nov 16, 2018

kvark commented Nov 20, 2018

First, "Lacking Synchronisation for GPU Memory Objects" needs to be clarified here.

Secondly, the topic sits between "security" and "portability" concerns. Even if an operation is guaranteed to have no unwanted side-effects, be sandboxed, etc., that doesn't necessarily mean that we can afford it, as I noted in #35 (comment).

kvark added the proposal label Nov 20, 2018
@devshgraphicsprogramming (Author)

> doesn't necessarily mean that we can afford it

My position is that you're already sacrificing a lot of performance on sandboxing and security; you cannot afford to sacrifice even more to kind-of ensure portability for ill-behaved programs that misuse the API.

First, "Lacking Synchronisation for GPU Memory Objects" needs to be clarified here.

By lacking synchronisation I mean allowing simultaneous read and write access to GPU resources such as buffers and images. The sole exception would be draw-indirect and index buffers, which would be copied and validated before use if they have been bound with write access since the last validation. By "validated" I mean actually made safe to use, not stopping the command from executing when the data fails to meet validation rules.
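A minimal sketch of what such a "make safe to use" pass could look like (plain TypeScript with made-up names like `sanitizeIndexBuffer`; in practice this would run as a GPU copy plus compute pass): every index is clamped into the valid vertex range, so the draw still executes but can no longer fetch outside the vertex buffer.

```ts
// Illustrative sketch: clamp-on-copy validation of an index buffer
// that may have been written by the GPU since it was last validated.
// Assumes vertexCount > 0.
function sanitizeIndexBuffer(
  rawIndices: Uint32Array, // snapshot copied from the GPU-visible index buffer
  vertexCount: number      // size of the vertex buffer binding, in vertices
): Uint32Array {
  const safe = new Uint32Array(rawIndices.length);
  for (let i = 0; i < rawIndices.length; i++) {
    // Clamp rather than reject: an out-of-range index becomes a valid one,
    // so the draw is never blocked but cannot read out of bounds.
    safe[i] = Math.min(rawIndices[i], vertexCount - 1);
  }
  return safe;
}
```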


kvark commented Nov 20, 2018

I can't find an easy answer here. I agree that it would be great not to pay for the extra validation on every run of the shader/program, but we can't afford non-portable behavior to be out there on the Web. Thus, the question is largely about the feasibility of validation layers in WebGPU.

@litherum

As you have described, the issue is portability, not security. Portability is a matter of degree; it is measured by what percentage of WebGPU programs can be run interoperably (or something like that). Obviously, we want more than 0% portability, and the group has agreed that achieving 100% portability would be impossible without a significant performance hit. Therefore, we’re going to land somewhere in the middle.

As such, portability decisions need to be made by weighing the pros and cons of that particular decision. If we’re considering trading performance for portability, we need to determine what kind of sites would behave non-interoperably, the severity of non-interoperability, and how popular the kinds of techniques are/will be on the Web. We then need to balance that against how much performance is lost, on which kinds of workloads, and how common those workloads are.

Historically, the CG has made most of these decisions based on intuition rather than data, but regardless of the method, we can make informed decisions about each individual trade-off. With each decision, we can move closer toward making an API that has a good balance of portability, performance, usefulness, understandability, and ergonomics.


Kangz commented Sep 2, 2021

Closing. The group agreed to go with implicit synchronization for now with the concept of "usage scopes". Further features to allow applications to better control barriers can go in new proposals.
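For readers unfamiliar with the term, here is a rough TypeScript sketch of the "usage scopes" idea (a conceptual model only, not the WebGPU spec's actual rules): within one scope every use of a resource is recorded, conflicting usages are rejected at validation time, and the implementation inserts whatever barriers are needed between scopes, so applications never express synchronization explicitly.

```ts
// Conceptual model of usage-scope tracking, not the WebGPU spec's algorithm.
type Usage = "read" | "write";

class UsageScope {
  private usages = new Map<string, Set<Usage>>();

  // Record one use of a resource inside this scope and reject
  // read/write conflicts; the real rules distinguish many more usage kinds.
  use(resource: string, usage: Usage): void {
    const set = this.usages.get(resource) ?? new Set<Usage>();
    set.add(usage);
    this.usages.set(resource, set);
    if (set.has("read") && set.has("write")) {
      throw new Error(`conflicting usages of ${resource} in one usage scope`);
    }
  }
}
```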
