texture_2d_depth for loading depth values might be unnecessary? #2094
See also: #1198, 82500965
Thank you for the investigation work!
Good question. I would clearly prefer if it weren't needed. Are there any other requirements that mandate texture_2d_depth?
No other requirements. It's only a separate type on MSL. |
I think we still need texture_2d_depth for comparison sampling.
Yeah, I am definitely OK with keeping "depth" for comparison sampling.
re: OP / question 1: @litherum would it be possible to get input from the Metal team on why this restriction exists (that depth-format textures must be bound as depth2d rather than texture2d<float>)?
FYI, from more porting experience: because you already are forced to specify viewDimension in the pipeline layout, I don't think it's too onerous to require specifying depth at the same time. I'd be OK with moving forward with removing texture_2d_depth and just using the pipeline layout to write the correct texture type in the shader, which sounds like it wouldn't require correspondence with the Metal team. But it would be a nice future improvement to allow float as well.
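For illustration, here is a minimal WGSL sketch of the two spellings in question, using the current spec's texture_depth_2d name (binding names are invented):

```wgsl
// Today: a depth-format texture must be declared with the dedicated depth type.
@group(0) @binding(0) var depth_tex : texture_depth_2d;

// Proposed for plain loads: an ordinary float texture; the bind group layout
// (which already specifies viewDimension) would carry the "this is depth" bit.
// @group(0) @binding(0) var depth_tex : texture_2d<f32>;
```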
@magcius I agree, having this outcome would be totally fine. The reason @kainino0x is trying to dig into the Metal behavior is that, if it turns out to be valid, we wouldn't need any of the extra implementation work to glue together a BGL that disagrees with the shader. So if we can get this answer soon-ish, it would help for sure.
WGSL meeting minutes 2021-09-14
@litherum did you have any chance to make the list of builtins that might need to be emulated in the future for depth textures?
Looking at the docs, there are 2 results:
Meeting: We didn't have time to finish this discussion, but it seemed likely we'll need to go with the pipeline-creation-time code generation solution above.
We also need to decide what to return in the unspecified components. Technically, with the pipeline-time compilation being proposed above, I think we could precisely specify behavior for both depth and stencil, but it doesn't seem necessary.
Thoughts from the meeting:
Meeting: overall agreement that the pipeline-creation-time code generation is an acceptable approach; the open question is whether we should instead take advantage of the de facto behavior of Metal and just allow depth textures to be bound to regular "float" binding points. IIUC:
Action item @litherum: determine whether we need to guard against users calling gather() on the wrong component (we need to know for both depth AND stencil, I think). If so, that rules out option 3 and we have to do option 1/2 (and the same for stencil).
Based on #1266 (comment) + #744 (comment), I think we would actually only be able to bind depth textures to texture_2d<f32>.
From the last meeting we have general agreement to remove the restriction and allow binding depth textures to regular float binding points.
@litherum have you been able to check whether it is a problem on Metal to use stencil / depth with gather operations not on the first component? I'm hoping that it may be OK on stencil, so we can keep it the way it currently is. For depth textures turned into texture_2d<f32> bindings, the pipeline-creation-time rewrite would apply anyway.
WebGPU meeting minutes 2022-02-16
I just wrote the simplest little test program I could. All it does is fill a 2D depth texture with data, bind it to a texture2d<float>, and call gather() on it. Running it gives the following (assume the 4 relevant samples in the depth texture are named "cell1", "cell2", "cell3", and "cell4"):
Note how [cell1, cell1, cell1, 1.0] matches neither of the acceptable formats for sampling stencil textures nor for sampling depth textures. Given that I found this edge case literally immediately after starting to investigate, I don't think we should expect the de facto behavior to be portable.
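(For reference, a hypothetical WGSL analogue of the call under test, with invented names; the observed tuple is from the Metal run described above:)

```wgsl
@group(0) @binding(0) var t : texture_2d<f32>;  // actually backed by a depth texture
@group(0) @binding(1) var s : sampler;

fn gather_red(uv : vec2<f32>) -> vec4<f32> {
  // Each lane should be the red (component 0) value of one of the four
  // footprint texels (cell1..cell4); the Metal run instead observed
  // (cell1, cell1, cell1, 1.0).
  return textureGather(0, t, s, uv);
}
```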
This doesn't match any of the items in your table; is there a typo?
The premise is that it's OK if we don't have portable behavior; the spec already makes loose guarantees for stencil with gather(). I think the question we want to answer is whether it's safe, i.e. that it won't return uninitialized data or cause undefined behavior. The Metal shading language spec just says it's undefined, but it seems unlikely there are more than a few fixed behaviors in practice.
And update the "Reading and Sampling Depth/Stencil Textures" section. Fixes #2094
Went ahead and filed the new proposal as #3115.
I'm not sure this argument has been made before, but option 2 has a problem. Consider a shader like this:
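(The original listing didn't survive; here is a hypothetical WGSL sketch of the kind of shader being described, with invented names.)

```wgsl
@group(0) @binding(0) var s  : sampler;
@group(0) @binding(1) var t0 : texture_2d<f32>;
@group(0) @binding(2) var t1 : texture_2d<f32>;

// One helper function used for every texture in the shader.
fn fetch(t : texture_2d<f32>, uv : vec2<f32>) -> vec4<f32> {
  return textureSampleLevel(t, s, uv, 0.0);
}

@fragment
fn main(@location(0) uv : vec2<f32>) -> @location(0) vec4<f32> {
  // Two call sites with different bindings.
  return fetch(t0, uv) + fetch(t1, uv);
}
```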
Imagine that one of those bindings is backed by a depth texture in some pipeline layouts and by a regular float texture in others: under option 2, the implementation has to specialize the helper for each texture type it is called with.

After some more discussion, we're now comfortable with option 3 instead of option 2, provided the spec indicates that operations which can observe/return the y/z/w channels will observe/return uninitialized and nonportable data. (Sorry for this to come in so late.)
Yes, you'll have to either inline or duplicate functions; however, the number of copies you need is the number of call sites with distinct texture-type arguments, not the number of textures bound to the pipeline. You already require the pipeline layout at shader compilation time, and that tells you the texture type for a given binding.
Option 2 gives implementations the freedom to choose between rewriting the shader, and relying on de facto behavior, by adding validation that makes the first possible. It also means that, if we add texture operations in the future that aren't safe on depth textures, we can reject those at pipeline creation time (because we know whether they're used with depth textures or not). So far, feedback has been that the extra validation is acceptable from a developer standpoint. For these reasons, I think we should keep the previous resolution of option 2.
Right - that makes this worse, not better. In general, function calls are expected to be more numerous than bindings. Here's an example program which describes one thing I'm worried about:
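(Again, the original listing is missing; a hypothetical WGSL sketch of the pattern, with invented names:)

```wgsl
@group(0) @binding(0) var s : sampler;
@group(0) @binding(1) var albedo : texture_2d<f32>;
@group(0) @binding(2) var normal : texture_2d<f32>;
@group(0) @binding(3) var shadow : texture_2d<f32>; // may be depth-backed
@group(0) @binding(4) var gloss  : texture_2d<f32>; // may be depth-backed

fn fetch(t : texture_2d<f32>, uv : vec2<f32>) -> vec4<f32> {
  return textureSampleLevel(t, s, uv, 0.0);
}

@fragment
fn main(@location(0) uv : vec2<f32>) -> @location(0) vec4<f32> {
  // Four call sites: if any argument can be depth-backed, the compiler must
  // emit a separate specialized copy of fetch() for each texture-type mix.
  return fetch(albedo, uv) + fetch(normal, uv)
       + fetch(shadow, uv) + fetch(gloss, uv);
}
```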
Above, we end up having to make many copies of the helper function. Here's another example of something else I'm worried about:
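(Another hypothetical reconstruction, with invented names:)

```wgsl
@group(0) @binding(0) var s : sampler;
@group(0) @binding(1) var a : texture_2d<f32>;
@group(0) @binding(2) var b : texture_2d<f32>;

fn leaf(t : texture_2d<f32>, uv : vec2<f32>) -> vec4<f32> {
  return textureSampleLevel(t, s, uv, 0.0);
}

fn middle(t : texture_2d<f32>, uv : vec2<f32>) -> vec4<f32> {
  return leaf(t, uv) * 0.5;
}

fn outer(t : texture_2d<f32>, uv : vec2<f32>) -> vec4<f32> {
  return middle(t, uv) + middle(t, uv * 2.0);
}

@fragment
fn main(@location(0) uv : vec2<f32>) -> @location(0) vec4<f32> {
  // If either a or b may be depth-backed, specializing outer() forces
  // specialized copies of middle() and leaf() too: the duplication propagates
  // through the whole call graph, not just the immediate caller.
  return outer(a, uv) + outer(b, uv);
}
```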
This shows how the blowup is global rather than local. The conclusion I'm coming to is that, if the information about what is bound as a depth texture is present in the bind group layout but absent from the shader source, it's too easy for an author to accidentally create a binary-size footgun.
This tradeoff seems wrong to me. If compilers are going to rely on the de facto behavior in any situation, then option 2 doesn't seem to have much value; if the de facto behavior can be trusted, then option 3 is strictly better. The value of option 2 amounts to "the compiler will intentionally blow up the binary size of your program to generate texture operations that follow the letter of the law." I'm making a judgement call here: blowing up the binary size of the program is more harmful than relying on the de facto behavior. And if no compilers are actually going to blow up the binary size of the program, then it seems unfortunate to require authors to supply information which won't end up being used. Do Tint or Naga currently plan on specializing generated code based on the texture types in the pipeline layout?
This is a key observation. Given this, I'm pretty much convinced it makes sense to do option 3.
Meeting: Re-resolved on option 3.
We currently have a special requirement that all depth textures must use a special texture_2d_depth type in the WGSL shader. While it's common to have special shadow sampler types for doing comparison tests, it's not common for shaders that merely load depth values to require one. This is a somewhat annoying requirement for engines, though not an impossible one; I'd like to see if it is strictly necessary.

This requirement seems to originate from the Metal Shading Language Specification, which says:
However, I looked at the common shader tools for Metal, and found:
These two tools cover a substantial share of the games shipping on Unity and Unreal. So it seems that these operations are "de facto" spec'd to work.
I think this could be evidence that we could drop the texture_2d_depth requirement for simply loading depth textures, though it could still be required for doing comparison sampling.
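For illustration, a minimal WGSL sketch of that split, using the current spec's texture_depth_2d spelling (binding names are invented):

```wgsl
// Plain loads would no longer need the depth type:
@group(0) @binding(0) var depth_as_float : texture_2d<f32>;

fn load_depth(p : vec2<i32>) -> f32 {
  return textureLoad(depth_as_float, p, 0).x;
}

// Comparison ("shadow") sampling would still use the dedicated type:
@group(0) @binding(1) var shadow_map : texture_depth_2d;
@group(0) @binding(2) var shadow_cmp : sampler_comparison;

fn shadow_factor(uv : vec2<f32>, ref_depth : f32) -> f32 {
  return textureSampleCompare(shadow_map, shadow_cmp, uv, ref_depth);
}
```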
The second issue is that loading from a depth texture seems to produce undefined values for the second, third, and fourth components, as seen in the Metal Shading Language Specification:
(This is a bizarre contradiction in the spec: depth2d's special load function only returns one component, so there are no "unspecified components"; how could they even be undefined? This gives me some confidence that the texture loads are actually spec'd.)
On new enough devices we could use texture swizzles, but that doesn't work on all the platforms we want to support. However, given that I expect loading the G/B/A channels of a depth texture to be relatively rare in content, I think it could be solved by compiling an extra shader at pipeline creation time on those devices, only when the shader accesses G/B/A of a depth texture load; see the sketch below. But others can speak to the difficulty of such an approach.
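A hypothetical sketch of what that pipeline-creation-time rewrite could generate, expressed in WGSL (an illustration of the idea, not a spec'd mechanism; the fill values chosen here are just one plausible option):

```wgsl
// The author declared the binding as texture_2d<f32> and read all 4 channels.
// When the pipeline layout says the binding is a depth texture, the
// implementation could instead generate:
@group(0) @binding(0) var t : texture_depth_2d;

fn load_rgba(p : vec2<i32>) -> vec4<f32> {
  // Expand the single depth value into the vec4 shape the original code
  // expects, giving G/B/A fixed values instead of undefined ones.
  return vec4<f32>(textureLoad(t, p, 0), 0.0, 0.0, 1.0);
}
```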