The "pick info texture" in MultiSampled RenderPipeline #740
Is it possible (with changes to our codebase) that the color texture is multisampled, while the pick texture is not? With option 2, would there be an additional pass just for the picking info (e.g. with ordered2 we'd get opaque, transparent, pick)?

I don't have a lot of experience with MSAA. IIUC, MSAA is a bit like SSAA, in the sense that the target texture has more samples. For fragments that are inside a triangle, the fragment shader is evaluated once to fill all of the pixel's samples. For fragments that are on the edge of a triangle, the fragment shader is invoked for each sample. Is this about right?

Do you know if it's also possible to force the fragment shader to run for all samples? If that's the case, MSAA will also help with anti-aliasing for lines and points, and other objects like grids too. This would be quite a win, because it means that opaque objects are actually fully opaque, so they don't have to be rendered in the transparency pass. Aside from performance, it also avoids certain artifacts (the black rings in #724).
Yes, to achieve this, we must use different RenderPipelines to generate the color texture and the pick texture separately, which is the approach taken by Option 2.
Yes, I think so.
MSAA and SSAA are very similar (in fact, MSAA can be regarded as an optimized version of SSAA). There is a diagram illustrating MSAA in the DX11 rasterization documentation that shows how it works in detail. Like SSAA, MSAA sets up sub-pixel samples for each final pixel.

Normally, the actual shading position is the center of the pixel. Sometimes, however, the triangle does not cover the pixel center, and shading at the center in that case can produce an incorrect result. The GPU hardware uses centroid sampling to adjust the shading position: when the pixel center is covered, normal pixel-center shading is performed; when the pixel center is not covered by the triangle, the GPU selects the nearest sub-pixel sample that passes the coverage test as the shading point.

As shown in the image below, each pixel corresponds to four sub-pixel samples, and the pixel to be shaded is covered by two objects. The red object covers the center of the pixel, while the blue object does not. Since the red object covers the pixel center, shading is performed directly at the center; for the blue object, the shading point is placed at the position of sub-pixel sample 1.
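To make the coverage-test and centroid-sampling behavior described above concrete, here is a toy pure-Python model of a single MSAA pixel with four sub-samples. This is not real GPU code; the names (`shade_pixel`, `SAMPLE_OFFSETS`, the `inside_triangle`/`shade` callbacks) are made up for illustration.

```python
# Offsets of the 4 sub-pixel samples relative to the pixel center
# (a simplified regular pattern; real hardware uses rotated grids).
SAMPLE_OFFSETS = [(-0.25, -0.25), (0.25, -0.25), (-0.25, 0.25), (0.25, 0.25)]

def shade_pixel(px, py, inside_triangle, shade):
    """Toy model of MSAA shading for one pixel.

    `inside_triangle(x, y) -> bool` is the coverage test,
    `shade(x, y) -> color` stands in for the fragment shader.
    Returns (color, coverage_mask); color is None if no sample is covered.
    """
    cx, cy = px + 0.5, py + 0.5
    covered = [inside_triangle(cx + dx, cy + dy) for dx, dy in SAMPLE_OFFSETS]
    if not any(covered):
        return None, covered
    # Centroid sampling: shade at the pixel center if it is covered,
    # otherwise fall back to a covered sub-sample position.
    if inside_triangle(cx, cy):
        pos = (cx, cy)
    else:
        i = covered.index(True)
        pos = (cx + SAMPLE_OFFSETS[i][0], cy + SAMPLE_OFFSETS[i][1])
    # The fragment shader runs ONCE per pixel; its result is written
    # to every covered sub-sample.
    return shade(*pos), covered
```

The key point the model shows: the shader is evaluated once per pixel, and only the coverage mask is computed per sample.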
After all rendering work is completed, the MSAA render target can be resolved to obtain the final result. Generally, MSAA is resolved directly by the hardware using a box filter, i.e. taking the average color of the corresponding sub-pixel samples within each pixel (but not supported for
I think MSAA can definitely help achieve this.
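The hardware box-filter resolve mentioned above is just a per-pixel average of the sub-samples; a minimal sketch (plain Python, `resolve_msaa` is a hypothetical name):

```python
def resolve_msaa(samples_per_pixel):
    """Box-filter resolve: average the sub-pixel samples of each pixel.

    `samples_per_pixel` is a list with, for each pixel, a list of
    RGB tuples (one per sub-sample). Returns one RGB tuple per pixel.
    """
    resolved = []
    for samples in samples_per_pixel:
        n = len(samples)
        resolved.append(tuple(sum(s[i] for s in samples) / n for i in range(3)))
    return resolved
```

This averaging is exactly why the resolve only makes sense for color data, not for integer IDs.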
Also reading into alpha coverage. I googled "msaa alpha coverage" and the top post was from the same person who wrote the pristine grid shader we use in #743 😮 It explains MSAA pretty well, but also its alpha-to-coverage feature, which, if I understand correctly, can be used to anti-alias the edges of all our objects. And maybe it could even be used to realize an alternative alpha blending mechanism 🤔. https://bgolus.medium.com/anti-aliased-alpha-test-the-esoteric-alpha-to-coverage-8b177335ae4f#620e
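The core idea of alpha-to-coverage can be sketched in a few lines: the fragment's alpha is converted into the fraction of sub-samples marked as covered, so the resolve step then blends the edge for free. A toy model (real hardware additionally dithers the mask spatially; `alpha_to_coverage` is a made-up name):

```python
def alpha_to_coverage(alpha, num_samples=4):
    """Map a fragment's alpha value to an MSAA sample coverage mask.

    Roughly alpha * num_samples of the samples are marked covered;
    the MSAA resolve then averages these into a partial-opacity edge.
    """
    n = max(0, min(num_samples, round(alpha * num_samples)))
    return [i < n for i in range(num_samples)]
```

With 4x MSAA this gives only five distinct opacity levels per pixel, which is why the linked article spends so much effort on sharpening the alpha gradient at the edge.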
In any case, I agree option two makes the most sense. Even if it means we need an extra render pass for pickable objects: now that objects are not pickable by default, only a few (or no) objects participate in that pass in most cases.
To be honest, I do find the current design of Blender a bit difficult to understand, and it feels somewhat complex. 😅 I believe that by configuring the GPUBlendState of the ColorTargetState in the RenderPipeline, we should be able to achieve the same effects we desire, including transparent objects, blending of transparent and opaque objects, and so on.
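For reference, configuring the blend state per color target might look roughly like this with wgpu-py's dict-based descriptors. This is a sketch under the assumption that the WebGPU enum strings below are accepted as-is; the variable names are made up:

```python
# Classic "over" alpha blending as a GPUBlendState-style dict.
classic_alpha_blend = {
    "color": {
        "src_factor": "src-alpha",
        "dst_factor": "one-minus-src-alpha",
        "operation": "add",
    },
    "alpha": {
        "src_factor": "one",
        "dst_factor": "one-minus-src-alpha",
        "operation": "add",
    },
}

# A ColorTargetState-style dict embedding the blend state.
color_target = {
    "format": "bgra8unorm-srgb",
    "blend": classic_alpha_blend,
    "write_mask": 0xF,  # i.e. wgpu.ColorWrite.ALL
}
```

Swapping in different factor/operation combinations here is how one would express additive or weighted blending per pipeline.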
Well, to support more advanced blend modes like (order independent) weighted transparency, and the "plus" version of that, we need some sort of system. It could be that
I encountered a problem in the process of implementing MSAA. Let me try to explain it clearly.
To enable MSAA, we need to set the MultisampleState in the RenderPipeline, use a multisampled texture in the ColorAttachments, and resolve the multisampled texture into a regular texture when rendering to the final texture buffer.
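The pieces listed above might look roughly like this as wgpu-py-style descriptor dicts. This is a sketch, not pygfx's actual code; `msaa_view` and `final_view` are placeholders for real texture views, and the dict fields assume the snake_case WebGPU descriptor shape:

```python
SAMPLE_COUNT = 4

# Passed as `multisample=` when creating the render pipeline.
multisample_state = {
    "count": SAMPLE_COUNT,
    "mask": 0xFFFFFFFF,
    "alpha_to_coverage_enabled": False,
}

def color_attachment(msaa_view, final_view):
    """Color attachment for the render pass: draw into the multisampled
    view, and let the hardware resolve into `resolve_target` at the
    end of the pass."""
    return {
        "view": msaa_view,
        "resolve_target": final_view,
        "clear_value": (0, 0, 0, 0),
        "load_op": "clear",
        "store_op": "store",
    }
```

Note that the pipeline's sample count and the attachment texture's sample count must match, which is part of why mixing multisampled and non-multisampled targets in one pipeline is a problem.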
However, in current pygfx, the "color texture" and "pick_info texture" are generated simultaneously in the same RenderPipeline.
Using multisampling for the "pick_info texture" is meaningless, and more importantly, the "pick_info texture" uses the "rguint16" format, which does not support automatic resolve (resolving from a multisampled texture into a regular texture).
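The reason a shader-based resolve would be needed for the pick texture is that integer object IDs cannot be averaged; a manual resolve has to select one sample instead. A toy sketch of that selection (the name `resolve_pick` is hypothetical):

```python
def resolve_pick(samples_per_pixel):
    """Resolve a multisampled integer pick texture.

    Averaging object IDs (as the hardware box filter does for color)
    would produce nonsense IDs, so we simply select one sample per
    pixel -- here, sample 0.
    """
    return [samples[0] for samples in samples_per_pixel]
```

In a real implementation this selection would live in a small fullscreen-pass shader reading from the multisampled pick texture.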
I see two possible solutions:

1. Keep the current logic, and use a multisampled texture for Blender's "pick_tex" when MSAA is enabled. Then create an additional RenderPipeline (and related resources) to perform the texture resolve in a shader.
2. Separate the generation of the "pick_info_map" from the main rendering shader, and process it with an additional, independent RenderPipeline (similar to how shadow maps are generated).
Both solutions are relatively complicated, but the second solution may require more changes to the code structure.
BTW, I strongly recommend adopting the second solution. This approach provides greater flexibility for potential advanced rendering pipelines in the future.