
The "pick info texture" in MultiSampled RenderPipeline #740

Open
panxinmiao opened this issue Apr 25, 2024 · 6 comments

Comments

@panxinmiao
Contributor

I encountered a problem in the process of implementing MSAA. Let me try to explain it clearly.

To enable MSAA, we need to enable the MultisampleState on the RenderPipeline, use a multisampled texture in the ColorAttachments, and resolve the multisampled texture into a regular (single-sample) texture when rendering to the final texture buffer.
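For reference, the three pieces involved can be sketched roughly like this, as wgpu/WebGPU-style descriptor dicts (a sketch, not the actual pygfx code; the texture-view names are hypothetical stand-ins):

```python
# Rough sketch of what enabling MSAA requires, using WebGPU-style
# descriptor dicts. The texture-view names here are made-up placeholders.

SAMPLE_COUNT = 4  # a typical MSAA level

# 1. The render pipeline is created with a multisample state.
multisample_state = {
    "count": SAMPLE_COUNT,
    "mask": 0xFFFFFFFF,
    "alpha_to_coverage_enabled": False,
}

# 2. The color attachment renders into a multisampled texture and names
#    a regular single-sample texture as the resolve target; the hardware
#    resolves the samples at the end of the render pass.
color_attachment = {
    "view": "msaa_color_texture_view",       # sample_count == SAMPLE_COUNT
    "resolve_target": "final_texture_view",  # sample_count == 1
    "load_op": "clear",
    "store_op": "store",
}
```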

However, in the current pygfx, the "color texture" and the "pick_info texture" are generated simultaneously by the same RenderPipeline.

Using multisampling for the "pick_info texture" is meaningless, and more importantly, the "pick_info texture" uses the "rguint16" format, which does not support automatic resolve (resolving a multisampled texture into a regular texture).

I can think of two solutions:

  1. Keep the current logic and use a multisampled texture for the Blender's "pick_tex" when MSAA is enabled. Then create an additional RenderPipeline (and related resources) to perform the resolve in a shader.

  2. Separate the generation of the "pick_info_map" from the main rendering shader and process it with an additional, independent RenderPipeline (similar to how shadow maps are generated).

Both solutions are relatively complicated, but the second one may require more changes to the code structure.

BTW, I strongly recommend adopting the second solution. It would provide more flexibility for potential advanced rendering pipelines in the future.

@panxinmiao panxinmiao changed the title The "pick info texture" in MutiSampled RenderPipeline The "pick info texture" in MultiSampled RenderPipeline Apr 25, 2024
@almarklein
Collaborator

Is it possible (with changes to our codebase) that the color texture is multisampled, and the pick texture is not?

With option 2, would there be an additional pass just for the picking info (e.g. with ordered2 we'd get opaque, transparent, pick)?


I don't have a lot of experience with MSAA. IIUC MSAA is a bit like SSAA, in the sense that the target texture has more samples. For fragments that are inside a triangle, the fragment shader is calculated once, to fill all its samples. For fragments that are on the edge of a triangle, the fragment shader is invoked for each sample. Is this about right?

Do you know if it's also possible to force the fragment shader to run for all samples?

If that's the case, MSAA will also help with aa for lines and points, and other objects like grids too. This would be quite a win, because it means that opaque objects are actually fully opaque, which means they don't have to be rendered in the transparency pass. Aside from performance, it also avoids certain artifacts (the black rings in #724).

@panxinmiao
Contributor Author

panxinmiao commented Apr 26, 2024

Is it possible (with changes to our codebase) that the color texture is multisampled, and the pick texture is not?

Yes. To achieve this, we must use separate RenderPipelines to generate the color texture and the pick texture, which is the approach taken by option 2.

With option 2, would there be an additional pass just for the picking info (e.g. with ordered2 we'd get opaque, transparent, pick)?

Yes, I think so.
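Concretely, the frame could then be structured something like this (the names are hypothetical, just to illustrate the pass ordering):

```python
# Option 2: picking becomes its own single-sampled pass, so only the
# color passes use the multisampled target. All names are hypothetical.
FRAME_PASSES = [
    ("opaque",      {"target": "msaa_color_tex", "samples": 4}),
    ("transparent", {"target": "msaa_color_tex", "samples": 4}),
    ("pick",        {"target": "pick_info_tex",  "samples": 1}),  # no MSAA
]
```

Only the color target would need a resolve step at the end of the frame; the pick pass writes directly to a regular texture.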

IIUC MSAA is a bit like SSAA, in the sense that the target texture has more samples. For fragments that are inside a triangle, the fragment shader is calculated once, to fill all its samples. For fragments that are on the edge of a triangle, the fragment shader is invoked for each sample. Is this about right?

Do you know if it's also possible to force the fragment shader to run for all samples?

MSAA and SSAA are very similar (in fact, MSAA can be regarded as an optimized version of SSAA). The biggest difference is that in MSAA the fragment shader is executed only once per fragment (not once per sample), so MSAA performs much better than SSAA.

Here is a diagram illustrating MSAA from the DX11 rasterization documentation, which shows the principle in detail.

[Image: MSAA rasterization diagram from the DX11 documentation]

Like SSAA, MSAA sets up sub-pixel samples for each final pixel.
For each sub-pixel sample (the black dots in the figure), a coverage test is performed first: is the sample inside the triangle? If so, the pixel must be shaded. For performance reasons, the sub-pixel samples within a pixel do not each run their own fragment shader invocation; they share the shading result computed at the pixel center. That is, if at least one of a pixel's sub-pixel samples passes the coverage test (the circled positions in the figure), the fragment shader runs once, with its inputs interpolated at the pixel center (the diamond in the figure), and that single result is written to every covered sub-pixel sample.
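The per-pixel logic described above can be mimicked in a few lines of plain Python, as a toy illustration (the sample offsets and helper names are made up; real hardware uses rotated sample patterns):

```python
# Toy simulation of MSAA coverage for one pixel (illustrative only).
# Four sub-pixel sample offsets; the "fragment shader" runs at most once
# per pixel, and its single result fills only the covered samples.

SAMPLE_OFFSETS = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]

def inside_triangle(p, tri):
    """Point-in-triangle test via signed edge functions."""
    (ax, ay), (bx, by), (cx, cy) = tri
    px, py = p
    d1 = (px - bx) * (ay - by) - (ax - bx) * (py - by)
    d2 = (px - cx) * (by - cy) - (bx - cx) * (py - cy)
    d3 = (px - ax) * (cy - ay) - (cx - ax) * (py - ay)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

def shade_pixel(px, py, tri, shade):
    """Return the 4 sample values of pixel (px, py); None = not covered."""
    coverage = [inside_triangle((px + ox, py + oy), tri)
                for ox, oy in SAMPLE_OFFSETS]
    if not any(coverage):
        return [None] * 4              # fragment shader never runs
    color = shade(px + 0.5, py + 0.5)  # runs ONCE, at the pixel center
    return [color if c else None for c in coverage]
```

A pixel fully inside the triangle gets all four samples filled from a single shader invocation; a pixel fully outside gets none; an edge pixel gets a mix.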

Here, the shading position is always the pixel center. Sometimes the triangle does not cover the pixel center, and shading at the center anyway can produce incorrect results, because vertex attributes get extrapolated to a point outside the triangle. GPU hardware uses centroid sampling to adjust the position: when the pixel center is covered, normal center sampling is performed; when it is not, the GPU selects the nearest sub-pixel sample that passes the coverage test as the shading position.
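The centroid rule itself is small enough to sketch (again just an illustration; `covers` stands in for the hardware coverage test):

```python
import math

def centroid_sample_pos(pixel_center, samples, covers):
    """Where to evaluate the fragment shader under centroid sampling.

    covers(p) -> bool is a stand-in for the hardware coverage test.
    """
    if covers(pixel_center):
        return pixel_center  # normal pixel-center sampling
    covered = [s for s in samples if covers(s)]
    if not covered:
        return None          # no coverage at all: fragment is not shaded
    # Otherwise: the covered sub-sample nearest to the pixel center
    return min(covered, key=lambda s: math.dist(s, pixel_center))
```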

As shown in the image below, each pixel has four sub-pixel samples, and the pixel being shaded is covered by two objects. The red object covers the pixel center while the blue object does not. So the red object is shaded at the pixel center, while the blue object is shaded at the position of sub-pixel sample 1.

[Image: centroid sampling example with two objects covering one pixel]
MSAA uses a special texture type (a multisampled texture) for storage, and like SSAA it consumes more memory: an MSAA texture with 4 samples per pixel occupies 4 times the memory of a regular texture.

After all rendering work is completed, the MSAA render target can be resolved to obtain the final result. Generally, MSAA is resolved directly by the hardware using a box filter, i.e. taking the average of the corresponding sub-pixel samples within each pixel (but this is not supported for uint-format textures like the "pick_info_tex"). After this filtering, a smooth anti-aliased edge is obtained, and the more sample points, the better the result.
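The resolve step amounts to a per-pixel average, which is easy to illustrate, and the example also shows why it makes no sense for integer pick data:

```python
def resolve_pixel(samples):
    """Box-filter resolve of one pixel: average its per-sample RGB colors."""
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n for i in range(3))

# An edge pixel: 3 samples covered by a red triangle, 1 by black background.
edge_pixel = [(1, 0, 0)] * 3 + [(0, 0, 0)]
print(resolve_pixel(edge_pixel))  # (0.75, 0.0, 0.0), a smooth edge color

# Averaging object ids stored in a uint pick texture would produce a new,
# meaningless id, which is why uint formats have no automatic resolve.
```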

If that's the case, MSAA will also help with aa for lines and points, and other objects like grids too. This would be quite a win, because it means that opaque objects are actually fully opaque, which means they don't have to be rendered in the transparency pass. Aside from performance, it also avoids certain artifacts (the black rings in #724).

I think MSAA can definitely help achieve this.

@almarklein
Collaborator

almarklein commented Apr 26, 2024

Also reading into alpha coverage. I googled "msaa alpha coverage" and the top post was from the same person who wrote the pristine grid shader we use in #743 😮

It explains MSAA pretty well, but also its alpha-to-coverage feature, which, if I understand correctly, can be used to anti-alias the edges of all our objects. And maybe it can even be used to realize an alternative alpha blending mechanism 🤔.

https://bgolus.medium.com/anti-aliased-alpha-test-the-esoteric-alpha-to-coverage-8b177335ae4f#620e

@almarklein
Collaborator

In any case, I agree option two makes the most sense. Even if it means we need an extra render pass for pickable objects: now that objects are not pickable by default, only a few objects (or none) participate in that pass in most cases.

@panxinmiao
Contributor Author

if I understand correctly, can be used to anti-alias the edges of all our objects. And maybe it can even be used to realize an alternative alpha blending mechanism 🤔.

To be honest, I do find the current design of Blender a bit difficult to understand, and it feels somewhat complex. 😅

I believe that by configuring the GPUBlendState of the ColorTargetState in the RenderPipeline, we should be able to achieve the same effects we desire, including transparent objects, blending of transparent and opaque objects, and so on.

@almarklein
Collaborator

Well, to support more advanced blend modes like (order independent) weighted transparency, and the "plus" version of that, we need some sort of system. It could be that blender.py can be simplified, but simply tweaking the gpu state will not suffice for these cases, I think.
