Best way to render a buffer generated by a compute shader? #2193
I'm trying to understand Markus Schütz's technique for rendering point clouds efficiently with compute shaders (he gave an excellent talk about it at the May WebGL meetup: https://www.youtube.com/watch?v=OIfqWD5NlNc). One thing I'm stuck on: how do you efficiently take a buffer generated by a compute shader and draw it to the screen? Do you treat that buffer as an input to a simple fragment shader that just reads it and outputs each value as a pixel color? Or is there a way to use the output of a compute shader on the screen more "directly"?

For context, the general idea of this technique as I understand it is that in some cases, when rendering point clouds, it can be faster to write pixel values from a compute shader than to rasterize the points directly as point primitives.

The paper talks about doing a "resolve pass" to copy the contents of the compute shader buffer to a texture. I'm not sure if that's special terminology, or if it's the same thing as a fragment shader that draws pixels from that input buffer into a render target?
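For what it's worth, a "resolve pass" in this context is usually exactly that: a fullscreen draw whose fragment shader reads the compute-generated buffer and writes each value into the render target. A minimal WGSL sketch of the idea, assuming the compute pass filled a storage buffer of packed RGBA8 colors and that the buffer/uniform names (`framebuffer`, `uniforms.width`) are my own, not from the paper:

```wgsl
// Hypothetical resolve pass: a single fullscreen triangle whose fragment
// shader copies the compute-generated buffer into the render target.

struct Uniforms { width : u32 }
@group(0) @binding(0) var<uniform> uniforms : Uniforms;
@group(0) @binding(1) var<storage, read> framebuffer : array<u32>;

@vertex
fn vs_main(@builtin(vertex_index) i : u32) -> @builtin(position) vec4f {
  // One oversized triangle covering the whole screen; no vertex buffer needed.
  var pos = array(vec2f(-1.0, -1.0), vec2f(3.0, -1.0), vec2f(-1.0, 3.0));
  return vec4f(pos[i], 0.0, 1.0);
}

@fragment
fn fs_main(@builtin(position) coord : vec4f) -> @location(0) vec4f {
  // Map this fragment's pixel coordinate to a linear buffer index.
  let idx = u32(coord.y) * uniforms.width + u32(coord.x);
  return unpack4x8unorm(framebuffer[idx]); // packed RGBA8 -> vec4f color
}
```

So it is the same mechanism you describe, just given a name: the draw is a `draw(3)` with no vertex data, and all the real work happens in the fragment shader's buffer read.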
Replies: 2 comments 4 replies
Hi @OmarShehata,

If each point is a single texel, it might be more efficient to skip the typical rasterization stages and write directly to a storage texture or buffer. Point and line primitives can have poor performance on some hardware, and currently WGSL does not support point sizes greater than one.

If the points are all single texels and you want to accumulate the values (two points projected to the same coordinate result in twice the value), it might make sense to use atomicAdd() on a storage buffer, and then use that for presentation.

If the points need to be larger than a single texel, be nicely rounded, or have more advanced blending, it might be better to render them as instanced quads.

I wrote a particle sample last week that you might find interesting:

Cheers,
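To illustrate the atomicAdd() accumulation idea above, here is a hedged WGSL sketch of the compute side: each invocation projects one point and atomically bumps a per-pixel counter in a storage buffer. The binding layout, the `Point`/`Camera` structs, and the projection math are my own illustrative assumptions, not the sample's actual code:

```wgsl
// Hypothetical accumulation pass: one invocation per point, with
// overlapping points summed via atomicAdd (the "twice the value" case).

struct Point  { position : vec4f }
struct Camera { view_proj : mat4x4f, screen_size : vec2f }

@group(0) @binding(0) var<uniform> camera : Camera;
@group(0) @binding(1) var<storage, read> points : array<Point>;
@group(0) @binding(2) var<storage, read_write> counts : array<atomic<u32>>;

@compute @workgroup_size(64)
fn accumulate(@builtin(global_invocation_id) id : vec3u) {
  if (id.x >= arrayLength(&points)) { return; }

  let clip = camera.view_proj * points[id.x].position;
  if (clip.w <= 0.0) { return; } // behind the camera

  // Clip space -> NDC -> pixel coordinates (y flipped for screen space).
  let ndc = clip.xy / clip.w;
  let pixel = (ndc * vec2f(0.5, -0.5) + 0.5) * camera.screen_size;
  if (any(pixel < vec2f(0.0)) || any(pixel >= camera.screen_size)) { return; }

  let idx = u32(pixel.y) * u32(camera.screen_size.x) + u32(pixel.x);
  // Two points landing on the same pixel accumulate, hence the atomic.
  atomicAdd(&counts[idx], 1u);
}
```

A later fullscreen pass (or a second compute pass writing to a storage texture) would then tone-map or normalize `counts` for presentation.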
I think this is a general question about graphics APIs. There isn't such a thing as "draw buffer to screen" at all.