Replies: 1 comment
-
Batching is definitely better in this scenario. GPU upload/download are costly operations — not so much because of the amount of data (modern GPUs are bandwidth monsters), but because the operation itself can stall the rendering pipeline for a while regardless of how much data you transfer. So yes, especially for this amount of data, it is much better to make one call than 100 of them. (In general graphics programming this scenario can be optimized depending on how the swap chain/command buffer is set up, but MonoGame has a very old-school approach in that regard, which definitely makes a GPU download/upload a blocking operation.)
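As a rough sketch of the batched readback described above — names like `counterBuffer` and `ModelCount` are illustrative, and this assumes the structured-buffer API from @cpt-max's compute fork, which exposes `GetData()` on buffers:

```csharp
// Hypothetical sketch, assuming cpt-max's MonoGame compute fork.
// One structured buffer holds a counter per model; a single GetData
// call per frame reads all counters back at once.

const int ModelCount = 100; // illustrative

// Created once, e.g. in LoadContent (signature may differ per fork version):
// counterBuffer = new StructuredBuffer(GraphicsDevice, typeof(uint),
//     ModelCount, BufferUsage.None, ShaderAccess.ReadWrite);

uint[] counters = new uint[ModelCount];

// Per frame, after drawing: one readback instead of 100 separate calls.
counterBuffer.GetData(counters);

// counters[i] now holds the approximate pixel count for model i.
```

The key point is that the stall cost is paid once per `GetData()` call, not per byte, so one 400-byte readback is far cheaper than a hundred 4-byte ones.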
-
I am using @cpt-max's custom MonoGame fork to write to a structured buffer from a pixel shader, incrementing a value in a struct that counts approximately how often the pixel shader is invoked (similar to what an occlusion query does). To retrieve the data I call the `GetData()` method to populate an array of one integer. My question is whether it is smart to transfer and retrieve four bytes every frame. Would it be more appropriate to implement some sort of batching mechanism, i.e. the structured buffer holds as many structs as there are models whose pixels have to be counted, and every vertex gets an additional id attribute through which the corresponding struct can be accessed in the pixel shader? Then, instead of retrieving four bytes, I would retrieve at least 400 bytes, say, in one `GetData()` call rather than calling `GetData()` a hundred times. Which approach is more appropriate?
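A minimal HLSL sketch of the batching scheme I have in mind (the buffer name, register slot, and the semantic carrying the id are illustrative):

```hlsl
// One counter per model, written from the pixel shader.
RWStructuredBuffer<uint> Counters : register(u1);

struct PSInput
{
    float4 Position : SV_Position;
    float  ModelId  : TEXCOORD1; // per-vertex id attribute, passed through
};

float4 MainPS(PSInput input) : SV_Target
{
    uint previous;
    // Atomically increment this model's counter once per invocation.
    InterlockedAdd(Counters[(uint)input.ModelId], 1, previous);
    return float4(1, 1, 1, 1);
}
```

With 100 models this gives one `GetData()` call on a 400-byte buffer instead of 100 single-integer readbacks.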
Best regards