I want to visualize dynamic data in a rectangular area inside my imgui UI, and I'm rendering the data each frame into a texture that then gets rendered using an imgui button.
The problem is that this causes huge CPU usage (the CPU core running this is pinned at full usage).
I used the Very Sleepy profiler to investigate:
Screenshots/Video
Btw, my data is currently rendered as simple 2d quads (rects), several thousand small rects.
It looks like this:
So most time is spent in DrvPresentBuffers, which is part of the Nvidia OpenGL driver.
(The memcpy is from copying the rect data when constructing the vertex buffer before rendering.)
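For illustration, the per-frame vertex buffer construction looks roughly like this (a simplified sketch; the struct layouts and names are made up, not the actual application code):

```cpp
#include <cstdint>
#include <vector>

// Hypothetical rect and vertex layouts -- just an illustration of batching
// all rects into one vertex buffer that is uploaded once per frame.
struct Rect   { float x, y, w, h; uint32_t color; };
struct Vertex { float x, y; uint32_t color; };

// Append 6 vertices (two triangles) per rect so the whole batch
// can be submitted to the GPU in a single draw call.
std::vector<Vertex> BuildVertexBuffer(const std::vector<Rect>& rects)
{
    std::vector<Vertex> out;
    out.reserve(rects.size() * 6);
    for (const Rect& r : rects)
    {
        Vertex tl = { r.x,       r.y,       r.color };
        Vertex tr = { r.x + r.w, r.y,       r.color };
        Vertex bl = { r.x,       r.y + r.h, r.color };
        Vertex br = { r.x + r.w, r.y + r.h, r.color };
        out.insert(out.end(), { tl, tr, bl, tr, br, bl });
    }
    return out;
}
```

The memcpy shows up because all rect data is copied into such a buffer every frame, even when the data has not changed.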
Another symptom of this is that the UI input event handling lags (e.g. moving the mouse over buttons or table rows that are highlighted when hovering makes the lag very obvious). It seems that each frame's time is not enough to process everything fast enough because DrvPresentBuffers takes so long each frame.
Is DrvPresentBuffers the function that swaps the back buffer with the displayed front buffer?
Is this high CPU usage caused by using the texture almost immediately as source for rendering to the screen after using it as target for rendering, such that the GPU stalls for some reason?
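One way to test this hypothesis would be to double-buffer the render target, so the texture sampled by the UI in a given frame is never the one just written. A minimal sketch of the ping-pong bookkeeping (the texture handles are placeholders, not real GPU objects):

```cpp
#include <cstddef>

// Ping-pong between two texture slots: render into one while the UI
// samples the other, so the GPU never reads a texture mid-write.
struct PingPongTarget
{
    unsigned int tex[2] = { 0, 0 }; // placeholder GPU texture handles
    size_t write_index = 0;

    unsigned int& WriteTexture() { return tex[write_index]; }      // render target this frame
    unsigned int& ReadTexture()  { return tex[1 - write_index]; }  // sampled by the UI this frame
    void Swap() { write_index = 1 - write_index; }                 // call once per frame after rendering
};
```

If the stall disappears with this scheme, the read-after-write dependency was indeed the cause.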
What would be the best way to reduce the CPU usage?
Would it make sense to not use a texture but render to the screen directly after imgui has rendered everything each frame?
(By reserving a rect area and storing its coords and size.)
But then it would draw over any tooltips, and I currently have translucent buttons drawn over the data widget, which would also no longer work:
Which strategy do other applications using imgui employ for custom widgets that render dynamic data with several thousand small shapes? Do they all use imgui's built-in shape rendering? Is it fast enough with ~10k shapes?
(I prefer to ask this before I rewrite my renderer again. I used another GUI lib before, where it was too slow because each rect got its own node in the widget graph, which caused huge processing overhead and CPU usage when rendering 10k small rects.)
Standalone, minimal, complete and verifiable example:
Unfortunately that is not easily possible because it's part of a large application; I hope it's possible to figure out a way to resolve this from the info above.
As you state you are rendering this texture yourself, it looks like your issue isn't really a Dear ImGui issue but CPU/GPU time spent rendering your texture, so I am not sure how I can help here. We don't know how you are rendering it: how many draw calls, state changes, which data formats and shaders are involved, etc. A million things could be your bottleneck, so consider using RenderDoc to debug your application and check whether things are happening the way you expect.
Which strategy do other applications use with imgui for rendering custom widgets rendering dynamic data with several thousand small shapes? Do they all use imgui's built-in shape rendering? Is it fast enough with ~10k shapes?
You can easily try it yourself with a single for loop drawing rectangles.
On my fancy desktop PC, rendering 10k AddRectFilled() calls takes about 0.5 ms of CPU time in an optimized x64 build (GPU time mostly depends on covered surface), and 1.2 ms of CPU time in a debug x64 build. You can test it with:
ImGuiIO& io = ImGui::GetIO();
ImDrawList* draw_list = ImGui::GetForegroundDrawList();
if (io.KeyShift) // Hold Shift to compare unthrottled frame-time with/without
    for (int n = 0; n < 10000; n++)
        draw_list->AddRectFilled(ImVec2(0, 0), ImVec2(n * 0.01f, n * 0.01f), IM_COL32_WHITE);
Of course, your real application would have more overhead to fetch the correct data, etc. But generally 10k is an amount of shapes you can expect to reach with ImDrawList primitives.
If your data doesn't change, you may want to render it into the texture once and avoid re-rendering all of it every frame.
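The usual pattern for this is a dirty flag: re-render into the texture only when the underlying data has changed, and otherwise reuse the cached texture. A sketch (the names are illustrative, and the actual render-to-texture pass is stubbed out):

```cpp
// Cache the rendered texture and only regenerate it when the data changes.
struct CachedView
{
    bool dirty = true;   // set whenever the underlying data is modified
    int  render_count = 0;

    void MarkDataChanged() { dirty = true; }

    // Called once per frame; the expensive texture render only runs when needed.
    void Update()
    {
        if (!dirty)
            return;          // reuse the cached texture, no GPU work
        render_count++;      // stand-in for the actual render-to-texture pass
        dirty = false;
    }
};
```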
Another symptom of this is that the UI input event handling lags (e.g. moving the mouse over buttons or table rows that are highlighted when hovering makes the lag very obvious).
To handle inputs better at a very low framerate, if you cannot keep your main/UI thread fast enough, you can potentially trickle inputs using an IO queue such as https://gist.github.com/ocornut/8417344f3506790304742b07887adf9f (I'd like to have one in core imgui at some point). Of course, the better solution is to keep a decent framerate :)
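The linked gist implements this in full; the core idea is to buffer incoming input events and release them gradually, e.g. at most one mouse-position update per UI frame, so hover states still register at low framerates. A much-reduced sketch of that idea (not the gist's actual API):

```cpp
#include <deque>
#include <utility>

// Buffer incoming mouse positions and release at most one per UI frame,
// so intermediate hover positions are not lost when frames are slow.
struct InputTrickleQueue
{
    std::deque<std::pair<float, float>> mouse_positions;

    void Push(float x, float y) { mouse_positions.emplace_back(x, y); }

    // Returns true and writes the next queued position, if any remain.
    bool PopForFrame(float& x, float& y)
    {
        if (mouse_positions.empty())
            return false;
        x = mouse_positions.front().first;
        y = mouse_positions.front().second;
        mouse_positions.pop_front();
        return true;
    }
};
```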
Version/Branch of Dear ImGui:
Version: https://github.com/Gekkio/imgui-rs
Branch: master
Back-end/Renderer/Compiler/OS
Back-ends: imgui-glium-renderer (OpenGL) from https://github.com/Gekkio/imgui-rs
Operating System: Win 8.1