
[core] High CPU usage #217

Closed
gen2brain opened this issue Jan 25, 2017 · 7 comments

@gen2brain
Contributor

gen2brain commented Jan 25, 2017

Hello,

The basic_window example is taking 100% of CPU; one whole core is occupied. I tried to use SetConfigFlags(FLAG_VSYNC_HINT) but it doesn't help. This is on Linux with an Intel graphics card. Vsync is enabled; I can confirm that with the glxgears util.

Now, I noticed that when I have both SetTargetFPS(60) and SetConfigFlags(FLAG_VSYNC_HINT), or just SetTargetFPS(60), CPU usage is 100%. If I disable SetTargetFPS() and enable just SetConfigFlags(FLAG_VSYNC_HINT), then CPU usage drops to 10% and everything works smoothly, all without the sound of the CPU cooler :)

Maybe that can help in fixing this issue.
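
For reference, a minimal sketch of the low-CPU setup described above, assuming the stock basic_window example: the VSYNC hint is set before window creation, and SetTargetFPS() is intentionally omitted.

```c
#include "raylib.h"

int main(void)
{
    // Request vsync from the driver before the window/context is created;
    // with SetTargetFPS(60) left out, CPU usage dropped to ~10% here.
    SetConfigFlags(FLAG_VSYNC_HINT);
    InitWindow(800, 450, "raylib [core] example - basic window");

    while (!WindowShouldClose())
    {
        BeginDrawing();
            ClearBackground(RAYWHITE);
            DrawText("Congrats! You created your first window!", 190, 200, 20, LIGHTGRAY);
        EndDrawing();    // frame pacing comes from vsync, not a busy wait
    }

    CloseWindow();
    return 0;
}
```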

Also, I changed this line in core.c:

while (windowMinimized) glfwPollEvents();

to

while (windowMinimized) glfwWaitEvents();

so when the window is minimized, CPU usage drops to 0. Is there any reason to poll for events while the window is minimized?

@raysan5
Owner

raysan5 commented Jan 25, 2017

Hello @gen2brain, thanks for the report.

Actually, this issue was already reported some time ago: #17

Unfortunately, the behaviour is dependent on the graphics driver. For example, on my Surface Pro 3 with an Intel i5 and integrated Intel graphics, when enabling FLAG_VSYNC_HINT CPU usage drops to 10% independently of SetTargetFPS(60).

But I applied your proposed change for the windowMinimized case: d8edcaf

This CPU issue will probably require deeper investigation... it also depends on GLFW3... have you tried other libraries like SDL or SFML? How do they perform?

@gen2brain
Contributor Author

Thanks! I only have a little experience with SDL, which performs much better. But there you have to manually calculate the frame delta and use SDL_Delay to sleep the remaining time. There are probably other methods, but that one was easiest for me; if you don't do anything to limit the frame rate, then CPU usage is also very high. But raylib is much nicer and easier to use :)
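
The SDL pattern mentioned above looks roughly like this (a sketch assuming SDL2; the 60 FPS target and the loop body are illustrative):

```c
#include <SDL2/SDL.h>

int main(void)
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window *win = SDL_CreateWindow("demo", SDL_WINDOWPOS_CENTERED,
                                       SDL_WINDOWPOS_CENTERED, 640, 480, 0);

    const Uint32 targetFrameTime = 1000/60;    // ~16 ms per frame at 60 FPS
    int running = 1;

    while (running)
    {
        Uint32 frameStart = SDL_GetTicks();

        SDL_Event e;
        while (SDL_PollEvent(&e))
            if (e.type == SDL_QUIT) running = 0;

        // ... update and render here ...

        // Sleep away the rest of the frame instead of spinning;
        // this is what keeps CPU usage low.
        Uint32 elapsed = SDL_GetTicks() - frameStart;
        if (elapsed < targetFrameTime) SDL_Delay(targetFrameTime - elapsed);
    }

    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}
```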

I am actually making Go bindings for raylib. Most of it is finished, but I also want to port the physac engine. Raygui, raymath and easings are done, and I ported all the examples. There are a few bugs I have to solve; I will let you know when I release it. I still haven't tried the Android build with my bindings; GLFW3 is not used there, so I am curious about the CPU usage on Android.

@raysan5
Owner

raysan5 commented Jan 25, 2017

Hi @gen2brain,

Actually, I also calculate the frame delta manually inside EndDrawing(), but I investigated a bit further into how SDL_Delay() works (checked the source code) and it uses the Windows function Sleep(ms) (implemented in the kernel32 library). It seems that, depending on the language and OS, the function implementation can vary, and it consumes CPU or not; more info here. So, per that answer, I'm just using the first method (a busy-wait while loop) instead of asking the system to use the x86 HLT instruction. Just for reference, SDL_Delay() on Unix uses a similar method to mine, or nanosleep() if available.
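
Roughly, the difference between the two waiting strategies discussed above looks like this (a hedged sketch; BusyWait/SleepWait are illustrative names, not raylib's actual code):

```c
#include <windows.h>

// Busy wait: spins in a loop until the deadline, keeping one core at 100%.
static void BusyWait(double seconds)
{
    LARGE_INTEGER freq, start, now;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&start);
    do QueryPerformanceCounter(&now);
    while ((double)(now.QuadPart - start.QuadPart)/(double)freq.QuadPart < seconds);
}

// Sleep-based wait: yields to the OS scheduler; coarser resolution (~1 ms+)
// but near-zero CPU usage while waiting.
static void SleepWait(double seconds)
{
    Sleep((DWORD)(seconds*1000.0));
}
```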

I'll try to investigate this issue in depth and do some tests with Sleep(), but I can't give you a deadline... :)

On Android and Raspberry Pi that part of the code works the same way, so the same thing probably happens there too; please let me know if you try it.

> I am actually making Go bindings for raylib. Most of it is finished, but I also want to port the physac engine. Raygui, raymath and easings are done, and I ported all the examples. There are a few bugs I have to solve; I will let you know when I release it.

Wow, that's amazing! Actually, Go was on my list of languages to try! Please keep me updated! :D

@raysan5
Owner

raysan5 commented Jan 26, 2017

Ok, just kept investigating and testing... definitely, it's a raylib issue. I just replaced the busy-wait loop implementation with a Sleep() on Windows and CPU usage drops to <3%, even in the most advanced examples.

Now I need to find alternatives for the other platforms... I'll keep working on this issue...
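
Something along these lines, a sketch of what the cross-platform replacement could look like (the Wait() name and platform switches are illustrative, not the final raylib code):

```c
#if defined(_WIN32)
    #include <windows.h>    // Sleep()
#else
    #include <time.h>       // nanosleep()
#endif

// Wait for the given number of milliseconds, yielding the CPU instead of spinning.
static void Wait(float ms)
{
#if defined(_WIN32)
    Sleep((unsigned int)ms);
#else
    struct timespec req = { 0 };
    req.tv_sec = (time_t)(ms/1000.0f);
    req.tv_nsec = (long)((ms - (float)req.tv_sec*1000.0f)*1000000.0f);
    nanosleep(&req, NULL);  // POSIX counterpart to the Windows Sleep()
#endif
}
```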

@gen2brain
Contributor Author

Hey, great, I am glad I started something and maybe pointed in the right direction. I am not a C expert, so I can't help much with timing functions, but I can help with tests on Linux and other platforms. I have some experience with Android toolchains, and I have an old rpi 1, so I think I can prepare everything for testing. And I have a lot of experience with compiling, cross-compiling, etc.

@raysan5
Owner

raysan5 commented Jan 26, 2017

Hey @gen2brain! You said you have some experience with the Android toolchain; maybe you can help me with this issue: #202

In the meantime, I'm working on this timing issue...

Thanks again for the help! :)

@raysan5
Owner

raysan5 commented Jan 27, 2017

New timing mechanism implemented in commit b681e8c; it requires some testing on platforms other than Windows.
