
Excessive CPU usage with vsync + NVidia #1168

Open
slipher opened this issue May 30, 2024 · 3 comments

Comments

@slipher
Member

slipher commented May 30, 2024

On my Windows 10 machine with an Nvidia card, setting r_swapInterval to any value from -4 to -1 or 1 to 4 caps the FPS as expected. But despite the lower FPS, it increases the CPU usage. I did some tests with the 0.54.1 release, with all cvars at defaults except resolution and fullscreen, looking at the plat23 alien base. At 125 FPS without vsync, the CPU usage is 5%. With any vsync value enabled (which gives possible frame rates of 60, 30, 20, or 15), the CPU usage is 8%. That means roughly all of one core, as the denominator here is 12 cores.

The problem does not occur with Intel graphics.

The problem does not occur in the main menu. So it can't be explained by something as simple as always busy-looping until the monitor is ready...

@illwieckz
Member

illwieckz commented May 30, 2024

> The problem does not occur in the main menu.

The main menu does a lot of IPC while reading cvar values for the (not displayed, but still active) preferences options windows; maybe that inserts dozens and dozens of sleep calls into an existing busy loop?

@slipher
Member Author

slipher commented May 31, 2024

> The problem does not occur in the main menu.

> The main menu does a lot of IPC while reading cvar values for the (not displayed, but still active) preferences options windows; maybe that inserts dozens and dozens of sleep calls into an existing busy loop?

I don't follow this theory. If there were a busy wait, it would happen once per frame, at the point where we request the completed frame to be presented. I don't see how the graphics driver could observe how many cvar syscalls we made. The engine is single-threaded by default, after all.

@illwieckz
Member

Ah yes, with vsync the busy loop would be in the graphics driver, not in the engine…
