
High CPU usage from syscall conflict in pf-kernel 5.15 #5346

Open
literal-0x4e opened this issue Nov 25, 2021 · 6 comments

@literal-0x4e

When using proton-6.3-8 or Proton Experimental, the proton/wine components (not counting the actual game itself) produce some insane CPU usage, on average some 14 cores' worth (!!!) of load. I tried various games and it happens with every single one.
It looks something like this (tested with Disgaea 1 here, but it's more or less the same with other games, too):
[screenshot: SS_2021_11_25_22_09_26]
Note how wine/proton adds up to 14 cores' worth of load, while the game itself only uses 1/8th (12.4%) of a core.

I bisected this regression and found this to be the offending commit:

commit 50b4c8785f389f0840bfc6a9ee3f233c9de2da48 (HEAD, tag: proton-6.3-8-rc9, tag: proton-6.3-8, origin/proton_6.3-rc, origin/proton_6.3, origin/HEAD, proton_6.3)
Author: Arkadiusz Hiler <ahiler@codeweavers.com>
Date:   Mon Nov 15 18:48:05 2021 +0200

    update wine

Unfortunately, this commit just updates a bunch of submodules, and I am not sure how to bisect submodules correctly, so I cannot pinpoint what exactly causes this regression at this time. If you require it of me, though, I can try to figure that out and bisect the submodules too, to find the exact change.

For the sake of comparison, this is what the CPU usage looks like for that very same game when running on commit ea7eaa41d15ad3c27b53297946ae9a2ca2ed3974 (i.e. the commit right before the "bad" one). Note how wine/proton itself uses basically no CPU time:
[screenshot: SS_2021_11_25_22_55_36]

P.S. I do have a 16c/32t processor, so I got cores to spare either way, but this high usage is just silly – I'll get blamed for global warming if this keeps up 😂👌

@kisak-valve
Member

kisak-valve commented Nov 25, 2021

Hello @literal-0x4e, this sounds like #5271. What kernel are you using, and does it have out-of-tree patches that include the former futex2 patch set, which was obsoleted by futex_waitv? Does setting a game's launch options to PROTON_NO_FSYNC=1 %command% avoid this behavior?

@literal-0x4e
Author

literal-0x4e commented Nov 25, 2021

Thank you for the speedy reply. I just tried three variations as per the linked bug description: PROTON_NO_FSYNC=1 alone, PROTON_NO_ESYNC=1 and PROTON_NO_FSYNC=1 together, and PROTON_NO_ESYNC=1 alone, but I do not see any change – it still hogs the CPU.
Also, my kernel does NOT have futex2 support. I am using a kernel with out-of-tree patches (namely pf-kernel, 5.15), but no form of futex2 support is in it yet.
So this may or may not be a separate issue with similar symptoms.

@kisak-valve kisak-valve added the Regression Confirmed working on an older version of Proton label Nov 25, 2021
@literal-0x4e
Author

I have found that when running games on affected proton versions with PROTON_NO_FSYNC=1, the high CPU usage does actually go away after some time – it lasts from a few tens of seconds to a few minutes (varies greatly from run to run) and then drops to normal for that run. When running without PROTON_NO_FSYNC=1 set, the CPU usage never drops. I did not realize it goes away after a while in my earlier testing, as there was no need to run a game for more than a few seconds to see the vast difference in CPU usage.
In other words, PROTON_NO_FSYNC=1 helps, but only partially.
I bisected the wine component and found this commit to be the one introducing this regression:

commit 120c1d7a3553bd101d67a02f1aad6514fa118b84 (HEAD, refs/bisect/bad)
Author: André Almeida <andrealmeid@collabora.com>
Date:   Thu Oct 21 13:36:14 2021 -0300

    ntdll/fsync: Support futex_waitv() API
    
    Since an interface for wait on multiple futexes was merged, we no longer
    need to support different interfaces. Drop out FUTEX_WAIT_MULTIPLE
    (opcode 31) in favor of the futex_waitv() interface accepted by
    upstream Linux.

Now, as to why I am affected despite not having any futex2 support in my kernel, that seems to be because my kernel does have a syscall 449, which is the same number as used by futex_waitv aka futex2, 'cept in my case it's the pmadv_ksm syscall, so proton tries to use that syscall number and keeps getting -EINVAL in return over and over.
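To illustrate (this is just my own standalone sketch, not Proton code), probing syscall 449 the way the fsync check does looks roughly like this; on a kernel where 449 is pmadv_ksm, the zeroed arguments get rejected with EINVAL rather than ENOSYS, so any check that only treats ENOSYS as "not available" is fooled:

/* hypothetical standalone probe, not Proton code */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>

#ifndef __NR_futex_waitv
#define __NR_futex_waitv 449   /* upstream number; pmadv_ksm on pf-kernel 5.15 */
#endif

int main(void)
{
    errno = 0;
    long ret = syscall(__NR_futex_waitv, NULL, 0, 0, NULL, 0);

    printf("syscall 449 returned %ld, errno %d (%s)\n", ret, errno, strerror(errno));

    /* mirrors the wineserver test: anything other than ENOSYS is taken
     * to mean "futex_waitv exists", so pmadv_ksm answering EINVAL looks
     * like a working futex_waitv to this check */
    if (errno != ENOSYS)
        puts("ENOSYS-only check: would enable fsync");
    else
        puts("ENOSYS-only check: would disable fsync");
    return 0;
}

On my kernel this should print EINVAL, which matches what Proton keeps getting back.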

The fix to disable fsync from the aforementioned bug report (commit 7f65309b59697e37db1ace08c197842ae5490ed1) doesn't help, as it checks for the old futex2 patches specifically and, after not finding /sys/kernel/futex2/wait, assumes futex_waitv is a go and uses it. I could propose a check for the minor kernel version number for detection instead, to only work in new enough kernels to have futex_waitv merged and settled in, but I can understand if this is undesirable, as it only matters for edge cases like mine.
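For what it's worth, here is a rough sketch of the version gate I have in mind (an illustration only, not an actual patch): parse the release string from uname() and only trust syscall 449 as futex_waitv on 5.16 or newer, where it is upstream.

/* sketch of the version-gate idea, not a proposed patch */
#include <stdio.h>
#include <stdbool.h>
#include <sys/utsname.h>

static bool kernel_has_upstream_futex_waitv(void)
{
    struct utsname un;
    int major = 0, minor = 0;

    if (uname(&un) != 0)
        return false;
    if (sscanf(un.release, "%d.%d", &major, &minor) != 2)
        return false;

    /* futex_waitv was merged upstream in Linux 5.16 */
    return major > 5 || (major == 5 && minor >= 16);
}

int main(void)
{
    printf("trust syscall 449 as futex_waitv: %s\n",
           kernel_has_upstream_futex_waitv() ? "yes" : "no");
    return 0;
}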
One thing is still weird: why does it keep trying the futex_waitv syscall over and over for some time, and then abruptly stop? I verified with strace that it is doing exactly that: a trace with PROTON_NO_FSYNC=1 showed it does in fact call syscall 449 (traced as syscall_0x1c1) repeatedly for a while, until it stops abruptly. Judging by the arguments in the trace, all these syscalls come from wine/server/fsync.c line 72, part of a function to determine if futex_waitv is OK to use. Oddly enough, when run under strace, the high CPU usage generally lasts much shorter than without strace.

inb4 this gets the "CLOSED NOTABUG" –Poettering treatment.

@kisak-valve
Member

Thanks for looking into the details @literal-0x4e. It looks like this syscall conflict is already resolved in pf-kernel v5.16-pf0 (possibly a pre-release version): https://gitlab.com/post-factum/pf-kernel/-/blob/pf-5.16/arch/arm64/include/asm/unistd32.h#L908

@kisak-valve kisak-valve changed the title Insanely high CPU usage of recent Proton versions (regression) High CPU usage from syscall conflict in pf-kernel 5.15 Nov 30, 2021
@kisak-valve kisak-valve removed the Regression Confirmed working on an older version of Proton label Nov 30, 2021
@ivyl
Collaborator

ivyl commented Dec 1, 2021

'cept in my case it's the pmadv_ksm syscall, so proton tries to use that syscall number and keeps getting -EINVAL in return over and over.

Is there a reason why those kernels use the next free syscall number for their downstream patches (the old futex2 is guilty of this as well) instead of something a bit more random, to at least try to avoid problems with backwards/forwards compatibility? That's bound to cause problems.

I could propose a check for the minor kernel version number for detection instead, to only work in new enough kernels to have futex_waitv merged and settled in, but I can understand if this is undesirable, as it only matters for edge cases like mine.

The syscall number is now reserved upstream. There's a bunch of downstream kernels that are popular among enthusiasts that we want to support. Zen and TKG both rock the upstream cherry-picks in their 5.15 versions.

Judging by the arguments in the trace, all these syscalls come from wine/server/fsync.c line 72, part of a function to determine if futex_waitv is OK to use.

Wineserver is single-threaded and caches the state of fsync enablement.

syscall( __NR_futex_waitv, 0, 0, 0, 0, 0);
do_fsync_cached = getenv("WINEFSYNC") && atoi(getenv("WINEFSYNC")) && errno != ENOSYS;

If WINEFSYNC is not set, or is set to something that atoi() evaluates to 0, it should be disabled. We should see that syscall only once. PROTON_NO_FSYNC does this:

self.check_environment("PROTON_NO_FSYNC", "nofsync")

# ...

if "nofsync" in self.compat_config:
    self.env.pop("WINEFSYNC", "")
else:
    self.env["WINEFSYNC"] = "1"

There may be one of those calls coming per non-server process due to dlls/ntdll/unix/fsync.c:160, but that does not explain the behavior you are seeing.

That's the bit that could use some more investigation and fixing.

@literal-0x4e
Author

Is there a reason why those kernels use the next free syscall number for their downstream patches (the old futex2 is guilty of this as well) instead of something a bit more random, to at least try to avoid problems with backwards/forwards compatibility? That's bound to cause problems.

I cannot tell if there's a reason other than simplicity, but as far as I've seen, sysfs entries sometimes exist to expose the syscall number, e.g. /sys/kernel/pmadv/ksm in my case.
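I have not dug into the pf-kernel patch itself, so this is only a guess at how such an entry would be consumed, assuming /sys/kernel/pmadv/ksm simply exposes the assigned syscall number as text:

/* rough sketch; assumes the sysfs file contains the syscall number as text */
#include <stdio.h>

static long read_pmadv_ksm_nr(void)
{
    long nr = -1;
    FILE *f = fopen("/sys/kernel/pmadv/ksm", "r");

    if (!f)
        return -1;                      /* patch not present */
    if (fscanf(f, "%ld", &nr) != 1)
        nr = -1;                        /* unexpected format */
    fclose(f);
    return nr;
}

int main(void)
{
    long nr = read_pmadv_ksm_nr();
    if (nr >= 0)
        printf("pmadv_ksm is syscall %ld on this kernel\n", nr);
    else
        puts("no pmadv_ksm sysfs entry found");
    return 0;
}

Something along those lines lets userspace discover the downstream number instead of hardcoding it, but of course only for kernels that bother to export it.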

The syscall number is now reserved upstream. There's a bunch of downstream kernels that are popular among enthusiasts that we want to support. Zen and TKG both rock the upstream cherry-picks in their 5.15 versions.

That's exactly why I assumed detection based on the minor version would be undesirable :)

There may be one of those calls coming per non-server process due to dlls/ntdll/unix/fsync.c:160, but that does not explain the behavior you are seeing.

Oh yeah, I saw that bit, and somehow (probably due to being a bit ill at the time) it did not register with me yesterday that NULL is 0, so I only considered the call in the wineserver code (since strace showed all those syscalls with zeroed-out arguments, that's what I was looking for). Derp. What a shame. Sorry about that.

That's the bit that could use some more investigation and fixing.

Okay, I shall try to investigate deeper. However, as it's been some seven years since my last wine patch / the last time I tried to hack on wine, it might take a while for me to recall how to deal with this peculiar piece of software properly, so don't hold your breath.
