
Add systemd-oomd option to kill single, largest process instead of entire cgroup #25853

Open
kevgrig opened this issue Dec 23, 2022 · 27 comments
Labels: needs-discussion 🤔, oomd, RFE 🎁 (Request for Enhancement, i.e. a feature request)

Comments

kevgrig commented Dec 23, 2022

Component

systemd-oomd

Is your feature request related to a problem? Please describe

systemd-oomd killed my entire user session when killing the single, largest process would have been sufficient.

Dec 23 11:09:41 x270b systemd-oomd[1039]: Killed /user.slice/user-1000.slice/session-2.scope due to memory used (16309088256) / total (16542982144) and swap used (7743770624) / total (8589930496) being more than 90.00%

I'm running systemd-oomd v249, and it's nice that v251 added more detail about what is killed, but it would be nicer if I could configure systemd-oomd to try killing the largest memory user first so that I don't lose other work.

Describe the solution you'd like

A configuration option to kill the single largest process (by memory use) rather than the entire cgroup.

Describe alternatives you've considered

$ sudo systemctl disable --now systemd-oomd
$ sudo systemctl mask systemd-oomd

The systemd version you checked that didn't have the feature you are asking for

249

kevgrig added the RFE 🎁 (Request for Enhancement) label Dec 23, 2022
github-actions bot added the oomd label Dec 23, 2022
Werkov (Contributor) commented Jan 2, 2023

I assume this is meant for scope units only (since killing a pseudo-random process of a service compromises the service anyway).
There is the recent #25385, which adds OOMPolicy=continue to scope units, i.e. only the kernel OOM action is taken (which would probably kill the largest process).

@kevgrig Is your RFE covered by this change?

AdrianVovk (Contributor) commented Jan 2, 2023

It'd also be nice to have this not just for the user session, but also for individual apps. For instance, right now oomd will kill your whole browser when killing the largest tab would have sufficed.

Turning off oomd and letting the kernel OOM handler kill processes tends to lock up the system... I don't think that change you linked has anything to do with oomd, right? I don't think it addresses the RFE.

kevgrig (Author) commented Jan 2, 2023

@Werkov

I assume this is meant for scope units only (since killing a pseudo-random process of a service compromises the service anyway). There is the recent #25385, which adds OOMPolicy=continue to scope units, i.e. only the kernel OOM action is taken (which would probably kill the largest process).

@kevgrig Is your RFE covered by this change?

To be frank, I don't understand if #25376 applies as per the comment:

oomd is not enabled here; this is BUILT-IN systemd behavior

Whereas, in my case, I'm fine continuing to run systemd-oomd, and the kill behavior is happening inside systemd-oomd rather than systemd. I don't have a clear understanding of all the components and terminology and how they interact.

However, in principle, yes, the ability to configure systemd-oomd to defer to the kernel OOM killer is what I would like.

Werkov (Contributor) commented Jan 3, 2023

@AdrianVovk: the internal organization of browser processes (e.g. a 1:1 mapping of processes to tabs) is a different abstraction that IMO should stay out of oomd. For cooperative actions, you may be interested in #23606.

@kevgrig (I missed the origin of the kill action in the report.) OOMPolicy= is indeed orthogonal to oomd (and the kernel OOM killer would have different trigger conditions), sorry about the confusion.
So the first approach would be to disable ManagedOOMSwap=kill of the affected .scope unit.
The second would be to better organize the cgroup tree of the user session (i.e. not having the big browser and other session processes share a single .scope unit, but a separate scope per application; that's rather for your DE/launcher to handle).
The third option (deliberately listed last :-) would be the requested new sub-unit option for ManagedOOM{Swap,MemoryPressure}=, but I'm honestly not a fan of that given the possibilities above. (Cc'ing @anitazha to assess the usefulness of such an extension.)
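For illustration, one way to get the effect of the first approach, assuming the distro wires the kill action up through user@.service defaults (as e.g. Fedora's systemd-oomd-defaults package does); the file path is only an example, see systemd.resource-control(5):

# /etc/systemd/system/user@.service.d/50-no-oomd-kill.conf
[Service]
# "auto" falls back to the default behavior instead of the distro's "kill"
ManagedOOMSwap=auto
ManagedOOMMemoryPressure=auto

followed by sudo systemctl daemon-reload so the override is picked up.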

kevgrig (Author) commented Jan 3, 2023

@Werkov

the first approach would be to disable ManagedOOMSwap=kill of the affected .scope unit

If I understand correctly, this wouldn't kill anything in my user's .scope unit; however, in my case, the cause of excessive memory use will always be in my user's .scope unit. I'm just a single user running a Linux desktop with a few background services.

better organize the cgroup tree of user session

This does seem like a nice future option, but I run an off-beat DE (Xfce) so I doubt that will come any time soon.

new sub-unit option for ManagedOOM{Swap,MemoryPressure}=, but I'm honestly not a fan of that given the possibilities above

In my opinion, option 1 is not useful for my (very common) use case, option 2 is currently inapplicable, and the kernel OOM killer is something that already exists and would work well.

anitazha (Member) commented Jan 4, 2023

oomd/systemd-oomd is by design cgroup-v2-only so that we can take advantage of the resource control mechanisms provided by cgroups. In the past we discussed process killing but decided to stay firm on this direction. As such, in an environment that doesn't group processes into cgroups accordingly, we recommend not using systemd-oomd. In your case, if you still want to, say, use systemd-oomd for system.slice and rely on the kernel OOM killer for everything else, it's a matter of overriding/tweaking the options for ManagedOOM{Swap,MemoryPressure}=.

Related, Benjamin Berg worked on cgroupify, which some (all?) variants of Fedora use to separate browser tabs into different cgroups. Maybe this can be tweaked for your environment.
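A hedged example of the overriding/tweaking mentioned above (unit names are illustrative; check which units your distro's oomd defaults actually target first):

# See which kill actions are currently configured
systemctl show -p ManagedOOMSwap -p ManagedOOMMemoryPressure -- -.slice user@1000.service
# Keep oomd for the system side, but defer the user side to the kernel OOM killer
sudo systemctl set-property --runtime user@1000.service ManagedOOMMemoryPressure=auto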

kevgrig (Author) commented Jan 5, 2023

I run Fedora and it looks like this is a known issue with explicit disregard for non-GNOME and non-KDE DEs:

How will this work if everything is in the same cgroup?

It will not work as systemd-oomd acts on a per-cgroup level. Applications will need to spawn processes into separate cgroups (e.g. with systemd-run) or use a desktop environment (e.g. GNOME, KDE) that does this for them.

Should spins that don't put processes in separate cgroups be excluded from this change?

That will be left up to the maintainers of those spins. Based on feedback, the current plan is to enable systemd-oomd with the specified configuration by default to minimize fragmentation on the Fedora install base (the Upgrade/Compatibility section has been updated to reflect this). A separate subpackage, "systemd-oomd-defaults", controls the policy for systemd-oomd and excluding it or removing it (and performing a systemctl daemon-reload) will prevent systemd-oomd from killing anything; without a policy systemd-oomd doesn't act.

Opened Xfce issue: https://gitlab.xfce.org/xfce/xfce4-session/-/issues/158

kevgrig closed this as completed Jan 5, 2023
tootea commented Aug 11, 2023

@anitazha

oomd/systemd-oomd by design is cgroup v2 only so that we can take advantage of the resource control mechanisms provided by cgroups. In the past we discussed process killing but decided to stay firm on this direction. As such in an environment that doesn't group processes accordingly into cgroups, we recommend not using systemd-oomd.

It feels like implementing ManagedOOM{Swap,MemoryPressure}=kill-largest-process is a matter of adding a few dozen lines to oomd-util.c. Would a pull request to that effect be acceptable in principle, or are you absolutely opposed to adding such an option for philosophical reasons?
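(For concreteness, the selection logic would be roughly the following, sketched in shell rather than oomd's C; the cgroup path is just an example and this is not how oomd-util.c is actually structured:)

cg=/sys/fs/cgroup/user.slice/user-1000.slice/session-2.scope
# pick the PID with the largest RSS inside the chosen cgroup and kill only that one
victim=$(for pid in $(cat "$cg/cgroup.procs"); do
    awk -v p="$pid" '/^VmRSS:/ {print $2, p}' "/proc/$pid/status" 2>/dev/null
done | sort -rn | head -n1 | awk '{print $2}')
[ -n "$victim" ] && kill -KILL "$victim"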

The current whole-cgroup killing mode is basically unusable for my main desktop workflow, which entails testing the software I develop in a terminal (Konsole) shell session. Every time things don't go entirely to plan and the tested process OOMs, systemd-oomd will kill the entire cgroup corresponding to the affected terminal tab, thus killing not only the offending tested process but also its parent interactive shell. This is extremely disruptive, because it nukes any context the shell/terminal had at that point (variables, command history, scrollback).

Right now, the only way to preserve my sanity is to disable systemd-oomd altogether, but that's suboptimal for many reasons. The alternatives (kernel OOM killer, earlyoom) do not take cgroups or memory pressure information into account at all, thus making suboptimal decisions. It's a shame that while systemd-oomd already supports all this magic, the lack of a less nuclear killing mode forces devs like me to throw the baby out with the bathwater.

If adding a single-process killing mode is absolutely not going to fly, what do you consider the best course of action? Shells and terminal emulators are likely here to stay, and patching bash (and all the other shells out there) to put every single pipeline into a separate cgroup is impractical.

Werkov (Contributor) commented Aug 11, 2023

@tootea You can wrap your debug/testing runs into a scope:

systemd-run --user --scope test-program ...

to restrict the blast radius.

tootea commented Aug 11, 2023

@Werkov Yes, I have actually tried that in the past and it gets part of the job done, but I couldn't find a way to make it work well with GDB. Either I end up debugging systemd-run instead of my executable (requiring a cumbersome sequence of commands to end up on the right process every time the debugger restarts it), or the entire gdb session ends up in the cgroup, so we're back to square one in terms of losing context.

tootea commented Aug 11, 2023

The bottom line is, while killing an entire cgroup might make great sense for server deployments with containerized services, I'm having a hard time imagining a desktop application which requires or prefers having its entire cgroup killed instead of just one process. Sure, we can apply workarounds to dozens of different apps to make them treat a cgroup as the new process (atomic unit of lifetime management), but wouldn't it be much easier to just improve systemd-oomd a bit and perhaps eventually make kill-largest-process the default on desktops?

kevgrig reopened this Aug 11, 2023
Doomsdayrs commented Dec 12, 2023

A frequent issue I experience is Android Studio (or perhaps a rogue process) eating up all RAM, which then leads to an entire system halt.

The only solution is to restart the entire system, ruining my workflow in the process.

This is an issue I hit especially during heavy work, where Android Studio can use 16 GB or more of memory.

Having this feature would allow Android Studio to be axed, especially if a memory leak occurs, letting me resume my work.

Werkov (Contributor) commented Dec 18, 2023

Having this feature would allow Android Studio to be axed, especially if a memory leak occurs, letting me resume my work.

This should also be possible with DEs like GNOME or KDE that launch apps in dedicated scopes.
@Doomsdayrs To be clear -- your DE doesn't do this implicit scoping?

@Doomsdayrs

This should also be possible with DEs like GNOME or KDE that launch apps in dedicated scopes. @Doomsdayrs To be clear -- your DE doesn't do this implicit scoping?

I use GNOME. I am unaware of this concept of "implicit scoping" as I am an end user / developer.

Android Studio is not installed via Flatpak, but via a tar archive containing the executable and other data, if that affects the situation in any way.

Werkov (Contributor) commented Dec 18, 2023

Flatpak is not necessary. Android Studio may ship its own desktop file, or you can add your own. The app, when started by GNOME via this desktop entry, should get its own .scope. Or, if you run the executable manually, you can use the systemd-run wrapper shown above.
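(As a sketch, a minimal user-level desktop entry could look like the following; the install path of the Android Studio launcher is only an example:)

# ~/.local/share/applications/android-studio.desktop
[Desktop Entry]
Type=Application
Name=Android Studio
Exec=/opt/android-studio/bin/studio.sh
Terminal=false
Categories=Development;IDE;

Launched through GNOME Shell, the app should then land in its own unit, something like app-gnome-android\x2dstudio-<PID>.scope.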

traylenator (Contributor) commented Mar 6, 2024

Just adding my wish for this. The obvious example for me is:

$ ssh myhost
$ tail /dev/zero

As things stand, the whole scope unit, including my login, is taken out by systemd-oomd.
I can't teach thousands of people to wrap things that might go bad in systemd-run.

Werkov (Contributor) commented Mar 7, 2024

@traylenator Is systemd-oomd giving you any benefits then?

(Nice idiom for memory allocation BTW.)

@traylenator (Contributor)

My motivation for looking at systemd-oomd was a situation last week: a random user launched 1700 threads of clang compilation.
This made the machine unusable, unsurprisingly. The situation was well detected by PSI:

System

==> /proc/pressure/cpu <==
some avg10=93.51 avg60=89.01 avg300=91.70 total=98833034235
full avg10=0.00 avg60=0.00 avg300=0.00 total=0

==> /proc/pressure/io <==
some avg10=99.72 avg60=99.81 avg300=99.83 total=64093890614
full avg10=2.03 avg60=1.97 avg300=1.42 total=48637928811

==> /proc/pressure/irq <==
full avg10=0.80 avg60=0.68 avg300=0.42 total=11299819357

==> /proc/pressure/memory <==
some avg10=99.47 avg60=99.37 avg300=99.37 total=8917504840
full avg10=86.55 avg60=87.90 avg300=91.54 total=8171112563

User slice

==> /sys/fs/cgroup/user.slice/user-12345.slice/cpu.pressure <==
some avg10=93.12 avg60=92.57 avg300=91.79 total=5067128254
full avg10=6.02 avg60=5.64 avg300=4.96 total=305773281

==> /sys/fs/cgroup/user.slice/user-12345.slice/io.pressure <==
some avg10=100.00 avg60=99.99 avg300=100.00 total=5207715255
full avg10=1.62 avg60=1.96 avg300=2.49 total=117575245

==> /sys/fs/cgroup/user.slice/user-12345.slice/irq.pressure <==
full avg10=0.00 avg60=0.00 avg300=0.00 total=36510431

==> /sys/fs/cgroup/user.slice/user-12345.slice/memory.pressure <==
some avg10=99.54 avg60=99.54 avg300=99.37 total=5198906372
full avg10=92.12 avg60=92.18 avg300=93.37 total=4997292465

It's an extreme case and killing the whole user slice would have easily been justified, but killing the clangs, possibly over
several iterations, would be better (for the user). I'm sure they would rather not lose their emacs in the process.

kevgrig (Author) commented Mar 7, 2024

Is systemd-oomd giving you any benefits then?

I think there's a false premise here of "well, the user can just decide whether or not to use systemd-oomd". Various distributions are now shipping with systemd-oomd enabled by default.

For an advanced user, finding this thread and disabling systemd-oomd or using systemd-run will be non-trivial. And this is after at least one incident of everything being killed.

For an average user just running a single-user DE, many won't even know where to look and they'll just connect Linux with, "It just randomly kills my entire session and logs me out." This is slightly made better by the major DEs integrating systemd-run somewhat, but I doubt this covers all cases.

The underlying point of this issue is that becoming a default carries responsibilities. You could argue that systemd-oomd should not be a default, and that DEs made a mistake, but this seems like it'll be a big effort to undo. The alternative is to embrace the spotlight and add this feature, even if that just means welcoming a PR.

Edit: A counter-argument to my point is that even if systemd-oomd has this option, presumably it will not be shipped by systemd-oomd as a default; however, at least it swims with the current and it seems that it will be easier to propose that all DEs switch to this option as a default. Alternatively, if you really want to be a good community member: allow a PR for the new option, make it the default, and then advanced server distributions where systemd-oomd is particularly valuable (server VMs, Kubernetes, etc.) can override their shipped default to the current default.

@AdrianVovk (Contributor)

You could argue that systemd-oomd should not be a default

For DEs that don't use cgroups to isolate individual apps, yes, defaulting to oomd on those systems is 100% a mistake. And to be frank, it's up to whoever configured oomd to manage those DEs to configure things correctly.

this seems like it'll be a big effort to undo

Not particularly... enabling oomd is just a few lines of configuration; disabling it is just as easy.

This is slightly made better by the major DEs integrating systemd-run somewhat, but I doubt this covers all cases.

Ultimately, DEs, apps (like browsers and IDEs), and other upstreams (like sshd) need to start organizing sub-components that the OS is allowed to manage separately into separate cgroups on their own, or distros should opt them out of oomd entirely (via something like ManagedOOMPreference=omit or similar, or by just not shipping oomd by default).

IMO oomd performs well on server systems because server workloads know how to organize themselves into separate cgroups: the common microservice setup ends up with multiple containers each doing one small task, which lets oomd kill individual microservices instead of bringing down the whole stack. In contrast, desktop software that could benefit from letting the OS manage parts of the app independently (tabs in a web browser, a compiler running in an IDE, etc.) simply doesn't tell the OS about it.

$ ssh myhost
$ tail /dev/zero

Not quite sure I'm following what the situation is here. Is myhost a remote server, or localhost? When you say it kills your whole login, do you mean a login session on localhost, a login session on myhost, your SSH session on myhost, or something else?

killing the clangs, possibly over several iterations, would be better (for the user). I'm sure they would rather not lose their emacs in the process

If emacs were to be updated to put compilation jobs in dedicated cgroups, the system would know that it's safe (for the consistency of emacs) to kill just the compilation job rather than the whole editor (or session), and you as a sysadmin would gain the ability to impose targeted resource limits on just emacs compilation jobs.
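(Concretely, a similar effect is already possible by hand by wrapping the build step, e.g. as below; the limits are purely illustrative:)

systemd-run --user --scope -p MemoryHigh=4G -p MemoryMax=6G make -j8

An emacs/IDE integration would just do the equivalent of that wrapping internally whenever it spawns a compilation.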

@traylenator (Contributor)

I can expand on the two examples. For the clang one, a typical session might be:

$ ssh -X remotehost
remotehost> emacs myfile.c &
remotehost> clang -flto-jobs  1700  ... # I don't know clang at all, no clue if correct option.

So now I have a single session scope:

/user.slice/user-12345.slice
├─session-28884.scope
│ ├─1873444 "sshd: fred [priv]"
│ ├─1873573 "sshd: fred@pts/1"
│ ├─1873574 -bash
│ ├─1873791 clang ..
│ ├─1873792 clang ...
│ ├─1873793 clang ...
... # many more clangs...
│ ├─187371000 clang ...
│ ├─1873764 emacs
│ ├─1873791 systemctl status user-12345.slice
│ └─1873792 less

Taking out the scope unit destroys the clangs (great) but also emacs, bash, and the sshd bits. Game over for
that login.
So unless both sshd and bash start putting every command in a cgroup, or clang does something itself,
it's game over for everything.
The tail /dev/zero example was just a trivial version of the above. Run on the remote server, it's in the same scope unit
as bash and the user part of sshd.

systemd-oomd and PSI metrics are so good at recognising the culprit cgroup. I do get that part of the reluctance is
that identifying which process to shoot is imprecise. We don't, after all, have per-process PSI metrics.

I'm also looking at:

  • psi-notify - works really well, but libnotify is somewhat wrong for an SSH session.
  • nohang - just starting...

@AdrianVovk (Contributor)

Ah, I get the issue now. Not much we can do automatically in this case.

One possible solution is having users run long-running tasks via systemd-run. They could fork off emacs, then systemd-run themselves an interactive shell instance, and just run commands normally in there. If the commands overstep limits, just that subshell gets killed.

Another approach, which may be better for your situation, is opting session-*.scope out of oomd and instead setting normal resource control settings on it (memory max, etc). This means that any process that's inside of session-*.scope will get killed by the kernel OOM killer early. Then processes that are aware of cgroups (stuff running via systemd-run, etc) will exist outside of the session-*.scope resource limits and will instead get managed by oomd
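(A sketch of that second approach as a prefix drop-in applying to all session-*.scope units; it assumes your systemd honors truncated-prefix drop-ins for transient scopes, and the numbers are illustrative only:)

# /etc/systemd/system/session-.scope.d/50-session-limits.conf
[Scope]
# ask systemd-oomd not to pick these scopes as kill candidates
ManagedOOMPreference=omit
# ...and rely on plain cgroup limits / the kernel OOM killer instead
MemoryHigh=12G
MemoryMax=14G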

So unless both sshd and bash start putting every command in a cgroup

Well sshd already puts everything in a cgroup, courtesy of logind: session-28884.scope

But it might be useful for sshd to fork off a new cgroup anyway for the process that it runs, so that it can be managed independently. So: /user.slice/user-12345.slice/ssh-session-<random>.scope, so that it's possible to configure resource limits only for SSH sessions (ssh-session-*.scope). Maybe there's another mechanism to do this, IDK 🤷. Maybe logind should put the name of the PAM stack that creates the session into the unit name, so that sessions started via different PAM stacks can have different resource limits applied (so <pam stack name>-session-<id>.scope, with an alias to session-<id>.scope for backwards compat)?

systemd-oomd and PSI metrics are so good at recognising the culprit cgroup. I do get that part of the reluctance is
that identifying which process to shoot is imprecise.

Another reason we'd like to avoid shooting individual processes is that it can leave services in an inconsistent state. If some service/app/DE/whatever is running in one cgroup, then it's telling us "I expect all these processes to be managed together, as a group": started, stopped, killed, resource limits applied, etc. So while it's technically possible to kill individual processes, that's not what the services are asking us to do.

My understanding is that the kernel OOM killer's primary objective is to keep the kernel functional, no matter the consequences for userspace. So it can kill individual processes indiscriminately, potentially leaving services in inconsistent states. oomd wants to avoid this.

Real-world example: if you have a browser with a bunch of tabs, and each tab is a collection of 3 processes, then it makes a lot more sense to kill all three processes that make up a tab at once, as a group, rather than killing the one process that's using the most RAM and letting the other two crash. If the browser doesn't tell us that it's safe to kill a tab's 3 processes without bringing down the rest of the browser, the best we can do is kill the whole browser.

@traylenator (Contributor)

Thanks a lot for the consideration - much appreciated.

Another approach, which may be better for your situation, is opting session-*.scope out of oomd and instead setting normal resource control settings on it (memory max, etc).

We already have both MemoryHigh= and MemoryMax= limits; we'd be in even more of a mess without them.
This clang case was quite a way below those limits; there was loads of cache that could have been dropped.
This was memory/IO thrashing killing the box. The PSI metrics are really the best indicators for that.


As far as I can tell the system recovered because their compilation finished.

Another reason we'd like to avoid shooting individual processes is that it can leave services in an inconsistent state

Of course, entirely sensible: killing a bit of sssd is clearly bad. User sessions are quite different, though, I would say; the bash shell and the thing running in it are hugely unrelated. ManagedOOMMemoryPressure=kill-largest-process should certainly be an option. I would not change the default; the default is hugely predictable, which is great.

They could fork off emacs, then systemd-run

It's unfortunately just not going to happen: with ~10,000 users from 5 continents each week, it's just unfeasible to "train" people that way. New people arrive all the time with new and exciting ways to do the same thing. Legacy build systems, tooling. Change is, unfortunately but understandably, hard.

Identifying the "bad" process is hard, but that's not a reason not to try: the per-cgroup PSI at least gives you the 100% correct user and cgroup, and then the highest memory consumer within that is probably going to be right, or at least not too wrong. If it's the best you can do, it's still justified. A user is destroying the box; I'll keep killing what I think it is until I get it. bash and the login should survive, since it's almost certainly not them.

tootea commented Mar 8, 2024

@AdrianVovk Let me repeat my question above which is still unanswered: If I put in the effort and submit a PR implementing ManagedOOMMemoryPressure=kill-largest-process, does it stand a chance of being seriously considered, or is it likely to be rejected outright on philosophical grounds?

(Whether or not that setting should become a default on desktop systems is secondary. Let's first have the opt-in for those of us who can't stand the current default. But I personally think it would be a good default behaviour on desktops. I can see how always shooting the whole cgroup might make perfect sense in a containerized HA server environment, because having a whole service die and another instance take over is preferable to having a degraded instance limping on. But IMHO most desktop workloads have little to gain from that ideological purity; there's always the user around who can freely pull the plug on their entire session if it ends up in a half-broken state, so I see no justification for a preemptive strike. And as long as a process is still the atomic unit of lifetime management in most contexts, apps have to somehow handle single processes quitting anyway. Even your hypothetical browser with three PIDs per tab will inevitably sometimes see one of the processes segfault, at which point something has to happen with the remaining two without any involvement of cgroups.)

@AdrianVovk (Contributor)

Let me repeat my question above which is still unanswered

Not up to me, I'm not someone who works on oomd. I'm just a distro dev w/ opinions :)

I'm personally not very against such a mode, but I'm not particularly for it either. I'm slightly concerned that, given the capability you propose, apps will just use it to avoid the work of properly supporting cgroups 🤷

I see no justification for a preemptive strike

Just to be clear: I'm not for killing the whole session. When the whole session dies at the hands of oomd, it's a symptom of oomd being misconfigured, or of a lack of use of cgroups in the DE and/or apps.

apps have to somehow handle single processes quitting anyway. Even your hypothetical browser with three PIDs per tab will inevitably sometimes see one of the processes segfault, at which point something has to happen with the remaining two without any involvement of cgroups

Sure, it's just that this "something" can often be a segfault, or a hang, or some other undefined behavior of its own. If the OS can avoid inducing situations like that, it should.

In other words: individual tab processes crashing due to programmer error (which may ultimately bring down the rest of the tab's processes one way or another) is OK in my books. Individual tab processes being killed because the browser happens to be the biggest user of RAM at that moment, causing the rest of the browser to fall into some undefined behavior (probably with more processes ultimately crashing), is something we can and should avoid as OS developers.

Werkov (Contributor) commented Mar 14, 2024

If I put in the effort and submit a PR implementing ManagedOOMMemoryPressure=kill-largest-process, does it stand a chance of being seriously considered, or is it likely to be rejected outright on philosophical grounds?

I'm not sure this is possible to implement reliably in userspace. IIUC, oomd uses the cgroup.kill attribute to terminate the whole cgroup at once. The desired behavior could be implemented by something like a memory.oom attribute that would trigger a memcg OOM, acting as if an OOM had happened inside the cgroup. (Not to be confused with memory.oom.group; this is more like echo f > /proc/sysrq-trigger but with cgroup granularity.) Does that make sense?
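(In cgroup v2 terms: cgroup.kill exists today (Linux 5.14+) and takes out everything in the cgroup at once, which is what oomd relies on; the memory.oom knob sketched above is hypothetical:)

# existing interface: kill every process in the cgroup (and its descendants)
echo 1 > /sys/fs/cgroup/user.slice/user-1000.slice/session-2.scope/cgroup.kill
# hypothetical interface from the comment above: trigger an in-cgroup memcg OOM,
# letting the kernel pick a single victim; this file does not exist in current kernels
# echo 1 > /sys/fs/cgroup/.../memory.oom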

@qnixsynapse

oomd actually kills other harmless processes instead of just the "offending" process. Here is an example from real-world usage on Arch Linux, which has cgroups enabled. As you can see, it considers other harmless processes eligible for killing, which is wrong! The only offending process/program here is Firefox, or should I say, "/user.slice/user-1000.slice/user@1000.service/app.slice/app-flatpak-org.mozilla.firefox-6051.scope".

systemd-oomd[4313]: Considered 72 cgroups for killing, top candidates were:
systemd-oomd[4313]:         Path: /user.slice/user-1000.slice/user@1000.service/app.slice/festival.service
systemd-oomd[4313]:                 Swap Usage: 344.8M
systemd-oomd[4313]:         Path: /user.slice/user-1000.slice/user@1000.service/app.slice/app-gnome-org.gnome.Software-1369.scope
systemd-oomd[4313]:                 Swap Usage: 74.7M
systemd-oomd[4313]:         Path: /user.slice/user-1000.slice/user@1000.service/app.slice/app-flatpak-org.mozilla.firefox-6051.scope
systemd-oomd[4313]:                 Swap Usage: 60.3M
systemd-oomd[4313]:         Path: /user.slice/user-1000.slice/user@1000.service/session.slice/org.gnome.Shell@wayland.service
systemd-oomd[4313]:                 Swap Usage: 42.5M
systemd-oomd[4313]:         Path: /user.slice/user-1000.slice/user@1000.service/app.slice/app-gnome-geary\x2dautostart-1387.scope
systemd-oomd[4313]:                 Swap Usage: 34.7M
systemd-oomd[4313]:         Path: /user.slice/user-1000.slice/user@1000.service/app.slice/app-gnome-org.gnome.SystemMonitor-6625.scope
systemd-oomd[4313]:                 Swap Usage: 32.3M
systemd-oomd[4313]:         Path: /user.slice/user-1000.slice/user@1000.service/app.slice/app-gnome-org.gnome.Evolution\x2dalarm\x2dnotify-1403.scope
systemd-oomd[4313]:                 Swap Usage: 19.5M
systemd-oomd[4313]:         Path: /user.slice/user-1000.slice/user@1000.service/session.slice/org.freedesktop.IBus.session.GNOME.service
systemd-oomd[4313]:                 Swap Usage: 17.9M
systemd-oomd[4313]:         Path: /user.slice/user-1000.slice/user@1000.service/app.slice/app-dbus\x2d:1.2\x2dorg.gnome.Calendar.slice/dbus-:1.2-org.gnome.Calendar@0.service
systemd-oomd[4313]:                 Swap Usage: 16.0M
systemd-oomd[4313]:         Path: /user.slice/user-1000.slice/user@1000.service/app.slice/evolution-source-registry.service
systemd-oomd[4313]:                 Swap Usage: 15.3M
