
Logged out session still tracked by logind #26744

Closed
hexchain opened this issue Mar 10, 2023 · 29 comments · Fixed by #28348
Labels
bug 🐛 Programming errors, that need preferential fixing · login

Comments

@hexchain
Contributor

systemd version the issue has been seen with

253.1

Used distribution

Arch Linux

Linux kernel version used

6.2.1-arch1-1

CPU architectures issue was seen on

x86_64

Component

systemd-logind

Expected behaviour you didn't see

After a session has been logged out and all tasks are stopped, that session should no longer exist in loginctl.

Unexpected behaviour you saw

Logged out sessions still show up in loginctl.

In the following example, sessions 8, 10, and c3 are already logged out. Session 8 is a Plasma Wayland session, logged in through SDDM. Session 10 is a TTY session, created by switching to tty4 with Ctrl-Alt-F4 and logging in there.

% loginctl
SESSION  UID USER     SEAT  TTY
     10 1000 hexchain seat0 tty4
      2 1000 hexchain seat0 tty1
      8 1001 thc      seat0 tty4
     c3  973 sddm     seat0

4 sessions listed.

% loginctl show-session 10
Id=10
User=1000
Name=hexchain
Timestamp=Fri 2023-03-10 14:52:11 +08
TimestampMonotonic=80455612140
VTNr=4
Seat=seat0
TTY=tty4
Remote=no
Service=login
Scope=session-10.scope
Leader=133675
Audit=10
Type=tty
Class=user
Active=no
State=closing
IdleHint=no
IdleSinceHint=1678431129781660
IdleSinceHintMonotonic=80453572525
LockedHint=no

% systemctl status session-10.scope
Unit session-10.scope could not be found.

% loginctl terminate-session 10
% loginctl terminate-session 10
% loginctl kill-session 10
Could not kill session: No such file or directory

% loginctl show-session 10
Id=10
User=1000
Name=hexchain
Timestamp=Fri 2023-03-10 14:52:11 +08
TimestampMonotonic=80455612140
VTNr=4
Seat=seat0
TTY=tty4
Remote=no
Service=login
Scope=session-10.scope
Leader=133675
Audit=10
Type=tty
Class=user
Active=no
State=closing
IdleHint=no
IdleSinceHint=1678431129781660
IdleSinceHintMonotonic=80453572526
LockedHint=no

% loginctl
SESSION  UID USER     SEAT  TTY
     10 1000 hexchain seat0 tty4
      2 1000 hexchain seat0 tty1
      8 1001 thc      seat0 tty4
     c3  973 sddm     seat0

4 sessions listed.
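Stuck sessions like the ones above can be flagged without eyeballing loginctl output. A sketch, assuming a systemd host; the `closing_ids` helper name is ours, not part of loginctl:

```shell
#!/bin/sh
# Print the IDs from an "ID STATE" table (one session per line) whose
# state is "closing", i.e. sessions logind still tracks after logout.
closing_ids() {
    awk '$2 == "closing" { print $1 }'
}

# On a real system, feed it one "id state" pair per session:
#   loginctl list-sessions --no-legend | while read -r id _; do
#       printf '%s %s\n' "$id" "$(loginctl show-session "$id" -p State --value)"
#   done | closing_ids
```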

Steps to reproduce the problem

I haven't figured out a stable way to reproduce, sorry.

The display manager in use is sddm/sddm@5341b06.

Additional program output to the terminal or log subsystem illustrating the issue

% journalctl -b -u systemd-logind | grep -v 'Watching system buttons on'
Mar 09 16:31:25 hitori systemd[1]: Starting User Login Management...
Mar 09 16:31:25 hitori systemd-logind[569]: New seat seat0.
Mar 09 16:31:25 hitori systemd[1]: Started User Login Management.
Mar 09 16:31:27 hitori systemd-logind[569]: New session c1 of user sddm.
Mar 09 16:33:43 hitori systemd-logind[569]: New session 2 of user hexchain.
Mar 09 16:33:43 hitori systemd-logind[569]: Session c1 logged out. Waiting for processes to exit.
Mar 09 16:33:43 hitori systemd-logind[569]: Removed session c1.
Mar 09 16:33:59 hitori systemd-logind[569]: New session c2 of user sddm.
Mar 09 16:34:04 hitori systemd-logind[569]: New session 5 of user thc.
Mar 09 16:34:04 hitori systemd-logind[569]: Session c2 logged out. Waiting for processes to exit.
Mar 09 16:34:04 hitori systemd-logind[569]: Removed session c2.
Mar 09 22:18:23 hitori systemd-logind[569]: Session 5 logged out. Waiting for processes to exit.
Mar 09 22:18:23 hitori systemd-logind[569]: Removed session 5.
Mar 10 10:41:35 hitori systemd-logind[569]: New session c3 of user sddm.
Mar 10 10:41:40 hitori systemd-logind[569]: New session 8 of user thc.
Mar 10 10:41:40 hitori systemd-logind[569]: Session c3 logged out. Waiting for processes to exit.
Mar 10 14:26:17 hitori systemd-logind[569]: Session 8 logged out. Waiting for processes to exit.
Mar 10 14:52:11 hitori systemd-logind[569]: New session 10 of user hexchain.
Mar 10 14:52:15 hitori systemd-logind[569]: Session 10 logged out. Waiting for processes to exit.
@hexchain hexchain added the bug 🐛 Programming errors, that need preferential fixing label Mar 10, 2023
@emansom
Contributor

emansom commented Mar 22, 2023

Experiencing this as well. Same system configuration.

@georgmu
Contributor

georgmu commented Apr 24, 2023

Same problem with Fedora 38, systemd-253.2-1.fc38.x86_64

@rhansen

rhansen commented Apr 26, 2023

I think this problem has been around for a while; see bug #14850. I'm noticing it on a system with systemd 245.4.

@kanashimia

An easy way to reproduce this using machinectl shell:

$ loginctl
...
N sessions listed.

$ machinectl shell .host /bin/sh -c exit
...

$ loginctl
...
N+1 sessions listed.
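The reproduction above can be made self-checking by comparing session counts. A sketch; `count_from_footer` is our helper (not from the thread) and simply pulls N out of loginctl's trailing "N sessions listed." line:

```shell
#!/bin/sh
# Extract the session count from loginctl's footer line.
count_from_footer() {
    awk '/sessions listed/ { print $1 }'
}

# On a real system (not run here):
#   before=$(loginctl | count_from_footer)
#   machinectl shell .host /bin/sh -c exit
#   after=$(loginctl | count_from_footer)
#   [ "$after" -gt "$before" ] && echo "stale session leaked"
```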

@lilydjwg

After running dbus-monitor and strace, I found that systemd isn't emitting the UnitRemoved signal for session-XX.scope, so logind still thinks there are remaining processes.

I tried to figure out why the signal isn't sent, but I accidentally pressed Ctrl-C while running gdb -p 1 and the system died. The bus_unit_send_removed_signal function was called, but my breakpoint at send_removed_signal was not reached.
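The missing UnitRemoved signal can be watched for without attaching gdb to PID 1 (which, as noted, is risky). A sketch, run as root; the match rule targets the standard org.freedesktop.systemd1.Manager interface:

```shell
# Print every UnitRemoved signal the system manager emits; after a
# logout, a line for session-XX.scope should appear. If it never does,
# logind will keep that session in "closing" forever.
dbus-monitor --system \
  "type='signal',sender='org.freedesktop.systemd1',interface='org.freedesktop.systemd1.Manager',member='UnitRemoved'"
```

`busctl monitor org.freedesktop.systemd1` gives the same stream with friendlier formatting.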

@venkatkchandra

venkatkchandra commented Jun 5, 2023

[user@<hostname_masked> ~]# journalctl -S "2023-06-01 16:00" -U "2023-06-01 17:04" -p 7 | egrep 'systemd\[|systemd-logind\[' | grep 9011 | grep -v 39011 | grep -v 'Ignoring session' | grep -v 399011
Jun 01 16:02:27 <hostname_masked> systemd[1]: session-9011.scope: Failed to load configuration: No such file or directory
Jun 01 16:02:27 <hostname_masked> systemd[1]: session-9011.scope: Trying to enqueue job session-9011.scope/start/fail
Jun 01 16:02:27 <hostname_masked> systemd[1]: session-9011.scope: Installed new job session-9011.scope/start as 2152958
Jun 01 16:02:27 <hostname_masked> systemd[1]: session-9011.scope: Enqueued job session-9011.scope/start as 2152958
Jun 01 16:02:27 <hostname_masked> systemd-logind[1613]: New session 9011 of user root.
Jun 01 16:02:27 <hostname_masked> systemd[1]: session-9011.scope changed dead -> running
Jun 01 16:02:27 <hostname_masked> systemd[1]: session-9011.scope: Job session-9011.scope/start finished, result=done
Jun 01 16:02:27 <hostname_masked> systemd[1]: Started Session 9011 of user root.
Jun 01 16:02:27 <hostname_masked> systemd-logind[1613]: Sent message type=method_call sender=n/a destination=org.freedesktop.systemd1 path=/org/freedesktop/systemd1/unit/session_2d9011_2escope interface=org.freedesktop.systemd1.Scope member=Abandon cookie=493858 reply_cookie=0 signature=n/a error-name=n/a error-message=n/a
Jun 01 16:02:27 <hostname_masked> systemd[1]: session-9011.scope: Collecting.
Jun 01 16:02:27 <hostname_masked> systemd[1]: Got message type=method_call sender=:1.6 destination=org.freedesktop.systemd1 path=/org/freedesktop/systemd1/unit/session_2d9011_2escope interface=org.freedesktop.systemd1.Scope member=Abandon cookie=493858 reply_cookie=0 signature=n/a error-name=n/a error-message=n/a
Jun 01 16:02:27 <hostname_masked> systemd[1]: session-9011.scope: Failed to load configuration: No such file or directory
Jun 01 16:02:27 <hostname_masked> systemd[1]: Sent message type=error sender=n/a destination=:1.6 path=n/a interface=n/a member=n/a cookie=16 reply_cookie=493858 signature=s error-name=org.freedesktop.systemd1.ScopeNotRunning error-message=Scope session-9011.scope is not running, cannot abandon.
Jun 01 16:02:27 <hostname_masked> systemd[1]: Failed to process message type=method_call sender=:1.6 destination=org.freedesktop.systemd1 path=/org/freedesktop/systemd1/unit/session_2d9011_2escope interface=org.freedesktop.systemd1.Scope member=Abandon cookie=493858 reply_cookie=0 signature=n/a error-name=n/a error-message=n/a: Scope session-9011.scope is not running, cannot abandon.
Jun 01 16:02:27 <hostname_masked> systemd[1]: session-9011.scope: Collecting.
Jun 01 16:02:27 <hostname_masked> systemd-logind[1613]: Got message type=error sender=:1.397263 destination=:1.6 path=n/a interface=n/a member=n/a cookie=16 reply_cookie=493858 signature=s error-name=org.freedesktop.systemd1.ScopeNotRunning error-message=Scope session-9011.scope is not running, cannot abandon.
Jun 01 16:02:27 <hostname_masked> systemd-logind[1613]: Session 9011 logged out. Waiting for processes to exit.
Jun 01 16:31:34 <hostname_masked> systemd[1]: Sent message type=error sender=n/a destination=:1.6 path=n/a interface=n/a member=n/a cookie=1132 reply_cookie=497227 signature=s error-name=org.freedesktop.systemd1.NoSuchUnit error-message=Unit session-9011.scope not loaded.
Jun 01 16:31:34 <hostname_masked> systemd[1]: Failed to process message type=method_call sender=:1.6 destination=org.freedesktop.systemd1 path=/org/freedesktop/systemd1 interface=org.freedesktop.systemd1.Manager member=KillUnit cookie=497227 reply_cookie=0 signature=ssi error-name=n/a error-message=n/a: Unit session-9011.scope not loaded.
Jun 01 16:31:34 <hostname_masked> systemd-logind[1613]: Got message type=error sender=:1.397263 destination=:1.6 path=n/a interface=n/a member=n/a cookie=1132 reply_cookie=497227 signature=s error-name=org.freedesktop.systemd1.NoSuchUnit error-message=Unit session-9011.scope not loaded.
[user@<hostname_masked> ~]# 
  • The issue occurred in a development setup running load tests. It is unclear which test triggers it, but it involves quickly establishing and tearing down a session; it does not reproduce at will. What I notice is the following.
  • systemd-logind initiates the session teardown sequence, but systemd throws errors and is unable to complete the message sequence.
  • systemd appears unable to clean up the sessions as part of garbage collection. The sessions are marked for garbage collection and do get into the cleanup_queue, but I do not think the PropertiesChanged or UnitRemoved messages for these sessions are sent out by systemd to systemd-logind.
  • This message is the proof of getting into the cleanup_queue: Jun 01 16:02:27 <hostname_masked> systemd[1]: session-9011.scope: Collecting.
  • We are using the 239-58 version of the CentOS build.
  • As you can see, debug logs have been enabled for systemd and systemd-logind.
  • The file /run/systemd/transient/session-9011.scope was missing/deleted.
  • The file /run/systemd/sessions/9011 was present and showed the session in the closing state. Restarting systemd-logind reaped the sessions stuck in closing.

@poettering
Member

@venkatkchandra Sorry, but 239 is simply too old. This code has changed a lot since then. Issues with such old versions should be reported to the downstream distro, not upstream here.

@poettering
Member

@hexchain is this a cgroupsv2 system?

@hexchain
Contributor Author

hexchain commented Jun 8, 2023

@hexchain is this a cgroupsv2 system?

Yes.

@poettering
Member

Any chance you can check if #27968 helps?

@georgmu
Contributor

georgmu commented Jun 8, 2023

I tried to reproduce the bug in a VM via the machinectl command, but that alone does not trigger the issue.

If I can (more or less reliably) reproduce it in the VM, I can try the patch.

One observation that suggests that the patch might not be the complete solution:
The problem is not triggered every time (I checked loginctl output after opening/closing login sessions on my desktop system and for some time the closed sessions disappeared), but after it is triggered for the first time, all new sessions will be in the Closing state after logout.

@venkatkchandra

venkatkchandra commented Jun 8, 2023

I tried to reproduce the bug in a VM via the machinectl command, but that alone does not trigger the issue.

If I can (more or less reliably) reproduce it in the VM, I can try the patch.

One observation that suggests that the patch might not be the complete solution: The problem is not triggered every time (I checked loginctl output after opening/closing login sessions on my desktop system and for some time the closed sessions disappeared), but after it is triggered for the first time, all new sessions will be in the Closing state after logout.

Had the same experience with CentOS 239-58. Once sessions get into closing, further sessions do not get cleaned up, and you eventually exhaust the quota of 8192 connections; how quickly depends on the load on the system. This is a long-standing issue, I believe. I was unable to reproduce it with machinectl. I can say this much: the unit seems to get into the GC queue and then the cleanup queue, but it does not get cleaned up.

The following is the evidence that it gets into the cleanup_queue:

Jun 01 16:02:27 <hostname_masked> systemd[1]: session-9011.scope: Collecting.

@venkatkchandra

After running dbus-monitor and strace, I found that systemd isn't emitting the UnitRemoved signal for session-XX.scope, so logind still thinks there are remaining processes.

I tried to figure out why the signal isn't sent, but I accidentally pressed Ctrl-C while running gdb -p 1 and the system died. The bus_unit_send_removed_signal function was called, but my breakpoint at send_removed_signal was not reached.

This is correct with 239-58 also. UnitRemoved does not seem to be going out.

@poettering
Member

Had the same experience with CentOS 239-58.

This is the upstream bug tracker of systemd. We only track recent versions here, i.e. v252 and v253. v239 is way too old. Please do not add noise to bug reports here.

Also, CentOS 8 AFAIK still uses cgroupsv1, where cgroup empty events are unreliable to the point of uselessness. There it is pretty likely PID 1 won't notice when a cgroup runs empty, which might cause the issue at hand here.

@hexchain
Contributor Author

hexchain commented Jun 10, 2023

Any chance you can check if #27968 helps?

Given that it was merged, I built and installed the current master (fcc0668), but after rebooting I was unable to log into the Plasma Wayland session.

kwin_wayland complains that "/tmp/.X11-unix is not owned by root or us". Further investigation showed that entries in /tmp that should normally be owned by root appear to be owned by nobody instead, indicating that a user namespace might be in place. After reverting 6ef721c, the Plasma session starts normally again. AFAIK there doesn't seem to be any sandboxing option in plasma-kwin_wayland.service. (cc @bluca)

I'll be running this build for a few days and see if the logind issue still happens.

@bluca
Member

bluca commented Jun 10, 2023

I am not familiar with KDE, but when I searched I don't remember seeing it in the list of things shipping user units with sandboxing. Where is it coming from? I.e., can you show the user unit that is running that session?

@hexchain
Contributor Author

Here are the logs of plasma-kwin_wayland.service with the following override:

[Service]
ExecStartPre=ls -al /tmp
ExecStartPre=sh -c 'systemctl --user show plasma-kwin_wayland.service | sort -d'

plasma-kwin_wayland.log

@bluca
Member

bluca commented Jun 10, 2023

I don't see any sandboxing there; are you sure that's the right unit?

@hexchain
Contributor Author

Pretty sure it is. Looking at /proc/self/ns also confirmed that the service is indeed inside a different user namespace:

Service override:

[Service]
ExecStartPre=ls -l /proc/self/ns

Journal:

Jun 11 12:07:33 hostname ls[386095]: total 0
Jun 11 12:07:33 hostname ls[386095]: lrwxrwxrwx 1 username username 0 Jun 11 12:07 cgroup -> cgroup:[4026531835]
Jun 11 12:07:33 hostname ls[386095]: lrwxrwxrwx 1 username username 0 Jun 11 12:07 ipc -> ipc:[4026531839]
Jun 11 12:07:33 hostname ls[386095]: lrwxrwxrwx 1 username username 0 Jun 11 12:07 mnt -> mnt:[4026531841]
Jun 11 12:07:33 hostname ls[386095]: lrwxrwxrwx 1 username username 0 Jun 11 12:07 net -> net:[4026531840]
Jun 11 12:07:33 hostname ls[386095]: lrwxrwxrwx 1 username username 0 Jun 11 12:07 pid -> pid:[4026531836]
Jun 11 12:07:33 hostname ls[386095]: lrwxrwxrwx 1 username username 0 Jun 11 12:07 pid_for_children -> pid:[4026531836]
Jun 11 12:07:33 hostname ls[386095]: lrwxrwxrwx 1 username username 0 Jun 11 12:07 time -> time:[4026531834]
Jun 11 12:07:33 hostname ls[386095]: lrwxrwxrwx 1 username username 0 Jun 11 12:07 time_for_children -> time:[4026531834]
Jun 11 12:07:33 hostname ls[386095]: lrwxrwxrwx 1 username username 0 Jun 11 12:07 user -> user:[4026533106]
Jun 11 12:07:33 hostname ls[386095]: lrwxrwxrwx 1 username username 0 Jun 11 12:07 uts -> uts:[4026531838]

PID 1:

% sudo ls -l /proc/1/ns/
total 0
lrwxrwxrwx 1 root root 0 Jun 10 15:32 cgroup -> cgroup:[4026531835]
lrwxrwxrwx 1 root root 0 Jun 11 11:57 ipc -> ipc:[4026531839]
lrwxrwxrwx 1 root root 0 Jun 11 11:57 mnt -> mnt:[4026531841]
lrwxrwxrwx 1 root root 0 Jun 10 15:33 net -> net:[4026531840]
lrwxrwxrwx 1 root root 0 Jun 11 11:57 pid -> pid:[4026531836]
lrwxrwxrwx 1 root root 0 Jun 11 11:57 pid_for_children -> pid:[4026531836]
lrwxrwxrwx 1 root root 0 Jun 11 11:57 time -> time:[4026531834]
lrwxrwxrwx 1 root root 0 Jun 11 11:57 time_for_children -> time:[4026531834]
lrwxrwxrwx 1 root root 0 Jun 11 11:57 user -> user:[4026531837]
lrwxrwxrwx 1 root root 0 Jun 11 11:57 uts -> uts:[4026531838]

@bluca
Member

bluca commented Jun 11, 2023

Then what you posted cannot be the full list of settings; something is missing. Try with systemctl cat.

@hexchain
Contributor Author

Then what you posted cannot be the full list of settings; something is missing. Try with systemctl cat.

% systemctl --user cat plasma-kwin_wayland.service
# /usr/lib/systemd/user/plasma-kwin_wayland.service
[Unit]
Description=KDE Window Manager
PartOf=graphical-session.target

[Service]
ExecStart=/usr/bin/kwin_wayland_wrapper --xwayland
BusName=org.kde.KWinWrapper
Slice=session.slice

# /home/username/.config/systemd/user/plasma-kwin_wayland.service.d/override.conf
[Service]
ExecStartPre=ls -l /proc/self/ns

@poettering
Member

What about the per-user systemd instance, i.e. user@.service for your user? Does that have sandboxing on?

@hexchain
Contributor Author

The user service looks like this:

% systemctl cat user@.service
# /usr/lib/systemd/system/user@.service
#  SPDX-License-Identifier: LGPL-2.1-or-later
#
#  This file is part of systemd.
#
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms of the GNU Lesser General Public License as published by
#  the Free Software Foundation; either version 2.1 of the License, or
#  (at your option) any later version.

[Unit]
Description=User Manager for UID %i
Documentation=man:user@.service(5)
After=user-runtime-dir@%i.service dbus.service systemd-oomd.service
Requires=user-runtime-dir@%i.service
IgnoreOnIsolate=yes

[Service]
User=%i
PAMName=systemd-user
Type=notify-reload
ExecStart=/usr/lib/systemd/systemd --user
Slice=user-%i.slice
KillMode=mixed
Delegate=pids memory cpu
DelegateSubgroup=init.scope
TasksMax=infinity
TimeoutStopSec=120s
KeyringMode=inherit
OOMScoreAdjust=100
MemoryPressureWatch=skip

# /usr/lib/systemd/system/user@.service.d/10-login-barrier.conf
#  SPDX-License-Identifier: LGPL-2.1-or-later
#
#  This file is part of systemd.
#
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms of the GNU Lesser General Public License as published by
#  the Free Software Foundation; either version 2.1 of the License, or
#  (at your option) any later version.

[Unit]
# Make sure user instances are started after logins are allowed. However this
# is not desirable for user@0.service since root should be able to log in
# earlier during the boot process especially if something goes wrong.
After=systemd-user-sessions.service

@hexchain
Contributor Author

I've added some log output to execute.c and found that exec_context_need_unprivileged_private_users returned true for all user units because context->private_mounts is -1.

@bluca
Member

bluca commented Jun 14, 2023

Ah, good find. That field was changed from a boolean to a tristate concurrently, so this check was completely missed. Are you able to send a PR? The fix is simply to check whether it's > 0.

@hexchain
Contributor Author

Sure! Opened #28037.

@hexchain
Contributor Author

hexchain commented Jul 10, 2023

This issue doesn't seem to occur anymore since I switched to the main branch. I think we can call it fixed.

However, now on 254-rc1, systemd-logind has a bunch of zombie subprocesses named (close). As I'm typing this after a soft-reboot, there are 7. Is this normal?

EDIT: It gets a bit crazy over time:

% pgrep -fa logind
855930 /usr/lib/systemd/systemd-logind
% pgrep -P 855930 -c '(close)'
45

YHNdnzj added a commit to YHNdnzj/systemd that referenced this issue Jul 10, 2023
Follow-up for c26d783

waitpid() doesn't support WEXITED and returns -1 (EINVAL),
which results in the intermediate close process not getting
reaped.

Fixes systemd#26744 (comment)
@YHNdnzj
Member

YHNdnzj commented Jul 10, 2023

This issue doesn't seem to occur anymore since I switched to the main branch. I think we can call it fixed.

Closing then :)

However, now on 254-rc1, systemd-logind has a bunch of zombie subprocesses named (close). As I'm typing this after a soft-reboot, there are 7. Is this normal?

Just opened #28348, hope that helps. But that's off-topic here.

@YHNdnzj YHNdnzj closed this as completed Jul 10, 2023
YHNdnzj added a commit to YHNdnzj/systemd that referenced this issue Jan 16, 2024
At the same time, 8b6c039 is reverted, i.e.
session state is removed from the output. It was added to workaround systemd#26744,
and doesn't really make too much sense after the issue is properly fixed.
@mrc0mmand mrc0mmand added login and removed logind labels Apr 19, 2024
martinpitt added a commit to martinpitt/cockpit that referenced this issue May 7, 2024
Despite multiple attempts/claims at fixing this, logind still often
has empty "State: closing" sessions after logout, without any processes
in them [1]. Our generic nondestructive test cleanup in testlib's
terminate_sessions() has a workaround for this (restarting logind). Do
the same in this test.

Only do this when waiting for the session to go away; starting new
sessions should work fine.

Fixes cockpit-project#20379

[1] systemd/systemd#26744
@martinpitt
Contributor

We still see this in our CI on the latest systemd 256-rc1, and with all systemd versions before it. Our workaround is to run systemctl stop systemd-logind in a loop while waiting for the session to go away, whenever there is a "State: closing" session. Restarting logind mops up these empty "closing" sessions.
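That workaround can be sketched as a small polling helper. Assumptions: loginctl and systemctl are available, and the `wait_session_gone` name is ours, not from Cockpit's testlib:

```shell
#!/bin/sh
# Wait up to 30s for a session to disappear from logind; if it is
# wedged in "closing", stop logind so the stale state is reaped.
wait_session_gone() {
    id=$1
    for _ in $(seq 1 30); do
        state=$(loginctl show-session "$id" -p State --value 2>/dev/null) \
            || return 0                  # session (or loginctl) gone
        [ "$state" = "closing" ] && systemctl stop systemd-logind
        sleep 1
    done
    return 1                             # still present after 30s
}
```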

martinpitt added a commit to cockpit-project/cockpit that referenced this issue May 7, 2024