
Kicked out from session #7782

Open
holiman opened this issue Sep 19, 2022 · 5 comments
Labels
affects-4.1: This issue affects Qubes OS 4.1.
C: desktop-linux-i3: Support for the i3 tiling window manager
needs diagnosis: Requires technical diagnosis from developer. Replace with "diagnosed" or remove if otherwise closed.
P: default: Priority: default. Default priority for new issues, to be replaced given sufficient information.
T: bug: Type: bug report. A problem or defect resulting in unintended behavior in something that exists.

Comments


holiman commented Sep 19, 2022

Qubes OS release

  qubes: |
    R4.1
  xen: |
    4.14.5
  kernel: |
    5.15.63-1

Using i3wm.

Brief summary

Every once in a while, perhaps 1-2 times per day, I get "kicked out" of dom0 and have to log in again.

When I log back in, all my windows are still active and nothing has been lost; it is merely annoying. After logging in, the windows pop up one by one on desktop 1.

More details

This is from journalctl -r on dom0, one of the first times it happened:

Sep 13 18:19:40 dom0 kernel:  </TASK>
Sep 13 18:19:40 dom0 kernel: R13: 0000000000000001 R14: 0000000000000009 R15: 000000000000278e
Sep 13 18:19:40 dom0 kernel: R10: 0000000000000001 R11: 0000000000000246 R12: 000077c444000000
Sep 13 18:19:40 dom0 kernel: RBP: 0000000000000000 R08: 0000000000000009 R09: 000000007e2dd000
Sep 13 18:19:40 dom0 kernel: RDX: 0000000000000001 RSI: 000000000278e000 RDI: 0000000000000000
Sep 13 18:19:40 dom0 kernel: RAX: ffffffffffffffda RBX: 0000000000000001 RCX: 000077c44efd32e6
Sep 13 18:19:40 dom0 kernel: RSP: 002b:00007fff813265d8 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
Sep 13 18:19:40 dom0 kernel: Code: 01 00 66 90 f3 0f 1e fa 41 f7 c1 ff 0f 00 00 75 2b 55 48 89 fd 53 89 cb 48 85 ff 74 37 41 89 da 48 89 ef b8 09 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 62 5b 5d c3 0f 1f 80 00 00 00 00 48 8b 05 79
Sep 13 18:19:40 dom0 kernel: RIP: 0033:0x77c44efd32e6
Sep 13 18:19:40 dom0 kernel:  entry_SYSCALL_64_after_hwframe+0x61/0xcb
Sep 13 18:19:40 dom0 kernel:  do_syscall_64+0x38/0x90
Sep 13 18:19:40 dom0 kernel:  ksys_mmap_pgoff+0x1d1/0x240
Sep 13 18:19:40 dom0 kernel:  vm_mmap_pgoff+0xe6/0x190
Sep 13 18:19:40 dom0 kernel:  ? security_mmap_file+0x7f/0xe0
Sep 13 18:19:40 dom0 kernel:  ? gntdev_alloc_map+0x374/0x3d0 [xen_gntdev]
Sep 13 18:19:40 dom0 kernel:  do_mmap+0x341/0x530
Sep 13 18:19:40 dom0 kernel:  mmap_region+0x61a/0x6d0
Sep 13 18:19:40 dom0 kernel:  unmap_region+0xbd/0x120
Sep 13 18:19:40 dom0 kernel:  unmap_vmas+0x83/0x100
Sep 13 18:19:40 dom0 kernel:  unmap_page_range+0x17a/0x210
Sep 13 18:19:40 dom0 kernel:  zap_pud_range.isra.0+0xaa/0x1e0
Sep 13 18:19:40 dom0 kernel:  zap_pmd_range.isra.0+0x1cc/0x2d0
Sep 13 18:19:40 dom0 kernel:  ? __raw_callee_save_xen_pmd_val+0x11/0x22
Sep 13 18:19:40 dom0 kernel:  zap_pte_range+0x388/0x7d0
Sep 13 18:19:40 dom0 kernel:  print_bad_pte.cold+0x6a/0xc5
Sep 13 18:19:40 dom0 kernel:  dump_stack_lvl+0x46/0x5e
Sep 13 18:19:40 dom0 kernel:  <TASK>
Sep 13 18:19:40 dom0 kernel: Call Trace:
Sep 13 18:19:40 dom0 kernel: Hardware name: LENOVO 20TKCTO1WW/20TKCTO1WW, BIOS N2VET36W (1.21 ) 12/27/2021
Sep 13 18:19:40 dom0 kernel: CPU: 0 PID: 149508 Comm: Xorg Tainted: G    B             5.15.63-1.fc32.qubes.x86_64 #1
Sep 13 18:19:40 dom0 kernel: file:(null) fault:0x0 mmap:0x0 readpage:0x0
Sep 13 18:19:40 dom0 kernel: addr:000077c3ad0e6000 vm_flags:140600f9 anon_vma:0000000000000000 mapping:0000000000000000 index:7e318
Sep 13 18:19:40 dom0 kernel: page dumped because: bad pte
Sep 13 18:19:40 dom0 kernel: raw: 0000000000000000 000008c50000002d 00000001fffffffe 0000000000000000
Sep 13 18:19:40 dom0 kernel: raw: 0027ffffc000340a ffff8880228420c0 ffffea0005b57f40 0000000000000000
Sep 13 18:19:40 dom0 kernel: flags: 0x27ffffc000340a(referenced|dirty|owner_priv_1|reserved|private|node=0|zone=4|lastcpupid=0x1fffff)
Sep 13 18:19:40 dom0 kernel: page:00000000d225d7af refcount:1 mapcount:-1 mapping:0000000000000000 index:0x0 pfn:0x16d5fe
Sep 13 18:19:40 dom0 kernel: BUG: Bad page map in process Xorg  pte:8000000e37c14365 pmd:3e651067

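For reference, a minimal sketch of the commands used to collect traces like the one above (assumptions: a systemd-based dom0 where journalctl and coredumpctl are available; run in a dom0 terminal):

```shell
# Kernel messages in reverse order (newest first), as captured above.
# Guarded so the script degrades gracefully where the tools are absent.
if command -v journalctl >/dev/null; then
    journalctl -r -k | head -n 40
fi

# List recorded core dumps, then show the stack trace of the latest Xorg dump.
if command -v coredumpctl >/dev/null; then
    coredumpctl list
    coredumpctl info Xorg
fi
```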
I'm not sure whether that is related. The latest time I checked, this seemed to be the most likely culprit:

Sep 19 12:10:37 dom0 systemd[1]: systemd-coredump@3-116193-0.service: Succeeded.
Sep 19 12:10:37 dom0 systemd-logind[4411]: Removed session 5.
Sep 19 12:10:37 dom0 systemd[1]: session-5.scope: Consumed 1min 48.381s CPU time.
Sep 19 12:10:37 dom0 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-coredump@3-116193-0 comm="systemd" exe="/usr/l>
Sep 19 12:10:37 dom0 systemd[1]: session-5.scope: Succeeded.
Sep 19 12:10:37 dom0 systemd-coredump[116200]: Process 95970 (xss-lock) of user 1000 dumped core.
                                               
                                               Stack trace of thread 95970:
                                               #0  0x00007eba44371fc2 g_logv (libglib-2.0.so.0 + 0x59fc2)
                                               #1  0x00007eba44372233 g_log (libglib-2.0.so.0 + 0x5a233)
                                               #2  0x00005b4d327b1be5 screensaver_event_cb (xss-lock + 0x4be5)
                                               #3  0x00005b4d327b1cdf xcb_event_dispatch (xss-lock + 0x4cdf)
                                               #4  0x00007eba4436a78f g_main_context_dispatch (libglib-2.0.so.0 + 0x5278f)
                                               #5  0x00007eba4436ab18 g_main_context_iterate.constprop.0 (libglib-2.0.so.0 + 0x52b18)
                                               #6  0x00007eba4436ae33 g_main_loop_run (libglib-2.0.so.0 + 0x52e33)
                                               #7  0x00005b4d327b0ecd main (xss-lock + 0x3ecd)
                                               #8  0x00007eba44138082 __libc_start_main (libc.so.6 + 0x27082)
                                               #9  0x00005b4d327b102e _start (xss-lock + 0x402e)
                                               
                                               Stack trace of thread 95977:
                                               #0  0x00007eba4420786f __poll (libc.so.6 + 0xf686f)
                                               #1  0x00007eba4436aaae g_main_context_iterate.constprop.0 (libglib-2.0.so.0 + 0x52aae)
                                               #2  0x00007eba4436abe3 g_main_context_iteration (libglib-2.0.so.0 + 0x52be3)
                                               #3  0x00007eba4436ac31 glib_worker_main (libglib-2.0.so.0 + 0x52c31)
                                               #4  0x00007eba443947f2 g_thread_proxy (libglib-2.0.so.0 + 0x7c7f2)
                                               #5  0x00007eba4402f432 start_thread (libpthread.so.0 + 0x9432)
                                               #6  0x00007eba442126d3 __clone (libc.so.6 + 0x1016d3)
                                               
                                               Stack trace of thread 95979:
                                               #0  0x00007eba4420786f __poll (libc.so.6 + 0xf686f)
                                               #1  0x00007eba4436aaae g_main_context_iterate.constprop.0 (libglib-2.0.so.0 + 0x52aae)
                                               #2  0x00007eba4436ae33 g_main_loop_run (libglib-2.0.so.0 + 0x52e33)
                                               #3  0x00007eba445be6aa gdbus_shared_thread_func (libgio-2.0.so.0 + 0x1226aa)
                                               #4  0x00007eba443947f2 g_thread_proxy (libglib-2.0.so.0 + 0x7c7f2)
                                               #5  0x00007eba4402f432 start_thread (libpthread.so.0 + 0x9432)
                                               #6  0x00007eba442126d3 __clone (libc.so.6 + 0x1016d3)
Sep 19 12:10:37 dom0 qrexec-policy-daemon[4604]: qrexec: qubes.WindowIconUpdater+: sys-net -> @adminvm: allowed to dom0
Sep 19 12:10:37 dom0 qrexec-policy-daemon[4604]: qrexec: qubes.WindowIconUpdater+: sys-firewall -> @adminvm: allowed to dom0
Sep 19 12:10:37 dom0 qrexec-policy-daemon[4604]: qrexec: qubes.WindowIconUpdater+: work -> @adminvm: allowed to dom0
@holiman holiman added P: default Priority: default. Default priority for new issues, to be replaced given sufficient information. T: bug Type: bug report. A problem or defect resulting in unintended behavior in something that exists. labels Sep 19, 2022
@andrewdavidwong andrewdavidwong added C: desktop-linux-i3 Support for the i3 tiling window manager needs diagnosis Requires technical diagnosis from developer. Replace with "diagnosed" or remove if otherwise closed. labels Sep 19, 2022
@andrewdavidwong andrewdavidwong added this to the Release 4.1 updates milestone Sep 19, 2022
@logoerthiner1

The xss-lock core dump is a longstanding bug (#7296) and a red herring in your case. The relevant core dump is more likely from Xorg or something similar.


holiman commented Sep 20, 2022

Good point. I was actually wondering the same. After a bit more digging, I found that another crash preceded the xss-lock crash: the first crash caused X to shut down, which in turn triggered the xss-lock crash via the already-known issue. Here is the same log as above, but with more history. This time the log is in normal order, not reverse:

Sep 19 12:10:36 dom0 kernel: nouveau 0000:01:00.0: fifo: fault 01 [VIRT_WRITE] at 0000000005edb000 engine 40 [gr] client 13 [GPC0/PROP_0] reason 02 [PTE] on channel 2 [00ff8f3000 Xorg[95828]]
...
Sep 19 12:10:36 dom0 systemd-coredump[116186]: Process 95828 (Xorg) of user 0 dumped core.
                                               
                                               Stack trace of thread 95828:
                                               #0  0x000070ac3c74d7d5 raise (libc.so.6 + 0x3c7d5)
                                               #1  0x000070ac3c736895 abort (libc.so.6 + 0x25895)
                                               #2  0x00006267591ec0e0 OsAbort (Xorg + 0x1c60e0)
                                               #3  0x00006267591f1959 AbortServer (Xorg + 0x1cb959)
                                               #4  0x00006267591f26aa FatalError (Xorg + 0x1cc6aa)
                                               #5  0x00006267591e9450 OsSigHandler (Xorg + 0x1c3450)

Expand to see the full log

-- Logs begin at Sat 2022-09-10 21:21:33 CEST, end at Tue 2022-09-20 08:19:46 CEST. --
Sep 19 12:10:36 dom0 kernel: nouveau 0000:01:00.0: fifo: fault 01 [VIRT_WRITE] at 0000000005edb000 engine 40 [gr] client 13 [GPC0/PROP_0] reason 02 [PTE] on channel 2 [00ff8f3000 Xorg[95828]]
Sep 19 12:10:36 dom0 kernel: nouveau 0000:01:00.0: fifo: channel 2: killed
Sep 19 12:10:36 dom0 kernel: nouveau 0000:01:00.0: fifo: runlist 0: scheduled for recovery
Sep 19 12:10:36 dom0 kernel: nouveau 0000:01:00.0: fifo: engine 0: scheduled for recovery
Sep 19 12:10:36 dom0 kernel: nouveau 0000:01:00.0: Xorg[95828]: channel 2 killed!
Sep 19 12:10:36 dom0 audit[95828]: ANOM_ABEND auid=4294967295 uid=0 gid=0 ses=4294967295 pid=95828 comm="Xorg" exe="/usr/libexec/Xorg" sig=6 res=1
Sep 19 12:10:36 dom0 kernel: audit: type=1701 audit(1663582236.580:510): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=95828 comm="Xorg" exe="/usr/libexec/Xorg" sig=6 res=1
Sep 19 12:10:36 dom0 audit: BPF prog-id=47 op=LOAD
Sep 19 12:10:36 dom0 audit: BPF prog-id=48 op=LOAD
Sep 19 12:10:36 dom0 audit: BPF prog-id=49 op=LOAD
Sep 19 12:10:36 dom0 kernel: audit: type=1334 audit(1663582236.593:511): prog-id=47 op=LOAD
Sep 19 12:10:36 dom0 kernel: audit: type=1334 audit(1663582236.593:512): prog-id=48 op=LOAD
Sep 19 12:10:36 dom0 kernel: audit: type=1334 audit(1663582236.593:513): prog-id=49 op=LOAD
Sep 19 12:10:36 dom0 systemd[1]: Started Process Core Dump (PID 116185/UID 0).
Sep 19 12:10:36 dom0 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-coredump@2-116185-0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 19 12:10:36 dom0 kernel: audit: type=1130 audit(1663582236.595:514): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-coredump@2-116185-0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 19 12:10:36 dom0 systemd-coredump[116186]: Process 95828 (Xorg) of user 0 dumped core.
                                               
                                               Stack trace of thread 95828:
                                               #0  0x000070ac3c74d7d5 raise (libc.so.6 + 0x3c7d5)
                                               #1  0x000070ac3c736895 abort (libc.so.6 + 0x25895)
                                               #2  0x00006267591ec0e0 OsAbort (Xorg + 0x1c60e0)
                                               #3  0x00006267591f1959 AbortServer (Xorg + 0x1cb959)
                                               #4  0x00006267591f26aa FatalError (Xorg + 0x1cc6aa)
                                               #5  0x00006267591e9450 OsSigHandler (Xorg + 0x1c3450)
                                               #6  0x000070ac3c8f1a90 __restore_rt (libpthread.so.0 + 0x14a90)
                                               #7  0x000070ac3c74d7d5 raise (libc.so.6 + 0x3c7d5)
                                               #8  0x000070ac3c736895 abort (libc.so.6 + 0x25895)
                                               #9  0x000070ac3c736769 __assert_fail_base.cold (libc.so.6 + 0x25769)
                                               #10 0x000070ac3c745e86 __assert_fail (libc.so.6 + 0x34e86)
                                               #11 0x000070ac358fc847 nouveau_pushbuf_data (libdrm_nouveau.so.2 + 0x4847)
                                               #12 0x000070ac358fc7a7 nouveau_pushbuf_data (libdrm_nouveau.so.2 + 0x47a7)
                                               #13 0x000070ac358fc8cf pushbuf_submit (libdrm_nouveau.so.2 + 0x48cf)
                                               #14 0x000070ac358fcce7 pushbuf_flush.isra.0 (libdrm_nouveau.so.2 + 0x4ce7)
                                               #15 0x000070ac358fd904 nouveau_pushbuf_kick (libdrm_nouveau.so.2 + 0x5904)
                                               #16 0x000070ac3b058acd nvc0_flush (nouveau_dri.so + 0x99eacd)
                                               #17 0x000070ac3a7f732d st_glFlush (nouveau_dri.so + 0x13d32d)
                                               #18 0x000070ac3be8c5d4 _glamor_block_handler (libglamoregl.so + 0xa5d4)
                                               #19 0x000070ac3becdb47 msBlockHandler (modesetting_drv.so + 0xcb47)
                                               #20 0x0000626759088505 BlockHandler (Xorg + 0x62505)
                                               #21 0x00006267591e2b22 WaitForSomething (Xorg + 0x1bcb22)
                                               #22 0x00006267590837e7 Dispatch (Xorg + 0x5d7e7)
                                               #23 0x0000626759087b04 dix_main (Xorg + 0x61b04)
                                               #24 0x000070ac3c738082 __libc_start_main (libc.so.6 + 0x27082)
                                               #25 0x0000626759070e6e _start (Xorg + 0x4ae6e)
                                               
                                               Stack trace of thread 95829:
                                               #0  0x000070ac3c8ece92 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xfe92)
                                               #1  0x000070ac3ab5720b util_queue_thread_func (nouveau_dri.so + 0x49d20b)
                                               #2  0x000070ac3ab56ccb impl_thrd_routine (nouveau_dri.so + 0x49cccb)
                                               #3  0x000070ac3c8e6432 start_thread (libpthread.so.0 + 0x9432)
                                               #4  0x000070ac3c8126d3 __clone (libc.so.6 + 0x1016d3)
                                               
                                               Stack trace of thread 95830:
                                               #0  0x000070ac3c8ece92 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xfe92)
                                               #1  0x000070ac3ab5720b util_queue_thread_func (nouveau_dri.so + 0x49d20b)
                                               #2  0x000070ac3ab56ccb impl_thrd_routine (nouveau_dri.so + 0x49cccb)
                                               #3  0x000070ac3c8e6432 start_thread (libpthread.so.0 + 0x9432)
                                               #4  0x000070ac3c8126d3 __clone (libc.so.6 + 0x1016d3)
                                               
                                               Stack trace of thread 95831:
                                               #0  0x000070ac3c8ece92 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xfe92)
                                               #1  0x000070ac3ab5720b util_queue_thread_func (nouveau_dri.so + 0x49d20b)
                                               #2  0x000070ac3ab56ccb impl_thrd_routine (nouveau_dri.so + 0x49cccb)
                                               #3  0x000070ac3c8e6432 start_thread (libpthread.so.0 + 0x9432)
                                               #4  0x000070ac3c8126d3 __clone (libc.so.6 + 0x1016d3)
                                               
                                               Stack trace of thread 95832:
                                               #0  0x000070ac3c8ece92 pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0 + 0xfe92)
                                               #1  0x000070ac3ab5720b util_queue_thread_func (nouveau_dri.so + 0x49d20b)
                                               #2  0x000070ac3ab56ccb impl_thrd_routine (nouveau_dri.so + 0x49cccb)
                                               #3  0x000070ac3c8e6432 start_thread (libpthread.so.0 + 0x9432)
                                               #4  0x000070ac3c8126d3 __clone (libc.so.6 + 0x1016d3)
                                               
                                               Stack trace of thread 95844:
                                               #0  0x000070ac3c8f0750 __lll_lock_wait (libpthread.so.0 + 0x13750)
                                               #1  0x000070ac3c8e8ee1 __pthread_mutex_lock (libpthread.so.0 + 0xbee1)
                                               #2  0x00006267591e73c4 input_lock (Xorg + 0x1c13c4)
                                               #3  0x00006267591e7685 InputReady (Xorg + 0x1c1685)
                                               #4  0x00006267591e9de1 ospoll_wait (Xorg + 0x1c3de1)
                                               #5  0x00006267591e74c1 InputThreadDoWork (Xorg + 0x1c14c1)
                                               #6  0x000070ac3c8e6432 start_thread (libpthread.so.0 + 0x9432)
                                               #7  0x000070ac3c8126d3 __clone (libc.so.6 + 0x1016d3)
Sep 19 12:10:37 dom0 systemd[1]: systemd-coredump@2-116185-0.service: Succeeded.
Sep 19 12:10:37 dom0 kernel: audit: type=1131 audit(1663582237.162:515): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-coredump@2-116185-0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 19 12:10:37 dom0 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-coredump@2-116185-0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 19 12:10:37 dom0 qui-updates[96149]: qui-updates: Fatal IO error 11 (Resource temporarily unavailable) on X server :0.
Sep 19 12:10:37 dom0 kernel: traps: xss-lock[95970] trap int3 ip:7eba44371fc2 sp:7ffd9f47cb80 error:0 in libglib-2.0.so.0.6400.6[7eba44334000+86000]
Sep 19 12:10:37 dom0 kernel: audit: type=1701 audit(1663582237.167:516): auid=1000 uid=1000 gid=1001 ses=5 pid=95970 comm="xss-lock" exe="/usr/bin/xss-lock" sig=5 res=1
Sep 19 12:10:37 dom0 kernel: audit: type=1113 audit(1663582237.175:517): pid=95887 uid=0 auid=1000 ses=5 msg='op=logout id=1000 exe="/usr/sbin/lightdm" hostname=dom0 addr=? terminal=/dev/tty1 res=success'
Sep 19 12:10:37 dom0 kernel: audit: type=1106 audit(1663582237.180:518): pid=95887 uid=0 auid=1000 ses=5 msg='op=PAM:session_close grantors=pam_selinux,pam_loginuid,pam_selinux,pam_keyinit,pam_namespace,pam_keyinit,pam_limits,pam_systemd,pam_unix,pam_lastlog,pam_umask,pam_lastlog acct="martin" exe="/usr/sbin/lightdm" hostname=? addr=? terminal=:0 res=success'
Sep 19 12:10:37 dom0 audit[95970]: ANOM_ABEND auid=1000 uid=1000 gid=1001 ses=5 pid=95970 comm="xss-lock" exe="/usr/bin/xss-lock" sig=5 res=1
Sep 19 12:10:37 dom0 audit[95887]: USER_LOGOUT pid=95887 uid=0 auid=1000 ses=5 msg='op=logout id=1000 exe="/usr/sbin/lightdm" hostname=dom0 addr=? terminal=/dev/tty1 res=success'
Sep 19 12:10:37 dom0 audit[95887]: USER_END pid=95887 uid=0 auid=1000 ses=5 msg='op=PAM:session_close grantors=pam_selinux,pam_loginuid,pam_selinux,pam_keyinit,pam_namespace,pam_keyinit,pam_limits,pam_systemd,pam_unix,pam_lastlog,pam_umask,pam_lastlog acct="martin" exe="/usr/sbin/lightdm" hostname=? addr=? terminal=:0 res=success'
Sep 19 12:10:37 dom0 audit[95887]: CRED_DISP pid=95887 uid=0 auid=1000 ses=5 msg='op=PAM:setcred grantors=pam_unix acct="martin" exe="/usr/sbin/lightdm" hostname=? addr=? terminal=:0 res=success'
Sep 19 12:10:37 dom0 at-spi2-registryd[96154]: X connection to :0 broken (explicit kill or server shutdown).
Sep 19 12:10:37 dom0 unknown[96256]: xfce4-notifyd: Fatal IO error 11 (Resource temporarily unavailable) on X server :0.
Sep 19 12:10:37 dom0 lightdm[95887]: pam_unix(lightdm:session): session closed for user martin
Sep 19 12:10:37 dom0 widget-wrapper[116194]: xdpyinfo:  unable to open display ":0".
Sep 19 12:10:37 dom0 unknown[96125]: qui-domains: Fatal IO error 11 (Resource temporarily unavailable) on X server :0.
Sep 19 12:10:37 dom0 unknown[96062]: qui-clipboard: Fatal IO error 11 (Resource temporarily unavailable) on X server :0.
Sep 19 12:10:37 dom0 unknown[96103]: qui-disk-space: Fatal IO error 11 (Resource temporarily unavailable) on X server :0.
Sep 19 12:10:37 dom0 systemd[6395]: dbus-:1.7-org.a11y.atspi.Registry@1.service: Main process exited, code=exited, status=1/FAILURE
Sep 19 12:10:37 dom0 systemd[6395]: dbus-:1.7-org.a11y.atspi.Registry@1.service: Failed with result 'exit-code'.
Sep 19 12:10:37 dom0 widget-wrapper[116196]: xdpyinfo:  unable to open display ":0".
Sep 19 12:10:37 dom0 unknown[96074]: qui-devices: Fatal IO error 11 (Resource temporarily unavailable) on X server :0.
Sep 19 12:10:37 dom0 widget-wrapper[96058]: exiting with 0
Sep 19 12:10:37 dom0 systemd[6395]: xfce4-notifyd.service: Main process exited, code=exited, status=1/FAILURE
Sep 19 12:10:37 dom0 systemd[6395]: xfce4-notifyd.service: Failed with result 'exit-code'.
Sep 19 12:10:37 dom0 widget-wrapper[116195]: xdpyinfo:  unable to open display ":0".
Sep 19 12:10:37 dom0 widget-wrapper[96124]: exiting with 0
Sep 19 12:10:37 dom0 systemd[6395]: qubes-widget@qui-clipboard.service: Succeeded.
Sep 19 12:10:37 dom0 widget-wrapper[116197]: xdpyinfo:  unable to open display ":0".
Sep 19 12:10:37 dom0 widget-wrapper[96101]: exiting with 0
Sep 19 12:10:37 dom0 systemd-logind[4411]: Session 5 logged out. Waiting for processes to exit.
Sep 19 12:10:37 dom0 widget-wrapper[96069]: exiting with 0
Sep 19 12:10:37 dom0 systemd[6395]: qubes-widget@qui-disk-space.service: Succeeded.
Sep 19 12:10:37 dom0 systemd[6395]: qubes-widget@qui-devices.service: Succeeded.
Sep 19 12:10:37 dom0 systemd[6395]: qubes-widget@qui-domains.service: Succeeded.
Sep 19 12:10:37 dom0 systemd[6395]: qubes-widget@qui-domains.service: Consumed 1.110s CPU time.
Sep 19 12:10:37 dom0 widget-wrapper[116198]: xdpyinfo:  unable to open display ":0".
Sep 19 12:10:37 dom0 widget-wrapper[96148]: exiting with 0
Sep 19 12:10:37 dom0 systemd[6395]: qubes-widget@qui-updates.service: Succeeded.
Sep 19 12:10:37 dom0 audit: BPF prog-id=50 op=LOAD
Sep 19 12:10:37 dom0 audit: BPF prog-id=51 op=LOAD
Sep 19 12:10:37 dom0 audit: BPF prog-id=52 op=LOAD
Sep 19 12:10:37 dom0 systemd[1]: Started Process Core Dump (PID 116193/UID 0).
Sep 19 12:10:37 dom0 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-coredump@3-116193-0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 19 12:10:37 dom0 qubesd[4559]: socket.send() raised exception.
Sep 19 12:10:37 dom0 qubesd[4559]: socket.send() raised exception.
Sep 19 12:10:37 dom0 qubesd[4559]: socket.send() raised exception.
Sep 19 12:10:37 dom0 qubesd[4559]: socket.send() raised exception.
Sep 19 12:10:37 dom0 qubesd[4559]: socket.send() raised exception.
Sep 19 12:10:37 dom0 qubesd[4559]: socket.send() raised exception.
Sep 19 12:10:37 dom0 qubesd[4559]: socket.send() raised exception.
Sep 19 12:10:37 dom0 qubesd[4559]: socket.send() raised exception.
Sep 19 12:10:37 dom0 qubesd[4559]: socket.send() raised exception.
Sep 19 12:10:37 dom0 qubesd[4559]: socket.send() raised exception.
Sep 19 12:10:37 dom0 qubesd[4559]: socket.send() raised exception.
Sep 19 12:10:37 dom0 qubesd[4559]: socket.send() raised exception.
Sep 19 12:10:37 dom0 qubesd[4559]: socket.send() raised exception.
Sep 19 12:10:37 dom0 qubesd[4559]: socket.send() raised exception.
Sep 19 12:10:37 dom0 qubesd[4559]: socket.send() raised exception.
Sep 19 12:10:37 dom0 qubesd[4559]: socket.send() raised exception.
Sep 19 12:10:37 dom0 qubesd[4559]: socket.send() raised exception.
Sep 19 12:10:37 dom0 qubesd[4559]: socket.send() raised exception.
Sep 19 12:10:37 dom0 qubesd[4559]: socket.send() raised exception.
Sep 19 12:10:37 dom0 qubesd[4559]: socket.send() raised exception.
Sep 19 12:10:37 dom0 qubesd[4559]: socket.send() raised exception.
Sep 19 12:10:37 dom0 qubesd[4559]: socket.send() raised exception.
Sep 19 12:10:37 dom0 qubesd[4559]: socket.send() raised exception.
Sep 19 12:10:37 dom0 qubesd[4559]: socket.send() raised exception.
Sep 19 12:10:37 dom0 qubesd[4559]: socket.send() raised exception.
Sep 19 12:10:37 dom0 qubesd[4559]: socket.send() raised exception.
Sep 19 12:10:37 dom0 qubesd[4559]: socket.send() raised exception.
Sep 19 12:10:37 dom0 qubesd[4559]: socket.send() raised exception.
Sep 19 12:10:37 dom0 qubesd[4559]: socket.send() raised exception.
Sep 19 12:10:37 dom0 qubesd[4559]: socket.send() raised exception.
Sep 19 12:10:37 dom0 audit: BPF prog-id=0 op=UNLOAD
Sep 19 12:10:37 dom0 audit: BPF prog-id=0 op=UNLOAD
Sep 19 12:10:37 dom0 audit: BPF prog-id=0 op=UNLOAD
Sep 19 12:10:37 dom0 qubesd[4559]: socket.send() raised exception.
Sep 19 12:10:37 dom0 qubesd[4559]: socket.send() raised exception.
Sep 19 12:10:37 dom0 qubesd[4559]: socket.send() raised exception.
Sep 19 12:10:37 dom0 qubesd[4559]: socket.send() raised exception.
Sep 19 12:10:37 dom0 qubesd[4559]: socket.send() raised exception.
Sep 19 12:10:37 dom0 qubesd[4559]: socket.send() raised exception.
Sep 19 12:10:37 dom0 qubesd[4559]: socket.send() raised exception.
Sep 19 12:10:37 dom0 qubesd[4559]: socket.send() raised exception.
Sep 19 12:10:37 dom0 qubesd[4559]: socket.send() raised exception.
Sep 19 12:10:37 dom0 qubesd[4559]: socket.send() raised exception.
Sep 19 12:10:37 dom0 qrexec-policy-daemon[4604]: qrexec: qubes.WindowIconUpdater+: vault -> @adminvm: allowed to dom0
Sep 19 12:10:37 dom0 qrexec-policy-daemon[4604]: qrexec: qubes.WindowIconUpdater+: work -> @adminvm: allowed to dom0
Sep 19 12:10:37 dom0 qrexec-policy-daemon[4604]: qrexec: qubes.WindowIconUpdater+: sys-firewall -> @adminvm: allowed to dom0
Sep 19 12:10:37 dom0 qrexec-policy-daemon[4604]: qrexec: qubes.WindowIconUpdater+: sys-net -> @adminvm: allowed to dom0
Sep 19 12:10:37 dom0 systemd-coredump[116200]: Process 95970 (xss-lock) of user 1000 dumped core.
                                               
                                               Stack trace of thread 95970:
                                               #0  0x00007eba44371fc2 g_logv (libglib-2.0.so.0 + 0x59fc2)
                                               #1  0x00007eba44372233 g_log (libglib-2.0.so.0 + 0x5a233)
                                               #2  0x00005b4d327b1be5 screensaver_event_cb (xss-lock + 0x4be5)
                                               #3  0x00005b4d327b1cdf xcb_event_dispatch (xss-lock + 0x4cdf)
                                               #4  0x00007eba4436a78f g_main_context_dispatch (libglib-2.0.so.0 + 0x5278f)
                                               #5  0x00007eba4436ab18 g_main_context_iterate.constprop.0 (libglib-2.0.so.0 + 0x52b18)
                                               #6  0x00007eba4436ae33 g_main_loop_run (libglib-2.0.so.0 + 0x52e33)
                                               #7  0x00005b4d327b0ecd main (xss-lock + 0x3ecd)
                                               #8  0x00007eba44138082 __libc_start_main (libc.so.6 + 0x27082)
                                               #9  0x00005b4d327b102e _start (xss-lock + 0x402e)
                                               
                                               Stack trace of thread 95977:
                                               #0  0x00007eba4420786f __poll (libc.so.6 + 0xf686f)
                                               #1  0x00007eba4436aaae g_main_context_iterate.constprop.0 (libglib-2.0.so.0 + 0x52aae)
                                               #2  0x00007eba4436abe3 g_main_context_iteration (libglib-2.0.so.0 + 0x52be3)
                                               #3  0x00007eba4436ac31 glib_worker_main (libglib-2.0.so.0 + 0x52c31)
                                               #4  0x00007eba443947f2 g_thread_proxy (libglib-2.0.so.0 + 0x7c7f2)
                                               #5  0x00007eba4402f432 start_thread (libpthread.so.0 + 0x9432)
                                               #6  0x00007eba442126d3 __clone (libc.so.6 + 0x1016d3)
                                               
                                               Stack trace of thread 95979:
                                               #0  0x00007eba4420786f __poll (libc.so.6 + 0xf686f)
                                               #1  0x00007eba4436aaae g_main_context_iterate.constprop.0 (libglib-2.0.so.0 + 0x52aae)
                                               #2  0x00007eba4436ae33 g_main_loop_run (libglib-2.0.so.0 + 0x52e33)
                                               #3  0x00007eba445be6aa gdbus_shared_thread_func (libgio-2.0.so.0 + 0x1226aa)
                                               #4  0x00007eba443947f2 g_thread_proxy (libglib-2.0.so.0 + 0x7c7f2)
                                               #5  0x00007eba4402f432 start_thread (libpthread.so.0 + 0x9432)
                                               #6  0x00007eba442126d3 __clone (libc.so.6 + 0x1016d3)
Sep 19 12:10:37 dom0 systemd[1]: session-5.scope: Succeeded.
Sep 19 12:10:37 dom0 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-coredump@3-116193-0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 19 12:10:37 dom0 systemd[1]: session-5.scope: Consumed 1min 48.381s CPU time.
Sep 19 12:10:37 dom0 systemd-logind[4411]: Removed session 5.
Sep 19 12:10:37 dom0 systemd[1]: systemd-coredump@3-116193-0.service: Succeeded.
Sep 19 12:10:37 dom0 audit: BPF prog-id=0 op=UNLOAD
Sep 19 12:10:37 dom0 audit: BPF prog-id=0 op=UNLOAD
Sep 19 12:10:37 dom0 audit: BPF prog-id=0 op=UNLOAD

@logoerthiner1

I have a similar issue (#7673) which also involves a crashing Xorg. It seems that whenever Xorg crashes, for whatever reason, the session is forcefully logged out.

Your backtrace is clean, so I suppose the developers should be able to find the cause of the bug and fix it soon.

@DemiMarie

I have a similar issue (#7673) which also involves a crashing Xorg. It seems that whenever Xorg crashes, for whatever reason, the session is forcefully logged out.

I believe this is by design, actually.

@asharp

asharp commented Sep 24, 2022

I've also seen this, except in one case it also crashed sound for all of the running qubes: they disappeared as sources from PulseAudio, but otherwise kept working. Newly started qubes worked fine, however. That was using Xfce.

@andrewdavidwong andrewdavidwong added the affects-4.1 This issue affects Qubes OS 4.1. label Aug 8, 2023
@andrewdavidwong andrewdavidwong removed this from the Release 4.1 updates milestone Aug 13, 2023