Shutdown sometimes reboots the system on Z790 #488
@renehoj, please get the cbmem log from when the machine restarts instead of shutting down. There should be some wake causes in there for us to analyze.
It happened again today, here are the cbmem logs.
Hmm, the wake flag is not set, so it looks like an ordinary reboot. However, the register the OS writes to put the system to sleep is set to the value that indicates power off. We will need to reproduce this issue using QubesOS to get a more detailed view on this matter.
I think the system doesn't just restart, it crashes during shutdown, and it's the crash that reboots the system. I had been ignoring the problem, because it isn't a huge issue to shut the system down manually when it reboots, but this weekend it left my system unable to boot because the file system was corrupted. I have looked at the system journal, and the reboot seems to happen early in the shutdown sequence, before the drives are unmounted. Have you been able to reproduce the issue?
We haven't experienced such issues yet.
Is there any difference between the Z690 and Z790 with regard to how the watchdog timer works? I tried disabling the watchdog in the chipset configuration and the issue seems to have gone away, but I don't have any way to trigger the crash/reset to confirm it actually solves the issue. Is there any way the watchdog event could fire during shutdown? There is a period (maybe 3-5 sec.) during shutdown where the system stalls, right after the "system is powering down now" message; it's always at this point that the crash/reset happens, if it happens.
I did some more testing, and disabling the watchdog doesn't solve the issue. It just allows me to stop more VMs during shutdown without rebooting the system. Having ~15 VMs running during shutdown will still reboot the system, but manually shutting down all VMs before doing the system shutdown seems to work just fine.
There isn't, at least from the firmware side.
Rather unlikely; the watchdog is reloaded by an SMI handler, which will still run right before the shutdown.
Interesting... Maybe @marmarek would have some ideas?
"System powering down" is the last message if the system crashes/resets; after that message, dom0/Xen will start shutting down VMs. I guess if no one else is experiencing the same issue, it's unlikely to be caused by Dasharo. I did upgrade from a 12th gen Z690 DDR4 to a 13th gen Z790 DDR5; if I'm the only one with the problem, it seems more likely to be an issue with Xen and the new hardware. How do I connect a serial console? Would that require connecting a TTL-to-USB cable to the motherboard?
VMs are normally shut down when stopping the "qubes-core" (aka "Qubes Dom0 startup setup") service.
https://docs.dasharo.com/unified/msi/development/#serial-debug
Ok, so there are multiple issues here. @renehoj, when posting logs, please use code tags (triple `) which will prevent GitHub from trying to interpret the log contents.
First, there's clearly some corruption in Xen causing it to take a fatal pagefault. It's a userspace pointer, which is wrong in this context. My gut feeling is that there's a pointer in memory which has been corrupted somehow. For this, please can you rebuild Xen with CONFIG_DEBUG enabled (generally, in case there are assertions elsewhere which help narrow things down), and CONFIG_XMEM_POOL_POISON specifically, which performs extra integrity checking of xmem pools in order to spot things like double frees. Both options are available in Xen's Kconfig.
Second, Xen is trying to reboot and clearly not succeeding. To diagnose that further, we'd need the full log.
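For reference, both options mentioned above are plain Kconfig symbols, so a debug build would carry something like the following in the generated `.config` (exact menu locations may vary by Xen version):

```
CONFIG_DEBUG=y
CONFIG_XMEM_POOL_POISON=y
```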
Do you believe there is any possibility of a defective RAM module causing memory corruption? Please post your full system configuration, just in case you end up going the MemTest route and it turns out to be hardware.
@zirblazer I have tried running memtest; it didn't find any errors. The system is stable, I don't get random crashes; as far as I can tell, it's only shutting down QubesOS with the VMs running that triggers the reset.
@andyhhp I have updated the output in the post. I can try and build a new QubesOS iso, but I don't have any experience with compiling Xen or using the Qubes builder, so I don't know how easy it is to build QubesOS with a custom version of Xen. I don't think there is a problem with the reboot, I just didn't include the output from the next boot. The system reboots, and everything seems to be working. As far as I can tell, the only way I can trigger the crash is by shutting down QubesOS with the VMs running; I have not seen any reboots from manually shutting down VMs. This is why I initially thought it was an issue with Dasharo, since it looked like shutdown would randomly reboot the system.
Thanks. That's a much more normal looking backtrace now. @marmarek Can you advise on the best way to run a debug Xen under QubesOS? If it were just me, I'd drop a new xen.gz into place.
@renehoj This is definitely (at least partially) some error in Xen. Even if it is memory corruption from something else, it will still require a fix in Xen to mitigate. Can you confirm exactly what steps you are taking in order to trigger the crash? You distinguish between manually shutting down the VMs (so initiating a shutdown inside the VM?), and shutting down ones which are running (an admin operation to kill the Qube?), but it's not entirely clear. If reboot following the crash is actually working fine, then this is unlikely to be a Dasharo issue.
That's what I usually do... We have several patches on top, but most of them are just for the toolstack, and the few for the hypervisor shouldn't matter for debugging. Just make sure you use the appropriate branch (stable-4.17) and config (https://github.com/QubesOS/qubes-vmm-xen/blob/main/config).
@andyhhp To trigger the crash, I leave the VMs running and shut down QubesOS from the XFCE menu. I don't know of any other way to trigger the crash. When I say manually shutting down the VMs, I mean running `qvm-shutdown`. I'm just one person taking mental notes of when the system crashes, so I can't say for sure qvm-shutdown is safe to use, but I have only seen the crash happen during the system shutdown with the VMs running. Tonight I will try and compile stable-4.17 with the QubesOS config and debugging enabled.
Ah ok. So when you explicitly shut the VMs down first, then shut QubesOS down, it works fine. When you try to shut QubesOS down with VMs still running, we hit the crash. It looks like there's something else happening in the shutdown case which doesn't work nicely in parallel with VM shutdown.
It's "S", which means it may be stack rubble in a release build, including a token entry finding nothing to do. I'd suggest waiting for the log from a debug build, which should give us this accurately.
@andyhhp I tried to compile Xen with debug, but the output looks very similar to the previous crash. I cloned 4.17. Is there anything else I need to do?
That's still a release build of Xen, so I suspect you didn't actually boot the hypervisor you built. Either edit grub.cfg, or use the interactive prompt at boot to change the path to the hypervisor. That said, the new backtrace is in a very different area of code, but with similar symptoms that still look like xmem pool corruption. I suspect that if you repeat this a few times, you'll find different crash locations each time.
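For illustration, booting a hand-built hypervisor means pointing the multiboot line of a GRUB entry at the new binary. A hypothetical entry might look like this (all paths, version strings, and kernel arguments here are placeholders, not the actual ones from this system):

```
menuentry 'Xen (debug build)' {
    # multiboot2 loads the hypervisor; module2 lines load the dom0 kernel and initrd
    multiboot2 /xen-debug.gz console=vga loglvl=all
    module2 /vmlinuz-6.x.y root=/dev/mapper/...
    module2 /initramfs-6.x.y.img
}
```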
@andyhhp The version I have compiled can't shut down any VMs without crashing; shutting down QubesOS or a single VM will result in the following crash. I don't know if it's relevant, but I'm running 4 cpupools with credit2 as the scheduler.
Well - that is an improvement... at least it's consistent now. And yes, CPUPools are almost certainly a relevant factor here. Can you describe your CPUPool setup and other scheduling settings (e.g. smt, sched_gran), and which VMs are in which CPUPools? One observation - it does seem fairly absurd that we're moving vcpus between CPUPools as part of destroying the VM, but we really do move the VM back from whichever CPUPool it's in, back to pool 0.
Ah - one further question. This is a Z790, with an AlderLake CPU, is it not? ADL is the first of the Hybrid CPUs which mix P and E cores, and in particular, which might be relevant here, the P and E cores have different hyperthreading-ness.
It is a 13th gen RaptorLake, but it has the same P and E cores as AlderLake. I have smt=on, dom0_max_vcpus=4, dom0_vcpus_pin, sched-gran=cpu. I tried using sched-gran=core, but it doesn't seem possible for asymmetric CPUs.
CPUPools: I have a python script running in dom0 that listens for QubesOS admin events; when the admin API says a VM has started, the script executes xl cpupool-migrate to move the VM to the pool where it belongs. This is mostly to control what is running on P and E cores.
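The event-driven migration described above can be sketched roughly as follows. This is a minimal illustration, not the actual script: the pool names and name-prefix policy are invented placeholders, and the real script would hook into the qubesadmin events API rather than being called directly.

```python
import subprocess

# Hypothetical policy: map VM name prefixes to cpupools, to control
# what runs on P cores vs E cores. Prefixes and pool names are made up.
POOL_FOR_PREFIX = {
    "work": "p-cores",
    "sys-": "e-cores",
}

def choose_pool(vm_name, default="Pool-0"):
    """Pick the target cpupool for a freshly started VM."""
    for prefix, pool in POOL_FOR_PREFIX.items():
        if vm_name.startswith(prefix):
            return pool
    return default

def on_domain_start(vm_name, dry_run=False):
    """Handler for a domain-start admin event: migrate the VM into its
    designated cpupool via xl. dry_run returns the command instead of
    running it (xl is only available in dom0)."""
    cmd = ["xl", "cpupool-migrate", vm_name, choose_pool(vm_name)]
    if dry_run:
        return cmd
    subprocess.run(cmd, check=True)
    return cmd

print(on_domain_start("work-web", dry_run=True))
```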
Wow, a lot to unpack there. That's quite a setup.
AFAICT, you've got P cores (ID 0 thru 15, so 8c/16t) and E cores (ID 16 thru 31, so only 8c total), with SMT active and "cpu" (thread) scheduling. So Xen should be fairly oblivious and just be treating them as 24 independent things. Can you provide the full output?
Are all cpupools running credit2? Can you experiment using credit across the board? You'll need ... As to the admin events, ...
There is definitely a flaw in the move-domain design, at least with credit2. I guess in your case credit2 is using multiple runqueues in at least one cpupool. I have a rough idea how to fix it, but this might take some time. In the meantime, I'd like you to confirm my suspicion by posting the output of "xl dmesg" after having set up the cpupools. To get your system back to a working state, you might want to try adding "credit2_runqueue=all" to the Xen boot parameters (assuming my analysis is correct). Please do so only after gathering the xl dmesg data I've asked for.
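On a GRUB-based dom0, appending that option would typically mean editing the Xen command line and regenerating the config; a sketch, with the file path and mkconfig output path assumed rather than taken from this system:

```
# /etc/default/grub -- append to the Xen command line, then regenerate:
GRUB_CMDLINE_XEN_DEFAULT="... credit2_runqueue=all"
# e.g.: grub2-mkconfig -o /boot/grub2/grub.cfg
```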
It's a 13900K with 24 cores (8P + 16E) and 32 threads (16P + 16E).
All the pools are credit2. Before QubesOS upgraded to Xen 4.17, it was possible to run Pool-0 as credit2 and the other pools as credit. That stopped working in 4.17, and I changed all the pools to credit2. I will try and change them to credit when I get home.
I originally used cpuset in xen.xml to pin the CPUs; using the admin API with python was just a lot easier to manage when creating new VMs. Switching back to using xen.xml with jinja wouldn't be an issue.
I'm at work right now, so I can't post the xl dmesg, but I will do it when I get home.
Dasharo/edk2#99 should solve it. I only need to test it (and install Qubes first :) )
Prevent debugging on serial port (whether physical or cbmem console) at runtime by using the DxeRuntimeDebugLibSerialPort library as DebugLib. It will stop calling SerialPortWrite if EFI switches to runtime and avoid access to cbmem CONSOLE buffer which is neither marked as runtime code nor data. Solves the issue with Xen backtrace on EFI reset system runtime service: Dasharo/dasharo-issues#488 (comment) Signed-off-by: Michał Żygowski <michal.zygowski@3mdeb.com>
@renehoj Are you happy to provide a name/email for Reported-by and/or Tested-by: tags in the Xen fix, or would you prefer that we just link to this issue?
Sure. |
When moving a domain out of a cpupool running with the credit2 scheduler and having multiple run-queues, the following ASSERT() can be observed:

```
(XEN) Xen call trace:
(XEN)    [<ffff82d04023a700>] R credit2.c#csched2_unit_remove+0xe3/0xe7
(XEN)    [<ffff82d040246adb>] S sched_move_domain+0x2f3/0x5b1
(XEN)    [<ffff82d040234cf7>] S cpupool.c#cpupool_move_domain_locked+0x1d/0x3b
(XEN)    [<ffff82d040236025>] S cpupool_move_domain+0x24/0x35
(XEN)    [<ffff82d040206513>] S domain_kill+0xa5/0x116
(XEN)    [<ffff82d040232b12>] S do_domctl+0xe5f/0x1951
(XEN)    [<ffff82d0402276ba>] S timer.c#timer_lock+0x69/0x143
(XEN)    [<ffff82d0402dc71b>] S pv_hypercall+0x44e/0x4a9
(XEN)    [<ffff82d0402012b7>] S lstar_enter+0x137/0x140
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 1:
(XEN) Assertion 'svc->rqd == c2rqd(sched_unit_master(unit))' failed at common/sched/credit2.c:1159
(XEN) ****************************************
```

This is happening as sched_move_domain() is setting a different cpu for a scheduling unit without telling the scheduler. When this unit is removed from the scheduler, the ASSERT() will trigger. In non-debug builds the result is usually a clobbered pointer, leading to another crash a short time later.

Fix that by swapping the two involved actions (setting another cpu and removing the unit from the scheduler).

Link: Dasharo/dasharo-issues#488
Fixes: 70fadc4 ("xen/cpupool: support moving domain between cpupools with different granularity")
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: George Dunlap <george.dunlap@cloud.com>
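The ordering problem that commit message describes can be illustrated with a toy model (plain Python, not Xen code): the scheduler derives a unit's run-queue from its currently assigned cpu, so reassigning the cpu before removing the unit makes the removal consult the wrong run-queue, tripping the assertion.

```python
class ToyScheduler:
    """Minimal stand-in for credit2: each cpu belongs to a run-queue,
    and units are tracked in the run-queue of their assigned cpu."""
    def __init__(self, cpu_to_rq):
        self.cpu_to_rq = cpu_to_rq  # cpu id -> run-queue id
        self.rq_members = {rq: set() for rq in set(cpu_to_rq.values())}

    def insert_unit(self, unit):
        self.rq_members[self.cpu_to_rq[unit["cpu"]]].add(unit["name"])

    def remove_unit(self, unit):
        rq = self.cpu_to_rq[unit["cpu"]]
        # Mirrors ASSERT(svc->rqd == c2rqd(sched_unit_master(unit))):
        assert unit["name"] in self.rq_members[rq], "unit not on its own run-queue"
        self.rq_members[rq].remove(unit["name"])

def move_domain(sched, unit, new_cpu, fixed=True):
    """Model of sched_move_domain(): the buggy order reassigns the cpu
    first; the fixed order removes the unit from the old queue first."""
    if fixed:
        sched.remove_unit(unit)   # remove while cpu still points at old rq
        unit["cpu"] = new_cpu
    else:
        unit["cpu"] = new_cpu     # scheduler not told -> stale run-queue
        sched.remove_unit(unit)   # looks in the wrong rq -> assertion fires

# Two run-queues: cpus 0-1 on rq 0, cpus 2-3 on rq 1.
sched = ToyScheduler({0: 0, 1: 0, 2: 1, 3: 1})
unit = {"name": "d1v0", "cpu": 0}
sched.insert_unit(unit)
move_domain(sched, unit, new_cpu=2, fixed=True)   # fixed order: fine

sched2 = ToyScheduler({0: 0, 1: 0, 2: 1, 3: 1})
unit2 = {"name": "d2v0", "cpu": 0}
sched2.insert_unit(unit2)
try:
    move_domain(sched2, unit2, new_cpu=2, fixed=False)
except AssertionError as e:
    print("buggy order trips the assertion:", e)
```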
@renehoj the v0.9.1 ROM contains the fix. Could you please confirm whether the issue is still present?
@miczyg1 Yes, I haven't had any crashes with v0.9.1; as far as I can tell, the issue is solved.
Dasharo version
v0.9.0
Dasharo variant
Workstation
Affected component(s) or functionality
Shutdown
Brief summary
Sometimes shutdown will reboot the system.
I'm using Qubes 4.2.0, and there are no warnings or errors in journalctl.
I have not experienced this issue with the Z690, running the same version of Qubes.
How reproducible
It's happened two times now, but I don't know what triggers it.
How to reproduce
Don't know how to reproduce.
Expected behavior
The system would turn off, and not restart.
Actual behavior
System restarts.