
Shutdown sometimes reboots the system on Z790 #488

Closed

renehoj opened this issue Sep 12, 2023 · 49 comments


@renehoj

renehoj commented Sep 12, 2023

Dasharo version
v0.9.0

Dasharo variant
Workstation

Affected component(s) or functionality
Shutdown

Brief summary
Sometimes shutdown will reboot the system.
I'm using Qubes 4.2.0, and there are no warnings or errors in journalctl.
I have not experienced this issue with the Z690, running the same version of Qubes.

How reproducible
It's happened two times now, but I don't know what triggers it.

How to reproduce
Don't know how to reproduce.

Expected behavior
The system would turn off and not restart.

Actual behavior
System restarts.

@miczyg1
Contributor

miczyg1 commented Sep 13, 2023

@renehoj, please get the cbmem log from a boot where the machine restarted instead of shutting down. There should be some wake causes in there for us to analyze.
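
For reference, a minimal sketch of grabbing those logs from dom0, assuming the coreboot cbmem utility is installed (package name varies by distro):

```sh
# dump the coreboot console buffer, which records reset/wake cause messages
sudo cbmem -c > cbmem-console.log
# optionally, the boot timestamps as well
sudo cbmem -t > cbmem-timestamps.log
```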

@renehoj
Author

renehoj commented Sep 13, 2023

It happened again today; here are the cbmem logs.

logs.tar.gz

@miczyg1
Contributor

miczyg1 commented Sep 14, 2023

Hmm, the wake flag is not set, so it looks like an ordinary reboot. However, the register the OS writes to put the system to sleep is set to the value that indicates power off.

We will need to reproduce this issue using QubesOS to get a more detailed view on this matter.

@renehoj
Author

renehoj commented Oct 2, 2023

I think the system doesn't just restart; it crashes during shutdown, and it's the crash that reboots the system.

I had been ignoring the problem, because it isn't a huge issue to shut the system down manually when it reboots, but this weekend it left my system unable to boot because the file system was corrupted. I have looked at the system journal, and the reboot seems to happen early in the shutdown sequence, before the drives are unmounted.
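
A quick way to see how far the shutdown got is the journal of the previous boot; a minimal sketch:

```sh
# -b -1 selects the previous boot; the tail shows the last shutdown messages
journalctl -b -1 --no-pager | tail -n 50
```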

Have you been able to reproduce the issue?

@miczyg1
Contributor

miczyg1 commented Oct 3, 2023

We haven't experienced such issues yet.

@renehoj
Author

renehoj commented Oct 7, 2023

Is there any difference between the Z690 and Z790 in regards to how the watchdog timer works?

I tried disabling the watchdog in the chipset configuration and the issue seems to have gone away, but I don't have any way to trigger the crash/reset to confirm it actually solves the issue.

Is there any way the watchdog event could fire during shutdown?

There is a period (maybe 3-5 sec.) during shutdown where the system stalls, right after the "system is powering down now" message. It's always at this point that the crash/reset happens, if it happens.

@renehoj
Author

renehoj commented Oct 10, 2023

I did some more testing, and disabling the watchdog doesn't solve the issue. It just allows me to stop more VMs during shutdown without rebooting the system.

Having ~15 VMs running during shutdown will still reboot the system, but manually shutting down all VMs before doing the system shutdown seems to work just fine.

@miczyg1
Contributor

miczyg1 commented Oct 10, 2023

Is there any difference between the Z690 and Z790 in regards to how the watchdog timer works?

There isn't, at least from the firmware side.

Is there any way the watchdog event could fire during shutdown?

Rather unlikely; the watchdog is reloaded by an SMI handler, which will still run right before the shutdown.

There is a period (maybe 3-5 sec.) during shutdown where the system stalls, right after the "system is powering down now" message. It's always at this point that the crash/reset happens, if it happens.

system is powering down now is not the last message displayed before the poweroff. IIRC it should be reboot: power down. Maybe a serial console would say something more?

Having ~15 VMs running during shutdown will still reboot the system, but manually shutting down all VMs before doing the system shutdown seems to work just fine.

Interesting... Maybe @marmarek would have some ideas?

@renehoj
Author

renehoj commented Oct 12, 2023

"System is powering down" is the last message if the system crashes/resets; after that message dom0/Xen will start shutting down VMs.

I guess if no one else is experiencing the same issue, it's unlikely to be caused by Dasharo.

I did upgrade from a 12th-gen Z690 DDR4 to a 13th-gen Z790 DDR5. If I'm the only one with the problem, it seems more likely to be an issue with Xen and the new hardware.

How do I connect a serial console? Would that require connecting a TTL-to-USB cable to the motherboard?

@marmarek

after that message dom0/Xen will start shutting down VMs.

VMs are normally shut down when stopping the "qubes-core" (aka "Qubes Dom0 startup setup") service.
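
To correlate that with the crash, something like the following should work in dom0 (illustrative; the unit name is taken from the comment above):

```sh
systemctl cat qubes-core          # inspect the unit definition and its stop action
journalctl -b -1 -u qubes-core    # this unit's messages from the previous boot
```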

How do I connect a serial console?, would that require connecting a TTL to USB cable to the motherboard?

https://docs.dasharo.com/unified/msi/development/#serial-debug

@renehoj
Author

renehoj commented Oct 13, 2023

Here is the output from the crash

[  121.025419] dm-110: detected capacity change from 41943040 to 0

(XEN) ----[ Xen-4.17.2  x86_64  debug=n  Not tainted ]----
(XEN) CPU:    3
(XEN) RIP:    e008:[<ffff82d04022e739>] xmem_pool_free+0x229/0x2fe
(XEN) RFLAGS: 0000000000010202   CONTEXT: hypervisor (d0v3)
(XEN) rax: ffff830dbf077db0   rbx: ffff830dbf077d50   rcx: 00000000481a1fd0
(XEN) rdx: 0000000000000004   rsi: 0000000000000000   rdi: ffff830da4e2dfd0
(XEN) rbp: ffff831081dc0000   rsp: ffff8310797b7e08   r8:  0000000000000005
(XEN) r9:  ffff830e47424000   r10: 0000000000000000   r11: ffff8310354e30ac
(XEN) r12: ffff831081dc1868   r13: fffffffffffffff8   r14: ffff830e481acaa8
(XEN) r15: 0000000000000000   cr0: 0000000080050033   cr4: 0000000000b526e0
(XEN) cr3: 0000001072c22000   cr2: 00000000481a1fe8
(XEN) fsb: 0000000000000000   gsb: ffff8881b9d80000   gss: 0000000000000000
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen code around <ffff82d04022e739> (xmem_pool_free+0x229/0x2fe):
(XEN)  85 c9 74 08 48 8b 78 18 <48> 89 79 18 48 63 fe 48 63 ca 49 89 f8 49 c1 e0
(XEN) Xen stack trace from rsp=ffff8310797b7e08:
(XEN)    ffff830e2f04b540 00000000ffffffff ffff830e481ac000 ffff82d0402c7392
(XEN)    ffff830e2f04b540 ffff82d0402cd369 ffff830e481ac000 ffff82d0402cd90e
(XEN)    ffff830e481ac000 ffff82d0402e6edb ffff830e47446000 ffff82d040205fa0
(XEN)    ffff8310797bd040 0000000000000000 0000000000000002 ffff82d0405c5d80
(XEN)    0000000000000003 ffff82d040224838 ffff82d0405bdd80 0000000000000000
(XEN)    ffffffffffffffff ffff82d0405bdd80 0000000000000000 ffff82d040224f2a
(XEN)    ffff831079530000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 ffff82d0402e2ef6 0000000000000000 0000000000000000
(XEN)    ffff8881064ac2b8 0000000080000007 ffff8881b9d9f300 ffff8881b9d9f300
(XEN)    0000000000000206 0000000001ffff00 0000000000000015 0000000000000001
(XEN)    000000000000000d ffffffff81f871aa 0000000000000000 0000000000000005
(XEN)    ffff8881b9d9f710 0000010000000000 ffffffff81f871a8 000000000000e033
(XEN)    0000000000000206 ffffc90042e97cd0 000000000000e02b bcb4157c93368e52
(XEN)    70b70abaa2124c0c f804e0e31be5a13b 197a535927b0d472 0000e01000000003
(XEN)    ffff831079530000 0000004039205000 0000000000b526e0 0000000000000000
(XEN)    0000000000000000 2106030300000001 9549274c040d2000
(XEN) Xen call trace:
(XEN)    [<ffff82d04022e739>] R xmem_pool_free+0x229/0x2fe
(XEN)    [<ffff82d0402c7392>] S p2m_free_logdirty+0x12/0x1c
(XEN)    [<ffff82d0402cd369>] S p2m_free_one+0x9/0x4a
(XEN)    [<ffff82d0402cd90e>] S p2m_final_teardown+0x2e/0x45
(XEN)    [<ffff82d0402e6edb>] S arch_domain_destroy+0x5b/0xa5
(XEN)    [<ffff82d040205fa0>] S domain.c#complete_domain_destroy+0x80/0x13e
(XEN)    [<ffff82d040224838>] S rcupdate.c#rcu_process_callbacks+0x118/0x29b
(XEN)    [<ffff82d040224f2a>] S softirq.c#.annobin_softirq.c+0x5a/0x91
(XEN)    [<ffff82d0402e2ef6>] S x86_64/entry.S#process_softirqs+0x6/0x20
(XEN) 
(XEN) Pagetable walk from 00000000481a1fe8:
(XEN)  L4[0x000] = 0000000000000000 ffffffffffffffff
(XEN) 
(XEN) ****************************************
(XEN) Panic on CPU 3:
(XEN) FATAL PAGE FAULT
(XEN) [error_code=0002]
(XEN) Faulting linear address: 00000000481a1fe8
(XEN) ****************************************
(XEN) 
(XEN) Reboot in five seconds...

@andyhhp

andyhhp commented Oct 14, 2023

Ok, so there are multiple issues here.

@renehoj when posting logs, please use code tags (triple `), which will prevent GitHub from trying to interpret things like < and > as link syntax and omitting them when rendered. That said, there is enough information here to work with.

First, there's clearly some corruption in Xen causing it to take a fatal pagefault. It's a userspace pointer, which is wrong in this context. My gut feeling is that there's a pointer in memory which has been corrupted somehow, and 00000000481a1fe8 is incidental, rather than the cause.

For this, please can you rebuild Xen with CONFIG_DEBUG enabled (generally, in case there are assertions elsewhere which help narrow things down), and CONFIG_XMEM_POOL_POISON specifically, which performs extra integrity checking of xmem pools in order to spot things like double frees. Both options are available in Xen's menuconfig.
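
A minimal sketch of enabling those options, assuming a checked-out Xen tree (menu locations can differ between versions):

```sh
make -C xen menuconfig
# under "Debugging Options": enable CONFIG_DEBUG,
# then CONFIG_XMEM_POOL_POISON ("Poison free pool memory")
```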

Second, Xen is trying to reboot and clearly not succeeding. To diagnose that further, we'd need the full xl dmesg log from boot, and anything else that may have been on the serial at the point of crash (Again, a debug Xen might help further here).

@zirblazer

Do you believe there is any possibility of a defective RAM module causing memory corruption? Post your full system configuration, just in case you end up going the MemTest route if it turns out to be hardware.

@renehoj
Author

renehoj commented Oct 15, 2023

@zirblazer I have tried running MemTest; it didn't find any errors. The system is stable and I don't get random crashes. As far as I can tell, it's only shutting down QubesOS with the VMs running that triggers the reset.

@andyhhp I have updated the output in the post. I can try to build a new QubesOS ISO, but I don't have any experience with compiling Xen or using the Qubes builder, so I don't know how easy it is to build QubesOS with a custom version of Xen.

I don't think there is a problem with the reboot; I just didn't include the output from the next boot. The system reboots, and everything seems to be working.

As far as I can tell, the only way I can trigger the crash is by shutting down QubesOS with the VMs running; I have not seen any reboots from manually shutting down VMs. This is why I initially thought it was an issue with Dasharo: it looked like shutdown would randomly reboot the system.

@andyhhp

andyhhp commented Oct 15, 2023

I have updated the output in the post.

Thanks. That's a much more normal looking backtrace now.

@marmarek Can you advise on the best way to run a debug Xen under QubesOS? If it were just me, I'd drop a new xen.gz into /boot, but I have no idea if that's the right course of action here. A casual glance at the developer docs doesn't seem to reveal anything obvious.

@renehoj This is definitely (at least partially) some error in Xen. Even if it is memory corruption from something else, it will still require a fix in Xen to mitigate.

Can you confirm exactly what steps you are taking in order to trigger the crash? You distinguish between manually shutting down the VMs (so initiating a shutdown inside the VM?), and shutting down ones which are running (an admin operation to kill the Qube?), but it's not entirely clear.

If reboot following the crash is actually working fine, then this is unlikely to be a Dasharo issue.

@marmarek

If it were just me, I'd drop a new xen.gz into /boot but I have no idea if that's the right course of actions here.

That's what I usually do... We have several patches on top, but most of them are just for the toolstack, and the few for the hypervisor shouldn't matter for debugging. Just make sure you use the appropriate branch (stable-4.17) and config (https://github.com/QubesOS/qubes-vmm-xen/blob/main/config).
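
Putting those steps together, roughly (the raw-config URL and paths are assumptions based on the links above):

```sh
git clone -b stable-4.17 https://xenbits.xen.org/git-http/xen.git
cd xen
# drop the Qubes hypervisor config where kconfig expects it
curl -L -o xen/.config https://raw.githubusercontent.com/QubesOS/qubes-vmm-xen/main/config
make -C xen menuconfig   # enable CONFIG_DEBUG and CONFIG_XMEM_POOL_POISON
make build-xen           # produces xen/xen.gz
# then copy xen/xen.gz into dom0's /boot and point the boot entry at it
```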

@renehoj
Author

renehoj commented Oct 16, 2023

@andyhhp To trigger the crash, I leave the VMs running and shut down QubesOS from the XFCE menu. I don't know of any other way to trigger the crash.

When I say manually shutting down the VMs, I mean running qvm-shutdown --all --wait, so that no VMs are running when I shut down the system.
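
So the workaround boils down to (a sketch; the final command is whatever you normally power off with):

```sh
qvm-shutdown --all --wait   # stop every qube first
sudo poweroff               # then power off the host
```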

I'm just one person taking mental notes of when the system crashes, so I can't say for sure that qvm-shutdown is safe to use, but I have only seen the crash happen during system shutdown with the VMs running.

Tonight I will try and compile stable-4.17 with the QubesOS config and debugging enabled.

@andyhhp

andyhhp commented Oct 16, 2023

To trigger the crash, I leave the VMs running and shut down QubesOS from the XFCE menu.

Ah ok. So when you explicitly shut the VMs down first, then shut QubesOS down, it works fine. When you try to shut QubesOS down with VMs still running, we hit the crash.

It looks like there's something else happening in the shutdown case which doesn't work nicely in parallel with VM shutdown.

@gwd

gwd commented Oct 16, 2023

The p2m_free_logdirty may indicate that the domain in question was "saved" rather than shut down. @marmarek, will Qubes try to save currently-running domains when shutting down? @renehoj, can you try saving the domains rather than shutting them down, to see if that triggers anything? Not sure of the best way to do that within Qubes.
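
For reference, a plain xl-level save/restore from dom0 looks roughly like this (domain name and path are placeholders; driving xl directly may not play nicely with Qubes' own toolstack, so this is illustrative only):

```sh
xl save work /var/tmp/work.save   # suspend the domain to a state file
xl restore /var/tmp/work.save     # bring it back later
```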

@marmarek

@marmarek , will Qubes try to save currently-running domains when shutting down?

No.

@andyhhp

andyhhp commented Oct 16, 2023

The p2m_free_logdirty may indicate

It's "S" which means it may be stack rubble in a release build, including a token entry finding nothing to do. I'd suggest waiting for the log from a debug build which should give us this accurately.

@renehoj
Author

renehoj commented Oct 16, 2023

@andyhhp I tried to compile Xen with debug, but the output looks very similar to the previous crash.

- cloned 4.17
- copied the config marmarek linked into xen/xen and renamed it to .config
- enabled CONFIG_DEBUG and CONFIG_XMEM_POOL_POISON
- ran make build-xen
- copied the xen.gz to /boot in dom0 and renamed it to xen-4.17.2.gz

Is there anything else I need to do?

(XEN) ----[ Xen-4.17.2  x86_64  debug=n  Not tainted ]----
(XEN) CPU:    0
(XEN) RIP:    e008:[<ffff82d04023a355>] credit2.c#csched2_free_domdata+0x65/0xd0
(XEN) RFLAGS: 0000000000010046   CONTEXT: hypervisor (d0v0)
(XEN) rax: ffff830e12f69fa8   rbx: ffff830e12f69f40   rcx: ffff83107a06ac70
(XEN) rdx: 00000000bf22efc8   rsi: 000000000000001f   rdi: 0000000000000001
(XEN) rbp: ffff82d040588038   rsp: ffff8310828b7cc0   r8:  0000000000000000
(XEN) r9:  ffff83107a06a200   r10: ffff83108288d340   r11: ffff830fbff45000
(XEN) r12: ffff83107a06ac20   r13: 0000000000000286   r14: 0000000000000000
(XEN) r15: ffff83107a3fec00   cr0: 0000000080050033   cr4: 0000000000b526e0
(XEN) cr3: 000000091e5de000   cr2: 00000000bf22efc8
(XEN) fsb: 00007977b0ff96c0   gsb: ffff8881b9c00000   gss: 0000000000000000
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen code around <ffff82d04023a355> (credit2.c#csched2_free_domdata+0x65/0xd0):
(XEN)  48 8d 43 68 48 89 51 08 <48> 89 0a 48 89 43 68 48 89 43 70 f0 41 81 24 24
(XEN) Xen stack trace from rsp=ffff8310828b7cc0:
(XEN)    ffff830e12e60920 0000000000000000 0000000000000000 ffff83107a06e8f0
(XEN)    ffff82d040245f7e ffff830e12f69f40 ffff83107a06ab10 ffff830e12e3af60
(XEN)    ffff83107a06acd0 ffff830fbff45000 ffff830e12f69f40 ffff830e12e3af60
(XEN)    ffff830fbff45000 ffff83107a06e8f0 ffff8310a73e4010 ffff83107a06e8f0
(XEN)    ffff830fbff45000 00007977eb7f3010 ffff82d04043d5f0 ffff8310828b7e10
(XEN)    0000000000000000 ffff82d040235c86 ffff830fbff45000 ffff8310828b7ec8
(XEN)    00007977eb7f3010 0000000000000000 ffff82d0402069e6 ffff830fbff45000
(XEN)    ffff8310828b7ec8 ffff82d040232c9b ffff83107a01cb80 ffff83107a01cb80
(XEN)    ffff82d04020c44e ffff831000000001 ffff83107a04b001 0000000000000001
(XEN)    ffff83107a04b000 ffff83107a04b068 ffff82d040588280 ffff82d040588280
(XEN)    ffff82d0405880e0 0000000000000206 0000001500000002 7aaa37872bc7000e
(XEN)    00007977a002a8e0 ffffffffffffff78 0000000000000000 00007977b0ff9638
(XEN)    00007977d85566a0 7aaa37872bc71200 00007977a002a8e0 7aaa37872bc71200
(XEN)    0000000000000000 ffffffffffffff78 0000000000000000 0000797774008000
(XEN)    00007977b0ff8280 000000000000000e 00007977a002a8e0 00007977eaf8d3d3
(XEN)    ffff8310828b7ef8 ffff83107a04b000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000024 ffff82d0402dfc92 ffff83107a04b000
(XEN)    0000000000000000 0000000000000000 ffff82d0402012a7 0000000000000000
(XEN)    ffff88811c6bb700 00007977b0ff8110 0000000000305000 ffff88811c6bb700
(XEN)    0000000000000022 0000000000000282 0000000000000000 0000000000000000
(XEN) Xen call trace:
(XEN)    [<ffff82d04023a355>] R credit2.c#csched2_free_domdata+0x65/0xd0
(XEN)    [<ffff82d040245f7e>] S sched_move_domain+0x1ee/0x760
(XEN)    [<ffff82d040235c86>] S cpupool_move_domain+0x36/0x70
(XEN)    [<ffff82d0402069e6>] S domain_kill+0x96/0x110
(XEN)    [<ffff82d040232c9b>] S do_domctl+0x123b/0x18e0
(XEN)    [<ffff82d04020c44e>] S event_fifo.c#evtchn_fifo_set_pending+0x34e/0x560
(XEN)    [<ffff82d0402dfc92>] S pv_hypercall+0x3e2/0x410
(XEN)    [<ffff82d0402012a7>] S lstar_enter+0x137/0x140
(XEN) 
(XEN) Pagetable walk from 00000000bf22efc8:
(XEN)  L4[0x000] = 0000000000000000 ffffffffffffffff
(XEN) 
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) FATAL PAGE FAULT
(XEN) [error_code=0002]
(XEN) Faulting linear address: 00000000bf22efc8
(XEN) ****************************************
(XEN) 
(XEN) Reboot in five seconds...

@andyhhp

andyhhp commented Oct 17, 2023

(XEN) ----[ Xen-4.17.2  x86_64  debug=n  Not tainted ]----

That's still a release build of Xen, so I suspect you didn't actually boot the hypervisor you built. Either edit grub.cfg or use the interactive prompt at boot to change the path to the hypervisor.
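
Once rebooted, it's easy to check from dom0 which hypervisor actually came up; a small sketch:

```sh
xl info | grep xen_version     # should match the version you built
xl dmesg | grep -m1 'debug='   # the boot banner shows debug=y for a debug build
```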

That said,

(XEN) RIP:    e008:[<ffff82d04023a355>] credit2.c#csched2_free_domdata+0x65/0xd0
...
(XEN) Pagetable walk from 00000000bf22efc8:

is a very different area of code, but with similar symptoms that still look like xmem pool corruption. I suspect that if you repeat this a few times, you'll find different RIPs each time.

@renehoj
Author

renehoj commented Oct 17, 2023

@andyhhp The version I have compiled can't shut down any VMs without crashing; shutting down QubesOS or a single VM will result in the following crash.

I don't know if it's relevant, but I'm running 4 cpupools with credit2 as the scheduler.

(XEN) arch/x86/hvm/hvm.c:1658:d8v0 All CPUs offline -- powering off.
(XEN) Assertion 'svc->rqd == c2rqd(sched_unit_master(unit))' failed at common/sched/credit2.c:1159
(XEN) ----[ Xen-4.17.3-pre  x86_64  debug=y  Not tainted ]----
(XEN) CPU:    1
(XEN) RIP:    e008:[<ffff82d04023a700>] credit2.c#csched2_unit_remove+0xe3/0xe7
(XEN) RFLAGS: 0000000000010083   CONTEXT: hypervisor (d0v1)
(XEN) rax: ffff83107a06ac90   rbx: ffff830bd4124ef0   rcx: 00000000000072f5
(XEN) rdx: 0000004039bc5000   rsi: ffff830bd4124ef0   rdi: ffff830bd41240f0
(XEN) rbp: ffff82d0405a9288   rsp: ffff831082887cc8   r8:  0000000000000000
(XEN) r9:  ffff83107a06aaa0   r10: ffff82e000000000   r11: 4000000000000000
(XEN) r12: ffff83107a06ac90   r13: ffff830bd41240f0   r14: ffff82d0405c15e0
(XEN) r15: ffff82d0405a9288   cr0: 0000000080050033   cr4: 0000000000b526e0
(XEN) cr3: 00000008d5938000   cr2: ffff8881399d4a78
(XEN) fsb: 00007f36e57fa6c0   gsb: ffff8881b9c80000   gss: 0000000000000000
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen code around <ffff82d04023a700> (credit2.c#csched2_unit_remove+0xe3/0xe7):
(XEN)  bb fe ff e9 5f ff ff ff <0f> 0b 0f 0b 41 54 55 53 48 8b 47 08 48 8b 50 20
(XEN) Xen stack trace from rsp=ffff831082887cc8:
(XEN)    ffff830bd4124ef0 ffff830bf8b95000 ffff83107a0694b0 ffff83107a06e8c0
(XEN)    ffff82d0405c15e0 ffff82d040246adb 0000000100000000 ffff830bf8a7cf60
(XEN)    ffff82d000000010 ffff83107a06aad0 ffff82d0405a9288 ffff830bd40fef20
(XEN)    ffff830bf8a7cdb0 ffff83107a06e8c0 ffff830bf8b95000 ffff82d04045d5e8
(XEN)    0000000000000000 0000000000000000 0000000000000000 ffff82d040234cf7
(XEN)    ffff830bf8b95000 ffff83107a06e8c0 ffff82d040236025 ffff830bf8b95000
(XEN)    0000000000000000 00007f37211f3010 ffff82d040206513 ffff830bf8b95000
(XEN)    ffff831082887ec8 00007f37211f3010 ffff82d040232b12 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000700000001 ffff82c000000000
(XEN)    ffff83107a0531c0 ffff82d0402276ba ffff83107a01c068 0000000000000286
(XEN)    0000001a7571dfb0 0000001500000002 9ae1a81629050008 00007f36c40274e0
(XEN)    ffffffffffffff78 0000000000000002 00007f36e57fa638 00007f37107746a0
(XEN)    9ae1a8162905f100 00007f36c40274e0 9ae1a8162905f100 0000000000000002
(XEN)    ffffffffffffff78 0000000000000002 00007f36b80f8470 00007f36e57f9280
(XEN)    0000000000000008 00007f36c40274e0 00007f37209ad3d3 ffff831082887ef8
(XEN)    ffff83107a01c000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000024 ffff82d0402dc71b ffff83107a01c000 0000000000000000
(XEN)    0000000000000000 ffff82d0402012b7 0000000000000000 ffff888165f37200
(XEN)    00007f36e57f9110 0000000000305000 ffff888165f37200 0000000000000022
(XEN)    0000000000000282 0000000000000000 0000000000000000 0000000000000000
(XEN) Xen call trace:
(XEN)    [<ffff82d04023a700>] R credit2.c#csched2_unit_remove+0xe3/0xe7
(XEN)    [<ffff82d040246adb>] S sched_move_domain+0x2f3/0x5b1
(XEN)    [<ffff82d040234cf7>] S cpupool.c#cpupool_move_domain_locked+0x1d/0x3b
(XEN)    [<ffff82d040236025>] S cpupool_move_domain+0x24/0x35
(XEN)    [<ffff82d040206513>] S domain_kill+0xa5/0x116
(XEN)    [<ffff82d040232b12>] S do_domctl+0xe5f/0x1951
(XEN)    [<ffff82d0402276ba>] S timer.c#timer_lock+0x69/0x143
(XEN)    [<ffff82d0402dc71b>] S pv_hypercall+0x44e/0x4a9
(XEN)    [<ffff82d0402012b7>] S lstar_enter+0x137/0x140
(XEN) 
(XEN) 
(XEN) ****************************************
(XEN) Panic on CPU 1:
(XEN) Assertion 'svc->rqd == c2rqd(sched_unit_master(unit))' failed at common/sched/credit2.c:1159
(XEN) ****************************************
(XEN) 
(XEN) Reboot in five seconds...

@andyhhp

andyhhp commented Oct 17, 2023

Well - that is an improvement... at least it's consistent now.

And yes, CPUPools almost certainly are a relevant factor here. Can you describe your CPUPool setup and other scheduling settings (e.g. smt, sched_gran), and which VMs are in which CPUPools?

One observation - it does seem fairly absurd that we're moving vcpus between CPUPools as part of destroying the VM, but we really do move the domain from whichever CPUPool it's in back to pool 0.

@andyhhp

andyhhp commented Oct 17, 2023

Ah - one further question. This is a Z790 with an AlderLake CPU, is it not? ADL is the first of the hybrid CPUs which mix P and E cores, and (which might be relevant here) the two core types have different hyperthreading capabilities.

@renehoj
Author

renehoj commented Oct 17, 2023

@andyhhp

It is a 13th-gen RaptorLake, but it has the same P and E cores as AlderLake.

I have smt=on, dom0_max_vcpus=4, dom0_vcpus_pin, sched-gran=cpu. I tried using sched-gran=core, but it doesn't seem possible for asymmetric CPUs.

CPUPools:
Pool-0: cores 0-3
Browsers: cores 4-9
PCores: cores 10-15
ECores: cores 16-31

I have a Python script running in dom0 that listens for QubesOS admin events; when the Admin API says a VM has started, the script executes xl cpupool-migrate to move the VM to the pool where it belongs. This is mostly to control what runs on the P and E cores.
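
For context, the equivalent manual xl commands look roughly like this (pool names from the list above; the config path, domain name, and exact cpus syntax are assumptions):

```sh
cat > /etc/xen/PCores.cfg <<'EOF'
name  = "PCores"
sched = "credit2"
cpus  = "10-15"
EOF
xl cpupool-create /etc/xen/PCores.cfg   # define the pool
xl cpupool-migrate my-vm PCores         # move a running domain into it
xl cpupool-list -c                      # show pools and their CPUs
```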

@andyhhp

andyhhp commented Oct 17, 2023

Wow, a lot to unpack there. That's quite a setup.

First, there is a list_del() hidden behind the ASSERT() which is now reliably failing, and that could plausibly be our source of memory corruption. Either way, let's investigate the reliable issue first.

AFAICT, you've got P cores (ID 0 thru 15, so 8c/16t), and E cores (ID 16 thru 31, so only 8c total), with SMT active and "cpu" (thread) scheduling. So Xen should be fairly oblivious and just be treating them as 24 independent things.

Can you provide the full xl dmesg, including activating CONFIG_DEBUG_TRACE and initialising debugtrace_send_to_console = true; near the top of xen/common/debugtrace.c. That should cause relevant messages to come out in sync with other printk()s.
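
A one-liner for that edit, assuming the declaration currently ends in a bare semicolon (verify against your tree first):

```sh
sed -i 's/debugtrace_send_to_console;/debugtrace_send_to_console = true;/' \
    xen/common/debugtrace.c
```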

Are all cpupools running credit2? Can you experiment using credit across the board? You'll need sched=credit on Xen's command line to change cpupool0. Credit(1) is less concerned about CPU topology compared to credit2, and this may help narrow down whether it's something specific to Credit2 or to CPUPools.

As to the admin events, xl can start the VM in a non-default CPUPool. When we've sorted this crash out, it is probably worth seeing if libvirt has libxl's cpupool controls exposed in a nice way, because that would avoid needing to play with the VM after the fact.

@jgross1

jgross1 commented Oct 18, 2023

There is definitely a flaw in the move domain design, at least with credit2. I guess in your case credit2 is using multiple runqueues in at least one cpupool.

I have a rough idea how to fix it, but this might take some time. In the meantime, I'd like you to verify my suspicion by posting the output of "xl dmesg" after having set up the cpupools.

To get your system back to a working state, you might want to try adding "credit2_runqueue=all" to the Xen boot parameters (assuming my analysis is correct). Please do so only after gathering the xl dmesg data I've asked for.
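
On a grub2-based dom0 that would look something like this (the variable name assumes grub's Xen support; the grub.cfg path differs on EFI installs):

```sh
# in /etc/default/grub, extend the hypervisor command line, e.g.:
#   GRUB_CMDLINE_XEN_DEFAULT="... credit2_runqueue=all"
sudo grub2-mkconfig -o /boot/grub2/grub.cfg   # regenerate, then reboot
```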

@renehoj
Author

renehoj commented Oct 18, 2023

@andyhhp

AFAICT, you've got P cores (ID 0 thru 15, so 8c/16t), and E cores (ID 16 thru 31, so only 8c total), with SMT active and "cpu" (thread) scheduling. So Xen should be fairly oblivious and just be treating them as 24 independent things.

It's a 13900K with 24 cores (8P + 16E) and 32 threads (16P + 16E).

Are all cpupools running credit2? Can you experiment using credit across the board? You'll need sched=credit on Xen's command line to change cpupool0. Credit(1) is less concerned about CPU topology compared to credit2, and this may help narrow down whether it's something specific to Credit2 or to CPUPools.

All the pools are credit2. Before QubesOS upgraded to Xen 4.17 it was possible to run Pool-0 as credit2, and the other pools as credit. That stopped working in 4.17, and I changed all the pools to credit2.

I will try and change them to credit when I get home.

As to the admin events, xl can start the VM in a non-default CPUPool. When we've sorted this crash out, it is probably worth seeing if libvirt has libxl's cpupool controls exposed in a nice way, because that would avoid needing to play with the VM after the fact.

I originally used cpuset in xen.xml to pin the CPUs; using the adminAPI with Python was just a lot easier to manage when creating new VMs. Switching back to using xen.xml with jinja wouldn't be an issue.

@jgross1

I have a rough idea how to fix it, but this might take some time. In the meantime, I'd like you to verify my suspicion by posting the output of "xl dmesg" after having set up the cpupools.

Can you provide the full xl dmesg, including activating CONFIG_DEBUG_TRACE and initialising debugtrace_send_to_console = true; near the top of xen/common/debugtrace.c. That should cause relevant messages to come out in sync with other printk()s.

I'm at work right now, so I can't post the xl dmesg, but I will do it when I get home.

@miczyg1
Contributor

miczyg1 commented Oct 20, 2023

Dasharo/edk2#99 should solve it. I only need to test it (and install Qubes first :) )

miczyg1 added a commit to Dasharo/edk2 that referenced this issue Oct 20, 2023
Prevent debugging on serial port (whether physical or cbmem console) at
runtime by using the DxeRuntimeDebugLibSerialPort library as DebugLib. It
will stop calling SerialPortWrite if EFI switches to runtime and avoid access
to cbmem CONSOLE buffer which is neither marked as runtime code nor data.
Solves the issue with Xen backtrace on EFI reset system runtime service:
Dasharo/dasharo-issues#488 (comment)

Signed-off-by: Michał Żygowski <michal.zygowski@3mdeb.com>
@andyhhp

andyhhp commented Oct 20, 2023

@renehoj Are you happy to provide a name/email for Reported-by and/or Tested-by: tags in the Xen fix, or would you prefer that we just link to this issue?

@renehoj
Author

renehoj commented Oct 21, 2023

@andyhhp I would prefer you just link to this issue.

@miczyg1 I can test it for you, if there is a ROM I can download.

@andyhhp

andyhhp commented Oct 23, 2023

@andyhhp I would prefer you just link to this issue.

Sure.

miczyg1 added a commit to Dasharo/edk2 that referenced this issue Oct 25, 2023
olafhering pushed a commit to olafhering/xen that referenced this issue Nov 20, 2023
When moving a domain out of a cpupool running with the credit2
scheduler and having multiple run-queues, the following ASSERT() can
be observed:

(XEN) Xen call trace:
(XEN)    [<ffff82d04023a700>] R credit2.c#csched2_unit_remove+0xe3/0xe7
(XEN)    [<ffff82d040246adb>] S sched_move_domain+0x2f3/0x5b1
(XEN)    [<ffff82d040234cf7>] S cpupool.c#cpupool_move_domain_locked+0x1d/0x3b
(XEN)    [<ffff82d040236025>] S cpupool_move_domain+0x24/0x35
(XEN)    [<ffff82d040206513>] S domain_kill+0xa5/0x116
(XEN)    [<ffff82d040232b12>] S do_domctl+0xe5f/0x1951
(XEN)    [<ffff82d0402276ba>] S timer.c#timer_lock+0x69/0x143
(XEN)    [<ffff82d0402dc71b>] S pv_hypercall+0x44e/0x4a9
(XEN)    [<ffff82d0402012b7>] S lstar_enter+0x137/0x140
(XEN)
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 1:
(XEN) Assertion 'svc->rqd == c2rqd(sched_unit_master(unit))' failed at common/sched/credit2.c:1159
(XEN) ****************************************

This is happening as sched_move_domain() is setting a different cpu
for a scheduling unit without telling the scheduler. When this unit is
removed from the scheduler, the ASSERT() will trigger.

In non-debug builds the result is usually a clobbered pointer, leading
to another crash a short time later.

Fix that by swapping the two involved actions (setting another cpu and
removing the unit from the scheduler).

Link: Dasharo/dasharo-issues#488
Fixes: 70fadc4 ("xen/cpupool: support moving domain between cpupools with different granularity")
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: George Dunlap <george.dunlap@cloud.com>
olafhering pushed a commit to olafhering/xen that referenced this issue Nov 23, 2023
krystian-hebel pushed a commit to TrenchBoot/xen that referenced this issue Dec 8, 2023
@miczyg1
Contributor

miczyg1 commented Feb 12, 2024

@renehoj the v0.9.1 ROM contains the fix. Could you please confirm whether the issue is still present?

@renehoj
Author

renehoj commented Feb 12, 2024

@miczyg1 Yes, I haven't had any crashes with v0.9.1; as far as I can tell, the issue is solved.

@miczyg1 miczyg1 closed this as completed Feb 13, 2024
SergiiDmytruk pushed commits to Dasharo/edk2 that referenced this issue between Apr 15 and Jun 2, 2024 (same commit message as above)