VMs don't boot on 4Kn drive #7828
Do you see any more details in …?
Yes. From /var/log/xen/xen-hotplug.log:
…
The fix will be in the repo soon. In the meantime, you can recover by editing …
Got it! Thanks very much!
Automated announcement from builder-github: The package …
Whoops! I had not considered that at all. Glad this got caught in testing!
Automated announcement from builder-github: The component …
Automated announcement from builder-github: The package …
This is fixed in util-linux v2.38+, although we'd also have to pass an explicit …
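(The exact util-linux option meant above isn't preserved in this thread. As a hedged sketch of the underlying mechanism only, the kernel lets you pin an already-attached loop device's logical block size explicitly via the LOOP_SET_BLOCK_SIZE ioctl, available since Linux 4.14; the device path below is just an example.)

```c
/* Sketch only: pins an attached loop device's logical block size to 512
 * bytes via LOOP_SET_BLOCK_SIZE (Linux >= 4.14). The device path is an
 * example, and this is not necessarily the option referred to above. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/loop.h>

int main(void)
{
    int fd = open("/dev/loop0", O_RDWR);              /* example loop device */
    if (fd < 0) {
        perror("open");
        return 1;
    }
    /* Request 512-byte logical blocks explicitly instead of whatever the
     * kernel chose when the device was attached. */
    if (ioctl(fd, LOOP_SET_BLOCK_SIZE, 512UL) < 0) {
        perror("LOOP_SET_BLOCK_SIZE");
        close(fd);
        return 1;
    }
    close(fd);
    return 0;
}
```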
Automated announcement from builder-github: The package …
Automated announcement from builder-github: The package …
In fact, I'm using an LVM thick pool, because if I choose the LVM thin layout in the installer, there is an error message and no new partition is created.
Whoops, I forgot to explicitly emphasize that "LVM" is different from LVM-thin.
However, if my memory is correct, the default pool in the LVM thin layout is “vm”.
Oh man, I always assumed that dm-crypt with sector_size 512 (the default on R4.1.x) would "shield" the upper storage layers from the 4Kn drive's logical block size. But actually it only works the other way around, which is sector_size 4096 bumping up a physical or logical block size of 512 on the underlying device to 4096 on the dm device. That explains why this bug was able to affect your system.

General summary (not specific to this bug): People with 4Kn drives currently can't use lvm_thin pools (#4974), unless they're doing esoteric things like building custom template image files etc. And people who set up 4K dm-crypt on 512e drives have the same problem. But installation layouts compatible with the file-reflink driver are okay: "Btrfs", "Standard Partition", or "LVM" that's not "Thin". The latter two use XFS for the varlibqubes pool and are somewhat less tested, as you noticed 😉
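(As a quick illustration of the logical vs. physical block size distinction described above, here is a minimal sketch. The device paths are placeholders, not taken from this issue; BLKSSZGET and BLKPBSZGET are the standard Linux block-device ioctls for querying these sizes.)

```c
/* Sketch only: prints the logical and physical block sizes of a block
 * device. Device paths below are examples. Typically needs root to open
 * the devices. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fs.h>

static void show(const char *dev)
{
    int fd = open(dev, O_RDONLY);
    if (fd < 0) {
        perror(dev);
        return;
    }
    int logical = 0;
    unsigned int physical = 0;
    ioctl(fd, BLKSSZGET, &logical);    /* logical block (sector) size */
    ioctl(fd, BLKPBSZGET, &physical);  /* physical block size */
    printf("%-24s logical=%d physical=%u\n", dev, logical, physical);
    close(fd);
}

int main(void)
{
    show("/dev/sda");              /* example: the raw 4Kn drive */
    show("/dev/mapper/luks-xyz");  /* example: the dm-crypt device on top of it */
    return 0;
}
```

The same numbers can also be read without any code via `lsblk -o NAME,LOG-SEC,PHY-SEC`.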
Automated announcement from builder-github: The component …
Or update dom0 via Qubes Manager.
Automated announcement from builder-github: The package …
Automated announcement from builder-github: The package …
block_size=0 in loop_config usually results in a loop device with 512 byte logical blocks (which is required for compatibility with the normal VM volume content), but not always: XFS backed by a block device with 4096 byte logical blocks (due to a 4Kn drive and/or dm-crypt with sector_size=4096) doesn't support the combination of direct I/O *and* 512 byte logical blocks for loop devices. With block_size=0 the kernel resolves the conflict by changing the logical block size to 4096. Explicitly pass block_size=512 to turn off direct I/O in this case instead. Fixes QubesOS/qubes-issues#7828
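(As a rough illustration of the block_size handling that commit message describes, here is a minimal sketch using the LOOP_CONFIGURE ioctl from Linux 5.8+. It is not the actual fix or the Qubes/Xen hotplug code; paths are placeholders and error handling is minimal.)

```c
/* Sketch only, not the actual fix: attaches a backing file to a loop
 * device with LOOP_CONFIGURE (Linux >= 5.8) and an explicit block_size
 * of 512, rather than block_size = 0 (kernel-chosen, which can end up
 * as 4096 when the backing XFS sits on a 4Kn or sector_size=4096
 * dm-crypt device and direct I/O is in play). */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/loop.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s /dev/loopN backing-file\n", argv[0]);
        return 1;
    }
    int loop_fd = open(argv[1], O_RDWR);
    int file_fd = open(argv[2], O_RDWR);
    if (loop_fd < 0 || file_fd < 0) {
        perror("open");
        return 1;
    }

    struct loop_config cfg;
    memset(&cfg, 0, sizeof(cfg));
    cfg.fd = file_fd;
    cfg.block_size = 512;   /* explicit: keep 512-byte logical blocks for the VM volume */
    /* cfg.block_size = 0;     kernel picks; may become 4096 in the situation above */
    /* LO_FLAGS_DIRECT_IO is deliberately not set here: with a 4K-sector
     * backing device, direct I/O and 512-byte logical blocks can't be
     * combined on a loop device. */

    if (ioctl(loop_fd, LOOP_CONFIGURE, &cfg) < 0) {
        perror("LOOP_CONFIGURE");
        return 1;
    }
    printf("%s attached with 512-byte logical blocks\n", argv[1]);
    return 0;
}
```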
How to file a helpful issue
Qubes OS release
R4.1.1 TESTING
Brief summary
Using default "LVM" installation layout.
When I install Qubes with "LVM" disk layout, the default pool is varlibqubes.
The VMs are all based on the stock templates shipped with the ISO; I didn't build 4Kn templates.
None of my VMs boot after I upgrade dom0 to testing-latest.
Without enabling testing, things are fine.
Steps to reproduce
Expected behavior
VMs boot as usual.
Actual behavior
An error message appears: "Start failed: internal error: libxenlight failed to create new domain xxx".
Here are the logs in /var/log/libvirt/libxl/libxl-driver.log: