Default partitioning scheme in custom partitioning defaults to LVM rather than LVM thin provisioning #3225

Closed
qubesuser opened this Issue Oct 27, 2017 · 9 comments

@qubesuser

Qubes OS version:

R4.0-rc2

Steps to reproduce the behavior:

  1. Start Qubes installation and select custom partitioning

Expected behavior:

The default partitioning scheme in the selection box is LVM Thin Provisioning

Actual behavior:

The default partitioning scheme in the selection box is LVM (not thin provisioning)

@arjan-s

arjan-s commented Nov 25, 2017

I got bitten by this bug and am now actively using a file-based Qubes installation. I've read that LVM thin provisioning is the new way in 4.0, so now I'm trying to find out whether I should migrate to that. I don't really want to do a reinstallation, but I'm not sure whether it is even possible to modify an active installation to use LVM thin provisioning instead of files.
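
For anyone unsure which storage setup their installation ended up with, one quick check (just a sketch, assuming an R4.0 dom0 terminal) is to list the configured storage pools and their drivers:

  qvm-pool -l    # shows each pool's name and driver (file vs. lvm_thin)
  sudo lvs       # shows whether any LVM thin volumes exist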

@na--

na-- commented Nov 25, 2017

It is, although it's a bit of a pain to transition all of the VMs to the new thin pool. I actually chose the default LVM partitioning intentionally when I reinstalled RC2 after my old SSD crashed. The reason was that if the thin pool runs out of free space, LVM turns every thin volume read-only, including dom0's root volume if it lives in the same pool... So it may be tedious to transfer the VMs from file-based storage to a new thinly provisioned LVM pool, but the end result will be better, because dom0 won't be on the thin storage as well!

So here's how to actually do this:

  1. Free up some space in the qubes_dom0 volume group. I don't know whether that's possible on a running system; I did it from the install disk's console using resize2fs and lvreduce.
  2. Create a new LVM thin pool. For example, if you want to use the rest of the free space in the volume group, run lvcreate --type thin-pool --poolmetadatasize 1G -l '100%FREE' -n new_thin_pool_name qubes_dom0 (change the name as you like; 1G for poolmetadatasize is probably a bit excessive...)
  3. Create a new qvm-pool that is backed by the new LVM thin pool: qvm-pool -a new-lvm-pool lvm_thin -o volume_group=qubes_dom0,thin_pool=new_thin_pool_name (change the names here as well; you may also want to specify a higher number for revisions_to_keep, though I'm unsure whether it actually does anything yet)
  4. Clone your VMs to the new pool with qvm-clone -P new-lvm-pool old-vm-name new-vm-name. This is the tedious part: you cannot simply move the VMs; you have to clone them under another name in the new pool, delete the originals, and clone the VMs in the new pool again under the original names. This is especially painful for templates and netvms, since you cannot simply delete the originals; you have to reassign the dependent properties to the new VMs before that (and again after the second clone)... A few shell scripts will probably help here (see the sketch after this list).
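
A minimal shell sketch of steps 2-4, assuming a dom0 terminal. The pool names and the VM names in the loop are placeholders for illustration, not names from any actual setup; adapt them and double-check each command before running it:

  # Step 2: create the thin pool from the remaining free space in qubes_dom0
  sudo lvcreate --type thin-pool --poolmetadatasize 1G -l '100%FREE' \
      -n new_thin_pool_name qubes_dom0

  # Step 3: register it as a Qubes storage pool using the lvm_thin driver
  qvm-pool -a new-lvm-pool lvm_thin \
      -o volume_group=qubes_dom0,thin_pool=new_thin_pool_name

  # Step 4: clone each VM into the new pool under a temporary name,
  # remove the original, then clone it back under the original name
  for vm in example-appvm another-appvm; do    # placeholder VM names
      qvm-clone -P new-lvm-pool "$vm" "${vm}-new"
      qvm-remove "$vm"                          # asks for confirmation
      qvm-clone -P new-lvm-pool "${vm}-new" "$vm"
      qvm-remove "${vm}-new"
  done

For templates and netvms the loop alone is not enough: reassign the dependent qubes' template and netvm properties to the clone first, since qvm-remove should refuse to delete a template that other qubes still use.
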
@mirrorway

mirrorway commented Nov 25, 2017

I fell into the same trap. While everything appears to work so far, I'm worried about whether this configuration will be supported in the future...

@mirrorway

mirrorway commented Jan 20, 2018

I take it from the fix in the commit, and the lack of qvm-trim-template in 4.0, that users with file-based storage should reinstall or manually convert to thin-LVM?

@mirrorway

mirrorway commented Jan 20, 2018

Sorry, I meant file-based storage that was set up by the 4.0-rc2 installer, not by 3.2.

While new users won't accidentally end up with file-based storage, what about those 4.0-rc2/rc3 users who already have it? They should migrate to thin LVM, right?

@qubesos-bot

qubesos-bot commented Jan 20, 2018

Automated announcement from builder-github

The package pykickstart-2.32-4.fc25 has been pushed to the r4.0 testing repository for dom0.
To test this update, please install it with the following command:

sudo qubes-dom0-update --enablerepo=qubes-dom0-current-testing

Changes included in this update

@rustybird

rustybird commented Jan 20, 2018

While new users won't accidentally end up with file-based storage, what about those 4.0-rc2/rc3 users who already have it? They should migrate to thin LVM, right?

Probably. The file driver lacks some of the features of the lvm_thin driver, and far fewer users have tested it in an R4.0 context.

@qubesos-bot

qubesos-bot commented Feb 7, 2018

Automated announcement from builder-github

The package pykickstart-2.32-4.fc25 has been pushed to the r4.0 stable repository for dom0.
To install this update, please use the standard update command:

sudo qubes-dom0-update

Or update dom0 via Qubes Manager.

Changes included in this update
