VMs in file-based storage pools cannot use fstrim for /rw #3255

Open
na-- opened this Issue Oct 29, 2017 · 4 comments

na-- commented Oct 29, 2017

Qubes OS version:

4.0 RC2

Affected TemplateVMs:

any (tested with VMs based on both the fedora and archlinux templates)


Steps to reproduce the behavior:

  1. Create a file-based storage pool in dom0: qvm-pool -a newpoolname file -o dir_path=/somepath,revisions_to_keep=1
  2. Clone an existing VM to the new pool: qvm-clone -P newpoolname some-vm some-vm-2
  3. Check that the VM is present in /somepath/appvms/some-vm-2 (note that the sparseness of the private volume is not preserved - its file size equals the maximum allowed for the VM)
  4. Start the new VM
  5. Verify that the private volume is mounted at /rw with the discard option by running mount
  6. Try to run sudo fstrim -v /rw (see the sketch after this list)
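
For reference, steps 5 and 6 from inside the new VM look roughly like this; the mount output line is only illustrative of a typical Qubes AppVM, where the private volume shows up as /dev/xvdb:

    mount | grep /rw
    # /dev/xvdb on /rw type ext4 (rw,relatime,discard)
    sudo fstrim -v /rw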

Expected behavior:

Successful execution of fstrim, which should shrink the actual on-disk size of the VM's private volume file in dom0 down to the space actually used (as in Qubes OS 3.2 and earlier)
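
One way to verify the shrinking from dom0 is to compare the apparent and allocated sizes of the backing file (private.img is the usual backing-file name in file pools, but treat the exact path as an assumption):

    du -h --apparent-size /somepath/appvms/some-vm-2/private.img   # logical size (the VM's maximum)
    du -h /somepath/appvms/some-vm-2/private.img                   # blocks actually allocated on disk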

Actual behavior:

Error: fstrim: /rw: the discard operation is not supported

Related issues:

None that I could find

marmarek (Member) commented Oct 29, 2017

The private image in a file-based pool is similar to the root volume in Qubes 3.2: online trim cannot work, but it should be possible to do it using a temporary VM. Technically, this is because there is a device-mapper snapshot-origin target in the middle, which is there to allow starting DispVMs from such a VM.
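
A sketch of how that layering can be inspected from dom0; the device name and sector count below are made up for illustration and do not follow the actual Qubes naming scheme:

    sudo dmsetup table | grep snapshot-origin
    # some--vm--2-private: 0 4194304 snapshot-origin 253:5
    # The snapshot-origin target sits between the VM and the backing
    # file and does not pass discard requests through, hence the
    # fstrim failure.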

In theory, we could implement a separate pool driver (or an option for the file pool) to turn off this feature, which would mean:

  • only private volumes could be placed there, no root volumes
  • no DispVMs based on such VMs

But first we have a lot of higher-priority tasks (for example, ones affecting the default configuration).

(note that the sparseness of the private volume is not preserved - its file size equals the maximum allowed for the VM)

This bug should be trivial to fix.

BTW, for a file-based pool, IMO a better approach would be to look at btrfs and implement a pool driver for it, instead of extending the file + device-mapper based one. But that would require btrfs specifically, rather than working on any filesystem.
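
For illustration, the mechanism such a driver could build on is btrfs reflinks; this is a sketch of the underlying filesystem feature, not an existing Qubes pool driver:

    # On a btrfs filesystem: instant copy-on-write clone of a volume
    # image, with no device-mapper layer in between
    cp --reflink=always private.img private-clone.img
    # A loop device backed by such a file can translate guest discards
    # into hole punches, keeping the image sparse.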

na-- commented Oct 29, 2017

I'm not sure what the practical value of starting disposable VMs based on a specific AppVM is, versus being able to use just TemplateVMs for that purpose - what are the use cases for it?

And if an option to disable this feature were added to the file pool driver, would we lose the ability to have multiple revisions_to_keep as well?

marmarek (Member) commented Oct 29, 2017

I'm not sure what the practical value of starting disposable VMs based on a specific AppVM is, versus being able to use just TemplateVMs for that purpose - what are the use cases for it?

Also customizing the private volume for that VM (browser plugins etc.), while still using the same template as other VMs (fewer places to apply updates). And also smaller disk usage.

And if an option to disable this feature were added to the file pool driver, would we lose the ability to have multiple revisions_to_keep as well?

Unfortunately yes.

qubesuser commented Oct 29, 2017

How about just disallowing having both an AppVM and DispVMs based on it running at the same time when using a file pool? (That is, mounting the volume directly in the AppVM, and only creating the device-mapper-based CoW setup while DispVMs are running.)

It shouldn't be a problem in normal use, and if it is, the user can always clone the VM and make the clone the DispVM template.
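
The cloning workaround would look something like this in dom0, using the Qubes 4.0 template_for_dispvms property (VM names are placeholders):

    qvm-clone some-vm some-vm-dvm                     # clone the AppVM
    qvm-prefs some-vm-dvm template_for_dispvms True   # let DispVMs start from the clone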

But just deprecating the file pool in favor of the LVM thin pool seems an even better short-term solution.

andrewdavidwong added this to the Release 4.0 milestone Oct 29, 2017

marmarek added a commit to marmarek/qubes-core-admin that referenced this issue Nov 5, 2017

storage/file: fix preserving sparseness on volume clone
Force creating a sparse file, even if the source volume is not such a file (for example, a block device).

Reported by @na--
QubesOS/qubes-issues#3255
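
A rough illustration of what forcing a sparse destination means (not the code from the referenced commit; the source path is a placeholder):

    # conv=sparse makes dd skip writing all-zero blocks, so the
    # destination stays sparse even when the source (e.g. a block
    # device) is not a sparse file
    sudo dd if=/dev/mapper/src-volume of=private.img bs=1M conv=sparse
    du -h --apparent-size private.img   # full logical size
    du -h private.img                   # much smaller allocated size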

marmarek added a commit to marmarek/qubes-core-admin that referenced this issue Nov 7, 2017

storage/file: fix preserving sparseness on volume clone
Force creating a sparse file, even if the source volume is not such a file (for example, a block device).

Reported by @na--
QubesOS/qubes-issues#3255

qubesos-bot referenced this issue in QubesOS/updates-status Nov 21, 2017

Closed

core-admin v4.0.12 (r4.0) #313
