VMs in file-based storage pools cannot use fstrim for /rw #3255
Comments
marmarek
Oct 29, 2017
Member
Private image on a file-based pool is similar to the root volume on Qubes 3.2 - online trim cannot work, but it should be possible to do it using a temporary VM. Technically this is because there is a device-mapper snapshot-origin in the middle; it is there to allow starting DispVMs from such a VM.
In theory we could implement separate pool driver (or an option to file pool) to turn off this feature, which means:
- only private volume could be placed there, no root volume
- no DispVMs based on such VMs
But first we have a lot of higher priority tasks (for example affecting default configuration).
> (though the sparseness of the private volume is not preserved - its filesize is equal to the maximum allowed for the VM)
This bug should be trivial to fix.
BTW For a file-based pool, IMO a better approach would be looking at btrfs and implementing a pool driver for it, instead of extending the file + device-mapper based one. But that would require btrfs instead of working on any filesystem.
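(For reference: the layering described above can be inspected from dom0 while the VM is running. The commands below are generic losetup/dmsetup calls, not Qubes-specific tooling, and the exact device names depend on the VM and pool - this is just one way to confirm that a snapshot-origin target sits between the VM and the private image file:)

    # in dom0, with the affected VM running
    sudo losetup -a                           # loop devices backing the .img files
    sudo dmsetup table | grep snapshot-origin # dm devices using the snapshot-origin target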
na--
Oct 29, 2017
I'm not sure what the practical value of starting disposable VMs based on a specific AppVM is vs. being able to use just TemplateVMs for the purpose - what are the use cases for that?
And if an option to disable this feature was added to the file pool driver, do we lose the ability to keep multiple revisions (revisions_to_keep) as well?
marmarek
Oct 29, 2017
Member
> I'm not sure what the practical value of starting disposable VMs based on a specific AppVM is vs. being able to use just TemplateVMs for the purpose - what are the use cases for that?
Customizing also the private volume for that VM (browser plugins etc.), while still using the same template as other VMs (fewer places to apply updates). And also smaller disk usage.
> And if an option to disable this feature was added to the file pool driver, do we lose the ability to have multiple revisions_to_keep as well?
Unfortunately yes.
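(For context, using an AppVM as a base for disposable VMs in 4.0 is roughly a matter of flipping a property and pointing qvm-run at it; the VM name below is only an example:)

    qvm-prefs some-vm template_for_dispvms True
    qvm-run --dispvm=some-vm firefox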
qubesuser
Oct 29, 2017
How about just disallowing having both an AppVM and DispVMs based on it running at the same time when using a file pool? (and thus mounting the volume directly in the AppVM and only creating the device-mapper CoW setup when DispVMs are running)
It shouldn't be a problem in normal use, and the user can always just clone the VM and make the clone the dispvm template if it is.
But just deprecating the file pool in favor of the thin pool seems an even better short term solution.
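(The clone workaround mentioned above could look roughly like this - VM names are illustrative:)

    qvm-clone some-vm some-vm-dvm
    qvm-prefs some-vm-dvm template_for_dispvms True
    # keep using some-vm directly; start DispVMs from some-vm-dvm instead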
na-- commented Oct 29, 2017
Qubes OS version:
4.0 RC2
Affected TemplateVMs:
any (tested with VMs based on both the fedora and archlinux templates)
Steps to reproduce the behavior:
1. Create a new file-based pool: qvm-pool -a newpoolname -o dir_path=/somepath,revisions_to_keep=1
2. Clone an existing VM into it: qvm-clone -P newpoolname some-vm some-vm-2 - its files appear under /somepath/appvms/some-vm-2 (though the sparseness of the private volume is not preserved - its filesize is equal to the maximum allowed for the VM)
3. Start the VM and check that /rw is mounted with discard by running mount
4. Run sudo fstrim -v /rw
Expected behavior:
Successful execution of fstrim and reducing the actual filesize of the VM private partition file in dom0 to the actually used space (as in QubesOS 3.2 and lower)
Actual behavior:
Error: fstrim: /rw: the discard operation is not supported
General notes:
Related issues:
None that I could find
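(For anyone reproducing this: a quick way to confirm the device really does not advertise discard, rather than fstrim misbehaving, is to check the mount options and the block device's discard parameters inside the VM. /dev/xvdb is assumed to be the private volume, as is usual in Qubes VMs:)

    mount | grep ' /rw '          # should show the discard mount option
    lsblk --discard /dev/xvdb     # DISC-GRAN/DISC-MAX of 0 means no discard support
    sudo fstrim -v /rw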