Difficult to determine free and used disk space with LVM thin provisioning #3240
Comments
na-- changed the title from "[dom0] difficult to determine free and used disk space with LVM thin provisioning" to "Difficult to determine free and used disk space with LVM thin provisioning" on Oct 27, 2017
na-- commented Oct 27, 2017 (edited)
As a slight off-topic continuation of the above General notes, I don't see how I can easily determine the storage pool used by a particular VM. qvm-ls, qvm-prefs and qvm-pool lack the ability to display that information. A Python script that imports some of the Qubes libraries, or reading qubes.xml directly, seems to be the only way to determine the storage pool for a particular VM.
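For what it's worth, the Python route mentioned above can be quite short. A minimal sketch, assuming the qubesadmin Python API available in dom0 on R4.0 (the attribute names vm.volumes and volume.pool are assumptions about that API, not something confirmed in this thread):

python3 - <<'EOF'
# Print the backing pool of every volume of every VM
# (qubesadmin API; attribute names assumed).
import qubesadmin

app = qubesadmin.Qubes()
for vm in app.domains:
    for name, volume in vm.volumes.items():
        print(vm.name, name, volume.pool)
EOF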
qubesuser commented Oct 27, 2017 (edited)
Background: With LVM, you assign block devices as "PV"s (the Qubes default is a single disk-sized volume inside a dm-crypt-encrypted physical partition). PVs are combined into a "VG" (the Qubes default is a "qubes_dom0" VG), from which "LV"s can be allocated, some of which are "thin pools" (the Qubes default is one VG-sized "pool00"). From a thin pool, "thin LV"s can be allocated; unlike non-thin LVs, these don't take up space until data is written into them, or, if they are snapshots, until they are modified. Qubes uses one thin LV for dom0, and for each VM there is a snapshot of the template rootfs ("-root-snap"), a volatile volume for swap ("-volatile"), the data volume for running VMs ("-private-snap"), and the data volume as of the last shutdown ("-private").
Currently, just run "sudo lvs qubes_dom0/pool00" (or whatever the name of the thin pool is if you have a non-default setup). The LSize column is the total storage space in the thin pool, and Data% is the percentage that is in use.
Also, while it's not used by default in Qubes, it's possible to have thin pools that auto-grow, in which case the VFree column of "sudo vgs qubes_dom0" tells you how much space is left in the VG for the thin pool to grow into.
You can also add a "Generic monitor" to the panel to show the output of that command, possibly after processing it with a shell script.
I agree, though, that there needs to be a built-in GUI for this, and qvm-pool should also give this information.
The new LVM model is much better because it's more efficient than using a filesystem and it makes efficient snapshots easy: you can clone VMs without requiring additional space, easily implement copy-on-write for template roots, easily back up a running VM, and keep old versions of a VM for backup purposes with efficient storage usage.
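To make the hierarchy above concrete, here is a quick way to inspect each layer from a dom0 terminal. This is a sketch assuming the default names from this thread; all four commands are standard LVM tools:

sudo pvs                      # physical volumes backing the VG
sudo vgs qubes_dom0           # the volume group; VFree = unallocated space
sudo lvs qubes_dom0/pool00    # the thin pool; LSize = total, Data% = in use
sudo lvs qubes_dom0           # all LVs, including the per-VM thin volumes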
qubesuser commented Oct 27, 2017 (edited)
This command will show just the free space, and it can be put in a "Generic monitor" added to the XFCE panel (change %.0f to %.1f for one decimal place, and so on, or remove the %s if you don't like the unit suffix):
sudo lvs --noheadings -o lv_size,data_percent /dev/mapper/qubes_dom0-pool00 | perl -pe 's/\s*([0-9.]+)([a-zA-Z]*)\s+([0-9.]+).*/sprintf("%.0f%s", ($1 * (100 - $3) \/ 100.0), $2)/e'
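If you prefer to avoid Perl, an equivalent sketch with awk (this assumes lvs prints the two requested columns as "<size><unit> <percent>", as in the outputs shown later in this thread):

sudo lvs --noheadings --units g -o lv_size,data_percent qubes_dom0/pool00 \
  | awk '{ size=$1; pct=$2; sub(/g$/, "", size); printf "%.0fg\n", size * (100 - pct) / 100 }'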
na-- commented Oct 27, 2017 (edited)
Thank you very much for the detailed and helpful answer!
> Currently, just run "sudo lvs qubes_dom0/pool00" (or whatever the name of the thin pool is if you have a non-default setup).
Do you know why the size of the pool is less than the available space on the drive? The test drive I installed Qubes OS 4.0 on (default install, no custom storage settings) is 118 GB and qubes_dom0-swap is 7.6 GB, but the pool is only ~95 GB. Where did the other ~15 GB go?
$ sudo lvs qubes_dom0/pool00
  LV     VG         Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  pool00 qubes_dom0 twi-aotz-- 94.69g             58.43  29.44
I think vgs and pvs show that space, but does that mean it will be used as needed, or that it is unutilized?
$ sudo vgs
  VG         #PV #LV #SN Attr   VSize   VFree
  qubes_dom0   1  62   0 wz--n- 118.24g 15.81g
Also, should I be concerned about the Meta% and possibly filling up the metadata LV?
> You can also add a "Generic monitor" to the panel to show the output of that command, possibly after processing with a shell script.
Thanks! That's precisely what I needed, and it should be easy enough, if a bit inelegant :)
(Edit: I did not see your second answer while I was commenting; thanks for the script!)
> The new LVM model is much better because it's more efficient than using a filesystem and it makes efficient snapshots easy [...]
Thanks, I should have thought of those advantages; I know at least that much about LVM... I'm not sure how far along the Qubes tools are in taking advantage of them, so I'm looking forward to the future blog post/news article/documentation that describes the new storage subsystem. I just noticed the qvm-volume tool; it looks like easy reverts may be possible even now, if revisions_to_keep is set for the LVM pool.
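A quick sanity check on the arithmetic in the two outputs above suggests the "missing" space is simply the VG's unallocated VFree (the small remainder is presumably the thin pool's metadata LV, though that is a guess):

echo '94.69 + 7.6 + 15.81' | bc    # pool00 + swap + VFree = 118.10, vs. VSize 118.24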
na-- commented Oct 27, 2017 (edited)
Another slightly off-topic storage question... Now that I think about it, I noticed something else that's strange: does dom0 need that much swap space? As you said, the swap volume for each VM is part of its volatile volume (the last 1 GB, I think), so why does dom0 have almost 8 GB of swap?
qubesuser commented Oct 27, 2017
I think the 15-16 GB free might be space left to allow resizing the metadata LV for the thin pool, which is limited to 16 GB (just a guess, though).
Meta% reaching 100% should be fixable by extending the metadata LV (up to the 16 GB maximum, using that free space); the man page says this may also happen automatically, but it's not clear how that is configured and whether it's enabled in Qubes.
I think the installer just creates a swap partition sized at half the current RAM, since Fedora does that and it's a reasonable default for standard Linux installs; I think that's indeed not necessary for Qubes.
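For reference, extending a thin pool's metadata LV out of the VG's free space is a one-liner. A sketch with the default names from this thread (lvextend's --poolmetadatasize option is standard LVM; the automatic behavior mentioned above is governed by the thin_pool_autoextend_* settings in /etc/lvm/lvm.conf):

sudo lvs -a qubes_dom0                                  # shows the hidden [pool00_tmeta] LV
sudo lvextend --poolmetadatasize +1G qubes_dom0/pool00  # grow the metadata LV by 1 GiB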
marmarek (Member) commented Oct 28, 2017
> I think the installer just creates a swap partition sized at half the current RAM, since Fedora does that and it's a reasonable default for standard Linux installs; I think that's indeed not necessary for Qubes.
Yes...
As for checking the storage pool of a VM's volumes: see the qvm-volume tool.
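As a sketch of that suggestion (the subcommand names are assumptions about the Qubes 4.0 tooling and "work" is a hypothetical VM name; check qvm-volume --help in dom0 for the exact syntax):

qvm-volume list                # list all volumes together with their pools
qvm-volume info work:private   # properties of one volume, including its pool
                               # ("work" is a hypothetical VM name)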
na-- commented Oct 28, 2017
Thanks, I noticed qvm-volume when I wrote the reply above; it's pretty neat! Incidentally, I found both qvm-pool and qvm-volume after getting frustrated with the LVM command-line tools, hitting qvm-+Tab, and reading through the list... :) Now that I mostly grok the new storage system, I like it very, very much!
@marmarek do you think it's reasonable to extend qvm-pool and maybe qvm-volume to (optionally?) show the total/used/free space in the different pools and volumes?
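For anyone trying the revert workflow mentioned a few comments up, a minimal sketch (again with the hypothetical VM name "work"; the config and revert subcommands are assumptions about qvm-volume's interface, so verify with qvm-volume --help):

qvm-volume config work:private revisions_to_keep 2   # keep two old revisions ("work" is hypothetical)
qvm-volume revert work:private                       # roll back to the most recent revision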
andrewdavidwong added the labels bug, C: desktop-linux, P: major, UX on Oct 28, 2017
andrewdavidwong added this to the Release 4.0 milestone on Oct 28, 2017
marmarek (Member) commented Oct 28, 2017
> @marmarek do you think it's reasonable to extend qvm-pool and maybe qvm-volume to (optionally?) show the total/used/free space in the different pools and volumes?
Yes, definitely.
Commits referencing this issue were added to marmarek/qubes-core-admin on Nov 5 and Nov 7, 2017
qubesos-bot referenced this issue in QubesOS/updates-status on Nov 21, 2017: core-admin v4.0.12 (r4.0) #313 (Closed)
A commit referencing this issue was added to marmarek/qubes-core-admin-client on Jan 17, 2018
qubesos-bot referenced this issue in QubesOS/updates-status on Jan 18, 2018: core-admin-client v4.0.13 (r4.0) #364 (Closed)
Commits referencing this issue were added to marmarek/qubes-core-admin on Mar 19, 2018
marmarta referenced this issue in QubesOS/qubes-desktop-linux-manager on Mar 19, 2018: Disk Size Widget #19 (Merged)
marmarek closed this in marmarek/qubes-desktop-linux-manager@a3b9362 on Mar 20, 2018
Commits referencing this issue were added to marmarek/qubes-core-admin on Mar 20, 2018
qubesos-bot commented Mar 21, 2018
Automated announcement from builder-github
The package qubes-desktop-linux-manager-4.0.8-1.fc25 has been pushed to the r4.0 testing repository for dom0. To test this update, please install it with the following command:
sudo qubes-dom0-update --enablerepo=qubes-dom0-current-testing
qubesos-bot added the r4.0-dom0-cur-test label on Mar 21, 2018
qubesos-bot referenced this issue in QubesOS/updates-status on Mar 21, 2018: desktop-linux-manager v4.0.8 (r4.0) #457 (Closed)
qubesos-bot referenced this issue in QubesOS/updates-status on Mar 29, 2018: core-admin v4.0.25 (r4.0) #469 (Closed)
marmarek referenced this issue on Mar 30, 2018: Implement UI Notifications for cases of a Qube disk full #1872 (Closed)
qubesos-bot commented May 14, 2018
Automated announcement from builder-github
The package qubes-desktop-linux-manager-4.0.9-1.fc25 has been pushed to the r4.0 stable repository for dom0. To install this update, please use the standard update command:
sudo qubes-dom0-update
Or update dom0 via Qubes Manager.
na-- commented Oct 27, 2017 (edited 2 times)
Qubes OS version:
R4.0 RC2
Affected TemplateVMs:
none (dom0 issue)
Steps to reproduce the behavior:
Try to find out how much free space is left on the device after a standard Qubes OS 4.0 RC2 install and some moderate use.
Expected behavior:
Have an easy way (GUI and/or CLI) to determine the free and used disk space. If that's difficult with LVM thin provisioning, at least have some official documentation that explains how to do it and the potential pitfalls (more on this below).
Actual behavior:
I did not find an easy, user-friendly way to get useful information about free and used storage space in R4.0 RC2. Here are some of the things I tried:
- df -h / in dom0 is totally misleading; the available space it shows is wrong (and it looks like it's misleading in several ways due to this).
- Using qvm-ls --format disk to sum up the space used by VMs (something like echo "$(( $(qvm-ls --format disk | awk '{print $3; }' | paste -sd'+') ))", adding dom0 separately...) is probably also misleading if I have cloned VMs: I think they are shallow clones, but their DISK size is identical to the parent VM's when shown by qvm-ls --format disk.
- The PFree column in sudo pvs at first seemed to do the job, but I'm still not sure how accurate it is. I'd have to thoroughly review the linked issues and understand LVM and how the current storage design works to be sure...
- The qvm-pool tool is of no use; it just lists the names of the different storage pools.
- The documentation about the new storage system seems outdated and incomplete.
General notes:
Any help is appreciated. Also, what are the benefits of the LVM model of provisioning compared to the old file-based model? And is there a way to use the old model by default? I don't remember seeing an option in the installer for this, but I may have missed it.
Right now I'm not convinced that the LVM volumes for VMs are worth it... I am strongly considering using qvm-clone -P varlibqubes to transfer all VMs to the old file storage mechanism, if that's actually the way to do it. It looks like I can also use qubes-prefs to set it as the default.
Related issues:
I found these issues regarding the new storage system: #1842 and #2256; there are probably others.