Difficult to determine free and used disk space with LVM thin provisioning #3240

Closed

na-- opened this Issue Oct 27, 2017 · 11 comments

na-- commented Oct 27, 2017

Qubes OS version:

R4.0 RC2

Affected TemplateVMs:

none (dom0 issue)


Steps to reproduce the behavior:

Try to find out how much free space is left on the device after a standard QubesOS 4.0 RC2 install and some moderate use.

Expected behavior:

Have an easy way (GUI and/or CLI) to determine:

  • how much free space is left on the disk
  • which VMs take up the most space (i.e. sort VMs by total space used)

If that's difficult with LVM thin provisioning, at least have some official documentation that explains how to do it and the potential pitfalls (more on this below).

Actual behavior:

I did not find an easy user-friendly way to get useful information regarding the free and used storage space in R4.0 RC2. Here are some of the things I tried:

  • df -h / in dom0 is totally misleading - the available space it shows is wrong (and it looks like it's misleading in several ways due to this)
  • I don't think that information is correctly shown anywhere in the GUI; all file managers and free-space widgets are misleading - they show only the virtual (thinly provisioned) free space of the dom0 root volume...
  • Using the output of qvm-ls --format disk to sum up the space used by VMs (something like echo "$(( $(qvm-ls --format disk | awk '{print $3; }' | paste -sd'+') ))", adding dom0 separately - see the sketch after this list) is probably also misleading if I have cloned VMs - I think they are shallow clones, but their DISK size is shown as identical to the parent VM's by qvm-ls --format disk
  • Using the normal LVM tools to try and find the free space... My knowledge of LVM is very basic, especially regarding thin provisioning. At first the PFree column in sudo pvs seemed to do the job, but I'm still not sure how accurate that is. I would have to thoroughly review the linked issues and understand LVM and how the current storage design works to be sure...
  • The new(?) qvm-pool tool is of no use - it just lists the names of the different storage pools.
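
For reference, here is the summation I tried, cleaned up a bit. It is only a rough sketch: like the snippet above, it assumes that the third column of qvm-ls --format disk is the per-VM usage figure, and, as noted, it will over-count shallow clones:

  # sum the third column of qvm-ls --format disk, skipping the header row
  qvm-ls --format disk | awk 'NR > 1 { total += $3 } END { print total }'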

The documentation about the new storage system seems outdated and incomplete.

General notes:

Any help is appreciated. Also, what are the benefits of the LVM model of provisioning compared to the old file-based model? And is there a way to use the old model by default? I don't remember seeing an option in the installer for this, but I may have missed it.

Right now I'm not convinced that the LVM volumes for VMs are worth it... I am strongly considering using qvm-clone -P varlibqubes to transfer all VMs to the old file storage mechanism, if that's actually the way to do it. It looks like I can also use qubes-prefs to make it the default.
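
If I understand things correctly, that would be something like the following (unverified - the pool name comes from this thread, and the qubes-prefs property name is my guess, so treat this as a sketch):

  # clone an existing VM (here a hypothetical "work") into the file-based pool
  qvm-clone -P varlibqubes work work-file
  # make the file-based pool the default for newly created VMs
  qubes-prefs default_pool varlibqubes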


Related issues:

I found these issues regarding the new storage system: #1842 and #2256; there are probably others.

na-- changed the title from "[dom0] difficult to determine free and used disk space with LVM thin provisioning" to "Difficult to determine free and used disk space with LVM thin provisioning" on Oct 27, 2017


na-- commented Oct 27, 2017

As a slight off-topic continuation of the above General notes, I don't see how I can easily determine the storage pool used by a particular VM. qvm-ls, qvm-prefs and qvm-pool lack the ability to display that information. A Python script that imports some of the qubes libraries, or reading qubes.xml directly, seems like the only way to find it.
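
At a minimum, something crude like the following at least shows which pool names appear in qubes.xml at all (a hedged sketch - I'm assuming volume configurations in that file carry a pool="..." attribute, which I haven't verified; mapping pools to specific VMs would still mean reading the file by hand):

  # count how often each pool name is referenced in qubes.xml
  sudo grep -o 'pool="[^"]*"' /var/lib/qubes/qubes.xml | sort | uniq -c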


qubesuser commented Oct 27, 2017

Background: With LVM, you assign block devices to be "PV"s (the Qubes default is a single disk-sized volume inside a dm-crypt-encrypted physical partition), which are then combined to form a "VG" (the Qubes default is a "qubes_dom0" VG). From the VG, "LV"s can be allocated, some of which are "thin pools" (the Qubes default is one VG-sized "pool00"). From a thin pool, "thin LV"s can be allocated; unlike non-thin LVs, these don't take up space until data is written into them, or until they are modified if they are snapshots. Qubes uses one thin LV for dom0, and for each VM there's a snapshot of the template rootfs ("-root-snap"), a volatile volume for swap ("-volatile"), the data volume used by the running VM ("-private-snap"), and the data volume as of the last shutdown ("-private").

Currently, just run "sudo lvs qubes_dom0/pool00" (or whatever the name of the thin pool is if you have a non-default setup).

The LSize column is the total storage space in the thin pool, and Data% is the percentage that is in use.

Also, while it's not used by default in Qubes, it's possible to have thin pools that auto-grow themselves; in that case the VFree column in "sudo vgs qubes_dom0" tells you how much space is left in the VG for the thin pool to grow into.

You can also add a "Generic monitor" to the panel to show the output of that command, possibly after processing with a shell script.

I agree though that there needs to be a built-in GUI for this, and also qvm-pool should give this information.

The new LVM model is much better because it's more efficient than using a filesystem and it makes efficient snapshots easy. That means you can clone VMs without requiring additional space, easily implement CoW for template roots, and easily back up running VMs, as well as keep old versions of a VM around for backup purposes with efficient storage usage.
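
For scripting, the same numbers can be read in a friendlier form, for example (field names as in lvs(8)/vgs(8); adjust the pool and VG names for non-default setups):

  # total size and used-data percentage of the thin pool
  sudo lvs --noheadings --units g -o lv_size,data_percent qubes_dom0/pool00
  # free space left in the VG for the pool (or its metadata LV) to grow into
  sudo vgs --noheadings --units g -o vg_free qubes_dom0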

qubesuser commented Oct 27, 2017

This command will just show the free space, and can be put in a "Generic monitor" added to the XFCE panel (change %.0f into %.1f for one decimal place, and so on, or remove the %s if you don't like the unit suffix):
sudo lvs --noheadings -o lv_size,data_percent /dev/mapper/qubes_dom0-pool00|perl -pe 's/\s*([0-9.]+)([a-zA-Z]*)\s+([0-9.]+).*/sprintf("%.0f%s", ($1 * (100 - $3) \/ 100.0), $2)/e'


na-- commented Oct 27, 2017

Thank you very much for the detailed and helpful answer!

Currently, just run "sudo lvs qubes_dom0/pool00" (or whatever the name of the thin pool is if you have a non-default setup).

Do you know why the size of the pool is less than the available space on the drive? The test drive I installed Qubes OS 4.0 (default install, no custom storage settings) on is 118 GB and qubes_dom0-swap is 7.6 GB, but the pool is only 94 GB - are ~15 GB missing?

$ sudo lvs qubes_dom0/pool00
  LV     VG         Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  pool00 qubes_dom0 twi-aotz-- 94.69g             58.43  29.44 

I think vgs and pvs show them as free, but does that mean they will be used as needed, or that they are simply unutilized?

$ sudo vgs
  VG         #PV #LV #SN Attr   VSize   VFree 
  qubes_dom0   1  62   0 wz--n- 118.24g 15.81g

Also, should I be concerned about the Meta% and possibly filling up the metadata LV?

You can also add a "Generic monitor" to the panel to show the output of that command, possibly after processing with a shell script.

Thanks! That's precisely what I needed and should be easy enough, if a bit inelegant :)
(Edit: did not see your second answer while I was commenting, thanks for the script! )

The new LVM model is much better because it's more efficient than using a filesystem and it makes efficient snapshots easy. That means you can clone VMs without requiring additional space, easily implement CoW for template roots, and easily back up running VMs, as well as keep old versions of a VM around for backup purposes with efficient storage usage.

Thanks, I should have thought of those advantages - I know at least that much about LVM... I'm not sure how far along the Qubes tools are in taking advantage of them; I'm looking forward to the future blog post/news article/documentation that describes the new storage subsystem. I just noticed the qvm-volume tool - it looks like easy reverts may be possible even now, if revisions_to_keep is set for the LVM pool.


na-- commented Oct 27, 2017

Another slight storage-related off-topic...
Now that I think about it, I noticed something else that's strange - does dom0 need that much swap space? As you said, the swap for each VM is part of its volatile volume (the last 1 GB, I think), so why does dom0 have almost 8 GB of swap?

qubesuser commented Oct 27, 2017

I think the 15-16 GB of free space might be left over to allow resizing the metadata LV for the thin pool, which is limited to 16 GB in size (just a guess, though).

The Meta% getting to 100% should be fixable by extending the metadata LV (up to the 16 GB maximum, using that free space); the man page says that this may also happen automatically, but it's not clear how that is configured and whether it's enabled in Qubes.

I think the installer just creates a swap partition sized at half the installed RAM because Fedora does that, and it's a reasonable default for standard Linux installs; that's indeed probably not necessary for Qubes.
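
If the metadata LV ever does need growing, something along these lines should work (a hedged sketch - adjust the VG/pool names and the size step for your setup):

  # grow the thin pool's metadata LV by 1 GB out of the VG's free space
  sudo lvextend --poolmetadatasize +1G qubes_dom0/pool00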


marmarek commented Oct 28, 2017

I think the installer just creates a swap partition sized at half the installed RAM because Fedora does that, and it's a reasonable default for standard Linux installs; that's indeed probably not necessary for Qubes.

Yes...

As for checking the storage pool of a VM's volumes - see the qvm-volume tool.
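
For example (a hedged illustration, using a hypothetical VM named "work" - the exact output columns may differ between builds):

  # list the volumes of a given VM together with the pool backing each one
  qvm-volume list work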


na-- commented Oct 28, 2017

Thanks, I noticed qvm-volume while writing the above reply - it's pretty neat! Incidentally, I found both qvm-pool and qvm-volume after getting frustrated with the LVM command-line tools, typing qvm-+tab and reading through the list... :) Now that I mostly grok the new storage system, I like it very much!

@marmarek do you think it's reasonable to extend qvm-pool and maybe qvm-volume to (optionally?) show the total/used/free space in the different pools and volumes?

marmarek commented Oct 28, 2017

@marmarek do you think it's reasonable to extend qvm-pool and maybe qvm-volume to (optionally?) show the total/used/free space in the different pools and volumes?

Yes, definitely.

marmarek added a commit to marmarek/qubes-core-admin that referenced this issue Nov 5, 2017

storage: add size and usage properties to pool object
Add Pool.size and Pool.usage to the API. Implement them for LVM and File
pools. Add appropriate tests.

QubesOS/qubes-issues#3240


qubesos-bot referenced this issue in QubesOS/updates-status Nov 21, 2017

Closed

core-admin v4.0.12 (r4.0) #313

marmarek added a commit to marmarek/qubes-core-admin-client that referenced this issue Jan 17, 2018

storage: add size and usage properties
It's already available in config dict, but lets provide uniform API. And
also it's a bit weird to look for usage data in configuration...

QubesOS/qubes-issues#3240

qubesos-bot referenced this issue in QubesOS/updates-status Jan 18, 2018

Closed

core-admin-client v4.0.13 (r4.0) #364

marmarek added a commit to marmarek/qubes-core-admin that referenced this issue Mar 19, 2018

storage: add Volume.usage property to API definition
It was already implemented by most pool drivers, but make it explicit
part of the API.

QubesOS/qubes-issues#3240

marmarek added a commit to marmarek/qubes-core-admin that referenced this issue Mar 19, 2018

storage: Add dummy size/usage properties to LinuxKernel pool
The pool lives inside dom0 filesystem, so do not count it separately -
it is already covered by default varlibqubes pool.

QubesOS/qubes-issues#3240

marmarta referenced this issue in QubesOS/qubes-desktop-linux-manager Mar 19, 2018

Merged

Disk Size Widget #19

marmarek added a commit to marmarek/qubes-core-admin that referenced this issue Mar 19, 2018

storage: add Pool.included_in() method for checking nested pools
It may happen that one pool is inside a volume of other pool. This is
the case for example for varlibqubes pool (file driver,
dir_path=/var/lib/qubes) and default lvm pool (lvm_thin driver). The
latter include whole root filesystem, so /var/lib/qubes too.
This is relevant for proper disk space calculation - to not count some
space twice.

QubesOS/qubes-issues#3240
QubesOS/qubes-issues#3241


qubesos-bot commented Mar 21, 2018

Automated announcement from builder-github

The package qubes-desktop-linux-manager-4.0.8-1.fc25 has been pushed to the r4.0 testing repository for dom0.
To test this update, please install it with the following command:

sudo qubes-dom0-update --enablerepo=qubes-dom0-current-testing

Changes included in this update


qubesos-bot referenced this issue in QubesOS/updates-status Mar 21, 2018

Closed

desktop-linux-manager v4.0.8 (r4.0) #457

qubesos-bot referenced this issue in QubesOS/updates-status Mar 29, 2018

Closed

core-admin v4.0.25 (r4.0) #469

qubesos-bot commented May 14, 2018

Automated announcement from builder-github

The package qubes-desktop-linux-manager-4.0.9-1.fc25 has been pushed to the r4.0 stable repository for dom0.
To install this update, please use the standard update command:

sudo qubes-dom0-update

Or update dom0 via Qubes Manager.

Changes included in this update

