
VMs should be able to boot and show GUI or error message even if inodes or blocks in /rw are exhausted #3274

Open
qubesuser opened this Issue Nov 2, 2017 · 3 comments


Qubes OS version:

4.0-rc2

Steps to reproduce the behavior:

  1. Exhaust inodes on /rw by creating empty files until none can be created anymore
  2. Use a template, like the Whonix ones, that creates additional users in /home (e.g. "tunnel"). The problem will likely also show up when blocks are exhausted, and perhaps with templates that have no additional users.

Expected behavior:

The VM boots, and Qubes somehow informs the user that the disk should be resized to provide more inodes/blocks.

Actual behavior:

The VM boots, but the /rw/home -> /home bind mount doesn't happen, the GUI doesn't start, and the user has no way of knowing what the problem is.

General notes:

All the startup scripts and code should be able to cope with /rw being full; for essential things, like creating home directories for required users from /etc/skel, mount a tmpfs if there is no space on /rw and display an error dialog telling the user to resize the disk.

Ideally, this could be done with a special qrexec call to dom0 that asks for more disk space, allowing the issue to be fixed with one click (e.g. by simply doubling the size of the data partition). The same call could also optionally tell dom0 that the VM entered a "recovery" mode (like the tmpfs mounting on homes) and should be rebooted once more disk space is made available.

That same qrexec call could also be invoked by a background process that monitors for the disk filling up, for a better user experience.
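The monitoring idea above can be sketched with `os.statvfs`, which reports both free blocks and free inodes for a mounted filesystem. This is only an illustration, not Qubes code; the 1% threshold and the service name mentioned below are assumptions.

```python
import os

def is_exhausted(free, total, min_ratio=0.01):
    """True when fewer than min_ratio of the units (blocks or inodes) remain free."""
    if total == 0:  # some filesystems report no fixed inode limit
        return False
    return free / total < min_ratio

def check_rw(path="/rw"):
    """Return (blocks_low, inodes_low) for the private volume mounted at `path`."""
    st = os.statvfs(path)
    return (is_exhausted(st.f_bavail, st.f_blocks),   # free data blocks
            is_exhausted(st.f_favail, st.f_files))    # free inodes
```

A background loop could call `check_rw()` periodically and, when either flag is set, invoke a qrexec service (a hypothetical name, e.g. `qrexec-client-vm dom0 qubes.RequestDiskResize`) so that dom0 can offer the one-click resize described above.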

@marmarek

Member

marmarek commented Nov 2, 2017

I'd say it is a bad idea to make /home a tmpfs in that case. It is too risky that someone will manage to actually use it to store some file ("I'll just use the VM for a moment and resize later") and then lose the data. Qrexec to dom0 to report the problem is a better approach. Or something based on #889?

@qubesuser

qubesuser commented Nov 2, 2017

Yeah, actually that's indeed not a good idea, and a qrexec call to tell dom0 that the VM is failing to boot and why seems better.

Perhaps along with an explicit "recovery mode" that would not mount the private disk, which would also make it possible to recover compromised VMs in addition to those with full or broken disks.

@tasket

tasket commented Mar 29, 2018

This would be really good to have in general, but also for user/community-added services that run at startup, like Qubes-VM-hardening. That service starts before /rw is mounted so it can perform checks & operations on the volume.

If the checks fail, it needs a good way to communicate that to the user before switching the VM to a failed maintenance state.
