Standard fsck isn't run on VM startup #979
Comments
marmarek added the enhancement, C: core, and P: minor labels on May 12, 2015
marmarek added this to the Release 3.1 milestone on May 12, 2015
marmarek (Member) commented Aug 5, 2015
Currently, if one wants to run fsck on /rw, it requires the following steps (a scripted form is sketched after the list):
- Add the single parameter to the kernel command line: qvm-prefs -s VMNAME kernelopts single
- Start the VM - it will time out on the qrexec connection, but that's OK.
- Access the VM console using sudo xl console VMNAME
- Get shell access (just press Enter when prompted for the root password).
- Run fsck on /dev/xvdb (/rw): fsck -f /dev/xvdb
- Shut down the VM with poweroff from that shell.
- Restore the default kernel command line: qvm-prefs -s VMNAME kernelopts default
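A rough dom0 helper collecting these steps might look like the sketch below. It is hypothetical and untested; it assumes the Qubes 3.x tools named above and leaves the interactive fsck/poweroff to the user at the console:

```sh
#!/bin/sh
# Hypothetical dom0 sketch of the manual steps above.
VM="$1"

qvm-prefs -s "$VM" kernelopts single   # boot into single-user mode
qvm-start "$VM" || true                # the qrexec timeout error is expected

# At the console: press Enter for a root shell, then run
#   fsck -f /dev/xvdb
#   poweroff
sudo xl console "$VM"

qvm-prefs -s "$VM" kernelopts default  # restore the default command line
```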
marmarek (Member) commented Aug 26, 2015
We should go with running standard fsck on VM startup. If it isn't desirable for some VM (e.g. a DispVM, which doesn't have /rw at all), this could be addressed in the .mount unit file with some condition.
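For illustration, a conditioned mount unit might look like the sketch below. The unit name follows systemd's rule that a mount unit is named after its mount point (/rw → rw.mount); the condition and the fsck dependency are assumptions here, not the actual Qubes implementation:

```ini
# rw.mount (hypothetical sketch)
[Unit]
# Skip the mount (and the fsck below) when the private volume is
# absent, e.g. in a DispVM.
ConditionPathExists=/dev/xvdb
# Run the standard fsck before mounting, as systemd would do for an
# fstab entry with a non-zero pass field.
Requires=systemd-fsck@dev-xvdb.service
After=systemd-fsck@dev-xvdb.service

[Mount]
What=/dev/xvdb
Where=/rw
Type=ext4
```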
This was referenced Aug 26, 2015
cfcs commented Aug 27, 2015
Alternatively, you can run fsck on private.img from dom0 (if you trust fsck not to have security flaws, which is admittedly a risky assumption). Either way, it's a required step for shrinking/compacting AppVM filesystems to free up space.
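A rough dom0 sketch of this approach (the VM name is a placeholder and the path assumes the Qubes 3.x file-backed storage layout):

```sh
VM=work                                    # hypothetical VM name
IMG=/var/lib/qubes/appvms/$VM/private.img

qvm-shutdown --wait "$VM"                  # the image must not be in use

# Attach the image to a loop device so fsck sees a block device.
LOOP=$(sudo losetup --find --show "$IMG")
sudo fsck -f "$LOOP"
# A clean forced fsck is exactly what resize2fs requires before
# shrinking, e.g.: sudo resize2fs "$LOOP" 1G
sudo losetup -d "$LOOP"
```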
marmarek modified the milestones: Release 3.2, Release 3.1 on Feb 8, 2016
marmarek referenced this issue on Mar 29, 2016: Web page with list of wanted maintainers/developers/others #1700 (Closed)
marmarek modified the milestones: Release 3.2, Release 4.0 on Aug 5, 2016
Rudd-O commented Oct 23, 2016
The agent really should detect whether there is an error in the kernel ring buffer or the journald log, and submit that error for dom0 to display in qubes-manager or as a notification.
But this will not cover the case of a system failing to boot entirely. In that case, monitoring the console log in dom0 could provide a mechanism that lets the user know (via qubes-manager or a notification) that the VM is not booting properly much, much faster than just waiting for a timeout and then dying.
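A minimal in-VM sketch of the first idea, assuming a made-up qrexec service name (my.ReportBootErrors is not a real Qubes service and would need a dom0-side handler and policy; the grep pattern is only illustrative):

```sh
# Scan this boot's journal for filesystem errors and, if any are
# found, forward them to dom0 over a hypothetical qrexec service.
if journalctl -b -p err | grep -qiE 'fsck|EXT4-fs error'; then
    journalctl -b -p err | grep -iE 'fsck|EXT4-fs error' \
        | qrexec-client-vm dom0 my.ReportBootErrors
fi
```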
nrgaway commented Apr 29, 2015
/rw is not mounted in the standard way from /etc/fstab because of, for example, DispVMs, which do not have /rw mounted at all. This is also the reason why standard fsck isn't run on VM startup.
Possible solutions: add a feature to qubes-manager to:
- Check disk
- Repair disk
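For contrast with the non-standard mount described above, a "standard" /etc/fstab entry would get a boot-time fsck automatically via its last (pass) field; an illustrative entry, not actual Qubes configuration:

```
# <device>   <mountpoint> <type> <options> <dump> <pass>
/dev/xvdb    /rw          ext4   defaults  0      2
```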