qubesadmin/exc.py throws TypeError while initialising QubesException #3809
Comments
andrewdavidwong added the bug and C: core labels on Apr 10, 2018
andrewdavidwong added this to the Release 4.0 updates milestone on Apr 10, 2018
pdinoto commented Apr 11, 2018

Having a similar issue, described in this original post on the qubes-users mailing list. Will check.
marmarek referenced this issue on Apr 12, 2018:
R4.0: Qubes won't start anymore: "ERROR: Got empty response from qubesd." #3810 (Open)
pdinoto commented Apr 12, 2018
I can confirm my issue was also related to the metadata of the default pool00 thin volume being full (>96%). In my case the result was a broken system, as the volumes for all service VMs became unavailable, even after lvextend -L+256M qubes_dom0/pool00_tmeta and thin_repair qubes_dom0/pool00.
I suspect that thin pool metadata exhaustion during a qube restore/clone/move may corrupt the metadata. The volumes that became inaccessible are all those that were actively in use when the issue appeared: all volumes of the qubes being restored, the -back and -snap volumes of running qubes, and, for some reason, all volumes of the sys-* qubes.
(Edited to correct the typo indicated below.)
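As context for the >96% figure, here is a small, hypothetical helper for checking how full a thin pool's metadata is; it assumes the Qubes default pool qubes_dom0/pool00 and uses only the standard lvs reporting field metadata_percent:

    # Hypothetical sketch: report thin pool metadata usage so exhaustion can be
    # caught before restores/clones start failing. Assumes LVM2's `lvs` is
    # available and that the pool is the Qubes default, qubes_dom0/pool00.
    import subprocess

    def thin_pool_metadata_percent(pool="qubes_dom0/pool00"):
        out = subprocess.check_output(
            ["lvs", "--noheadings", "-o", "metadata_percent", pool],
            universal_newlines=True)
        return float(out.strip())

    if __name__ == "__main__":
        pct = thin_pool_metadata_percent()
        print("pool metadata {:.2f}% full".format(pct))
        if pct > 90:
            print("consider extending the pool's _tmeta volume")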
marmarek (Member) commented Apr 12, 2018

> even after lvextend -L+256M qubes_dom0/pool00_tdata and thin_repair qubes_dom0/pool00.

If that was about metadata, the right volume to extend is the "tmeta" one.
A commit added to marmarek/qubes-core-admin referenced this issue on Apr 13, 2018.
techgeeknz commented Apr 17, 2018

I expect that the consequences of exhausting the tmeta volume are almost as severe as exhausting the tdata volume, in that some (but not all) write requests made to the thin volumes stacked upon it will be denied, leading to data loss and filesystem corruption. From the guest OS's perspective, it will look as though a disk media or controller failure has occurred.
It should also be noted, however, that the thin provisioning issue is a separate bug, only superficially related to this one, which is about the software not returning a meaningful error message.
techgeeknz commented Apr 9, 2018
Qubes OS version:
Qubes release 4.0 (R4.0)
Affected component(s):
qvm-start, probably also other qubesadmin tools

Steps to reproduce the behavior:
qvm-start will crash:

Expected behavior:
qvm-start throws the correct exception, displaying a meaningful error message.

Actual behavior:
exc.py throws a TypeError because of unexpected literal percent (%) signs in the message text (see traceback above).
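For illustration, a minimal, hypothetical reproduction of that failure mode, assuming the message handed to the exception contains a literal percent sign (for example an LVM-style "... 96.00% full" string) and is then %-formatted with no arguments:

    # Hypothetical reproduction of the reported TypeError: a string containing
    # a literal '%' is treated as a printf-style format string and formatted
    # with an empty argument tuple, which is roughly what `message % args`
    # does when no arguments are supplied.
    message = "thin pool qubes_dom0/pool00 metadata is 96.00% full"
    try:
        message % ()
    except TypeError as err:
        print("TypeError:", err)  # not enough arguments for format string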
General notes:
I wrapped line 29 of exc.py within a try..except block. This resulted in the following message:
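A minimal, hypothetical sketch of that kind of guard; the actual code at line 29 of qubesadmin/exc.py, and the reporter's exact change, may differ:

    # Hypothetical sketch: guard the %-formatting in QubesException so a message
    # containing literal '%' characters is passed through verbatim instead of
    # raising TypeError. Not the actual qubesadmin implementation.
    class QubesException(Exception):
        def __init__(self, message_format, *args):
            try:
                message = message_format % args
            except TypeError:
                # The text is not a format string; use it as-is.
                message = message_format
            super(QubesException, self).__init__(message)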
Related issues:
When 100% of the volume group's space is being provisioned, the installer should take steps to ensure the thin pool's metadata volume is appropriately sized. If the intention is to allow the thin pool data and metadata volumes to auto-expand as required, this should also be made clear in the interactive partitioner.