Crash on restore related to VM tags #25
This does look like one of those "can't/won't do it" behaviors from the Qubes daemon due to extenuating circumstances / absent resources. Although it does seem odd it would react that way to a simple tag, I guess some tags are special. It would be interesting to see the result from qvm-backup-restore in this case, if it's any different. As it happens, I have a SplitGPG setup too, so I can do some testing with it.

Incidentally, Marek's (of Qubes) advice on this kind of error-from-settings situation is to catch all the errors and notify the user, then continue working. I mostly took his advice, but I somehow thought tags wouldn't pose any problem, so that specific code doesn't catch exceptions. So that will have to change. The […]

Of course, I'm open to ideas and POCs about how to make restores more convenient. One thing that could be done is to return to the practice of "no session means only one VM name is accepted", with the latest session for that VM then being chosen. But that older version didn't accept more than one VM name to restore in any case. An update of that could check that all VMs requested are from the same session, for instance.
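As an illustration of that catch-and-notify approach at the tag-setting step, here is a minimal sketch assuming the qubesadmin Python API; the apply_tags function and warn callback are hypothetical, not the actual wyng-util-qubes code:

```python
import qubesadmin
import qubesadmin.exc

def apply_tags(app, vm_name, tags, warn=print):
    """Re-apply saved tags to a restored VM, turning refusals from qubesd
    into warnings instead of an unhandled exception."""
    vm = app.domains[vm_name]
    for tag in tags:
        try:
            vm.tags.add(tag)
        except qubesadmin.exc.QubesException as e:
            # Marek's advice: report the problem and keep restoring.
            warn(f"Could not set tag '{tag}' on {vm_name}: {e}")

# Hypothetical usage after a qube has been restored:
# apply_tags(qubesadmin.Qubes(), "gpg-vault", ["my-tag"])
```

The only point here is that a tag qubesd refuses to accept becomes a user-visible warning, and the rest of the restore continues.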
Well, I did a large backup restore under very similar circumstances with almost the same qubes, also including that gpg-vault qube, using the Qubes GUI manager (which uses […]).

Regarding […]
Add verbose option for restore and verify
The crashing shouldn't occur anymore. Maybe change the title and devote this issue to special handling of vault VMs?
Great! I think that means this issue can be closed as completed; what special handling did you have in mind?
It's fine to leave it open, since you reported two things; just change the title and maybe a bit of the main description to be about vault handling. It's up to you.
I'm confused about what you mean regarding "vault handling"; as I see it, the issue was about the crash as a result of an uncaught exception that occurred when the […]

The other line of discussion was about how […]

Maybe you're referring to the fact that in the OP I was wondering if Split-GPG policies had to do with this? If so, then I now don't think so anymore; instead, the issue is related to the QubesOS bug I reported and how […]

So I'll close this as completed. Thank you.
Got this when trying to restore a bunch of qubes from backup (both `wyng` and `wyng-util-qubes` are on the `main` branches; for `wyng`: `prog_date = "20240515"`; the util file was updated the same day):

[…]

Error code seems to be "1". The qube in question was a kind of gpg-vault, which had "Split GPG" policies still in place in the Qubes Global Config from before I deleted it (not sure if relevant... I do see messages in `journalctl` about disabling the `qubes-vm@gpg-vault.service`, followed by `wireplumber[19406]: GetManagedObjects() failed: org.freedesktop.DBus.Error.NameHasNoOwner` errors and dom0 audit BPF messages; this is repeated twice). The fatal error in `journalctl` comes a few seconds later and shows:

[…]

As a side note, in the same backup restore session one other qube errored out with "Not matched", presumably because it had no change in my last backup, so its "Latest" session was different from the "Latest" session of the other qubes I was restoring, even though I didn't specify a session for the restore command, only qube names. IMHO the expected behavior is that it will pick the latest session available for each specified qube unless I specify a `--session`.
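Regarding the expected behavior in that last note, a rough sketch of a "latest session per qube unless --session is given" selection policy might look like the following; the helper name, session-ID format, and data shapes are hypothetical, not the actual wyng-util-qubes implementation:

```python
from typing import Dict, List, Optional

def pick_sessions(requested_vms: List[str],
                  sessions_by_vm: Dict[str, List[str]],
                  session: Optional[str] = None) -> Dict[str, str]:
    """Map each requested VM to the session it should be restored from.

    sessions_by_vm is assumed archive metadata, e.g.
    {"gpg-vault": ["20240510-120000", "20240515-090000"]}, with session IDs
    that sort chronologically as strings (hypothetical shape).
    """
    chosen = {}
    for vm in requested_vms:
        available = sessions_by_vm.get(vm, [])
        if not available:
            raise ValueError(f"{vm}: not found in archive")
        if session is not None:
            if session not in available:
                raise ValueError(f"{vm}: not matched in session {session}")
            chosen[vm] = session
        else:
            # No --session given: fall back to each VM's own latest session,
            # rather than requiring every VM to share the archive-wide latest.
            chosen[vm] = max(available)
    return chosen

# Example: "work" was unchanged in the last backup, so its latest session is
# older than gpg-vault's; both still restore without a "Not matched" error.
print(pick_sessions(["gpg-vault", "work"],
                    {"gpg-vault": ["20240510-120000", "20240515-090000"],
                     "work": ["20240510-120000"]}))
```

The same helper could instead enforce the "all requested VMs must come from the same session" variant from the earlier comment by comparing the chosen values and erroring out on a mismatch.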