kernel update breaks VM functionality #2757
I've actually noticed this in my kernel testing, but I never reported it because I figured it had something to do with me going off script and compiling my own kernels. Anyway, here's what I've noticed:

It only occurs when dnf removes an older version of kernel-qubes-vm because dnf's installonly_limit threshold is exceeded. If you raise that number or don't reach it, the problem doesn't occur, and VMs set to use the default kernel are updated to the new default whenever a newer version of kernel-qubes-vm is installed.

Specifically, a VM that's set to use the "default" kernel doesn't switch properly to the new default when an older kernel-qubes-vm package is removed, even if qvm-prefs shows that the VM is set to use the kernel currently marked as default in Qubes Manager. If the VM had been set to a specific kernel version beforehand (i.e. not "default" and not pvgrub2), it's fine and the setting survives the removal (although I never tested what happens if it's set to a specific kernel version and you then uninstall that version).

So I think it might have to do with the kernel-qubes-vm uninstall scripts not cleaning things up properly. However, fixing that may not fix VMs that are already misconfigured; for those, a user would need to toggle between kernels in Qubes Manager for the setting to stick again, which is how I've been working around it.
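For anyone who wants to try the higher-threshold mitigation mentioned above, the limit lives in dnf's configuration. A minimal sketch, assuming dom0's updater reads the standard /etc/dnf/dnf.conf:

```
# /etc/dnf/dnf.conf in dom0 (assumed path)
# Keep more kernel-qubes-vm versions installed before dnf starts removing old ones
[main]
installonly_limit=5
```

Note that this only avoids the trigger (the removal of an old kernel-qubes-vm package); it doesn't address the default-kernel handling itself.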
Here's what I did before seeing this ticket or the thread, which I couldn't, since my VMs wouldn't start. ;)
All VMs started normally after this.
On Tue, Apr 18, 2017 at 03:41:04PM -0700, Andrew David Wong wrote:
> Here's what I did before seeing this ticket or the thread, which I couldn't, since my VMs wouldn't start. ;)
>
> ```
> $ for VM in `qvm-ls --raw-list`; do qvm-prefs -s $VM kernel 4.4.55-11; done
> ```
>
> All VMs started normally after this.
Better to set it to "default" to avoid this issue in the future.
--
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
Didn't even know that was an option, thanks!
Updated the workaround to reflect this.
In my case "default" did not work and I got the same error, but using Qubes Manager to select kernel 4.4.55-11 rather than default (even though the version number is the same) for each VM did work.
This also seems to affect qvm-trim-template.
Andrew's solution doesn't work for me with 'default' because Qubes still thinks the default VM kernel is 4.10.10-15 (which I removed).
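One way to check what Qubes currently considers the global default VM kernel, and to repoint it at a kernel that is actually installed, is qubes-prefs in dom0. A sketch, assuming R3.2's qubes-prefs accepts the same -s/--set syntax as qvm-prefs, and using 4.4.55-11 only as an example version:

```
# In dom0: list global settings, including the default VM kernel
qubes-prefs

# Repoint the default at an installed kernel version (example value)
qubes-prefs -s default-kernel 4.4.55-11
```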
I got so bored, I:
A new kernel (4.4.62) landed in stable and an even newer one (4.4.67) in testing. Let's see if this happens again.
Didn't see this until after updating, but fortunately I haven't noticed the problem.
Upon update, some VMs stayed on the older kernel.
The bug in setting the VM kernel from Qubes Manager is fixed here: QubesOS/qubes-manager#33, QubesOS/updates-status#65 (qubes-manager-3.2.12).
Qubes OS version (e.g., `R3.2`): 2017-4-18 dom0 update on an otherwise up-to-date R3.2.
Affected TemplateVMs (e.g., `fedora-23`, if applicable): all templates and VMs that use Linux kernels.

Expected behavior:
Smooth update to `4.4.55-11` for all templates and VMs.

Actual behavior:
The old kernel that gets removed (4.4.11) isn't removed cleanly, or the process that should move templates and VMs to the newer kernel breaks. Because of this, attempting to start any affected VM throws an error that the kernel doesn't exist.

Steps to reproduce the behavior:
Upgrade using `sudo qubes-dom0-update`.

General notes:
Reported in qubes-users:
https://groups.google.com/d/msgid/qubes-users/20170418174103.GT1486%40mail-itl
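For the "kernel doesn't exist" error above, it can help to compare what a VM is configured to use against what is actually present in dom0. A quick check, assuming a standard R3.2 dom0 layout where VM kernels are installed under /var/lib/qubes/vm-kernels (the VM name below is a placeholder):

```
# In dom0: which kernel-qubes-vm packages are installed
rpm -q kernel-qubes-vm

# Which kernel versions VMs can actually boot from
ls /var/lib/qubes/vm-kernels/

# What a given VM is configured to use (lists its properties, including "kernel")
qvm-prefs work
```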
Workaround
In dom0, run:
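The workaround command itself didn't survive in this copy of the issue; based on the loop quoted in the comments above (and the later suggestion to use "default" rather than a specific version), it is presumably along these lines:

```
# Set every VM's kernel preference back to "default"
# (substitute a specific installed version, e.g. 4.4.55-11, if "default" still fails as reported above)
for VM in `qvm-ls --raw-list`; do qvm-prefs -s $VM kernel default; done
```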
You should now be able to boot the affected VM.