runInLinuxVM, test-driver: disable vmx cpu feature for guest vm #83920
Conversation
Passing vmx (virtualization extensions) to the guest VM can cause issues if the build is running inside a nested VM.
Our builds also run with native CPU features so that runInLinuxVM
is not that much of a regression. Otherwise the change looks good to me.
@@ -74,7 +74,7 @@ let
       throw "Non-EFI boot methods are only supported on i686 / x86_64"
     else ''
       def assemble_qemu_flags():
-        flags = "-cpu host"
+        flags = "-cpu host,-vmx"
not sure if this is a problem, but this passes -vmx
also on non-x86 platforms. Does that even work then?
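One way to address the non-x86 concern is to append the feature tweak only on platforms where the flag exists. A minimal Python sketch in the spirit of the test driver's flag assembly (the platform check and function shape here are illustrative, not the actual driver code):

```python
import platform

def assemble_qemu_flags():
    # Keep "-cpu host" everywhere, but append ",-vmx" only on x86,
    # where the vmx feature flag actually exists.
    flags = "-cpu host"
    if platform.machine() in ("x86_64", "i686"):
        flags += ",-vmx"
    return flags
```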
Have to check that one, thanks!
@volth I am thinking that enabling CPU features individually might be better, or that maybe each test could redefine the CPU it requires. That way we might achieve greater compatibility and reproducibility.
I came across this issue while trying to build a Singularity container under nixos-unstable. I had the issue described in issue #85394.
But as I see you had issues because your build requires nested virtualization? I think the CPU should default to the qemu default, as that is the most compatible, with any additional required CPU features passed explicitly. It looks like what we need is the avx and vmx extensions. I was thinking of making an improved pull request with support for these optional CPU features. @volth What do you think would be a sane default, and which additional CPU features are needed?
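That idea can be sketched as a small helper that builds the -cpu argument from an explicit whitelist of features instead of inheriting them all from the host. The helper name and signature are hypothetical, not existing driver code:

```python
def qemu_cpu_arg(base="qemu64", enable=(), disable=()):
    # Hypothetical helper: start from a fixed baseline CPU model and
    # add or remove features explicitly, so the resulting guest CPU is
    # the same on every builder regardless of the host hardware.
    parts = [base]
    parts += ["+" + f for f in enable]
    parts += ["-" + f for f in disable]
    return "-cpu " + ",".join(parts)
```

For example, qemu_cpu_arg("qemu64", enable=["avx", "vmx"]) produces "-cpu qemu64,+avx,+vmx", while qemu_cpu_arg("host", disable=["vmx"]) reproduces this PR's "-cpu host,-vmx".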
Yes, my build required nested virtualization.
This pull request has been mentioned on NixOS Discourse. There might be relevant details there: https://discourse.nixos.org/t/new-constraint-between-release-19-09-and-20-03-in-package-qemu/6297/4
In my opinion the presence of vmx should be autodetected. The check can be as simple as grep 'flags.*vmx' /proc/cpuinfo. It is possible to enable nested virtualization with kvm, and vmx may be needed for some use cases.
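The suggested check translates directly into the test driver's language. A sketch of that grep, assuming the driver would call a helper like this (the function name is illustrative):

```python
def host_has_vmx(cpuinfo_path="/proc/cpuinfo"):
    # Equivalent of `grep 'flags.*vmx' /proc/cpuinfo`: report whether
    # any "flags" line lists the vmx feature token.
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags") and "vmx" in line.split():
                    return True
    except OSError:
        # Non-Linux host or unreadable cpuinfo: assume no vmx.
        pass
    return False
```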
It is already auto-detected by qemu, but the thing is it can crash on certain qemu/kvm (kernel) versions in some circumstances, like the one I hit. I think extended CPU features should be explicitly enabled, since the host CPU can vary, which can result in unexpected behavior.
appears broken without NixOS/nixpkgs#83920
This appears to avoid requiring KVM when it’s not available, which is what I originally thought -cpu host did. Unfortunately there is not much documentation available from the QEMU side on this, but it appears to square with the help output:

    $ qemu-system-x86 -cpu help
    ...
    x86 host  KVM processor with all supported host features
    x86 max   Enables all features supported by the accelerator in the current host
    ...

Whether we actually want to support this is not clear, since this only happens when your CPU doesn’t have full KVM support. Some Nix builders are lying about kvm support, though, and things aren’t too slow without it. Fixes NixOS#85394. Alternative to NixOS#83920.
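The fallback described above can be sketched in a few lines. This is an assumption about how such a selection could look, not the actual commit; the "host" and "max" model names are from `qemu-system-x86 -cpu help`:

```python
import os

def choose_cpu_model():
    # Use the "host" CPU model only when /dev/kvm is actually usable;
    # otherwise fall back to "max", which enables every feature the
    # current accelerator (e.g. TCG) supports, so the guest still boots
    # on builders without real KVM.
    if os.access("/dev/kvm", os.R_OK | os.W_OK):
        return "host"
    return "max"
```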
(cherry picked from commit 47b56e7)
I marked this as stale due to inactivity.
Since #95956 this no longer makes sense. Also, nixpkgs/nixos/tests/virtualbox.nix, lines 362 to 363 in 2ba3a65, is meaningless.
I marked this as stale due to inactivity.
Has this been fixed by #95956?
Passing vmx (virtualization extensions) to the guest VM can cause issues if the build is running inside a nested VM.
Motivation for this change
When I was trying to run a build inside a VM with nested virtualization enabled, the build failed with an error while passing vmx extensions. This patch disables virtualization extensions in qemu-based tests. As far as I am aware, no NixOS tests require vmx extensions, since before this change we did not have the CPU set to host. The other concern I have is whether changing the CPU to host is a good idea, as it can make builds non-deterministic and result in crashes like the one I had.
Things done
- Tested using sandboxing (sandbox in nix.conf on non-NixOS linux)
- nix-shell -p nixpkgs-review --run "nixpkgs-review wip"
- Tested execution of all binary files (usually in ./result/bin/)
- Determined the impact on package closure size (by running nix path-info -S before and after)