
HyperV: add support for new flags added in libvirt >= 4.7.0 #1949

Closed
wants to merge 6 commits

Conversation

@ffromani (Contributor) commented Jan 17, 2019

WIP

What this PR does / why we need it:
This PR adds support for the latest HyperV enlightenments
added in libvirt 4.7.0.

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #1919

Special notes for your reviewer:
N/A

Release note:

add support for HyperV enlightenments added in libvirt >= 4.7.0
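
For readers skimming the thread, the change under discussion boils down to new optional boolean entries in the VMI API for the libvirt 4.7.0 enlightenments. Below is a minimal sketch of that shape, assuming the usual FeatureState pattern; the field names follow the libvirt flag names and are illustrative, not the exact PR diff:

```go
package v1

// FeatureState mirrors the on/off toggle KubeVirt uses for optional
// guest features; a nil pointer means "not requested".
type FeatureState struct {
	Enabled *bool `json:"enabled,omitempty"`
}

// FeatureHyperv lists Hyper-V enlightenments exposed to the guest.
// Only the three flags this PR discusses are shown; the real API struct
// also carries the older enlightenments.
type FeatureHyperv struct {
	// Frequencies exposes Hyper-V frequency MSRs (libvirt >= 4.7.0).
	Frequencies *FeatureState `json:"frequencies,omitempty"`
	// Reenlightenment enables re-enlightenment notifications on TSC
	// frequency changes, relevant for nested (L2) scenarios.
	Reenlightenment *FeatureState `json:"reenlightenment,omitempty"`
	// TLBFlush enables paravirtualized TLB flush hypercalls.
	TLBFlush *FeatureState `json:"tlbflush,omitempty"`
}
```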

@kubevirt-bot added the labels release-note (denotes a PR that will be considered when it comes time to generate release notes), kind/api-change (categorizes an issue or PR as related to adding, removing, or otherwise changing an API), and size/L on Jan 17, 2019.
@davidvossel (Member)

Travis fails. It looks like some 'make generate' changes are missing.

@MarSik @michalskrivanek This PR seems relevant to the discussion in #1919

@michalskrivanek

Yeah, in general not all of them will work on all versions; qemu-kvm-ev (and its future equivalent) will likely not support some.
OTOH this is just about defining them; IMHO it makes sense to list them here, and the actual feasibility of each flag should be handled in #1919 instead.

@ffromani (Contributor, Author) commented Jan 21, 2019

> Travis fails. It looks like some 'make generate' changes are missing.
>
> @MarSik @michalskrivanek This PR seems relevant to the discussion in #1919

I think I had already added that; I've now pushed one last bit which seems irrelevant, but still.

A quick glance at the failures suggests they are all unrelated to this patchset, which just adds three new hyperv booleans:

[Fail] ContainerDisk Starting with virtio-win with virtio-win as secondary disk [It] should boot and have the virtio as sata CDROM
/root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:199

package k8s.io/code-generator/cmd/deepcopy-gen: unrecognized import path "k8s.io/code-generator/cmd/deepcopy-gen" (https fetch: Get https://k8s.io/code-generator/cmd/deepcopy-gen?go-get=1: dial tcp 23.236.58.218:443: connect: connection refused)

HEAD is now at 5f66499b all: bump to 2.5.0
package k8s.io/code-generator/cmd/deepcopy-gen: unrecognized import path "k8s.io/code-generator/cmd/deepcopy-gen" (https fetch: Get https://k8s.io/code-generator/cmd/deepcopy-gen?go-get=1: dial tcp 23.236.58.218:443: connect: connection refused)

Summarizing 3 Failures:

[Fail] DataVolume Integration Starting a VirtualMachineInstance with a DataVolume as a volume source using Alpine import [It] should be successfully started and stopped multiple times
/root/go/src/kubevirt.io/kubevirt/tests/datavolume_test.go:56

[Fail] Templates Creating VM from Template with RHEL Template when the VirtualMachine was created [It] should succeed to start the VirtualMachine via oc command
/root/go/src/kubevirt.io/kubevirt/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113

[Fail] ContainerDisk Starting with virtio-win with virtio-win as secondary disk [It] should boot and have the virtio as sata CDROM
/root/go/src/kubevirt.io/kubevirt/tests/registry_disk_test.go:199

This patch exposes in the API the latest HyperV enlightenments
added in libvirt 4.7.0.

Signed-off-by: Francesco Romani <fromani@redhat.com>
This patch adds support for the HyperV enlightenments added
in libvirt 4.7.

Signed-off-by: Francesco Romani <fromani@redhat.com>
This apparently cosmetic-only change is the only change that
running 'make generate' makes to my source tree.
Uploading for the sake of completeness.

Signed-off-by: Francesco Romani <fromani@redhat.com>
A couple of the newer HyperV flags (frequencies, reenlightenment) we
recently added are related to L2 support and require host kernel
updates.
For this reason we want a feature gate to guard those settings.

Signed-off-by: Francesco Romani <fromani@redhat.com>
Signed-off-by: Francesco Romani <fromani@redhat.com>
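
The feature-gate commit above motivates guarding the L2-related flags. The following is a hypothetical sketch of such a guard; the gate name "HypervNestedFlags" and the helper names are invented for illustration and are not the PR's actual code:

```go
package featuregate

// nestedOnlyHypervFlags lists the enlightenments the commit message above
// ties to host-kernel support for nested (L2) guests.
var nestedOnlyHypervFlags = map[string]bool{
	"frequencies":     true,
	"reenlightenment": true,
}

// gateEnabled is a stand-in for a real feature-gate lookup.
type gateEnabled func(name string) bool

// filterHypervFlags drops the nested-only flags unless the (hypothetical)
// "HypervNestedFlags" gate is enabled, so a VMI never asks libvirt for
// features the host cannot honour just because the API field was set.
func filterHypervFlags(requested []string, enabled gateEnabled) []string {
	var out []string
	for _, f := range requested {
		if nestedOnlyHypervFlags[f] && !enabled("HypervNestedFlags") {
			continue
		}
		out = append(out, f)
	}
	return out
}
```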
@ffromani (Contributor, Author)

ping?

@ffromani (Contributor, Author)

To learn about the meaning of the hyperv flags: https://schd.ws/hosted_files/devconfcz2019/cf/vkuznets_enlightening_kvm_devconf2019.pdf

@cynepco3hahue

@fromanirh Do we really need to expose these options via the API, or can we set some defaults that will work well? I just don't remember that we had such customization options under oVirt.

The PR itself LGTM.

@ffromani (Contributor, Author)

> @fromanirh Do we really need to expose these options via the API, or can we set some defaults that will work well?

We'd like to do that in https://github.com/kubevirt/common-templates/, and we'd like to avoid hardcoding defaults in the code.
Thus, we need to expose the tunables in the API.

> I just don't remember that we had such customization options under oVirt.

In oVirt, VDSM used to expose just one flag and apply the "best" settings itself; in that case the best settings were hardcoded in the VDSM source code.
Now Engine just sends the best possible XML directly.

Here I'm following the path KubeVirt chose when it originally added the hyperv options.

@cynepco3hahue

@fromanirh I see, thanks for the clarification.

@cynepco3hahue left a review comment

lgtm

@fabiand (Member) commented Feb 27, 2019

How are these flags used?

I wonder if we need to expose all of them individually in the API, or if we could have something like a HyperVFlags group, which would then add all relevant flags.

@ffromani (Contributor, Author)

> How are these flags used?
>
> I wonder if we need to expose all of them individually in the API

There are two groups of hyperv flags:

  1. "standard" (aka non-nested, L1 guest) flags - we pretty much want to enable them all by default for Windows guests; according to the devs there are no drawbacks, we can only gain from them
  2. "nested" (aka for L2 guests) flags - we should enable them only conditionally, if the admin wants to allow nested virtualization, because enabling these flags can lead to some performance penalty

> ..., or if we could have something like a HyperVFlags group, which would then add all relevant flags.

Not sure I completely understood.
We were discussing adding something like an "enable all" flag in the API, but that discussion is in its early stages.
QEMU wants to add such a flag.

This presentation is an awesome source of information: https://schd.ws/hosted_files/devconfcz2019/cf/vkuznets_enlightening_kvm_devconf2019.pdf
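
To make "how are these flags used" concrete: on the libvirt side each enabled enlightenment ends up as an element under <features><hyperv> in the domain XML. The following self-contained Go sketch (plain encoding/xml, not KubeVirt's actual converter code) shows that mapping for the three flags discussed here:

```go
package main

import (
	"encoding/xml"
	"fmt"
)

// featureState mirrors libvirt's state='on'/'off' attribute.
type featureState struct {
	State string `xml:"state,attr"`
}

// hypervFeatures covers just the libvirt 4.7.0 additions discussed here.
type hypervFeatures struct {
	XMLName         xml.Name      `xml:"hyperv"`
	Frequencies     *featureState `xml:"frequencies,omitempty"`
	Reenlightenment *featureState `xml:"reenlightenment,omitempty"`
	TLBFlush        *featureState `xml:"tlbflush,omitempty"`
}

func main() {
	hv := hypervFeatures{
		Frequencies:     &featureState{State: "on"},
		Reenlightenment: &featureState{State: "on"},
		TLBFlush:        &featureState{State: "on"},
	}
	out, _ := xml.MarshalIndent(hv, "", "  ")
	fmt.Println(string(out))
	// Prints something like:
	// <hyperv>
	//   <frequencies state="on"></frequencies>
	//   <reenlightenment state="on"></reenlightenment>
	//   <tlbflush state="on"></tlbflush>
	// </hyperv>
}
```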

@MarSik (Contributor) commented Feb 27, 2019

@fabiand There is also the issue that the newest flags (synic, for example) require support from libvirt, QEMU, and the host kernel; the VM will not boot otherwise. We control most of the chain, but not all of it. The QEMU "enable all" flag @fromanirh mentioned will enable all flags that are compatible with the node. That is something we can't easily do without reimplementing the detection logic.

@ffromani (Contributor, Author)

> @fabiand There is also the issue that the newest flags (synic, for example) require support from libvirt, QEMU, and the host kernel; the VM will not boot otherwise. We control most of the chain, but not all of it. The QEMU "enable all" flag @fromanirh mentioned will enable all flags that are compatible with the node. That is something we can't easily do without reimplementing the detection logic.

For SyNIC we will need another issue/PR, because SyNIC support is already merged in the KubeVirt core.

@ffromani (Contributor, Author) commented Mar 1, 2019

> @fabiand There is also the issue that the newest flags (synic, for example) require support from libvirt, QEMU, and the host kernel; the VM will not boot otherwise. We control most of the chain, but not all of it. The QEMU "enable all" flag @fromanirh mentioned will enable all flags that are compatible with the node. That is something we can't easily do without reimplementing the detection logic.

> For SyNIC we will need another issue/PR, because SyNIC support is already merged in the KubeVirt core.

We are discussing SyNIC (and, in general, optimizations that depend on host kernel features) on kubevirt-dev.

@ffromani (Contributor, Author) commented Mar 4, 2019

Now that the SyNIC/feature gate is handled elsewhere, is there anything else I can do here? Any more comments?

Signed-off-by: Francesco Romani <fromani@redhat.com>
@ffromani (Contributor, Author) commented Mar 5, 2019

> Now that the SyNIC/feature gate is handled elsewhere, is there anything else I can do here? Any more comments?

CC @davidvossel @fabiand @cynepco3hahue

@vladikr (Member) commented Mar 5, 2019

I'll just write my thoughts here.

From my point of view, it's wrong to allow setting flags that will lead a VM to crash. I think this should be avoided; it is completely unexpected for users.

If we are allowing API users to set these flags, I think we must distinguish the scenario that affects scheduling: I want my VM to run on a host that supports these flags.
Otherwise, virt-launcher should have logic to check whether the kernel it is running on supports these flags; if it does not, they should be ignored rather than lead to a crashing VM.
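
A rough sketch of the virt-launcher-side check being suggested here; the hostSupported set is a hypothetical stand-in for real detection (which, as noted above, is exactly the hard part to reimplement):

```go
package main

import (
	"fmt"
	"log"
)

// hostSupported would in reality be derived from probing libvirt, QEMU and
// the host kernel; it is hard-coded here purely for illustration.
var hostSupported = map[string]bool{
	"relaxed":     true,
	"vapic":       true,
	"frequencies": true,
	// "reenlightenment" deliberately missing to show the fallback path.
}

// pruneUnsupported drops requested enlightenments the host cannot provide
// instead of letting the VM fail to boot, logging what was ignored.
func pruneUnsupported(requested []string) []string {
	var kept []string
	for _, f := range requested {
		if !hostSupported[f] {
			log.Printf("ignoring unsupported hyperv flag %q", f)
			continue
		}
		kept = append(kept, f)
	}
	return kept
}

func main() {
	fmt.Println(pruneUnsupported([]string{"relaxed", "reenlightenment", "frequencies"}))
	// Prints: [relaxed frequencies]
}
```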

@ffromani (Contributor, Author) commented Apr 3, 2019

Partially obsoleted by #2162 and by the upgrade to libvirt 5.1 in
https://github.com/kubevirt/libvirt/blob/master/Dockerfile.
I will send a new, clean PR soon.

@ffromani ffromani closed this Apr 3, 2019
@ffromani ffromani deleted the hyperv-phase-2 branch October 9, 2019 06:51
Labels: kind/api-change, release-note, size/L
Development

Successfully merging this pull request may close these issues.

Make sure all the necessary HyperV enlightenments can be configured
8 participants