Basic guest vCPUs to pCPUs pinning #1381
Conversation
Force-pushed 77729c6 to bafa3cf (Compare)
Will validations be added to check that the VM pod's resources and limits conform to the Guaranteed QoS class?
pkg/registry-disk/registry-disk.go
Outdated
resources := kubev1.ResourceRequirements{}
resources.Limits = make(kubev1.ResourceList)
resources.Limits[kubev1.ResourceCPU] = resource.MustParse("50m")
How was this value decided?
Hmm, I just needed to start somewhere. This container only mounts a path. I'll make it a global default variable.
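A minimal sketch of what that global default could look like (the variable name and package are hypothetical, not from the PR):

```go
package registrydisk

import (
	"k8s.io/apimachinery/pkg/api/resource"
)

// defaultContainerCPULimit is a hypothetical package-level default for the
// small CPU limit given to the registry-disk container, replacing the
// inline resource.MustParse("50m") shown in the diff above.
var defaultContainerCPULimit = resource.MustParse("50m")
```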
I like that we can introduce CPU pinning, but in my opinion it is very limited; we still do not give the user any way to control the exact CPU pinning.
pkg/virt-handler/vm.go
Outdated
func (d *VirtualMachineController) addNodeCpuManagerLabel() {
	entries, err := filepath.Glob("/proc/*/cmdline")
	if err != nil {
		log.DefaultLogger().Reason(err).Errorf("failed to set a cpu manager label on host %s", d.host)
Can you make the messages more specific? It will be easier to track during debugging.
@@ -22,8 +22,11 @@ package virthandler
import (
We need some renaming in this file from VM to VMI, but it should not be part of this PR 😄
pkg/virt-handler/vm.go
Outdated
@@ -776,6 +783,33 @@ func (d *VirtualMachineController) heartBeat(interval time.Duration, stopCh chan
	}
}

func (d *VirtualMachineController) addNodeCpuManagerLabel() {
	entries, err := filepath.Glob("/proc/*/cmdline")
How much time can this take on a loaded machine (20k processes)?
I don't see 20k processes running anywhere. On my machine, with ~1k processes, it took the following, which also includes Go compilation and startup:
time go run trym22.go
real 0m0.143s
user 0m0.115s
sys 0m0.058s
I don't believe it should be a problem; however, this is just a workaround at the moment, until k8s addresses the issue: kubernetes/kubernetes#66525
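For context, a self-contained sketch of the kind of /proc scan being discussed; the exact kubelet flag matched on (--cpu-manager-policy=static) is an assumption here, not quoted from the PR:

```go
package main

import (
	"fmt"
	"io/ioutil"
	"path/filepath"
	"strings"
)

// isCPUManagerEnabled scans /proc/*/cmdline for a kubelet started with
// the static CPU manager policy. cmdline arguments are NUL-separated,
// so NUL bytes are replaced with spaces before matching.
func isCPUManagerEnabled() (bool, error) {
	entries, err := filepath.Glob("/proc/*/cmdline")
	if err != nil {
		return false, err
	}
	for _, entry := range entries {
		content, err := ioutil.ReadFile(entry)
		if err != nil {
			continue // the process may have exited between glob and read
		}
		cmdline := strings.Replace(string(content), "\x00", " ", -1)
		if strings.Contains(cmdline, "kubelet") &&
			strings.Contains(cmdline, "--cpu-manager-policy=static") {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	enabled, err := isCPUManagerEnabled()
	fmt.Println(enabled, err)
}
```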
| "kubevirt.io/kubevirt/pkg/virt-launcher/virtwrap/api" | ||
| ) | ||
|
|
||
| const CPUSET_PATH = "/sys/fs/cgroup/cpuset/cpuset.cpus" |
I think it must be lower case if you do not want to use it outside of the package.
Thanks for the review! Yeah, it's because we have to rely on the k8s CPU manager, which is limited...
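A sketch of how the pinned pCPUs could be read back from that cpuset path; the parsing below is a minimal illustration under that assumption, not the PR's actual code:

```go
package main

import (
	"fmt"
	"io/ioutil"
	"strconv"
	"strings"
)

const cpusetPath = "/sys/fs/cgroup/cpuset/cpuset.cpus"

// getPinnedCPUs parses a cpuset string such as "0-3,7" into the list of
// host CPUs the CPU manager assigned to this container.
func getPinnedCPUs() ([]int, error) {
	content, err := ioutil.ReadFile(cpusetPath)
	if err != nil {
		return nil, err
	}
	var cpus []int
	for _, chunk := range strings.Split(strings.TrimSpace(string(content)), ",") {
		if bounds := strings.SplitN(chunk, "-", 2); len(bounds) == 2 {
			// A range like "0-3" expands to each CPU in the range.
			start, err := strconv.Atoi(bounds[0])
			if err != nil {
				return nil, err
			}
			end, err := strconv.Atoi(bounds[1])
			if err != nil {
				return nil, err
			}
			for i := start; i <= end; i++ {
				cpus = append(cpus, i)
			}
		} else {
			// A single CPU like "7".
			cpu, err := strconv.Atoi(chunk)
			if err != nil {
				return nil, err
			}
			cpus = append(cpus, cpu)
		}
	}
	return cpus, nil
}

func main() {
	fmt.Println(getPinnedCPUs())
}
```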
	resources.Limits[k8sv1.ResourceCPU] = *resource.NewQuantity(int64(cpus.Cores), resource.BinarySI)
	resources.Requests[k8sv1.ResourceCPU] = *resource.NewQuantity(int64(cpus.Cores), resource.BinarySI)
} else {
	if cpuLimit, ok := resources.Limits[k8sv1.ResourceCPU]; ok {
I just wonder if we need to check that the resources can be converted to an integer, because a user can specify cpus.DedicatedCPUPlacement with resources.Limits equal to something like 200m.
In that case I think the converter will fail, so it would be better to add this check to the validation webhook.
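A minimal sketch of the kind of integer check such a webhook could run (the package and helper name are hypothetical):

```go
package webhooks

import (
	"k8s.io/apimachinery/pkg/api/resource"
)

// isIntegerCPU reports whether a CPU quantity such as "2" is a whole
// number of cores; fractional values like "200m" or "1.5" return false.
func isIntegerCPU(q resource.Quantity) bool {
	return q.MilliValue()%1000 == 0
}
```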
Absolutely. I've added several validations but haven't posted them yet, as I'm still struggling with the functional tests.
Also, I wanted to explicitly check, somewhere in SyncVM, that the pod's QoS is Guaranteed, but haven't figured out yet how to get to the pods informer at that stage.
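A sketch of what that Guaranteed-QoS check could look like, assuming the qos helper vendored in by this PR (see the commit list below); the function name here is hypothetical:

```go
package virthandler

import (
	"fmt"

	k8sv1 "k8s.io/api/core/v1"
	"k8s.io/kubernetes/pkg/apis/core/v1/helper/qos" // vendored copy, per the commits below
)

// verifyGuaranteedQOS is a hypothetical helper: it would fail sync early
// when the virt-launcher pod did not land in the Guaranteed QoS class,
// which the CPU manager requires before it will pin any CPUs.
func verifyGuaranteedQOS(pod *k8sv1.Pod) error {
	if qos.GetPodQOS(pod) != k8sv1.PodQOSGuaranteed {
		return fmt.Errorf("pod %s/%s is not in the Guaranteed QoS class", pod.Namespace, pod.Name)
	}
	return nil
}
```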
Added validations, still working on the functional tests.
Force-pushed 628314e to 81f459a (Compare)
})
It("should reject specs with non-integer cpu limits values", func() {
	vmi.Spec.Domain.Resources.Limits = k8sv1.ResourceList{
		k8sv1.ResourceCPU: resource.MustParse("800m"),
Maybe add another negative test with a fractional value (cpu = 1.5) that is not in milli format.
@yanirq you don't trust resource.MustParse() to handle it? ;)
@vladikr if it does, so do I :)
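For illustration, the suggested extra negative test could mirror the 800m case above (a sketch, assuming the same surrounding Describe block, vmi fixture, and rejection assertions):

```go
It("should reject specs with fractional cpu limits values", func() {
	// "1.5" parses to 1500m, so the same non-integer validation
	// should reject it even though it is not written in milli format.
	vmi.Spec.Domain.Resources.Limits = k8sv1.ResourceList{
		k8sv1.ResourceCPU: resource.MustParse("1.5"),
	}
	// ...same rejection assertions as in the "800m" test above
})
```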
I've been testing the CPU manager as-is with KubeVirt and it's working pretty well. I'm trying to understand what this change will improve. It seems like we can specify particular CPUs? Currently, with the CPU manager, the VM is pinned to whatever CPU the process happens to start on, but the CPUs can't be specified.
Adding the qos helper package from k8s.io/kubernetes/pkg/apis/core/v1/helper/qos to avoid introducing a dependency on k8s.io/kubernetes. Signed-off-by: Vladik Romanovsky <vromanso@redhat.com>
…uested Signed-off-by: Vladik Romanovsky <vromanso@redhat.com>
Add a SYS_NICE capability to virt-launcher container when cpu pinning is requested Signed-off-by: Vladik Romanovsky <vromanso@redhat.com>
…pinning Signed-off-by: Vladik Romanovsky <vromanso@redhat.com>
…h its status Signed-off-by: Vladik Romanovsky <vromanso@redhat.com>
Force-pushed fe2d945 to eb0ca8f (Compare)
ci test please
What this PR does / why we need it:
Relying on the k8s CPUManager feature, this PR implements basic pinning of guest vCPUs to a pod's dedicated pCPUs.
In general, the k8s CPU manager will pin a pod's CPUs under certain conditions: the pod must belong to the Guaranteed QoS class and request an integer number of CPUs.
Once the pod's CPUs are dedicated and pinned, virt-launcher can pin the VM's vCPUs to the pod's pCPUs.
This PR also provides a mechanism so that once the dedicatedCpuPlacement policy has been requested, it will "enforce" the above conditions on the pod spec, so the k8s CPU manager will pin the pod's CPUs and inconsistent requirements will be rejected.
The user may express the vCPU requirements either using cpu.cores together with cpu.dedicatedCpuPlacement = true, or via resources.[requests, limits].cpu together with cpu.dedicatedCpuPlacement = true (see the sketch below).
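A sketch of requesting dedicated placement through the API added here, using the cpu.cores form; the exact Go types and the NewMinimalVMI constructor are assumed from the discussion above:

```go
package main

import (
	"fmt"

	v1 "kubevirt.io/kubevirt/pkg/api/v1"
)

func main() {
	// Request two dedicated cores; the admission/mutation logic discussed
	// above should translate this into integer CPU requests and limits so
	// the pod lands in the Guaranteed QoS class and the CPU manager pins it.
	vmi := v1.NewMinimalVMI("testvmi")
	vmi.Spec.Domain.CPU = &v1.CPU{
		Cores:                 2,
		DedicatedCPUPlacement: true,
	}
	fmt.Printf("dedicated placement requested: %v\n", vmi.Spec.Domain.CPU.DedicatedCPUPlacement)
}
```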
Special notes for your reviewer:
What's currently missing:
Functional tests are dependent on kubevirt/kubevirtci#37
Release note: