Pre-pull custom kernel / initrd as an init container #6589
Conversation
/cc @rmohr |
I'm OK with the approach, it makes sense to me.
I'm not OK with the implementation; I think having 2 distinct functions would make this clearer.
pkg/container-disk/container-disk.go
@@ -212,9 +212,9 @@ func GenerateContainers(vmi *v1.VirtualMachineInstance, podVolumeName string, bi
	return generateContainersHelper(vmi, podVolumeName, binVolumeName, false)
}

func GenerateKernelBootContainer(vmi *v1.VirtualMachineInstance, podVolumeName string, binVolumeName string) *kubev1.Container {
func GenerateKernelBootContainers(vmi *v1.VirtualMachineInstance, podVolumeName string, binVolumeName string) (container, initContainer *kubev1.Container) {
I think having 2 different functions would make this a lot clearer.
For instance, why is the init container the second container to be returned? I (not sure if I'm a weird person...) would have assumed it to be the first, because I need it first (yes, weak argument).
Actually, in code, regular containers are used first :)
But you have a point, I'll change that.
Thanks!
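The suggested split could look roughly like the sketch below. It is illustrative only: `Container` stands in for `kubev1.Container`, and `generateKernelBootContainerFromVolume` is a simplified stand-in for the real `generateContainerFromVolume` helper, which takes more parameters.

```go
package main

import "fmt"

// Container is a stand-in for kubev1.Container; the real code uses the
// Kubernetes API type. All names here are illustrative.
type Container struct {
	Name string
	Init bool
}

// generateKernelBootContainerFromVolume mimics the shared helper; in the
// real code this is generateContainerFromVolume with more parameters.
func generateKernelBootContainerFromVolume(isInit bool) *Container {
	name := "kernel-boot"
	if isInit {
		name += "-init"
	}
	return &Container{Name: name, Init: isInit}
}

// GenerateKernelBootContainer returns the regular kernel-boot container.
func GenerateKernelBootContainer() *Container {
	return generateKernelBootContainerFromVolume(false)
}

// GenerateKernelBootInitContainer returns the pre-pull init container.
func GenerateKernelBootInitContainer() *Container {
	return generateKernelBootContainerFromVolume(true)
}

func main() {
	fmt.Println(GenerateKernelBootContainer().Name)     // kernel-boot
	fmt.Println(GenerateKernelBootInitContainer().Name) // kernel-boot-init
}
```

With two named entry points, callers never need to remember which return value is which, and the boolean only exists inside the shared helper.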
pkg/container-disk/container-disk.go
createContainer := func(isInit bool) *kubev1.Container {
	return generateContainerFromVolume(vmi, podVolumeName, binVolumeName, isInit, true, &kernelBootVolume, fakeVolumeIdx)
}
return createContainer(false), createContainer(true)
Also having 2 functions would prevent having boolean arguments, which are basically a code smell.
Thanks.
return generateContainerFromVolume(vmi, podVolumeName, binVolumeName, isInit, true, &kernelBootVolume, fakeVolumeIdx)
}
return createContainer(false), createContainer(true)
I also don't like the booleans here, but, this is clearly refactor-land. :)
@@ -1375,6 +1375,7 @@ func (t *templateService) renderLaunchManifest(vmi *v1.VirtualMachineInstance, t
	// this causes containerDisks to be pre-pulled before virt-launcher starts.
	initContainers = append(initContainers, containerdisk.GenerateInitContainers(vmi, "container-disks", "virt-bin-share-dir")...)

kernelBootInitContainer := containerdisk.GenerateKernelBootInitContainer(vmi, "container-disks", "virt-bin-share-dir")
I think you could inline this below:
if kernelBootInitContainer != nil {
initContainers = append(
initContainers,
*containerdisk.GenerateKernelBootInitContainer(vmi, "container-disks", "virt-bin-share-dir"))
}
This is only an opinionated suggestion. You're welcome to disagree and / or not accept it.
It's possible :)
The only caveat is that we assume containerdisk.GenerateKernelBootInitContainer(vmi, "container-disks", "virt-bin-share-dir") is never nil if kernelBootContainer is not nil, and if we're wrong we'll be brutally killed.
On the other hand, maybe it's better to be killed right away than to continue executing wrong behavior.
So sure, I'll change it :)
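A variant of the inlining suggestion avoids the double-call concern entirely by binding the result in the `if` statement, so the generator runs exactly once. This is a sketch with stub types: `Container` stands in for `kubev1.Container`, and `hasKernelBoot` is an illustrative stand-in for inspecting the VMI spec.

```go
package main

import "fmt"

// Container is a stand-in for kubev1.Container.
type Container struct{ Name string }

// generateKernelBootInitContainer stands in for
// containerdisk.GenerateKernelBootInitContainer; it returns nil when the
// VMI has no custom kernel boot configured (hasKernelBoot is illustrative).
func generateKernelBootInitContainer(hasKernelBoot bool) *Container {
	if !hasKernelBoot {
		return nil
	}
	return &Container{Name: "kernel-boot-init"}
}

func appendKernelBootInitContainer(initContainers []Container, hasKernelBoot bool) []Container {
	// Inlined nil check: call the generator once, bind the result, and
	// append only when a container was actually produced.
	if c := generateKernelBootInitContainer(hasKernelBoot); c != nil {
		initContainers = append(initContainers, *c)
	}
	return initContainers
}

func main() {
	fmt.Println(len(appendKernelBootInitContainer(nil, false))) // 0
	fmt.Println(len(appendKernelBootInitContainer(nil, true)))  // 1
}
```

Because the result is captured once, there is no second call that could disagree with the first, which sidesteps the "brutally killed" scenario discussed above.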
/test pull-kubevirt-e2e-k8s-1.21-sig-compute
/retest
Expect(hasContainerWithName(containers, "kernel-boot")).To(BeTrue())
Expect(hasContainerWithName(pod.Spec.InitContainers, "container-disk-binary")).To(BeTrue())
for _, containerArray := range [][]kubev1.Container{initContainers, containers} {
This looks slightly over-engineered. How about simply doing this:
Expect(hasContainerWithName(initContainers, "container-disk-binary")).To(BeTrue())
Expect(hasContainerWithName(initContainers, "kernel-boot")).To(BeTrue())
Expect(hasContainerWithName(containers, "kernel-boot")).To(BeTrue())
It is much easier to understand, and runtime efficiency is really not a concern here.
Thanks @rmohr!
Done.
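For context, a plausible implementation of the `hasContainerWithName` helper the test relies on is a simple linear scan. This is a guess at its shape, not the actual helper from the kubevirt test suite; `Container` again stands in for `kubev1.Container`.

```go
package main

import "fmt"

// Container is a stand-in for kubev1.Container.
type Container struct{ Name string }

// hasContainerWithName reports whether any container in the slice has the
// given name. A sketch of the helper used in the quoted test assertions.
func hasContainerWithName(containers []Container, name string) bool {
	for _, c := range containers {
		if c.Name == name {
			return true
		}
	}
	return false
}

func main() {
	initContainers := []Container{{Name: "container-disk-binary"}, {Name: "kernel-boot"}}
	containers := []Container{{Name: "kernel-boot"}}
	fmt.Println(hasContainerWithName(initContainers, "container-disk-binary")) // true
	fmt.Println(hasContainerWithName(initContainers, "kernel-boot"))           // true
	fmt.Println(hasContainerWithName(containers, "kernel-boot"))               // true
}
```

With such a helper, the three flat `Expect(...)` lines suggested above express the intent directly, without the loop over container arrays.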
Currently, when one specifies a custom initrd / kernel image to boot from, it is not pre-pulled as an init container, so virt-launcher might exceed timeouts. The kernel-boot container is therefore now pre-pulled. Signed-off-by: Itamar Holder <iholder@redhat.com>
Force-pushed from 27f1583 to 292d778
@iholder-redhat: The following test failed, say
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/retest
/approve thanks.
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: rmohr The full list of commands accepted by this bot can be found here. The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/cherrypick release-0.44
@xpivarc: #6589 failed to apply on top of branch "release-0.44":
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
What this PR does / why we need it:
Currently, when one specifies a custom initrd / kernel image to boot from (a feature that was enabled in PR #5416), it is not pre-pulled as an init container; this is now changed.
This is important because, if the image is not pre-pulled, it will be pulled while the compute container is running, which can lead to virt-launcher exceeding its timeouts.
Addresses this issue: #6552
Fixes this Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2017251
Special notes for your reviewer:
Release note: