
Pre-pull custom kernel / initrd as an init container #6589

Merged

Conversation

iholder101
Contributor

@iholder101 iholder101 commented Oct 13, 2021

What this PR does / why we need it:
Currently, when one specifies a custom initrd / kernel image to boot from (a feature enabled in PR #5416), the image is not pre-pulled by an init container. This PR changes that.

This is important because if the image is not pre-pulled, it will be pulled while the compute container is starting, which can lead to virt-launcher exceeding its timeouts.

Addresses this issue: #6552
Fixes this Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2017251
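Conceptually, the pre-pull works because kubelet pulls and runs init containers to completion before starting a pod's main containers: an init container that references the same image as the kernel-boot container warms the node's image cache. A toy sketch with simplified stand-in types and an illustrative image name (not KubeVirt's actual API):

```go
package main

import "fmt"

// Container stands in for kubev1.Container; only the fields relevant to
// pre-pulling are modeled here.
type Container struct {
	Name  string
	Image string
}

// prePullInitContainer builds an init container referencing the same image
// as the boot container. Kubelet pulls init-container images before the
// main containers start, so the kernel/initrd image is already on the node
// by the time the compute container needs it.
func prePullInitContainer(bootImage string) Container {
	return Container{Name: "kernel-boot-init", Image: bootImage}
}

func main() {
	boot := Container{Name: "kernel-boot", Image: "quay.io/example/kernel-boot:latest"}
	init := prePullInitContainer(boot.Image)
	fmt.Println(init.Image == boot.Image) // true
}
```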

Special notes for your reviewer:

Release note:

The custom kernel / initrd to boot from is now pre-pulled, which improves stability.

@kubevirt-bot kubevirt-bot added release-note Denotes a PR that will be considered when it comes time to generate release notes. dco-signoff: yes Indicates the PR's author has DCO signed all their commits. size/S labels Oct 13, 2021
@iholder101
Contributor Author

/cc @rmohr

Contributor

@maiqueb maiqueb left a comment


I'm OK with the approach, it makes sense to me.

I'm not OK with the implementation; I think having 2 distinct functions would make this clearer.

@@ -212,9 +212,9 @@ func GenerateContainers(vmi *v1.VirtualMachineInstance, podVolumeName string, bi
return generateContainersHelper(vmi, podVolumeName, binVolumeName, false)
}

-func GenerateKernelBootContainer(vmi *v1.VirtualMachineInstance, podVolumeName string, binVolumeName string) *kubev1.Container {
+func GenerateKernelBootContainers(vmi *v1.VirtualMachineInstance, podVolumeName string, binVolumeName string) (container, initContainer *kubev1.Container) {
Contributor

I think having 2 different functions would make this a lot clearer.

For instance, why is the init container the second container to be returned? I (not sure if I'm a weird person ...) would have assumed it to be the first, because I need it first (yes, weak argument).

Contributor Author

Actually, in code, regular containers are used first :)
But you have a point, I'll change that.

Thanks!

Comment on lines 235 to 238
createContainer := func(isInit bool) *kubev1.Container {
return generateContainerFromVolume(vmi, podVolumeName, binVolumeName, isInit, true, &kernelBootVolume, fakeVolumeIdx)
}
return createContainer(false), createContainer(true)
Contributor

Also having 2 functions would prevent having boolean arguments, which are basically a code smell.
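The refactor being suggested can be sketched with simplified stand-in types; the names below are illustrative, not KubeVirt's actual API. The boolean stays private to the package, and two exported functions replace it at every call site:

```go
package main

import "fmt"

// Container is a simplified stand-in for kubev1.Container.
type Container struct {
	Name string
	Init bool
}

// generateKernelBootContainer keeps the boolean private to the package,
// so callers never pass a bare true/false.
func generateKernelBootContainer(isInit bool) *Container {
	name := "kernel-boot"
	if isInit {
		name += "-init"
	}
	return &Container{Name: name, Init: isInit}
}

// Two exported functions replace the boolean parameter at the call site.
func GenerateKernelBootContainer() *Container {
	return generateKernelBootContainer(false)
}

func GenerateKernelBootInitContainer() *Container {
	return generateKernelBootContainer(true)
}

func main() {
	fmt.Println(GenerateKernelBootContainer().Name)     // kernel-boot
	fmt.Println(GenerateKernelBootInitContainer().Name) // kernel-boot-init
}
```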

Contributor

@maiqueb maiqueb left a comment

Thanks.

return generateContainerFromVolume(vmi, podVolumeName, binVolumeName, isInit, true, &kernelBootVolume, fakeVolumeIdx)
}
return createContainer(false), createContainer(true)
Contributor

I also don't like the booleans here, but, this is clearly refactor-land. :)

@@ -1375,6 +1375,7 @@ func (t *templateService) renderLaunchManifest(vmi *v1.VirtualMachineInstance, t
// this causes containerDisks to be pre-pulled before virt-launcher starts.
initContainers = append(initContainers, containerdisk.GenerateInitContainers(vmi, "container-disks", "virt-bin-share-dir")...)

kernelBootInitContainer := containerdisk.GenerateKernelBootInitContainer(vmi, "container-disks", "virt-bin-share-dir")
Contributor

I think you could inline this below:

if kernelBootInitContainer != nil {
    initContainers = append(
        initContainers, 
        *containerdisk.GenerateKernelBootInitContainer(vmi, "container-disks", "virt-bin-share-dir"))
}

This is only an opinionated suggestion. You're welcome to disagree and / or not accept it.

Contributor Author

It's possible :)
The only caveat is that we assume containerdisk.GenerateKernelBootInitContainer(vmi, "container-disks", "virt-bin-share-dir") is never nil if if kernelBootContainer is not nil and if we're wrong we'll be brutally killed.

on the other hand maybe it's better to be killed right away than continue executing wrong behavior.

So sure, I'll change it :)
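The nil-pointer risk discussed here can also be avoided by calling the function once, nil-checking its result, and dereferencing that same checked value. A sketch with hypothetical simplified types (the helper below mimics the contract described in the thread, where the generator returns nil when the VMI has no custom kernel boot):

```go
package main

import "fmt"

// Container stands in for kubev1.Container.
type Container struct{ Name string }

// generateKernelBootInitContainer stands in for
// containerdisk.GenerateKernelBootInitContainer; it returns nil when the
// VMI has no custom kernel boot.
func generateKernelBootInitContainer(hasKernelBoot bool) *Container {
	if !hasKernelBoot {
		return nil
	}
	return &Container{Name: "kernel-boot-init"}
}

// appendKernelBootInitContainer shows the safe inline pattern: one call,
// one nil check, then dereference the checked value.
func appendKernelBootInitContainer(initContainers []Container, hasKernelBoot bool) []Container {
	if c := generateKernelBootInitContainer(hasKernelBoot); c != nil {
		initContainers = append(initContainers, *c)
	}
	return initContainers
}

func main() {
	fmt.Println(len(appendKernelBootInitContainer(nil, true)))  // 1
	fmt.Println(len(appendKernelBootInitContainer(nil, false))) // 0
}
```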

@kubevirt-bot kubevirt-bot added the lgtm Indicates that a PR is ready to be merged. label Oct 13, 2021
@iholder101
Contributor Author

/test pull-kubevirt-e2e-k8s-1.21-sig-compute

@iholder101
Contributor Author

/retest


Expect(hasContainerWithName(containers, "kernel-boot")).To(BeTrue())
Expect(hasContainerWithName(pod.Spec.InitContainers, "container-disk-binary")).To(BeTrue())
for _, containerArray := range [][]kubev1.Container{initContainers, containers} {
Member

@rmohr rmohr Oct 15, 2021

This looks slightly over-engineered. How about simply doing this:

Expect(hasContainerWithName(initContainers, "container-disk-binary")).To(BeTrue())
Expect(hasContainerWithName(initContainers, "kernel-boot")).To(BeTrue())
Expect(hasContainerWithName(containers, "kernel-boot")).To(BeTrue())

It is much easier to understand, and runtime efficiency is really not a concern.
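For reference, a hasContainerWithName helper of the kind used in these assertions is typically just a few lines. This sketch assumes a simplified Container type rather than kubev1.Container:

```go
package main

import "fmt"

// Container stands in for kubev1.Container.
type Container struct{ Name string }

// hasContainerWithName reports whether any container in the slice has the
// given name.
func hasContainerWithName(containers []Container, name string) bool {
	for _, c := range containers {
		if c.Name == name {
			return true
		}
	}
	return false
}

func main() {
	initContainers := []Container{{Name: "container-disk-binary"}, {Name: "kernel-boot"}}
	fmt.Println(hasContainerWithName(initContainers, "kernel-boot")) // true
	fmt.Println(hasContainerWithName(initContainers, "compute"))     // false
}
```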

Contributor Author

Thanks @rmohr!
Done.

Currently, when one specifies a custom initrd / kernel image
to boot from, it is not pre-pulled by an init container, so
virt-launcher might exceed timeouts. The kernel-boot container
is therefore now pre-pulled.

Signed-off-by: Itamar Holder <iholder@redhat.com>
@iholder101 iholder101 force-pushed the feaure/kernel-boot-container-as-init branch from 27f1583 to 292d778 Compare October 17, 2021 08:14
@kubevirt-bot kubevirt-bot removed the lgtm Indicates that a PR is ready to be merged. label Oct 17, 2021
@kubevirt-bot
Contributor

kubevirt-bot commented Oct 17, 2021

@iholder-redhat: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name: pull-kubevirt-e2e-arm64
Commit: b5df147
Required: false
Rerun command: /test pull-kubevirt-e2e-arm64

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@iholder101
Contributor Author

/retest

@rmohr
Member

rmohr commented Oct 19, 2021

/approve

thanks.

@kubevirt-bot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: rmohr

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@kubevirt-bot kubevirt-bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Oct 19, 2021
@kubevirt-bot kubevirt-bot added the lgtm Indicates that a PR is ready to be merged. label Oct 19, 2021
@kubevirt-bot kubevirt-bot merged commit 5d06fbd into kubevirt:main Oct 19, 2021
@xpivarc
Member

xpivarc commented Nov 19, 2021

/cherrypick release-0.44

@kubevirt-bot
Contributor

@xpivarc: #6589 failed to apply on top of branch "release-0.44":

Applying: Prepull kernel/initrd custom boot container as initContainer
Using index info to reconstruct a base tree...
M	pkg/container-disk/container-disk.go
M	pkg/virt-controller/services/template.go
Falling back to patching base and 3-way merge...
Auto-merging pkg/virt-controller/services/template.go
CONFLICT (content): Merge conflict in pkg/virt-controller/services/template.go
Auto-merging pkg/container-disk/container-disk.go
CONFLICT (content): Merge conflict in pkg/container-disk/container-disk.go
error: Failed to merge in the changes.
hint: Use 'git am --show-current-patch=diff' to see the failed patch
Patch failed at 0001 Prepull kernel/initrd custom boot container as initContainer
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".

In response to this:

/cherrypick release-0.44

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
