Use nocloud when specified #224

Closed
wants to merge 18 commits

Conversation


@hh hh commented Feb 10, 2023

We are picking this up from @BobyMCbobs (thanks for the work on this)
Context: #212

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: hh
Once this PR has been reviewed and has the lgtm label, please assign agradouski for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Feb 10, 2023
@coveralls

coveralls commented Feb 10, 2023

Pull Request Test Coverage Report for Build 4169745530

  • 38 of 56 (67.86%) changed or added relevant lines in 2 files are covered.
  • 2 unchanged lines in 1 file lost coverage.
  • Overall coverage decreased (-0.2%) to 52.448%

Changes Missing Coverage:
  File | Covered Lines | Changed/Added Lines | %
  pkg/kubevirt/machine.go | 2 | 4 | 50.0%
  pkg/kubevirt/utils.go | 36 | 52 | 69.23%

Files with Coverage Reduction:
  File | New Missed Lines | %
  pkg/kubevirt/utils.go | 2 | 73.94%

Totals:
  Change from base Build 4069808194: -0.2%
  Covered Lines: 964
  Relevant Lines: 1838

💛 - Coveralls

@hh
Author

hh commented Feb 10, 2023

Ensuring we loop in @davidvossel

someone could theoretically set a CloudInitConfigDrive volume too. I don't like that this logic is assuming that only a CloudInitNoCloud volume gets the special behavior.

making the default cloudInitConfigDrive is good though

I'm comfortable merging this once the mergo logic is replaced.
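
For reference, the two cloud-init flavors mentioned above hang off the same KubeVirt volume type; a minimal sketch of both (the volume and secret names here are placeholders, not taken from this PR):

package kubevirt

import (
	corev1 "k8s.io/api/core/v1"
	kubevirtv1 "kubevirt.io/api/core/v1"
)

// A user could define either of these in the VM template; the special-casing
// discussed above keys off which source field is set on the volume.
var noCloudVolume = kubevirtv1.Volume{
	Name: "cloudinitvolume",
	VolumeSource: kubevirtv1.VolumeSource{
		CloudInitNoCloud: &kubevirtv1.CloudInitNoCloudSource{
			UserDataSecretRef: &corev1.LocalObjectReference{Name: "bootstrap-userdata"},
		},
	},
}

var configDriveVolume = kubevirtv1.Volume{
	Name: "cloudinitvolume",
	VolumeSource: kubevirtv1.VolumeSource{
		CloudInitConfigDrive: &kubevirtv1.CloudInitConfigDriveSource{
			UserDataSecretRef: &corev1.LocalObjectReference{Name: "bootstrap-userdata"},
		},
	},
}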

@davidvossel
Contributor

/ok-to-test

@k8s-ci-robot k8s-ci-robot added the ok-to-test Indicates a non-member PR verified by an org member that is safe to test. label Feb 13, 2023
@davidvossel
Contributor

This comment [1] from the previous PR was the primary remaining thing. We don't have to remove mergo from go.mod, since I know it's an indirect dependency somewhere in the dependency chain. I'd just like to see that we don't introduce that dependency directly in the capk code for this PR, simply because it isn't needed.

  1. Use nocloud when specified #212 (comment)

@@ -61,7 +62,6 @@ require (
github.com/googleapis/gnostic v0.5.5 // indirect
github.com/gorilla/websocket v1.5.0 // indirect
github.com/gregjones/httpcache v0.0.0-20181110185634-c63ab54fda8f // indirect
github.com/imdario/mergo v0.3.12 // indirect
Author


Removing mergo as requested

Comment on lines +159 to +163
if index >= 0 {
	template.Spec.Volumes[index] = cloudInitVolume
} else {
	template.Spec.Domain.Devices.Disks = append(template.Spec.Domain.Devices.Disks, cloudInitDisk)
}
Author


Is this what you are looking for, @davidvossel?

Contributor


not exactly.

The disk and volume logic needs to be independent. Here's what we need.

If the volume already exists, override it (template.Spec.Volumes[index] = cloudInitVolume); otherwise, append the volume to the template.Spec.Volumes slice.

If a disk matching the volume name already exists, then leave it alone. If the disk doesn't exist, append it.

The idea here is that someone can create their own disk mapping to the cloud-init volume, using whatever parameters they want, and we'll use that... but if no disk is provided that matches the cloud-init volume name, we'll automatically create one using the virtio bus.

And for the volumes, someone can create a volume with a specific cloud-init type, and we'll inject the user data into that; or, if no cloud-init volume exists already, we'll create one for the user, defaulting to cloud-init config drive.
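
A rough Go sketch of that independent volume/disk handling (function and variable names here are illustrative only, not the PR's actual code):

package kubevirt

import kubevirtv1 "kubevirt.io/api/core/v1"

// injectCloudInitVolumeAndDisk is a hypothetical helper showing the behavior
// requested above: volumes and disks are handled independently.
func injectCloudInitVolumeAndDisk(template *kubevirtv1.VirtualMachineInstanceTemplateSpec, cloudInitVolume kubevirtv1.Volume) {
	cloudInitVolumeName := cloudInitVolume.Name

	// Volumes: override an existing cloud-init volume, otherwise append one.
	volumeIndex := -1
	for i, v := range template.Spec.Volumes {
		if v.Name == cloudInitVolumeName {
			volumeIndex = i
			break
		}
	}
	if volumeIndex >= 0 {
		template.Spec.Volumes[volumeIndex] = cloudInitVolume
	} else {
		template.Spec.Volumes = append(template.Spec.Volumes, cloudInitVolume)
	}

	// Disks: if the user already defined a disk for the cloud-init volume,
	// leave it alone; otherwise append a default one on the virtio bus.
	for _, d := range template.Spec.Domain.Devices.Disks {
		if d.Name == cloudInitVolumeName {
			return
		}
	}
	template.Spec.Domain.Devices.Disks = append(template.Spec.Domain.Devices.Disks, kubevirtv1.Disk{
		Name: cloudInitVolumeName,
		DiskDevice: kubevirtv1.DiskDevice{
			Disk: &kubevirtv1.DiskTarget{Bus: "virtio"},
		},
	})
}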

Comment on lines 223 to 230
func detectCloudInitDisk(vmi *kubevirtv1.VirtualMachineInstanceTemplateSpec) (foundCloudInitDisk bool) {
	for _, v := range vmi.Spec.Domain.Devices.Disks {
		if v.Disk != nil && v.Name == cloudInitVolumeName {
			return true
		}
	}
	return false
}
Author


I'm not sure it still makes sense to detect whether it's a CloudInitDisk at all?

Comment on lines 159 to 169
switch cloudInitType {
case cloudInitNoCloud:
	err := mergo.Merge(&template.Spec.Volumes[index], cloudInitVolume)
	if err != nil {
		return nil, err
	}
	return template, nil
case cloudInitConfigDrive:
	template.Spec.Volumes = append(template.Spec.Volumes, cloudInitVolume)
}
if !detectCloudInitDisk(template) {
Author

@hh hh Feb 14, 2023


Just making sure it makes sense to replace this switch & merge logic with the simplified version.

Contributor

@davidvossel davidvossel left a comment


I left a comment inline; before this merges we'll need to squash the commits down.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 15, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle rotten
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 14, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Reopen this PR with /reopen
  • Mark this PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closed this PR.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Reopen this PR with /reopen
  • Mark this PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
