
Support DV garbage collection #8134

Merged: 1 commit into kubevirt:main on Aug 15, 2022

Conversation

@arnongilboa (Contributor) commented Jul 20, 2022

Signed-off-by: Arnon Gilboa agilboa@redhat.com

What this PR does / why we need it:
CDI added support for garbage collection of completed DVs.
Here we add the KubeVirt support and adapt tests accordingly, so that a garbage-collected DV is no longer referenced.
See the design here: https://github.com/kubevirt/community/blob/main/design-proposals/garbage-collect-completed-dvs.md

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #

Special notes for your reviewer:

Release note:

Support DataVolume garbage collection

@kubevirt-bot added the dco-signoff: yes, do-not-merge/release-note-label-needed, and size/L labels on Jul 20, 2022
@kubevirt-bot added the release-note label and removed the do-not-merge/release-note-label-needed label on Jul 20, 2022
@mhenriks (Member) left a comment

Good start, a couple of questions.

@@ -21,6 +21,9 @@ set -ex pipefail

DOCKER_TAG=${DOCKER_TAG:-devel}
KUBEVIRT_DEPLOY_CDI=${KUBEVIRT_DEPLOY_CDI:-true}
##### DO NOT APPROVE #####
# FIXME: default should be false, as we want to cover the existing no-GC flows
CDI_DV_GC=${CDI_DV_GC:-0}
Member:

Are you planning a GC-specific lane?

Contributor Author:

Already added an optional lane pull-kubevirt-e2e-k8s-1.24-sig-storage-dv-gc

@@ -409,6 +409,9 @@ func GetVMIInformerIndexers() cache.Indexers {
if vol.PersistentVolumeClaim != nil {
pvcs = append(pvcs, fmt.Sprintf("%s/%s", vmi.Namespace, vol.PersistentVolumeClaim.ClaimName))
}
if vol.DataVolume != nil {
pvcs = append(pvcs, fmt.Sprintf("%s/%s", vmi.Namespace, vol.DataVolume.Name))
}
Member:

Can you explain the rationale for this? To me, it seems to muddy things a bit. Why not look up in both indexes? We do that in a couple of places, like here: https://github.com/kubevirt/kubevirt/blob/main/pkg/virt-controller/watch/vm.go#L1720-L1733

Contributor Author:

Seems like a good idea. I'll try and update.
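For context, a minimal sketch of the dual-index lookup suggested above, assuming a VMI informer with both a "pvc" and a "dv" index registered (the index names and the helper are illustrative, not the actual KubeVirt code):

package watch

import (
    "fmt"

    "k8s.io/client-go/tools/cache"

    virtv1 "kubevirt.io/api/core/v1"
)

// vmisReferencingPVC is an illustrative helper: it queries both an assumed
// "pvc" and "dv" index, so a VMI is found whether the claim came from a
// PersistentVolumeClaim volume or from a DataVolume volume whose DV may
// already have been garbage collected. Callers may want to de-duplicate
// the result if a VMI can appear under both indexes.
func vmisReferencingPVC(indexer cache.Indexer, namespace, claimName string) ([]*virtv1.VirtualMachineInstance, error) {
    key := fmt.Sprintf("%s/%s", namespace, claimName)
    var vmis []*virtv1.VirtualMachineInstance
    for _, indexName := range []string{"pvc", "dv"} { // assumed index names
        objs, err := indexer.ByIndex(indexName, key)
        if err != nil {
            return nil, err
        }
        for _, obj := range objs {
            vmis = append(vmis, obj.(*virtv1.VirtualMachineInstance))
        }
    }
    return vmis, nil
}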

@@ -693,7 +693,8 @@ func (ctrl *VMExportController) isKubevirtContentType(pvc *corev1.PersistentVolu
if pvc.Spec.VolumeMode != nil && *pvc.Spec.VolumeMode == corev1.PersistentVolumeBlock {
return true
}
isKubevirt := pvc.Annotations[annContentType] == string(cdiv1.DataVolumeKubeVirt)
contentType, ok := pvc.Annotations[annContentType]
isKubevirt := ok && (contentType == string(cdiv1.DataVolumeKubeVirt) || contentType == "")
Member:

In what cases was this change necessary?

Contributor Author:

KubeVirt was mistakenly (and inconsistently with CDI) treating a PVC with the default (empty string) content type as non-KubeVirt (which means Archive, I guess).

Contributor:

Isn't it also KubeVirt if !ok, i.e. if we don't have the annotation at all?

Contributor Author:

@ShellyKa13 if the PVC doesn't have the ContentType annotation at all (unlike an empty value, which defaults to KubeVirt), we cannot say it's KubeVirt content type, since CDI did not set the annotation.
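As a rough sketch of the semantics being discussed, assuming CDI's content-type annotation key and the constant from the diff above (the helper name is illustrative): an empty annotation value defaults to KubeVirt content type, while a missing annotation does not.

package export

import (
    corev1 "k8s.io/api/core/v1"

    cdiv1 "kubevirt.io/containerized-data-importer-api/pkg/apis/core/v1beta1"
)

const annContentType = "cdi.kubevirt.io/storage.contentType"

// hasKubevirtContentType mirrors the logic in the diff above: block-mode PVCs
// are always treated as KubeVirt disks; otherwise the content-type annotation
// must exist and be either empty (CDI's default) or explicitly "kubevirt".
func hasKubevirtContentType(pvc *corev1.PersistentVolumeClaim) bool {
    if pvc.Spec.VolumeMode != nil && *pvc.Spec.VolumeMode == corev1.PersistentVolumeBlock {
        return true
    }
    contentType, ok := pvc.Annotations[annContentType]
    return ok && (contentType == string(cdiv1.DataVolumeKubeVirt) || contentType == "")
}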

. "kubevirt.io/kubevirt/tests/framework/matcher"
)

func DeleteDataVolume(dv *cdiv1.DataVolume) {
Member:

Pretty sure this is not the right place for these helper functions. Perhaps in libstorage?

Contributor Author:

Sure, I can move it to libstorage. @ShellyKa13 ack?

Contributor:

Yeah, I'm not even sure we need a new file; maybe all these functions can go into tests/libstorage/datavolume.go.
If you think it should be a different file, then tests/libstorage/datavolume_gc.go.

// Don't create DV if PVC already exists
pvc, err := c.getPersistentVolumeClaimFromCache(vm.Namespace, template.Name)
if err == nil && pvc != nil {
populatedFor := pvc.Annotations["cdi.kubevirt.io/storage.populatedFor"]
Member:

Is this annotation added when the DV is GC'd? Otherwise, it shouldn't always be there.

Contributor Author:

No, it's already used in the restore for creating a DV when its populated PVC already exists.
@ShellyKa13 ?

Contributor:

Yes, in restore, in the case of a DataVolumeTemplate, we update populatedFor to be the name of the new restored DV.

Member:

Not sure what is going on here is correct. I don't think "populatedFor" should have any effect here. It is used for more things than vm restore, btw.

As-is, it appears to me that any "populatedFor" DataVolumes will never be garbage collected. Or more precisely, they will always get recreated after they are garbage collected.

I think VM controller should simply only create the DV if the PVC does not exist.

That means that the restore controller has to create the DVs during restore, though.

Contributor:

As-is, it appears to me that any "populatedFor" DataVolumes will never be garbage collected. Or more precisely, they will always get recreated after they are garbage collected.

I'm not sure I understand why they will always be recreated; actually, this code should make sure exactly that they are not recreated (if the annotation exists, continue).
But either way, I do understand what you are saying about the DataVolume being created in the restore, and I agree; I did wonder why we create the PVC but not the DataVolume in the restore process.
The question is: if we decide that the default behavior is as if GC is on, then in the restore case there is no need to create the DataVolumes at all, since the PVCs are already created from the VolumeSnapshots.

Member:

I think the "right" thing to do is to have restore create the DV even if it is garbage collected right away. BUT this may be a tricky thing to do if GC is enabled. How do we track that a DataVolume was created successfully if it may be garbage collected right away? May have to persist in restore status. Basically we have to make sure that the restore code does not consistently keep creating DVs.

Contributor Author:

@mhenriks things get a bit complex with your "right" direction. I don't see why we shouldn't keep restore consistent with the new behavior, so if the PVC exists the DV is not created. AFAIK no KubeVirt code currently requires/refers to the DV in the restore flow, and all CI lanes look green with this behavior for GC both enabled and disabled.

Contributor Author:

I really don't like the idea of mixing this compact PR with an improvement of the restore DV creation. We surely won't duplicate the VM controller logic, so quite heavy refactoring would be needed there.
However, I understand that for some reason (preparation for DV source = volume snapshot?) you don't want to get rid of the restore DV creation when the PVC exists.
Let's consider this simple solution: when restore creates the PVC, it is annotated with dvRequired, so when the VM controller encounters a PVC with that annotation, it creates the DV and removes the PVC annotation.
WDYT?

Member:

I just thought of an easy way to track whether restore controller created the DataVolume. Initially, the restore PVC has no owner. When DV is created, it will take ownership of the PVC. When the DV is garbage collected, the VM will own the PVC. So, restore process will only create the DV when it does not exist and the PVC has no owner. What do you think?

As you can probably guess, my preference is to not change existing behavior unless there is a very good reason.

Contributor Author:

Looks like clean logic, so why not keep DV creation in the VM controller with this minor condition? Of course, you can later move DV creation to restore, but not in this PR.
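For illustration, a minimal sketch of the ownership rule proposed above (the helper name is hypothetical, not code from this PR): the restore-created PVC starts with no controller owner, the DV takes ownership once created, and the VM takes over after the DV is garbage collected, so a DV only needs to be created while the PVC is still unowned.

package watch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// shouldCreateDVForExistingPVC sketches the rule from the discussion: create
// the DataVolume for an already-existing PVC only while that PVC has no
// controller owner reference (i.e. it was pre-created by the restore
// controller and has not been adopted by a DV or the VM yet).
func shouldCreateDVForExistingPVC(pvc *corev1.PersistentVolumeClaim) bool {
    return pvc != nil && metav1.GetControllerOf(pvc) == nil
}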

if err == nil {
return storageClassName, nil
}
pvcKey := cacheKeyFunc(namespace, volume.VolumeSource.DataVolume.Name)

Contributor Author:

getStorageClassNameForDV returns an error if the DV is not in the cache (e.g. GC'd), but I agree handling it there is better.
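A possible shape of that handling, sketched under the assumption that both a DataVolume and a PVC informer store are available (the function and store names are illustrative; the DataVolume's alternative Storage spec is ignored for brevity): prefer the DV from the cache and fall back to the PVC of the same name if the DV was garbage collected.

package storage

import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/client-go/tools/cache"

    cdiv1 "kubevirt.io/containerized-data-importer-api/pkg/apis/core/v1beta1"
)

// storageClassNameForDVVolume is an illustrative fallback: if the DataVolume
// is no longer in the cache (e.g. it was garbage collected), the PVC it left
// behind carries the storage class instead.
func storageClassNameForDVVolume(dvStore, pvcStore cache.Store, namespace, dvName string) (string, bool) {
    key := namespace + "/" + dvName
    if obj, exists, err := dvStore.GetByKey(key); err == nil && exists {
        dv := obj.(*cdiv1.DataVolume)
        if dv.Spec.PVC != nil && dv.Spec.PVC.StorageClassName != nil {
            return *dv.Spec.PVC.StorageClassName, true
        }
    }
    if obj, exists, err := pvcStore.GetByKey(key); err == nil && exists {
        pvc := obj.(*corev1.PersistentVolumeClaim)
        if pvc.Spec.StorageClassName != nil {
            return *pvc.Spec.StorageClassName, true
        }
    }
    return "", false
}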

. "kubevirt.io/kubevirt/tests/framework/matcher"
)

func DeleteDataVolume(dv *cdiv1.DataVolume) {
err = virtCli.CdiClient().CdiV1beta1().DataVolumes(dv.Namespace).Delete(context.Background(), dv.Name, metav1.DeleteOptions{})
if !IsDataVolumeGC(virtCli) {
Expect(err).To(BeNil())
Contributor:

Some places added dv = nil; not sure if it should be added here too.

Contributor Author:

Adding that. Let's see what @mhenriks thinks about DeleteDataVolume(dv **v1beta1.DataVolume).

Contributor:

No, don't change it to **; just do dv = nil where it was deleted, right after calling this function.

Contributor Author:

Why not nil them all in the func? I don't see any risk.
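For illustration only, a sketch of how the helper could tolerate GC (the client is passed explicitly to keep the sketch self-contained; IsDataVolumeGC is the helper from the test code above): when GC is enabled the DV may already be gone, so a NotFound error is acceptable, and call sites then set dv = nil as suggested.

package libstorage

import (
    "context"

    . "github.com/onsi/gomega"
    "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

    cdiv1 "kubevirt.io/containerized-data-importer-api/pkg/apis/core/v1beta1"
    "kubevirt.io/client-go/kubecli"
)

// DeleteDataVolume deletes the DV and tolerates it already being gone when
// DataVolume garbage collection is enabled.
func DeleteDataVolume(virtCli kubecli.KubevirtClient, dv *cdiv1.DataVolume) {
    if dv == nil {
        return
    }
    err := virtCli.CdiClient().CdiV1beta1().DataVolumes(dv.Namespace).
        Delete(context.Background(), dv.Name, metav1.DeleteOptions{})
    if IsDataVolumeGC(virtCli) {
        // The DV may have been garbage collected already; NotFound is fine.
        Expect(err == nil || errors.IsNotFound(err)).To(BeTrue())
        return
    }
    Expect(err).ToNot(HaveOccurred())
}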

tests/utils.go Outdated
@@ -106,6 +106,8 @@ const (
StartingVMInstance = "Starting a VirtualMachineInstance"
WaitingVMInstanceStart = "Waiting until the VirtualMachineInstance will start"
CouldNotFindComputeContainer = "could not find compute container for pod"
DeletingDataVolume = "Deleting the DataVolume"
VerifyingGC = "Verifying DataVolume garbage collection"
Contributor:

These two don't seem to belong here; they should be in the same file as the helper functions they are used in.

@@ -479,6 +479,15 @@ func (c *VMController) handleDataVolumes(vm *virtv1.VirtualMachine, dataVolumes
}
}
if !exists {
// Don't create DV if PVC already exists
pvc, err := c.getPersistentVolumeClaimFromCache(vm.Namespace, template.Name)
if err == nil && pvc != nil {
Contributor:

requeue if err != nil?

Contributor Author:

Sure. Nice catch!

@arnongilboa (Contributor Author)

/retest-required

@arnongilboa (Contributor Author)

/test pull-kubevirt-e2e-k8s-1.24-sig-storage-dv-gc

1 similar comment
@arnongilboa (Contributor Author)

/test pull-kubevirt-e2e-k8s-1.24-sig-storage-dv-gc

@akalenyu (Contributor)

/test ?

@kubevirt-bot (Contributor)

@akalenyu: The following commands are available to trigger required jobs:

  • /test pull-kubevirt-apidocs
  • /test pull-kubevirt-build
  • /test pull-kubevirt-build-arm64
  • /test pull-kubevirt-check-unassigned-tests
  • /test pull-kubevirt-client-python
  • /test pull-kubevirt-e2e-k8s-1.22-operator
  • /test pull-kubevirt-e2e-k8s-1.22-sig-compute
  • /test pull-kubevirt-e2e-k8s-1.22-sig-compute-migrations
  • /test pull-kubevirt-e2e-k8s-1.22-sig-monitoring
  • /test pull-kubevirt-e2e-k8s-1.22-sig-network
  • /test pull-kubevirt-e2e-k8s-1.22-sig-storage
  • /test pull-kubevirt-e2e-k8s-1.23-operator
  • /test pull-kubevirt-e2e-k8s-1.23-operator-nonroot
  • /test pull-kubevirt-e2e-k8s-1.23-sig-compute
  • /test pull-kubevirt-e2e-k8s-1.23-sig-compute-cgroupsv2
  • /test pull-kubevirt-e2e-k8s-1.23-sig-compute-migrations
  • /test pull-kubevirt-e2e-k8s-1.23-sig-compute-migrations-nonroot
  • /test pull-kubevirt-e2e-k8s-1.23-sig-compute-nonroot
  • /test pull-kubevirt-e2e-k8s-1.23-sig-network
  • /test pull-kubevirt-e2e-k8s-1.23-sig-network-nonroot
  • /test pull-kubevirt-e2e-k8s-1.23-sig-storage
  • /test pull-kubevirt-e2e-k8s-1.23-sig-storage-cgroupsv2
  • /test pull-kubevirt-e2e-k8s-1.23-sig-storage-nonroot
  • /test pull-kubevirt-e2e-k8s-1.24-ipv6-sig-network
  • /test pull-kubevirt-e2e-k8s-1.24-operator
  • /test pull-kubevirt-e2e-k8s-1.24-sig-compute
  • /test pull-kubevirt-e2e-k8s-1.24-sig-network
  • /test pull-kubevirt-e2e-k8s-1.24-sig-storage
  • /test pull-kubevirt-e2e-kind-1.22-sriov
  • /test pull-kubevirt-e2e-kind-1.23-vgpu
  • /test pull-kubevirt-e2e-kind-1.23-vgpu-nonroot
  • /test pull-kubevirt-e2e-windows2016
  • /test pull-kubevirt-generate
  • /test pull-kubevirt-manifests
  • /test pull-kubevirt-prom-rules-verify
  • /test pull-kubevirt-unit-test
  • /test pull-kubevirt-verify-go-mod
  • /test pull-kubevirtci-bump-kubevirt

The following commands are available to trigger optional jobs:

  • /test build-kubevirt-builder
  • /test pull-kubevirt-check-tests-for-flakes
  • /test pull-kubevirt-code-lint
  • /test pull-kubevirt-e2e-arm64
  • /test pull-kubevirt-e2e-k8s-1.22-sig-compute-realtime
  • /test pull-kubevirt-e2e-k8s-1.22-sig-performance
  • /test pull-kubevirt-e2e-k8s-1.23-single-node
  • /test pull-kubevirt-e2e-k8s-1.23-swap-enabled
  • /test pull-kubevirt-e2e-k8s-1.24-sig-storage-dv-gc
  • /test pull-kubevirt-e2e-kind-1.22-sriov-nonroot
  • /test pull-kubevirt-fossa
  • /test pull-kubevirt-gosec
  • /test pull-kubevirt-goveralls
  • /test pull-kubevirt-verify-rpms

Use /test all to run the following jobs that were automatically triggered:

  • pull-kubevirt-apidocs
  • pull-kubevirt-build
  • pull-kubevirt-build-arm64
  • pull-kubevirt-check-tests-for-flakes
  • pull-kubevirt-check-unassigned-tests
  • pull-kubevirt-client-python
  • pull-kubevirt-code-lint
  • pull-kubevirt-e2e-k8s-1.22-operator
  • pull-kubevirt-e2e-k8s-1.22-sig-compute
  • pull-kubevirt-e2e-k8s-1.22-sig-compute-migrations
  • pull-kubevirt-e2e-k8s-1.22-sig-network
  • pull-kubevirt-e2e-k8s-1.22-sig-performance
  • pull-kubevirt-e2e-k8s-1.22-sig-storage
  • pull-kubevirt-e2e-k8s-1.23-operator
  • pull-kubevirt-e2e-k8s-1.23-operator-nonroot
  • pull-kubevirt-e2e-k8s-1.23-sig-compute
  • pull-kubevirt-e2e-k8s-1.23-sig-compute-migrations
  • pull-kubevirt-e2e-k8s-1.23-sig-compute-migrations-nonroot
  • pull-kubevirt-e2e-k8s-1.23-sig-compute-nonroot
  • pull-kubevirt-e2e-k8s-1.23-sig-network
  • pull-kubevirt-e2e-k8s-1.23-sig-network-nonroot
  • pull-kubevirt-e2e-k8s-1.23-sig-storage
  • pull-kubevirt-e2e-k8s-1.23-sig-storage-nonroot
  • pull-kubevirt-e2e-k8s-1.24-ipv6-sig-network
  • pull-kubevirt-e2e-k8s-1.24-operator
  • pull-kubevirt-e2e-k8s-1.24-sig-compute
  • pull-kubevirt-e2e-k8s-1.24-sig-network
  • pull-kubevirt-e2e-k8s-1.24-sig-storage
  • pull-kubevirt-e2e-kind-1.22-sriov
  • pull-kubevirt-e2e-kind-1.23-vgpu
  • pull-kubevirt-e2e-kind-1.23-vgpu-nonroot
  • pull-kubevirt-e2e-windows2016
  • pull-kubevirt-fossa
  • pull-kubevirt-generate
  • pull-kubevirt-goveralls
  • pull-kubevirt-manifests
  • pull-kubevirt-prom-rules-verify
  • pull-kubevirt-unit-test
  • pull-kubevirt-verify-go-mod

In response to this:

/test ?

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@akalenyu (Contributor)

/test pull-kubevirt-e2e-k8s-1.24-sig-storage-dv-gc

@akalenyu (Contributor)

/test pull-kubevirt-fossa

@arnongilboa (Contributor Author)

/test pull-kubevirt-e2e-k8s-1.24-sig-storage-dv-gc

1 similar comment
@arnongilboa (Contributor Author)

/test pull-kubevirt-e2e-k8s-1.24-sig-storage-dv-gc

@arnongilboa (Contributor Author)

/test pull-kubevirt-e2e-kind-1.23-vgpu-nonroot

@mhenriks (Member)

/lgtm

@kubevirt-bot added the lgtm label on Aug 10, 2022
@mhenriks (Member)

/test pull-kubevirt-e2e-k8s-1.24-sig-storage-dv-gc

@kubevirt-commenter-bot

/retest-required
This bot automatically retries required jobs that failed/flaked on approved PRs.
Silence the bot with an /lgtm cancel or /hold comment for consistent failures.

@arnongilboa (Contributor Author)

/test pull-kubevirt-e2e-k8s-1.24-sig-storage-dv-gc

@arnongilboa (Contributor Author)

/retest

@kubevirt-bot (Contributor)

@arnongilboa: The /retest command does not accept any targets.

In response to this:

/retest pull-kubevirt-e2e-k8s-1.22-sig-storage
/retest pull-kubevirt-e2e-k8s-1.23-sig-compute-migrations

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@arnongilboa (Contributor Author)

/test pull-kubevirt-e2e-k8s-1.22-sig-storage
/test pull-kubevirt-e2e-k8s-1.23-sig-compute-migrations

@kubevirt-commenter-bot

/retest-required
This bot automatically retries required jobs that failed/flaked on approved PRs.
Silence the bot with an /lgtm cancel or /hold comment for consistent failures.

@arnongilboa (Contributor Author)

/test pull-kubevirt-e2e-kind-1.22-sriov

@kubevirt-commenter-bot

/retest-required
This bot automatically retries required jobs that failed/flaked on approved PRs.
Silence the bot with an /lgtm cancel or /hold comment for consistent failures.

@mhenriks (Member)

/test pull-kubevirt-e2e-k8s-1.24-sig-storage-dv-gc

@mhenriks (Member)

/hold

@kubevirt-bot added the do-not-merge/hold label on Aug 11, 2022
@arnongilboa (Contributor Author)

@mhenriks @ShellyKa13 @awels @akalenyu
The failure in the dv-gc lane is due to a bug in CDI 1.50.0, which was fixed in 1.51.0.
So this PR is blocked by kubevirt/kubevirtci#827; please get it in ASAP.

@kubevirt-bot added the needs-rebase label on Aug 12, 2022
CDI added support for [garbage collection of completed DVs](kubevirt/containerized-data-importer#2233).

Here we add the KubeVirt support and adapt tests accordingly, so that a garbage-collected DV is not referenced.

See the design [here](https://github.com/kubevirt/community/blob/main/design-proposals/garbage-collect-completed-dvs.md).

Signed-off-by: Arnon Gilboa <agilboa@redhat.com>
@kubevirt-bot removed the lgtm and needs-rebase labels on Aug 13, 2022
@kubevirt-bot (Contributor)

@arnongilboa: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name | Commit | Details | Required | Rerun command
pull-kubevirt-e2e-k8s-1.24-sig-storage-dv-gc | 6131e49 | link | false | /test pull-kubevirt-e2e-k8s-1.24-sig-storage-dv-gc
pull-kubevirt-fossa | ef62cbf | link | false | /test pull-kubevirt-fossa

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@ShellyKa13 (Contributor)

/lgtm

@kubevirt-bot added the lgtm label on Aug 15, 2022
@ShellyKa13 (Contributor)

/unhold

@kubevirt-bot removed the do-not-merge/hold label on Aug 15, 2022
@kubevirt-bot merged commit 595051e into kubevirt:main on Aug 15, 2022
Labels: approved, dco-signoff: yes, lgtm, release-note, size/XL
6 participants