
Add VMI backend storage and persistent TPM support #8156

Merged on Apr 19, 2023 (9 commits)

Conversation

jean-edouard
Contributor

@jean-edouard jean-edouard commented Jul 21, 2022

What this PR does / why we need it:
This PR contains a first commit that implements a way to persist VM files on the backend side.
The second commit then uses that to add persistent TPM support.
The third commit modifies the behavior of the backend storage PVC to add support for migration.

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #

Special notes for your reviewer:
This is best reviewed commit by commit. I could split it into 2 PRs, but could only open #2 after #1 is merged.
I mostly followed the design proposal: kubevirt/community#170

Release note:

TPM VM device can now be set to persistent
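
(For illustration, a minimal Go sketch of what enabling this looks like against the kubevirt.io/api/core/v1 types; the Persistent field on TPMDevice is the one this PR adds, everything else is just scaffolding for the example and should be checked against the merged API.)

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	v1 "kubevirt.io/api/core/v1"
)

func main() {
	persistent := true
	vmi := &v1.VirtualMachineInstance{
		ObjectMeta: metav1.ObjectMeta{Name: "vmi-with-tpm", Namespace: "default"},
		Spec: v1.VirtualMachineInstanceSpec{
			Domain: v1.DomainSpec{
				Devices: v1.Devices{
					// Persistent asks KubeVirt to keep the swtpm state in a
					// backend-storage PVC across restarts and live migrations.
					TPM: &v1.TPMDevice{Persistent: &persistent},
				},
			},
		},
	}
	fmt.Printf("TPM persistent: %v\n", *vmi.Spec.Domain.Devices.TPM.Persistent)
}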

@kubevirt-bot
Contributor

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@kubevirt-bot kubevirt-bot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. release-note Denotes a PR that will be considered when it comes time to generate release notes. dco-signoff: yes Indicates the PR's author has DCO signed all their commits. size/L labels Jul 21, 2022
@jean-edouard
Contributor Author

/test pull-kubevirt-e2e-k8s-1.24-sig-compute

@jean-edouard
Contributor Author

/test pull-kubevirt-e2e-k8s-1.24-sig-compute

@jean-edouard
Contributor Author

/test pull-kubevirt-e2e-k8s-1.24-sig-compute

@jean-edouard
Contributor Author

/test pull-kubevirt-e2e-k8s-1.24-sig-storage

@jean-edouard jean-edouard marked this pull request as ready for review July 22, 2022 19:27
@kubevirt-bot kubevirt-bot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Jul 22, 2022
Member

@vladikr vladikr left a comment


Great work Jed.
I have a few questions, mainly about live migration.

}
pvc := &v1.PersistentVolumeClaim{
ObjectMeta: metav1.ObjectMeta{
Name: PVCPrefix + vmi.Name,
Member

Will this be deleted together with the VM object?

Contributor Author

We want it to, yes.
I set its owner to the same value(s) as the VMI's owner; I'm not sure if that's enough, so I need to add a test for it.

Contributor Author

That wasn't the case anymore. I just pushed a commit to set the PVC owner to the VM object (or whatever the VMI owner is).
For standalone VMIs (i.e. not tied to a VM), I decided to set the owner of the PVC to the VMI itself, which defeats the purpose of persistence but avoids garbage PVCs and security concerns.
PTAL!
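
(As a rough illustration of the ownership scheme described above, not the actual KubeVirt code: the PVC inherits the VMI's owner references, falling back to the VMI itself for standalone VMIs, so Kubernetes garbage collection removes it together with its owner. The PVC name prefix and size below are made up for the example.)

package backendstorage

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	virtv1 "kubevirt.io/api/core/v1"
)

// pvcForVMI sketches the ownership rule: if the VMI is owned by a VM, the PVC
// is garbage-collected together with the VM; a standalone VMI owns the PVC
// itself, so the PVC disappears when the VMI does.
func pvcForVMI(vmi *virtv1.VirtualMachineInstance, storageClass string) *corev1.PersistentVolumeClaim {
	owners := vmi.OwnerReferences
	if len(owners) == 0 {
		owners = []metav1.OwnerReference{
			*metav1.NewControllerRef(vmi, virtv1.VirtualMachineInstanceGroupVersionKind),
		}
	}
	mode := corev1.PersistentVolumeFilesystem
	return &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{
			Name:            "persistent-state-for-" + vmi.Name, // illustrative name
			Namespace:       vmi.Namespace,
			OwnerReferences: owners,
		},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteMany},
			StorageClassName: &storageClass,
			VolumeMode:       &mode,
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{corev1.ResourceStorage: resource.MustParse("10Mi")},
			},
		},
	}
}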


// BackendStorageClass is the name of the storage class to use for the PVCs created to preserve VM state, like TPM.
// The storage class must support RWX in filesystem mode.
BackendStorageClass string `json:"backendStorageClass,omitempty"`
Member

To me BackendStorageClass sounds too generic and needs some kind of a prefix to explain what it's meant for.

Contributor Author

I agree. I've had a hard time naming this thing, suggestions are welcome!

Contributor

Maybe just VMStateStorageClass?

Contributor Author

Done
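
(For reference, the renamed field would then look roughly like this; the exact json tag should be checked against the merged API.)

// VMStateStorageClass is the name of the storage class to use for the PVCs created to
// preserve VM state, like TPM. The storage class must support RWX in filesystem mode.
VMStateStorageClass string `json:"vmStateStorageClass,omitempty"`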

@@ -843,7 +846,11 @@ func (c *MigrationController) handleTargetPodCreation(key string, migration *vir
return nil
}
}
return c.createTargetPod(migration, vmi, sourcePod)
backendStoragePVC, err := backendstorage.CreateIfNeeded(vmi, c.clusterConfig, c.clientset, backendstorage.IsMigration)
Member

I'm a bit lost. Could you please explain why we need to do this?
I mean, why can't we simply skip this volume during migration and re-use the original on the destination? (same as we do with other volumes)

Contributor Author

That would be nice but I don't think we can.
When talking about volumes used by the VM it's not a problem, since the VM only ever runs in one place.
However, in this case, the contents of the PVC would be in use in 2 places at once (source container and target container).
In the case of swtpm, we could end up with corrupt data. In fact, swtpm takes a file-based lock to prevent exactly that, so the migration actually fails when using the original in the destination.

Member

@vladikr vladikr Jul 25, 2022

> However, in this case, the contents of the PVC would be in use in 2 places at once (source container and target container).

How can this be in use in two places? During the migration, the target domain is paused and will only be unpaused when the source is dead.
Why would this case be any different?

> In the case of swtpm, we could end up with corrupt data. In fact, swtpm takes a file-based lock to prevent exactly that, so the migration actually fails when using the original in the destination.

We would end up in this situation for any migration, but it doesn't happen because of what I described above. Also, the source qemu normally locks the image so the destination won't write into it accidentally.

Contributor Author

> However, in this case, the contents of the PVC would be in use in 2 places at once (source container and target container).

> How can this be in use in two places? During the migration, the target domain is paused and will only be unpaused when the source is dead. Why would this case be any different?

Yes, the target domain is paused but not the target pod! This case is different because the contents of the PVC are actually accessed by the pod itself, not the domain.

> In the case of swtpm, we could end up with corrupt data. In fact, swtpm takes a file-based lock to prevent exactly that, so the migration actually fails when using the original in the destination.

> We would end up in this situation for any migration, but this is not happening because of what I've described above. Also, source qemu normally locks so the destination won't write into it accidentally.

Yup, this again applies to the domain but not to its hosting pod.

Member

> However, in this case, the contents of the PVC would be in use in 2 places at once (source container and target container).

> How can this be in use in two places? During the migration, the target domain is paused and will only be unpaused when the source is dead. Why would this case be any different?

> Yes, the target domain is paused but not the target pod! This case is different because the contents of the PVC are actually accessed by the pod itself, not the domain.

Can you please explain why the pod is accessing this content (especially if the domain is not running) if this content belongs to the domain?

Filesystem volumes that we attach to the VMIs should behave similarly. These volumes are mounted in the pod (compute container) and have a diskX.img file that is being used by qemu.
It seems to me like a similar scenario. If not, can you please explain why not?

Contributor Author

Ok I see your point now... Even when coming from the domain, most I/O operations ultimately happen in the backend. I guess a more accurate answer to your question is that qemu is equipped to handle cases where the same volume is mounted on both the source and the target, and swtpm is not... We could open an issue against swtpm and switch to using just 1 PVC if/when the issue gets resolved.

Member

> Ok I see your point now... Even when coming from the domain, most I/O operations ultimately happen in the backend. I guess a more accurate answer to your question is that qemu is equipped to handle cases where the same volume is mounted on both the source and the target, and swtpm is not... We could open an issue against swtpm and switch to using just 1 PVC if/when the issue gets resolved.

Qemu is able to handle this mainly because the domain is paused and will not be unpaused until the source is destroyed. The same should happen with swtpm. What is the reason this service would generate any I/O if qemu, which is its only consumer, is paused?

Contributor Author

Yes, great question. It almost feels like the lock swtpm has in place is there to block this very scenario; I think we should ask about it, I'll open an issue.

Member

I'm not sure which lock that would be, but if this code has been accepted into libvirt then I'm sure this volume should be treated as any other.

Honestly, I'd be against us developing an additional mechanism to handle PVCs; instead, this volume should be treated the same way as any other filesystem volume we attach to the guests.

Contributor Author

This lock (inside virt-launcher/compute, running as non-root): /var/run/kubevirt-private/libvirt/qemu/swtpm/09d086e6-ba5d-4647-8f7d-c16d17ae20c0/tpm2/.lock

Please keep in mind that we do not attach this volume to the guest! Libvirt is not aware of it, and the libvirt TPM option does not have a way to provide a specific disk or even a location for storing the TPM state.
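
(To make the failure mode concrete: swtpm keeps a file-based lock in its state directory, so a second instance pointed at the same shared directory refuses to start. A minimal Go sketch of that general mechanism, not swtpm's actual code, using an exclusive non-blocking flock:)

package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

// tryLock takes an exclusive, non-blocking lock on path; a second caller
// (e.g. a migration target pod sharing the same volume) gets an error
// instead of silently writing into state that is still in use.
func tryLock(path string) (*os.File, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return nil, err
	}
	if err := unix.Flock(int(f.Fd()), unix.LOCK_EX|unix.LOCK_NB); err != nil {
		f.Close()
		return nil, fmt.Errorf("state directory already in use: %w", err)
	}
	return f, nil
}

func main() {
	dir, err := os.MkdirTemp("", "tpm-state")
	if err != nil {
		panic(err)
	}
	lock := dir + "/.lock" // stands in for swtpm's lock file
	first, err := tryLock(lock)
	if err != nil {
		panic(err)
	}
	defer first.Close()
	if _, err := tryLock(lock); err != nil {
		fmt.Println("second locker rejected:", err) // this is what a second swtpm instance hits
	}
}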

pkg/virt-controller/services/rendervolumes.go (outdated, resolved)
tests/vmi_tpm_test.go (outdated, resolved)
@@ -65,11 +77,63 @@ func CreateIfNeeded(vmi *corev1.VirtualMachineInstance, clusterConfig *virtconfi
VolumeMode: &modeFile,
},
}
// Whatever owns the VMI (i.e. a VM) now owns the PVC
Member

@alicefr alicefr Jul 25, 2022

When the owner is deleted, do we also need to delete the PVC?

Contributor Author

Yes. I'm not sure how this mechanism works, I'll look into it.

Contributor Author

The PVC indeed gets deleted with the VM, I added a test for it.

Member

OK, what happens if we simply create a VMI? It doesn't have an owner reference. What happens to the PVC in that case?

Contributor Author

In that case the PVC stays, since we don't know if/when that VMI will be started again...
The alternative is to prevent the creation of VMIs with persistent settings enabled.

Contributor Author

Potentially yes. Same thing if a VM is stopped and you create a VMI with its name.
However:

  • If the libvirt UUID is different, the existing TPM will be wiped out and a blank one will be created
  • If the original VMI took ownership of the TPM, the new VMI will need the administrator password to do anything on it
  • If the secrets were sealed against a set of PCRs that are unique to the original VMI, the new one will not be able to see them
  • Most importantly, this should be seen as a convenience feature rather than a security one. The whole premise of TPM is the hardware vault protecting its flash memory...

Contributor

I see. Thanks for the clarification. So:

> If the libvirt UUID is different, the existing TPM will be wiped out and a blank one will be created

This basically means that there is no point in allowing persistent TPM settings for VMIs, since the UUID will be different every time, am I right?

And BTW, just curious: I never checked that, but if I create a VM, then each time that I start/stop it the UUID will be the same, right?

Contributor Author

This is something I've been wondering about and didn't find a proper answer to; I should probably dive into the libvirt code.
Observations showed that creating/deleting a VMI from the same yaml file multiple times does produce the same UUID. Same for VM start/stop.
Maybe the UUID is based on a hash of the XML spec?

Contributor

I thought a new UUID was regenerated each time. Interesting...

Contributor Author

Indeed, the libvirt docs say a random UUID is generated for each new domain (and every boot of a VM/VMI effectively creates a new domain).
However, I found this, though it is specific to VMs:
https://github.com/kubevirt/kubevirt/blob/main/pkg/virt-controller/watch/vm.go#L1395

I just tried to verify my claim that VMIs keep getting the same UUID and actually got different values across reboots! Not sure what changed; maybe my initial observation was bogus...
However, even for VMIs we do request a specific UUID in the domain XML, so we have control over it. Good to know in case we want to ensure it's always the same value or something.
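
(Side note on how a stable UUID can be guaranteed: KubeVirt derives a deterministic firmware UUID for VMs, see the link above. The sketch below only illustrates the general idea of hashing the name into a name-based UUID; it is not KubeVirt's actual code.)

package main

import (
	"fmt"

	"github.com/google/uuid"
)

// stableUUIDFor hashes a VM name into a deterministic, name-based (SHA-1, v5-style)
// UUID, so every boot of the same VM can request the same domain UUID.
func stableUUIDFor(name string) uuid.UUID {
	// The namespace is arbitrary for this example; it just has to stay constant.
	return uuid.NewSHA1(uuid.NameSpaceDNS, []byte(name))
}

func main() {
	fmt.Println(stableUUIDFor("my-vm")) // prints the same value on every run
}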

pkg/backend-storage/backend-storage.go (outdated, resolved)
pkg/backend-storage/backend-storage.go (outdated, resolved)
tests/vmi_tpm_test.go (outdated, resolved)
@jean-edouard
Contributor Author

/hold
Thanks a lot for all the reviews!!
Holding while I address PR comments and work on fixing the nonroot failure.

@kubevirt-bot kubevirt-bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Jul 25, 2022
@kubevirt-bot kubevirt-bot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Apr 6, 2023
@jean-edouard
Contributor Author

jean-edouard commented Apr 6, 2023

> Do snapshots have to consider this PVC as well?

That's a good point, they probably do, I'll look into it, thanks!

Added as a new commit.
It is a bit awkward, since the only RWX FS storage class supported by kubevirtci, nfs-csi, doesn't support snapshots, so the functional test doesn't test much...
I'm not sure if cephfs comes with snapshot support, or if it can be configured for it, but that could be a solution to properly test the feature.
For now, if you'd rather block snapshots for VMs with persistent backend storage instead of merging a half-tested feature, let me know and I'll do that instead. Thanks!

@jean-edouard jean-edouard force-pushed the backendstorageandtpm branch 2 times, most recently from a6e2435 to 08d558f Compare April 10, 2023 21:38
@jean-edouard
Contributor Author


So, cephfs does support snapshots, and I was able to test my rough implementation, but it didn't quite work.
The PVC does get snapshotted and restored properly; however, I didn't realize that the restore operation creates new PVCs and updates the names inside the VM object, as opposed to renaming/removing the existing PVCs to reuse their names.
This is a problem for backend storage: since the PVC is not a volume of the VM, we have no way of telling the VM about the name of the restored PVC.

I have a couple ideas on how to fix that, but I definitely think it should be done as a separate effort.
So I removed the commit for snapshot support and replaced it with a change to the snapshot admitter to reject VMs that use backend storage for now.
I will be working on snapshot support next, but I think it would be nice to have basic persistent TPM supported as soon as possible.

@rmohr WDYT?
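
(For context, a rough sketch of what such an admitter check could look like; the function name and error text are illustrative, not the actual KubeVirt snapshot admitter.)

package admitters

import (
	"fmt"

	v1 "kubevirt.io/api/core/v1"
)

// rejectBackendStorageVMs refuses to snapshot a VM whose template requests a
// persistent TPM, because the backend-storage PVC is not one of the VM's
// volumes, so a restore could not hand the restored PVC's name back to the VM.
func rejectBackendStorageVMs(vm *v1.VirtualMachine) error {
	if vm.Spec.Template == nil {
		return nil
	}
	tpm := vm.Spec.Template.Spec.Domain.Devices.TPM
	if tpm != nil && tpm.Persistent != nil && *tpm.Persistent {
		return fmt.Errorf("snapshot is not yet supported for VMs that use backend storage (persistent TPM)")
	}
	return nil
}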

@fossedihelm
Contributor

/cc

Contributor

@fossedihelm fossedihelm left a comment

Thanks @jean-edouard! It looks great overall. I added some comments below.
Let me know what you think about them. Thanks :)

@@ -85,3 +101,182 @@ var _ = Describe("[sig-compute]vTPM", decorators.SigCompute, func() {
})
})
})

var _ = Describe("[sig-storage]vTPM", func() {
Contributor

Please add decorators.SigStorage too, otherwise these tests are not covered in the storage lane

Suggested change
var _ = Describe("[sig-storage]vTPM", func() {
var _ = Describe("[sig-storage]vTPM", decorators.SigStorage, func() {

@@ -0,0 +1,98 @@
package backendstorage
Contributor

Missing header

if util.IsNonRootVMI(vmi) {
// For non-root VMIs, the TPM state lives under /var/run/kubevirt-private/libvirt/qemu/swtpm
// To persist it, we need the persistent PVC to be mounted under that location.
// /var/run/kubevirt-private is an emptyDir, and k8s wwould automatically create the right sub-directories under it.
Contributor

mega nit(typo):
wwould

@@ -330,6 +332,61 @@ func withAccessCredentials(accessCredentials []v1.AccessCredential) VolumeRender
}
}

func withTPM(vmi *v1.VirtualMachineInstance) VolumeRendererOption {
Contributor

can we add a unit test for this?

Comment on lines +157 to +158
storageClass, exists := libstorage.GetRWXFileSystemStorageClass()
Expect(exists).To(BeTrue(), "No RWX FS storage class found")
Contributor

There is inconsistent behavior between this Describe and the subsequent It:
here we fail if the storageClass does not exist, while later we silently Skip the test.
Maybe the two can converge on the same behavior inside the BeforeEach block.
Thanks

Contributor

In the BeforeEach you could also directly adjust the KubevirtCR (marking the whole Context as Serial), and the vmi itself

Comment on lines 200 to 214
if op1 == "migrate" {
migrateVMI(vmi)
} else if op1 == "restart" {
restartVM(vm)
}

checkTPM(vmi)

if op2 == "migrate" {
migrateVMI(vmi)
} else if op2 == "restart" {
restartVM(vm)
}

checkTPM(vmi)
Contributor

Instead of op1 and op2 you can switch using ops ...string and then loop through it with:

Suggested change
if op1 == "migrate" {
migrateVMI(vmi)
} else if op1 == "restart" {
restartVM(vm)
}
checkTPM(vmi)
if op2 == "migrate" {
migrateVMI(vmi)
} else if op2 == "restart" {
restartVM(vm)
}
checkTPM(vmi)
for _, op := range ops{
switch op {
case "migrate":
migrateVMI(vmi)
case "restart":
restartVM(vm)
}
checkTPM(vmi)
}


migrateVMI := func(vmi *v1.VirtualMachineInstance) {
By("Migrating the VMI")
checks.SkipIfMigrationIsNotPossible()
Contributor

this Skip can be done at the beginning of the test to save some time and resources

Contributor Author

This is actually a trick to still run the beginning of the test, and fail on error, even on clusters where migration is not possible.
So here the restart + migrate test will still test restart before getting skipped.

Contributor

Got it! Thanks for the explanation 👍

Comment on lines +117 to +125
err = virtClient.VirtualMachine(vm.Namespace).Stop(context.Background(), vm.Name, &v1.StopOptions{})
ExpectWithOffset(1, err).ToNot(HaveOccurred())
EventuallyWithOffset(1, func() error {
_, err = virtClient.VirtualMachineInstance(vm.Namespace).Get(context.Background(), vm.Name, &k8smetav1.GetOptions{})
return err
}, 300*time.Second, 1*time.Second).ShouldNot(Succeed())

By("Starting the VM")
err = virtClient.VirtualMachine(vm.Namespace).Start(context.Background(), vm.Name, &v1.StartOptions{})
Contributor

Can I ask why you didn't use .Restart()?
Thanks

Contributor Author

It's important here to ensure that the virt-launcher pod and the VMI object both get destroyed to properly test the persistence of the data.
I would be worried that .Restart() might do any kind of soft reboot, anywhere from keeping the same pod to staying on the same node.

Contributor

Agreed! In this case maybe you can rename the function to stopAndStart 🙂 But feel free to ignore this

Comment on lines 222 to 233
By("Ensuring the PVC gets deleted")
Eventually(func() error {
_, err = virtClient.VirtualMachine(vm.Namespace).Get(context.Background(), vm.Name, &k8smetav1.GetOptions{})
if !errors.IsNotFound(err) {
return fmt.Errorf("VM %s not removed: %v", vm.Name, err)
}
_, err = virtClient.CoreV1().PersistentVolumeClaims(vm.Namespace).Get(context.Background(), backendstorage.PVCForVMI(vmi), k8smetav1.GetOptions{})
if !errors.IsNotFound(err) {
return fmt.Errorf("PVC %s not removed: %v", backendstorage.PVCForVMI(vmi), err)
}
return nil
}, 300*time.Second, 1*time.Second).ShouldNot(HaveOccurred())
Contributor

You can probably avoid this check, since the subsequent test ensures the same thing.

Signed-off-by: Jed Lejosne <jed@redhat.com>
@rmohr
Member

rmohr commented Apr 17, 2023

> I have a couple ideas on how to fix that, but I definitely think it should be done as a separate effort.
> So I removed the commit for snapshot support and replaced it with a change to the snapshot admitter to reject VMs that use backend storage for now.
> I will be working on snapshot support next, but I think it would be nice to have basic persistent TPM supported as soon as possible.
>
> @rmohr WDYT?

Yes, sounds good to me. Looking at block volumes, and maybe pre-populated PVCs, I am just not sure this feature is ready yet for general consumption. I have no plans right now to go deeper into reviewing your work here, so feel free to continue without my approval, but maybe move this behind a feature gate for now, until we have more concrete plans on the missing gaps like snapshot/restore/block PVC/making the PVC name discoverable/...?

Signed-off-by: Jed Lejosne <jed@redhat.com>
Signed-off-by: Jed Lejosne <jed@redhat.com>
Signed-off-by: Jed Lejosne <jed@redhat.com>
…ories

Signed-off-by: Jed Lejosne <jed@redhat.com>
Signed-off-by: Jed Lejosne <jed@redhat.com>
Signed-off-by: Jed Lejosne <jed@redhat.com>
Signed-off-by: Jed Lejosne <jed@redhat.com>
@jean-edouard
Contributor Author

Thank you @fossedihelm for the great review!
I addressed most comments in the latest force-push.
I am now working on adding a unit test and a feature gate.

Signed-off-by: Jed Lejosne <jed@redhat.com>
Contributor

@fossedihelm fossedihelm left a comment

/lgtm
/hold
@jean-edouard Putting a hold to give you the opportunity to respond to comments.
Feel free to unhold.
Thanks so much for this great work

@@ -107,6 +107,7 @@ func AdjustKubeVirtResource() {
virtconfig.VMExportGate,
virtconfig.KubevirtSeccompProfile,
virtconfig.HotplugNetworkIfacesGate,
virtconfig.VMPersistentState,
Contributor

Do we want to enable this FG for the whole test suite? Or do we want to restrict it to only the persistent vTPM tests?

Contributor Author

I think it falls into the same category as the other FGs here, i.e. not all tests need them, but no test requires them to be absent.
There's also no reason for the FG to impact other tests, and if it ever does we want to know about it!
@fossedihelm please cancel the hold if you agree!
@vasiliy-ul heads-up: you approved this PR a while back, and there have been a fair number of changes since then.

Contributor

@jean-edouard Agreed! I leave it to @vasiliy-ul to cancel the hold since a new review of it may be needed

@kubevirt-bot kubevirt-bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Apr 18, 2023
@kubevirt-bot kubevirt-bot added the lgtm Indicates that a PR is ready to be merged. label Apr 18, 2023
@jean-edouard
Contributor Author

/retest

@vasiliy-ul
Contributor

Looks good.

/unhold

@kubevirt-bot kubevirt-bot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Apr 19, 2023
@kubevirt-bot
Contributor

kubevirt-bot commented Apr 19, 2023

@jean-edouard: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
pull-kubevirt-e2e-kind-1.22-sriov 8a77472 link true /test pull-kubevirt-e2e-kind-1.22-sriov
pull-kubevirt-fossa e34f291 link false /test pull-kubevirt-fossa
pull-kubevirt-e2e-arm64 e34f291 link false /test pull-kubevirt-e2e-arm64

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

Labels
approved Indicates a PR has been approved by an approver from all required OWNERS files. dco-signoff: yes Indicates the PR's author has DCO signed all their commits. kind/api-change Categorizes issue or PR as related to adding, removing, or otherwise changing an API lgtm Indicates that a PR is ready to be merged. release-note Denotes a PR that will be considered when it comes time to generate release notes. size/XL