fix: Eventually() missing Should() statement and sync error (#11101)
```diff
@@ -1957,24 +1957,32 @@ status:
 	Context("[Serial] when node becomes unhealthy", Serial, func() {
 		const componentName = "virt-handler"
+		var nodeName string
 
+		AfterEach(func() {
+			libpod.DeleteKubernetesApiBlackhole(getHandlerNodePod(virtClient, nodeName), componentName)
+			Eventually(func(g Gomega) {
+				g.Expect(getHandlerNodePod(virtClient, nodeName).Items[0]).To(HaveConditionTrue(k8sv1.PodReady))
+			}, 120*time.Second, time.Second).Should(Succeed())
+
+			tests.WaitForConfigToBePropagatedToComponent("kubevirt.io=virt-handler", util.GetCurrentKv(virtClient).ResourceVersion,
+				tests.ExpectResourceVersionToBeLessEqualThanConfigVersion, 120*time.Second)
+		})
+
 		It("[Serial] the VMs running in that node should be respawned", func() {
 			By("Starting VM")
 			vm := startVM(virtClient, createVM(virtClient, libvmi.NewCirros()))
 			vmi, err := virtClient.VirtualMachineInstance(vm.Namespace).Get(context.Background(), vm.Name, &k8smetav1.GetOptions{})
 			Expect(err).ToNot(HaveOccurred())
 
-			nodeName := vmi.Status.NodeName
+			nodeName = vmi.Status.NodeName
 			oldUID := vmi.UID
 
 			By("Blocking virt-handler from reconciling the VMI")
 			libpod.AddKubernetesApiBlackhole(getHandlerNodePod(virtClient, nodeName), componentName)
-			Eventually(getHandlerNodePod(virtClient, nodeName).Items[0], 120*time.Second, time.Second, HaveConditionFalse(k8sv1.PodReady))
```
Review discussion on the replaced Eventually(...) call:

- "How about? […]"
- "The issue with […]. I like the […]"
- "Why did we revert the […]?"
- "I don't get it. […] which explain the source of the issue."
- "@jcanocan also observed that an […]"
- "Thanks @fossedihelm for this nice piece of information. This explains all the issues faced. As far as I was able to test, the nearest […]"
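Background on the pitfall being discussed: Gomega's `Eventually(...)` only builds an `AsyncAssertion`; nothing is evaluated until a terminating call such as `.Should(...)` runs. The removed line therefore never asserted anything, and because it passed a concrete value rather than a function, the pod would only have been fetched once even if polling had happened. A minimal sketch of both failure modes, using a hypothetical `fetchPodState` helper in place of `getHandlerNodePod(...)`:

```go
package handlertest

import (
	"time"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

// fetchPodState is a hypothetical stand-in for
// getHandlerNodePod(virtClient, nodeName).Items[0].
func fetchPodState() string { return "Ready" }

var _ = It("illustrates the missing Should() pitfall", func() {
	// BROKEN (the pattern the diff removes): Eventually only *builds* an
	// AsyncAssertion; without a terminating .Should(...) call the matcher
	// is never evaluated, so this line can never fail the test.
	Eventually(fetchPodState(), 120*time.Second, time.Second)

	// Note also that fetchPodState() above is evaluated once, eagerly;
	// the same stale value would be used on every poll.

	// FIXED (the pattern the diff adds): the func(g Gomega) form re-runs
	// the body on each poll, and .Should(Succeed()) drives the assertion.
	Eventually(func(g Gomega) {
		g.Expect(fetchPodState()).To(Equal("Ready"))
	}, 120*time.Second, time.Second).Should(Succeed())
})
```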
```diff
-
-			DeferCleanup(func() {
-				libpod.DeleteKubernetesApiBlackhole(getHandlerNodePod(virtClient, nodeName), componentName)
-				Eventually(getHandlerNodePod(virtClient, nodeName).Items[0], 120*time.Second, time.Second, HaveConditionTrue(k8sv1.PodReady))
-			})
+			Eventually(func(g Gomega) {
+				g.Expect(getHandlerNodePod(virtClient, nodeName).Items[0]).To(HaveConditionFalse(k8sv1.PodReady))
+			}, 120*time.Second, time.Second).Should(Succeed())
 
 			pod, err := libvmi.GetPodByVirtualMachineInstance(vmi, vmi.Namespace)
 			Expect(err).ToNot(HaveOccurred())
 
```
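One more piece of background on the `nodeName` change above: with `:=`, the `It` block declares a new `nodeName` that shadows the `var nodeName string` in the enclosing `Context`, so the `AfterEach` cleanup would read an empty string and target the wrong node. The plain `=` assignment writes to the shared outer variable instead. A self-contained sketch of the shadowing pitfall:

```go
package main

import "fmt"

func main() {
	var nodeName string // outer variable; cleanup code reads this

	cleanup := func() { fmt.Printf("cleanup sees nodeName=%q\n", nodeName) }

	// BUG: ':=' declares a *new* nodeName that shadows the outer one,
	// so the outer variable stays "".
	func() {
		nodeName := "node-1"
		_ = nodeName
	}()
	cleanup() // prints: cleanup sees nodeName=""

	// FIX: plain assignment updates the shared outer variable.
	func() {
		nodeName = "node-1"
	}()
	cleanup() // prints: cleanup sees nodeName="node-1"
}
```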
Review discussion on the config propagation wait:

- "This is not flaky?"
- "I'm trying to find out whether this may lead to flakiness. The purpose is to wait until the configuration versions are in sync. I've run out of ideas in this regard, so I think we should rerun pull-kubevirt-check-tests-for-flakes a couple of times to find out. Sorry for the brute-force approach 😞"