test: Make LRP restore test case robust and optimized
The goal of the test is to check that curl requests to a clusterIP svc endpoint
are redirected to both backends once the original svc entry is restored upon LRP
removal. However, the current test logic expects the same backend to be selected
by all the client pods simultaneously, which can lengthen the test duration.
That isn't right, since backend selection is not deterministic. More importantly,
we only need each client pod to hit both backends at least once.

Flip the order in which we loop over backends and client pods: loop over the
client pods first, and then make curl calls from each client pod until both
backends have been hit. This way we can avoid some duplicate curl calls, because
we no longer have to synchronize which backends the VIP calls get redirected to
across multiple nodes.
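
To make the change concrete, here is a minimal sketch of the per-client retry
pattern described above, written outside the Ginkgo/kubectl harness. curlFromPod
is a hypothetical stand-in for the curl-via-kubectl helper used by the test, and
be1Name/be2Name mirror the backend names from the diff below; this illustrates
the pattern and is not the test code itself.

package lrpsketch

import (
	"errors"
	"strings"
	"time"
)

// waitForBothBackends repeatedly curls the service VIP from a single client pod
// until the responses have contained both backend names, or the timeout expires.
// This mirrors the per-pod Eventually block in the test: each client only has to
// observe be1 and be2 at least once, independently of the other clients.
func waitForBothBackends(curlFromPod func() (string, error), be1Name, be2Name string, timeout time.Duration) error {
	be1Found, be2Found := false, false
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := curlFromPod()
		if err != nil {
			return err
		}
		be1Found = be1Found || strings.Contains(out, be1Name)
		be2Found = be2Found || strings.Contains(out, be2Name)
		if be1Found && be2Found {
			return nil // both backends selected at least once from this client
		}
		time.Sleep(1 * time.Second)
	}
	return errors.New("timed out before seeing both backends")
}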

Signed-off-by: Aditi Ghag <aditi@cilium.io>
aditighag authored and aanm committed May 28, 2021
1 parent 1d8f8e2 commit 6a3e846
Showing 1 changed file with 16 additions and 14 deletions.
30 changes: 16 additions & 14 deletions test/k8sT/Services.go
@@ -816,25 +816,27 @@ var _ = SkipDescribeIf(helpers.RunsOn54Kernel, "K8sServicesTest", func() {
 			}
 
 			var wg sync.WaitGroup
-			for _, testCase := range testCases {
-				for _, name := range []string{be1Name, be2Name} {
+			for _, tc := range testCases {
+				pods, err := kubectl.GetPodNames(helpers.DefaultNamespace, tc.selector)
+				Expect(err).Should(BeNil(), "cannot retrieve pod names by filter %q", tc.selector)
+				Expect(len(pods)).Should(BeNumerically(">", 0), "no pod exists by filter %q", tc.selector)
+				for _, pod := range pods {
 					wg.Add(1)
-					go func(tc lrpTestCase, want string) {
+					go func(tc lrpTestCase, pod string) {
 						defer GinkgoRecover()
 						defer wg.Done()
+						want := []string{be1Name, be2Name}
+						be1Found := false
+						be2Found := false
 						Eventually(func() bool {
-							pods, err := kubectl.GetPodNames(helpers.DefaultNamespace, tc.selector)
-							Expect(err).Should(BeNil(), "cannot retrieve pod names by filter %q", tc.selector)
-							Expect(len(pods)).Should(BeNumerically(">", 0), "no pod exists by filter %q", tc.selector)
-							ret := true
-							for _, pod := range pods {
-								res := kubectl.ExecPodCmd(helpers.DefaultNamespace, pod, tc.cmd)
-								Expect(err).To(BeNil(), "%s failed in %s pod", tc.cmd, pod)
-								ret = ret && strings.Contains(res.Stdout(), want)
-							}
-							return ret
+							res := kubectl.ExecPodCmd(helpers.DefaultNamespace, pod, tc.cmd)
+							ExpectWithOffset(1, res).Should(helpers.CMDSuccess(),
+								"%s failed in %s pod", tc.cmd, pod)
+							be1Found = be1Found || strings.Contains(res.Stdout(), want[0])
+							be2Found = be2Found || strings.Contains(res.Stdout(), want[1])
+							return be1Found && be2Found
 						}, 30*time.Second, 1*time.Second).Should(BeTrue(), "assertion fails for test case: %v", tc)
-					}(testCase, name)
+					}(tc, pod)
 				}
 			}
 			wg.Wait()
