tests: Updates the should delete a collection of pods test #108593
Conversation
@claudiubelu: This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the appropriate triage label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Force-pushed from 962e0b6 to da70741 (Compare)
test/e2e/common/node/pods.go (Outdated)
framework.ExpectNoError(err, "3 pods not found")
// wait as required for all 3 pods to be running
ginkgo.By("waiting for all 3 pods to be running")
err := e2epod.WaitForPodsRunningReady(f.ClientSet, f.Namespace, 3, 0, podStartTimeout, make(map[string]string))
Suggested change:
- err := e2epod.WaitForPodsRunningReady(f.ClientSet, f.Namespace, 3, 0, podStartTimeout, make(map[string]string))
+ err := e2epod.WaitForPodsRunningReady(f.ClientSet, f.Namespace.Name, 3, 0, f.Timeouts.PodStart, nil)
- Prefer the framework's PodStart timeout
- Can just pass nil instead of an empty map
We should wait for all 3 pods after we've spawned them, instead of spawning and waiting for them sequentially.
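For reference, a minimal sketch of the create-all-then-wait-once flow this suggestion describes. It assumes the usual e2e test context and imports (ginkgo, context, metav1, e2epod, framework); newTestPod and the pod names are hypothetical placeholders, and only the WaitForPodsRunningReady call is taken from the suggestion above:

```go
ginkgo.By("creating all 3 pods")
for _, name := range []string{"test-pod-1", "test-pod-2", "test-pod-3"} {
	// newTestPod is a hypothetical helper returning the *v1.Pod spec used by the test.
	// Create all pods first, without waiting on each one individually.
	_, err := f.ClientSet.CoreV1().Pods(f.Namespace.Name).Create(context.TODO(), newTestPod(name), metav1.CreateOptions{})
	framework.ExpectNoError(err, "failed to create pod %q", name)
}

// Wait once, after all pods have been created, instead of waiting per pod.
// Uses the framework's PodStart timeout and nil for the ignore-labels map,
// per the review suggestion above.
ginkgo.By("waiting for all 3 pods to be running")
err := e2epod.WaitForPodsRunningReady(f.ClientSet, f.Namespace.Name, 3, 0, f.Timeouts.PodStart, nil)
framework.ExpectNoError(err, "3 pods not found")
```

Creating all pods up front lets them start in parallel, so the test spends at most one PodStart timeout waiting rather than up to three sequential waits.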
Force-pushed from da70741 to 720ffb8 (Compare)
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: claudiubelu, tallclair. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
The Kubernetes project has merge-blocking tests that are currently too flaky to consistently pass. This bot retests PRs for certain kubernetes repos according to the following rules:
You can:
/retest
4 similar comments
…#108593-upstream-release-1.23 Automated cherry pick of #108593: tests: Updates the should delete a collection of pods test
What type of PR is this?
/kind cleanup
/sig testing
What this PR does / why we need it:
We should wait for all 3 pods after we've spawned them, instead of spawning and waiting for them sequentially.
Based on the suggestion: #106183 (comment)
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: