kubectl wait timeout argument is poorly documented and ill-suited to waiting on multiple resources #1219
/sig cli
Here is a way to reproduce:
Output:
^ shows the command took 10s (because Replicas=2) when the timeout itself was supposed to be only 5s.
/triage accept
This was discussed on the bug scrub today and we agree that this is not good behavior. To solve this we will need to use either contexts or goroutines to run these waiters in parallel, to better match the user expectation here.
@mpuckett159: The label(s) In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/triage accepted
Hello, we have a similar problem: we expect kubectl wait to wait for X seconds in total with `--timeout=Xs`, e.g.:
However, it waits for X seconds * the number of deployments with not-ready pods. Could you please also consider our scenario in the fix? Kind regards, Vitaly
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
If no one is willing to take it, I can work on that.
/assign
Workaround for those affected: wrap the command in the `timeout` utility, with a small buffer on top of kubectl's own timeout: `timeout $((300+5)) kubectl wait --for=condition=Ready --all pod --timeout=300s`
This is just a re-submit of #754 which, despite being confirmed & assigned, was closed as stale without any fix.
What happened:
Run `kubectl wait` with a selector matching more than one resource and a timeout.
What you expected to happen:
The timeout should apply to the wait command as a whole, not to the individual resources. With the timeout applying to resources sequentially, waiting on more than one resource with any kind of timeout is basically unusable.
How to reproduce it (as minimally and precisely as possible):
kubectl wait pod --selector=... --timeout=30s
Anything else we need to know?:
cc @eranreshef, the original reporter, and @JabusKotze, who assigned the prior issue to themselves