kubectl wait unable to not wait for service ready #80828
Comments
I will take a look at whether we should make the condition list available for all kinds of resources. /assign
/remove-sig usability
Services do not have conditions in their status. The ability to wait for an arbitrary status field value is covered by feature request #83094. /close
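The arbitrary-status-field matching that feature request #83094 asks for later landed in newer kubectl releases as "kubectl wait --for=jsonpath=...". A minimal sketch, assuming a kubectl version with that flag; the wrapper function and the values it is called with are illustrative, and the expected IP still has to be known in advance:

```shell
# Sketch only: assumes a kubectl release that supports --for=jsonpath and a
# configured cluster context at call time. The helper name and arguments
# are hypothetical, not part of the issue thread.
wait_for_lb_ip() {
  # $1: service name, $2: expected ingress IP, $3: timeout (e.g. 60s)
  kubectl wait "service/$1" \
    --for=jsonpath='{.status.loadBalancer.ingress[0].ip}'="$2" \
    --timeout="$3"
}
```

Because the match value is a fixed string, this only helps when the ingress IP is already known, which is exactly the limitation discussed in the commit messages quoted below.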
@liggitt: Closing this issue.
For others seeking a solution until
As the Kubernetes load balancer service for the mongo-express application does not require redeployment on every application change, add a dispatch workflow to perform the deployment of the load balancer service only when triggered by an administrative dispatch event. This GitHub Actions workflow expects an Azure Kubernetes Service cluster to exist and for its name and resource group to be configured as GitHub Actions secrets, as well as the Azure login credentials. After deployment of the persistent volumes manifest, the workflow uses kubectl to wait for the service's ingress IP to be assigned. Unfortunately this cannot be done exclusively with the "kubectl wait" command, as its JSON condition matching requires a fixed string and the IP address cannot be known in advance. Instead, use "kubectl get" in a loop and wait for the load balancer's status to contain an "ingress" key, as suggested in: kubernetes/kubernetes#80828 (comment)
See also this concern regarding how JSON condition matching was implemented for "kubectl wait": kubernetes/kubernetes#83094 (comment)
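The "kubectl get" polling loop described in the commit message above can be sketched as follows. This is a sketch, not the actual workflow step: the function name, timeout, and poll interval are illustrative placeholders, and a kubectl context configured against the target cluster is assumed when the function is actually invoked.

```shell
# Sketch of the polling loop: repeatedly query the service's load balancer
# status until an ingress IP appears or the timeout elapses. All names and
# numbers here are placeholders, not taken from the original workflow.
wait_for_ingress() {
  # $1: service name, $2: timeout in seconds, $3: poll interval in seconds
  elapsed=0
  while [ "$elapsed" -lt "$2" ]; do
    # Non-empty once .status.loadBalancer.ingress[0].ip has been assigned.
    ip=$(kubectl get service "$1" \
      --output jsonpath='{.status.loadBalancer.ingress[0].ip}' 2>/dev/null)
    if [ -n "$ip" ]; then
      echo "$ip"
      return 0
    fi
    sleep "$3"
    elapsed=$((elapsed + $3))
  done
  echo "timed out waiting for ingress IP of service $1" >&2
  return 1
}
```

For cloud providers that populate a hostname rather than an ip field, the jsonpath expression would need adjusting; testing for the presence of the "ingress" key itself, as the linked comment suggests, sidesteps that difference.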
I am trying to use
kubectl wait --for=condition=ready service/<service-name> --timeout=60s
to wait for a service to be ready. However, it doesn't work, and I have to switch to
kubectl wait --for=condition=ready pod -l app=<app-name> --timeout=60s
instead. Are there any commands I can use to wait for a service to be ready with "kubectl wait"? Thanks.
@kubernetes/sig-usability-feature-requests