kubectl wait unable to not wait for service ready #80828

Closed
goplusgo opened this issue Jul 31, 2019 · 7 comments
Labels: kind/feature, kind/support, sig/cli

Comments

goplusgo commented Jul 31, 2019

I am trying to use kubectl wait --for=condition=ready service/<service-name> --timeout=60s to wait for a service to be ready. However, it doesn't work, and I have to switch to kubectl wait --for=condition=ready pod -l app=<app-name> --timeout=60s instead.

Are there any commands I can use to wait for a service to be ready with the kubectl wait command? Thanks.

@kubernetes/sig-usability-feature-requests

goplusgo added the kind/support label Jul 31, 2019
k8s-ci-robot added the needs-sig label Jul 31, 2019
goplusgo (Author) commented:

@kubernetes/sig-usability-feature-requests

k8s-ci-robot added the sig/usability and kind/feature labels and removed the needs-sig label Jul 31, 2019
k8s-ci-robot (Contributor) commented:

@JoeyLuffa: Reiterating the mentions to trigger a notification:
@kubernetes/sig-usability-feature-requests

In response to this:

@kubernetes/sig-usability-feature-requests

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

hainesc (Contributor) commented Aug 2, 2019

When you run kubectl describe on a pod, you can see a condition list like this:

Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 

When you run kubectl describe on a service, there is no condition list. That is the reason why kubectl wait does not work on services.

I will take a look at whether we should make a condition list available for all kinds of resources.
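
A quick way to see the difference for yourself (a sketch; <pod-name> and <service-name> are placeholders):

# Pods expose status.conditions, so kubectl wait has a condition to match:
kubectl get pod <pod-name> -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
# prints "True" once the pod is ready

# Services have no status.conditions, so the same query prints nothing:
kubectl get service <service-name> -o jsonpath='{.status.conditions}'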

/assign

vllry (Contributor) commented Oct 9, 2019

/remove-sig usability
/sig cli

k8s-ci-robot added the sig/cli label and removed the sig/usability label Oct 9, 2019
liggitt (Member) commented Nov 5, 2019

Services do not have conditions in their status. The ability to wait for an arbitrary status field value is covered by feature request #83094

/close

k8s-ci-robot (Contributor) commented:

@liggitt: Closing this issue.

In response to this:

Services do not have conditions in their status. The ability to wait for an arbitrary status field value is covered by #83094

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

jonashackt commented Nov 25, 2021

For others seeking a solution until v1.23 is available broadly (see https://stackoverflow.com/a/70108500/4964553):

timeout 10s bash -c 'until kubectl get service/<service-name> --output=jsonpath="{.status.loadBalancer}" | grep "ingress"; do : ; done'
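
From v1.23 on, kubectl wait itself supports JSONPath matching, the feature tracked in #83094. A minimal sketch, assuming a LoadBalancer service with a single ingress IP that is already known (203.0.113.10 here), since the v1.23 form has to match a fixed value:

kubectl wait --for=jsonpath='{.status.loadBalancer.ingress[0].ip}'=203.0.113.10 service/<service-name> --timeout=60s

Because the address is usually not known in advance, the polling loop above stays the practical workaround; more recent kubectl releases also accept a bare JSONPath expression without a value, waiting only for the field to exist.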

chrisd8088 added a commit to chrisd8088/mongo-express that referenced this issue Mar 15, 2023
As the Kubernetes load balancer service for the mongo-express application
does not require redeployment on every application change, add a dispatch
workflow to perform the deployment of the load balancer service only
when triggered by an administrative dispatch event.

This GitHub Actions workflow expects an Azure Kubernetes Service cluster
to exist and for its name and resource group to be configured as
GitHub Actions secrets, as well as the Azure login credentials.

After deployment of the persistent volumes manifest, the workflow
uses kubectl to wait for the service's ingress IP to be assigned.
Unfortunately this cannot be done exclusively with the "kubectl wait"
command as its JSON condition matching requires a fixed string, and
the IP address cannot be known in advance.  Instead, use "kubectl get"
in a loop and wait for the load balancer's status to contain an
"ingress" key, as suggested in:

kubernetes/kubernetes#80828 (comment)

See also this concern regarding how JSON condition matching was
implemented for "kubectl wait":

kubernetes/kubernetes#83094 (comment)
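
For reference, a sleep-throttled variant of that polling loop (a sketch; <service-name> and the 120-second budget are placeholders):

# Poll until the load balancer status contains an "ingress" entry, checking every 2s:
timeout 120s bash -c 'until kubectl get service/<service-name> -o jsonpath="{.status.loadBalancer}" | grep -q ingress; do sleep 2; done'

The sleep keeps the loop from hammering the API server, and grep -q suppresses the matched output.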