kubectl wait on arbitrary jsonpath #83094
/sig cli
The reports of conditions' death have been greatly exaggerated :) Rather than a specific phase field, a jsonpath evaluation and expected value might be interesting.
Something like the following?
Guess what I want to see is: wait until all the items reach the phase, or wait until the single item reaches the phase I specified. Right?
@deads2k Can you please suggest some command lines that will be useful for a "generic wait"?
/assign
Conditions are generally better than phase. Phases were a mistake in our original pod implementation because they implied a single unified state machine, but we found that we instead had multiple orthogonal states that are better represented as separate conditions. You could try
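For reference, a condition-based wait of the kind suggested here looks like this; the deployment name is a placeholder for illustration:

```shell
# Block until the Deployment reports the Available condition as true
# ("my-app" is an assumed, illustrative name).
kubectl wait --for=condition=Available deployment/my-app --timeout=60s
```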
A bit more from chat — dims 10:18 AM: if the resource.group/resource.name | resource.group has multiple items, then do we ensure that the jsonpath is true for each? What happens when there is nothing selected?
@deads2k But how can you capture when a pod is up and running, but not ready? Putting Ansible aside, this helm manifest deploys a pod with Vault, but it won't be ready until the application has been configured; so it is 0/1 but in Running state (due to a readiness probe):

NAME      READY   STATUS    RESTARTS   AGE
vault-0   0/1     Running   0          3m55s

Currently it's required to execute some kubectl exec commands into this pod for it to become ready. As I'm automating the process with Ansible, I've tried creating a task with the bare command.
As a workaround, I'm handling the not-ready error by adding retries in Ansible, but I think it could be interesting to have more control from K8s.
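With the jsonpath support that was eventually merged, this Running-but-not-Ready case can be expressed directly against the pod's phase instead of its Ready condition; a sketch, reusing the vault-0 pod shown above:

```shell
# Wait on .status.phase rather than the Ready condition, so a pod held
# back by its readiness probe still satisfies the wait.
kubectl wait --for=jsonpath='{.status.phase}'=Running pod/vault-0 --timeout=120s
```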
I have a similar use case that could benefit from this feature. I'm waiting for a Kubernetes Job that runs an integration test, and I want to wait for the output in my Jenkins CI/CD tool. Now we wait for completion, but if the job fails, the waiting continues until the timeout.

Logical conditions: I was expecting the CLI to take multiple arguments as conditions, but that doesn't work (yet 🤞). As a Job can only have one condition, providing multiple options as OR logic seems to be sufficient for all use cases.

Generic: going by this thread, I could use a jsonpath expression instead, but if I understand correctly, logic operators are currently lacking in the kubectl JSONPath syntax.
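One common workaround for the complete-or-failed Job case is to race two kubectl wait processes and return as soon as either finishes; a sketch, with the Job name assumed for illustration:

```shell
#!/usr/bin/env bash
# Race "complete" against "failed" for job/integration-test (placeholder name).
kubectl wait --for=condition=complete job/integration-test --timeout=5m &
kubectl wait --for=condition=failed job/integration-test --timeout=5m &
# wait -n (bash 4.3+) returns as soon as either background wait finishes.
wait -n
# Clean up whichever wait is still running.
kill %1 %2 2>/dev/null
```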
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Any update on this thread?
/assign
/unassign @hpandeycodeit
We also need this. It's flexible enough to cover lots of use cases.
It's been more than a month.
Will work on this, incorporating PRs #92277 and #83959. /assign
Update (10/4): A bit busy, but making little progress. Have tested the JSONPath function (https://github.com/kubernetes/client-go/blob/master/util/jsonpath/jsonpath.go); will start implementation soon.
Update (10/19): Finished implementation, will clean up and add more tests soon.
Update (10/25): Adding unit test cases.
Update (10/27): Finished testing. Ready to review and merge.
Which version of kubectl has this? Any example, btw?
@ldemailly it's available from kubectl version 1.23, you can try
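The syntax that shipped takes a JSONPath expression and an expected value; for example (deployment name assumed for illustration):

```shell
# Available since kubectl 1.23: wait until the JSONPath expression
# evaluates to the given value ("my-app" is a placeholder name).
kubectl wait --for=jsonpath='{.spec.replicas}'=3 deployment/my-app --timeout=60s
```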
It seems that the current …
Is it possible to modify the existing syntax of …?
As the Kubernetes load balancer service for the mongo-express application does not require redeployment on every application change, add a dispatch workflow to perform the deployment of the load balancer service only when triggered by an administrative dispatch event. This GitHub Actions workflow expects an Azure Kubernetes Service cluster to exist and for its name and resource group to be configured as GitHub Actions secrets, as well as the Azure login credentials.

After deployment of the persistent volumes manifest, the workflow uses kubectl to wait for the service's ingress IP to be assigned. Unfortunately this cannot be done exclusively with the "kubectl wait" command, as its JSON condition matching requires a fixed string and the IP address cannot be known in advance. Instead, use "kubectl get" in a loop and wait for the load balancer's status to contain an "ingress" key, as suggested in: kubernetes/kubernetes#80828 (comment)

See also this concern regarding how JSON condition matching was implemented for "kubectl wait": kubernetes/kubernetes#83094 (comment)
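The get-in-a-loop workaround described above can be factored into a small shell helper; the service name and polling interval below are illustrative assumptions:

```shell
# poll_until: run a command repeatedly until it prints non-empty output,
# then echo that output. $1 = command string, $2 = seconds between tries.
poll_until() {
  cmd=$1; interval=${2:-5}
  while out=$($cmd); [ -z "$out" ]; do
    sleep "$interval"
  done
  printf '%s\n' "$out"
}

# Usage against a cluster ("my-lb" is a placeholder service name):
# poll_until "kubectl get service my-lb -o jsonpath={.status.loadBalancer.ingress[0].ip}" 5
```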
Would be highly appreciated to have
@lauchokyip can you please advise if the implementation that you merged can be used to reach the goal mentioned in these two comments (1 and 2)?
Hi @minherz, thanks for bringing it up. I used to contribute during my free time, but currently I am looking for new opportunities, so I don't have the resources to do that. Would you be able to bring it up at the sig-cli meeting to get their attention? Much appreciated.
@lauchokyip thank you. I've submitted a discussion topic for the next sig-cli bi-weekly meeting. I will post the results later.
Following today's SIG CLI meeting, I have created a new feature request #117761. |
@jonashackt I hope that the change in version 1.28 should allow waiting for a K8s service to get its IPs. The syntax is supposed to be like:

kubectl wait --for=jsonpath='{.status.loadBalancer.ingress}' service/my_lb_service

or the old style:

kubectl wait --for=jsonpath='{.status.loadBalancer.ingress[0].ip}' service/my_lb_service
This works in 1.28 |
@volatilemolotov thank you for confirming that! |
Tests on ARO were failing with: "error: no matching resources found". This is because it's not possible to watch a resource that hasn't been created. See kubernetes/kubernetes#83094 & radius-project/radius#6914. Fixes: 4e1bb64 ("integration: Disable installing Security Profile Operator") Signed-off-by: Mauricio Vásquez <mauriciov@microsoft.com>
kubectl currently can wait on condition or delete:
https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/kubectl/pkg/cmd/wait/wait.go#L110-L111

In addition to the above, can we please add the ability to wait on the high-level summary that we have in phase?
https://cs.k8s.io/?q=Phase%20.*json%3A%22phase&i=nope&files=&repos=kubernetes/kubernetes

Why? Looks like we are not adding conditions anymore, so CRD(s) seem to have picked up the phase at least. It will help avoid having to write loops like this (from https://github.com/kubernetes-sigs/cluster-api-provider-gcp/pull/175/files). Another example: https://github.com/kubernetes-incubator/kube-aws/blob/master/contrib/cluster-backup/restore.sh#L80-L86