kubectl exec by pod label #8876
Interesting idea. I think it will have to wait for our 1.0 launch, but it sounds reasonable. |
yeah, sadly I think we should punt this past 1.0. For now, you can hack around this with:
but that's pretty ugly. |
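The snippet itself was lost in this copy; a plausible sketch of the kind of shell hack meant here, where the label (`app=myapp`) and command (`date`) are hypothetical placeholders and only the first matching pod is used:

```shell
# Look up the first pod matching a label, then exec into it.
# app=myapp and date are placeholders, not from the thread.
kubectl exec "$(kubectl get pods -l app=myapp \
  -o jsonpath='{.items[0].metadata.name}')" -- date
```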
I'm currently using:

```shell
POD_INDEX=1 kubectl exec -p \
  `kubectl get pod -l <labels> \
    -t "{{ with index .items ${POD_INDEX:-0} }}{{ .metadata.name }}{{ end }}"` -- <cmd>
```
|
This is what we've been calling the "-q" pattern -- we should have an output format that just dumps names in a form that can be used on the kubectl command line. See item 16 here: |
Ah, here: #5906 |
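For reference, later kubectl versions do grow such an output format: `-o name` dumps resource names in a form that can be fed back into kubectl (the label below is hypothetical):

```shell
# Print only names, one per line, e.g. "pod/myapp-5d4c7c9b6-xk2lp"
kubectl get pods -l app=myapp -o name
```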
That almost works, but with kubectl 1.6.0 you currently get an error. Update: 1.5.2 returns a different result. |
I definitely agree it should be usable this way. |
Yet another variation of a workaround
|
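The variation itself was elided in this copy; one common shape of such workarounds, with a hypothetical label and command, uses `--no-headers` with a custom column instead of a template:

```shell
# Emit bare pod names, take the first, exec into it
kubectl exec "$(kubectl get pods -l app=myapp --no-headers \
  -o custom-columns=:metadata.name | head -n 1)" -- date
```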
/sig cli |
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. Prevent issues from auto-closing with a /lifecycle frozen comment. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale |
Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten |
/remove-lifecycle rotten |
I have a use-case. I want to create a systemd service that does a kubectl exec and runs a script inside a pod. The problem is systemd doesn't let you do subshells by design, i.e. this doesn't work:
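The unit fragment was elided here; it was presumably shaped like the following (service name, label, and script path are hypothetical). systemd's `ExecStart` does not run a shell, so the `$(...)` substitution is passed through literally and never expanded:

```ini
# /etc/systemd/system/my-exec.service (hypothetical)
[Service]
# Fails: $(...) is not expanded, because ExecStart is not a shell
ExecStart=/usr/bin/kubectl exec $(kubectl get pods -l app=myapp -o name) -- /opt/run.sh
```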
The typical workaround to this systemd feature is to have systemd run bash and pass the command using the -c flag. Since you're in a real bash shell and not systemd, subshells work again:
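The corresponding workaround fragment, again with hypothetical names, wraps the whole command in `bash -c` so a real shell performs the substitution:

```ini
[Service]
# Works: bash expands $(...) before kubectl runs
ExecStart=/bin/bash -c 'kubectl exec $(kubectl get pods -l app=myapp -o name) -- /opt/run.sh'
```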
There are certain disadvantages (related to exit codes and file handles) to using a subshell here, and to avoid them I would have to do something ugly using temp files...
This insecure pattern would be alleviated by being able to pass my selectors directly to kubectl exec. |
Without subshells |
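The snippet was elided; a subshell-free variant, assuming a hypothetical label and command, streams the pod name into `xargs` instead of using command substitution:

```shell
# No $(...): pipe the first matching pod name straight into kubectl exec
kubectl get pods -l app=myapp -o name \
  | head -n 1 \
  | xargs -I {} kubectl exec {} -- uname -a
```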
/remove-lifecycle stale |
Another annoyance, I'd say. |
/remove-lifecycle stale |
It looks like there is no ultimate solution available after 5 years. I recently wrote a simple plugin script, kubectl-tmux-exec, for my own use. Although it does not provide "run in any one container", it works well for "run in all containers". I hope it helps you guys! |
/kind feature |
So on a Kubernetes 1.17.X server, the following works:
|
Server version 1.15 already works like your command above, however I suggest adding
For older versions, I use:
For 1.15, I use:
|
/remove-lifecycle stale |
Not sure if everyone knows, but
solves this issue for me. |
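The command itself was elided, but the replies below indicate it targets a deployment directly, which newer kubectl supports; a sketch with a hypothetical deployment name:

```shell
# kubectl picks one pod belonging to the deployment for you
kubectl exec -it deploy/my-deployment -- sh
```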
Agh! @kvaps, thank you! How long has that been possible without me realising..? 🤦♂️ (NB solves the 'run on any one' case that I care about, but not (unless there's some flag) the 'run on all' case that some others want, or the titular 'by pod label' case.) |
@kvaps, I could kiss you! This did the trick: instead of specifying the pod name in the launch config, I now specify deploy/deploymentName, because that always stays the same, and it works! |
Good workaround, but doesn't cover the other cases. |
/close |
@soltysh: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
What @soltysh said is elaborated on in this answer: https://stackoverflow.com/a/65982378/647151. Basically, use the form described there. |
As above, that solves my case, but not the 'run on all matches' case that's also mentioned in the OP, and which some here are primarily looking for. That's why the closure's unpopular (though I'm happy). I assume I'm not able to, but: |
I ended up here with the same use case. Basically I want clusterssh/pssh-type capability. /open |
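For the "run in all matching pods" case, a simple loop gets most of the way there (label and command are hypothetical; unlike clusterssh there are no multiplexed interactive sessions):

```shell
# Run the same command in every pod matching a label
for pod in $(kubectl get pods -l app=myapp -o name); do
  echo "== ${pod} =="
  kubectl exec "${pod}" -- uname -a
done
```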
I often find myself wanting to exec commands in single-container pods, and since pod names are not "stable", I use a wrapper script to avoid constantly updating pod names. My `get_current_pod_name` returns the name of the first pod matching a label, which covers the case where I want to execute the script in any one (and only one) container. For fetching server logs of different pods/containers, for example, I'm currently passing `get_current_pod_name` an argument to get the n-th pod from the list of matches, but that's kind of ugly... and there's no multiplexing. I found this comment by @brendanburns on #8448 which led me to believe that there might be more people who'd like "-l" argument support:
Would it be reasonable to add label filtering plus some way to say "run in any one container" or "run in all containers" (in that case, maybe no support for "-it" flags...)?
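A sketch of what a `get_current_pod_name`-style helper might look like; the function name comes from the comment above, but the implementation here is an assumption:

```shell
# Print the name of the n-th pod (default: first) matching a label selector.
get_current_pod_name() {
  local selector="$1" index="${2:-0}"
  kubectl get pods -l "${selector}" \
    -o jsonpath="{.items[${index}].metadata.name}"
}

# Usage (hypothetical label): exec into the first matching pod
# kubectl exec "$(get_current_pod_name app=myapp)" -- date
```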