
kubectl exec by pod label #8876

Closed
rhcarvalho opened this issue May 27, 2015 · 51 comments
Labels
area/kubectl kind/feature Categorizes issue or PR as related to a new feature. priority/backlog Higher priority than priority/awaiting-more-evidence. sig/cli Categorizes an issue or PR as relevant to SIG CLI.

Comments

@rhcarvalho
Contributor

I often find myself wanting to exec commands on single-container pods, and since pod names are not "stable", I use some wrapper script to avoid constantly updating pod names:

kubectl exec -p $(get_current_pod_name) my_script.sh

My get_current_pod_name returns the name of the first pod matching a label, which covers the case where I want to execute the script on exactly one container, whichever it is.

For fetching server logs of different pods/containers, for example, I'm currently passing an argument to get_current_pod_name to get the n-th pod from the list of matches, but that's kind of ugly... and there's no multiplexing.
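For illustration, a minimal sketch of such a wrapper (hypothetical; it assumes pods labeled app=myapp and a kubectl new enough to support jsonpath output):

# Hypothetical helper: prints the name of the n-th pod (default: the first)
# matching a fixed label selector.
get_current_pod_name() {
  local index="${1:-0}"
  kubectl get pods -l app=myapp -o jsonpath="{.items[${index}].metadata.name}"
}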

I found this comment by @brendandburns on #8448, which led me to believe that there might be more people who'd like "-l" argument support:

I would think to do that we'd actually want to use label syntax, e.g.

kubectl exec -l stage=production,user=bburns or whatever.

Would it be reasonable to add label filtering + some way to say "run in any one container" or "run in all containers" (in which case, maybe no support for the "-it" flags...)?

@thockin
Member

thockin commented May 27, 2015

Interesting idea. I think it will have to wait for our 1.0 launch, but it sounds reasonable.

@thockin thockin added priority/backlog Higher priority than priority/awaiting-more-evidence. sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. labels May 27, 2015
@brendandburns
Contributor

yeah, sadly I think we should punt this past 1.0. For now, you can hack around this with:

kubectl exec `kubectl get pods -l=<labels> -o=template --template=<golang template>` <cmd>

but that's pretty ugly.

@rhcarvalho
Contributor Author

I'm currently using:

POD_INDEX=1 kubectl exec -p \
            `kubectl get pod -l <labels> \
                             -t "{{ with index .items ${POD_INDEX:-0} }}{{ .metadata.name }}{{ end }}"` -- <cmd>

@bgrant0607
Member

This is what we've been calling the "-q" pattern -- we should have an output format that just dumps names in a form that can be used on the kubectl command line. See item 16 here:
https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/cli-roadmap.md

@bgrant0607
Member

Ah, here: #5906

@bgrant0607 bgrant0607 added team/ux and removed sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. labels Aug 4, 2015
@rcoup

rcoup commented Apr 26, 2017

kubectl exec -i -t $(kubectl get pod -l "role=xyz" -o name) -- bash

almost works. With kubectl 1.6.0, you currently get:

error: invalid resource name "pods/xyz-3930372477-qdq3c": [may not contain '/']

Because -o name now consistently prefixes names with the resource type. kubectl exec should ignore a leading pods/ if it's present.

Update: 1.5.2 returns pod/, 1.6.0 returns pods/. ¯\_(ツ)_/¯

@SimenB
Contributor

SimenB commented Apr 26, 2017

sed 's/pod\///' should work

@rcoup

rcoup commented Apr 26, 2017

@SimenB oh, sure -- the shell has infinite hackability :) But making the -o name output consistently usable seems like a good goal.

For other readers, incorporating @SimenB's command means this will work:

kubectl exec -i -t $(kubectl get pod -l "role=xyz" -o name | sed 's/pods\///') -- bash

@SimenB
Contributor

SimenB commented Apr 26, 2017

I definitely agree the output should be usable with exec without preprocessing; I was just pointing out a workaround.

@mshytikov

Yet another variation of a workaround

kubectl exec -i -t $(kubectl get pod -l "role=xyz" -o jsonpath='{.items[0].metadata.name}') -- bash

@k8s-github-robot k8s-github-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label May 31, 2017
@0xmichalis
Contributor

/sig cli

@k8s-ci-robot k8s-ci-robot added the sig/cli Categorizes an issue or PR as relevant to SIG CLI. label Jun 3, 2017
@k8s-github-robot k8s-github-robot removed the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Jun 3, 2017
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 26, 2017
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 25, 2018
@viteksafronov

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jan 25, 2018
@jimethn

jimethn commented Apr 11, 2018

I have a use-case.

I want to create a systemd service that does a kubectl exec and runs a script inside a pod. The problem is systemd doesn't let you do subshells by design, i.e. this doesn't work:

ExecStart=kubectl exec -t $(kubectl get pods -l=my=selector -o jsonpath='{.items[0].metadata.name}') -- /somecommand.sh

The typical workaround to this systemd feature is to have systemd run bash and pass the command using the -c flag. Since you're in a real bash shell and not systemd, subshells work again:

ExecStart=/bin/bash -c "kubectl exec -t $(kubectl get pods -l=my=selector -o jsonpath='{.items[0].metadata.name}') -- /somecommand.sh"

There are certain disadvantages (related to exit codes and file handles) to using a subshell here, and to avoid them I would have to do something ugly using temp files...

ExecStartPre=/bin/bash -c "kubectl get pods -l=my=selector -o jsonpath='{.items[0].metadata.name}' > /tmp/targetpod"
ExecStartPre=/bin/bash -c "echo kubectl exec -t $(cat /tmp/targetpod) -- /somecommand.sh > /tmp/thecommand.sh && chmod a+x /tmp/thecommand.sh"
ExecStart=/tmp/thecommand.sh
ExecStartPost=rm /tmp/targetpod
ExecStartPost=rm /tmp/thecommand.sh

This insecure pattern could be avoided entirely if I were able to pass my selectors directly to kubectl exec.
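For comparison, the flag requested in this issue would collapse all of the above into a single line. This is a hypothetical sketch only; kubectl exec does not support a -l flag:

ExecStart=kubectl exec -l my=selector -- /somecommand.sh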

@cristi-d

Without subshells:

kubectl get pods | grep <pod name pattern> | cut -d ' ' -f 1 | xargs -I{} kubectl exec {} -- <command>

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 17, 2019
@kvaps
Member

kvaps commented Nov 17, 2019

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 17, 2019
@cben

cben commented Nov 26, 2019

Another annoyance is that -o name --show-kind=false has no effect; it still shows pod/ (kubernetes/kubectl#669).
If, like me, you feel go-template and jsonpath are a "last resort", your next stop is -o custom-columns=NAME:.metadata.name. That's workable, but it adds a NAME header line, so you also need --no-headers.
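Spelled out, that workaround looks something like this (hypothetical label):

kubectl get pods -l app=myapp -o custom-columns=NAME:.metadata.name --no-headers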

I'd say kubectl exec is similar to kubectl logs, so let's learn from that, both for uniformity and because the usability of logs is much better:

  • It accepts kubectl logs pod/... syntax. ✔️
  • It also accepts some "higher" resources like kubectl logs replicaset/..., kubectl logs job/..., kubectl logs deployment/..., even kubectl logs service/..., resolving to a single (arbitrary?) pod under it.
    This IMHO is a hidden gem, not widely known [citation needed] but extremely useful. 💎 👏
  • Help claims it supports -l / --selector syntax to select pods by label, although it doesn't work for me (but I'm on an old version). See the examples after this list.
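Concretely, those patterns look like this (hypothetical names; exact behavior depends on your kubectl version):

kubectl logs deployment/myapp    # resolves to a single pod under the deployment
kubectl logs -l app=myapp        # prints logs from pods matching the selector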

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 24, 2020
@AndrewSav

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 25, 2020
@predatorray

It looks like there is still no complete solution after 5 years. I recently wrote a simple plugin script, kubectl-tmux-exec, for my own use. Although it does not provide "run in any one container", it works well for "run in all containers". I hope it helps!

@brianpursley
Member

/kind feature

@k8s-ci-robot k8s-ci-robot added the kind/feature Categorizes issue or PR as related to a new feature. label Apr 29, 2020
@saamalik

So on a Kubernetes 1.17.X server, the following works:

kubectl -n hubble-system exec  $(kubectl get pod -n hubble-system -l component=mongo -o name) -it mongo

@asrail

asrail commented Jul 26, 2020

So on a Kubernetes 1.17.X server, the following works:

kubectl -n hubble-system exec  $(kubectl get pod -n hubble-system -l component=mongo -o name) -it mongo

Server version 1.15 already works with your command above; however, I suggest adding --field-selector=status.phase==Running to the get command.

For older versions, I use:

kubectl -n namespace exec $(kubectl -n namespace get po -o name --field-selector=status.phase==Running -l app=appname | head -1 | sed 's/pods\?\///' ) -ti

For 1.15, I use:

kubectl -n namespace exec $(kubectl -n namespace get po -o name --field-selector=status.phase==Running -l app=appname) -ti

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 24, 2020
@OliverCole

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 24, 2020
@kvaps
Member

kvaps commented Oct 24, 2020

Not sure if everyone knows, but

kubectl exec deploy/<deployname>

solves this issue for me.
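For an interactive shell, the full form is something like:

kubectl exec -it deploy/<deployname> -- sh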

@OJFord

OJFord commented Oct 24, 2020

Agh! @kvaps, thank you! How long has that been possible without me realising..? 🤦‍♂️

(NB: this solves the 'run on any one' case that I care about, but not (unless there's some flag) the 'run on all' case that some others want, nor the titular 'by pod label' case.)

@willemodendaal

@kvaps, I could kiss you!
I've been trying to get VSCode to attach a debugger to a running Kubernetes pod for two days now. It's tricky because all the documentation says you need to specify the container name in the launch configuration file, but container names in K8s are somewhat dynamic, so that doesn't work (e.g. this article).
I even asked about it on the vscode-docker extension repo: microsoft/vscode#89758 (comment)

But this did the trick: instead of specifying the pod name in the launch config, I specified deploy/deploymentName, because that always stays the same, and it works!
I can now attach a debugger to a running Kubernetes pod in VSCode.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 3, 2021
@OliverCole

Good workaround, but doesn't cover the other cases.
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 3, 2021
@soltysh
Contributor

soltysh commented Mar 31, 2021

kubectl exec, along with a few other commands, can now easily find the first pod in any workload (deployment, stateful set, daemon set, etc.) and execute the desired command.

/close

@k8s-ci-robot
Contributor

@soltysh: Closing this issue.

In response to this:

kubectl exec, along with a few other commands, can now easily find the first pod in any workload (deployment, stateful set, daemon set, etc.) and execute the desired command.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@malthe

malthe commented Nov 30, 2022

What @soltysh said is elaborated on in this answer: https://stackoverflow.com/a/65982378/647151

Basically, use deploy/<deployment-name> as the name passed to kubectl exec.

@OJFord

OJFord commented Dec 2, 2022

As above, that solves my case, but not the 'run on all matches' case that's also mentioned in the OP and that some here are primarily looking for. That's why the closure is unpopular (though I'm happy).

I assume I'm not able to, but:
/open

@andrewpollock

andrewpollock commented Sep 26, 2023

I also ended up here with the same use case (I wanted to run df on all my pods).

Basically, I want clusterssh/pssh-type capability; a rough shell approximation is below.
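For reference, a minimal sketch of that loop (hypothetical label; runs sequentially, unlike pssh):

for pod in $(kubectl get pods -l app=myapp -o name); do
  echo "== ${pod} =="
  kubectl exec "${pod}" -- df -h
done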

/open
