Support the user flag from docker exec in kubectl exec #30656
Comments
k8s-github-robot added area/kubectl and team/cluster labels on Aug 15, 2016
SGTM. @kubernetes/kubectl any thoughts on this?
It's not unreasonable, but we'd need pod security policy to control the user input, and we'd probably have to disallow specifying the user by name (since we don't allow that for containers - you must specify a UID).
Legitimate use-case.
killdash9 commented Oct 11, 2016:
Any update on this?
miracle2k commented Oct 12, 2016:
My app container image is built using buildpacks. I'd like to open a shell. When I do, I am root, and all the env vars are set, but the buildpack-generated environment is not there. If I open a login shell for the app user (
I thought
adarshaj commented Oct 12, 2016:
@miracle2k - Have you tried
miracle2k commented Oct 14, 2016:
@adarshaj @smarterclayton Thanks for the tips.
jsindy commented Nov 30, 2016:
Here is an example of how I need this functionality: the official Jenkins image runs as the user jenkins. I have a persistent disk attached that I need to resize. If kubectl exec had the --user flag, I could get a root shell and run resize2fs. Without it, this is an extreme pain.
An additional use case: you're being security conscious, so all processes running inside the container are unprivileged. But now something unexpectedly isn't working, and you want to go in as root to e.g. install debug utilities and figure out what's wrong on the live system.
Installing stuff for debugging purposes is my use case as well. Currently I
gaballard commented May 24, 2017:
What's the status on this? This functionality would be highly useful.
I didn't check, but does the
No, those have to do with identifying yourself to the Kubernetes API, not with passing through the chosen UID for the exec call.
k8s-github-robot added the needs-sig label on May 31, 2017
whereisaaron commented Jun 6, 2017:
The lack of the user flag is a hassle. My use case: I have a container that runs as an unprivileged user, and I mount a volume into it, but the volume folder is not owned by that user. There is no option to mount the volume with specified permissions. I can't use an entrypoint script to change the permissions, because it runs as the unprivileged user, and I can't use a lifecycle.postStart hook, because that runs as the unprivileged user too.

I guess this should be an additional RBAC permission, to allow/block 'exec' as other than the container user. Ideally, lifecycle hooks should be able to run as root in the container, even when the container itself does not.

Right now the best alternative is probably to run an init container against the same mount; kind of an overhead to start a separate container and mount volumes, when really I just need a one-line command run as root at container start.
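The init-container alternative mentioned above can be sketched as follows (a minimal sketch; the image name, UID 1000, and the /data path are illustrative assumptions, not taken from the thread):

```yaml
# The init container runs as root and fixes ownership of the shared volume
# before the unprivileged app container starts.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  initContainers:
  - name: fix-perms
    image: busybox
    command: ["sh", "-c", "chown -R 1000:1000 /data"]
    securityContext:
      runAsUser: 0          # root, only for this one-shot step
    volumeMounts:
    - name: data
      mountPath: /data
  containers:
  - name: app
    image: example/app      # illustrative image
    securityContext:
      runAsUser: 1000       # the unprivileged app user
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    emptyDir: {}
```

This pays the overhead the commenter describes (an extra container start and volume mount) for what is really a one-line chown.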
/sig cli
k8s-ci-robot added the sig/cli label on Jun 23, 2017
k8s-github-robot removed the needs-sig label on Jun 23, 2017
skorski commented Jun 27, 2017:
+1 for this feature. Not having it makes debugging a lot more painful.
johnjjung commented Jul 10, 2017:
+1 for this feature. I have to rebuild my container image with USER root as the last line of the Dockerfile, debug, and then disable it again. The docker command line has a --user flag for this.
BenAbineriBubble commented Jul 10, 2017:
@johnjjung, if you have SSH access to the node, you can connect to the container using docker with the --user flag, which might save you a bit of time.
johnjjung commented Jul 10, 2017:
Hmm, awesome, let me try this.
jiaj12 commented Aug 31, 2017:
+1, this is really an issue. I have to SSH to the node and then run docker exec, which is annoying.
/cc @frobware
/sig node
k8s-ci-robot added the sig/node label on Aug 30, 2018
Nowaker commented Aug 31, 2018
alexandersauerbier commented Oct 26, 2018:
+1
haizaar commented Nov 28, 2018:
While we are waiting for this to be properly supported, an intermediate solution can be to run your docker CMD with
greenoid commented Dec 14, 2018:
+1
sniederm commented Feb 15, 2019:
I would also appreciate such a -u flag. +1. Just an idea: for example, something like a

This would avoid having only a subset of the underlying functionality of the container runtime in kubectl. Furthermore, it saves effort, since there is no need to map and abstract the supported arguments from the kubelet layer all the way down to the container for every supported container type. So in summary, without the

BTW: thanks to @SimenB for the hint to SSH into the node and use Docker directly. That solved my problem temporarily. Using Minikube I was able to do the following to log in as root:
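The Minikube commands themselves are cut off above; the node-level trick looks roughly like this (a hedged sketch, not the commenter's actual commands; the io.kubernetes.pod.name label lookup and /bin/bash path are assumptions, and it requires the docker runtime):

```shell
# Wrapped in a function for illustration: SSH into the single Minikube node
# and use docker's --user flag, which kubectl exec does not offer.
root_shell_minikube() {
  local pod="$1"
  # The kubelet labels docker containers with io.kubernetes.pod.name,
  # so the container ID can be resolved on the node itself.
  minikube ssh -- "sudo docker exec -it -u root \$(sudo docker ps -q -f label=io.kubernetes.pod.name=$pod | head -1) /bin/bash"
}
```

Usage would be e.g. `root_shell_minikube mypod` from the host running Minikube.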
Nowaker commented Mar 26, 2019:
A workaround script that automates the unpleasant. SSH access to the node is required. Usage:
./shell-into-pod-as-root.sh podname
./shell-into-pod-as-root.sh podname sh
Enjoy!
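The script body itself is not included in the comment; a hedged reconstruction of what such a helper typically does (the jsonpath fields are real kubectl syntax, the rest is an assumption, and namespace handling is omitted):

```shell
#!/usr/bin/env bash
# Hedged reconstruction of a shell-into-pod-as-root helper; not the author's
# actual script. Wrapped as a function so nothing runs on load.
shell_into_pod_as_root() {
  local pod="$1" cmd="${2:-bash}"
  local node container
  # Resolve the node the pod is scheduled on and its (docker) container ID.
  node=$(kubectl get pod "$pod" -o jsonpath='{.spec.nodeName}')
  container=$(kubectl get pod "$pod" -o jsonpath='{.status.containerStatuses[0].containerID}' | sed 's|docker://||')
  # SSH to the node and use docker's --user flag, which kubectl exec lacks.
  ssh -t "$node" "docker exec -it -u root $container $cmd"
}
```

Note this assumes the docker runtime and direct SSH access to every node the pod may land on.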
AndrewSav commented Mar 27, 2019:
@Nowaker how do you handle namespaces?
fejta-bot commented Jun 25, 2019:
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
k8s-ci-robot added the lifecycle/stale label on Jun 25, 2019
Gregy commented Jun 25, 2019:
/remove-lifecycle stale
k8s-ci-robot removed the lifecycle/stale label on Jun 25, 2019
bryanhuntesl commented Jun 25, 2019:
Yeah - it's trivial to just use docker exec to do this; it's mostly about consistency. Multi-user docker containers are a bit of a joke, really - a legacy from converting a VM to a container. I'm dealing with this with Grafana at the moment; I suppose this will pass with time.
jordanwilson230 commented Jun 25, 2019:
@bryanhuntesl There's discussion of workarounds above that don't require manually SSH'ing to a node. You can also try this plugin: https://github.com/jordanwilson230/kubectl-plugins
gjcarneiro commented Jun 25, 2019:
What if you don't want to allow users to SSH into a node? Allowing users SSH access to a node, as well as access to docker, can be a security risk: docker knows nothing about namespaces or Kubernetes permissions. SSH is not a proper solution, IMHO.
bryanhuntesl commented Jun 25, 2019:
I second that opinion - using an out-of-band mechanism to gain direct access increases the potential attack surface.
jordanwilson230 commented Jun 25, 2019:
There are solutions that do not require SSH, @gjcarneiro. Also, @bryanhuntesl, a user must first add their public SSH key in the Compute Metadata before they are allowed SSH access to a node (if on GCP).
max88991 commented Aug 1, 2019:
@liggitt It's been three years since this topic started; any conclusions?
I'll write a KEP real soon. I wanted to do it to learn and go through the process, and hopefully make a contribution.
gitnik commented Aug 1, 2019:
I am not sure if this solution has been mentioned before, but what we did as a workaround is have all our containers include a script that logs you in as the correct user, plus a motd.

Dockerfile:
USER root
RUN echo "su -s /bin/bash www-data" >> /root/.bashrc
# this exit statement is needed in order to exit from the new shell directly, or else you need to type exit twice
RUN echo "exit" >> /root/.bashrc
# /var/www is www-data's home directory
COPY motd.sh /var/www/.bashrc

motd.sh:
RED='\033[0;31m'
YELLOW='\033[0;33m'
echo -e "${RED}"
echo "##################################################################"
echo "# You've been automatically logged in as www-data.               #"
echo "##################################################################"
echo -e "${YELLOW} "
echo "If you want to login as root instead:"
echo -e "$(if [ "$KUBERNETES_PORT" ]; then echo 'kubectl'; else echo 'docker'; fi) exec -ti $(hostname) -- bash --noprofile --norc"
TEXT_RESET='\033[0m'
echo -e "${TEXT_RESET} "
VikParuchuri commented Aug 15, 2016:
It looks like docker exec is being used as the backend for kubectl exec. docker exec has the --user flag, which allows you to run a command as a particular user. This same functionality doesn't exist in Kubernetes.

Our use case is that we spin up pods and execute untrusted code in them. However, there are times when, after creating the pod, we need to run programs that need root access (they need to access privileged ports, etc.).

We don't want to run the untrusted code as root in the container, which prevents us from just escalating permissions for all programs.

I looked around for references to this problem, but only found this StackOverflow answer from last year: http://stackoverflow.com/questions/33293265/execute-command-into-kubernetes-pod-as-other-user

There are some workarounds, such as setting up a server in the container that takes commands in, or defaulting to root but dropping to another user before running untrusted code. However, these workarounds break nice Kubernetes/Docker abstractions and introduce security holes.
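The "default to root, drop to another user" workaround mentioned above can be sketched like this (a minimal sketch; the base image, user name, and script path are illustrative assumptions, not from the issue):

```dockerfile
# The container starts as root, so privileged setup (binding low ports,
# fixing permissions) stays possible at runtime.
FROM python:3-slim
RUN useradd --create-home runner
COPY untrusted.py /app/untrusted.py
USER root
# The untrusted code itself is launched as the unprivileged user; anything
# run before the su (or via exec into the container) still runs as root.
CMD ["su", "-s", "/bin/sh", "runner", "-c", "python /app/untrusted.py"]
```

As the issue notes, this inverts the desired default: the container's user is root, and the drop to the unprivileged user is the image's own responsibility rather than being enforced by Kubernetes.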