
Support the user flag from docker exec in kubectl exec #30656

Closed
VikParuchuri opened this issue Aug 15, 2016 · 104 comments


@VikParuchuri commented Aug 15, 2016

It looks like docker exec is being used as the backend for kubectl exec. docker exec has the --user flag, which allows you to run a command as a particular user. This same functionality doesn't exist in Kubernetes.

Our use case is that we spin up pods, and execute untrusted code in them. However, there are times when after creating the pod, we need to run programs that need root access (they need to access privileged ports, etc).

We don't want to run the untrusted code as root in the container, which prevents us from just escalating permissions for all programs.

I looked around for references to this problem, but only found this StackOverflow answer from last year -- http://stackoverflow.com/questions/33293265/execute-command-into-kubernetes-pod-as-other-user .

There are some workarounds to this, such as setting up a server in the container that takes commands in, or defaulting to root, but dropping to another user before running untrusted code. However, these workarounds break nice Kubernetes/Docker abstractions and introduce security holes.
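To illustrate the gap, a minimal sketch (container and pod names are placeholders, not from the thread):

```shell
# With Docker directly, a command can be run as any user:
docker exec --user 0 my-container whoami     # runs as root
docker exec --user 1000 my-container whoami  # runs as uid 1000

# kubectl exec has no equivalent flag; commands always run as the
# container's configured user, with no per-exec override:
kubectl exec my-pod -- whoami
```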

@adohe-zz (Contributor) commented Aug 16, 2016

SGTM. @kubernetes/kubectl any thoughts on this?

@smarterclayton (Contributor) commented Aug 16, 2016

It's not unreasonable, but we'd need pod security policy to control the user input and we'd probably have to disallow user by name (since we don't allow it for containers - you must specify UID).

@smarterclayton (Contributor) commented Aug 16, 2016

@sttts and @ncdc re exec

@sttts (Contributor) commented Aug 17, 2016

Legitimate use-case

@killdash9 commented Oct 11, 2016

Any update on this?

@miracle2k commented Oct 12, 2016

My app container image is built using buildpacks. I'd like to open a shell. When I do, I am root, and all the env vars are set. But the buildpack-generated environment is not there. If I open a login shell for the app user (su -l u22055) I have my app environment, but now the kubernetes env vars are missing.

@smarterclayton (Contributor) commented Oct 12, 2016

I thought su -l didn't copy env vars? You have to explicitly do the copy yourself or use a different command.


@adarshaj commented Oct 12, 2016

@miracle2k - Have you tried su -m -l u22055? -m is supposed to preserve environment variables.

@miracle2k commented Oct 14, 2016

@adarshaj @smarterclayton Thanks for the tips. su -m has its own issues (the home dir is wrong), but I did make it work in the meantime. The point, though - and that's why I posted it here - is that I'd like to see "kubectl exec" do the right thing. Maybe even use the user that the Dockerfile defines.
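The trade-off being discussed can be demonstrated without Kubernetes at all. A sketch: KUBERNETES_SERVICE_HOST stands in for the injected variables, and `env -i` mimics the environment reset that a login shell like `su -l` performs.

```shell
# A variable injected into the container's top-level environment:
export KUBERNETES_SERVICE_HOST=10.0.0.1

# A plain (non-login) shell inherits it:
/bin/sh -c 'echo "before: $KUBERNETES_SERVICE_HOST"'   # before: 10.0.0.1

# A login shell rebuilds the environment from profile scripts instead;
# env -i simulates that reset, so the injected variable is gone:
env -i /bin/sh -c 'echo "after: ${KUBERNETES_SERVICE_HOST:-<unset>}"'  # after: <unset>
```

This is why you get either the buildpack environment or the Kubernetes one, but not both, without explicitly copying variables across.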

@jsindy commented Nov 30, 2016

Here is an example how I need this functionality.

The official Jenkins image runs as the jenkins user. I have a persistent disk attached that I need to resize. If kubectl exec had the --user flag I could get a root shell and run resize2fs. Without it, this is an extreme pain.

@chrishiestand (Contributor) commented Jan 12, 2017

An additional use case - you're being security conscious so all processes running inside the container are not privileged. But now something unexpectedly isn't working and you want to go in as root to e.g. install debug utilities and figure out what's wrong on the live system.

@SimenB (Contributor) commented Jan 12, 2017

Installing stuff for debugging purposes is my use case as well. Currently I SSH into the nodes running Kubernetes and use docker exec directly.

@gaballard commented May 24, 2017

What's the status on this? This functionality would be highly useful.

@fabianofranz (Contributor) commented May 26, 2017

I didn't check, but do the --as and --as-group global flags help here? Do they even work with exec? cc @liggitt

@liggitt (Member) commented May 26, 2017

I didn't check, but do the --as and --as-group global flags help here? Do they even work with exec? cc @liggitt

No, those have to do with identifying yourself to the Kubernetes API, not with passing through the chosen uid for the exec call.

@whereisaaron commented Jun 6, 2017

The lack of a user flag is a hassle. My use case: I have a container that runs as an unprivileged user, and I mount a volume on it, but the volume folder is not owned by that user. There is no option to mount the volume with specified permissions. I can't use an entrypoint script to change the permissions because that runs as the unprivileged user. I can't use a lifecycle.preStart hook because that runs as the unprivileged user too. kubectl exec -u root could do it, if the '-u' option existed.

I guess though this should be an additional RBAC permission, to allow/block 'exec' as other than the container user.

Ideally the lifecycle hooks should be able to run as root in the container, even when the container does not. Right now the best alternative is probably to run an init container against the same mount; it's kind of an overhead to start a separate container and mount volumes when really I just need a one-line command run as root at container start.
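The init-container alternative mentioned above could look like this (a sketch only: pod name, image, uid, and paths are hypothetical; the init container runs as root solely to fix ownership before the unprivileged app starts):

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: app-with-fixed-perms
spec:
  initContainers:
  - name: fix-perms
    image: busybox
    command: ["sh", "-c", "chown -R 1000:1000 /data"]
    securityContext:
      runAsUser: 0          # root, only for this one-shot step
    volumeMounts:
    - name: data
      mountPath: /data
  containers:
  - name: app
    image: my-app:latest    # placeholder image
    securityContext:
      runAsUser: 1000       # unprivileged app user
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    emptyDir: {}
EOF
```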

@xiangpengzhao (Member) commented Jun 23, 2017

/sig cli

@skorski commented Jun 27, 2017

+1 for this feature. Not having this makes debugging things a lot more painful.

@johnjjung commented Jul 10, 2017

+1 for this feature. I have to rebuild my Docker container and make sure the Dockerfile has USER root as the last line, then debug, then disable this.

The docker command line has a --user flag for this.

@BenAbineriBubble commented Jul 10, 2017

@johnjjung, if you have SSH access to the node you can connect to the container using docker with the --user flag, which might save you a bit of time.
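That workaround looks roughly like this (a sketch; node and pod names are placeholders, and it assumes a Docker-based runtime and SSH access to the node):

```shell
# SSH to the node the pod is scheduled on:
ssh admin@node-1

# Find the container ID belonging to the pod:
docker ps | grep my-pod

# Exec in as root (uid 0), which kubectl exec cannot do:
docker exec -it -u 0 <container-id> /bin/sh
```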


@jiaj12 commented Aug 31, 2017

+1, this is really an issue. I have to SSH to the node and then run docker exec, which is annoying.

@sttts (Contributor) commented Aug 31, 2017

/cc @frobware

@cristichiru commented Apr 9, 2020

In those cases, it seems that the other options presented here, like kubectl plugins, might be the only way - assuming there is no access to the docker daemon either.

@ksiadz commented May 5, 2020

+1

@techdragon commented May 7, 2020

Well, the KEP template is here: https://github.com/kubernetes/enhancements/tree/master/keps/NNNN-kep-template

I figured I'd see how much work it is to write one and... yeah, I'm not the person to write this. The template lost me at checklist item one, **Pick a hosting SIG.** Anyone more familiar with the process want to start the draft? I just want a place to stick my 👍 in support of the proposal as an active Kubernetes user.

This may sound flippant, but I see about a half dozen well coded / scripted / written workarounds to this issue, so clearly there are people who are in a better position to draft proposed technical solutions than me.

@Nowaker commented May 7, 2020

I feel like Kubernetes became the new OpenStack where nothing can be achieved in a reasonable timeframe because of the PROCESS.

@whereisaaron commented May 8, 2020

@VikParuchuri's original use case here is to be able to debug/troubleshoot containers as root, even though the container itself is running as an untrusted user. Good use case, because, if solved, it encourages us all to run containers as non-root users. 🎉

Before you prepare a KEP for docker exec have a quick check that k8s ephemeral debug containers don't address this use case for you.

docker exec --user is only one way to address that use case, and it relies on the docker runtime being used. As k8s moves to containerd, dockerd and friends are optional or not even installed any more, so it is possibly not a forward-looking option?

Another k8s-native way to address this use case is ephemeral debug containers. Say you have a container running as an untrusted user. Debug containers allow you to start a temporary container in the same process space as the target container, but running as root (or whoever). This approach has some significant advantages over the exec approaches, in particular you can bring any debug tooling and utilities you need with you in the image for the debug container. So instead of bloating your target container image with utils and editors etc. just in case you need to exec in (🐑 .., guilty!), you can instead have a nice big swiss army knife debug container image and keep your application images clean. You can use bash in your debug container when your target only has sh. You can even debug containers that have no shell to exec at all, like a single binary containers, or distroless containers.

E.g. use busybox to debug a container as root.

kubectl alpha debug -it ephemeral-demo --image=busybox --target=ephemeral-demo

I think this is arguably a better model, in that it treats the containers as isolated processes that you 'attach' to for debugging, rather than like mini-VMs you shell into. One limitation is that you can't inspect the target's filesystem directly unless you can share an external mount or 'empty' mount; however, since you share the process namespace with the target, you can still reach the target container's filesystem via /proc/$pid/root.

Ephemeral debug containers have already navigated the PROCESS :-) and been implemented. These containers have been alpha since ~1.16, and the 1.18 kubectl includes the alpha debug command.

@mrbobbytables (Member) commented May 8, 2020

Thanks for the thoughtful reply @whereisaaron :) I think that captures things quite well.

I figured I'd see how much work it is to write one and... yeah I'm not the person to write this, The template lost me at checklist item one Pick a hosting SIG. anyone more familiar with the process want to start the draft? I just want a place to stick my 👍 in support of the proposal as an active Kubernetes user.

KEPs can be quite daunting, but I want to provide a little context around them. Kubernetes itself is very large; potential changes have a very large blast radius, both for the contributor base and users. A new feature might seem easy to implement but has the potential to broadly impact both groups.

We delegate stewardship of parts of the code base to SIGs, and it is through the KEPs that one or more of the SIGs can come to consensus on a feature. Depending on what the feature does, it may go through an API review, be evaluated for scalability concerns, etc.

All this is to ensure that what is produced has the greatest chance of success and is developed in a way that the SIG(s) would be willing to support. If the original author(s) step away, the responsibility of maintaining it falls to the SIG. If, say, a feature was promoted to stable and then flagged for deprecation, it'd be a minimum of a year before it could be removed following the deprecation policy.

If there's enough demand for a feature, usually someone that's more familiar with the KEP process will offer to help get it going and shepherd it along, but it still needs someone to drive it.

In any case, I hope that sheds at least a bit of light on why there is a process associated with getting a feature merged. 👍 If you have any questions, please feel free to reach out directly.

@AndrewSav commented May 8, 2020

The disadvantage is I don't think you can inspect the filesystem of the target, unless you can share an external mount or 'empty' mount.

For me, inspecting the filesystem as root, and running utilities that can interact with the filesystem as root, is the number one reason for wanting this feature. In short, this suggestion does not solve my problem at all.

@whereisaaron commented May 10, 2020

The disadvantage is I don't think you can inspect the filesystem of the target

I was wrong about that: because your injected debug container shares the process namespace with your target container, you can access the filesystem of any process in the target container from your debug container. That includes both the container filesystems and any filesystems mounted into those containers.

Container filesystems are visible to other containers in the pod through the /proc/$pid/root link.

https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/#understanding-process-namespace-sharing
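For example, from inside an injected debug container that shares the pod's process namespace (a sketch; the PID is an example you would find with ps, not a fixed value):

```shell
# List processes in the shared namespace to find the target's main PID:
ps ax

# Browse the target container's root filesystem via the /proc link,
# assuming its main process turned out to be PID 7:
ls /proc/7/root/
cat /proc/7/root/etc/os-release
```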

@fejta-bot commented Aug 8, 2020

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@zerkms (Contributor) commented Aug 8, 2020

/remove-lifecycle stale

@AndrewSav commented Sep 4, 2020

kubectl alpha debug -it ephemeral-demo --image=busybox --target=ephemeral-demo

error: ephemeral containers are disabled for this cluster

@whereisaaron It looks like most cloud providers do not support this, and for on-prem we can just go to a node and docker exec into the container. So again, the usefulness seems quite limited.

Also, access via /proc/$pid/root is not what I'd like; I would like direct access, not via a "side window". For example, running utils like apt/apk in the container is not easy when the root filesystem is not where they expect it.
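One partial workaround for the apt/apk problem, sketched under the assumptions that the target's main process is PID 7 and that its image contains a shell, is to chroot into the target's root so tools see / where they expect it:

```shell
# From the debug container, make the target's filesystem the apparent root:
chroot /proc/7/root /bin/sh

# Package managers now operate on the target's filesystem, though still
# subject to the debug container's privileges and kernel namespaces.
```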

@bmaehr commented Nov 8, 2020

I had a similar problem: I needed to create some directories and links and add permissions for the non-root user on an official image deployed by an official Helm chart (Jenkins).

I was able to solve it by using the exec-as plugin.

@rcny commented Dec 3, 2020

With the planned Docker deprecation and subsequent removal, when will this be addressed? Ephemeral containers are still in alpha. What is the stable alternative without using Docker as the CRI?

@gjcarneiro commented Dec 3, 2020

Besides being alpha, ephemeral containers are a lot more complicated to use than a simple kubectl exec --user would be.

@xplodwild commented Dec 4, 2020

Another use case for this is manually executing scripts in containers. For example, NextCloud's occ maintenance script must be run as www-data. There is no sudo or similar in the image, and the docs advise using docker exec -u 33 in a Docker environment.
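For reference, a sketch of the contrast (container and pod names are placeholders; uid 33 is www-data in that image, per the comment above):

```shell
# Docker: run occ as www-data (uid 33), as the NextCloud docs advise:
docker exec -u 33 nextcloud-container php occ maintenance:mode --on

# kubectl has no equivalent flag; this runs as the image's default user,
# so occ's own user check rejects it:
kubectl exec nextcloud-pod -- php occ maintenance:mode --on
```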


@karimhm commented Jan 25, 2021

4 years have passed and this feature is still not implemented. WOW!

@dims (Member) commented Feb 1, 2021

/close

please see the last comment from Clayton here: #30656 (comment)

@dims closed this Feb 1, 2021
@immanuelfodor commented Feb 1, 2021

When a KEP is opened, please link it back here so we can follow it :)

@AndrewSav commented Feb 1, 2021

@dims I'm confused, why is this closed? It is not fixed, and it is also stated in #30656 (comment) that this is not a case of "won't fix", so why has it been closed?

@dims (Member) commented Feb 2, 2021

@AndrewSav there is no one working on it and no one willing to work on it, so I'm closing this to reflect reality, as by default it is "won't fix". This has gone on for 4 years, and I don't want to continue giving the impression that this is on anyone's radar, since it clearly is not.

@bronger commented Feb 2, 2021

Anyone willing to push this forward would have to address the "security implications" Clayton mentions. This might make contributors reluctant, so what is meant by that?

I would have thought that if I am allowed to kubectl exec to a pod, I am the full-fledged master of that pod anyway.

@ssup2 commented Feb 18, 2021

Hi. To solve this issue, I'm making a tool called "kpexec". Please try it and give me feedback. Thanks.

https://github.com/ssup2/kpexec

@kam1kaze commented Feb 18, 2021

btw, there is a kubectl plugin for that too.

https://github.com/jordanwilson230/kubectl-plugins#kubectl-ssh
