Prometheus docker container shouldn't run as user nobody #3441
For what reasons?
Sounds good to me. If it's best practice to give users the option to change the UID so they can control file ownership, wanna send a PR for that?
It is a curious question. So:

BUT containers are somewhat different beasts... See:

Basically the `USER` instruction does not bind to a specific user but to a specific UID, and you can always override it easily in Docker/Kubernetes:

```
$ docker run --user 1234:5678 --rm -ti busybox
/ $ id
uid=1234 gid=5678
/ $
```

So unless you f*cked up somewhere else, I find it pretty safe to run a single-process container under any UID.

Any objections to the above are welcome :)

P.S. From what I've seen so far from various vendors, people follow both paths (either creating a separate user, or using 65534 [debian-based/alpine] or 99 [centos-based]).
Kubernetes does not (at least not yet) allow you to change the primary GID of a pod/container. There are some patches making their way through the process, but AFAIK they are not part of any released version of Kubernetes.

Also, "just" changing the UID/GID that the primary process runs as isn't always sufficient. Depending on the security model, changing the ownership of some files may be necessary as part of the startup process.

In any case, choosing 99 as the default UID/GID isn't the best idea, since such a low number is likely to be in use by other things in the overall system. If we're going to stick with a fixed UID/GID, it'd be better to use something semi-random like 9090.
Let's assume we have two containers running in the system (single- or multi-host, it doesn't matter), with volumes attached, and both of them use the same uid:gid pair (say 5555:5555). Hence the attached volumes' files are recursively owned by 5555:5555. What are the possible implications in this case?
I think the correct UID would be

UID 99 is the default nobody in CentOS.
In our docker compose, I was able to use the `user: "1234:5678"` concept mentioned above, along with our Ansible deploy play to set directory permissions on anything volume-mounted. The container runs perfectly.
@silentpete do you have an example of the docker-compose with the user uid/gid configuration? I can set the
Hello @nemo83,

You can see the user in the container with

which I copied out of: https://github.com/silentpete/pg-h.io/blob/master/docker-compose.yml

Hope this helps!

PS, I created a 9090:9090 (user and group) on my host, then set the volume-mounted directory to have the correct permissions for the data being saved out of the container.
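For reference, here is a minimal docker-compose sketch of the pattern being discussed; the image tag, host path, and the 9090:9090 IDs are illustrative assumptions, not the contents of the linked file:

```yaml
version: "3"
services:
  prometheus:
    image: prom/prometheus
    # Run the container process as a pre-created host UID:GID so that
    # files written to the bind mount below are owned by a known account.
    user: "9090:9090"
    volumes:
      # The host directory must already be owned/writable by 9090:9090.
      - /opt/prometheus/data:/prometheus
```

With this in place, `docker exec <container> id` should report `uid=9090 gid=9090`.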
The main problem w/ running containers as non-root is K8s. How do you guys solve the fact that k8s persistent volumes are generally mounted as root-writeable only? It seems to me that the general trend is to let users specify their UIDs, either via remapping in docker, or |
Setting `fsGroup` in the pod's securityContext handles that for volumes. Can't say anything for OpenShift though; I hope they have some solution for that anyway.
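A minimal sketch of that `fsGroup` approach, assuming the image's nobody user is 65534 and a hypothetical PVC named prometheus-data:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: prometheus
spec:
  securityContext:
    runAsUser: 65534    # process UID
    runAsGroup: 65534   # primary GID
    fsGroup: 65534      # supported volumes are chowned to this group and made group-writable
  containers:
    - name: prometheus
      image: quay.io/prometheus/prometheus:v2.43.0
      volumeMounts:
        - name: data
          mountPath: /prometheus
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: prometheus-data
```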
Another issue with fsGroup is that it operates at the pod level only, so it forces all mounts to that GID, which may not be ideal.

OpenShift manages UID/GID itself: it has an admission controller, I believe, which overrides runAsUser / fsGroup. But unfortunately it will forbid your pod entirely if you set those values yourself, so building a single pod definition that works across different k8s flavors isn't possible with non-root users (as far as I can tell).
The solution that I've seen/used is to use an init container. Perhaps not ideal but it works.
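A sketch of that init-container approach, assuming the main container runs as 65534 and the data lives on a volume named data:

```yaml
# Excerpt from a pod spec: an init container fixes volume ownership
# (as root) before the non-root Prometheus container starts.
spec:
  initContainers:
    - name: init-chown-data
      image: busybox
      securityContext:
        runAsUser: 0
      command: ["sh", "-c", "chown -R 65534:65534 /prometheus"]
      volumeMounts:
        - name: data
          mountPath: /prometheus
  containers:
    - name: prometheus
      image: quay.io/prometheus/prometheus:v2.43.0
      volumeMounts:
        - name: data
          mountPath: /prometheus
```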
I have this example: the mounted config

So I guess a little bit more elaborate.
I was just bitten by this. Nomad doesn't have a way to set up the bind users like this, so I am having to set up a bind volume manually, which kind of defeats the point of using the container.
Ran into this issue this week when trying to run Prometheus within OpenShift. I was testing things out with open-telemetry and wanted to run Prometheus as a deployment with the quay.io/prometheus/prometheus:v2.43.0 image. Setting a user with securityContext is not permitted, at least not anything lower than uid: 1001030000.

As per the OpenShift guidelines, https://docs.openshift.com/container-platform/3.11/creating_images/guidelines.html#openshift-specific-guidelines, under the section on supporting arbitrary user IDs, all that would be required to avoid "permission denied" is to add

I built my own image with

and this worked. Would it be alright to add it to the Dockerfile? I can make a PR if needed.
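The exact lines aren't quoted above, but the pattern those guidelines describe is to make the writable directories owned by the root group (GID 0) and group-writable, since OpenShift's arbitrary UIDs always run with GID 0. A hedged sketch of such a derived image, using the stock image's directory layout:

```dockerfile
FROM quay.io/prometheus/prometheus:v2.43.0
USER root
# Arbitrary OpenShift UIDs run with GID 0, so give the root group the
# same permissions as the owning user on the writable directories.
RUN chgrp -R 0 /prometheus /etc/prometheus && \
    chmod -R g=u /prometheus /etc/prometheus
USER nobody
```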
@ohritz please open a merge request for that, I would support it.
I created one with your suggestions: #12728
@marvinnitz18 I'm sorry I never got around to making a PR, I honestly forgot about it. I see you took over, thanks!
@ohritz no worries, let's see if it can get merged.
Hi, Red Hat itself publishes a patched version of the Prometheus image which is suitable for rootless execution on OpenShift. Here's a link to their latest image at the time this comment was published. On lines
Hello from the bug scrub. #12728 is currently under review, so hopefully this will be addressed soon. |
fixes the "persistent volume is not writable" problem... prometheus/prometheus#3441
Perhaps we should rephrase this ticket as (and update the MR to) "Support docker `--user`". The exact user the Dockerfile builds/runs as doesn't matter if we can set it at runtime, e.g. via `--user`.
With the update to 2.0, the Docker container was modified so that Prometheus runs as the user "nobody" (UID/GID 99/99) (#2859). While it's good that Prometheus is no longer running as root, running as "nobody" isn't the best idea either. There are many examples out there of Docker containers that allow changing the UID/GID at runtime which could be borrowed from. At a minimum, a different static UID/GID should be used.