Ephemeral containers volume mount permissions differ from normal containers #32351
@howardjohn: This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the `triage/accepted` label. The instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
/sig storage cc @verb |
cc @verb does this need to be resolved before 1.23, since ephemeral containers are beta this release? /milestone v1.23 |
Thanks for pinging this, I missed it. Investigating... /assign |
I'm having trouble reproducing this. The projected volumes doc says that the file mode should be set to the volume's `defaultMode`, but the owner is sometimes taken from the container's `securityContext`.
In that case the token gets the expected owner, but the mode is not the `defaultMode`.
But if I try to reference the volume twice with different settings, then the container gets the expected mode (in both containers).
Now if I add an ephemeral container to the pod, I see the same file ownership as before.
So this differs, but is it the incorrect behavior for ephemeral containers? I'm not sure. I'm also not sure what behavior you're seeing for your first test case. @howardjohn could you provide more succinct test cases with pods rather than deployments, and include the output? @msau42 @jpeeler any idea what's the intended pod-vs-container securityContext behavior for file ownership of projected volumes? |
t2 reproducer:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shell
spec:
  containers:
  - args:
    - /bin/sleep
    - infinity
    image: howardjohn/alpine-shell
    name: shell
    securityContext:
      runAsUser: 1338
  - args:
    - /bin/sleep
    - infinity
    image: howardjohn/alpine-shell
    name: ephemeral
    securityContext:
      runAsUser: 1337
      runAsGroup: 1337
    volumeMounts:
    - mountPath: /var/run/secrets/tokens
      name: token
  volumes:
  - name: token
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          audience: test
          expirationSeconds: 43200
          path: token
```

Testing we can access the token:
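As an aside, the `defaultMode: 420` in these manifests is a decimal integer, as the Kubernetes API requires; 420 decimal is octal `0644` (`rw-r--r--`). A quick sanity check:

```python
# defaultMode in the Kubernetes API is written in decimal;
# 420 decimal corresponds to the familiar octal mode 0644.
default_mode = 420
print(oct(default_mode))      # prints 0o644
assert default_mode == 0o644  # rw-r--r--
```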
With ephemeral containers:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shell
spec:
  containers:
  - args:
    - /bin/sleep
    - infinity
    image: howardjohn/alpine-shell
    name: shell
    securityContext:
      runAsUser: 1338
  volumes:
  - name: token
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          audience: test
          expirationSeconds: 43200
          path: token
```

Attach ephemeral container:

```yaml
name: ephemeral
securityContext:
  runAsUser: 1337
  runAsGroup: 1337
image: howardjohn/alpine-shell
args:
- /bin/sleep
- infinity
volumeMounts:
- mountPath: /var/run/secrets/tokens
  name: token
```

Testing access:
|
There is special logic to handle setting permissions for projected service account tokens. I wonder if the logic is not handling ephemeral containers properly. kubernetes/kubernetes#89193 |
The logic in https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/2451-service-account-token-volumes/README.md#file-permission (implemented in kubernetes/kubernetes#89193) requires visibility to all containers at pod creation time. If all non-ephemeral containers run as a unified, non-root uid, the token will only be readable by that uid. |
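The heuristic described in that KEP section can be sketched roughly as follows. This is an illustrative model, not the actual kubelet code, and `token_file_attrs` is a hypothetical name; the key point is that only regular (init and normal) containers feed into the decision, so ephemeral containers are invisible to it:

```python
def token_file_attrs(run_as_users, default_mode=0o644):
    """Rough model of the KEP 2451 file-permission heuristic for
    projected service account tokens.

    run_as_users: the effective runAsUser of every regular container
    in the pod. Ephemeral containers are never included, which is why
    one running as a different UID can end up unable to read the token.
    Returns (owner_uid, mode); owner_uid of None means root-owned.
    """
    uids = set(run_as_users)
    if len(uids) == 1:
        uid = uids.pop()
        if uid is not None and uid != 0:
            # All containers share one non-root UID: chown the token
            # to that UID and tighten the mode to 0600.
            return uid, 0o600
    # Mixed or root UIDs: fall back to the volume's defaultMode.
    return None, default_mode

# Pod from the report: the only regular container runs as 1338, so the
# token is owned by 1338 with mode 0600; an ephemeral container running
# as 1337 cannot read it.
print(token_file_attrs([1338]))        # (1338, 384) == (1338, 0o600)
print(token_file_attrs([1337, 1338]))  # (None, 420) == (None, 0o644)
```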
Thanks for the test cases and the links to the KEP, they made it much easier to figure out what's going on here. It sounds like everything is working as intended. The KEP even mentions that ephemeral containers are excluded from the heuristic. The only question is whether we should update the projected volume docs to mention the special handling of tokens. @howardjohn you mentioned that fsGroup works as you expected. Is that an acceptable workaround for your use case? |
Not really, since it impacts all volumes. Basically, the use case is a sidecar that wants access to a token. Setting fsGroup at the pod level impacts the main application's volumes, which typically breaks everything. The changes in KEP 2451 fix this, but not for ephemeral containers. That being said, I don't personally have a real use case for this unless the volume could also be added dynamically, so if this is the intended behavior, that's OK with me. |
Is running the ephemeral container as a non-zero UID that is different from the non-zero UIDs of the other containers common? |
I highly doubt it. For context on how I found this: I was experimenting with running a service mesh sidecar as an ephemeral container. That was almost entirely just an experiment, and clearly an abuse of what ephemeral containers were designed for; I was mostly just playing around with things. |
since this is not a bug, switching type to documentation and removing from milestone |
I recommend opening a documentation issue against k/website that explains the docs change to make here. Ideally: be reasonably thorough, as the technical details here seem quite subtle. |
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale |
/remove-lifecycle stale |
/transfer website |
Cool, the transfer worked. Here's a summary for k/website: Problem: projected volumes have special handling for service account tokens that is not documented. Proposed solution: update the projected volumes doc to explain the special behavior:
Page to Update: |
/lifecycle stale |
/remove-lifecycle stale |
/lifecycle stale |
/sig docs |
/triage accepted |
What happened?
Working: a pod with multiple containers mounting a projected volume. Note the different UID/GID per container.
This works since 1.19+. On 1.18 and older we need `fsGroup: 1337` as well.
Not working: doing the same with ephemeral containers.
Then attach ephemeral container:
When doing this, accessing the token from the ephemeral containers gives a permission denied.
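The `permission denied` here is ordinary POSIX access control: the token file ends up mode `0600` owned by UID 1338 (the regular container's UID), and the ephemeral container's UID 1337 matches neither the owner nor, without `fsGroup`, a readable group. A simplified model of that check (illustrative only, not the kernel's full algorithm):

```python
def can_read(mode, owner_uid, owner_gid, uid, gids):
    """Simplified POSIX read check: use the owner bits if the UID
    matches, else the group bits if any group matches, else 'other'."""
    if uid == owner_uid:
        return bool(mode & 0o400)
    if owner_gid in gids:
        return bool(mode & 0o040)
    return bool(mode & 0o004)

# Token owned by UID 1338 with mode 0600:
print(can_read(0o600, 1338, 0, uid=1338, gids={1338}))  # True
print(can_read(0o600, 1338, 0, uid=1337, gids={1337}))  # False (this report)
# With fsGroup: 1337 the volume is group-owned by 1337 with group
# read added, so the ephemeral container can read the token:
print(can_read(0o640, 1338, 1337, uid=1337, gids={1337}))  # True
```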
If we add `fsGroup: 1337` to the pod spec, it does work.
What did you expect to happen?
File permissions are the same on ephemeral containers and normal containers
How can we reproduce it (as minimally and precisely as possible)?
See above
Anything else we need to know?
No response
Kubernetes version
Cloud provider
OS version
Install tools
Container runtime (CRI) and version (if applicable)
Related plugins (CNI, CSI, ...) and versions (if applicable)