kubelet: set terminationMessagePath perms to 0660 #108076
Conversation
Hi @skrobul. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/ok-to-test
/triage accepted
/priority important-longterm
/retest
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/retest
/retest
The code looks reasonable, assuming this is what we want. And tests are of course missing :)
However, I don't see much activity here to know what sig-node thinks about this. I'd recommend, again, joining sig-node to gauge their interest and also to ask whether this should have a (very simple) KEP and feature gate or not. I think it probably should.
If @SergeyKanzhelev or someone from sig-node can have a look and answer here, without the need to join sig-node, maybe that is simpler for you @skrobul ? :)
According to the docs, in order to become a member I need to show "multiple contributions" before being added to the organisation. This is my very first contribution, so I'm not exactly there yet.
@skrobul wait, that is mixing several different things. First, that link is for sig-docs, the Special Interest Group (SIG) about documentation. I'm talking about sig-node. Secondly, you don't need to join any GitHub org to join the weekly meetings. You can find the sig-node info here: https://github.com/kubernetes/community/tree/master/sig-node. Anyone can join and discuss the topics they want. To add topics to the agenda, you just need to subscribe to the sig-node mailing list, and then you add your topic to the next week's agenda (if there isn't one created, you can create one). Then you join on Zoom at the appropriate date and time and discuss the problem and proposed solution with the community; they will tell you if a KEP is needed, a feature gate, etc. (my guess is that a very lightweight KEP will be needed, but hopefully it isn't, let's see!). All of this should be explained there and/or in the meeting agenda Google Docs link.
Ohh, that makes more sense now, thank you! I will do some reading and join soon. Once again, thank you for taking the time to write that up, appreciate it.
@skrobul are you still working on this? What was the outcome of the discussion in the SIG Node meeting (I think I missed that meeting myself)?
Link: https://docs.google.com/document/d/1Ne57gvidMEWXR70OxxnRkYquAoMpt56o75oZtg-OeBg/edit#bookmark=id.8ats7bythcn4 There is a recording available. I cannot check the recording now, but from what I recollect we discussed that there might be scenarios that can be broken by this; something around an external tool writing a termination message about an OOM kill or similar. I think the suggestion was that it would be a breaking change and we will need to run the KEP process to ensure we did due diligence. But my recollection may be faulty.
Thanks @SergeyKanzhelev. I quickly checked the recording and you indeed suggested writing a KEP. I think @rata was inclined in the same direction. My feeling about this is similar: a KEP would allow exploring potential risks and alternative solutions in a well-defined manner, plus prevent breaking existing users (with a feature gate). /cc @dchen1107
No, I have done everything I could at this stage.
The outcome, as far as I remember, was that:
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
} else if pod.Spec.SecurityContext != nil && pod.Spec.SecurityContext.RunAsUser != nil {
	containerUid = int(*pod.Spec.SecurityContext.RunAsUser)
} else {
	containerUid = 0
Should we also consider SecurityContext.RunAsNonRoot here and below?
As far as I understand, RunAsNonRoot does not influence the UID selection; it merely enables a validation that it's non-root. Is that not the case?
That's true, but this change would set the UID to 0 (root) even if RunAsNonRoot is true. It may be OK, but it looked suspicious to me. What would happen if the container runs as non-root, but the containerLogPath owner is root?
The container would not run at all if RunAsNonRoot was set to true but pod.Spec.SecurityContext.RunAsUser was left unset. In other words, as far as I understand, there is no way for the container to start as non-root without setting RunAsUser at either the container or pod level. What am I missing?
I'm not sure about it. I thought that RunAsNonRoot and RunAsUser are independent options. By default, the user UID and/or username is taken from the image.
Here is the code I'm talking about: https://github.com/kubernetes/kubernetes/blob/release-1.29/pkg/kubelet/kuberuntime/kuberuntime_container.go#L326
What type of PR is this?
/kind bug
What this PR does / why we need it:
Currently, kubelet creates world-readable and world-writable empty files in /var/lib/kubelet/pods/{podUID}/containers/{containerName}/{containerId}. These are meant to be written to by the process in the container when the container is terminated.

Originally, this file was created with mode 0644; then, despite security concerns, it was changed to 0666 in #31839 to allow containers running as non-root to write termination messages. Later, in 2019, this was highlighted as a security vulnerability in the Kubernetes Security Audit Report in #81116.

This PR changes the termination log file mode to 0660, which is the best of both worlds: it removes the world-writable file yet still allows the container user and its group to write the termination message.

Which issue(s) this PR fixes:
Related (fixes only part): #81116
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: