set proper file permission for projected service account volume #89193
Conversation
Hi @zshihang. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/assign @mikedanese @liggitt @tallclair
pkg/volume/volume_linux.go (outdated)

func changeFilePermission(filename string, fsGroup *int64, readonly bool, info os.FileInfo) error {
/assign @gnufied
more eyes the better :-)
Is this behavior protected by any feature gate? When we added the new fsgroup change policy behavior, we guarded the change to this method with a feature gate so we could disable it if it caused regressions to existing behavior.
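For reference, the guard the reviewer is describing looks roughly like this. In-tree it would go through `utilfeature.DefaultFeatureGate.Enabled(...)`; the gate name and the tiny registry below are hypothetical stand-ins:

```go
package main

import "fmt"

// Feature and defaultGates are simplified stand-ins for
// k8s.io/apiserver/pkg/util/feature; the gate name is hypothetical.
type Feature string

const ProjectedTokenFilePermission Feature = "ProjectedTokenFilePermission"

var defaultGates = map[Feature]bool{
	ProjectedTokenFilePermission: true, // default on, disable on regression
}

func enabled(f Feature) bool { return defaultGates[f] }

// ownershipMask picks the permission behavior: the new, tighter mode only
// when the gate is on, so a regression can be mitigated by turning it off.
func ownershipMask(readonly bool) uint32 {
	if enabled(ProjectedTokenFilePermission) && readonly {
		return 0440 // new behavior: group read-only
	}
	return 0660 // legacy behavior
}

func main() {
	fmt.Printf("%o\n", ownershipMask(true)) // 440 with the gate on

	defaultGates[ProjectedTokenFilePermission] = false
	fmt.Printf("%o\n", ownershipMask(true)) // 660 once the gate is disabled
}
```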
Regarding tests, we do have some tests covering fsgroup + projected volumes, but I don't think they cover the projected service account volume:
https://github.com/kubernetes/kubernetes/blob/master/test/e2e/common/projected_configmap.go#L59
https://github.com/kubernetes/kubernetes/blob/master/test/e2e/common/projected_downwardapi.go
https://github.com/kubernetes/kubernetes/blob/master/test/e2e/common/projected_secret.go
We don't have any e2es covering persistent volumes with runAsUser set (only fsgroup). And for fsgroup, it's only set in some basic tests like: https://github.com/kubernetes/kubernetes/blob/master/test/e2e/storage/testsuites/volumes.go#L177
I thought we had added fsgroup tests to subpath due to various regression issues, but I don't see fsgroup being set in those tests. I will dig further to figure out what happened to them.
/ok-to-test
/retest
/retest
/lgtm
/approve (Just propagating @mikedanese and @msau42 lgtm/approvals; I haven't actually reviewed this)
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: lavalamp, msau42, zshihang

The full list of commands accepted by this bot can be found here. The pull request process is described here. Needs approval from an approver in each of these files. Approvers can indicate their approval by writing `/approve` in a comment.
New changes are detected. LGTM label has been removed.
@zshihang: The following tests failed, say `/retest` to rerun all failed tests.

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/retest
Sorry I missed this earlier.
podutil.VisitContainers(&pod.Spec, podutil.InitContainers|podutil.Containers, func(container *v1.Container, containerType podutil.ContainerType) bool {
    runAsUser, ok := securitycontext.DetermineEffectiveRunAsUser(pod, container)
    // One container doesn't specify user or there are more than one
    // non-root UIDs.
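The snippet above is computing a single UID shared by every container, and bailing out otherwise. A self-contained sketch of that logic, using simplified stand-ins for the API types (`Container`, `SecurityContext`, and `commonRunAsUser` here are illustrative, not the real Kubernetes types or helpers):

```go
package main

import "fmt"

// Simplified stand-ins for the Kubernetes API types involved.
type SecurityContext struct{ RunAsUser *int64 }

type Container struct {
	Name            string
	SecurityContext *SecurityContext
}

// commonRunAsUser returns the UID shared by all containers, or nil when any
// container omits runAsUser or two containers specify different UIDs -- the
// cases where the kubelet cannot pick a single owner for the token file.
func commonRunAsUser(containers []Container) *int64 {
	var uid *int64
	for _, c := range containers {
		if c.SecurityContext == nil || c.SecurityContext.RunAsUser == nil {
			return nil // one container doesn't specify a user
		}
		u := *c.SecurityContext.RunAsUser
		if uid != nil && *uid != u {
			return nil // containers disagree on the UID
		}
		uid = &u
	}
	return uid
}

func main() {
	u := func(v int64) *int64 { return &v }

	same := []Container{
		{Name: "a", SecurityContext: &SecurityContext{RunAsUser: u(1000)}},
		{Name: "b", SecurityContext: &SecurityContext{RunAsUser: u(1000)}},
	}
	if got := commonRunAsUser(same); got != nil {
		fmt.Println("shared UID:", *got) // prints "shared UID: 1000"
	}

	mixed := append(same, Container{Name: "c"}) // no SecurityContext at all
	fmt.Println("mixed:", commonRunAsUser(mixed) == nil) // prints "mixed: true"
}
```

Under this reading, any unset runAsUser makes the result nil, which matches the later comment in the thread that the check doesn't actually care about root vs non-root.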
I think there might be a couple problems with this logic, if I'm reading it correctly:
- A container could set the user in the container image. To get the image user, you can use the `getImageUser(...)` function, but you'd need to plumb through the runtime and export the method. Also note that the username can't be depended on.
- A root user can only be ignored if it also has CAP_DAC_OVERRIDE (although it's included by default).
Even with these precautions, it's still possible that the container could drop permissions at runtime, but I think it should be acceptable to just cover that case by documentation (i.e. use FSGroup)
Actually, looks like I was misreading the code... I think this says that if the user isn't explicitly specified in all the containers (and the explicit users don't match) then return `nil`. In that case, I think the code is correct but the comment is incorrect (since it doesn't care about non-root).
Using `getImageUser` would require a lot of changes in the kubelet in terms of image pulling and volume mounts.
Workaround kubernetes/kubernetes#82573 - this got fixed with kubernetes/kubernetes#89193 starting with Kubernetes 1.19
What type of PR is this?
/kind feature
What this PR does / why we need it:
Implement KEP kubernetes/enhancements#1598.
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: