Warn / Thread through steps for permissions fixing (pod security policy / fsGroup) #73924
Conversation
Addresses #73925 by making explicit in the kubelet the details of how file owners are set. Also makes explicit the decision of whether or not a file-ownership setup attempt is being made.
/sig node
Force-pushed from 778e056 to 1ac6165
/lgtm
/approve
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: chrislovecnm, jayunit100. If they are not already assigned, you can assign the PR to them by writing. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/assign gnufied
I like this idea of informing the user when we aren't changing permissions, but I am not sure that logging the message is the right way to go about it. We already log an event when the mount is successful:
We could extend this message to include information about the permission bits. Another thing is that just modifying the CSI plugin isn't a good answer. We should make sure the change affects all volume plugins.
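The decision being discussed can be sketched in Go. This is a minimal, hypothetical model of the kubelet's fsGroup decision, not the real kubelet API; the `volumeMounter` type and `ownershipMessage` helper are assumptions made for illustration. The idea is to produce an explicit message, suitable for a log line or for extending the mount-success event, stating whether ownership will be changed:

```go
package main

import "fmt"

// volumeMounter models the subset of volume-plugin state relevant to the
// fsGroup decision. These names are hypothetical, not real kubelet types.
type volumeMounter struct {
	pluginName        string
	supportsOwnership bool   // whether the plugin can apply fsGroup ownership
	fsGroup           *int64 // fsGroup from the pod's security context, if any
}

// ownershipMessage makes the kubelet's decision explicit: it returns a
// human-readable note about whether file ownership will be changed, which
// could be logged or attached to the mount-success event.
func ownershipMessage(m volumeMounter) string {
	switch {
	case m.fsGroup == nil:
		return fmt.Sprintf("%s: no fsGroup set; skipping ownership change", m.pluginName)
	case !m.supportsOwnership:
		return fmt.Sprintf("%s: fsGroup %d requested but plugin does not support ownership changes", m.pluginName, *m.fsGroup)
	default:
		return fmt.Sprintf("%s: applying fsGroup %d to volume contents", m.pluginName, *m.fsGroup)
	}
}

func main() {
	gid := int64(1000)
	fmt.Println(ownershipMessage(volumeMounter{pluginName: "csi", supportsOwnership: false, fsGroup: &gid}))
	fmt.Println(ownershipMessage(volumeMounter{pluginName: "emptyDir", supportsOwnership: true, fsGroup: &gid}))
}
```

Because the helper only builds the message, the same string could feed either a log call or an event, which sidesteps the logs-versus-events question raised above.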
@gnufied ... agree that events are useful. Logs, though, are typically sent to storage vendors when there is a problem, so they are equally important, albeit for a different use case than the end-user use case.
Updated the volume_unsupported call so that there's a warning there as well; that handles the immediate case.

Next step: better events. That said, I'm happy to plumb this into eventing (events are definitely useful, for example to developers working on CSI plugins, or on apps that haven't yet triaged a storage bug). In any case, if it's a hard requirement, let's preferably do the event plumbing in a follow-on PR.

If we have events, do we need logs? Absolutely, in my opinion, and here's why (let me know if I'm missing something, but I've seen a lot of outages, on all cloud platforms, including GKE and other managed providers; they're not pretty, and typically show symptoms for 12 or 24 hours before and after the occurrence). The reason, IMO, that just broadcasting events isn't particularly useful for CSI-related issues is that during outages (we just experienced one) events cycle, get cleaned up, are stored remotely, aren't alertable via log aggregators, aren't in a time series with other granular info, and so on. So events, although useful to developers during testing, are really not very useful for production forensics, or at least not in the current traditional data-center / vendor interaction model. Example scenarios
/lgtm
nice!
@jayunit100: PR needs rebase.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
What are the next steps for this? I'd like to move this forward, but if there's no interest I'll close it. It also needs a small rebase; I'll do that based on feedback.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closed this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Creating a companion issue now. As of now you can't see what the kubelet is actually doing when it's mounting with permissions, so I'd call it a minor bug.