
Warn / Thread through steps for permissions fixing (pod security policy / fsGroup) #73924

Open
wants to merge 3 commits into base: master from jayunit100:patch-9

Conversation

@jayunit100
Member

jayunit100 commented Feb 11, 2019

Creating a companion issue now. As of now you can't see what the kubelet is actually doing when it's mounting with permissions, so it's a minor bug, I guess.


Made kubelet mount / fsGroup and PSP setup logging more explicit.

@jayunit100

Member Author

jayunit100 commented Feb 11, 2019

Addresses #73925 by making the details around how file owners are set explicit in the kubelet. Also makes explicit the decision of whether or not a file ownership setup attempt is being made.
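To make the intent concrete, here is a minimal, hypothetical sketch (not the actual diff in this PR) of the kind of explicit logging being described: log whether an ownership/permission change will be attempted for a volume, and say why when it is skipped. The setVolumeOwnership helper, its signature, and the log wording are assumptions for illustration; the real kubelet code would use klog and the kubelet's own volume types.

```go
package main

import (
	"log"
	"os"
	"path/filepath"
)

// setVolumeOwnership is a hypothetical stand-in for the kubelet's ownership
// fixup step. The point illustrated here is the explicit logging of whether
// an ownership change is attempted at all, and why it is skipped when it is.
func setVolumeOwnership(dir string, fsGroup *int64, readOnly bool) error {
	if fsGroup == nil {
		// Previously this branch was silent; logging it makes the
		// "no permissions were changed" case visible in kubelet logs.
		log.Printf("skipping ownership/permission change for volume %s: no fsGroup set (check the pod securityContext / pod security policy)", dir)
		return nil
	}
	log.Printf("recursively setting group ownership of %s to %d (readOnly=%v)", dir, *fsGroup, readOnly)
	return filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		// chown to the requested group; mode-bit handling is elided here.
		return os.Lchown(path, -1, int(*fsGroup))
	})
}

func main() {
	gid := int64(2000)
	if err := setVolumeOwnership("/tmp/demo-volume", &gid, false); err != nil {
		log.Printf("ownership change failed: %v", err)
	}
	// The branch this PR is really about: nil fsGroup, now logged loudly.
	_ = setVolumeOwnership("/tmp/demo-volume", nil, false)
}
```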

@k8s-ci-robot k8s-ci-robot added size/S and removed size/XS labels Feb 11, 2019

@jayunit100

Member Author

jayunit100 commented Feb 11, 2019

/sig node

@jayunit100 jayunit100 force-pushed the jayunit100:patch-9 branch from 7eb9505 to 914b887 Feb 12, 2019

@jayunit100 jayunit100 force-pushed the jayunit100:patch-9 branch from 914b887 to 8919030 Feb 12, 2019

@k8s-ci-robot k8s-ci-robot added size/M and removed size/S labels Feb 12, 2019

@jayunit100 jayunit100 force-pushed the jayunit100:patch-9 branch from 8919030 to 92c9124 Feb 12, 2019

@k8s-ci-robot k8s-ci-robot added size/S and removed size/M labels Feb 12, 2019

@k8s-ci-robot k8s-ci-robot added size/M and removed size/S labels Feb 12, 2019

@jayunit100 jayunit100 force-pushed the jayunit100:patch-9 branch 3 times, most recently from 778e056 to 1ac6165 Feb 12, 2019

@jayunit100 jayunit100 changed the title Warn about permissions fixing (pod security pol) Warn / Thread through steps for permissions fixing (pod security pol) Feb 12, 2019

@jayunit100 jayunit100 changed the title Warn / Thread through steps for permissions fixing (pod security pol) Warn / Thread through steps for permissions fixing (pod security policy / fsGroup) Feb 12, 2019

@jayunit100 jayunit100 force-pushed the jayunit100:patch-9 branch from fd3bd41 to 07e7e9a Feb 12, 2019

@chrislovecnm
Member

chrislovecnm left a comment

/lgtm

@jayunit100 jayunit100 force-pushed the jayunit100:patch-9 branch from 8f59b92 to b2cf683 Feb 12, 2019

@k8s-ci-robot k8s-ci-robot added the lgtm label Feb 12, 2019

@chrislovecnm
Member

chrislovecnm left a comment

/lgtm
/approve

@k8s-ci-robot

Contributor

k8s-ci-robot commented Feb 12, 2019

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: chrislovecnm, jayunit100
To fully approve this pull request, please assign additional approvers.
We suggest the following additional approver: gnufied

If they are not already assigned, you can assign the PR to them by writing /assign @gnufied in a comment when ready.

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@chrislovecnm

Member

chrislovecnm commented Feb 12, 2019

/assign gnufied

@gnufied

Member

gnufied commented Feb 12, 2019

I like this idea of informing the user when we aren't changing permissions, but I am not sure if logging the message is the right way to go about it. We already emit an event when a mount is successful:

  Normal   SuccessfulMountVolume  14s                kubelet, dev-hekumar-cinder2-nrr-1  MountVolume.SetUp succeeded for volume "default-token-456gk"

We could extend this message to include information about permission bits. Another thing: just modifying the CSI plugin isn't a good answer. We should make sure the change affects all volume plugins.
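To illustrate the suggestion, here is a rough sketch of what an extended mount event message could look like if it carried permission information alongside the existing SuccessfulMountVolume text. The function name, fields, and wording are hypothetical, not the kubelet's actual event code:

```go
package main

import "fmt"

// buildMountEventMessage sketches an extended SuccessfulMountVolume event
// message that also reports what happened to permissions. All field names
// and wording here are hypothetical.
func buildMountEventMessage(volume string, fsGroup *int64, mode string) string {
	perms := "ownership unchanged (no fsGroup)"
	if fsGroup != nil {
		perms = fmt.Sprintf("group ownership set to %d, mode %s", *fsGroup, mode)
	}
	return fmt.Sprintf("MountVolume.SetUp succeeded for volume %q (%s)", volume, perms)
}

func main() {
	gid := int64(2000)
	fmt.Println(buildMountEventMessage("default-token-456gk", nil, ""))
	fmt.Println(buildMountEventMessage("default-token-456gk", &gid, "0660"))
}
```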

@jayunit100

Member Author

jayunit100 commented Feb 12, 2019

@gnufied ... agree that events are useful. Logs, though, are typically what gets sent to storage vendors when there is a problem, hence they are equally important, albeit for a different use case than the end-user one.

@k8s-ci-robot k8s-ci-robot removed the lgtm label Feb 13, 2019

@jayunit100

Member Author

jayunit100 commented Feb 13, 2019

Updated the volume_unsupported call so that there's a warning there as well... that covers aws, azure, cinder, configmap, csi, downwardapi, emptydir, diskmgr, mounter, flocker, gce, git, local, portworx, secret, storageos, and vsphere. I'd prefer to keep the scope of this PR specifically to logs, as that is the key thing for vendor communications, which is really the priority (for me at least :) ).
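As a rough illustration of the cross-plugin approach just described, here is a sketch of a single shared helper that every plugin's SetUp path could call, so the warning is identical no matter which plugin handled the volume. The helper name, struct, and messages are assumptions for illustration, not the actual volume_unsupported code:

```go
package main

import "log"

// ownershipDecision captures whether a plugin will attempt to manage file
// ownership for a volume, so the warning text is uniform across aws, azure,
// cinder, configmap, csi, emptydir, gce, local, portworx, and the rest.
type ownershipDecision struct {
	plugin            string
	fsGroup           *int64
	supportsOwnership bool
}

// logOwnershipDecision is the hypothetical shared hook: each plugin calls it
// once per SetUp, so the "permissions were not changed" case always shows up
// in kubelet logs regardless of the plugin involved.
func logOwnershipDecision(d ownershipDecision) {
	switch {
	case !d.supportsOwnership:
		log.Printf("WARNING: volume plugin %s does not support ownership management; fsGroup (if any) will be ignored", d.plugin)
	case d.fsGroup == nil:
		log.Printf("WARNING: volume plugin %s: no fsGroup provided, leaving file ownership unchanged", d.plugin)
	default:
		log.Printf("volume plugin %s: will recursively set group ownership to %d", d.plugin, *d.fsGroup)
	}
}

func main() {
	gid := int64(1000)
	logOwnershipDecision(ownershipDecision{plugin: "kubernetes.io/empty-dir", fsGroup: &gid, supportsOwnership: true})
	logOwnershipDecision(ownershipDecision{plugin: "kubernetes.io/nfs", supportsOwnership: false})
}
```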

Next step... better Events.

That said, I am happy to plumb this into eventing (events are definitely useful, for example to developers working on CSI plugins, or to apps that haven't yet triaged a storage bug). In any case, if it's a hard requirement, can we preferably do the event plumbing in a follow-on PR?

If we have events... do we need logs?

Absolutely, in my opinion, and here's why (let me know if I'm missing something; I've seen a lot of outages on all cloud platforms, even including GKE and other managed providers, and they're not pretty, and they can typically show symptoms for 12 or 24 hours before/after the occurrence). The reason, IMO, that just broadcasting events isn't particularly useful for CSI-related issues is that during outages (we just experienced one) events cycle, get cleaned up, are stored remotely, aren't alertable via log aggregators, aren't in a time series with other granular info, and so on. So events, although useful to developers during theoretical testing, are really not super useful for production forensics, or at least not in the current traditional data center / vendor interaction models.

Example scenarios

  • You have a vendor storage outage, and a large portion of your cluster goes down, including possibly etcd itself.
  • You have mounts that fail periodically over a period of a week, and want to send all kubelet logs for that time frame to a vendor for SLA etc.
  • Note that if a storage vendor goes out, you may not even have the etcd event storage available at all (unless events are permanently stored in the kubelet somewhere? I don't think they are, though).

@k8s-ci-robot k8s-ci-robot added the lgtm label Feb 13, 2019

@chrislovecnm
Member

chrislovecnm left a comment

/lgtm

@ericbannon

ericbannon left a comment

nice!
