update RBAC file #3

Closed
pohly opened this issue Dec 11, 2018 · 17 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@pohly
Contributor

pohly commented Dec 11, 2018

The RBAC file was copied unmodified from driver-registrar. Is it still correct?

The introduction still refers to "external provisioner" (this was already wrong when the file was initially created for driver-registrar).

@msau42
Collaborator

msau42 commented Dec 11, 2018

Looking through the code, I think it only needs CSIDriver create and delete permissions.

cc @gnufied who is working on testing this in our e2es
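For illustration, a reduced ClusterRole along those lines could look like the sketch below. This is only a sketch: it assumes the CSIDriver object is still served by the CRD-backed csi.storage.k8s.io API group that the sidecar used at the time, and the role name is made up rather than taken from the repository's RBAC file.

# Sketch only: a minimal ClusterRole if CSIDriver create/delete is all that is needed.
# The API group assumes the CRD-backed CSIDriver object; adjust if the object
# has since moved to the in-tree storage.k8s.io API group in your cluster.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-driver-registrar-runner   # illustrative name, not from the repo
rules:
  - apiGroups: ["csi.storage.k8s.io"]
    resources: ["csidrivers"]
    verbs: ["create", "delete"]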

@pohly
Contributor Author

pohly commented Dec 11, 2018 via email

@gnufied

gnufied commented Dec 17, 2018

/assign

@k8s-ci-robot
Contributor

@gnufied: GitHub didn't allow me to assign the following users: gnufied.

Note that only kubernetes-csi members and repo collaborators can be assigned.
For more information please see the contributor guide

In response to this:

/assign

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@ferdinandhuebner

The current RBAC definitions don't include permission to create the CSIDriver CRD, which is created on startup:

_, err = crdv1beta1client.Create(k8scsicrd.CSIDriverCRD())

At the moment, cluster-driver-registrar exits with os.Exit(1) if it doesn't have permission to create the CRD. I'm not quite sure whether it would make sense to ignore that error and only exit if it is unable to register the CSIDriver object.

From the current CSI documentation, it is unclear to me who is responsible for creating the CRDs. The HostPath example mentions that the CRDs need to be created manually by a Kubernetes administrator.
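(For illustration: if the sidecar were expected to keep creating the CRD itself, its ClusterRole would need an additional rule roughly like the sketch below. This is illustrative only; as discussed further down, the preferred approach is for an administrator or deployment tool to create the CRD, in which case this rule is not needed.)

# Illustrative only: the extra rule the crdv1beta1client.Create() call above
# would require. Not needed if the CRD is created by an administrator or a
# deployment tool instead of the sidecar.
rules:
  - apiGroups: ["apiextensions.k8s.io"]
    resources: ["customresourcedefinitions"]
    verbs: ["create"]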

@msau42
Collaborator

msau42 commented Jan 9, 2019

@ferdinandhuebner good catch; our intention is that the CRDs have to be created by the deployment tool or an administrator beforehand.

@saad-ali wdyt of the sidecar trying to install the CRD?

@msau42
Collaborator

msau42 commented Jan 9, 2019

Discussed a bit with @saad-ali wrt the sidecar installing the CRD. We need to think through potential version-skew issues if we add new fields to the CRD and want to update the CRD definition. We should also be consistent with whatever we do for CSINodeInfo. Right now, we're leaning towards not having the sidecar install the CRD.

pohly pushed a commit to pohly/cluster-driver-registrar that referenced this issue Jan 24, 2019
davidhalter added a commit to cloudscale-ch/csi-cloudscale that referenced this issue Jan 28, 2019
…r needs more permissions

His comment:
cluster-driver-registrar currently needs permissions to create the CSIDriver CRD; see kubernetes-csi/cluster-driver-registrar#3
@pohly
Contributor Author

pohly commented Feb 28, 2019

The v1.0.1 release was tagged without fixing the RBAC rules contained in that release. We should prepare a release v1.1.0 which no longer installs the CRD and has correct RBAC rules.

@msau42 The master branch has that fixed and looks like it could be turned into v1.1.0.

@msau42
Collaborator

msau42 commented Feb 28, 2019

There are other big changes coming to cluster-driver-registrar soon, namely that the CRD is going to be replaced by an in-tree beta object. Do you think it's worth it to have a 1.1, or should we move straight to a 2.0? The CRDs were technically alpha, so does breaking them warrant a major version bump?

@pohly
Contributor Author

pohly commented Feb 28, 2019 via email

@msau42
Collaborator

msau42 commented Feb 28, 2019

How about cherry-picking #19 to release 1.0 and cutting a new 1.0.x?

@pohly
Contributor Author

pohly commented Mar 1, 2019 via email

@pohly pohly mentioned this issue Mar 1, 2019
@pohly
Contributor Author

pohly commented Mar 1, 2019 via email

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 30, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 29, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
