This repository has been archived by the owner on Jan 17, 2020. It is now read-only.

Enable RBAC and support using ServiceAccounts #40

Closed
pwittrock opened this issue Feb 13, 2018 · 5 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@pwittrock

As a CRD developer, in production I run my controller in a Namespace with a ServiceAccount and RBAC rules configured for it. I want the integration tests for my controller to mimic production as closely as possible and catch errors if the RBAC rules are incorrectly set up. To do this I need:

  • RBAC to be enabled in the cluster
  • To create a Namespace in the cluster and set up RBAC rules for my controller
  • To get a Config (credentials) authorized as the ServiceAccount
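
A minimal sketch of what those three steps could look like using plain client-go against a test API server started with --authorization-mode=RBAC. The function name, the example Role rules, and the use of the TokenRequest API are assumptions for illustration, not part of this framework:

```go
// Hypothetical sketch: given admin credentials for an RBAC-enabled test API
// server, create a Namespace, a ServiceAccount with RBAC rules, and return a
// rest.Config authorized as that ServiceAccount.
package rbactest

import (
	"context"

	authenticationv1 "k8s.io/api/authentication/v1"
	corev1 "k8s.io/api/core/v1"
	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func configForServiceAccount(ctx context.Context, adminCfg *rest.Config, ns, sa string) (*rest.Config, error) {
	cs, err := kubernetes.NewForConfig(adminCfg)
	if err != nil {
		return nil, err
	}

	// Namespace the controller under test runs in.
	if _, err := cs.CoreV1().Namespaces().Create(ctx, &corev1.Namespace{
		ObjectMeta: metav1.ObjectMeta{Name: ns},
	}, metav1.CreateOptions{}); err != nil {
		return nil, err
	}

	// ServiceAccount the controller runs as.
	if _, err := cs.CoreV1().ServiceAccounts(ns).Create(ctx, &corev1.ServiceAccount{
		ObjectMeta: metav1.ObjectMeta{Name: sa},
	}, metav1.CreateOptions{}); err != nil {
		return nil, err
	}

	// Example RBAC rules the controller needs (read access to ConfigMaps here);
	// a real test would grant whatever its controller actually requires.
	if _, err := cs.RbacV1().Roles(ns).Create(ctx, &rbacv1.Role{
		ObjectMeta: metav1.ObjectMeta{Name: sa},
		Rules: []rbacv1.PolicyRule{{
			APIGroups: []string{""},
			Resources: []string{"configmaps"},
			Verbs:     []string{"get", "list", "watch"},
		}},
	}, metav1.CreateOptions{}); err != nil {
		return nil, err
	}
	if _, err := cs.RbacV1().RoleBindings(ns).Create(ctx, &rbacv1.RoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: sa},
		Subjects: []rbacv1.Subject{{
			Kind:      rbacv1.ServiceAccountKind,
			Name:      sa,
			Namespace: ns,
		}},
		RoleRef: rbacv1.RoleRef{
			APIGroup: rbacv1.GroupName,
			Kind:     "Role",
			Name:     sa,
		},
	}, metav1.CreateOptions{}); err != nil {
		return nil, err
	}

	// Ask the TokenRequest API for a token bound to the ServiceAccount. This
	// assumes the test apiserver has service-account token signing configured;
	// older setups would read the token from the auto-created Secret instead.
	tok, err := cs.CoreV1().ServiceAccounts(ns).CreateToken(ctx, sa,
		&authenticationv1.TokenRequest{}, metav1.CreateOptions{})
	if err != nil {
		return nil, err
	}

	// Copy the admin config, drop its client certs, and authenticate with the
	// ServiceAccount token only.
	saCfg := rest.CopyConfig(adminCfg)
	saCfg.TLSClientConfig = rest.TLSClientConfig{
		CAFile: adminCfg.CAFile,
		CAData: adminCfg.CAData,
	}
	saCfg.BearerToken = tok.Status.Token
	saCfg.BearerTokenFile = ""
	return saCfg, nil
}
```

Requests made with the returned config are then authorized only by the RBAC rules bound to the ServiceAccount, so a missing rule fails the test the same way it would fail in production.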
@hoegaarden
Contributor

In yesterday's SIG-testing-commons meeting we discussed the longer-term goal for this framework: it should have an API for requesting a k8s cluster (similar to, or a layer on top of, the cluster API). A user should then be able to choose which kind of cluster gets deployed; the default will probably be some sort of kubeadm DIND deployment, with other deployment options also available. The spec for that is in the works.

Now, when you say you want "to mimic production as closely as possible", implementing this in our current strategy (bringing up individual binaries) is probably the wrong way to do it; for that case the kubeadm DIND deployment should be used.

If we keep adding more and more things (RBAC, ...) to the current implementation and supporting them, will we end up rewriting something like kubeadm?
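
A minimal sketch of one shape such a cluster-request API could take; every name below is hypothetical and not part of this framework today:

```go
// Hypothetical sketch of the cluster-request API discussed above.
package framework

import "k8s.io/client-go/rest"

// ClusterConfig captures what a test asks for, e.g. whether RBAC should be
// enabled in the provisioned cluster.
type ClusterConfig struct {
	RBACEnabled bool
}

// ClusterProvider abstracts over how the cluster is brought up: individual
// binaries (the current strategy), a kubeadm DIND deployment, or some other
// backend chosen by the user.
type ClusterProvider interface {
	// Start provisions a cluster and returns admin credentials for it.
	Start(cfg ClusterConfig) (*rest.Config, error)
	// Stop tears the cluster down again.
	Stop() error
}
```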

cc: @jamesjoshuahill @marun @timothysc @apelisse

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label (Denotes an issue or PR has remained open with no activity and has become stale.) on Apr 22, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label (Denotes an issue or PR that has aged beyond stale and will be auto-closed.) and removed the lifecycle/stale label on May 22, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
