Check RBAC permissions and add docs #150

Open
ghostsquad opened this issue Aug 19, 2022 · 9 comments
Assignees: yorugac
Labels: documentation (Improvements or additions to documentation)

Comments

ghostsquad commented Aug 19, 2022

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: null
  name: k6-operator-manager-role
rules:
- apiGroups:
  - apps
  resources:
  - deployments
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - batch
  resources:
  - jobs
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - ""
  resources:
  - pods
  - pods/log
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - list
  - get
  - watch

This seems dangerous. Can documentation be provided showing how these permissions can be restricted further?

@yorugac yorugac added documentation Improvements or additions to documentation good first issue Good for newcomers labels Sep 23, 2022
@yorugac yorugac changed the title RBAC Permissions seem excessive Check RBAC permissions and add docs Dec 14, 2022
yorugac (Collaborator) commented Dec 14, 2022

Just a note: it's quite likely that running the operator with the --out cloud option and without it requires different sets of RBAC permissions. If so, it makes sense to check all the RBAC and maintain separate lists.

@yorugac yorugac self-assigned this Dec 14, 2022
@yorugac yorugac removed the good first issue Good for newcomers label Dec 14, 2022
realHarter commented

Is there any news yet? I'm looking for a way to deploy the operator plus all tests in a single namespace, with no access rights outside that namespace.

yorugac (Collaborator) commented Sep 1, 2023

@realHarter, this issue is more about validation and documentation than about any specific changes. That said, it's quite possible to deploy the k6-operator and the tests in the same namespace as things stand. Have you encountered an issue while trying to do that?

realHarter commented

@yorugac I haven't tested a lot yet, but so far it's working for me after rewriting all ClusterRoles to Roles, declaring the CRDs as namespaced, and setting the WATCH_NAMESPACE env variable to the single namespace I'm deploying to.
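
For concreteness, a minimal sketch of such a namespace-scoped setup might look like this (the namespace k6-tests and the service account name are assumptions; the rules mirror the ClusterRole quoted above):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: k6-operator-manager-role
  namespace: k6-tests   # illustrative namespace
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
- apiGroups: ["batch"]
  resources: ["jobs"]
  verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
- apiGroups: [""]
  resources: ["services"]
  verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: k6-operator-manager-rolebinding
  namespace: k6-tests
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: k6-operator-manager-role
subjects:
- kind: ServiceAccount
  name: k6-operator-controller   # assumed service account name
  namespace: k6-tests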

yorugac (Collaborator) commented Sep 4, 2023

Thanks for the details. A guide for such a setup would be nice 👍


ichasepucks commented Sep 13, 2023

@yorugac when you say

> But it's quite possible to deploy the k6-operator and the tests in the same namespace as things stand.

Were you expecting this to be done by changing the configuration as @realHarter did, or is there a cleaner way to do this with kustomize? I'm playing with kustomize for the first time today, so I haven't yet determined whether that's possible.

FWIW, we are provided namespaces to which we have admin access, but we cannot access many cluster-wide resources. Cluster-level access, such as being able to create a namespace and create cluster roles, seems to be expected with the normal installation methods. Is that correct? If yes, then it seems like the pattern used to rewrite this all into a single namespace would be the path forward. There's some chance I may be able to work on this and submit a PR.
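
For anyone exploring the kustomize route, a rough sketch of a single-namespace overlay follows. This is untested: the file layout, the Deployment name k6-operator-controller-manager, and the container name manager are assumptions, and role.yaml / rolebinding.yaml would be locally maintained namespace-scoped equivalents of the upstream ClusterRole and its binding.

# kustomization.yaml -- hypothetical single-namespace overlay
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: k6-tests              # everything lands in one namespace

resources:
- manager.yaml                   # operator Deployment, copied/rendered locally
- role.yaml                      # namespace-scoped Role (see the sketch above)
- rolebinding.yaml

patches:
- target:
    kind: Deployment
    name: k6-operator-controller-manager   # assumed name
  patch: |-
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: k6-operator-controller-manager
    spec:
      template:
        spec:
          containers:
          - name: manager        # assumed container name
            env:
            - name: WATCH_NAMESPACE
              value: k6-tests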

rafedbenjeddou commented

> @yorugac I haven't tested a lot yet, but so far it's working for me after rewriting all ClusterRoles to Roles, declaring the CRDs as namespaced, and setting the WATCH_NAMESPACE env variable to the single namespace I'm deploying to.

Hello @realHarter, can you provide the manifest file with these changes, please? Thank you.

FloGro3 (Contributor) commented May 8, 2024

I can confirm @realHarter's approach. I didn't have to adapt the CRDs, because in the latest Helm release they already have the scope "Namespaced". The only thing that needed to change was the ClusterRole: change kind from ClusterRole to Role and add the namespace (and adapt the connected ClusterRoleBinding the same way):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: {{ include "k6-operator.fullname" . }}-manager-role
  # no whitespace-trimming dash before include, so the rendered value
  # keeps the space after the colon (keep the dash if the chart's helper
  # emits its own leading whitespace)
  namespace: {{ include "k6-operator.namespace" . }}
...
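
The matching binding would be adapted the same way; a sketch (the serviceAccountName helper and the RoleBinding name are assumptions):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: {{ include "k6-operator.fullname" . }}-manager-rolebinding
  namespace: {{ include "k6-operator.namespace" . }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: {{ include "k6-operator.fullname" . }}-manager-role
subjects:
- kind: ServiceAccount
  name: {{ include "k6-operator.serviceAccountName" . }}   # assumed helper
  namespace: {{ include "k6-operator.namespace" . }}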

Setting the env variable was also necessary. I added it to the deployment.yaml so that it is set automatically for every environment the k6-operator is deployed to. Just add this under "env" on the "manager" container:

env:
  - name: WATCH_NAMESPACE
    value: {{ include "k6-operator.namespace" . }}

From my POV, the WATCH_NAMESPACE env variable and its function could be documented better. Maybe it's worth a small section in the README? :) @yorugac

yorugac (Collaborator) commented May 14, 2024

@FloGro3, thanks for sharing the details here 🙌

> the WATCH_NAMESPACE env variable and its function could be documented better. Maybe it's worth a small section in the README?

Indeed; we obviously forgot to document it 😄 Added it now, thanks!

By the way, all available options for Helm can be found over here:
https://github.com/grafana/k6-operator/blob/main/charts/k6-operator/README.md
It's auto-generated documentation for the values.yaml file, so you don't need to tinker with the template directly: the same thing can be done via the manager.env option.
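
For example, assuming manager.env accepts a standard Kubernetes env list (a sketch, not verified against the current chart):

# values.yaml
manager:
  env:
    - name: WATCH_NAMESPACE
      value: k6-tests        # illustrative namespace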
