[Bug] ClusterRole needs ability to view namespace in order to pick up dashboards in other namespace #1010
Hi @Yaytay, to be honest I don't understand the issue. But at my old company I used the scan-all function and we didn't add any `list` namespace RBAC rules, since the operator only performs `get` on the GrafanaDashboard resource. As long as you have added the RBAC settings described in https://github.com/grafana-operator/grafana-operator/tree/955daa6abb9b6efe85d921a783eeffd51853fae4/deploy/cluster_roles it should be okay. In short, create RBAC rules that give the operator access to read GrafanaDashboard resources in your entire cluster.
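For illustration, a minimal ClusterRole along those lines might look like the sketch below. The role name is hypothetical, and the exact rules should be taken from the `cluster_roles` directory linked above rather than from this fragment:

```yaml
# Hypothetical sketch: grant the operator cluster-wide read access to
# GrafanaDashboard resources. Verify against the cluster_roles
# directory linked above before using.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: grafana-operator-dashboard-reader   # hypothetical name
rules:
- apiGroups:
  - integreatly.org
  resources:
  - grafanadashboards
  verbs:
  - get
  - list
  - watch
```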
Hi @NissesSenap,
Ignoring the bitnami helm chart, this is the Grafana resource that is created and that does not work without namespace access:

```yaml
apiVersion: v1
items:
- apiVersion: integreatly.org/v1alpha1
  kind: Grafana
  metadata:
    creationTimestamp: "2023-05-10T10:37:21Z"
    generation: 1
    labels:
      app.kubernetes.io/instance: telemetry
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/name: grafana-operator
      helm.sh/chart: grafana-operator-2.8.2
    name: telemetry-grafana-operator-grafana
    namespace: grafana
    resourceVersion: "9408"
    uid: bdb3230d-bbc1-46ea-a64b-fa86a4db5ecf
  spec:
    baseImage: docker.io/bitnami/grafana:9.5.1-debian-11-r4
    client:
      preferService: true
      timeout: 5
    config:
      alerting:
        enabled: false
      analytics:
        check_for_updates: false
        reporting_enabled: false
      log:
        level: warn
        mode: console
      security:
        admin_password: T0p-Secret
        admin_user: admin
        disable_gravatar: false
      server:
        root_url: https://grafana.localtest.me
      users:
        default_theme: dark
    dashboardLabelSelector:
    - matchLabels:
        app: grafana
    dashboardNamespaceSelector:
      matchLabels:
        partition: new-hope
    deployment:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchLabels:
                  app.kubernetes.io/component: grafana
                  app.kubernetes.io/instance: telemetry
                  app.kubernetes.io/name: grafana-operator
              topologyKey: kubernetes.io/hostname
            weight: 1
      containerSecurityContext:
        allowPrivilegeEscalation: false
        privileged: false
        runAsGroup: 0
        runAsNonRoot: true
        runAsUser: 1001
      labels:
        app.kubernetes.io/instance: telemetry
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: grafana-operator
        helm.sh/chart: grafana-operator-2.8.2
      replicas: 1
      securityContext:
        fsGroup: 1001
        runAsGroup: 0
        runAsNonRoot: true
        runAsUser: 1001
        supplementalGroups: []
      skipCreateAdminAccount: false
    ingress:
      enabled: true
      hostname: grafana.localtest.me
      ingressClassName: nginx
      path: /
      pathType: Prefix
      tlsEnabled: true
      tlsSecretName: grafana.local-tls
    jsonnet:
      libraryLabelSelector:
        matchLabels:
          app.kubernetes.io/instance: telemetry
    livenessProbeSpec:
      failureThreshold: 6
      initialDelaySeconds: 120
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    readinessProbeSpec:
      failureThreshold: 6
      initialDelaySeconds: 30
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    resources:
      limits: {}
      requests: {}
    service:
      labels:
        app.kubernetes.io/instance: telemetry
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: grafana-operator
        helm.sh/chart: grafana-operator-2.8.2
      type: ClusterIP
  status:
    message: success
    phase: reconciling
    previousServiceName: grafana-service
kind: List
metadata:
  resourceVersion: ""
```
So I have managed to reproduce your issue. I installed the operator, and its logs showed:

```
W0511 06:36:54.127339 1 reflector.go:324] pkg/mod/k8s.io/client-go@v0.24.3/tools/cache/reflector.go:167: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:system:controller-manager" cannot list resource "namespaces" in API group "" at the cluster scope
E0511 06:36:54.127398 1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.24.3/tools/cache/reflector.go:167: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:system:controller-manager" cannot list resource "namespaces" in API group "" at the cluster scope
```

My yaml looks like this:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: "2023-05-11T06:06:48Z"
  labels:
    kubernetes.io/metadata.name: system
  name: system
---
apiVersion: integreatly.org/v1alpha1
kind: Grafana
metadata:
  labels:
    app.kubernetes.io/instance: telemetry
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: grafana-operator
    helm.sh/chart: grafana-operator-2.8.2
  name: telemetry-grafana-operator-grafana
  namespace: system
spec:
  baseImage: docker.io/bitnami/grafana:9.5.1-debian-11-r4
  client:
    preferService: true
    timeout: 5
  config:
    alerting:
      enabled: false
    analytics:
      check_for_updates: false
      reporting_enabled: false
    log:
      level: warn
      mode: console
    security:
      admin_password: admin
      admin_user: admin
      disable_gravatar: false
    server:
      root_url: https://grafana.localtest.me
    users:
      default_theme: dark
  dashboardLabelSelector:
  - matchLabels:
      app: grafana
  dashboardNamespaceSelector:
    matchLabels:
      partition: new-hope
  deployment:
    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - podAffinityTerm:
            labelSelector:
              matchLabels:
                app.kubernetes.io/component: grafana
                app.kubernetes.io/instance: telemetry
                app.kubernetes.io/name: grafana-operator
            topologyKey: kubernetes.io/hostname
          weight: 1
    containerSecurityContext:
      allowPrivilegeEscalation: false
      privileged: false
      runAsGroup: 0
      runAsNonRoot: true
      runAsUser: 1001
    labels:
      app.kubernetes.io/instance: telemetry
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/name: grafana-operator
      helm.sh/chart: grafana-operator-2.8.2
    replicas: 1
    securityContext:
      fsGroup: 1001
      runAsGroup: 0
      runAsNonRoot: true
      runAsUser: 1001
      supplementalGroups: []
    skipCreateAdminAccount: false
  ingress:
    enabled: false
    hostname: grafana.localtest.me
    ingressClassName: nginx
    path: /
    pathType: Prefix
    tlsEnabled: true
    tlsSecretName: grafana.local-tls
  jsonnet:
    libraryLabelSelector:
      matchLabels:
        app.kubernetes.io/instance: telemetry
  resources:
    limits: {}
    requests: {}
  service:
    labels:
      app.kubernetes.io/instance: telemetry
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/name: grafana-operator
      helm.sh/chart: grafana-operator-2.8.2
    type: ClusterIP
---
apiVersion: integreatly.org/v1alpha1
kind: GrafanaDashboard
metadata:
  name: grafana-dashboard-from-url
  namespace: hello
  labels:
    app: grafana
spec:
  url: https://raw.githubusercontent.com/integr8ly/grafana-operator/v4/deploy/examples/remote/grafana-dashboard.json
---
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: "2023-05-11T06:25:34Z"
  labels:
    kubernetes.io/metadata.name: hello
    partition: new-hope
  name: hello
```

I will take a look at your PR when I get time.
Over the weekend someone did some work on this for the bitnami chart. It looks like `scanAllNamespaces` and `scanNamespaces` need `watch` and `list`, but I think that `dashboardNamespaceSelector` needs `get` as well.
It looks like I'm wrong and `get` is not required at all. Though since `get` and `list`/`watch` provide the same access to the data, I'm not sure it makes much difference; it just comes down to precisely how the operator makes the request.
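Under that reading, the extra rule the operator's ClusterRole needs for namespaces would only carry `list` and `watch`. A sketch, not verified against the operator source:

```yaml
# Sketch: the additional ClusterRole rule letting the operator's
# informer list and watch namespaces (matching the forbidden errors
# in the logs above).
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - list
  - watch
```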
Solved in #1041
Describe the bug
ClusterRole needs ability to view namespace in order to pick up dashboards in other namespace
My aim is to have the operator search for dashboards in a subset of all namespaces, where that subset isn't known in advance.
I think the only way to do this (on v4, at least) is with `dashboardNamespaceSelector` and `dashboardLabelSelector`, and they appear to need rights on the `namespaces` resource.
Version
docker.io/bitnami/grafana-operator:4.10.0-debian-11-r5
To Reproduce
The operator logs will display:
And grafana never picks up the dashboard.
The bitnami chart has created a ClusterRole that matches the one documented in this repo:
If I use `kubectl edit` to change the ClusterRole by inserting `- namespaces` under `- pods`, the error stops appearing in the logs and my dashboard appears.
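For reference, that edit amounts to adding `namespaces` to the core-API rule of the ClusterRole, roughly like this (an abridged sketch; the surrounding resources and verbs are taken from the documented role, not from my cluster):

```yaml
# Abridged sketch of the edited ClusterRole rule: "namespaces" added
# alongside the existing core resources such as "pods".
- apiGroups:
  - ""
  resources:
  - pods
  - namespaces   # added via kubectl edit; the forbidden errors stop
  verbs:
  - get
  - list
  - watch
```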
Expected behavior
The dashboard should appear in grafana.
Suspect component/Location where the bug might be occurring
Either I've screwed up my setup, or the v4 docs for ClusterRoles should be updated to reflect the need for namespaces access in this circumstance.
Runtime (please complete the following information):
Additional context
There was a PR for the bitnami chart that was blocked because the documentation in this repo says it isn't necessary.
I'd like to have a documented way to achieve my aims, whether that's by changing the ClusterRole or changing the operator config.
If you can confirm the rights required on the `namespaces` resource (I've tested with `get`, `list` & `watch`, but don't know whether it needs all of them) and the circumstances under which this is required (I suspect it's when `dashboardNamespaceSelector` is used, but I don't know), I'd be happy to submit a PR for the docs.