Description
Component(s)
opamp
What happened?
Description
The OTel operator automatically generates a ServiceAccount, ClusterRole, and ClusterRoleBinding in its reconciliation logic. This is useful when no serviceAccount is provided in the OpenTelemetryCollector custom resource. However, there is a bug in the operator: even when a serviceAccount is provided, the operator still creates a ClusterRole and ClusterRoleBinding for that serviceAccount, even though a ClusterRole and ClusterRoleBinding are already attached to the provided serviceAccount.
This causes a sync-drift issue when the project is managed via GitOps with Argo CD, because:
- the ClusterRole and ClusterRoleBinding generated by the operator inherit all the labels from the OpenTelemetryCollector, which include an ArgoCD-tracked label
- there is no `ownerReference` in the generated RBAC resources; moreover, the ClusterRole and ClusterRoleBinding are cluster-scoped resources, which are not allowed/recommended to have an `ownerReference` pointing at a namespaced owner
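For context, a namespaced resource generated by the operator could carry an `ownerReference` back to the collector CR so that garbage collection and Argo CD pruning work, but Kubernetes does not allow cluster-scoped resources such as ClusterRole/ClusterRoleBinding to reference a namespaced owner. A sketch of what such a reference would look like (field values taken from the CR below; the `uid` is elided):

```yaml
metadata:
  ownerReferences:
    - apiVersion: opentelemetry.io/v1beta1
      kind: OpenTelemetryCollector
      name: collector-deployment
      uid: <uid-of-the-collector-cr>  # set by the operator at creation time
```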
Steps to Reproduce
The OpenTelemetryCollector I have:
```yaml
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: collector-deployment
  namespace: monitoring
  labels:
    argocd.argoproj.io/instance: observability-otel-operator-configs
spec:
  mode: deployment
  podSecurityContext:
    runAsUser: 1000
    runAsGroup: 1000
    seccompProfile:
      type: RuntimeDefault
  observability:
    metrics:
      enableMetrics: true
  securityContext:
    allowPrivilegeEscalation: false
    capabilities:
      drop:
        - ALL
    readOnlyRootFilesystem: true
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  serviceAccount: opentelemetry-collector
  config:
    # ... some otel configs ...
```
Existing ClusterRoleBinding:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: opentelemetry-collector
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: opentelemetry-collector
subjects:
  - kind: ServiceAccount
    name: opentelemetry-collector
    namespace: monitoring
```
Existing ClusterRole:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: opentelemetry-collector
rules:
  - apiGroups:
      - ""
    resources:
      - endpoints
      - nodes/proxy
      - nodes/stats
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
      - namespaces
      - namespaces/status
      - nodes
      - nodes/spec
      - pods
      - pods/status
      - replicationcontrollers
      - replicationcontrollers/status
      - resourcequotas
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - pods
      - namespaces
      - services
    verbs:
      - get
      - watch
      - list
  - apiGroups:
      - apps
    resources:
      - daemonsets
      - deployments
      - replicasets
      - statefulsets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - daemonsets
      - deployments
      - replicasets
    verbs:
      - get
      - list
      - watch
  - nonResourceURLs:
      - /metrics
    verbs:
      - get
  - apiGroups:
      - batch
    resources:
      - jobs
      - cronjobs
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - autoscaling
    resources:
      - horizontalpodautoscalers
    verbs:
      - get
      - list
      - watch
```
Expected Result
- Automatically generate a `ServiceAccount`, `ClusterRole`, and `ClusterRoleBinding` for each `OpenTelemetryCollector` CR.
- Skip generating these resources if a custom `serviceAccount` is specified in the CR (`spec.serviceAccount`).
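The skip behavior expected above could be sketched as follows. This is a minimal illustration with hypothetical type and function names, not the operator's actual code: the idea is simply that when `spec.serviceAccount` is set, the account's RBAC is assumed to be managed externally and generation is skipped.

```go
package main

import "fmt"

// CollectorSpec is a stand-in (hypothetical) for the relevant part of the
// OpenTelemetryCollector spec.
type CollectorSpec struct {
	ServiceAccount string // spec.serviceAccount from the CR; empty if not set
}

// needsGeneratedRBAC reports whether the operator should generate its own
// ClusterRole/ClusterRoleBinding for this collector. If a serviceAccount is
// provided, assume its RBAC is managed externally and skip generation.
func needsGeneratedRBAC(spec CollectorSpec) bool {
	return spec.ServiceAccount == ""
}

func main() {
	// No serviceAccount set: the operator should generate RBAC.
	fmt.Println(needsGeneratedRBAC(CollectorSpec{}))
	// serviceAccount provided: generation should be skipped.
	fmt.Println(needsGeneratedRBAC(CollectorSpec{ServiceAccount: "opentelemetry-collector"}))
}
```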
Actual Result
Even when a `serviceAccount` is explicitly provided, the operator:
- Still generates a `ClusterRole` and `ClusterRoleBinding` associated with the provided `serviceAccount`.
- Does not detect or skip generation when the service account already has RBAC permissions via an existing `ClusterRoleBinding`.
This causes unnecessary and potentially conflicting RBAC objects in the cluster.
Generated ClusterRole:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: '2025-06-16T10:37:02Z'
  labels:
    app.kubernetes.io/component: opentelemetry-collector
    app.kubernetes.io/instance: monitoring.collector-deployment
    app.kubernetes.io/managed-by: opentelemetry-operator
    app.kubernetes.io/name: collector-deployment-monitoring-cluster-role
    app.kubernetes.io/part-of: opentelemetry
    app.kubernetes.io/version: latest
    argocd.argoproj.io/instance: observability-eyre-otel-operator-configs
    tyroTeam: observability
  name: collector-deployment-monitoring-cluster-role
rules:
  - apiGroups:
      - ''
    resources:
      - events
      - namespaces
      - namespaces/status
      - nodes
      - nodes/spec
      - pods
      - pods/status
      - replicationcontrollers
      - replicationcontrollers/status
      - resourcequotas
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - apps
    resources:
      - daemonsets
      - deployments
      - replicasets
      - statefulsets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - daemonsets
      - deployments
      - replicasets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - batch
    resources:
      - jobs
      - cronjobs
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - autoscaling
    resources:
      - horizontalpodautoscalers
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - pods
      - namespaces
    verbs:
      - get
      - watch
      - list
  - apiGroups:
      - apps
    resources:
      - replicasets
    verbs:
      - get
      - watch
      - list
  - apiGroups:
      - ''
    resources:
      - nodes
    verbs:
      - get
      - watch
      - list
```
Generated ClusterRoleBinding:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: '2025-06-16T10:37:02Z'
  labels:
    app.kubernetes.io/component: opentelemetry-collector
    app.kubernetes.io/instance: monitoring.collector-deployment
    app.kubernetes.io/managed-by: opentelemetry-operator
    app.kubernetes.io/name: collector-deployment-monitoring-collector
    app.kubernetes.io/part-of: opentelemetry
    app.kubernetes.io/version: latest
    argocd.argoproj.io/instance: observability-eyre-otel-operator-configs
  name: collector-deployment-monitoring-collector
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: collector-deployment-monitoring-cluster-role
subjects:
  - kind: ServiceAccount
    name: opentelemetry-collector
    namespace: monitoring
```
Kubernetes Version
v1.32.3-eks-4096722
Operator version
0.124.0
Collector version
0.124.0
Environment information
Log output
Additional context
No response