What happened?
Installed the operator in two namespaces; each of them tries to modify ClusterRole/monitoring-edit for its own needs.
Did you expect to see something different?
I expected each operator to create its own ClusterRole.
How to reproduce it (as minimally and precisely as possible):
Install two operators in two different namespaces and observe that metadata.labels["olm.owner.namespace"] keeps changing.
It is not a problem if you install both operators at the same version, but it is a problem when the versions differ: each operator tries to configure its own permissions.
For example, openshift-monitoring will conflict with your own operator over that ClusterRole.
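One way to watch the ownership flapping described above (a sketch, assuming you have kubectl access to the cluster; the backslashes escape the dots inside the label key for jsonpath):

```shell
# Poll the olm.owner.namespace label on the shared ClusterRole.
# With two operator installations at different versions, the value
# flips back and forth between the two namespaces.
while true; do
  kubectl get clusterrole monitoring-edit \
    -o jsonpath='{.metadata.labels.olm\.owner\.namespace}'
  echo
  sleep 5
done
```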
Environment
openshift 4.8.19
Prometheus Operator version: v0.47.0
Kubernetes version information:
v1.21.4+6438632
Kubernetes cluster kind:
OpenShift UPI installation on VMware
Manifests:
Prometheus Operator Logs:
No errors, and nothing about the ClusterRole.
(I'm on an air-gapped network, so it may take a while to get the logs out.)
Anything else we need to know?:
I think the solution would be to create the ClusterRole with the name $(NAMESPACE)-monitoring-edit; this would avoid conflicts.
As it stands, using the operator breaks openshift-monitoring because of that ClusterRole (even when the operator is namespaced).
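A sketch of what the proposed per-namespace ClusterRole could look like (assumptions: $(NAMESPACE) would be substituted at install time, and the rules below are illustrative placeholders, not the real monitoring-edit contents):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  # $(NAMESPACE) substituted per installation, so each operator owns
  # its own ClusterRole instead of fighting over one shared object.
  name: $(NAMESPACE)-monitoring-edit
rules:
  # Illustrative rules only; the actual monitoring-edit permissions
  # are not reproduced here.
  - apiGroups: ["monitoring.coreos.com"]
    resources: ["servicemonitors", "podmonitors", "prometheusrules"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
```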