Prometheus Cluster Monitoring - custom configuration description improvements #12500
I was able to change it following this approach:
1. Export the CRD object:
2. Change the configuration in the prometheus-k8s-rules.yaml file, then replace the CRD object:
3. Check whether the operator automatically reverts the ConfigMap:
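The steps above could be sketched with `oc` roughly as follows (a hedged sketch; the object kind, name, and `openshift-monitoring` namespace are assumptions for an OCP 3.11 cluster and may differ on yours):

```shell
# 1. Export the CRD-backed rules object (object name/namespace assumed):
oc get prometheusrule prometheus-k8s-rules -n openshift-monitoring -o yaml > prometheus-k8s-rules.yaml

# 2. Edit prometheus-k8s-rules.yaml with the custom rules, then replace the object:
oc replace -f prometheus-k8s-rules.yaml -n openshift-monitoring

# 3. Check whether the operator reconciles the object back to its previous state:
oc get prometheusrule prometheus-k8s-rules -n openshift-monitoring -o yaml
```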
Never mind, the above method didn't work: the operator reconciled the object back to its previous state.
I was able to disable the prometheusOperator section, and then it worked.
Comment out this section:
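The section in question is presumably the prometheusOperator stanza of the monitoring config map; a sketch of the edit (the config map name, key, and fields below are assumptions, not confirmed by this thread):

```shell
# Hedged sketch: open the monitoring config map for editing
# (name and namespace are assumptions for OCP 3.11).
oc edit configmap cluster-monitoring-config -n openshift-monitoring

# In the editor, comment out the prometheusOperator stanza, e.g.:
#   prometheusOperator:
#     baseImage: ...
```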
I have been exploring this issue and have had no luck getting the approach @shah-zobair mentions to work. Disabling the prometheusOperator section of this config map only results in the operator using some default settings. So far the only workable solution I have found is to disable the operator entirely. We really need a mechanism to disable parts of this operator without disabling the whole thing.
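Disabling the operator entirely could look like scaling its deployments to zero so nothing reconciles the stack (a sketch; the deployment names and namespace are assumptions for an OCP 3.11 cluster):

```shell
# Hedged sketch: stop reconciliation by scaling the operators down
# (deployment names/namespace are assumptions, verify with `oc get deployments`).
oc scale deployment cluster-monitoring-operator -n openshift-monitoring --replicas=0
oc scale deployment prometheus-operator -n openshift-monitoring --replicas=0
```

Note that any upgrade or re-run of the installer would likely scale these back up, so this is a workaround rather than a supported configuration path.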
Results of research from my side: I've tested CMO as a way to monitor the entire cluster, including application monitoring, self-service capabilities, and high availability. As a result, there are two ways of achieving it:
How do I customize the config file /etc/prometheus.yaml? It looks like it lives in the image registry.redhat.io/openshift3/ose-prometheus-config-reloader. How can I customize this image?
I'm also looking to customize prometheus.yaml, as @zhangchl007 is requesting. We need a way to use remote_write in Prometheus.
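For reference, the kind of fragment being asked about is a standard Prometheus remote_write block; a minimal sketch, written to a local file (the endpoint URL is a placeholder, and how to get such a fragment into the 3.11 operator-managed config is exactly what this thread asks for):

```shell
# Hypothetical remote_write fragment (placeholder URL); this is plain
# Prometheus configuration syntax, not an operator-supported injection point.
cat <<'EOF' > /tmp/prometheus-additional.yaml
remote_write:
- url: "https://remote-storage.example.com/api/v1/write"
EOF
grep -c 'remote_write' /tmp/prometheus-additional.yaml
```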
Issues go stale after 90d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle stale. If this issue is safe to close now please do so with /close. /lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle rotten. If this issue is safe to close now please do so with /close. /lifecycle rotten
Rotten issues close after 30d of inactivity. Reopen the issue by commenting /reopen. /close
@openshift-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Which section(s) is the issue in?
https://docs.openshift.com/container-platform/3.11/install_config/prometheus_cluster_monitoring.html
https://github.com/openshift/openshift-docs/blob/master/install_config/monitoring/configuring-openshift-cluster-monitoring.adoc
https://github.com/openshift/openshift-docs/blob/master/install_config/monitoring/update-and-compatibility-guarantees.adoc
What needs fixing?
The description of how to pause resetting the monitoring stack to its default state is misleading.
I've installed the Cluster Monitoring Operator from the openshift-ansible 3.10 and 3.11 branches.
In OpenShift, the AppVersion object is not visible. Operators control the state of objects using ControllerRevision objects; see below:
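Inspecting those objects might look like the following (a sketch; the namespace is an assumption, and the actual object names depend on the cluster):

```shell
# Hedged sketch: list the ControllerRevision objects the operators use
# to track desired state (namespace assumed to be openshift-monitoring).
oc get controllerrevisions -n openshift-monitoring
oc describe controllerrevision <name> -n openshift-monitoring
```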
Please add an example (or some links) showing how to inject custom configuration.
As of now, I was able to stop Prometheus from resetting to its default state by manually deleting the ClusterMonitoringOperator and PrometheusOperator instances (and then reconfiguring the CRDs with my configuration). This is probably not the most elegant way to provide custom configuration. I understand that a customized setup is not supported per se, but an example of how to inject it without deleting the Operators would be greatly appreciated here.
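The workaround described above could be sketched as follows (hedged; the deployment names, namespace, and the `my-custom-prometheus.yaml` file are assumptions for illustration only):

```shell
# Hedged sketch of the workaround: remove the operator deployments so they
# stop reconciling, then apply custom CRD-backed configuration.
# Names/namespace are assumptions for OCP 3.10/3.11.
oc delete deployment cluster-monitoring-operator -n openshift-monitoring
oc delete deployment prometheus-operator -n openshift-monitoring

# Apply a custom configuration (hypothetical file name):
oc apply -f my-custom-prometheus.yaml -n openshift-monitoring
```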