diff --git a/modules/monitoring-creating-new-alerting-rules.adoc b/modules/monitoring-creating-new-alerting-rules.adoc
index d574f48b89c7..65cf95e6c7d6 100644
--- a/modules/monitoring-creating-new-alerting-rules.adoc
+++ b/modules/monitoring-creating-new-alerting-rules.adoc
@@ -23,7 +23,7 @@ These alerting rules trigger alerts based on the values of chosen metrics.
 
 .Procedure
 
-. Create a new YAML configuration file named `example-alerting-rule.yaml` in the `openshift-monitoring` namespace.
+. Create a new YAML configuration file named `example-alerting-rule.yaml`.
 
 . Add an `AlertingRule` resource to the YAML file.
 The following example creates a new alerting rule named `example`, similar to the default `Watchdog` alert:
@@ -34,24 +34,30 @@ apiVersion: monitoring.openshift.io/v1
 kind: AlertingRule
 metadata:
   name: example
-  namespace: openshift-monitoring
+  namespace: openshift-monitoring # <1>
 spec:
   groups:
   - name: example-rules
     rules:
-    - alert: ExampleAlert # <1>
-      for: 1m # <2>
-      expr: vector(1) # <3>
+    - alert: ExampleAlert # <2>
+      for: 1m # <3>
+      expr: vector(1) # <4>
       labels:
-        severity: warning # <4>
+        severity: warning # <5>
       annotations:
-        message: This is an example alert. # <5>
+        message: This is an example alert. # <6>
 ----
-<1> The name of the alerting rule you want to create.
-<2> The duration for which the condition should be true before an alert is fired.
-<3> The PromQL query expression that defines the new rule.
-<4> The severity that alerting rule assigns to the alert.
-<5> The message associated with the alert.
+<1> Ensure that the namespace is `openshift-monitoring`.
+<2> The name of the alerting rule you want to create.
+<3> The duration for which the condition should be true before an alert is fired.
+<4> The PromQL query expression that defines the new rule.
+<5> The severity that the alerting rule assigns to the alert.
+<6> The message associated with the alert.
++
+[IMPORTANT]
+====
+You must create the `AlertingRule` object in the `openshift-monitoring` namespace. Otherwise, the alerting rule is not accepted.
+====
 
 . Apply the configuration file to the cluster:
 +
diff --git a/modules/monitoring-managing-core-platform-alerting-rules.adoc b/modules/monitoring-managing-core-platform-alerting-rules.adoc
index 04eb637b2040..60b7fa166b92 100644
--- a/modules/monitoring-managing-core-platform-alerting-rules.adoc
+++ b/modules/monitoring-managing-core-platform-alerting-rules.adoc
@@ -18,6 +18,8 @@ For example, you can change the `severity` label for an alert from `warning` to
 
 * New alerting rules must be based on the default {product-title} monitoring metrics.
 
+* You must create the `AlertingRule` and `AlertRelabelConfig` objects in the `openshift-monitoring` namespace.
+
 * You can only add and modify alerting rules. You cannot create new recording rules or modify existing recording rules.
 
 * If you modify existing platform alerting rules by using an `AlertRelabelConfig` object, your modifications are not reflected in the Prometheus alerts API.
diff --git a/modules/monitoring-modifying-core-platform-alerting-rules.adoc b/modules/monitoring-modifying-core-platform-alerting-rules.adoc
index 7311d1a7e9ac..dc6aa289722a 100644
--- a/modules/monitoring-modifying-core-platform-alerting-rules.adoc
+++ b/modules/monitoring-modifying-core-platform-alerting-rules.adoc
@@ -16,7 +16,7 @@ For example, you can change the severity label of an alert, add a custom label,
 
 .Procedure
 
-. Create a new YAML configuration file named `example-modified-alerting-rule.yaml` in the `openshift-monitoring` namespace.
+. Create a new YAML configuration file named `example-modified-alerting-rule.yaml`.
 
 . Add an `AlertRelabelConfig` resource to the YAML file.
 The following example modifies the `severity` setting to `critical` for the default platform `watchdog` alerting rule:
@@ -27,22 +27,28 @@ apiVersion: monitoring.openshift.io/v1
 kind: AlertRelabelConfig
 metadata:
   name: watchdog
-  namespace: openshift-monitoring
+  namespace: openshift-monitoring # <1>
 spec:
   configs:
-  - sourceLabels: [alertname,severity] <1>
-    regex: "Watchdog;none" <2>
-    targetLabel: severity <3>
-    replacement: critical <4>
-    action: Replace <5>
+  - sourceLabels: [alertname,severity] # <2>
+    regex: "Watchdog;none" # <3>
+    targetLabel: severity # <4>
+    replacement: critical # <5>
+    action: Replace # <6>
 ----
-<1> The source labels for the values you want to modify.
-<2> The regular expression against which the value of `sourceLabels` is matched.
-<3> The target label of the value you want to modify.
-<4> The new value to replace the target label.
-<5> The relabel action that replaces the old value based on regex matching.
+<1> Ensure that the namespace is `openshift-monitoring`.
+<2> The source labels for the values you want to modify.
+<3> The regular expression against which the value of `sourceLabels` is matched.
+<4> The target label of the value you want to modify.
+<5> The replacement value for the target label.
+<6> The relabel action that replaces the old value based on regex matching. The default action is `Replace`. Other possible values are `Keep`, `Drop`, `HashMod`, `LabelMap`, `LabelDrop`, and `LabelKeep`.
++
+[IMPORTANT]
+====
+You must create the `AlertRelabelConfig` object in the `openshift-monitoring` namespace. Otherwise, the alert label is not changed.
+====
 
 . Apply the configuration file to the cluster:
 +
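A minimal verification sketch for trying the examples above, not part of the proposed module text: it assumes a cluster where the `AlertingRule` and `AlertRelabelConfig` custom resources from `monitoring.openshift.io/v1` are available, and it reuses the `example-alerting-rule.yaml` and `example-modified-alerting-rule.yaml` file names and the `example` and `watchdog` object names from the diff.

[source,terminal]
----
$ oc apply -f example-alerting-rule.yaml
$ oc apply -f example-modified-alerting-rule.yaml

# Both objects must be created in the openshift-monitoring namespace;
# otherwise the new rule is not accepted and the relabeling is not applied.
$ oc -n openshift-monitoring get alertingrules.monitoring.openshift.io example
$ oc -n openshift-monitoring get alertrelabelconfigs.monitoring.openshift.io watchdog
----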