
Commit 3394eef

Merge pull request #75015 from eromanova97/OBSDOCS-952

OBSDOCS-952: Clarify openshift-monitoring namespace requirement for m…

michaelryanpeter committed Apr 25, 2024
2 parents 9422c55 + b60ef6a
Showing 3 changed files with 38 additions and 24 deletions.
modules/monitoring-creating-new-alerting-rules.adoc (30 changes: 18 additions & 12 deletions)
@@ -23,7 +23,7 @@ These alerting rules trigger alerts based on the values of chosen metrics.
 .Procedure
 
-. Create a new YAML configuration file named `example-alerting-rule.yaml` in the `openshift-monitoring` namespace.
+. Create a new YAML configuration file named `example-alerting-rule.yaml`.
 
 . Add an `AlertingRule` resource to the YAML file.
 The following example creates a new alerting rule named `example`, similar to the default `Watchdog` alert:
@@ -34,24 +34,30 @@ apiVersion: monitoring.openshift.io/v1
 kind: AlertingRule
 metadata:
   name: example
-  namespace: openshift-monitoring
+  namespace: openshift-monitoring # <1>
 spec:
   groups:
   - name: example-rules
     rules:
-    - alert: ExampleAlert # <1>
-      for: 1m # <2>
-      expr: vector(1) # <3>
+    - alert: ExampleAlert # <2>
+      for: 1m # <3>
+      expr: vector(1) # <4>
       labels:
-        severity: warning # <4>
+        severity: warning # <5>
       annotations:
-        message: This is an example alert. # <5>
+        message: This is an example alert. # <6>
 ----
-<1> The name of the alerting rule you want to create.
-<2> The duration for which the condition should be true before an alert is fired.
-<3> The PromQL query expression that defines the new rule.
-<4> The severity that alerting rule assigns to the alert.
-<5> The message associated with the alert.
+<1> Ensure that the namespace is `openshift-monitoring`.
+<2> The name of the alerting rule you want to create.
+<3> The duration for which the condition should be true before an alert is fired.
+<4> The PromQL query expression that defines the new rule.
+<5> The severity that the alerting rule assigns to the alert.
+<6> The message associated with the alert.
+
+[IMPORTANT]
+====
+You must create the `AlertingRule` object in the `openshift-monitoring` namespace. Otherwise, the alerting rule is not accepted.
+====

 . Apply the configuration file to the cluster:
 +
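
The command block attached to this step is truncated from the diff view. A minimal sketch of what the step runs, assuming the standard `oc` CLI workflow and the file name used earlier in the procedure:

[source,terminal]
----
$ oc apply -f example-alerting-rule.yaml
----

If the rule is accepted, it appears as an `AlertingRule` object in the `openshift-monitoring` namespace.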
modules/monitoring-managing-core-platform-alerting-rules.adoc (2 changes: 2 additions & 0 deletions)
@@ -18,6 +18,8 @@ For example, you can change the `severity` label for an alert from `warning` to

 * New alerting rules must be based on the default {product-title} monitoring metrics.
+* You must create the `AlertingRule` and `AlertRelabelConfig` objects in the `openshift-monitoring` namespace.
 * You can only add and modify alerting rules. You cannot create new recording rules or modify existing recording rules.
 * If you modify existing platform alerting rules by using an `AlertRelabelConfig` object, your modifications are not reflected in the Prometheus alerts API.
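
A quick way to confirm that both object types live in the required namespace, assuming the `AlertingRule` and `AlertRelabelConfig` custom resource definitions from the `monitoring.openshift.io` API group are installed on the cluster:

[source,terminal]
----
$ oc get alertingrules.monitoring.openshift.io,alertrelabelconfigs.monitoring.openshift.io -n openshift-monitoring
----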
modules/monitoring-modifying-core-platform-alerting-rules.adoc (30 changes: 18 additions & 12 deletions)
@@ -16,7 +16,7 @@ For example, you can change the severity label of an alert, add a custom label,
 .Procedure
 
-. Create a new YAML configuration file named `example-modified-alerting-rule.yaml` in the `openshift-monitoring` namespace.
+. Create a new YAML configuration file named `example-modified-alerting-rule.yaml`.
 
 . Add an `AlertRelabelConfig` resource to the YAML file.
 The following example modifies the `severity` setting to `critical` for the default platform `Watchdog` alerting rule:
@@ -27,22 +27,28 @@ apiVersion: monitoring.openshift.io/v1
 kind: AlertRelabelConfig
 metadata:
   name: watchdog
-  namespace: openshift-monitoring
+  namespace: openshift-monitoring # <1>
 spec:
   configs:
-  - sourceLabels: [alertname,severity] <1>
-    regex: "Watchdog;none" <2>
-    targetLabel: severity <3>
-    replacement: critical <4>
-    action: Replace <5>
+  - sourceLabels: [alertname,severity] # <2>
+    regex: "Watchdog;none" # <3>
+    targetLabel: severity # <4>
+    replacement: critical # <5>
+    action: Replace # <6>
 ----
-<1> The source labels for the values you want to modify.
-<2> The regular expression against which the value of `sourceLabels` is matched.
-<3> The target label of the value you want to modify.
-<4> The new value to replace the target label.
-<5> The relabel action that replaces the old value based on regex matching.
+<1> Ensure that the namespace is `openshift-monitoring`.
+<2> The source labels for the values you want to modify.
+<3> The regular expression against which the value of `sourceLabels` is matched.
+<4> The target label of the value you want to modify.
+<5> The new value to replace the target label.
+<6> The relabel action that replaces the old value based on regex matching.
 The default action is `Replace`.
 Other possible values are `Keep`, `Drop`, `HashMod`, `LabelMap`, `LabelDrop`, and `LabelKeep`.
+
+[IMPORTANT]
+====
+You must create the `AlertRelabelConfig` object in the `openshift-monitoring` namespace. Otherwise, the alert label will not change.
+====
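
Of the other actions listed in the callouts above, `Drop` removes matching alerts rather than relabeling them. A minimal sketch, assuming a hypothetical alert named `ExampleAlert` that you want to suppress:

[source,yaml]
----
apiVersion: monitoring.openshift.io/v1
kind: AlertRelabelConfig
metadata:
  name: drop-example # hypothetical object name
  namespace: openshift-monitoring
spec:
  configs:
  - sourceLabels: [alertname] # match on the alert name only
    regex: "ExampleAlert" # alerts whose name matches this expression are dropped
    action: Drop
----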

 . Apply the configuration file to the cluster:
 +
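
As in the first module, the command block for this step is truncated from the diff view. A minimal sketch, assuming the same `oc` workflow:

[source,terminal]
----
$ oc apply -f example-modified-alerting-rule.yaml
----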
