[DOCS] Alert creation delay (#3667)
(cherry picked from commit 24595b5)
lcawl authored and mergify[bot] committed Mar 13, 2024
1 parent 2672056 commit 2bf4e80
Showing 3 changed files with 5 additions and 0 deletions.
@@ -30,6 +30,7 @@ Conditions for each rule can be applied to specific metrics relating to the inve
You can choose the aggregation type and the metric, and by including a warning threshold value, you can be
alerted on multiple threshold values based on severity scores. When creating the rule, you can still get
notified if no data is returned for the specific metric or if the rule fails to query {es}.
You can also set advanced options such as the number of consecutive runs that must meet the rule conditions before an alert occurs.
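
If you prefer to create rules programmatically, the following is a minimal sketch of creating a comparable inventory rule through Kibana's alerting HTTP API, using the `ingress-nginx` pod-memory example described in the next paragraph. The `rule_type_id`, the `params` field names, and the `alert_delay` setting (which corresponds to the consecutive-runs advanced option) are assumptions about recent Kibana versions rather than definitions from this page; verify them against the alerting API reference for your stack version.

[source,python]
----
# Hypothetical sketch only: create an inventory threshold rule with an alert delay.
# The endpoint, rule_type_id, and params field names are assumptions; verify against
# your Kibana version's alerting API reference.
import requests

KIBANA_URL = "https://localhost:5601"  # placeholder Kibana address

rule = {
    "name": "ingress-nginx pod memory",
    "rule_type_id": "metrics.alert.inventory.threshold",  # assumed inventory rule type ID
    "consumer": "infrastructure",
    "schedule": {"interval": "1m"},  # how often the rule runs
    "params": {
        "nodeType": "pod",  # Kubernetes Pods inventory type
        "criteria": [
            {
                "metric": "memory",
                "comparator": ">=",
                "threshold": [95],  # critical at 95% memory usage
                "timeSize": 1,
                "timeUnit": "m",
            }
        ],
        "filterQueryText": "kubernetes.namespace: ingress-nginx",
    },
    # Advanced option: require 3 consecutive matching runs before an alert occurs.
    "alert_delay": {"active": 3},
    "actions": [],
}

response = requests.post(
    f"{KIBANA_URL}/api/alerting/rule",
    json=rule,
    headers={"kbn-xsrf": "true"},  # required by Kibana HTTP APIs
    auth=("elastic", "changeme"),  # placeholder credentials
)
response.raise_for_status()
print(response.json()["id"])
----

Omitting the delay setting keeps the default behavior of alerting on the first matching run.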

In this example, Kubernetes Pods is the selected inventory type. The conditions state that you will receive
a critical alert for any pods within the `ingress-nginx` namespace with a memory usage of 95% or above
3 changes: 3 additions & 0 deletions docs/en/observability/metrics-threshold-alert.asciidoc
@@ -59,6 +59,9 @@ When you select *Alert me if a group stops reporting data*, the rule is triggere
If you include the same field in both your **Filter** and your **Group by**, you may receive fewer results than you're expecting. For example, if you filter by `cloud.region: us-east`, then grouping by `cloud.region` will have no effect because the filter query can only match one region.
==============================================

In the *Advanced options*, you can change the number of consecutive runs that must meet the rule conditions before an alert occurs.
The default value is `1`.
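
One way to reason about this setting: compared with the default of `1`, requiring N consecutive runs postpones the earliest possible alert by roughly (N - 1) check intervals, because the condition must be observed on that many additional runs. A small illustrative calculation follows; the timing model is an assumption based on the option's description, not behavior documented on this page.

[source,python]
----
# Illustrative only: how much later the earliest alert can occur when the
# "consecutive runs" advanced option is raised above its default of 1.
from datetime import timedelta

def extra_alert_delay(check_every: timedelta, consecutive_runs: int) -> timedelta:
    # Assumes the rule condition is met on every consecutive run; the alert can
    # only fire once (consecutive_runs - 1) additional checks have completed.
    return check_every * (consecutive_runs - 1)

# A rule checked every minute that requires 3 consecutive matching runs:
print(extra_alert_delay(timedelta(minutes=1), consecutive_runs=3))  # 0:02:00
----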

[discrete]
[[action-types-metrics]]
== Action types
1 change: 1 addition & 0 deletions docs/en/observability/slo-burn-rate-alert.asciidoc
@@ -22,6 +22,7 @@ To create your SLO burn rate rule:
. Set your long lookback period under *Lookback period (hours)*. Your short lookback period is set automatically.
. Set your *Burn rate threshold*. Under this field, you'll see how long you have until your error budget is exhausted.
. Set how often the condition is evaluated in the *Check every* field.
. Optionally, in the *Advanced options*, change the number of consecutive runs that must meet the rule conditions before an alert occurs (see the API sketch after this list).
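
The following is a minimal sketch of creating an SLO burn rate rule through Kibana's alerting HTTP API. The `rule_type_id`, the shape of `params` (windows and thresholds), and the `alert_delay` option are assumptions about recent Kibana versions, not definitions from this page; confirm them against the alerting API reference for your version.

[source,python]
----
# Hypothetical sketch only: create an SLO burn rate rule with an alert delay.
# rule_type_id and params field names are assumptions; verify against your
# Kibana version's alerting API reference.
import requests

KIBANA_URL = "https://localhost:5601"  # placeholder Kibana address

rule = {
    "name": "checkout SLO burn rate",
    "rule_type_id": "slo.rules.burnRate",  # assumed SLO burn rate rule type ID
    "consumer": "slo",
    "schedule": {"interval": "1m"},  # the *Check every* field
    "params": {
        "sloId": "<your-slo-id>",  # placeholder SLO identifier
        "windows": [
            {
                "id": "critical",
                "burnRateThreshold": 14.4,  # the *Burn rate threshold* field
                "longWindow": {"value": 1, "unit": "h"},   # *Lookback period (hours)*
                "shortWindow": {"value": 5, "unit": "m"},  # set automatically in the UI
                "actionGroup": "slo.burnRate.alert",
            }
        ],
    },
    # Optional advanced option: require 2 consecutive matching runs before an alert occurs.
    "alert_delay": {"active": 2},
    "actions": [],
}

response = requests.post(
    f"{KIBANA_URL}/api/alerting/rule",
    json=rule,
    headers={"kbn-xsrf": "true"},  # required by Kibana HTTP APIs
    auth=("elastic", "changeme"),  # placeholder credentials
)
response.raise_for_status()
print(response.json()["id"])
----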

[discrete]
[[action-types-slo]]
