[DOCS] Alert creation delay #3667

Merged · 3 commits · Mar 13, 2024
@@ -30,6 +30,7 @@ Conditions for each rule can be applied to specific metrics relating to the inventory
You can choose the aggregation type, the metric, and by including a warning threshold value, you can be
alerted on multiple threshold values based on severity scores. When creating the rule, you can still get
notified if no data is returned for the specific metric or if the rule fails to query {es}.
You can also set advanced options such as the number of consecutive runs that must meet the rule conditions before an alert occurs.

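For illustration, here is how a rule with this advanced option might be created through Kibana's alerting HTTP API. This is a hedged sketch, not the documented procedure: the `alert_delay` field and the `/api/alerting/rule` endpoint reflect recent Kibana versions as we understand them, while the URL, credentials, and rule type ID are placeholder assumptions, and `params` is left empty where the real rule conditions would go.

[source,python]
----
import requests

KIBANA_URL = "https://localhost:5601"  # hypothetical deployment URL

rule = {
    "name": "ingress-nginx memory",
    # Assumed rule type ID for inventory threshold rules:
    "rule_type_id": "metrics.alert.inventory.threshold",
    "consumer": "infrastructure",
    "schedule": {"interval": "1m"},
    "params": {},  # thresholds, filters, and node type omitted for brevity
    # Advanced option: alert only after 3 consecutive matching runs.
    "alert_delay": {"active": 3},
}

response = requests.post(
    f"{KIBANA_URL}/api/alerting/rule",
    json=rule,
    headers={"kbn-xsrf": "true"},  # required header for Kibana HTTP APIs
    auth=("elastic", "changeme"),  # hypothetical credentials
)
response.raise_for_status()
print(response.json()["id"])
----
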
In this example, Kubernetes Pods is the selected inventory type. The conditions state that you will receive
a critical alert for any pods within the `ingress-nginx` namespace with a memory usage of 95% or above
docs/en/observability/metrics-threshold-alert.asciidoc (3 additions & 0 deletions)
@@ -59,6 +59,9 @@ When you select *Alert me if a group stops reporting data*, the rule is triggered
If you include the same field in both your **Filter** and your **Group by**, you may receive fewer results than you're expecting. For example, if you filter by `cloud.region: us-east`, then grouping by `cloud.region` will have no effect because the filter query can only match one region.
==============================================

In the *Advanced options*, you can change the number of consecutive runs that must meet the rule conditions before an alert occurs.
The default value is `1`.
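
The setting behaves like a simple consecutive-run counter. A minimal sketch of that behavior (our illustration, not Kibana's implementation): the count increments on each run where the conditions are met, resets on any run where they are not, and the alert occurs once the count reaches the configured value.

[source,python]
----
def should_alert(run_results, required_consecutive=1):
    """True once `required_consecutive` runs in a row met the conditions."""
    streak = 0
    for conditions_met in run_results:
        streak = streak + 1 if conditions_met else 0  # any miss resets
        if streak >= required_consecutive:
            return True
    return False

# With the default of 1, the first matching run alerts:
assert should_alert([True])
# With 3, two matches followed by a miss never reach the threshold:
assert not should_alert([True, True, False], required_consecutive=3)
----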

[discrete]
[[action-types-metrics]]
== Action types
docs/en/observability/slo-burn-rate-alert.asciidoc (1 addition & 0 deletions)
@@ -22,6 +22,7 @@ To create your SLO burn rate rule:
. Set your long lookback period under *Lookback period (hours)*. Your short lookback period is set automatically.
. Set your *Burn rate threshold*. Under this field, you'll see how long you have until your error budget is exhausted (see the sketch after this list).
. Set how often the condition is evaluated in the *Check every* field.
. Optionally, change the number of consecutive runs that must meet the rule conditions before an alert occurs in the *Advanced options*.
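
As a rough illustration of the "how long you have until your error budget is exhausted" figure (our arithmetic, not the product's: it assumes a 30-day SLO window and a constant burn rate): at burn rate 1 the error budget lasts exactly the SLO window, and at burn rate `b` it is consumed `b` times faster.

[source,python]
----
# Hypothetical 30-day SLO window; burn rate values are illustrative.
SLO_WINDOW_HOURS = 30 * 24  # 720 hours

def hours_until_budget_exhausted(burn_rate: float) -> float:
    """At a constant burn rate b, the budget is spent b times
    faster than the SLO window allows."""
    return SLO_WINDOW_HOURS / burn_rate

print(hours_until_budget_exhausted(1.0))   # 720.0 (the full window)
print(hours_until_budget_exhausted(14.4))  # 50.0 hours
----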

[discrete]
[[action-types-slo]]