test/extended: check if there is less than two alerts fired #22513

Merged
merged 1 commit into openshift:master from paulfantom's only_one_alert branch on May 6, 2019

Conversation

@paulfantom (Contributor) commented on Apr 9, 2019

In #22512 we check whether Prometheus is firing the Watchdog alert, which should be the only alert firing. This test ensures that no other alerts are firing.

/CC: @brancz @mxinden @s-urbaniak @squat @metalmatze
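As an illustration only, not the PR's actual diff, here is a minimal standalone sketch of this kind of check using the official Prometheus Go client; the in-cluster service address is a placeholder, and the real test in test/extended/prometheus/prometheus.go is wired through the e2e suite's own query plumbing rather than this client.

    // Illustrative sketch: count currently firing alerts and require fewer
    // than two, since Watchdog should be the only one firing.
    package main

    import (
        "context"
        "fmt"
        "time"

        "github.com/prometheus/client_golang/api"
        promv1 "github.com/prometheus/client_golang/api/prometheus/v1"
        "github.com/prometheus/common/model"
    )

    func firingAlerts(ctx context.Context, addr string) (model.Vector, error) {
        client, err := api.NewClient(api.Config{Address: addr})
        if err != nil {
            return nil, err
        }
        // ALERTS{alertstate="firing"} has one series per currently firing alert.
        result, _, err := promv1.NewAPI(client).Query(ctx, `ALERTS{alertstate="firing"}`, time.Now())
        if err != nil {
            return nil, err
        }
        vec, ok := result.(model.Vector)
        if !ok {
            return nil, fmt.Errorf("unexpected result type %T", result)
        }
        return vec, nil
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()

        // Placeholder address; adjust for how the monitoring stack is exposed.
        alerts, err := firingAlerts(ctx, "http://prometheus-k8s.openshift-monitoring.svc:9090")
        if err != nil {
            panic(err)
        }
        if len(alerts) >= 2 {
            panic(fmt.Sprintf("expected fewer than two firing alerts, got %d: %v", len(alerts), alerts))
        }
        fmt.Printf("%d alert(s) firing\n", len(alerts))
    }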

@openshift-ci-robot added the size/S label (Denotes a PR that changes 10-29 lines, ignoring generated files.) on Apr 9, 2019
@mxinden (Contributor) left a comment

Thanks a bunch for looking into this! That is very helpful.

@@ -150,6 +150,20 @@ var _ = g.Describe("[Feature:Prometheus][Conformance] Prometheus", func() {
return true, nil
})).NotTo(o.HaveOccurred())
})
g.It("should report less than two alerts firing", func() {
A reviewer (Contributor) commented on this hunk:

Most of our alerting rules have a for duration set, so they can only start firing after the cluster has been up for a certain time.

I don't think waiting for a specific amount of time to pass is an option. Is it possible to run this test at the very end, to increase the probability that most for durations have been reached?

(For anyone not familiar with Prometheus alerting rules: https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/)
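To make the timing concern concrete: a rule with a for clause only starts firing once its condition has held for that long, so a freshly installed cluster can pass a "fewer than two alerts" assertion trivially. Below is a hypothetical sketch, not part of this PR, of gating the strict check on Prometheus uptime; the query helper, job label, and threshold are assumptions for illustration only.

    // Hypothetical sketch: only enforce the strict check once Prometheus has
    // been up long enough for most `for` windows to have elapsed.
    package main

    import (
        "fmt"
        "time"
    )

    // checkFiringAlerts takes any helper that evaluates a PromQL expression to
    // a single number (a stand-in for whatever the e2e suite actually uses) and
    // a minimum uptime before the assertion is considered meaningful.
    func checkFiringAlerts(query func(expr string) (float64, error), minUptime time.Duration) error {
        // Assumed job label; adjust to the monitoring stack's configuration.
        uptime, err := query(`time() - process_start_time_seconds{job="prometheus-k8s"}`)
        if err != nil {
            return err
        }
        if time.Duration(uptime)*time.Second < minUptime {
            // Too early: rules with long `for` clauses could not have fired
            // yet, so a pass here would not mean much.
            return nil
        }
        firing, err := query(`count(ALERTS{alertstate="firing"})`)
        if err != nil {
            return err
        }
        if firing >= 2 {
            return fmt.Errorf("expected fewer than two firing alerts, got %v", firing)
        }
        return nil
    }

    func main() {
        // Fake query helper for illustration; a real test would hit Prometheus.
        fake := func(expr string) (float64, error) { return 1, nil }
        fmt.Println(checkFiringAlerts(fake, 30*time.Minute))
    }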

test/extended/prometheus/prometheus.go (review thread marked outdated and resolved)
@paulfantom (Contributor, Author) commented on Apr 11, 2019

/hold

@paulfantom force-pushed the only_one_alert branch 2 times, most recently from df7cef8 to c2f35c1, on April 11, 2019 09:26
@openshift-ci-robot added the approved label (Indicates a PR has been approved by an approver from all required OWNERS files.) on May 4, 2019
@paulfantom (Contributor, Author) commented:

/test e2e-aws-upgrade

@brancz (Contributor) commented on May 6, 2019

/lgtm

@openshift-ci-robot added the lgtm label (Indicates that a PR is ready to be merged.) on May 6, 2019
@openshift-ci-robot commented:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: brancz, metalmatze, paulfantom

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-bot (Contributor) commented:

/retest

Please review the full test history for this PR and help us cut down flakes.

@openshift-merge-robot merged commit 4dd9d7c into openshift:master on May 6, 2019
@paulfantom deleted the only_one_alert branch on May 6, 2019 11:30
Labels: approved, lgtm, size/S
7 participants