Bug 1866873: update MCDDrainErr to reduce cardinality & fix nodename #2044
Conversation
@kikisdeliveryservice: This pull request references Bugzilla bug 1866873, which is invalid.
/bugzilla refresh
@kikisdeliveryservice: This pull request references Bugzilla bug 1866873, which is valid. The bug has been moved to the POST state. The bug has been updated to refer to the pull request using the external bug tracker. 3 validation(s) were run on this bug
Force-pushed from ce258a0 to 74c93c6.
@kikisdeliveryservice: The following tests failed; say /retest to rerun all failed tests.
This will give the cluster time to retry the drain several times before sending an alert, to minimize false positives.
Force-pushed from 0a414b1 to a330d1d.
/retest
/skip
@kikisdeliveryservice: This pull request references Bugzilla bug 1866873, which is valid. 3 validation(s) were run on this bug
```diff
@@ -20,9 +20,10 @@ spec:
     rules:
     - alert: MCDDrainError
       expr: |
-        mcd_drain > 0
+        mcd_drain_err > 0
```
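For context, here is a minimal sketch of how the full PrometheusRule might look once the pending window discussed in this thread is in place. The `for:` duration, severity label, and annotation text are assumptions for illustration, not taken verbatim from the PR:

```yaml
# Illustrative sketch only: the for: duration, severity, and message
# are assumptions based on the surrounding discussion, not the PR diff.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: machine-config-daemon
spec:
  groups:
  - name: mcd-drain-error
    rules:
    - alert: MCDDrainError
      expr: |
        mcd_drain_err > 0
      for: 15m                  # wait out transient failures before firing
      labels:
        severity: warning       # assumed severity
      annotations:
        message: "Drain failed on node {{ $labels.node }}; check the MCD logs for details."
```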
How does this react to labels changing? Does that reset the clock? Do we alert if at least one node has been reporting an error at every point in the past 15m, or does it need a single node to report an error for the full 15m? I've reread the alert docs, but am still not clear on this :/
Nothing here has changed how our alerts fundamentally work.
These are alerts from mcd which are per node. So any mcd alert is based on the status of the underlying node.
In this case, if the drain error for the specific node is still present after the 15 minutes (which means it is still failing after many, many retries), the alert will fire, instead of firing immediately after the very first failure. This means we have a better signal as to whether or not the error is transient/self-recovering before we send anything.
If another node somehow also started failing to drain, there would be a separate alert associated with that node, etc...
We plan on adding more in the future; this is intended to fix current behavior.
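As a rough illustration of the per-node behavior described above: assuming the metric carries a node label, each node's series evaluates independently, so each failing node produces its own pending/firing alert. The node names below are made up:

```promql
# Two hypothetical nodes failing to drain; each series is its own alert instance.
mcd_drain_err{node="worker-0"} > 0
mcd_drain_err{node="worker-1"} > 0
```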
Overall lgtm, just one question: since you removed the "failing drain time" from the reporting directly, will users see that the timeout was 15m somehow? In the screenshot I just see the name. If not, maybe it's worth reporting somewhere that 15m is when we report this alert.
It's actually super standard to have a delay before firing an alert to make sure the alert is true; other OCP operators also do so without further info, since that info is available in the MCD logs. It was my mistake not to have one initially. I can add docs to clarify this, but I don't think explicitly saying so in the alert makes sense. Failing drain time was a timestamp (yikes), so it wasn't providing any signal and was making the alert super confusing to users, since the value of the alert was always changing.
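To make the trade-off concrete: with a timestamp-valued metric the delay logic has to live in the query itself, whereas an "error present" gauge lets the rule's `for:` clause handle it. Both expressions below are illustrative only, and `mcd_drain_failed_since` is a hypothetical metric name:

```promql
# Hypothetical timestamp-valued metric: the 15m threshold is baked into the expression.
time() - mcd_drain_failed_since > 15 * 60

# Simple gauge (as in this PR): the expression stays trivial and the rule's
# `for: 15m` provides the delay.
mcd_drain_err > 0
```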
ack, thanks!
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: kikisdeliveryservice, yuqi-zhang.
@kikisdeliveryservice: The following test failed; say /retest to rerun all failed tests.
/retest
Please review the full test history for this PR and help us cut down flakes.
@kikisdeliveryservice: All pull requests linked via external trackers have merged: Bugzilla bug 1866873 has been moved to the MODIFIED state.
/cherry-pick release-4.6
@kikisdeliveryservice: New pull request created: #2292
This fix addresses some drain metric pain points that have been going on forever: it reduces the alert's cardinality, fixes the reported node name, and adds a delay before the alert fires.

See https://bugzilla.redhat.com/show_bug.cgi?id=1866873#c3 for a screenshot.