
Bug 1866873: update MCDDrainErr to reduce cardinality & fix nodename #2044

Conversation

kikisdeliveryservice
Contributor

@kikisdeliveryservice commented Aug 31, 2020

This fix does the following:

  • fixes a bug where the node name was missing from MCDDrainErr
  • reduces the cardinality of MCDDrainErr by using a fixed set of error messages and a fixed value
  • adds a 15m delay before the alert fires, to give the failed drain time to resolve
  • fixes labels on other metrics (minor)

This fixes some drain metric pain points that have been going on forever.
See https://bugzilla.redhat.com/show_bug.cgi?id=1866873#c3 for a screenshot.
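
For context, here is a minimal sketch of what the reworked alerting rule could look like, assuming a standard Prometheus rule layout; the group name, severity, and annotation wording are illustrative assumptions, and only the mcd_drain_err expression and the 15m delay come from this PR:

  groups:
  - name: machine-config-daemon        # group name is an assumption
    rules:
    - alert: MCDDrainError
      # mcd_drain_err is exported per node, so each failing node raises its own alert.
      expr: mcd_drain_err > 0
      # Fire only if the drain error has persisted for 15 minutes.
      for: 15m
      labels:
        severity: warning              # assumption
      annotations:
        message: "Drain failed on {{ $labels.node }}; check the MCD logs on that node."  # wording is an assumption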

@openshift-ci-robot added the do-not-merge/work-in-progress label Aug 31, 2020
@openshift-ci-robot added the approved label Aug 31, 2020
@kikisdeliveryservice requested review from runcom and removed request for cgwalters, runcom and ericavonb August 31, 2020 23:31
@kikisdeliveryservice changed the title from "[WIP] update MCDDrainErr to reduce cardinality & fix nodename" to "[WIP] Bug 1866873: update MCDDrainErr to reduce cardinality & fix nodename" Aug 31, 2020
@openshift-ci-robot added the bugzilla/severity-low label Aug 31, 2020
@openshift-ci-robot
Contributor

@kikisdeliveryservice: This pull request references Bugzilla bug 1866873, which is invalid:

  • expected the bug to target the "4.6.0" release, but it targets "---" instead

Comment /bugzilla refresh to re-evaluate validity if changes to the Bugzilla bug are made, or edit the title of this pull request to link to a different bug.

In response to this:

[WIP] Bug 1866873: update MCDDrainErr to reduce cardinality & fix nodename

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci-robot added the bugzilla/invalid-bug label Aug 31, 2020
@kikisdeliveryservice
Contributor Author

/bugzilla refresh

@openshift-ci-robot added the bugzilla/valid-bug label and removed the bugzilla/invalid-bug label Aug 31, 2020
@openshift-ci-robot
Contributor

@kikisdeliveryservice: This pull request references Bugzilla bug 1866873, which is valid. The bug has been moved to the POST state. The bug has been updated to refer to the pull request using the external bug tracker.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target release (4.6.0) matches configured target release for branch (4.6.0)
  • bug is in the state ASSIGNED, which is one of the valid states (NEW, ASSIGNED, ON_DEV, POST)

In response to this:

/bugzilla refresh

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci-robot
Contributor

@kikisdeliveryservice: The following tests failed, say /retest to rerun all failed tests:

Test name Commit Details Rerun command
ci/prow/verify 0a414b151ac4f9270f08e10a2a5a0fd013280e83 link /test verify
ci/prow/okd-e2e-aws 0a414b151ac4f9270f08e10a2a5a0fd013280e83 link /test okd-e2e-aws
ci/prow/e2e-ovn-step-registry 0a414b151ac4f9270f08e10a2a5a0fd013280e83 link /test e2e-ovn-step-registry
ci/prow/e2e-upgrade 0a414b151ac4f9270f08e10a2a5a0fd013280e83 link /test e2e-upgrade

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@kikisdeliveryservice
Contributor Author

/retest

@kikisdeliveryservice
Contributor Author

/skip

@kikisdeliveryservice changed the title from "[WIP] Bug 1866873: update MCDDrainErr to reduce cardinality & fix nodename" to "Bug 1866873: update MCDDrainErr to reduce cardinality & fix nodename" Oct 28, 2020
@openshift-ci-robot removed the do-not-merge/work-in-progress label Oct 28, 2020
@openshift-ci-robot
Contributor

@kikisdeliveryservice: This pull request references Bugzilla bug 1866873, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target release (4.7.0) matches configured target release for branch (4.7.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, ON_DEV, POST)

In response to this:

Bug 1866873: update MCDDrainErr to reduce cardinality & fix nodename

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@@ -20,9 +20,10 @@ spec:
   rules:
   - alert: MCDDrainError
     expr: |
-      mcd_drain > 0
+      mcd_drain_err > 0
Member

How does this react to labels changing? Does that reset the clock? Do we alert if at least one node has been reporting an error at every point in the past 15m, or does it need a single node to report an error for the full 15m? I've reread the alert docs, but am still not clear on this :/

Contributor Author

Nothing here has changed how our alerts fundamentally work.

These are alerts from the MCD, which are per node, so any MCD alert is based on the status of the underlying node.

In this case, if the drain error for the specific node is still present after the 15 minutes (which means it is still failing after many, many retries), it will fire, instead of firing immediately after the very first failure. This means we have a better signal as to whether or not the error is transient/self-recovering before we send anything.

If another node somehow also started failing to drain, there would be a separate alert associated with that node, etc...

We plan on adding more in the future; this change is intended to fix the current behavior.
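
For illustration, each label set (here, each node) is tracked independently by the for: 15m clause, so there can be one pending or firing alert instance per node. A hypothetical way to see this from the Prometheus console, using its built-in ALERTS series (the node label name and values are assumptions):

  # Active MCDDrainError alerts, one instance per failing node:
  ALERTS{alertname="MCDDrainError"}
  # => ALERTS{alertname="MCDDrainError", alertstate="pending", node="worker-0"} 1
  # => ALERTS{alertname="MCDDrainError", alertstate="firing",  node="worker-1"} 1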

Contributor

@yuqi-zhang left a comment

Overall lgtm, just one question: since you removed the "failing drain time" from the reporting directly, will users see that the timeout was 15m somehow? In the screenshot I just see the name. If not, maybe it's worth reporting somewhere that 15m is when we report this alert.

@kikisdeliveryservice
Contributor Author

kikisdeliveryservice commented Oct 28, 2020

Overall lgtm, just one question: since you removed the "failing drain time" from the reporting directly, will users see that the timeout was 15m somehow? In the screenshot I just see the name. If not, maybe it's worth reporting somewhere that 15m is when we report this alert.

It's actually super standard to have a delay before firing an alert to make sure the alert is true; other OCP operators also do so without further info, since the details are available in the MCD logs. It was my mistake not to have one initially. I can add docs to clarify this, but I don't think explicitly saying so in the alert makes sense. Failing drain time was a timestamp (yikes), so it wasn't providing any signal and was making the alert genuinely confusing to users, since the value of the alert was always changing.

@yuqi-zhang
Contributor

ack, thanks!
/lgtm

@openshift-ci-robot added the lgtm label Oct 28, 2020
@openshift-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: kikisdeliveryservice, yuqi-zhang

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:
  • OWNERS [kikisdeliveryservice,yuqi-zhang]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-merge-robot
Contributor

openshift-merge-robot commented Oct 28, 2020

@kikisdeliveryservice: The following test failed, say /retest to rerun all failed tests:

Test name Commit Details Rerun command
ci/prow/okd-e2e-aws a330d1d link /test okd-e2e-aws

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@openshift-bot
Contributor

/retest

Please review the full test history for this PR and help us cut down flakes.

@openshift-merge-robot merged commit dd9050c into openshift:master Oct 29, 2020
@openshift-ci-robot
Contributor

@kikisdeliveryservice: All pull requests linked via external trackers have merged:

Bugzilla bug 1866873 has been moved to the MODIFIED state.

In response to this:

Bug 1866873: update MCDDrainErr to reduce cardinality & fix nodename

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@kikisdeliveryservice
Contributor Author

/cherry-pick release-4.6

@openshift-cherrypick-robot

@kikisdeliveryservice: new pull request created: #2292

In response to this:

/cherry-pick release-4.6

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
