
Bug 2092442: drain_controller: slow down retries for failing nodes #3178

Merged

Conversation

yuqi-zhang
Contributor

In our old daemon logic, we used to retry failing drains after 1
minute for the first 5 failures, then retry every 5 minutes until
the 1-hour timeout was reached.

In practice, this means the daemon backs off to the slower retries
after 5 * (retry sleep + drain timeout) = 12.5 minutes of failing,
which saves some resources.

The controller has a harder time matching this, since its retries
can happen at odd intervals (faster if the node gets requeued by
something else, or slower if many nodes are competing for the limited
queue parallelism), so there is less value in matching the daemon
exactly.

This is an example implementation of a time-based slowdown in retries,
but it may not be worth the added complexity.

@openshift-ci openshift-ci bot added bugzilla/severity-low Referenced Bugzilla bug's severity is low for the branch this PR is targeting. bugzilla/invalid-bug Indicates that a referenced Bugzilla bug is invalid for the branch this PR is targeting. labels Jun 3, 2022
@openshift-ci
Contributor

openshift-ci bot commented Jun 3, 2022

@yuqi-zhang: This pull request references Bugzilla bug 2092442, which is invalid:

  • expected the bug to target the "4.11.0" release, but it targets "---" instead

Comment /bugzilla refresh to re-evaluate validity if changes to the Bugzilla bug are made, or edit the title of this pull request to link to a different bug.

In response to this:

Bug 2092442: drain_controller: slow down retries for failing nodes

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jun 3, 2022
@yuqi-zhang
Contributor Author

/bugzilla refresh
/hold

@sergiordlr I think even with this patch, the times will be non-deterministic. Either way, I think we should modify the tests to not check for specific retry timings

@openshift-ci openshift-ci bot added do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. bugzilla/valid-bug Indicates that a referenced Bugzilla bug is valid for the branch this PR is targeting. and removed bugzilla/invalid-bug Indicates that a referenced Bugzilla bug is invalid for the branch this PR is targeting. labels Jun 3, 2022
@openshift-ci
Contributor

openshift-ci bot commented Jun 3, 2022

@yuqi-zhang: This pull request references Bugzilla bug 2092442, which is valid. The bug has been moved to the POST state. The bug has been updated to refer to the pull request using the external bug tracker.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target release (4.11.0) matches configured target release for branch (4.11.0)
  • bug is in the state NEW, which is one of the valid states (NEW, ASSIGNED, ON_DEV, POST, POST)

Requesting review from QA contact:
/cc @sergiordlr

In response to this:

/bugzilla refresh
/hold

@sergiordlr I think even with this patch, the times will be non-deterministic. Either way, I think we should modify the tests to not check for specific retry timings

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci openshift-ci bot requested a review from sergiordlr June 3, 2022 01:00
@cgwalters
Member

Either way, I think we should modify the tests to not check for specific retry timings

Strong agree.

@yuqi-zhang
Contributor Author

/test e2e-agnostic-upgrade

@sinnykumari
Contributor

needs local rebase

@openshift-ci openshift-ci bot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jun 10, 2022
@yuqi-zhang yuqi-zhang force-pushed the controller-drain-add-retry-backoff branch from c4d8e45 to 943af11 Compare June 10, 2022 21:24
@openshift-ci openshift-ci bot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jun 10, 2022
@@ -319,7 +326,15 @@ func (ctrl *Controller) syncNode(key string) error {
ctrl.logNode(node, "initiating drain")
if err := drain.RunNodeDrain(drainer, node.Name); err != nil {
ctrl.logNode(node, "Drain failed. Waiting 1 minute then retrying. Error message from drain: %v", err)
Contributor


We are now waiting for either drainRequeueDelay or drainRequeueFailingDelay. Let's move this log inside the if so that the correct waiting time gets logged.

Contributor Author


Sorry, for some reason I thought I pushed the fix a few days ago, but I guess I never actually ran the push command. Should be fixed now

@yuqi-zhang yuqi-zhang force-pushed the controller-drain-add-retry-backoff branch from 943af11 to cf69f01 Compare June 16, 2022 22:32
@sinnykumari
Contributor

/lgtm
Jerry, feel free to remove hold.

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Jun 21, 2022
@openshift-ci
Contributor

openshift-ci bot commented Jun 21, 2022

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: sinnykumari, yuqi-zhang

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:
  • OWNERS [sinnykumari,yuqi-zhang]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@yuqi-zhang
Contributor Author

/hold cancel

Should be a safe PR to get in. Thanks for the reviews!

@openshift-ci openshift-ci bot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Jun 22, 2022
@openshift-ci
Contributor

openshift-ci bot commented Jun 23, 2022

@yuqi-zhang: all tests passed!

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@openshift-ci openshift-ci bot merged commit e595ae9 into openshift:master Jun 23, 2022
@openshift-ci
Contributor

openshift-ci bot commented Jun 23, 2022

@yuqi-zhang: All pull requests linked via external trackers have merged:

Bugzilla bug 2092442 has been moved to the MODIFIED state.

In response to this:

Bug 2092442: drain_controller: slow down retries for failing nodes

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
