Bug 2092442: drain_controller: slow down retries for failing nodes #3178
Conversation
@yuqi-zhang: This pull request references Bugzilla bug 2092442, which is invalid:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/bugzilla refresh

@sergiordlr I think even with this patch, the times will be non-deterministic. Either way, I think we should modify the tests to not check for specific retry timings.
@yuqi-zhang: This pull request references Bugzilla bug 2092442, which is valid. The bug has been moved to the POST state. The bug has been updated to refer to the pull request using the external bug tracker. 3 validation(s) were run on this bug.

Requesting review from QA contact.
Strong agree.

/test e2e-agnostic-upgrade

needs local rebase
Force-pushed from c4d8e45 to 943af11.
```
@@ -319,7 +326,15 @@ func (ctrl *Controller) syncNode(key string) error {
	ctrl.logNode(node, "initiating drain")
	if err := drain.RunNodeDrain(drainer, node.Name); err != nil {
		ctrl.logNode(node, "Drain failed. Waiting 1 minute then retrying. Error message from drain: %v", err)
```
We are now waiting for either drainRequeueDelay or drainRequeueFailingDelay amount of time. Let's move this log inside the if so that the correct waiting time gets logged.
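For illustration, a rough sketch of what that suggestion could look like. The delay constant names come from this review thread; nodeFailingLongerThan and drainRequeueFailingThreshold are hypothetical stand-ins for however the PR actually tracks failure duration, not the PR's real code:

```go
// Sketch only: pick the requeue delay first, then log it, so the
// message always reflects the wait that is actually used.
delay := drainRequeueDelay
if nodeFailingLongerThan(node, drainRequeueFailingThreshold) { // hypothetical helper
	delay = drainRequeueFailingDelay
}
ctrl.logNode(node, "Drain failed. Waiting %v then retrying. Error message from drain: %v", delay, err)
```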
Sorry, for some reason I thought I pushed the fix a few days ago, but I guess I never actually ran the push command. Should be fixed now.
In our old daemon logic, we used to retry failing drains after 1 minute for the first 5 failures, then retry every 5 minutes until the 1-hour timeout was reached. In practice, this means the daemon switches to slower retries after 5 * (retry sleep + drain timeout) = 5 * (1 min + 1.5 min) = 12.5 minutes, which saves some resources. The controller has more difficulty matching this, since its retries can happen at odd intervals (faster if the node gets requeued by something else, or slower if many nodes are competing for the limited queue parallelism), so there is less value in doing so. This is an example implementation of a time-based slowdown in retries, but it may not be worth the complexity.
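As a minimal self-contained sketch of a time-based retry slowdown in Go: the constant values and the failingSince bookkeeping below are assumptions chosen to mirror the 1-minute / 5-minute behavior described above, not necessarily what this PR merged:

```go
package main

import (
	"fmt"
	"time"
)

// Illustrative values only: retry quickly at first, then back off once
// a node has been failing for a while. The threshold is an assumption.
const (
	drainRequeueDelay            = 1 * time.Minute
	drainRequeueFailingDelay     = 5 * time.Minute
	drainRequeueFailingThreshold = 10 * time.Minute
)

// requeueDelay returns the slower retry delay once a node's drain has
// been failing for longer than the threshold. failingSince is the zero
// time if the node has not failed a drain yet.
func requeueDelay(failingSince time.Time) time.Duration {
	if !failingSince.IsZero() && time.Since(failingSince) > drainRequeueFailingThreshold {
		return drainRequeueFailingDelay
	}
	return drainRequeueDelay
}

func main() {
	// A node that started failing 15 minutes ago gets the slow retry.
	fmt.Println(requeueDelay(time.Now().Add(-15 * time.Minute))) // 5m0s
	// A node that just started failing still gets the fast retry.
	fmt.Println(requeueDelay(time.Now())) // 1m0s
}
```

Note that because the workqueue can requeue a node sooner for unrelated reasons, any such delay is only approximate, which is exactly the non-determinism discussed above.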
Force-pushed from 943af11 to cf69f01.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: sinnykumari, yuqi-zhang

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
/hold cancel

Should be a safe PR to get in. Thanks for the reviews!
@yuqi-zhang: all tests passed! Full PR test history. Your PR dashboard.
@yuqi-zhang: All pull requests linked via external trackers have merged: Bugzilla bug 2092442 has been moved to the MODIFIED state.