Bug 1867604: Set maxUnavailable on ds/tuned to 10% #149
Conversation
This daemonset isn't critical for ensuring availability, so allow up to 10% of its pods to be updated at once.
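For context, a change like this lands in the DaemonSet's `updateStrategy`. The sketch below is illustrative, not the exact diff from this PR; the metadata fields (name, namespace) are assumptions based on `ds/tuned` in the cluster-node-tuning-operator:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: tuned                                          # assumed name (ds/tuned)
  namespace: openshift-cluster-node-tuning-operator    # assumed namespace
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      # Allow up to 10% of the daemonset's pods to be unavailable
      # during a rolling update, instead of the default of 1.
      maxUnavailable: 10%
```

With the default `maxUnavailable: 1`, a rolling update touches one node at a time; a percentage lets the rollout parallelism scale with cluster size.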
@smarterclayton What'd you say was a reasonable maxUnavailable for daemonsets like these?
/approve
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: jmencak, sdodson
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/retest
/retest Please review the full test history for this PR and help us cut down flakes.
6 similar comments
/retitle Bug 1867604: Set maxUnavailable on ds/tuned to 10%
@sdodson: All pull requests linked via external trackers have merged. Bugzilla bug 1867604 has been moved to the MODIFIED state. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This daemonset isn't critical for ensuring availability, so allow up to
10% to be updated at once.

On a 250 node cluster we're seeing about 5 pods per minute when
upgrading from 4.5.4 to 4.5.5, which isn't horrible, but we can surely
upgrade more than one at a time.
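The arithmetic behind the commit message can be sketched as a back-of-the-envelope estimate. This is a hypothetical model, not code from the PR: it assumes every pod restart takes the same time and that the scheduler keeps the unavailable window full, so real rollouts will be lumpier.

```python
import math

def rollout_minutes(nodes, pods_per_minute_serial, max_unavailable_pct):
    """Rough rolling-update duration: throughput scales with parallelism.

    `pods_per_minute_serial` is the observed rate with one pod in flight;
    `max_unavailable_pct` is the DaemonSet's maxUnavailable as a percentage.
    """
    # Number of pods the rollout may update concurrently (at least 1).
    parallel = max(1, math.floor(nodes * max_unavailable_pct / 100))
    return nodes / (pods_per_minute_serial * parallel)

# Effectively serial rollout (one pod in flight on a 250 node cluster):
# 250 nodes at ~5 pods/minute -> ~50 minutes end to end.
print(rollout_minutes(250, 5, 100 / 250))  # 50.0

# maxUnavailable: 10% -> up to 25 pods in flight -> roughly 25x faster
# under these idealized assumptions.
print(rollout_minutes(250, 5, 10))  # 2.0
```

The point of the model is just that a percentage-based `maxUnavailable` keeps rollout time roughly constant as the cluster grows, whereas the default of 1 makes it grow linearly with node count.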