kube-proxy iptables min-sync-period default 1sec #92836
Conversation
/sig network
It's been a while, but ISTR deciding that 10s was a compromise decision. We had lots of reports of people whose iptables runs were 4-5 seconds each, so setting 10s and burst=2 targeted 5s. If you have a steady stream of changes, this will devolve into 1 run per 10s. I didn't want to do a dynamic controller without REALLY needing to.

If we're going to lower this, we should probably track how long iptables runs are actually taking and use that to guide this value dynamically, so we don't pile up requests. We also don't want to run iptables-restore continuously.
…On Mon, Jul 6, 2020 at 1:14 PM Dan Winship ***@***.***> wrote:
@BenTheElder pointed out that there had been a default originally (2s), but it was reverted in #36332 because it was causing serious problems; following links from there, it seems this was just because the original implementation of minSyncPeriod was broken (e.g., it blocked the informer thread while waiting until it was allowed to do an update, rather than doing the updates in a different thread).
When Tim wrote BoundedFrequencyRunner in #46266, he set a 10s default for certain cluster templates/setup scripts (e.g., cluster/gce/gci/configure-helper.sh) but didn't change the command-line/config default.
I think 1s is a much better default value than either 0s or 10s; it works much more smoothly than 0s in the case where you get a bunch of updates all at once (e.g., deleting a large deployment), but unlike 10s it's not noticeably slower when you have steady-but-not-overwhelming updates.
Is it the very best possible value for every cluster? Would 2s be better than 1s? I dunno. But 10s, while maybe appropriate for some clusters, seems way too high as a default, given that most(?) people are currently surviving with 0s. (That said, this PR does not change the minSyncPeriod for any of the cases that are currently setting it to 10s, since they're overriding the default anyway.)
/lgtm
Currently kube-proxy defaults the min-sync-period for iptables to 0. However, as explained by Dan Winship, "With minSyncPeriod: 0, you run iptables-restore 100 times. With minSyncPeriod: 1s, you run iptables-restore once. With minSyncPeriod: 10s, you also run iptables-restore once, but you might have to wait 10 seconds first."
/retest
And most of that time was probably spent waiting to get the iptables lock. We've made huge improvements to the iptables situation in the last few years. I don't think old data points are valid any more. But I don't currently have any new data points to offer...
Note that this PR does not lower the value for anyone; it raises the default value from 0 to 1s.
/retest
I think I considered this but thought maybe it would break someone. In hindsight, that's pretty clearly the wrong choice.
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: aojea, thockin

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment
/retest
Review the full test history for this PR. Silence the bot with an `/lgtm cancel` comment for consistent failures.
What type of PR is this?
/kind failing-test
/kind flake
What this PR does / why we need it:
Currently kube-proxy defaults the min-sync-period for iptables to 0. However, as explained by Dan Winship, "With minSyncPeriod: 0, you run iptables-restore 100 times. With minSyncPeriod: 1s, you run iptables-restore once. With minSyncPeriod: 10s, you also run iptables-restore once, but you might have to wait 10 seconds first."
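For reference, operators who want this behavior without waiting for a new default can set the period themselves. A sketch of the two ways to do that (the flag and config field exist; the exact values here are just the ones discussed in this PR):

```shell
# Raise the iptables min-sync-period from the old default of 0 to 1s,
# so bursts of Service/Endpoints updates are coalesced into a single
# iptables-restore call.
kube-proxy --proxy-mode=iptables --iptables-min-sync-period=1s

# Or via the KubeProxyConfiguration file:
#   iptables:
#     minSyncPeriod: 1s
```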
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
This is causing issues in KIND jobs: when there are multiple endpoint updates, kube-proxy fails to acquire the iptables lock.
Does this PR introduce a user-facing change?:
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: