
only clean up iptables chains periodically in large clusters #110334

Merged: 3 commits merged into kubernetes:master from danwinship:iptables-fewer-saves on Jun 29, 2022

Conversation

danwinship (Contributor) commented Jun 1, 2022

What type of PR is this?

/kind cleanup
/kind feature

What this PR does / why we need it:

Follow-up to #110328 (which this branch includes). This makes it so that in "large" clusters, we only clean up stale service chains once a minute, rather than doing it on every sync. In very large clusters, this cuts several seconds off each syncProxyRules run.

Since the chains being removed by this code are not referenced by any other rules, it doesn't matter if we let them sit around for a while before we get around to deleting them; they're not going to have any effect on packet processing either way.

However, since it could be confusing if an admin runs iptables-save and finds rules that are no longer in use (e.g., a KUBE-SEP- chain for a pod that exited), I made it so that it only switches from "synchronous deletion" to "periodic deletion" in "large" clusters, using the same definition of "large" that we use when deciding to elide comments from the iptables rules.

Which issue(s) this PR fixes:

none

Does this PR introduce a user-facing change?

In "large" clusters, kube-proxy in iptables mode will now sometimes
leave unused rules in iptables for a while (up to `--iptables-sync-period`)
before deleting them. This improves performance by not requiring it to
check for stale rules on every sync. (In smaller clusters, it will still
remove unused rules immediately once they are no longer used.)

(The threshold for "large" used here is currently "1000 endpoints" but
this is subject to change.)

/sig network
/priority important-longterm

@k8s-ci-robot added labels on Jun 1, 2022: do-not-merge/work-in-progress, release-note, size/L, kind/cleanup, kind/feature, sig/network, priority/important-longterm, cncf-cla: yes
k8s-ci-robot (Contributor) commented Jun 1, 2022

@danwinship: This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Jun 1, 2022
@k8s-ci-robot k8s-ci-robot requested review from dcbw and justinsb Jun 1, 2022
@k8s-ci-robot k8s-ci-robot added area/ipvs approved Indicates a PR has been approved by an approver from all required OWNERS files. labels Jun 1, 2022
aojea (Member) commented Jun 2, 2022

Just to understand this series of PRs: what scale problem are we addressing?
Memory consumption? The time-to-sync penalty of running iptables-save?

danwinship (Contributor, author) commented Jun 2, 2022

Speed. iptables-save gets really slow when you have very very many iptables rules.

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jun 24, 2022
@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jun 24, 2022
pkg/util/iptables/save_restore.go (outdated review threads, resolved)
@@ -404,6 +405,8 @@ var iptablesCleanupOnlyChains = []iptablesJumpChain{
{utiliptables.TableFilter, kubeServicesChain, utiliptables.ChainInput, "kubernetes service portals", []string{"-m", "conntrack", "--ctstate", "NEW"}},
}

var iptablesCleanupPeriod = time.Minute
aojea (Member), Jun 28, 2022:

What if we only do the cleanup once per syncPeriod?

iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 1s
  syncPeriod: 30s

That way it's parameterizable.

danwinship (Contributor, author), Jun 29, 2022:

makes sense

aojea (Member) commented Jun 28, 2022

+1
One consideration: tie the behavior to the syncPeriod option instead of adding a new period.
Also, I think we should have a test checking that the -X (delete-chain) rules are added or removed.

/assign @thockin

danwinship added 3 commits Jun 29, 2022

Turn this into a generic "large cluster mode" that determines whether
we optimize for performance or debuggability.

"iptables-save" takes several seconds to run on machines with lots of
iptables rules, and we only use its result to figure out which chains
are no longer referenced by any rules. While deleting unused chains
immediately makes things less confusing, it's not actually _necessary_,
since they never get called during packet processing. So in large
clusters, only clean up chains periodically rather than on every sync.
@k8s-ci-robot k8s-ci-robot added size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. and removed size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Jun 29, 2022
k8s-ci-robot (Contributor) commented Jun 29, 2022

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: danwinship

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

danwinship (Contributor, author) commented Jun 29, 2022

ok, rebased for iptables-counters fixes, changed to use syncPeriod as the timeout, and added unit tests.

I pulled in c12da17 from #110268, which serves the purpose here of confirming that chain deletion behavior is unchanged in "small" clusters. Then I renamed TestEndpointCommentElision to TestSyncProxyRulesLargeClusterMode and extended it to test the behavior of service deletion when largeClusterMode is triggered to confirm that as well.

@danwinship danwinship changed the title WIP only clean up iptables chains periodically in large clusters only clean up iptables chains periodically in large clusters Jun 29, 2022
@k8s-ci-robot k8s-ci-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Jun 29, 2022
aojea (Member) commented Jun 29, 2022

/lgtm

Just add a mention in the release note that this behavior is controlled by the syncPeriod option, and that a "large" cluster is one with more than 1000 endpoints.

@k8s-ci-robot k8s-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Jun 29, 2022
aojea (Member) commented Jun 29, 2022

k8s.io/kubernetes/pkg/controller/volume/attachdetach/reconciler: Test_Run_OneVolumeDetachFailNodeWithReadWriteOnce

/test pull-kubernetes-unit

@k8s-ci-robot k8s-ci-robot merged commit f045fb6 into kubernetes:master Jun 29, 2022
14 checks passed
@k8s-ci-robot k8s-ci-robot added this to the v1.25 milestone Jun 29, 2022
@danwinship danwinship deleted the iptables-fewer-saves branch Jun 29, 2022