iptables proxy reorg in preparation for minimizing iptables-restore #110266
Conversation
@danwinship: This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the `triage/accepted` label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
(force-pushed from 3e0cfe6 to cfecfb4)
/retest-required
(force-pushed from cfecfb4 to 15953b3)
(force-pushed from 895624c to 81c55fb)
We figure out early on whether we're going to end up outputting no endpoints, so update the metrics then. (Also remove a redundant feature gate check; svcInfo already checks the ServiceInternalTrafficPolicy feature gate itself and so svcInfo.InternalPolicyLocal() will always return false if the gate is not enabled.)
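To make the pattern this commit describes concrete, here is a minimal, hypothetical sketch; the types, names, and the printed placeholder for the metric are stand-ins, not the actual kube-proxy code:

```go
package main

import "fmt"

// Hypothetical, simplified stand-in for the real servicePort info type.
type servicePortInfo struct {
	internalPolicyLocal bool // already false when the feature gate is disabled
}

func (s *servicePortInfo) InternalPolicyLocal() bool { return s.internalPolicyLocal }

func main() {
	svcInfo := &servicePortInfo{internalPolicyLocal: true}
	clusterEndpoints := []string{} // endpoints usable for cluster-wide traffic
	localEndpoints := []string{}   // endpoints on this node

	// Decide up front whether this servicePort will end up with no usable
	// internal endpoints, and account for that here rather than discovering
	// it further down the loop.
	hasInternalEndpoints := len(clusterEndpoints) > 0
	if svcInfo.InternalPolicyLocal() {
		// No separate feature-gate check needed: InternalPolicyLocal()
		// already returns false when the gate is off.
		hasInternalEndpoints = len(localEndpoints) > 0
	}
	if !hasInternalEndpoints {
		fmt.Println("would update the no-internal-endpoints metric here")
	}
}
```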
Part of reorganizing the syncProxyRules loop to do: 1. figure out what chains are needed, mark them in activeNATChains 2. write servicePort jump rules to KUBE-SERVICES/KUBE-NODEPORTS 3. write servicePort-specific chains (SVC, SVL, EXT, FW, SEP) This fixes the handling of the endpoint chains. Previously they were handled entirely at the top of the loop. Now we record which ones are in use at the top but don't create them and fill them in until the bottom.
Part of reorganizing the syncProxyRules loop to do: 1. figure out what chains are needed, mark them in activeNATChains 2. write servicePort jump rules to KUBE-SERVICES/KUBE-NODEPORTS 3. write servicePort-specific chains (SVC, SVL, EXT, FW, SEP) This fixes the handling of the SVC and SVL chains. We were already filling them in at the end of the loop; this fixes it to create them at the bottom of the loop as well.
Part of reorganizing the syncProxyRules loop to do: 1. figure out what chains are needed, mark them in activeNATChains 2. write servicePort jump rules to KUBE-SERVICES/KUBE-NODEPORTS 3. write servicePort-specific chains (SVC, SVL, EXT, FW, SEP) This fixes the handling of the EXT chain.
Part of reorganizing the syncProxyRules loop to do: 1. figure out what chains are needed, mark them in activeNATChains 2. write servicePort jump rules to KUBE-SERVICES/KUBE-NODEPORTS 3. write servicePort-specific chains (SVC, SVL, EXT, FW, SEP) This fixes the jump rules for internal traffic. Previously we were handling "jumping from kubeServices to internalTrafficChain" and "adding masquerade rules to internalTrafficChain" in the same place.
Part of reorganizing the syncProxyRules loop to do: 1. figure out what chains are needed, mark them in activeNATChains 2. write servicePort jump rules to KUBE-SERVICES/KUBE-NODEPORTS 3. write servicePort-specific chains (SVC, SVL, EXT, FW, SEP) This moves the FW chain creation to the end (rather than having it in the middle of adding the jump rules for the LB IPs).
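Taken together, these commits move the loop toward a record-then-fill structure: mark a chain as active near the top of the loop body, and only declare and fill it near the bottom. A small, hypothetical sketch of that pattern (illustrative names and chain hashing, not the real proxier code):

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	services := []string{"ns/svc1:http", "ns/svc2:https"}

	activeNATChains := map[string]bool{} // chains this sync will keep
	natChains := &strings.Builder{}      // ":CHAIN - [0:0]" declarations
	natRules := &strings.Builder{}       // "-A ..." rules

	for _, svc := range services {
		svcChain := "KUBE-SVC-" + chainSuffix(svc)

		// Top of the loop: only *record* that the chain is in use...
		activeNATChains[svcChain] = true

		// ...and write the jump rule that dispatches to it.
		fmt.Fprintf(natRules, "-A KUBE-SERVICES -m comment --comment %q -j %s\n", svc, svcChain)

		// Bottom of the loop: now declare the servicePort-specific chain
		// and fill in its rules.
		fmt.Fprintf(natChains, ":%s - [0:0]\n", svcChain)
		fmt.Fprintf(natRules, "-A %s -j KUBE-SEP-%s\n", svcChain, chainSuffix(svc))
	}

	fmt.Print(natChains.String(), natRules.String())
}

// chainSuffix is a stand-in for the real hashed chain-name generation.
func chainSuffix(svc string) string {
	return strings.ToUpper(strings.NewReplacer("/", "", ":", "").Replace(svc))
}
```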
(force-pushed from 81c55fb to 367f18c)
/retest-required
`args` used to be an optimization - I wonder if we have accidentally pessimized by breaking apart blocks that used to share one `args` slice?
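For context, this is the kind of slice reuse in question: appending onto a truncated shared slice reuses its pre-allocated backing array, while building a fresh slice literal in each block allocates again. A minimal, hypothetical sketch (not the actual proxier code):

```go
package main

import "fmt"

func main() {
	// Shared slice with pre-allocated capacity: truncating to length 0 and
	// appending reuses the same backing array instead of allocating.
	args := make([]string, 0, 64)

	args = append(args[:0], "-A", "KUBE-SERVICES", "-m", "comment", "--comment", "svc1")
	fmt.Println(args)

	// Reusing the same backing array for the next rule: no new allocation
	// as long as the rule fits within the existing capacity.
	args = append(args[:0], "-A", "KUBE-NODEPORTS", "-p", "tcp", "--dport", "30080")
	fmt.Println(args)

	// By contrast, a fresh literal per block allocates a new slice each time.
	other := []string{"-A", "KUBE-SERVICES", "-j", "KUBE-SVC-XXX"}
	fmt.Println(other)
}
```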
Overall I see what this does and why it makes the next step possible. I find it a little harder to follow, personally, but I might just be too close to the current code. There are so many variables that are set in one place and then used hundreds of lines away.
We should look for ways to make this easier to read, still.
Thanks for the good, self-contained commits. Exemplary.
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: danwinship, thockin. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
The Kubernetes project has merge-blocking tests that are currently too flaky to consistently pass. This bot retests PRs for certain kubernetes repos according to the following rules:
You can:
/retest
3 similar comments
yeah, this is why I filed #109481
What type of PR is this?
/kind cleanup
What this PR does / why we need it:
This just moves code around in `pkg/proxy/iptables/proxier.go` with no real change to the logic of what rules we output. But the end result is that the loop in `syncProxyRules` is broken up into 3 phases:

1. figure out what chains are needed and mark them in `activeNATChains`
2. write the rules in `KUBE-SERVICES` / `KUBE-EXTERNAL-SERVICES` / `KUBE-NODEPORTS` jumping to the servicePort-specific chains
3. write the servicePort-specific chains (`KUBE-SVC-*`, `KUBE-SVL-*`, `KUBE-EXT-*`, `KUBE-FW-*`, `KUBE-SEP-*`)

A followup PR (#110268) will then change it so that we skip the third step if the servicePort in question hasn't changed since the last sync, so as to minimize the input to iptables-restore.
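As a rough illustration of that shape, here is a minimal, hypothetical skeleton of the three-phase loop; the helper functions and types are stand-ins, not the actual proxier.go code:

```go
package main

import "fmt"

// Hypothetical stand-ins for the real servicePort data and rule writers.
type servicePort struct{ name string }

func chainsFor(sp servicePort) []string {
	return []string{"KUBE-SVC-" + sp.name, "KUBE-SEP-" + sp.name}
}

func writeJumpRules(sp servicePort)         { fmt.Println("phase 2: jump rules for", sp.name) }
func writeServicePortChains(sp servicePort) { fmt.Println("phase 3: chains for", sp.name) }

func main() {
	serviceMap := []servicePort{{"FOO"}, {"BAR"}}
	activeNATChains := map[string]bool{}

	for _, sp := range serviceMap {
		// Phase 1: figure out which chains this servicePort needs and mark
		// them in activeNATChains so stale chains can be cleaned up later.
		for _, chain := range chainsFor(sp) {
			activeNATChains[chain] = true
		}

		// Phase 2: write the KUBE-SERVICES / KUBE-EXTERNAL-SERVICES /
		// KUBE-NODEPORTS rules that jump to the servicePort-specific chains.
		writeJumpRules(sp)

		// Phase 3: write the servicePort-specific chains themselves
		// (KUBE-SVC-*, KUBE-SVL-*, KUBE-EXT-*, KUBE-FW-*, KUBE-SEP-*).
		// The followup (#110268) can skip this phase for servicePorts that
		// have not changed since the last sync.
		writeServicePortChains(sp)
	}

	fmt.Println("active chains:", activeNATChains)
}
```

The value of the split is that phase 3 becomes an independent unit of work per servicePort, which is what makes it possible for the followup to skip it for unchanged servicePorts and shrink the iptables-restore input.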
Which issue(s) this PR fixes:
none
Does this PR introduce a user-facing change?
/sig network
/priority important-longterm