Fix hostport duplicate chain names #55153
Conversation
/ok-to-test
@@ -198,3 +198,16 @@ func TestHostportManager(t *testing.T) {
		assert.EqualValues(t, true, port.closed)
	}
}

func TestGetHostportChain(t *testing.T) {
	m := make(map[string]int)
nit: what about `map[string]bool`?
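For what it's worth, a `map[string]bool` duplicate check reads naturally in Go. A generic sketch (the helper name and inputs here are hypothetical, not the actual test code):

```go
package main

import "fmt"

// firstDuplicate returns the first name that appears twice, or "" if all
// names are unique. A map[string]bool is enough for membership tracking;
// no counter is needed when we only care about "seen before or not".
func firstDuplicate(names []string) string {
	seen := make(map[string]bool)
	for _, name := range names {
		if seen[name] {
			return name
		}
		seen[name] = true
	}
	return ""
}

func main() {
	fmt.Println(firstDuplicate([]string{"KUBE-HP-A", "KUBE-HP-B", "KUBE-HP-A"}))
}
```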
@@ -247,7 +248,7 @@ func (hm *hostportManager) closeHostports(hostportMappings []*PortMapping) error
// WARNING: Please do not change this function. Otherwise, HostportManager may not be able to
// identify existing iptables chains.
Does that mean that by changing the chain name, we are orphaning the old chain/rule? Is that acceptable, or is there any way to work around it?
@kubernetes/sig-network-pr-reviews
Add a commit to clean up these old chains/rules as well.
In fact, having thought about this again, I wonder why changing this function would be an issue. Can't we assume that upon node/kubelet upgrade, all iptables rules will not be retained because the node has been restarted? Am I suggesting an unnecessary cleanup? Would be great to have guidance from @thockin and @freehan
> Can't we assume upon node/kubelet upgrade, all iptables rules will not be retained because the node has been restarted?

😕 Is this true? I don't think everyone restarts the node upon upgrade. Is this a required step of upgrading to the next release or something?
At least that is what happens with gce/upgrade.sh :)
I'm not aware of any supported per-system-component upgrade mechanism in k8s yet; not sure if someone already supports that...
I think this is better. It's low cost.
/retest
Thanks, seems to be a viable solution. Some comments about testing.
}

// TODO remove this, please refer https://github.com/kubernetes/kubernetes/pull/55153
func getBugyHostportChain(id string, pm *PortMapping) utiliptables.Chain {
Might be worth also adding a one-line description of the issue?
"-A KUBE-HP-63UPIDJXVRSZGSUZ -m comment --comment \"pod1_ns1 hostport 8081\" -s 10.1.1.2/32 -j KUBE-MARK-MASQ": true,
"-A KUBE-HP-63UPIDJXVRSZGSUZ -m comment --comment \"pod1_ns1 hostport 8081\" -m udp -p udp -j DNAT --to-destination 10.1.1.2:81": true,
"-A KUBE-HP-WFBOALXEP42XEMJK -m comment --comment \"pod3_ns1 hostport 8443\" -s 10.1.1.4/32 -j KUBE-MARK-MASQ": true,
"-A KUBE-HP-WFBOALXEP42XEMJK -m comment --comment \"pod3_ns1 hostport 8443\" -m tcp -p tcp -j DNAT --to-destination 10.1.1.4:443": true,
Could we validate the cleanup logic in a unit test so that we have more confidence?
@@ -142,7 +143,7 @@ func writeLine(buf *bytes.Buffer, words ...string) {
 // this because IPTables Chain Names must be <= 28 chars long, and the longer
 // they are the harder they are to read.
 func hostportChainName(pm *PortMapping, podFullName string) utiliptables.Chain {
-	hash := sha256.Sum256([]byte(string(pm.HostPort) + string(pm.Protocol) + podFullName))
+	hash := sha256.Sum256([]byte(strconv.Itoa(int(pm.HostPort)) + string(pm.Protocol) + podFullName))
For hostport_syncer, is the cleanup logic originally baked in? Might be great to verify it in a unit test as well.
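The underlying bug is that `string(pm.HostPort)` interprets the int32 as a Unicode code point, and surrogate code points (0xD800–0xDFFF) all encode as the replacement character U+FFFD, so their hash inputs collide. A standalone sketch of the old and new conversions (the `KUBE-HP-` prefix and base32 truncation mimic the real chain-name style but are assumptions here, not the exact upstream code):

```go
package main

import (
	"crypto/sha256"
	"encoding/base32"
	"fmt"
	"strconv"
)

// buggyChainName mimics the old code: converting the port number to a rune.
// Ports in the surrogate range all collapse to "\uFFFD", so distinct ports
// can hash to the same chain name.
func buggyChainName(port int32, proto, pod string) string {
	hash := sha256.Sum256([]byte(string(rune(port)) + proto + pod))
	return "KUBE-HP-" + base32.StdEncoding.EncodeToString(hash[:])[:16]
}

// fixedChainName mimics the fix: strconv.Itoa yields the decimal digits,
// so every port produces a distinct hash input.
func fixedChainName(port int32, proto, pod string) string {
	hash := sha256.Sum256([]byte(strconv.Itoa(int(port)) + proto + pod))
	return "KUBE-HP-" + base32.StdEncoding.EncodeToString(hash[:])[:16]
}

func main() {
	// 57119, 55429 and 56833 are the colliding ports from the PR description;
	// all three are UTF-16 surrogate code points.
	for _, p := range []int32{57119, 55429, 56833} {
		fmt.Printf("%d buggy=%s fixed=%s\n",
			p, buggyChainName(p, "tcp", "pod1_ns1"), fixedChainName(p, "tcp", "pod1_ns1"))
	}
}
```

Running this prints an identical `buggy=` name for all three ports while the `fixed=` names differ.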
@MrHohn Addressed all your comments, PTAL
assert.True(t, ok)
// check KUBE-HOSTPORTS chain should be cleaned up
hostportChain, ok := natTable.chains["KUBE-HOSTPORTS"]
assert.True(t, ok, "%s %v", string(hostportChain.name))
nit: `assert.True(t, ok, "%s", string(hostportChain.name))`
assert.True(t, ok)
// check pod1's rules in KUBE-HOSTPORTS chain should be cleaned up
hostportChain, ok := natTable.chains["KUBE-HOSTPORTS"]
assert.True(t, ok, "%s %v", string(hostportChain.name))
nit: `assert.True(t, ok, "%s", string(hostportChain.name))`
@@ -274,6 +275,11 @@ func (f *fakeIPTables) restore(restoreTableName utiliptables.Table, data []byte,
		}
	}
	_, _ = f.ensureChain(tableName, chainName)
	if !strings.Contains(allLines, "-X "+string(chainName)) {
		if err := f.FlushChain(tableName, chainName); err != nil {
Sorry, I don't quite get the logic here. Why should we flush the chain when there is no `-X CHAIN_NAME`? An explanation would be appreciated.
I think I should update this to just flush user-defined chains.
The --noflush option for iptables-restore doesn't work for user-defined chains such as TESTCHAIN, only builtin chains. https://unix.stackexchange.com/questions/134687/how-to-combine-iptables-rulesets
I did a test to confirm this. In the example below, the POSTROUTING chain didn't get flushed, but the KUBE-NODEPORT chain did.
[root@kubernetes-master vagrant]# iptables -t nat -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N KUBE-HP-5N7UH5JAXCVP5UJR
-N KUBE-NODEPORT
-A POSTROUTING -p tcp -m comment --comment "pod3_ns1 hostport 8443" -m tcp --dport 8443 -j KUBE-HP-5N7UH5JAXCVP5UJR
-A KUBE-NODEPORT -p tcp -m comment --comment "pod3_ns1 hostport 8443" -m tcp --dport 8443 -j KUBE-HP-5N7UH5JAXCVP5UJR
[root@kubernetes-master vagrant]# cat nat.log
# Generated by iptables-save v1.4.21 on Mon Nov 13 03:14:59 2017
*nat
:PREROUTING ACCEPT [65:3900]
:INPUT ACCEPT [65:3900]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:KUBE-HP-5N7UH5JAXCVP5UJR - [0:0]
:KUBE-NODEPORT - [0:0]
COMMIT
# Completed on Mon Nov 13 03:14:59 2017
[root@kubernetes-master vagrant]# iptables-restore --table=nat --noflush < nat.log
[root@kubernetes-master vagrant]# iptables -t nat -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N KUBE-HP-5N7UH5JAXCVP5UJR
-N KUBE-NODEPORT
-A POSTROUTING -p tcp -m comment --comment "pod3_ns1 hostport 8443" -m tcp --dport 8443 -j KUBE-HP-5N7UH5JAXCVP5UJR
And this behavior is needed to cleanup rules in KUBE-NODEPORT in my tests.
Thanks for the investigation, now I get it. Interesting that this behavior is not documented in the iptables manual.
Though I just learned that user-defined chains won't always be flushed; they are only flushed when explicitly mentioned in the input of iptables-restore --noflush, as the comment below refers to:
kubernetes/pkg/kubelet/network/hostport/hostport_syncer.go
Lines 263 to 267 in f281404
// We must (as per iptables) write a chain-line for it, which has
// the nice effect of flushing the chain. Then we can remove the
// chain.
writeLine(natChains, existingNATChains[chain])
writeLine(natRules, "-X", chainString)
In short, I believe your fix will work as expected, but the implementation of fakeIPTables.restore() still seems problematic, as it flushes more than it should...
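The restore behavior observed above can be boiled down to a small predicate. This is a sketch of the semantics being discussed (not the fake's actual code, and the function name is hypothetical):

```go
package main

import "fmt"

// builtinNATChains are the chains iptables itself creates in the nat table.
var builtinNATChains = map[string]bool{
	"PREROUTING": true, "INPUT": true, "OUTPUT": true, "POSTROUTING": true,
}

// flushedOnRestore reports whether a chain that appears as a ":CHAIN" line
// in the iptables-restore input ends up flushed. Without --noflush the whole
// table is flushed; with --noflush, builtin chains keep their rules, but
// user-defined chains listed in the input are still flushed.
func flushedOnRestore(chain string, noFlush bool) bool {
	if !noFlush {
		return true
	}
	return !builtinNATChains[chain]
}

func main() {
	fmt.Println(flushedOnRestore("POSTROUTING", true))   // builtin chain keeps its rules
	fmt.Println(flushedOnRestore("KUBE-NODEPORT", true)) // user-defined chain is flushed
}
```

This matches the transcript above: after `iptables-restore --noflush`, the POSTROUTING rule survived while the KUBE-NODEPORT rule was gone.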
Oops, I scanned too fast; looks like that is exactly what you have implemented. Could you update it to just flush user-defined chains? Thanks.
Done. Also added a test TestRestoreFlushRules for fakeIPTables.restore().
Awesome, thanks!
Force-pushed from b4da86d to 58f44a9 (compare)
Thanks, appreciate your work!
/lgtm
This might be worth a release note. /assign @thockin
@@ -82,6 +82,7 @@ type Table string
const (
	TableNAT    Table = "nat"
	TableFilter Table = "filter"
	TableMangle Table = "mangle"
Why is this part of the same PR? Seems unrelated?
It's used by NewFakeIPTables to cache builtin chains.
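For reference, a fake that caches builtin chains per table might start from a map like the following. The chain lists are the real iptables builtins for each table; whether NewFakeIPTables stores them in exactly this shape is an assumption:

```go
package main

import "fmt"

// builtinChains lists the chains each iptables table provides out of the box,
// so a fake can pre-create them the way the kernel does.
var builtinChains = map[string][]string{
	"nat":    {"PREROUTING", "INPUT", "OUTPUT", "POSTROUTING"},
	"filter": {"INPUT", "FORWARD", "OUTPUT"},
	"mangle": {"PREROUTING", "INPUT", "FORWARD", "OUTPUT", "POSTROUTING"},
}

func main() {
	for table, chains := range builtinChains {
		fmt.Printf("%s: %d builtin chains\n", table, len(chains))
	}
}
```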
@@ -177,6 +178,8 @@ func (hm *hostportManager) Remove(id string, podPortMapping *PodPortMapping) (er
	chainsToRemove := []utiliptables.Chain{}
	for _, pm := range hostportMappings {
		chainsToRemove = append(chainsToRemove, getHostportChain(id, pm))
		// TODO remove this, please refer https://github.com/kubernetes/kubernetes/pull/55153
		chainsToRemove = append(chainsToRemove, getBugyHostportChain(id, pm))
s/Bugy/Buggy/
@@ -177,6 +178,8 @@ func (hm *hostportManager) Remove(id string, podPortMapping *PodPortMapping) (er
	chainsToRemove := []utiliptables.Chain{}
	for _, pm := range hostportMappings {
		chainsToRemove = append(chainsToRemove, getHostportChain(id, pm))
		// TODO remove this, please refer https://github.com/kubernetes/kubernetes/pull/55153
... after release 1.9
This needs an associated issue and a
/lgtm
Comments are addressed and an associated issue is created. /lgtm
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: chenchun, MrHohn, thockin
Associated issue: 55771
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these OWNERS files.
You can indicate your approval by writing
Automatic merge from submit-queue (batch tested with PRs 54436, 53148, 55153, 55614, 55484). If you want to cherry-pick this change to another branch, please follow the instructions here.
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here.

remove duplicate function getBuggyHostportChain

**What this PR does / why we need it**: removes the `TODO remove this after release 1.9, please refer #55153` block. The function `getBuggyHostportChain` does a bad conversion of HostPort from int32 to string; now that `getHostportChain` does it right, we remove `getBuggyHostportChain`.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*: Fixes #

**Special notes for your reviewer**:

**Release note**:
```release-note
NONE
```
Fixes bad conversion from int32 to string. Without this patch, getHostportChain/hostportChainName generates the same chain names for ports 57119/55429/56833 of the same pod.
closes #55771