kube proxy (1.21.2): unknown option --random-fully #1229
I'm confused, does this only happen with cilium + kube-proxy ipvs mode? You said it did, but we have tested canal and iptables mode on RHEL8 and not seen any issues.
I'm also at a loss as to why kube-proxy would be attempting to use --random-fully; this is supposed to be auto-detected and only enabled if the detected iptables version is >= 1.6.2: https://github.com/kubernetes/kubernetes/blob/master/pkg/util/iptables/iptables.go#L164-L166 Either way, I believe this is going to be an upstream issue and not RKE2-specific, as we're just packaging the upstream kube-proxy.
I did some tests and it doesn't seem to be related to the CNI being used or to the kube-proxy mode, because I found the same error with both canal + iptables mode (the RKE2 standard setup) and cilium + ipvs mode (my setup).
Do you have something else on your host that's adding rules with --random-fully? As I mentioned earlier, we first qualified RHEL8 in #16 and it has been part of our QA matrix ever since, and we haven't seen this issue.
Maybe something related to ip6tables.

Testbed: single-node (server) setup (RKE2 standard: canal + kube-proxy iptables mode)
Vagrant box: bento/centos-8 (virtualbox, 202105.25.0), 4 GB RAM, 2 vCPU

sh-4.2# iptables -A POSTROUTING -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully
iptables: No chain/target/match by that name.
sh-4.2#
sh-4.2# ip6tables -A POSTROUTING -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully
ip6tables v1.8.5 (nf_tables): unknown option "--random-fully"

This seems like the root cause - this version of iptables should support --random-fully. What do you get from
That test was made inside the kube-proxy container.
What do you get if you run those two commands on your host?
According to my investigation, when using iptables nft mode, we require v1.8.6 or greater to use --random-fully.
The fix for this will need backports to the 1.19, 1.20, and 1.21 release branches as well @manuelbuil
After updating the version the result is the same, so my theory was wrong. ip6tables-nft does not include the --random-fully flag, for unknown reasons, when built with buildroot. If I build iptables from source it works, and when comparing with how buildroot builds it, the process looks exactly the same.
iptables 1.8.5 seems to support ip6tables nft + random-fully. I also tried to compile iptables from source and it works as expected:

centos 7 (3.10.0-1160.31.1.el7.x86_64) ---> ok
centos 8 (4.18.0-305.7.1.el8_4.x86_64) ---> ok

simple run of
Thanks for checking @bovy89! According to our latest investigation, the issue is in iptables itself when building statically: it does not call the function init_extensions6() for the case that defined(NO_SHARED_LIBS) is set. We will create a patch today to verify this.
Bug report in netfilter: https://bugzilla.netfilter.org/show_bug.cgi?id=1550
The patch will probably get more attention if you send it to the netfilter-devel mailing list: http://vger.kernel.org/vger-lists.html#netfilter-devel They probably don't accept patches attached to Bugzilla, and they would expect a commit message. But if you are not comfortable with using /cc @erikwilson
An iptables patch has been submitted and accepted, thanks for the info @vadorovsky!
Both on:
* static-pod (kubernetes image)
* helm chart (kube-proxy image)

This new image fixes the bug described in rancher#1229
Signed-off-by: manuel <manuel@suse.com>
Hi, is there any chance of getting that fixed in 1.21?
It should be fixed in 1.21.3. Can you please check? Thanks!
Seems to be fixed, thanks.
Validated on master branch commit
Environmental Info:
RKE2 Version:
rke2 version v1.21.2+rke2r1 (d58ad61)
go version go1.16.4b7
Node(s) CPU architecture, OS, and Version:
Cluster Configuration:
Describe the bug:
see full log below
Steps To Reproduce:
I was able to reproduce this error on a simple RKE2 setup (RPM install with canal and kube-proxy iptables mode).
Additional context / logs:
kube-proxy ipvs mode logs:
kube-proxy iptables mode logs: