iptables are spammed with entries in the KUBE-FIREWALL chain #82361
/sig network
@caseydavenport Any ideas on this one?
/triage unresolved
🤖 I am a bot run by vllry. 👩‍🔬
/assign @danwinship
What version of iptables do you have installed, and if 1.8.x, is it running in legacy or nft mode? (If it's iptables 1.8.1, try upgrading to 1.8.2 or 1.8.3.)
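Since iptables 1.8.x prints its active backend in the version banner, the mode can be checked from the output of `iptables --version`. A minimal sketch, where the banner string is a stand-in for the real command's output (so it runs without iptables installed):

```shell
# Stand-in for `iptables --version`; on 1.8.x the active backend is
# printed in parentheses after the version number.
version_banner="iptables v1.8.2 (nf_tables)"

case "$version_banner" in
  *"(nf_tables)"*) mode="nft" ;;
  *"(legacy)"*)    mode="legacy" ;;
  *)               mode="legacy (pre-1.8 has no nft backend)" ;;
esac
echo "iptables is running in $mode mode"
```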
I'm running into a similar issue in an environment that runs CentOS 7. I can provide additional information from my side if needed. I am also using flannel without canal.
@mnaser as Dan said above, the iptables version information would be helpful. It would also help if you could check whether other chains/rules are having this problem, or only the KUBE-FIREWALL chain.
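One way to check that is to look for duplicated rule lines per chain in an `iptables-save` dump. A sketch with a fabricated two-rule sample; on a real node, pipe `iptables-save` in instead of the sample variable:

```shell
# Fabricated sample dump; replace with: iptables-save
sample_dump='-A KUBE-FIREWALL -m comment --comment "drop marked" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-FIREWALL -m comment --comment "drop marked" -m mark --mark 0x8000/0x8000 -j DROP
-A INPUT -j KUBE-FIREWALL'

# Print "<count>x <chain>" for every rule line that appears more than once.
printf '%s\n' "$sample_dump" | sort | uniq -c | awk '$1 > 1 {print $1 "x " $3}'
```

If only `KUBE-FIREWALL` shows up in the output, the duplication is limited to the kubelet-managed rule discussed in this issue.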
This is happening to 6 of my nodes, all Debian buster (3 are masters, 3 are workers), on a freshly kubeadm-bootstrapped v1.15.3 cluster w/ Cilium 1.6, with no kube-proxy.
Oh, and
You’ll want to use iptables in legacy mode instead:
#71305 has more details.
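On Debian, switching to the legacy backend is done through `update-alternatives`. A sketch of the workaround, using the standard buster paths; it is guarded so it only prints the plan unless `APPLY=1` is set, since the real commands require root:

```shell
# Print (or, with APPLY=1, actually run) each command.
run() { if [ "${APPLY:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi; }

run update-alternatives --set iptables /usr/sbin/iptables-legacy
run update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
# Afterwards, restart kubelet (and kube-proxy, if used) so rules are
# recreated with the legacy backend, and flush any stale nft rules.
```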
We have the same versions as @bradfitz. We will now try the workaround @praseodym suggested and observe further.
I'm not sure how using the wrong mode would cause this, unless nft mode was implementing "
@danwinship, see #71305 for the details. In a nutshell, AIUI: there are a mix of iptables binaries being run in different modes, of which your host iptables is only one.
Can we close this as a dup of #71305?
@thockin we have applied the workaround and are using iptables-legacy right now. The issue with duplicate rules has not occurred so far. However, as far as I understand, iptables-nft should be supported in the future, and I'm not quite sure whether this issue is a result of using both (legacy & nft) at the same time or a bug in iptables-nft.
I am familiar with #71305. But the iptables rule in question here is only created by kubelet, so it wouldn't be affected by kube-proxy using the wrong mode. Even if you were running containerized kubelet and kubelet was using the wrong mode, it still wouldn't fail in this way. It looks like in iptables-nft 1.8.2,
as compared with iptables-legacy or iptables-nft 1.8.3:
Still, this is the opposite of the bug that would be needed to cause the infinite rule propagation problem... Given that switching to iptables-legacy fixed the problem, though, it seems like this is an iptables-nft bug.
What we could observe was that all (?) cali*-chains were held in
This also caused a warning when doing
@caseydavenport @danwinship @bowei Do we need to have a special sig-net call to strategize on this? It's suddenly feeling a LOT more urgent...
OK, I was testing the wrong rule above: this does break with iptables-nft 1.8.2:
So if you are using iptables 1.8.1 or 1.8.2 in nft mode, you will get infinite firewall drop rules. (It is fixed in 1.8.3.) This is not a mixing-nft-and-legacy bug; it's just "iptables 1.8.1-1.8.2 has bugs that make it not work with kubernetes".
I assume you meant
Unfortunately, Debian buster doesn't yet seem to have updated to 1.8.3 or backported the fix to 1.8.2:
So the TL;DR here is that Debian buster is just broken for Kubernetes without switching to the iptables-legacy alternative.
@wjentner for Calico to use the nft backend you need to explicitly activate it with
@danwinship If you aren't able to handle this issue, consider unassigning yourself and/or adding the
🤖 I am a bot run by vllry. 👩‍🔬
/remove-triage unresolved |
@wjentner I see you have filed a bug with Debian. I don't really know Debian process very well... do you know what's likely to happen next? Is there some reason 1.8.3 is only in "buster-backports"? Is that like a temporary testing stage and the fixed package will eventually make its way into buster? |
@danwinship I don't know either. So far I have not received any reply. I also don't know why 1.8.3 is not in the stable release yet, but to cite backports.debian.org:
Source: https://backports.debian.org/
Right now we are using the iptables-legacy workaround and so far have not experienced any further problems.
Oh, also, while iptables 1.8.2 continues to be problematic, the specific
@danwinship excellent, thank you!
1.17 |
/close
This should be fixed now that @danwinship's PR #82966 was merged.
@aojea: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Actually, no. #71305 was fixed by #82966, but this bug was fixed shortly pre-1.17 by #81517, so it probably should have been closed anyway. It's not fixed in 1.16 and earlier, and #81517 is large enough to be dubious as a backport. I had originally wanted to at least commit a warning to older branches, except that it's tricky to do because you can't just check the iptables version number or you'll get a false positive on RHEL 8. 🙁 I guess if we get more reports of people hitting this on older releases we can worry about doing something there.
Due to [1] we need to deploy iptables from backports on Buster, to avoid extremely long and repetitive iptables chains/rules that affect performance. [1] kubernetes/kubernetes#82361 Bug: T287238 Change-Id: I7c321c6988fe4a2009d50bc51bf38f1dac53137b
TL;DR:
iptables versions 1.8.1 and 1.8.2 have a bug causing rules to be infinitely added to the KUBE-FIREWALL chain. These iptables versions are currently in use in Debian buster.

Workaround:
Change alternatives to use iptables-legacy instead of iptables-nft.
Also, be aware of: #71305

Solution:
Update the iptables package to >=1.8.3. At the time of writing, this version is not distributed in the Debian stable package list but is available in buster-backports. There is no backported fix for 1.8.2 or 1.8.1 available. A bug report has already been filed. Thanks to @danwinship, the next Kubernetes version (1.17) will contain a fix for the particular KUBE-FIREWALL spamming bug. See #81517 for more details.
Original description:
What happened: Our master node is spammed with iptables entries in the KUBE-FIREWALL chain, which eventually increases the load.
Showing the first few entries:
Right now we have 8576 entries. Before removing them the first time we had >30k entries. While observing the iptables we learned that every 1-2 seconds a new rule is added.
What you expected to happen:
No duplicate entries.
How to reproduce it (as minimally and precisely as possible): We use a single-node environment for testing, set up with kubeadm. We have upgraded several times.
Anything else we need to know?:
Environment:
- Kubernetes version (kubectl version):
- OS (cat /etc/os-release):
- Kernel (uname -a):
- Network plugin: canal