Hello,

We are running Kilo on a cluster with two different node pools, one of which has the following system details:

Our Grafana dashboard shows these nodes consistently face reconciliation errors.

Kilo logs clearly point to an error in an iptables call:
{"caller":"mesh.go:262","component":"kilo","error":"failed to reconcile rules: failed to check if rule exists: failed to populate chains for table \"filter\": running [/sbin/iptables -t filter -S --wait]: exit status 1: iptables v1.8.4 (nf_tables): table `filter' is incompatible, use 'nft' tool.\n\n","level":"error","ts":"2024-06-27T14:54:04.579160946Z"}
We also see that our other nodes frequently have segmentation faults in the iptables binary, which correlate with times when kilo calls iptables:
{"caller":"mesh.go:262","component":"kilo","error":"failed to reconcile rules: failed to check if rule exists: failed to populate chains for table \"nat\": running [/sbin/iptables -t nat -S --wait]: exit status -1: ","level":"error","ts":"2024-06-27T14:17:50.343571655Z"}
The libnftnl.so.11.3.0 file is not present on the host; it only exists in containers (find located it under /var/lib/containerd/ or /run/containerd). In fact, we found this file in the kilo container at /usr/lib/libnftnl.so.11.3.0.
Curiously, running /sbin/iptables -t nat -S --wait from a shell inside the kilo container (docker.io/squat/kilo:0.6.0) works without causing a segfault or an error 🤔
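For anyone who wants to reproduce that check, something along these lines works (a sketch; the namespace, label selector, and container name are assumptions, adjust them for your deployment):

```sh
# Pick one Kilo pod and run the same iptables call Kilo itself makes
POD=$(kubectl -n kube-system get pods -l app.kubernetes.io/name=kilo -o name | head -n 1)
kubectl -n kube-system exec -it "$POD" -c kilo -- /sbin/iptables -t nat -S --wait
```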
The Kilo container image ships with iptables 1.8.4, which is a little old. iptables has seen recent updates addressing the "use 'nft' tool" error, and our other networking containers (mostly kube-router) use iptables v1.8.9. Under the hood, kube-router containers run ipset v7.17 while kilo provides ipset v7.6. Since everything runs smoothly with kube-router, I think upgrading the kilo image to ship these versions would help. Is there any reason to keep these old versions?
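To compare what each image actually bundles without touching a cluster, something like this works (a sketch; it assumes the images keep a shell, which the Alpine-based Kilo image does):

```sh
# Versions bundled in the Kilo image; repeat with your kube-router image to compare
docker run --rm --entrypoint sh docker.io/squat/kilo:0.6.0 -c 'iptables -V && ipset version'
```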
I see the kilo image relies on Alpine. Maybe bumping it to Alpine v3.18 would help? (The very latest Alpine versions seem to have a few annoying bugs; see cloudnativelabs/kube-router#1678.)
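To see what a newer Alpine base would actually provide, here is a quick check (a sketch; it needs network access to pull the packages):

```sh
# iptables/ipset versions that an Alpine 3.18 based image would ship
docker run --rm alpine:3.18 sh -c 'apk add --no-cache iptables ipset >/dev/null && iptables -V && ipset version'
```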
Thank you for the extremely detailed report. There is no particular reason for the old iptables packages; there simply haven't been any error reports about them. The package versions are old because we are using old Alpine base images. This should be easily addressed with a base image update.
@TPXP can you try the newest Kilo tag to see if that fixes your issue? The newest tag will be 0122dec8f16a61518dd02899501a8e8756387b76. Should be built soon!
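For anyone else testing, one way to point a running Kilo DaemonSet at that tag (assuming the default manifests, i.e. a DaemonSet and container both named kilo in kube-system; adjust names as needed):

```sh
# Switch the Kilo DaemonSet to the new tag and wait for the rollout
kubectl -n kube-system set image daemonset/kilo kilo=squat/kilo:0122dec8f16a61518dd02899501a8e8756387b76
kubectl -n kube-system rollout status daemonset/kilo
```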