Another app is currently holding the xtables lock. Perhaps you want to use the -w option? #3828
Comments
The whole log is that one line? Generally iptables commands in Weave Net are run with the -w flag.
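(As an illustration of what that flag changes, a minimal sketch with a placeholder FORWARD rule; the actual rules Weave installs will differ:)

```sh
# Without -w, the command fails immediately if another process holds /run/xtables.lock:
#   "Another app is currently holding the xtables lock. Perhaps you want to use the -w option?"
iptables -A FORWARD -m comment --comment "placeholder rule" -j ACCEPT

# With -w, it blocks (here up to 5 seconds) until the lock is released
iptables -w 5 -A FORWARD -m comment --comment "placeholder rule" -j ACCEPT
```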
Yes, that was the full log. Indeed, the auto-detection seems to choose legacy:
The detection command returns the following results:
Running the
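For reference, a minimal sketch of the usual legacy-vs-nft auto-detection approach (count the rules each backend reports and prefer the busier one); this is an illustration, not necessarily the exact logic Weave ships:

```sh
# Count the rules visible to each iptables backend (iptables >= 1.8 provides both tools);
# the backend with more entries is most likely the one the rest of the system is using.
legacy_rules=$(iptables-legacy-save 2>/dev/null | grep -c '^-')
nft_rules=$(iptables-nft-save 2>/dev/null | grep -c '^-')

if [ "$legacy_rules" -ge "$nft_rules" ]; then
  echo "legacy"
else
  echo "nft"
fi
```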
I use
Could you try setting IPTABLES_BACKEND?
I just tried setting it. The weave-npc pod crash-loops with an error similar to #3816.
When I set the IPTABLES_BACKEND env back to legacy, weave starts.
I have found some code lines in launch.sh: some iptables commands run outside of the weave application, within that script. This could be the reason for the empty log file.
Thanks @nesc58 - you are agreeing with what I said at #3828 (comment)
Sorry, my mistake; we were talking about different parts of the script.
@Congelli501 I meant to try setting it to the correct one for your machine. Sounds like "legacy" was correct.
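For anyone hitting the same problem, one way to pin the backend explicitly instead of relying on auto-detection (assuming the stock weave-net DaemonSet in kube-system; adjust the names to your manifest):

```sh
# Force the iptables backend for the weave DaemonSet and wait for the rollout
kubectl -n kube-system set env daemonset/weave-net IPTABLES_BACKEND=legacy
kubectl -n kube-system rollout status daemonset/weave-net
```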
I don't expect the parameter will be added. The man pages of
A lot of the
Ah, thanks.
It wouldn't be wrong, but the
Closing as fixed by #3835
What you expected to happen?
Weave starts normally on the first try.
What happened?
On a busy cluster (pods waiting to be created on a node), weave takes a lot of tries to start (it fails at start and goes into CrashLoopBackOff status).
The "weave" container of the node's weave pod shows the following log:
After killing the weave pod many times (to retry after a CrashLoopBackOff), weave eventually starts up and then runs normally.
On one of my nodes, it took 20 minutes to get weave to start.
How to reproduce it?
Set up a busy Kubernetes node (100 pods) and reboot it. The node will start with a lot of assigned pods waiting to start and won't be able to start weave.
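To watch the failure after the reboot, something along these lines works (assuming the default weave-net labels and the kube-system namespace):

```sh
# Watch the weave pod on the rebooted node cycle through CrashLoopBackOff
kubectl -n kube-system get pods -l name=weave-net -o wide -w

# Read the log of the previous (crashed) "weave" container of that pod
kubectl -n kube-system logs <weave-pod-name> -c weave --previous
```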
Anything else we need to know?
This bug occurred after a cluster upgrade (reinstall).
I didn't have this issue on my previous cluster (Ubuntu 18.04 amd64, Kubernetes v1.13.11, Weave 2.5.2).
Cloud provider: bare metal
Versions:
Logs:
Network:
Seems ok, same network as with our previous cluster.
Troubleshooting?
It seems the problem comes from iptables being locked (in use) by other Kubernetes processes.
I saw a lot of portmap commands being run (sometimes 100+ commands) by kubelet.
Instead of retrying on an iptables lock error or waiting for its turn (with the --wait option), it crashes, which makes it hard to get weave started (the iptables command needs to succeed on the first try). I can't explain why my previous cluster was working fine, though.
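One hedged way to confirm the contention while the pod is crash-looping (tool availability and the lock path can vary by distribution):

```sh
# List processes currently running iptables or the CNI portmap plugin, any of which may hold the lock
pgrep -af 'iptables|portmap'

# The xtables lock file itself (taken via flock by each iptables invocation)
ls -l /run/xtables.lock
```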