"iptables-save: command not found" upgrading from v1.25.7+k3s1 to v1.26.3+k3s1 on minimized Ubuntu Server #7291
@rbrtbnfgl maybe you could comment on whether the iptables-save/restore commands are necessary if the user never had them to begin with. AFAIK the commands are only relevant in the…
Wasn't the install script you used the one from master?
The idea was to add them to the install script because kube-router doesn't have logic in it to properly clean up old rules (this is #7251). We now just have the installer wipe out all the KUBE- rules so that K3s starts up fresh. However, it seems that for some reason the checks to confirm that the… (Line 970 in 257fa2c)
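The "wipe out all the KUBE- rules" approach can be sketched roughly as below. This is an illustrative sketch only, not the actual install.sh code; the `strip_kube_rules` name and the grep pattern are my own.

```shell
#!/bin/sh
# Illustrative sketch (NOT the real k3s install.sh logic): filter an
# iptables-save dump so that KUBE- chain declarations and any rules
# referencing KUBE- chains are dropped, leaving everything else.
strip_kube_rules() {
    grep -v 'KUBE-'
}

# Intended use on a real node (requires both tools on PATH):
#   iptables-save | strip_kube_rules | iptables-restore
```

The point of the thread is that both `iptables-save` and `iptables-restore` must exist on the host for a pipeline like this to work at all, which a minimized Ubuntu image does not guarantee.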
@rlipscombe can you confirm the output of…
It would be irritating if we had to store the paths to the iptables-save and iptables-restore binaries as returned from…
It would presumably have been the one at v1.26.3+k3s1.
At this point, I've installed iptables on the relevant node. But if I install a completely bare (minimized) Ubuntu 22.04 in a VM, I can see that… If I install the… I see that the…
Yes, that's the version you ran when you got the error message. The install script is always served off the master branch; it is not versioned. I am still confused as to why you're seeing it, since (as you noted) the commands should either be available to both users, or not at all.
That's fine, and is not the cause of the error you reported. The uninstall script should complete even if the commands are not available. They will of course leave the iptables rules behind, but I'm not sure that we can be expected to clean them up without access to the tools necessary to do so.
TIL. Thanks.
But also a completely fresh install triggers… and exits with…
In https://docs.k3s.io/installation/requirements there's nothing that tells me explicitly to install… So on my freshly installed machine there are no…

edit: …
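For anyone hitting the same wall: a quick way to check whether the two tools the installer warns about are present on a node. The loop and messages are my own illustration; on Ubuntu/Debian the `iptables` package ships both binaries.

```shell
#!/bin/sh
# Check for the two tools the installer warns about. The hint in the
# "missing" branch assumes Ubuntu/Debian packaging, where the single
# "iptables" package provides both iptables-save and iptables-restore.
for t in iptables-save iptables-restore; do
    if command -v "$t" >/dev/null 2>&1; then
        echo "$t: found at $(command -v "$t")"
    else
        echo "$t: missing (on Ubuntu: sudo apt-get install iptables)"
    fi
done
```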
I also get the paths to the iptables scripts, as if the redirection to…
Oh. I wonder... On Ubuntu,…
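On Ubuntu, iptables is indeed managed through the alternatives system (the nft vs. legacy backends), which can be inspected like this. A sketch only; the `/usr/sbin` path is the typical Debian/Ubuntu location and may differ elsewhere.

```shell
#!/bin/sh
# Inspect how Ubuntu resolves iptables via update-alternatives.
# Paths are typical for Debian/Ubuntu; adjust for other distros.
update-alternatives --query iptables 2>/dev/null | head -n 5

# Resolve the symlink chain to see which backend is actually in use:
readlink -f /usr/sbin/iptables 2>/dev/null || echo "iptables not installed"
```

On a minimized image the second line prints "iptables not installed", which is consistent with the installer's warnings.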
Could you try to download the script and then modify it, removing the…
OS: Ubuntu 22.04

As a workaround to reproduce the issue above, I moved iptables-save and iptables-restore aside:
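The "move the binaries aside" reproduction can be sketched as below. To keep it runnable without root, the demo operates on a temp file, but the same `mv` pattern applies to the real `/usr/sbin` paths; the helper names are my own.

```shell
#!/bin/sh
# Sketch of the reproduction step: hide a tool so PATH lookups fail,
# then restore it. hide_tool/restore_tool are illustrative helpers.
hide_tool()    { mv "$1" "$1.bak"; }
restore_tool() { mv "$1.bak" "$1"; }

# On a real node (requires root) this would look like:
#   hide_tool /usr/sbin/iptables-save
#   hide_tool /usr/sbin/iptables-restore
#   ... run the k3s install script, observe the warnings ...
#   restore_tool /usr/sbin/iptables-save
#   restore_tool /usr/sbin/iptables-restore
```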
When using the install script from here: … I was able to see the error lines:

kubectl get nodes showed the previous version of k3s; a "sudo systemctl restart k3s" updated the k3s version on the nodes. When using the install script from here: … We can mark this bug as verified: working as expected with the latest install script.
This is also breaking under Alpine Linux using v1.24.16+k3s1. I see it is now fixed (using…).
Environmental Info:
K3s Version:
Upgrading from:
k3s version v1.25.7+k3s1 (f7c20e2)
go version go1.19.6
To:
k3s version v1.26.3+k3s1 (01ea3ff)
go version go1.19.7
Node(s) CPU architecture, OS, and Version:
Ubuntu 22.04.2 LTS (minimized)
Linux roger-nuc1 5.15.0-69-generic #76-Ubuntu SMP Fri Mar 17 17:19:29 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Cluster Configuration:
1 server, 4 agents
Describe the bug:
When upgrading from v1.25.7+k3s1 to v1.26.3+k3s1 the upgrader outputs the following:
The warnings about iptables-save and iptables-restore are ... not reassuring. Are they harmful? Should the upgrader check that the commands exist first?
Normal (not minimized) Ubuntu server doesn't have this problem on my other nodes.
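A guard along these lines would make the cleanup a quiet no-op on hosts without the binaries. This is a sketch of the suggestion in the bug report, not the actual installer code; the messages are my own.

```shell
#!/bin/sh
# Sketch: only attempt the iptables cleanup when both tools resolve
# on PATH, so a minimized image skips it instead of printing
# "command not found" warnings.
if command -v iptables-save >/dev/null 2>&1 &&
   command -v iptables-restore >/dev/null 2>&1; then
    echo "iptables tools present; cleanup would run here"
else
    echo "iptables-save/iptables-restore not found; skipping cleanup"
fi
```

`command -v` is the POSIX-portable way to test for a command's existence, which avoids depending on `which` being installed.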