
"iptables-save: command not found" upgrading from v1.25.7+k3s1 to v1.26.3+k3s1 on minimized Ubuntu Server #7291

Closed
rlipscombe opened this issue Apr 16, 2023 · 13 comments

@rlipscombe

Environmental Info:
K3s Version:

Upgrading from:

k3s version v1.25.7+k3s1 (f7c20e2)
go version go1.19.6

To:

k3s version v1.26.3+k3s1 (01ea3ff)
go version go1.19.7

Node(s) CPU architecture, OS, and Version:

Ubuntu 22.04.2 LTS (minimized)
Linux roger-nuc1 5.15.0-69-generic #76-Ubuntu SMP Fri Mar 17 17:19:29 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

Cluster Configuration:

1 server, 4 agents

Describe the bug:

When upgrading from v1.25.7+k3s1 to v1.26.3+k3s1 the upgrader outputs the following:

roger@roger-nuc1:~ % curl -sfL https://get.k3s.io | K3S_URL=https://...:6443 K3S_TOKEN=K... sh -
[INFO]  Finding release for channel stable
[INFO]  Using v1.26.3+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.26.3+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.26.3+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Skipping installation of SELinux RPM
[INFO]  Skipping /usr/local/bin/kubectl symlink to k3s, already exists
[INFO]  Skipping /usr/local/bin/crictl symlink to k3s, already exists
[INFO]  Skipping /usr/local/bin/ctr symlink to k3s, already exists
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-agent-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s-agent.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s-agent.service
[INFO]  systemd: Enabling k3s-agent unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s-agent.service → /etc/systemd/system/k3s-agent.service.
sudo: iptables-save: command not found
sudo: iptables-restore: command not found

The warnings about iptables-save and iptables-restore are ... not reassuring. Are they harmful? Should the upgrader check that the commands exist first?

Normal (not minimized) Ubuntu server doesn't have this problem on my other nodes.

@dereknola
Contributor

dereknola commented Apr 17, 2023

@rbrtbnfgl maybe you could comment on whether the iptables-save/restore commands are necessary if the user never had them to begin with. AFAIK the commands are only relevant in the k3s-killall.sh script (which the curl/install script generates on install), and they were added because of a change in iptables chains from a flannel update.

@rbrtbnfgl
Contributor

Was the install script you used not the one on master?

@brandond
Contributor

brandond commented Apr 17, 2023

The idea was to add the iptables-save/iptables-restore calls to the install script because kube-router doesn't have logic in it to properly clean up old rules (this is #7251). We now just have the installer wipe out all the KUBE- rules so that K3s starts up fresh.
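
For context, that cleanup boils down to something like the following (a minimal sketch only; the KUBE- filter is illustrative and the actual script may match additional chain prefixes):

# dump the current rules, drop every line that mentions a KUBE- chain,
# and feed the remainder back in so the kube rules start from scratch
sudo iptables-save | grep -v KUBE- | sudo iptables-restore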

However, it seems like for some reason the checks that confirm the iptables-save command is available are not working in this environment? Either that, or the commands are available to the roger user but not to the sudo shell, which seems wrong.

k3s/install.sh, line 970 (at 257fa2c):

if command -v iptables-save &> /dev/null && command -v iptables-restore &> /dev/null

@rlipscombe can you confirm the output of command -v iptables-save as both your roger user, and as root?
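
Something along these lines should show it (a sketch; it assumes sudo is available on the node):

# as the regular user
command -v iptables-save; echo $?
# under root's login environment, to compare PATH handling
sudo -i sh -c 'command -v iptables-save; echo $?'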

@brandond
Contributor

It would be irritating if we had to store the path to the iptables-save and iptables-restore binaries as returned from command -v and pass in the full path to sudo, just because some systems have broken configurations where the iptables commands are in the user's path but not root's.
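
For illustration, that irritating version would look something like this (a sketch only, not what install.sh does today):

# resolve absolute paths using the invoking user's PATH
IPTABLES_SAVE=$(command -v iptables-save)
IPTABLES_RESTORE=$(command -v iptables-restore)
# then hand the full paths to sudo so root's PATH no longer matters
if [ -x "$IPTABLES_SAVE" ] && [ -x "$IPTABLES_RESTORE" ]; then
    sudo "$IPTABLES_SAVE" | grep -v KUBE- | sudo "$IPTABLES_RESTORE"
fi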

@rlipscombe
Author

rlipscombe commented Apr 18, 2023

The used install script wasn't the one on master?

It would presumably have been the one at v1.26.3+k3s1.

can you confirm the output of command -v iptables-save as both your roger user, and as root?

At this point, I've installed iptables on the relevant node, but if I install a completely bare (minimized) Ubuntu 22.04 to a VM, I can see that command -v iptables-save outputs nothing and returns exit code 1 (as roger and as root).

If I install the iptables package, then they're in /usr/sbin, and are available to both users.
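
For anyone else landing here with a minimized image, the missing binaries come from the iptables package, so the fix is simply (a sketch, assuming apt):

sudo apt-get update
sudo apt-get install -y iptables
# both should now print paths under /usr/sbin
command -v iptables-save
command -v iptables-restore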

I see that the service_enable_and_start function (since d9f40d4 -- i.e. not in v1.26.3+k3s1) now checks for iptables-save and iptables-restore. I note, however, that the generated killall script still just runs them blindly -- https://github.com/k3s-io/k3s/blob/master/install.sh#L742.

@brandond
Contributor

brandond commented Apr 18, 2023

I see that the service_enable_and_start function (since d9f40d4 -- i.e. not in v1.26.3+k3s1) now checks for iptables-save and iptables-restore

Yes, that's the version you ran when you got the error message. The install script is always served off the master branch; it is not versioned.

I am still confused as to why you're seeing it, since (as you noted) the commands should either be available to both users, or not at all.

I note, however, that the generated killall script still just runs them blindly.

That's fine, and is not the cause of the error you reported. The uninstall script should complete even if the commands are not available. They will of course leave the iptables rules behind, but I'm not sure that we can be expected to clean them up without access to the tools necessary to do so.

@rlipscombe
Author

The install script is always served off the master branch; it is not versioned.

TIL. Thanks.

@ppuschmann

ppuschmann commented Apr 19, 2023

A completely fresh install also triggers

Created symlink /etc/systemd/system/multi-user.target.wants/k3s-agent.service → /etc/systemd/system/k3s-agent.service.
sh: 972: iptables-save: not found
sh: 972: iptables-restore: not found

and exits with 127.

k3s is then not started automatically after installation.

There's nothing in https://docs.k3s.io/installation/requirements that explicitly tells me to install iptables, and I even found (possibly outdated) instructions that purged iptables and installed nftables.

So on my freshly installed machine, iptables is not installed.
What is the recommendation?

Edit:
With iptables installed, the installation finishes with:

Created symlink /etc/systemd/system/multi-user.target.wants/k3s-agent.service → /etc/systemd/system/k3s-agent.service.
/usr/sbin/iptables-save
/usr/sbin/iptables-restore
# Warning: iptables-legacy tables present, use iptables-legacy-save to see them
/usr/sbin/ip6tables-save
/usr/sbin/ip6tables-restore
[INFO]  systemd: Starting k3s-agent

@rlipscombe
Author

I also get the paths to the iptables scripts, as if the redirection to /dev/null at https://github.com/k3s-io/k3s/blob/master/install.sh#L970 isn't working.
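
One possibility worth checking (an assumption on my part; it only matters if the script runs under sh/dash, as the curl ... | sh - invocation does): &> /dev/null is a bash-ism, so a POSIX shell does not treat it as a single redirection and the output of command -v is not suppressed. A portable form of that check would be:

if command -v iptables-save > /dev/null 2>&1 && command -v iptables-restore > /dev/null 2>&1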

@rlipscombe
Author

Oh. I wonder... On Ubuntu, iptables-save (etc.) is managed via /etc/alternatives. Does command -v sometimes get tripped up by a symlink that doesn't point anywhere?
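
Something like this should tell (a sketch; the paths assume Ubuntu's usual alternatives layout):

# follow the alternatives chain and check that the final target actually exists
readlink -f /usr/sbin/iptables-save
test -x "$(readlink -f /usr/sbin/iptables-save)" && echo ok || echo dangling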

@rbrtbnfgl
Contributor

Could you try downloading the script, modifying it to remove the &> /dev/null, and running it manually?
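
i.e. change the check to something like (a sketch):

if command -v iptables-save && command -v iptables-restore

so that command -v prints whatever it finds when the script is re-run.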

@aganesh-suse

aganesh-suse commented May 1, 2023

OS: Ubuntu 22.04

To reproduce the issue above, I moved iptables-save and iptables-restore out of the way:

sudo mv /usr/sbin/iptables-restore /usr/sbin/iptables-restore-bak
sudo mv /usr/sbin/iptables-save /usr/sbin/iptables-save-bak
command -v iptables-save; echo $?
1
command -v iptables-restore; echo $?
1

When using the install script from here:
https://raw.githubusercontent.com/k3s-io/k3s/v1.27.1-rc1%2Bk3s1/install.sh

I was able to see the error lines:

sudo: iptables-restore: command not found
sudo: iptables-save: command not found

kubectl get nodes showed the previous version of k3s. A "sudo systemctl restart k3s" updated the k3s version of the nodes.

When using the install script from https://get.k3s.io, I no longer see the iptables-save and iptables-restore "command not found" lines while running upgrades.
'kubectl get nodes' shows the correct upgraded k3s version, without having to manually restart the k3s service.

We can mark this bug as verified - working as expected in the latest install script.

@ntx-ben

ntx-ben commented Nov 13, 2023

This is also breaking under Alpine Linux using v1.24.16+k3s1. I see it is now fixed (using command -v) in later versions (e.g. v1.28.1+k3s1); however, it would be great if this fix were back-ported to the earlier versions where the new iptables handling was introduced...
