
Consider cleaning up the kube-proxy-clean-up init container #122

Open
ialidzhikov opened this issue Aug 19, 2022 · 4 comments · May be fixed by #124
Labels
- area/networking: Networking related
- kind/cleanup: Something that is not needed anymore and can be cleaned up
- lifecycle/rotten: Nobody worked on this for 12 months (final aging stage)

Comments

@ialidzhikov
Member

ialidzhikov commented Aug 19, 2022

How to categorize this issue?

/area networking
/kind cleanup

What would you like to be added:
c92b7ad introduces two init containers: cilium-kube-proxy-clean-up and kube-proxy-clean-up. Depending on the value of .Values.kubeProxyCleanup, one of the two init containers is rendered.
As far as I can see, .Values.kubeProxyCleanup is always set to cilium-documentation:

# Specifies whether to use the approach documented by cilium or kube-proxy to cleanup the iptables from kube-proxy
kubeProxyCleanup: cilium-documentation

which means that the first init container (cilium-kube-proxy-clean-up) is always the one rendered and used.

Currently I don't see when and how the kube-proxy-clean-up init container would be picked up and used.
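
For illustration, the selection presumably happens via a conditional in the DaemonSet template along these lines (a minimal sketch; only the container names and the value are from the chart, the template structure itself is assumed):

initContainers:
{{- if eq .Values.kubeProxyCleanup "cilium-documentation" }}
# Rendered for the (apparently always used) default value:
- name: cilium-kube-proxy-clean-up
  # ...
{{- else }}
# Dead branch today: no exposed way to select this value at runtime.
- name: kube-proxy-clean-up
  # ...
{{- end }}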

Why is this needed:
There is no need for an init container that is never rendered or used.

@gardener-robot gardener-robot added area/networking Networking related kind/cleanup Something that is not needed anymore and can be cleaned up labels Aug 19, 2022
@ialidzhikov
Member Author

@ScheererJ can you confirm my observation that this init container can be safely removed?

@ialidzhikov changed the title from "Consider cleaning up the kube-proxy-clean-upinit container" to "Consider cleaning up the kube-proxy-clean-up init container" Aug 19, 2022
@ialidzhikov ialidzhikov linked a pull request Aug 19, 2022 that will close this issue
@ScheererJ
Member

@ialidzhikov Your observation is correct: the first of the two cleanup containers is used by default, and there is currently no switch exposed to change this at runtime without a code change first. Therefore, the container could be safely removed. However, I would not do it. From my point of view, the approach documented by cilium is dangerous, as it simply discards rules according to their names. Therefore, I would rather keep a fallback option, which is the second container.
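
For context, the two strategies boil down to roughly the following (a hedged sketch; the container specs are assumed and not copied from the chart, but the iptables one-liner matches the cleanup documented by cilium, and kube-proxy ships a built-in --cleanup mode):

initContainers:
# Cilium-documented approach: discard every rule whose name matches "KUBE".
- name: cilium-kube-proxy-clean-up
  command: ["sh", "-c", "iptables-save | grep -v KUBE | iptables-restore"]
# Fallback approach: let kube-proxy itself remove the rules it created.
- name: kube-proxy-clean-up
  command: ["kube-proxy", "--cleanup"]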

@ialidzhikov
Member Author

> From my point of view, the approach documented by cilium is dangerous, as it simply discards rules according to their names. Therefore, I would rather keep a fallback option, which is the second container.

It creates confusion for external contributors, and I don't see much value in keeping something that never gets executed.
The source code is in VCS: if we remove the init container, it won't get lost, and we will still be able to check old revisions if we ever want to switch back to it.

Right now this init container logs the following warnings/errors:

E0822 12:45:34.286305       1 proxier.go:643] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.4.0-6-cloud-amd64/modules.builtin: no such file or directory" filePath="/lib/modules/5.4.0-6-cloud-amd64/modules.builtin"
I0822 12:45:34.288277       1 proxier.go:653] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
I0822 12:45:34.289751       1 proxier.go:653] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
I0822 12:45:34.291055       1 proxier.go:653] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
I0822 12:45:34.292388       1 proxier.go:653] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
I0822 12:45:34.293648       1 proxier.go:653] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
time="2022-08-22T12:45:34Z" level=warning msg="Running modprobe ip_vs failed with message: `modprobe: WARNING: Module ip_vs not found in directory /lib/modules/5.4.0-6-cloud-amd64`, error: exit status 1"

The reason seems to be that the container does not mount /lib/modules from the host. My point is that, like everything that is not used, it is bound to become outdated (or break).
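
For what it's worth, silencing those warnings would presumably require a hostPath mount along these lines (a sketch with assumed volume names, not taken from the chart):

initContainers:
- name: kube-proxy-clean-up
  volumeMounts:
  # Give the container access to the host's kernel modules so modprobe
  # can find ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh and nf_conntrack.
  - name: lib-modules
    mountPath: /lib/modules
    readOnly: true
volumes:
- name: lib-modules
  hostPath:
    path: /lib/modules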

If you still see value in keeping it, then I can close this issue (and the corresponding draft PR).

@ScheererJ
Member

Honestly speaking, I doubt that this init container holds any external contributor back. There are far more components in this repository that are not used in the Gardener setup but are still present, as they are part of the upstream cilium distribution, e.g. etcd-operator.

Removing the container would indeed leave it in version control. However, in case of an issue you would still need to remember that this container ever existed in order to look for it. People forget over time...

Let's revisit this topic once @DockToFuture is back at the beginning of October.

@gardener-robot gardener-robot added the lifecycle/stale Nobody worked on this for 6 months (will further age) label May 4, 2023
@gardener-robot gardener-robot added lifecycle/rotten Nobody worked on this for 12 months (final aging stage) and removed lifecycle/stale Nobody worked on this for 6 months (will further age) labels Jan 11, 2024