Kubernetes requires conntrack binary #404
Comments
Thanks for the report. I'm not against adding it, but I have some further questions below. This is pointing out a dependency from kubelet (dockershim), but the OS doesn't ship a kubelet binary. How are you installing and running it? I think in general there are two good cases here:
In my case I am building an AMI via Packer and copying the kubelet binary onto the OS (following the pattern laid out here: https://github.com/awslabs/amazon-eks-ami). If you pull it in via RPM, how would you choose the version of Kubernetes you want to run? I am not aware of an official image for the kubelet, and I was looking to avoid managing more components myself, which is why I copy the binary. CoreOS used to provide images, but I can't seem to find those anymore (and I'm not sure conntrack was included in the image).
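For concreteness, a minimal sketch of what such a Packer shell-provisioner step might run; the version pin and install path are assumptions for illustration, not details from this thread:

#!/usr/bin/env bash
# Fetch a pinned kubelet release during the image build (illustrative version).
# Choosing KUBELET_VERSION here is how the Kubernetes version gets selected,
# rather than whatever an RPM repository happens to carry.
set -euo pipefail
KUBELET_VERSION="v1.17.3"   # hypothetical pin
curl -fsSL -o /usr/local/bin/kubelet \
  "https://storage.googleapis.com/kubernetes-release/release/${KUBELET_VERSION}/bin/linux/amd64/kubelet"
chmod +x /usr/local/bin/kubelet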
We discussed this very briefly during the meeting today, and it was decided we would ask @cyrus-mc this:
Could you provide conntrack in this same step where you provide the kubelet binary? I did test out package layering it, and it does have some other dependencies beyond just the one RPM, so if you add it as a binary in the Packer step you might have to add more than just the conntrack binary.
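For reference, package layering the utility on Fedora CoreOS looks roughly like this; conntrack-tools is the Fedora package that ships the conntrack binary, and rpm-ostree resolves its shared-library dependencies automatically:

# Layer the conntrack CLI and its library dependencies onto the host
sudo rpm-ostree install conntrack-tools
# The layered package takes effect in the new deployment after a reboot
sudo systemctl reboot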
Additionally, is the daemon needed too? And either way, should we ask for split binary packages so that the utility and the daemon can be consumed separately?
@dustymabe That is what I am currently doing: when I install kubectl I also install/copy conntrack. I guess it would make more sense to run the kubelet via a container, which is what I used to do on Container Linux when they supplied kubelet images. I don't believe they do that anymore, and I didn't want to get into managing my own builds of kubelet images. What I do now works well for me.
The daemon is not needed, just the CLI utility.
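That split is visible in the current Fedora packaging (output trimmed; the exact file list may vary by release):

$ rpm -ql conntrack-tools | grep sbin
/usr/sbin/conntrack
/usr/sbin/conntrackd

The first is the CLI that the kubelet execs; conntrackd is the state-synchronization daemon, which is not needed here.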
Kubernetes publishes the k8s.gcr.io/hyperkube image, which bundles the kubelet and its dependencies as a container image.
Thanks both for the feedback. While I still believe that "all kubelet transitive dependencies on-host" is a bad plan, I have the feeling that conntrack may be worth adding regardless.
This would also benefit kops and kubeadm users.
@MartinForReal @hakman my understanding is that both kops and kubeadm are just trying to mirror/satisfy the kubelet dependency, is that right? Or are they also using conntrack directly?
In the case of kops, it just satisfies the kubelet dependency. I mentioned it here in case this gets some higher priority.
In the case of kubeadm, it just satisfies the kubelet dependency as well, FYI.
Update to my comment above: Kubernetes v1.19+ will stop publishing k8s.gcr.io/hyperkube and leave the Kubelet to traditional or container-image packagers. In Typhoon, the Kubelet image was spun out of hyperkube and spruced up. It's actively used on Fedora CoreOS clusters. No need for conntrack on the host.
@dghubble I think most people will do their first k8s installation with kubeadm and the native kubelet binary, since this is the topmost officially documented way. There is even a copy/paste section for Container Linux that can be used on Fedora CoreOS with minor adaptations. Therefore I think it makes sense to have conntrack included. Also, conntrack is so far the only dependency that is needed and not included (hopefully it will stay that way...).
It's unfortunately not the only dependency. Once the Kubelet runs, you next want to use it in a conformant manner. For NFS, Gluster, Ceph, and Azure Files, the Kubelet environment requires packages (my links above). The Kubelet's iptables tool has to align with kube-proxy too. It's a moving target and could change. IMO, it's preferable to tightly control that Kubelet environment as a container image when using a container-optimized OS. I view kubeadm as geared toward traditional package-based OSes. The curl'd kubelet binary described in their docs would have somewhat worked on CoreOS Container Linux, because some deps and kubelet-wrapper were shipped in the OS (something we later regretted, as I recall). But CoreOS-supported clusters (e.g. Tectonic) used the hyperkube image.
I understand... I guess I'll have to look into running the kubelet as a container; I like the idea anyway. Until then I can personally live with package layering. It would be nice to have a documented way to run a containerized kubelet on Fedora CoreOS, or at least some tutorial to get started.
Oh, it's already done. Never mind :)
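For anyone finding this later, a heavily trimmed sketch of what such a unit can look like, loosely modeled on Typhoon's approach; the image tag, mounts, and kubelet flags are illustrative assumptions, and a working unit needs considerably more of all three:

# /etc/systemd/system/kubelet.service -- illustrative sketch, not a complete unit
[Unit]
Description=Kubelet (containerized)
Wants=network-online.target
After=network-online.target

[Service]
# Image tag is an assumption; Typhoon publishes quay.io/poseidon/kubelet
ExecStartPre=/usr/bin/podman pull quay.io/poseidon/kubelet:v1.18.0
ExecStart=/usr/bin/podman run --name kubelet --rm \
  --privileged --pid host --network host \
  --volume /etc/kubernetes:/etc/kubernetes:ro \
  --volume /var/lib/kubelet:/var/lib/kubelet:rshared \
  quay.io/poseidon/kubelet:v1.18.0 \
  kubelet --config=/etc/kubernetes/kubelet.yaml
ExecStop=/usr/bin/podman stop kubelet
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target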
In the short term, one option is to add the package by default but strip out the systemd unit. This pattern currently breaks anyone who wants the daemon, though, as discussed in e.g. the networkd bits: coreos/fedora-coreos-config#648
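A user-side approximation of that split, assuming the Fedora conntrack-tools package ships a conntrackd.service unit, is to layer the package and keep the daemon masked so only the CLI is consumed:

sudo rpm-ostree install conntrack-tools
# Keep only the CLI in play; the daemon stays inert even if something tries to start it
sudo systemctl mask conntrackd.service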
This will help us work around a believed kernel bug for OpenShift right now. We may remove this later. References: coreos/fedora-coreos-tracker#404, https://bugzilla.redhat.com/show_bug.cgi?id=1925698, openshift/machine-config-operator#2421
The conntrack binary is required for correct cleanup of network namespaces (https://github.com/kubernetes/kubernetes/blob/95a3cd54cf739019b1211163add7247bd31c0ed7/pkg/kubelet/dockershim/network/hostport/hostport_manager.go#L69).
If Fedora CoreOS is to be used as an operating system for running Kubernetes, this component should be included.
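For context, the hostport manager linked above shells out to the conntrack CLI; the cleanup it performs is roughly equivalent to the following (the address is illustrative):

# Delete tracked UDP flows whose original destination is a given IP, so stale
# entries don't keep steering traffic toward a deleted pod/network namespace
conntrack -D --orig-dst 10.244.1.5 -p udp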
[root@ip-10-36-8-249 kubelet]# rpm-ostree status
State: idle
AutomaticUpdates: disabled
Deployments:
Version: 31.20200210.3.0 (2020-02-24T16:48:02Z)
Commit: 4ea6beed22d0adc4599452de85820f6e157ac1750e688d062bfedc765b193505
GPGSignature: Valid signature by 7D22D5867F2A4236474BF7B850CB390B3C3359C4