
socat missing in stable #1114

Closed
gdhagger opened this issue Feb 10, 2016 · 29 comments

Comments

@gdhagger

I'm new to CoreOS so please forgive me if this is down to confusion re: versioning. However...

It appears 'socat' is available currently in the Beta (899) and Alpha (949) images - but it is not in Stable (835.12), despite the release notes stating that it was added (or updated) in 773.1.

Will socat ever make it into stable?

@crawford
Contributor

socat is a dependency of the kubernetes kubelet which we aren't shipping in Stable yet. The plan is to ship the kubelet in a container image so that it can be updated out of band of the OS. I'm not sure if we will independently ship socat in the image. We can consider it.

@cescoferraro

I am on CoreOS Stable (899.13.0) and there is no socat, which makes it impossible to do kubectl port-forward.
IMHO socat should also ship on Stable, for those who do not want to live on the tip of such a fast-paced project as Kubernetes.
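
For reference, the failure looks roughly like this (the pod name and ports are hypothetical; the error text matches the kubelet source quoted later in this thread):

    # run from a workstation against a pod scheduled on a socat-less node
    kubectl port-forward mypod 8080:80
    # the kubelet on the node rejects the request with something like:
    #   unable to do port forwarding: socat not found.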

@crawford
Contributor

If you need kubectl and socat, you can put them in a container and run them from there. I'm not sure if it makes sense to ship socat in the OS itself.
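
One possible reading of that suggestion, as a rough sketch (the image name is an assumption; substitute any image that ships socat), sharing the host network namespace so the relayed port is reachable from outside. Note that the kubelet's own port-forward code still looks for socat on its PATH, as discussed further down in the thread:

    # relay host port 8080 to a service listening on 127.0.0.1:80
    docker run --rm --net=host --entrypoint socat alpine/socat \
      TCP-LISTEN:8080,fork TCP:127.0.0.1:80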

@alexanderkiel

What was the rationale for putting socat into Beta? I have the same use case as @cescoferraro: I don't need kubectl on the node; I just want to use kubectl port-forward from my laptop, which needs socat on the node.

@gdhagger
Author

This was exactly my use case too - not sure if this is even workable with socat in a container.


@cescoferraro

@alexanderkiel that's it! I am using systemd to boot Kubernetes because that is what makes sense to me, but there is a chicken-and-egg problem here. The official Kubernetes documentation advises starting the apiserver from a kubelet running inside a container, yet the kubelet needs the --api-servers flag, and therefore the apiserver itself. It will complain loudly at startup while the apiserver is not yet up; the docs tell you to ignore those errors, which sounds weird to me, especially coming from Google.

After reading @crawford's answer I was wondering if I could do something like:

    docker run -v /usr/bin/socat:/usr/bin/socat busybox "sleep forever"

Would the container's binaries work on the CoreOS host? Which image should I use to avoid C-library compatibility issues? Where does the kubelet expect socat to be? Anywhere on the PATH?

@crawford
Contributor

@alexanderkiel socat made it into Beta because we shipped the kubelet in the OS itself. We came to the conclusion that this was a mistake (coupling the OS and Kubernetes is just too many moving parts to keep track of) and have instead shipped it in an ACI which can be run with rkt. The plan is to remove the kubelet in the coming months, but to reduce dependence, we never shipped it in the Stable channel.

I am not familiar with kubectl's operation and am unclear on the nature of this dependence. Maybe @aaronlevy can chime in.

@cescoferraro

It's in the kubelet package; it looks for socat on the PATH:
https://github.com/kubernetes/kubernetes/blob/b1cd74bd34c7a603599f55dafbdd05caad6821af/pkg/kubelet/dockertools/manager.go

    containerPid := container.State.Pid
    socatPath, lookupErr := exec.LookPath("socat")
    if lookupErr != nil {
        return fmt.Errorf("unable to do port forwarding: socat not found.")
    }

but it seems they are unsure how to solve this too (see the TODO below). Right now we need nsenter and socat on the host to run the kubelet with systemd on CoreOS; a link to the right binaries would be perfect.

// TODO:
//  - match cgroups of container
//  - should we support nsenter + socat on the host? (current impl)
//  - should we support nsenter + socat in a container, running with elevated privs and --pid=host?

@aaronlevy

As @crawford mentioned, we are now moving toward running the kubelet from a rkt container (which also contains nsenter + socat).

I haven't explicitly tested kubectl port-forward using the container, but all upstream conformance tests pass in this configuration (however, I'm not sure if they exercise the port-forward functionality). When run using the kubelet-wrapper script we are now shipping, the kubelet runs in the host PID namespace, so this will hopefully just work out of the box.

The kubelet-wrapper script is currently just in the alpha release, but it can easily be added manually to stable instances. See: https://coreos.com/kubernetes/docs/latest/kubelet-wrapper.html#manual-deployment
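
For anyone trying this on Stable, here is a rough sketch of such a manual deployment as a systemd unit. The environment variable, image tag, API server address, and flags follow the general pattern of the linked doc and are illustrative; treat the doc as authoritative, and assume the kubelet-wrapper script has already been copied to /opt/bin/kubelet-wrapper:

    # write a kubelet unit that delegates to the kubelet-wrapper script
    sudo tee /etc/systemd/system/kubelet.service <<'EOF'
    [Service]
    # tag of the hyperkube image the wrapper should run (illustrative)
    Environment=KUBELET_VERSION=v1.3.4_coreos.0
    ExecStart=/opt/bin/kubelet-wrapper \
      --api-servers=https://<master-ip> \
      --allow-privileged=true
    Restart=always
    RestartSec=10
    [Install]
    WantedBy=multi-user.target
    EOF
    sudo systemctl daemon-reload
    sudo systemctl enable kubelet && sudo systemctl start kubelet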

@alexanderkiel

@crawford I can understand that the CoreOS team can't support things like the kubelet in the core release. I'm on board with the kubelet-wrapper approach and will test it in the near future.

@levmichael3

I would like to hear how you solved this issue. Are you using port-forwarding with a socat container? I will now try to run a service, expose it with a LoadBalancer, and address it from outside via the external :xxxxxx

@cescoferraro

@levmichael3 Download the socat and nsenter binaries from the Alpha/Beta channel and put them anywhere on the PATH of a Stable-channel CoreOS installation. But you do not need socat to expose a service with a LoadBalancer; you just need the LoadBalancer.
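
In shell terms, that is roughly the following (where the binaries come from is up to you; /opt/bin is just a convenient, commonly used location):

    # copy the statically linked binaries onto the node...
    sudo mkdir -p /opt/bin
    sudo cp socat nsenter /opt/bin/
    sudo chmod +x /opt/bin/socat /opt/bin/nsenter
    # ...and make sure that directory is on the PATH the kubelet sees, e.g.
    # via a drop-in on the kubelet unit if it is not already there:
    #   Environment=PATH=/opt/bin:/usr/sbin:/usr/bin:/sbin:/bin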

@pires

pires commented Aug 12, 2016

It seems alpha no longer includes socat.

@pires

pires commented Aug 12, 2016

kubelet-wrapper takes a long time to download, and unfortunately it seems that this must happen on every node bootstrap, versus downloading the kubelet once and just copying it to the VMs.

At the same time, I'm not sure how quickly CoreOS releases new kubelet versions. I'm using Kubernetes 1.3.5 with kubelet-wrapper 1.3.4. Is there a way to confirm new releases?

@pires

pires commented Aug 12, 2016

Node with kubelet-wrapper 1.3.4:

    core@node-01 ~ $ rkt list
    UUID      APP        IMAGE NAME                                STATE    CREATED         STARTED         NETWORKS
    530d6357  flannel    quay.io/coreos/flannel:0.5.5              exited   16 minutes ago  16 minutes ago
    73dfe20e  flannel    quay.io/coreos/flannel:0.5.5              running  16 minutes ago  16 minutes ago
    a13ffc2f  hyperkube  quay.io/coreos/hyperkube:v1.3.4_coreos.0  running  15 minutes ago  15 minutes ago

Node with kubelet-wrapper 1.3.5:

    core@node-02 ~ $ rkt list
    UUID      APP      IMAGE NAME                    STATE    CREATED        STARTED        NETWORKS
    10368211  flannel  quay.io/coreos/flannel:0.5.5  running  2 minutes ago  2 minutes ago
    d3b98429  flannel  quay.io/coreos/flannel:0.5.5  exited   2 minutes ago  2 minutes ago

@jimmycuadra

@pires I don't think there's an immediate solution to the problem of the CoreOS kubelet image lagging behind official kubelet images, but euank says in this comment that the long term goal is to upstream all the CoreOS patches so using a different image is no longer required. I feel your pain, kubelet-wrapper is no fun.

@philips

philips commented Aug 19, 2016

@pires v1.3.5 is available now.

Can you create a VM image with the kubelet already downloaded through a snapshot? Being able to upgrade the kubelet means we have to pull it out of the image.

Overall, I am going to close this issue: I believe it was opened because the kubelet needs socat, but we no longer ship the kubelet in the image. See https://coreos.com/kubernetes/docs/latest/kubelet-wrapper.html

philips closed this as completed Aug 19, 2016
@alexanderkiel

@philips Is there a way to get socat into the kubelet-wrapper?

@pires

pires commented Aug 19, 2016

@philips I used to push the kubelet to the VM after downloading it. Now, in order to keep things on par with the CoreOS way, I have replaced that with kubelet-wrapper, believing it includes socat. Correct me if I'm wrong here.

@philips

philips commented Aug 19, 2016

@alexanderkiel socat is in the kubelet-wrapper

@philips

philips commented Aug 19, 2016

@pires You could build your own container to run the kubelet if you don't like the CoreOS-maintained images. Would that work for you?

@pires

pires commented Aug 19, 2016

Personally, I've never used the bundled kubelet; I just relied on the bundled socat.

@philips it would, but for my purposes (the kubernetes-vagrant-coreos-cluster repo) it's OK. It just makes provisioning slower (kubelet-wrapper is downloaded on every node bootstrap), and again, I'm not sure, without manually testing it myself, whether a new Kubernetes release has a matching kubelet-wrapper version. That's why I (and others above) complained about the user experience.

@aaronlevy

aaronlevy commented Aug 19, 2016

@jimmycuadra @pires @alexanderkiel

I think one of the pain points is that people want to use the upstream images, but that doesn't easily work because kubelet-wrapper expects the kubelet to exist as /kubelet in the hyperkube container. This will be resolved when kubernetes/kubernetes#29591 makes it into a release (possibly cherry picked into v1.3).

After that change is available, you should be able to use upstream hyperkube images interchangeably with the coreos hyperkube images & the kubelet-wrapper (with the exception of any fixes we have backported).

As far as releases of the CoreOS hyperkube images go: we try to get these out quickly after official releases. In some cases this is the same day, in some cases it is a few days. There is room for improvement here, and it is something we are going to put more effort into automating further. Because we are backporting a few fixes, we run all of our builds through our own vetting process (e2e tests + conformance tests) before releasing the images, so this delays them a bit relative to the upstream release.

@pires another option re: speeding up the boot process would be to distribute the hyperkube image the same way you are distributing the binary, then just pre-cache it in the image store:

    rkt fetch --insecure-options=image /path/to/kubelet/container
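
For example (an untested sketch: the image name matches the listings above, the file paths are arbitrary, and it assumes a rkt version that has rkt image export), the image could be exported once, baked into the VM image, and seeded into each node's store at boot:

    # on the build machine: export the fetched image to a portable .aci
    rkt image export quay.io/coreos/hyperkube:v1.3.4_coreos.0 hyperkube.aci
    # on each node at boot: load it into the local image store
    rkt fetch --insecure-options=image /path/to/hyperkube.aci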


Ultimately, I think we all believe in distributing our applications as containers (yay Kubernetes!) -- and we think that should extend to the kubelet itself as well (socat as a dependency? It should be in the image!). So I would definitely like to address any user experience issues -- because we do want the option of running the kubelet as a container to be a good experience.

@pires

pires commented Aug 19, 2016

@aaronlevy thanks a ton for that insight; that sounds like a very simple solution indeed.

@cescoferraro

Just in case anyone needs the nsenter and socat binaries: https://github.com/cescoferraro/kube

@cescoferraro

Right now my team needs some features from Kubernetes 1.3+, so it's time for me to start using the kubelet-wrapper. I am fiddling around with it, but I already have a question that Google could not answer for me.

Before, I was using systemd exclusively, like this:

etcd > flanneld > flanneld-docker > kubelet > kube api > kube-.....

Now with the kubelet-wrapper:

kubelet > etcd > kube-api > kube-....

Considering that flanneld depends on etcd, where and how do I run flanneld? As a kubelet manifest or as a systemd unit?

@aaronlevy

@cescoferraro we are currently running flanneld as a systemd unit. We've switched to utilizing the CNI network plugins in the kubelet, however, so the configuration changes slightly. You can see an example in our generic install scripts:

https://github.com/coreos/coreos-kubernetes/blob/master/multi-node/generic/controller-install.sh
https://github.com/coreos/coreos-kubernetes/blob/master/multi-node/generic/worker-install.sh

CNI config: https://github.com/coreos/coreos-kubernetes/blob/master/multi-node/generic/worker-install.sh#L332-L343

Docker drop-ins: https://github.com/coreos/coreos-kubernetes/blob/master/multi-node/generic/worker-install.sh#L284-L305

(optional) additional flannel config: https://github.com/coreos/coreos-kubernetes/blob/master/multi-node/generic/worker-install.sh#L264-L272
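
For orientation, the CNI config those scripts install is roughly of this shape (the path and values here are a from-memory sketch; the worker-install.sh link above is authoritative):

    # a flannel CNI config similar to what the linked script writes
    sudo mkdir -p /etc/kubernetes/cni/net.d
    sudo tee /etc/kubernetes/cni/net.d/10-flannel.conf <<'EOF'
    {
        "name": "podnet",
        "type": "flannel",
        "delegate": {
            "isDefaultGateway": true
        }
    }
    EOF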

Also just to note, we are currently working on (for v1.4) self-hosted flannel as a component of kubernetes - so installing the pod network will be as simple as kubectl create -f flannel-manifest.yaml. There is a WIP PR with that functionality here:
kubernetes-retired/bootkube#113
kubernetes-retired/bootkube@9c275e8

@cescoferraro

@aaronlevy So if I run flanneld as a systemd unit, I am also supposed to run etcd with systemd first.
And you are working so that I will eventually be able to run flanneld as a manifest, with the CNI plugin? So the only thing I would run with systemd would be the kubelet itself?
Did I get it right?

I am trying to do
[Systemd] etcd2 > flanneld > flanneld_docker > kubelet > [Manifests] kube-apiserver > kube-...
as it seems the more stable way to do it.

@aaronlevy

Sorry, yes, etcd2 is also required for flanneld right now.

In our single-node scripts we simply start it on the same node: https://github.com/coreos/coreos-kubernetes/blob/master/single-node/user-data#L965

For multi-node installations it depends on how it is being launched, but ultimately etcd is started by systemd.

So: launch etcd2, inject network configuration (https://github.com/coreos/coreos-kubernetes/blob/master/multi-node/generic/controller-install.sh#L63-L84), then start flanneld.
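
A minimal sketch of that network-configuration step (the subnet and backend are illustrative; the linked controller-install.sh has the real values):

    # flanneld reads its network configuration from this etcd v2 key
    etcdctl set /coreos.com/network/config \
        '{"Network":"10.2.0.0/16","Backend":{"Type":"vxlan"}}'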

And you are correct: the goal we are aiming for is that the host only needs to start the kubelet, and all other pieces will run as components in the cluster. This will eventually include etcd as well, but currently it is still deployed independently. Then, with the work I linked to for self-hosting flannel, flannel actually uses the Kubernetes API as its storage layer (which in turn uses etcd), so direct access to etcd from each node will not be necessary.

VincentS added a commit to VincentS/kargo that referenced this issue Mar 2, 2017
Helm (tiller) requires socat to run. CoreOS doesn't supply socat
with its OS (see coreos/bugs#1114), hence
we have to add it to the system during the Kubernetes deployment.
While there is a solution for this issue when running Kubernetes on CoreOS with rkt
(the kubelet wrapper, https://coreos.com/kubernetes/docs/latest/kubelet-wrapper.html),
there is none when running Kubernetes on CoreOS with
Docker. For this reason we compiled a static socat binary that we copy
to /opt/bin or another directory and add that path to the kubelet Docker
container's PATH, so that Helm (tiller) can run on the Kubernetes cluster.