socat missing in stable #1114
Comments
socat is a dependency of the Kubernetes kubelet, which we aren't shipping in Stable yet. The plan is to ship the kubelet in a container image so that it can be updated out of band of the OS. I'm not sure if we will independently ship socat in the image. We can consider it.
I am on CoreOS Stable (899.13.0) and there is no socat, making it impossible to use kubectl port-forward.
If you need kubectl and socat, you can put them in a container and run them from there. I'm not sure if it makes sense to ship socat in the OS itself.
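For illustration, a minimal sketch of that container approach; the image, ports, and addresses below are placeholders, not anything prescribed in this thread:

```shell
# Run socat from a throwaway container on the host network instead of
# installing it on CoreOS itself. alpine:3.4 and the ports are examples.
docker run --rm --net=host alpine:3.4 sh -c \
  'apk add --no-cache socat && socat TCP-LISTEN:8080,fork TCP:127.0.0.1:6443'
```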
What was the rationale for putting socat into Beta? I have the same use case as @cescoferraro. I don't need kubectl on the node; I just want to use kubectl port-forward from my laptop, which needs socat on the node.
This was exactly my use case too - not sure if this is even workable with …
@alexanderkiel that's it! I am using systemd to boot Kubernetes because that approach makes sense to me. There is a chicken-and-egg problem here. The official Kubernetes documentation advises starting the apiserver from a kubelet inside a container, but the kubelet needs the --api-servers flag, and therefore the apiserver itself. It screams out loud at startup while the apiserver is not yet alive; the docs tell you not to mind those errors, which sounds weird to me, especially coming from Google. After reading @crawford's answer I was wondering if I could do something like: …
@alexanderkiel socat made it into Beta because we shipped the kubelet in the OS itself. We came to the conclusion that this was a mistake (coupling the OS and Kubernetes is just too many moving parts to keep track of) and have instead shipped it in an ACI which can be run with rkt. The plan is to remove the kubelet in the coming months, but to reduce dependence, we never shipped it in the Stable channel. I am not familiar with kubectl's operation and am unclear on the nature of this dependence. Maybe @aaronlevy can chime in.
It's in the kubelet package; it looks for it on the PATH.
…but it seems that they are unsure how to solve this issue too. Right now we need nsenter and socat to run the kubelet with systemd on CoreOS. A link to the right binaries would be perfect.
As @crawford mentioned, we are now moving toward running the kubelet from a rkt container (which also contains nsenter + socat). I haven't explicitly tested this on Stable. The kubelet-wrapper script is currently only in the alpha release, but it can easily be added manually to stable instances. See: https://coreos.com/kubernetes/docs/latest/kubelet-wrapper.html#manual-deployment
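For reference, a minimal kubelet.service using the wrapper, loosely following the linked doc; the version tag and kubelet flags here are illustrative assumptions, not exact values from this thread:

```shell
# Write an illustrative unit that runs the kubelet via kubelet-wrapper.
cat <<'EOF' | sudo tee /etc/systemd/system/kubelet.service
[Service]
# Pins the hyperkube image the wrapper fetches and runs with rkt.
Environment=KUBELET_VERSION=v1.3.5_coreos.0
ExecStart=/usr/lib/coreos/kubelet-wrapper \
  --api-servers=http://127.0.0.1:8080 \
  --allow-privileged=true
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl start kubelet
```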
@crawford I can understand that the CoreOS team can't support things like the kubelet in the core release. I like the approach with the kubelet-wrapper script and will test it in the near future.
Would like to hear how you guys solved this issue. Are you using port-forwarding with a socat container? I would now try to run a service, expose it with a LoadBalancer, and address it from outside with the external :xxxxxx
@levmichael3 Download the socat and nsenter binaries from the alpha/beta channel and toss them anywhere on the PATH of a Stable-channel CoreOS installation. But you do not need socat to expose a service with a LoadBalancer; you just need the LoadBalancer.
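The mechanics of that workaround, sketched with a scratch directory and a dummy stand-in binary so the steps can be tried anywhere; on a real node you would copy trusted static socat/nsenter binaries into /opt/bin instead:

```shell
# Stage binaries in a directory and prepend it to PATH, which is how the
# kubelet resolves socat. BIN_DIR stands in for /opt/bin; this "socat" is
# a dummy script, not the real binary.
BIN_DIR="$(mktemp -d)"
printf '#!/bin/sh\necho socat-ok\n' > "$BIN_DIR/socat"
chmod +x "$BIN_DIR/socat"
export PATH="$BIN_DIR:$PATH"
command -v socat   # resolves to $BIN_DIR/socat
socat              # prints: socat-ok
```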
It seems …
Simultaneously, I'm not sure how fast CoreOS releases new …
Node with …
Node with …
@pires I don't think there's an immediate solution to the problem of the CoreOS kubelet image lagging behind official kubelet images, but euank says in this comment that the long-term goal is to upstream all the CoreOS patches so using a different image is no longer required. I feel your pain, kubelet-wrapper is no fun.
@pires v1.3.5 is available now. Can you create a VM image with the kubelet already downloaded through a snapshot? Being able to upgrade the kubelet means we have to pull it out of the image. Overall, I am going to close this issue because I think it was opened because the kubelet needs socat. But we no longer ship the kubelet in the image. See https://coreos.com/kubernetes/docs/latest/kubelet-wrapper.html
@philips Is there a way to get socat into the kubelet-wrapper?
@philips I used to push …
@alexanderkiel socat is in the kubelet-wrapper
@pires You could build your own container to run the kubelet if you don't like the CoreOS-maintained images. Would that work for you?
Personally, I've never used the bundled …
@philips it would, but for my purpose (…)
@jimmycuadra @pires @alexanderkiel I think one of the pain points is that people want to use the upstream images, but that doesn't easily work because kubelet-wrapper expects the kubelet to exist as …

After that change is available, you should be able to use upstream hyperkube images interchangeably with the CoreOS hyperkube images & the kubelet-wrapper (with the exception of any fixes we have backported).

As far as the releases of CoreOS hyperkube images: we try to get these out quickly after official releases. In some cases this is same-day; in some cases it is a few days. There is room for improvement here, and it is something we are going to put more effort into automating. Because we are backporting a few fixes, we run all of our builds through our own vetting process (e2e tests + conformance tests) before releasing the images, so this delays them a bit relative to the upstream release.

@pires another option RE speeding up the boot process would be to distribute the hyperkube image the same way you are distributing the binary, then just pre-cache it in the image store:
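A sketch of that pre-caching step, assuming rkt is the runtime; the image name and version tag are illustrative, and depending on your trust setup rkt may need signing/trust options not shown here:

```shell
# At image-build time, fetch the hyperkube image into rkt's local store
# so first boot doesn't pay the download cost. Tag is an example.
rkt fetch quay.io/coreos/hyperkube:v1.3.5_coreos.0
rkt image list   # the image should now appear in the store
```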
Ultimately, I think we all believe in distributing our applications as containers (yay Kubernetes!) -- and we think that should extend to the kubelet itself as well (socat as a dependency? It should be in the image!). So I would definitely like to address any user experience issues -- because we do want the option of running the kubelet as a container to be a good experience.
@aaronlevy thanks a ton for that insight, that sounds like a very simple solution indeed.
Just in case anyone needs the nsenter and socat binaries: https://github.com/cescoferraro/kube
Right now my team needs some features from Kubernetes ^1.3, so it's time for me to start using the kubelet-wrapper. I am fiddling around with it, but I already have a question that Google could not solve for me. Before, I was using exclusively systemd, like: etcd > flanneld > flanneld-docker > kubelet > kube-api > kube-… Now with the kubelet-wrapper: kubelet > etcd > kube-api > kube-… Considering that flanneld depends on etcd, where and how do I run flanneld? As a kubelet manifest or as a systemd unit?
@aaronlevy So if I run flanneld as a systemd unit, I am also supposed to run etcd with systemd first. I am trying to …
Sorry, yes, etcd2 is also required for flanneld right now. In our single-node scripts we simply start it on the same node: https://github.com/coreos/coreos-kubernetes/blob/master/single-node/user-data#L965

For multi-node installations it depends on how it is being launched, but ultimately etcd is started by systemd. So: launch etcd2, inject the network configuration (https://github.com/coreos/coreos-kubernetes/blob/master/multi-node/generic/controller-install.sh#L63-L84), then start flanneld.

And you are correct, the goal we are aiming for is that the host only needs to start the kubelet, and all other pieces will be run as components in the cluster. This will eventually mean etcd also, but currently it is still deployed independently. Then with the work I linked to for self-hosting flannel, it actually uses the Kubernetes API as the storage layer (which uses etcd), so direct access to etcd from each node will not be necessary.
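Condensed from the linked scripts into a sketch of that sequence; the CIDR is an example value, not something mandated by the thread:

```shell
# 1. Start etcd2 via systemd.
sudo systemctl start etcd2
# 2. Seed flannel's network config in etcd (flanneld reads this key).
etcdctl set /coreos.com/network/config '{ "Network": "10.2.0.0/16" }'
# 3. Start flanneld, which now finds its config in etcd.
sudo systemctl start flanneld
```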
Helm (tiller) requires socat to run. CoreOS doesn't ship socat with the OS (see coreos/bugs#1114), hence we have to add it to the system during Kubernetes deployment. While there is a solution for this issue when running Kubernetes on CoreOS with rkt (the kubelet-wrapper, https://coreos.com/kubernetes/docs/latest/kubelet-wrapper.html), there is none when running Kubernetes on CoreOS with Docker. For this reason we compiled a static socat binary, copy it to /opt/bin (or another directory), and add that path to the kubelet Docker container's PATH, to run Helm (tiller) on the Kubernetes cluster.
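A hedged sketch of that Docker-based workaround; the image name/tag, mount points, and flags are illustrative, and a real kubelet invocation needs many more options than shown:

```shell
# Host side: drop the static socat where it will be mounted.
sudo mkdir -p /opt/bin
sudo cp ./socat-static /opt/bin/socat
sudo chmod +x /opt/bin/socat

# Kubelet container: mount /opt/bin read-only and prepend it to PATH so
# the kubelet can find socat for port-forwarding (plus your other flags).
docker run -d --name kubelet \
  --net=host --pid=host --privileged \
  -v /opt/bin:/opt/bin:ro \
  -e PATH=/opt/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \
  gcr.io/google_containers/hyperkube:v1.4.6 \
  /hyperkube kubelet --allow-privileged=true
```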
I'm new to CoreOS so please forgive me if this is down to confusion re: versioning. However...
It appears 'socat' is currently available in the Beta (899) and Alpha (949) images, but it is not in Stable (835.12), despite the release notes stating that it was added (or updated) in 773.1.
Will socat ever make it into stable?