Port-forwarding not working due to missing socat command #19765
I tried k8s 1.1.4 and the problem remains. @alex-mohr, where exactly is socat being called? How can I fix this?
cc @ncdc
@pwais socat needs to be installed on the nodes. See kubernetes/pkg/kubelet/dockertools/manager.go, line 1182 (at commit 9fef5f2).
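For anyone checking a node, here is a minimal sketch of verifying the binary is on the PATH. The install commands in the comment are examples only and depend on your distro:

```shell
# Minimal sketch: check whether socat is on this node's PATH before
# expecting `kubectl port-forward` to work against pods scheduled here.
if command -v socat >/dev/null 2>&1; then
  socat_status="present"
else
  socat_status="missing"
fi
echo "socat is ${socat_status} on this node"
# If missing, install it with your distro's package manager, e.g.
# `apt-get install socat` (Debian/Ubuntu) or `yum install socat` (RHEL/CentOS).
```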
I am using non-containerized k8s. Running $ kubectl port-forward jenkins-v14-hs1sf 8123:80 gives:
I0202 16:16:46.875324 18591 portforward.go:213] Forwarding from 127.0.0.1:8123 -> 80
I0202 16:16:46.875488 18591 portforward.go:213] Forwarding from [::1]:8123 -> 80
I0202 16:16:57.346917 18591 portforward.go:247] Handling connection for 8123
I0202 16:16:57.348187 18591 portforward.go:247] Handling connection for 8123
E0202 16:16:57.389108 18591 portforward.go:318] an error occurred forwarding 8123 -> 80: error forwarding port 80 to pod jenkins-v14-hs1sf_default, uid : exit status 1
E0202 16:16:57.397791 18591 portforward.go:318] an error occurred forwarding 8123 -> 80: error forwarding port 80 to pod jenkins-v14-hs1sf_default, uid : exit status 1
I0202 16:16:57.398114 18591 portforward.go:247] Handling connection for 8123
E0202 16:16:57.425331 18591 portforward.go:318] an error occurred forwarding 8123 -> 80: error forwarding port 80 to pod jenkins-v14-hs1sf_default, uid : exit status 1
Hello, I'm running the Spark example locally on the 1.22 release. After installing socat, I was able to connect to the zeppelin-controller / dashboard via kubectl port-forward zeppelin-controller-0go00 8081:8080. I'm trying to start a sig-local interest group. Since some of you are running Kubernetes locally, would you consider joining the kubernetes-sig-local Google group?
I'm seeing this issue running kubectl 1.2.0 on a GKE cluster running 1.2.0. Port forwarding into a pod gives:
I'm facing the same issue. Where do we need to install socat? On the Kubernetes nodes?
@cmpmohan yes, socat is needed on the nodes for port forwarding to work.
FYI, there's a list of kubelet dependencies being compiled in "Identify, document and maybe remove kubelet dependencies":
@ncdc thanks for your response. We have an alternative way to access the Zeppelin web portal: we access it via the Kubernetes minion IP and port, and it's working.
This is a problem on CoreOS, where the nodes don't have socat. Is there a way to use socat packaged as a Docker image? In CoreOS, programs are generally not installed on the host itself.
@smarterclayton @kubernetes/sig-node @kubernetes/rh-cluster-infra what do you all think about either switching to running socat from an image or at least making it an option?
@ncdc - we need to determine if we want to charge usage of socat to the pod.
+1 to charging to the pod if we can make that happen
Originally I had assumed a good place to put socat and other necessary …
@smarterclayton - that is a really interesting idea ;-)
@philips this used to work on CoreOS. What happened?
If you run the kubelet via the kubelet-wrapper, this should work. CoreOS issue: coreos/bugs#1114. /cc @crawford @aaronlevy
@jimmycuadra this should work on CoreOS when running the kubelet as a container via the kubelet-wrapper (as @euank mentioned). See: https://coreos.com/kubernetes/docs/latest/kubelet-wrapper.html Also agree that longer-term something along the lines of a pod infrastructure image would help with required utilities (socat, nsenter), mount tools (nfs, ceph, git), and network plugins (flannel, calico, weave).
Fair points @crawford, and thanks for the additional context. As it applies to this issue, it still seems like the official kubelet release needs a better story around packaging its dependencies with it in some way. There may be more tweaks that CoreOS has made to the distro-provided kubelet image, but the changes we're talking about would benefit any theoretical system with similar properties, and as stable as these networking tools are, it's probably a good idea to lock down specific versions of them that have been shown to work with the kubelet in e2e tests.
One option for socat in particular (which would likely apply to most/all other external kubelet executable dependencies) is to add it to the pause container. I'm actually going to look at doing that as a part of #25113 |
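As a rough illustration of that idea (the base image, tags, and paths below are assumptions for the sketch, not how the actual pause image is built — the real one is built from scratch in the Kubernetes build tree), layering socat onto a pause-like image could look something like:

```dockerfile
# Hypothetical sketch only: image names and versions are assumptions.
FROM alpine:3.6
RUN apk add --no-cache socat
# Copy the pause binary from an existing pause image (multi-stage copy).
COPY --from=gcr.io/google_containers/pause-amd64:3.0 /pause /pause
ENTRYPOINT ["/pause"]
```

The kubelet would then need to exec socat from this container rather than from the host's PATH, which is the harder part of the change.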
I'd like to provide a little more context to your comments about kubelet, dependencies, and the CoreOS kubelet-wrapper, @jimmycuadra.
First of all, the work of packaging dependencies with the kubelet is being done with the upstream kubernetes hyperkube image. It is true that we have our own package/distribution of this, but it's based on the same upstream work, and we're working to unify them further so that we don't have to carry any patches on top of that hyperkube image. Right now I believe that if you make only small modifications to the kubelet-wrapper (and add a symlink to the hyperkube image), you can easily use your own build of that upstream image. Being able to decouple our release has helped us by allowing us to validate that it works with CoreOS (testing in that environment specifically) and to carry patches as needed to improve that experience, but the long-term goal here is not to carry our own fork but to make the upstream image work on CoreOS (and everywhere).
That is what the above image is working towards and already mostly accomplishes! We'll continue to try and make it better. There are additional complexities, like the fact that there are N ways to run the kubelet, which is why I agree shipping socat in the pause image can be worthwhile by solving the problem a layer up. But for CoreOS, the way we think it should be solved is exactly that: an official kubelet image which encapsulates dependencies appropriately.
Sounds like we're all on the same page then. :} Thanks for the details, @euank!
@jimmycuadra This is what I did to get socat into CoreOS and port forwarding running (not sure if it would help at this stage).
Then tell the kubelet to consider binaries in the /opt/bin directory. Maybe changing it to a socat-specific isolated directory is better.
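For reference, one way to point the kubelet at /opt/bin on a systemd-based host is a drop-in that prepends it to the service's PATH. The file path and PATH value below are assumptions for illustration, not an official recommendation:

```ini
# Hypothetical drop-in, e.g. /etc/systemd/system/kubelet.service.d/10-opt-bin.conf
# Prepends /opt/bin (where socat was copied) to the kubelet's PATH.
[Service]
Environment="PATH=/opt/bin:/usr/sbin:/usr/bin:/sbin:/bin"
```

After adding the drop-in, run `systemctl daemon-reload` and restart the kubelet for it to take effect.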
@xynova please please please use the kubelet-wrapper: https://coreos.com/kubernetes/docs/latest/kubelet-wrapper.html
I am totally in favour of the idea, @philips, but I have a couple of inconveniences with it at the present moment:
Quay.io should move to GCP. However, I agree with having one (optional) image per component.
This is necessary to make 'kubectl port-forward' work. Details: kubernetes/kubernetes#19765 (comment)
The socat blob will need to be pulled into the 'blobs' directory of 'kubo-release' before running 'bosh upload-blobs'. This can be done like so:
$ git clone git@github.com:cloudfoundry-incubator/haproxy-boshrelease.git /tmp/haproxy-boshrelease
$ bosh sync-blobs --dir /tmp/haproxy-boshrelease
$ bosh add-blob --dir <PATH_TO_KUBO_RELEASE> /tmp/haproxy-boshrelease/blobs/haproxy/socat-1.7.3.2.tar.gz socat-1.7.3.2.tar.gz
I think #17157 might still be broken. I'm using the Spark example with the Ubuntu provider. I'm forwarding 8081 below because I'm running the command on my k8s master, which is already using 8080.
Purportedly 1.1.2 should include the fix in #17157