Port-forwarding not working due to missing socat command #19765

Closed
pwais opened this issue Jan 16, 2016 · 33 comments
Labels
area/os/ubuntu kind/support Categorizes issue or PR as a support question. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. priority/backlog Higher priority than priority/awaiting-more-evidence. sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery.

Comments

@pwais

pwais commented Jan 16, 2016

I think #17157 might still be broken. I'm using the Spark example with the Ubuntu provider. I'm forwarding 8081 below because I'm running the command on my k8s master, which is already using 8080.

$ which socat
/usr/bin/socat

$ kubectl port-forward zeppelin-controller-oea9w 8081:8080
I0116 11:51:55.986614   22704 portforward.go:213] Forwarding from 127.0.0.1:8081 -> 8080
I0116 11:51:55.986782   22704 portforward.go:213] Forwarding from [::1]:8081 -> 8080
I0116 11:52:03.982497   22704 portforward.go:247] Handling connection for 8081
E0116 11:52:04.116196   22704 portforward.go:318] an error occurred forwarding 8081 -> 8080: error forwarding port 8080 to pod zeppelin-controller-oea9w_default, uid : unable to do port forwarding: socat not found.

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.2", GitCommit:"3085895b8a70a3d985e9320a098e74f545546171", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"1", GitVersion:"v1.1.2", GitCommit:"3085895b8a70a3d985e9320a098e74f545546171", GitTreeState:"clean"}

Purportedly, 1.1.2 should include the fix from #17157.

@alex-mohr alex-mohr added the area/release-eng Issues or PRs related to the Release Engineering subproject label Jan 21, 2016
@pwais
Author

pwais commented Jan 23, 2016

I tried k8s 1.1.4 and the problem remains. @alex-mohr, where exactly is socat being called? How can I fix this?

@bgrant0607
Member

cc @ncdc

@bgrant0607
Member

@pwais socat needs to be installed on the nodes:

// PortForward executes socat in the pod's network namespace and copies
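
For the Ubuntu provider, a minimal sketch of what that looks like on each node (an illustration, assuming apt-based nodes and that /usr/bin is on the kubelet's PATH; not an official install step):

# Run on every node that hosts pods, not only on the machine running kubectl
sudo apt-get update
sudo apt-get install -y socat
which socat    # expect /usr/bin/socat on the node itself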

@bgrant0607 bgrant0607 added area/os/ubuntu and removed area/release-eng Issues or PRs related to the Release Engineering subproject labels Feb 2, 2016
@gramic

gramic commented Feb 2, 2016

I am using non-containerized k8s via ./hack/local-up-cluster.sh, and after installing socat the error changed to "uid : exit status 1":

$ kubectl port-forward jenkins-v14-hs1sf 8123:80
I0202 16:16:46.875324   18591 portforward.go:213] Forwarding from 127.0.0.1:8123 -> 80
I0202 16:16:46.875488   18591 portforward.go:213] Forwarding from [::1]:8123 -> 80
I0202 16:16:57.346917   18591 portforward.go:247] Handling connection for 8123
I0202 16:16:57.348187   18591 portforward.go:247] Handling connection for 8123
E0202 16:16:57.389108   18591 portforward.go:318] an error occurred forwarding 8123 -> 80: error forwarding port 80 to pod jenkins-v14-hs1sf_default, uid : exit status 1
E0202 16:16:57.397791   18591 portforward.go:318] an error occurred forwarding 8123 -> 80: error forwarding port 80 to pod jenkins-v14-hs1sf_default, uid : exit status 1
I0202 16:16:57.398114   18591 portforward.go:247] Handling connection for 8123
E0202 16:16:57.425331   18591 portforward.go:318] an error occurred forwarding 8123 -> 80: error forwarding port 80 to pod jenkins-v14-hs1sf_default, uid : exit status 1

@bgrant0607 bgrant0607 added priority/backlog Higher priority than priority/awaiting-more-evidence. kind/support Categorizes issue or PR as a support question. sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. labels Feb 12, 2016
@apple-corps

Hello, I'm running the Spark example locally on the 1.2.2 release. After installing socat, I was able to connect to the zeppelin-controller dashboard.

kubectl port-forward zeppelin-controller-0go00 8081:8080
I0414 20:48:18.156878 2293 portforward.go:213] Forwarding from 127.0.0.1:8081 -> 8080
I0414 20:48:18.156952 2293 portforward.go:213] Forwarding from [::1]:8081 -> 8080
I0414 20:48:29.002317 2293 portforward.go:247] Handling connection for 8081
I0414 20:48:29.003358 2293 portforward.go:247] Handling connection for 8081
I0414 20:48:29.460353 2293 portforward.go:247] Handling connection for 8081
I0414 20:48:29.461996 2293 portforward.go:247] Handling connection for 8081
I0414 20:48:29.464332 2293 portforward.go:247] Handling connection for 8081
I0414 20:48:30.359611 2293 portforward.go:247] Handling connection for 8081

I'm trying to start a sig-local interest group. Since some of you are running Kubernetes locally, would you consider joining the kubernetes-sig-local Google group?

@munnerz
Member

munnerz commented Apr 14, 2016

I'm seeing this issue running kubectl 1.2.0 on a GKE cluster running 1.2.0. Port forwarding into a pod gives:

I0414 23:20:53.731580   71324 portforward.go:213] Forwarding from 127.0.0.1:53451 -> 80
I0414 23:20:53.731787   71324 portforward.go:213] Forwarding from [::1]:53451 -> 80
I0414 23:20:58.379950   71324 portforward.go:247] Handling connection for 53451
I0414 23:20:58.381856   71324 portforward.go:247] Handling connection for 53451
E0414 23:20:58.548161   71324 portforward.go:318] an error occurred forwarding 53451 -> 80: error forwarding port 80 to pod acme-3541642747-yuad5_default, uid : unable to do port forwarding: socat not found.
E0414 23:20:58.553493   71324 portforward.go:318] an error occurred forwarding 53451 -> 80: error forwarding port 80 to pod acme-3541642747-yuad5_default, uid : unable to do port forwarding: socat not found.
I0414 23:20:58.554010   71324 portforward.go:247] Handling connection for 53451
E0414 23:20:58.678159   71324 portforward.go:318] an error occurred forwarding 53451 -> 80: error forwarding port 80 to pod acme-3541642747-yuad5_default, uid : unable to do port forwarding: socat not found.

@cmpmohan

I'm facing the same issue. Where do we need to install socat? On the Kubernetes nodes?

@ncdc
Member

ncdc commented May 23, 2016

@cmpmohan yes, socat is needed on the nodes for port forwarding to work.
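
If you want to confirm which nodes are missing it, a rough sketch assuming you can SSH to each node by name (the SSH user and node reachability are assumptions):

# Check every node for socat
for node in $(kubectl get nodes -o name | cut -d/ -f2); do
  echo "== $node =="
  ssh "$node" 'command -v socat || echo "socat missing"'
done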

@dims
Member

dims commented May 24, 2016

FYI, there's a list being compiled here ("Identify, document and maybe remove kubelet dependencies"):
#26093

@cmpmohan

@ncdc thanks for your response. We have an alternative way to access the Zeppelin web portal, i.e. we access it via the Kubernetes minion IP and port, and it's working.

@jimmycuadra
Contributor

This is a problem on CoreOS, where the nodes don't have socat. Is there a way to use socat packaged as a Docker image? On CoreOS, programs are generally not installed on the host itself.

@ncdc
Member

ncdc commented Jun 8, 2016

@smarterclayton @kubernetes/sig-node @kubernetes/rh-cluster-infra what do you all think about either switching to running socat from an image or at least making it an option?
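
To illustrate the general idea (purely a sketch; the image name and container lookup are hypothetical, and this is not how the kubelet does it today): socat would run from a container that joins the pod's network namespace, e.g. the pause container's, rather than from the host:

# Hypothetical: run socat from an image inside the pod's network namespace
PAUSE_ID=$(docker ps --filter "name=k8s_POD_mypod" --format '{{.ID}}' | head -n1)
docker run --rm -i --net=container:"$PAUSE_ID" example/socat \
  socat - TCP4:localhost:8080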

@derekwaynecarr
Member

@ncdc - we need to determine whether we want to charge usage of socat to the system or to the pod if we run it from an image. I think after we have pod-level cgroups, it's worth revisiting this option.

@ncdc
Member

ncdc commented Jun 8, 2016

+1 to charging to the pod if we can make that happen

@smarterclayton
Contributor

Originally I had assumed a good place to put socat and other necessary utilities would be in the pod infrastructure image.


@derekwaynecarr
Member

@smarterclayton - that is a really interesting idea ;-)

@pires
Contributor

pires commented Jul 12, 2016

@philips this used to work on CoreOS. What happened?

@euank
Contributor

euank commented Jul 12, 2016

If you run the kubelet via kubelet-wrapper on CoreOS, the dependencies (like socat) are included in the rkt fly container the kubelet is run in.

CoreOS issue: coreos/bugs#1114

/cc @crawford @aaronlevy

@aaronlevy
Contributor

@jimmycuadra this should work on CoreOS when running the kubelet as a container via the kubelet-wrapper (as @euank mentioned).

See: https://coreos.com/kubernetes/docs/latest/kubelet-wrapper.html

Also agree that, longer-term, something along the lines of a pod infrastructure image would help with required utilities (socat, nsenter), mount tools (nfs, ceph, git), and network plugins (flannel, calico, weave).
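
For reference, an abridged sketch of a kubelet unit using kubelet-wrapper (loosely based on the docs linked above; the KUBELET_VERSION variable, wrapper path, and flags shown are assumptions that varied between CoreOS releases):

# /etc/systemd/system/kubelet.service (abridged sketch)
# [Service]
# Environment=KUBELET_VERSION=v1.3.0_coreos.0
# ExecStart=/usr/lib/coreos/kubelet-wrapper \
#   --kubeconfig=/etc/kubernetes/kubeconfig \
#   ... (remaining kubelet flags)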

@jimmycuadra
Contributor

Fair points @crawford, and thanks for the additional context. As it applies to this issue, it still seems like the official kubelet release needs a better story around packaging its dependencies in some way. There may be more tweaks that CoreOS has made to the distro-provided kubelet image, but the changes we're talking about would benefit any system with similar properties, and as stable as these networking tools are, it's probably a good idea to lock down specific versions of them that have been shown to work with the kubelet in e2e tests.

@ncdc
Member

ncdc commented Jul 21, 2016

One option for socat in particular (which would likely apply to most/all other external kubelet executable dependencies) is to add it to the pause container. I'm actually going to look at doing that as a part of #25113

@euank
Contributor

euank commented Jul 21, 2016

I'd like to provide a little more context to your comments about kubelet, dependencies, and the CoreOS kubelet-wrapper, @jimmycuadra.

... relies on a separate distribution of kubelet itself on CoreOS's quay.io account

First of all, the work of packaging dependencies with the kubelet is being done with the upstream Kubernetes hyperkube image.

It is true that we have our own package/distribution of this, but it's based on the same upstream work and we're working to unify them further so that we don't have to carry any patches on top of that hyperkube image. Right now I believe that if you make only small modifications to the kubelet wrapper (and add a symlink to the hyperkube image), you can easily use your own build of that upstream image.

Being able to decouple our release has helped us by allowing us to validate it works with CoreOS (testing in that environment specifically) and carrying patches as needed to improve that experience, but the long-term goal here is not to carry our own fork but to make the upstream image work on CoreOS (and everywhere).

I think the best way forward is for the official kubelet image to either package system-level dependencies along with itself

That is what the above image is working towards and already mostly accomplishes! We'll continue to try and make it better.

There are additional complexities, like the fact that there are N ways to run the kubelet, which is why I agree that shipping socat in the pause image can be worthwhile: it solves the problem a layer up. But for CoreOS, the way we think it should be solved is exactly that, an official kubelet image which encapsulates its dependencies appropriately.

@jimmycuadra
Contributor

Sounds like we're all on the same page then. :} Thanks for the details, @euank!

@xynova

xynova commented Oct 26, 2016

@jimmycuadra This is what I did to get socat onto CoreOS and port forwarding working (not sure if it helps at this stage).



# Make socat directories
mkdir -p /opt/bin/socat.d/bin /opt/bin/socat.d/lib

# Create socat wrapper
cat << EOF > /opt/bin/socat
#! /bin/bash
PATH=/usr/bin:/bin:/usr/sbin:/sbin:/opt/bin
LD_LIBRARY_PATH=/opt/bin/socat.d/lib:\$LD_LIBRARY_PATH exec /opt/bin/socat.d/bin/socat "\$@"
EOF

chmod +x /opt/bin/socat

# Get socat and its libraries from the CoreOS toolbox (the dnf and cp commands below run inside the toolbox shell)
toolbox
dnf install -y socat
cp -f /usr/bin/socat /media/root/opt/bin/socat.d/bin/socat
cp -f /usr/lib64/libssl.so.1.0.2h /media/root/opt/bin/socat.d/lib/libssl.so.10
cp -f /usr/lib64/libcrypto.so.1.0.2h /media/root/opt/bin/socat.d/lib/libcrypto.so.10

Then tell the kubelet to consider binaries in the /opt/bin directory (maybe pointing it at a socat-specific, isolated directory would be better):

# /etc/systemd/system/kubelet.service

# [Unit]
# Description=Kubernetes Kubelet Master
# Documentation=http://kubernetes.io/docs/admin/kubelet

# [Service]
# ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
# ExecStartPre=/usr/bin/mkdir -p /etc/cni/net.d
# ExecStartPre=/usr/bin/mkdir -p /opt/cni/bin
# Environment="PATH=/opt/bin/:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
# ...
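
After editing the unit, something along these lines picks up the change (assuming the unit is named kubelet.service as above):

sudo systemctl daemon-reload
sudo systemctl restart kubelet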

@philips
Contributor

philips commented Oct 28, 2016

@xynova please please please use kubelet-wrapper https://coreos.com/kubernetes/docs/latest/kubelet-wrapper.html

@xynova

xynova commented Oct 28, 2016

I am totally in favour of the idea, @philips, but I have a couple of inconveniences with it at the moment:

  • I'm in Australia and unfortunately connectivity to quay.io is a bit flaky with some internet providers (for some reason I sometimes get timeouts or it is very slow).
  • It would be nice if the wrapper could download only the kubelet instead of the hyperkube. Again, internet speeds are pretty poor around here, and a few MB less would help a lot.
  • It would also be nice for those ACIs to be available on GCE, which gives pretty good download speeds.

@pires
Contributor

pires commented Oct 28, 2016

Quay.io should move to GCP. However, I agree with having one (optional) image per component.

Amit-PivotalLabs pushed a commit to Amit-PivotalLabs/kubo-release that referenced this issue Jul 3, 2017
This is necessary to make 'kubectl port-forward' work.  Details:
  kubernetes/kubernetes#19765 (comment)

The socat blob will need to be pulled into the 'blobs' directory of
'kubo-release' before running 'bosh upload-blobs'.  This can be done like so:

$ git clone git@github.com:cloudfoundry-incubator/haproxy-boshrelease.git \
    /tmp/haproxy-boshrelease
$ bosh sync-blobs --dir /tmp/haproxy-boshrelease
$ bosh add-blob --dir <PATH_TO_KUBO_RELEASE> \
    /tmp/haproxy-boshrelease/blobs/haproxy/socat-1.7.3.2.tar.gz \
    socat-1.7.3.2.tar.gz
@fejta-bot

Issues go stale after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 18, 2017
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 17, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
