
Kubernetes integration doesn't publish service ports to physical host #2445

Closed
dmaze opened this issue Jan 17, 2018 · 14 comments

@dmaze

commented Jan 17, 2018

Expected behavior

When I run a Kubernetes NodePort (or LoadBalancer) service, I can reach the service, probably via the host-exposed port on localhost; that is, if kubectl get services says it is a NodePort service on ports 80:31234/TCP, I can curl http://localhost:31234.

Actual behavior

The service starts fine and I can reach it from within the cluster via every expected route, but there's no way to access it from the host.

Information

  • Full output of the diagnostics from "Diagnose & Feedback" in the menu
Docker for Mac: version: 17.12.0-ce-mac45 (a61e84b8bca06b1ae6ce058cdd7beab1520ad622)
macOS: version 10.13.2 (build: 17C205)
logs: /tmp/28E2312A-556A-4863-8921-3D5C1F192311/20180116-212006.tar.gz
[OK]     db.git
[OK]     vmnetd
[OK]     dns
[OK]     driver.amd64-linux
[OK]     virtualization VT-X
[OK]     app
[OK]     moby
[OK]     system
[OK]     moby-syslog
[OK]     kubernetes
[OK]     env
[OK]     virtualization kern.hv_support
[OK]     slirp
[OK]     osxfs
[OK]     moby-console
[OK]     logs
[OK]     docker-cli
[OK]     menubar
[OK]     disk

Diagnostic ID: 28E2312A-556A-4863-8921-3D5C1F192311

  • A reproducible case if this is a bug, Dockerfiles FTW

https://gist.github.com/dmaze/7d2a0b3b8fc45d6a146b13d3aa68f7f6 is a Kubernetes YAML file:

curl -o k8s-services.yaml https://gist.githubusercontent.com/dmaze/7d2a0b3b8fc45d6a146b13d3aa68f7f6/raw/c79e8426c94185dd6b73d6660823c4341f84d119/k8s-services.yaml
kubectl apply -f k8s-services.yaml
kubectl get services
curl http://localhost:3xxxx

More details below.

Steps to reproduce the behavior

  1. Switch to Docker Edge using Homebrew:
    brew cask uninstall docker
    brew cask install docker-edge
    
    Via the OSX Launchpad, launch the edge whale, reset your world, and in the preferences window, turn on Kubernetes support.
  2. Download the gist at https://gist.github.com/dmaze/7d2a0b3b8fc45d6a146b13d3aa68f7f6
  3. Run kubectl apply -f k8s-services.yaml (or whatever you called the gist file)
  4. Run kubectl get services. You will see two services. "np" is a NodePort service; "lb" is a LoadBalancer service. Within the cluster, http://np:8181 will reach one service and http://lb:8282 the other. Both lines will have a second (host) TCP port number as well, in the low 30000's.
  5. Run kubectl get pods. You will see two matching pods, with long names. Copy and paste one of them and run e.g. kubectl describe pod lb-5958db466f-6znvm. You will see it running on some IP address, and you will see it running an HTTP daemon listening on port 8111 (for np, 8222).
  6. Run kubectl describe node docker-for-desktop. Find its InternalIP address (for me it is 192.168.56.3).
  7. Run curl http://localhost:30246, where 30246 is the second (node) port number from either service; see the snippet after these steps for one way to look that port up. This is the thing I most expect to work, but you will get a "connection refused" error.
  8. Try again with any of the pod-local, service, or host ports, on either 127.0.0.1 or the node IP address 192.168.56.3; all will fail.
  9. Run kubectl run --rm -i --tty --image busybox x, and within that shell, wget http://np:8181. Within the cluster, normal paths to reach between services work fine. Exit this shell.
  10. Run screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty to get a shell in the hidden Linux VM. Within this shell, run wget http://localhost:30246, where again this is the second port number from either service. This successfully reaches the service.
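
One way to look up the node port referenced in step 7 without reading it off the kubectl get services output (service names np and lb come from the gist; the jsonpath query is just one convenient option):

# Print the auto-assigned node port for each service from the gist
kubectl get service np -o jsonpath='{.spec.ports[0].nodePort}'; echo
kubectl get service lb -o jsonpath='{.spec.ports[0].nodePort}'; echo

# Then try it from the macOS host; per this issue, this currently fails
curl http://localhost:$(kubectl get service np -o jsonpath='{.spec.ports[0].nodePort}')
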
@cscheib

commented Jan 18, 2018

I've been having similar issues; my current workaround is to use kubectl port-forward to access services within the cluster. The "service" layer seems to be unusable from the Mac host, whether via ClusterIP, NodePort, or LoadBalancer.

The video here shows Elton using a service of type LoadBalancer; it actually grabs a LoadBalancer IP, and he is able to hit the service at said IP: https://www.youtube.com/watch?v=jWupQjdjLN0

I've tried replicating his procedure to no avail.

So, yea, for now I'm doing the port-forward workaround to hit the port on the pod itself. This workaround doesn't address how you'd hit a deployment with more than 1 pod (i.e. multiple replicas).
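
For reference, a rough sketch of that workaround (pod name and ports are illustrative placeholders; port-forward targets a single pod, which is why it doesn't help with multiple replicas):

# Find a pod backing the service, then forward a local port to it
kubectl get pods
kubectl port-forward <pod-name> 8181:8111   # local 8181 -> container port 8111

# In another terminal, the pod is now reachable on localhost
curl http://localhost:8181
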

@cscheib

commented Jan 18, 2018

more info: if you use the docker-compose format and the "docker stack" command, it appears to do some extra magic to map the NodePort to localhost.

Followed this example: https://github.com/dockersamples/k8s-wordsmith-demo

Even deploying with the k8s manifest after seems to work. I/We could just be using slightly malformed service definitions for this use case. Haven't fully compared/contrasted yet.
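
As a generic illustration of that path (this is not the wordsmith repo's exact files; the compose file and stack name below are made up), deploying a compose file as a stack with the Kubernetes orchestrator selected publishes the port on localhost:

# Minimal, illustrative compose file
cat > docker-compose.yml <<'EOF'
version: "3.3"
services:
  web:
    image: nginx:alpine
    ports:
      - "8181:80"
EOF

# Deploy it as a stack; Docker for Mac converts it into Kubernetes objects
docker stack deploy -c docker-compose.yml demo

# The published port shows up on the host
kubectl get services
curl http://localhost:8181
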

@ajeetraina

commented Jan 18, 2018

Yes @dmaze. I can reproduce this issue.
As a workaround, one needs to expose the deployment manually with the command below, as shown at https://github.com/ajeetraina/docker101/blob/master/play-with-kubernetes/README.md

kubectl expose deployment hello-world --type=NodePort --name=example-service

When deploying with docker stack deploy instead, it handles the port mapping automatically and the service can be accessed directly with curl localhost:<port>.

Link: https://github.com/ajeetraina/docker101/blob/master/play-with-kubernetes/docker-for-mac/stack-k8s.md
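
To try that workaround end to end, list the service created by the expose command above and hit its node port from the macOS host (replace 3xxxx with the port shown):

# Show the service and note the second (node) port number
kubectl get service example-service

# Then curl that node port on localhost
curl http://localhost:3xxxx
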

@cscheib

commented Jan 19, 2018

I do seem to be able to get repeatably working results with a service manifest of type NodePort, or by using the expose command that @ajeetraina suggested.

Deploying from a manifest that initially creates a service of type LoadBalancer and then changing the type to NodePort (my initial procedure) does not produce the same results... which is fine.

As long as there's a predictable way to expose the services to localhost, I'm good.
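
A minimal sketch of such a NodePort service manifest (names and ports are illustrative; pinning nodePort just makes the exposed port predictable):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - protocol: TCP
    port: 80          # service port inside the cluster
    targetPort: 8080  # container port on the backing pods
    nodePort: 30080   # fixed node port (default allowed range is 30000-32767)
EOF

# Per the comments above, the service should then answer on the pinned node port
kubectl get service web
curl http://localhost:30080
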

@pgayvallet

commented Feb 2, 2018

We are exposing to the host the port as defined in the yaml file (the port field), not the node port actually displayed as opened on the service when doing kubectl get services.

So for the https://gist.github.com/dmaze/7d2a0b3b8fc45d6a146b13d3aa68f7f6 example, ports 8181 & 8282 are opened on the host, respectively for the lb and np services.
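
To make that concrete with the gist's lb service (the jsonpath query is just one way to read the spec):

# Compare the yaml-defined port with the auto-assigned node port
kubectl get service lb -o jsonpath='{.spec.ports[0].port} {.spec.ports[0].nodePort}'; echo

# Per the comment above, it is the first number (8181 in the gist) that is
# opened on the host, not the second one
curl http://localhost:8181
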

@atombender

commented Feb 16, 2018

From what I can tell, the only way to expose ports is to use a service of type LoadBalancer, and this causes the port to be forwarded from all of the host's interfaces. That means you can't run anything on the same port on the host — you can't have Postgres running on the host and under Kubernetes if they have the same port. That is a problem.

It's also weird that Kubernetes stuff becomes available on all the host's interfaces. Surely localhost is sufficient. In fact, by binding to everything it also bypasses the Mac firewall (see issue #729), allowing anyone to connect to exposed LoadBalancer IPs. That is also a problem.

Why doesn't DFM expose the cluster IPs to the host, so that I can talk directly to, say, 10.104.205.62:5432? Is there an issue for this?
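
One way to check which interface a published port is actually bound to on the Mac side (port 5432 here is just the Postgres example from above):

# A *:5432 listener means the port is bound on all interfaces;
# 127.0.0.1:5432 would mean localhost only
sudo lsof -nP -iTCP:5432 -sTCP:LISTEN
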

@chilicheech

commented Mar 15, 2018

@pgayvallet that gist doesn't quite work that way for me. I am able to connect to the lb service on port 8181 on macOS via localhost, but not to the np service on 8282...

➜ kubectl get services
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
lb           LoadBalancer   10.96.65.180     localhost     8181:31642/TCP   1m
np           NodePort       10.100.250.201   <none>        8282:30342/TCP   1m
➜ curl localhost:8181
<html><body><h1>Hello world</h1></body></html>
➜ curl localhost:8282
curl: (7) Failed to connect to localhost port 8282: Connection refused

Looks like the port defined in the yaml for the np service is not being exposed. The node port displayed as opened by the service is exposed, though:

➜ curl localhost:30342
<html><body><h1>Hello world</h1></body></html>

Why can't I connect to np port 8282 on localhost?

@sercheo87

commented May 26, 2018

Execute in a terminal:

$ kubectl port-forward my-pods-name 8181

and then curl localhost:8181 succeeds.

@jwatte

commented Oct 23, 2018

kubectl port-forward is not a robust way of setting up a local sandbox/development cluster, because the forwarding doesn't survive closing the terminal, restarting the machine, etc.

When running plain containers with docker, the -p port binding mechanism works fine; I would expect NodePort to do the same thing in Kubernetes-on-Docker, but it doesn't seem to work. Ideally, though, there would be routed networking between service IPs and the localhost interface, just like there is on other OSes.
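
For comparison, the plain-docker behavior referred to here (image and ports are illustrative):

# Publish container port 80 on host port 8181; this works as expected on Docker for Mac
docker run -d --name web -p 8181:80 nginx:alpine
curl http://localhost:8181

# Cleanup
docker rm -f web
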

@jandubois

commented Nov 16, 2018

@pgayvallet I don't understand why this issue is closed.

Both the mapping to the wrong internal port number and the mapping to all interfaces instead of just localhost feel like bugs to me.

@jiayihu

commented Mar 16, 2019

I strongly agree with @jandubois; it took me 2 hours to find this issue and the workaround. So far, the experience with Docker Desktop as a newcomer has been awful.

@MrBuddyCasino

commented Jun 11, 2019

Same here. Back to Minikube.

@sagneta

commented Jun 21, 2019

I realize this is closed but I still hit the issue. I noticed that for some reason the service came up with type ClusterIP; you want NodePort. Change the service to use that in the yaml. I used the Kubernetes dashboard web app to do it. With ClusterIP it is obviously available only within the cluster; NodePort will proxy to the outside world. This issue can happen if you deploy without a yaml file, as I did.
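
Besides the dashboard, the service type can also be switched from the command line, for example (service name is illustrative):

# Change an existing service from ClusterIP to NodePort
kubectl patch service my-service -p '{"spec":{"type":"NodePort"}}'

# Or edit the manifest interactively and change spec.type
kubectl edit service my-service
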

@alextanhongpin

commented Jun 22, 2019

I encountered this issue recently, and surprisingly it works on one machine but not on another. Here's what I observed on my working machine:

  1. It doesn't have minikube installed, so the only context is docker-for-desktop. So I uninstalled minikube on the other machine using the steps in kubernetes/minikube#1043 (comment).
  2. That alone didn't fix it, but at least minikube was cleared out. The other observation is that on the machine that is not working, the status is permanently stuck at "Kubernetes is starting". I tried to resolve that through #2990. The only solution that worked for me was to reset to factory defaults, since the restart, reset Kubernetes cluster, and reset disk image options did not work.

The status shown now is "Kubernetes is running". I tried deploying my service and I can now call it. This is what my service definition looks like:

apiVersion: v1
kind: Service
metadata:
  name: go-server-service
spec:
  type: NodePort
  ports:
  - protocol: TCP
    port: 8080
  selector:
    app: go-server

Here's what I see with the command k get svc (note that I use the alias k=kubectl):

$ k get svc
NAME                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
go-server-service   NodePort    10.99.164.31   <none>        8080:31422/TCP   8m
kubernetes          ClusterIP   10.96.0.1      <none>        443/TCP          11m

Here's the output when I make a curl:

$ curl localhost:31422
{"message": "hello world"}