
Docker swarm mode - published ports are not exposed #26817

Closed
JettJones opened this issue Sep 22, 2016 · 35 comments

@JettJones

Description

When publishing ports in swarm mode, the container ports are not marked as exposed. This is unexpected, because similar commands in docker-compose or docker run do expose the related ports.

This issue may affect any existing Docker workflow moving to swarm mode, whether via docker-compose bundle or manually.

I saw this in a larger setup, using consul/registrator for logstash service discovery, when moving a local (docker-compose.yml) setup to a cloud provider using swarm mode. Registrator uses exposed port mappings when reporting service configuration, so this issue results in missing routes.

logstash is an interesting example because:

  1. It doesn't expose any ports by default.
  2. The ports it will use are specified by configuration.
  3. It's a docker library image.

Steps to reproduce the issue:
0. (this reproduction is running on a single node swarm-mode cluster)

  1. docker service create -p "5799:5799" logstash logstash --verbose "input { http { port => 5799 } }"
  2. Look up the container ID with docker ps | grep logstash
  3. docker inspect -f "{{.HostConfig.PortMappings}}" {container_id}

Describe the results you received:

map[]

Describe the results you expected:
Compare this to

  1. docker run -d --name run-logstash -p "5799:5799" logstash --verbose "input { http { port => 5799 } }"
  2. docker inspect -f "{{.HostConfig.PortMappings}}" run-logstash
map[5799/tcp:[{ 5799}]]

I expect the exposed port to be visible on the container.

Output of docker version:

>docker --version
Docker version 1.12.0, build 8eab29e

Output of docker info:

>docker info
Containers: 3
 Running: 2
 Paused: 0
 Stopped: 1
Images: 27
Server Version: 1.12.1
Storage Driver: aufs
 Root Dir: /mnt/sda1/var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 59
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge overlay host null
Swarm: active
 NodeID: ez8xid0lsy0v0zy4uuqlok84x
 Is Manager: true
 ClusterID: dqaihqex33qmuvtqsacpwt9jz
 Managers: 1
 Nodes: 1
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot interval: 10000
  Heartbeat tick: 1
  Election tick: 3
 Dispatcher:
  Heartbeat period: 5 seconds
 CA configuration:
  Expiry duration: 3 months
 Node Address: 192.168.99.100
Runtimes: runc
Default Runtime: runc
Security Options: seccomp
Kernel Version: 4.4.17-boot2docker
Operating System: Boot2Docker 1.12.1 (TCL 7.2); HEAD : ef7d0b4 - Thu Aug 18 21:18:06 UTC 2016
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 995.9 MiB
Name: vbox
ID: 6NC5:OVFX:HZU6:NKW5:7N52:35ZR:KXZ2:MNIF:DFUB:4KRT:FBHL:E6PZ
Docker Root Dir: /mnt/sda1/var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: 44
 Goroutines: 139
 System Time: 2016-09-22T13:30:25.020549288Z
 EventsListeners: 1
Registry: https://index.docker.io/v1/
Labels:
 provider=virtualbox
Insecure Registries:
 127.0.0.0/8

Additional environment details (AWS, VirtualBox, physical, etc.):
Reproduced locally, running VirtualBox on Windows.

@thaJeztah
Member

This is expected; port publishing for services works differently than for "regular" containers.

When creating a service, the ports of the containers backing the service are not published directly; they go through the built-in load balancing for Swarm mode.

Inspecting individual containers therefore doesn't show published ports, but inspecting the service will show the ports that are published:

docker service inspect --pretty web

ID:     7flqz62cbevmyy0tekcv1xve7
Name:       web
Mode:       Replicated
 Replicas:  3
Placement:
UpdateConfig:
 Parallelism:   1
 On failure:    pause
ContainerSpec:
 Image:     nginx:alpine
Resources:
Ports:
 Protocol = tcp
 TargetPort = 80
 PublishedPort = 80

Also see https://docs.docker.com/engine/swarm/ingress/#/publish-a-port-for-a-service
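
For scripting, the same information can be read from the service's endpoint. A minimal sketch (assuming the web service above):

    docker service inspect -f '{{json .Endpoint.Ports}}' web
    # e.g. [{"Protocol":"tcp","TargetPort":80,"PublishedPort":80}]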

Note that docker-compose does not support creating services in Swarm mode

@thaJeztah
Member

I'll close this issue because this works as expected, but feel free to continue the discussion.

@JettJones
Author

I think there's enough inconsistency here that this is still a bug. My issue title probably goes too far in assuming the solution; what if we called it 'swarm mode needs an option to expose container ports'?

Let's take this step by step; let me know if one of these has a bad assumption:

  • If the image does not expose port 8080, after running service create --publish 8080:8080 the swarm routing layer cannot reach the container(s). (See References below.)
  • In docker-compose and docker run, an identical --publish 8080:8080 will expose that port, and the container will start and route successfully.
  • Fine; when I control the Dockerfile, I can work around this locally by adding an EXPOSE instruction to the Dockerfile.
  • However, I used logstash in my example to show that there are images provided by Docker which do not work in swarm mode, because swarm mode lacks functionality to specify which ports to expose.
  • Fine; I can work around that too, by making a new Dockerfile from logstash just for exposing the port.
  • However, I will need a new image for every port I want to expose, and I will need to provide hosting for all those images.

Hopefully that makes clear why the missing expose information is a pain.

If you're with me that this is an issue, what could be done about it?

  • Maybe I missed something in my reading of service create - feedback welcome if there's already an option to expose ports.
  • Have swarm mode --publish always expose the container port (this request, originally).
  • Add new command-line flags to service create and service update to expose ports.
  • If no change is made to swarm mode, images like logstash will need to be updated in docker/library to add swarm support, or documentation should be added to explain the expected path for making new images for each variant of exposed port.

Note that docker-compose does not support creating services in Swarm mode

I believe bundle is specifically for creating swarm services (https://docs.docker.com/compose/bundles/) - or is that functionality going away?

References

If the image does not expose port 8080, after running service create --publish 8080:8080 the swarm routing layer cannot reach the container(s).

Here's the example I was using locally to show this, a basic web service showing what happens if no port is exposed.

Dockerfile:

FROM node
RUN npm install -g http-server
WORKDIR /shared
CMD http-server
> docker build -t test .
> docker service create -p "8080:8080" --name node test
> docker run -d -p "8080:8080" --name node-run test

output showing repeated restarts:

> docker service ps node
ID                         NAME         IMAGE  NODE     DESIRED STATE  CURRENT STATE          ERROR
bc8qeyi797g30m0x9khbz9mab  node.1      node   default  Ready          Ready 15 hours ago
8duydqecsy2w24clx4hs0q4dw   \_ node.1  node   default  Shutdown       Complete 15 hours ago
7llyijfih9avkm3aycj5ddoyi   \_ node.1  node   default  Shutdown       Complete 15 hours ago
bpaanxxzpvoubl4yvyc7w30p2   \_ node.1  node   default  Shutdown       Complete 15 hours ago
4tmiolp250z13g0zqbkq5n7z7   \_ node.1  node   default  Shutdown       Complete 15 hours ago

Adding EXPOSE 8080 to the Dockerfile makes service create succeed.
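
For completeness, the workaround described above amounts to a one-line derived image (a sketch; the port is just an example):

    FROM logstash
    EXPOSE 5799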

@thaJeztah
Member

thaJeztah commented Sep 22, 2016

EXPOSE 8080 is only for "introspection" of the container; it does not publish ports (see https://docs.docker.com/engine/reference/builder/#/expose). So even without EXPOSE in the Dockerfile, a container can listen on any port it wants.
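
A quick way to see this (a sketch; the python image is used purely as an example, and its Dockerfile does not EXPOSE 9999):

    docker run -d -p 9999:9999 python:3 python -m http.server 9999
    curl -s http://127.0.0.1:9999/    # served, despite no EXPOSE in the image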

docker service create --publish 8080:8080 in practice works similarly to docker run --publish 8080:8080: port 8080 of the container is published at port 8080 on the host. The difference between docker run and docker service is that the individual containers backing the service are not directly exposed, but go through the Swarm routing mesh / load balancer.

What issue are you running into with your example Dockerfile? If I do:

docker build -t repro .

docker service create --name web --publish 8080:8080 repro

That looks to work:

[screenshot: 2016-09-22 19:58]

@JettJones
Author

Thanks for trying the repro - looks like it was my mistake when setting that up this morning. Running again after rmi and a rebuild, the node service comes up as expected in swarm mode. I never saw an error from docker service inspect, docker service ps, or docker logs on the containers that spawned every 5 seconds.

Since service create connectivity is working, that greatly reduces the scope of this problem. Still broken is the consumer of the exposed-port information in my setup: registrator. In theory, any service that relies on the exposed port (or introspection detail, as you say) would be similarly affected.

Also, I see that I chose the wrong filter in the inspect command originally, which may be contributing to the confusion here. What I meant was:

docker inspect -f "{{.Config.ExposedPorts}}" {service_id}

This command shows exposed ports when the image exposes them, or when additional ports are exposed at start time by docker run or docker-compose. The fact that service create does not add the same record is tripping up registrator in my cluster.

And to reiterate - since service create does not provide a way to set exposed ports, I still have to make and push a copy of images that need ports exposed, like logstash as a work-around.
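
To make the difference concrete, a minimal sketch (the output comments assume an image with no EXPOSE instruction, like the repro above):

    docker run -d -p 8080:8080 --name web-run repro
    docker inspect -f '{{.Config.ExposedPorts}}' web-run
    # map[8080/tcp:{}] - docker run records the published port as exposed

    docker service create --name web -p 8080:8080 repro
    docker inspect -f '{{.Config.ExposedPorts}}' $(docker ps -q --filter name=web.1)
    # map[] - the service's container carries no such record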

@thaJeztah
Member

@JettJones I don't think there's a need to create variations of your image, as registrator allows you to set overrides through labels or env-vars: http://gliderlabs.com/registrator/latest/user/services/#container-overrides. So if you need information about the individual containers, use --container-label; to set the labels on the service, use --label.
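
For illustration, a sketch of the two flags (the label values here are made up):

    # label each container backing the service (what registrator inspects):
    docker service create --name web -p 8080:8080 --container-label SERVICE_NAME=web repro

    # label the service object itself:
    docker service create --name web -p 8080:8080 --label com.example.port=8080 repro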

@JettJones
Author

JettJones commented Sep 23, 2016

From the registrator docs:

The fields Name, Tags, Attrs, and ID can be overridden...

So I don't believe the IP or Port can be overridden by environment values or labels in the same way.

I tried the following variations:

    docker service create --name web -p 8080:8080 \
        -e SERVICE_NAME=web \
        -e SERVICE_8080_NAME=web80 \
        -e SERVICE_PORT=8080 \
        repro

To which registrator replied:

2016/09/22 08:40:07 ignored: 1280f5469f0e no published ports

Maybe I'm misunderstanding your suggestion though.

@JettJones
Author

I looked into registrator some more, and found a bug from 2014 requesting this feature (Docker instances without ports). That further convinces me that registrator does not support registering instances that do not have ports exposed.

@vetional

I am facing a similar problem with Docker 1.13.1. When I use docker service:

docker service create --name spark-master \
--constraint 'node.hostname==master' \
--publish 8080:8080 --publish 7077:7077 --publish 6066:6066 \
gettyimages/spark

the ports don't get published, but when I use docker run:

docker run -d --name spark-master \
-p 8080:8080 -p 7077:7077 -p 6066:6066 \
gettyimages/spark

The ports are published as expected.
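
A quick check worth running here (a sketch, assuming the service was created as above):

    docker service inspect -f '{{json .Endpoint.Ports}}' spark-master   # what swarm recorded
    curl -sI http://127.0.0.1:8080/                                     # Spark UI via the routing mesh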

@thaJeztah
Member

thaJeztah commented Feb 27, 2017

@vetional with "the ports don't get published", do you mean they don't show up if you look at docker ps? If so, that's expected, because container ports are not directly published when using services. They should show up in docker service inspect <name of service>

See #26817 (comment)

@vetional

@thaJeztah yeah, you've got it right, but I also can't connect to the exposed ports.

@svscorp

svscorp commented Sep 18, 2017

Same issue here, I believe. Swarm mode. Docker version: Docker version 17.06.2-ce, build cec0b72

@vetional how are you trying to connect to the service? What I noticed: if I do `wget localhost:8080` on my host machine (i.e. the Swarm manager) where I deployed a service publishing port 8080 to 8080, it doesn't work. It gets stuck at "Connected... Request sent... Awaiting response".

When I do `wget 127.0.0.1:8080` it works. Maybe it's related.

@svscorp

svscorp commented Sep 18, 2017

@thaJeztah I wonder, is this still an issue, or am I misunderstanding something?

@thaJeztah
Member

If localhost doesn't work but 127.0.0.1 does, then it's likely wget attempts to connect over IPv6 instead of IPv4. Try wget -4 localhost:8080
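
To confirm that diagnosis, a sketch forcing each address family:

    wget -4 -O- http://localhost:8080    # IPv4 only
    wget -6 -O- http://localhost:8080    # IPv6 only; a hang here points at the v6 path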

@vetional

@svscorp in my case I was deploying a Spark cluster. I wasn't using localhost or 127.0.0.1; I was using the public IP of the master. None of the worker nodes could reach the master. This used to work in the 1.12.x versions but stopped working in later versions.

@thaJeztah
Member

@vetional is the service attached to a custom network? Early versions of Docker with swarm mode allowed communication between services over the "ingress" network; this was an oversight, as it breaks the "sandboxing" of services. It was later changed so that services can only communicate with each other when attached to the same custom network.

@vetional

@thaJeztah as I recall the service wasn't attached to a custom network.

@thaJeztah
Member

That could explain what you're seeing

@eromoe

eromoe commented Dec 26, 2017

Is there any example of publishing ports for a service in docker-compose.yml?

I'd like to use docker stack deploy --compose-file docker-compose-swarm.yml test, but I don't see how to pin a service to a particular node or how to publish ports.
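
For reference, a minimal sketch of a version-3 compose file for docker stack deploy (the service name, image, port numbers, and node hostname here are only examples):

    version: "3"
    services:
      web:
        image: nginx:alpine
        ports:
          - "8080:80"                      # published:target, goes through the routing mesh
        deploy:
          placement:
            constraints:
              - node.hostname == my-node   # pin the task to a specific node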

@riker09

riker09 commented Jan 16, 2018

Yeah, I would be interested in using Registrator together with Docker swarm mode as well. Does anybody have a working solution for that? Maybe there are alternatives to Registrator that I'm not aware of?

@ilovemath

ilovemath commented Feb 4, 2018

@thaJeztah
What you said is right:

with "the ports don't get published", do you mean they don't show up if you look at docker ps ? If so, that's expected, because container ports are not directly published when using services. They should show up in docker service inspect 'name of service'

but I still cannot find the exposed port when I use netstat -ant | grep <port>, nor can I curl localhost:<port>.

@thaJeztah
Member

@ilovemath if you suspect there's a bug, please open a new issue instead, and provide the information that's requested in the issue template

@xiispace

I am facing a similar problem; I can't find the listening port using netstat -nlt. Then I restarted the Docker daemon and created the service again, and it works.

@eripa

eripa commented Apr 13, 2018

I also experience this with a Docker swarm mode enabled host running "regular" containers. Sometimes, after restarting/recreating the containers, the published port doesn't get mapped properly. I usually have to resort to rebooting, as restarting the docker daemon doesn't seem to help.

@wjma90

wjma90 commented Nov 29, 2018

I solved it by using an earlier version of "boot2docker". Apparently version 18.09 has problems.

docker-machine create -d hyperv --hyperv-virtual-switch "myswitch" --hyperv-boot2docker-url=https://github.com/boot2docker/boot2docker/releases/download/v18.05.0-ce/boot2docker.iso myvm1

I have tested this solution with the virtualbox driver (on Mac) and with hyperv (obviously Windows).

@deevodavis71

I can confirm this as well: using 18.09 on Docker for Mac, I couldn't connect to the services using the published ports. When I downgraded the swarm to 18.05, it all worked as expected.

@thaJeztah
Member

If you're using boot2docker, this is likely due to boot2docker/boot2docker#1349

@danielcranford

If localhost doesn't work but 127.0.0.1 does, then it's likely wget attempts to connect over IPv6 instead of IPv4. Try wget -4 localhost:8080

@thaJeztah, why would IPv6 versus IPv4 matter? Using 127.0.0.1 does resolve the issue for me, but a look at the netstat output shows a listening tcp6 socket on the port in question (443 in my case).

$ sudo netstat -pant | grep :443.*LISTEN
tcp6      15      0 :::443                  :::*                    LISTEN      3866/dockerd

If I start a container via docker run or docker-compose, any of the following work:

  • https://localhost
  • https://127.0.0.1
  • https://[::1]

Moreover, the netstat output is the same (a tcp6 listening socket). So what is unique about docker stack that causes it to reject IPv6 connections?

@ukreddy-erwin

Port redirection is not working with docker swarm.

Working:
docker service create --name postgres1 --publish 5432:5432 uday1kiran/postgres:9.6.10

Not working:
docker service create --name postgres1 --publish 9000:5432 uday1kiran/postgres:9.6.10
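
One way to narrow that down (a sketch, assuming the service above; nc is just an example client):

    docker service inspect -f '{{json .Endpoint.Ports}}' postgres1   # what swarm recorded
    nc -zv 127.0.0.1 9000                                            # test the published side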

@ttutko

ttutko commented Apr 21, 2020

I was struggling with this as well. As soon as I commented out the following line from my /etc/hosts file, everything started working:
::1 localhost ip6-localhost ip6-loopback

I did the same on all of my swarm nodes, though on the others the lines were slightly different:
::1 ip6-localhost ip6-loopback

I don't know WHY this worked or affected things in the first place, so if someone can explain it, that would be appreciated. Obviously it's related to IPv6, which I should probably look into disabling altogether, but shouldn't the port have been published on both IPv4 and IPv6?
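
One way to see why that hosts line matters (a sketch): check what localhost resolves to first, since clients generally prefer an IPv6 answer when one exists.

    getent ahosts localhost | head -n 3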

@MisderGAO

I have the same problem on CentOS, even when the firewall is inactive. It can be reproduced by the following steps:

  1. Install Docker (version 19.03.05).
  2. docker swarm init
  3. docker network create -d overlay my-overlay && docker service create --name my-nginx --network my-overlay --replicas 1 --publish published=8080,target=80 nginx:alpine
  4. curl 127.0.0.1:8080

I have tested 3 CentOS VMs; none of them works.
However, it works on Debian 9 and Ubuntu 16.04.

@MisderGAO

The problem was resolved by updating the Linux kernel from 3.10.0-1127 to 4.4.227 on CentOS.
Besides, if curl localhost doesn't work, try curl 127.0.0.1.
Hope that will be helpful.

@jimbo8098

That's a whole major kernel release behind, and there are still newer versions out there. I'm not too surprised it didn't work before 😆

@ypzhuang

All the services in my docker swarm cluster worked fine for a long time, but one day I found I could not access some services via their exposed ports. I just scaled the service with docker service scale {service-name}={number} and everything worked again.

@ethaniel

I'm experiencing the same bug on Raspberry Pi.
I have a pihole service which I deploy via swarm:


sudo docker service create \
  --name pihole \
  --mode global \
  --publish published=80,target=80,mode=host,protocol=tcp \
  --publish published=53,target=53,mode=host,protocol=tcp \
  --publish published=53,target=53,mode=host,protocol=udp \
  -e TZ=Asia/Bangkok \
  -e WEBPASSWORD=admin \
  --mount type=volume,src=pihole_app,dst=/etc/pihole \
  --mount type=volume,src=dns_config,dst=/etc/dnsmasq.d \
  --log-driver journald \
  --with-registry-auth \
  --no-resolve-image \
  --constraint node.labels.home==1 \
  pihole/pihole:latest

I've noticed that randomly the swarm services get restarted and I lose access to the 80 and 53 port forwards:

[screenshot]

When I do a docker restart, the service comes back up; however, this time the port mappings look slightly different (and start working too):

[screenshot]

So basically, in my case, when docker swarm restarts a service automatically, it turns a 0.0.0.0:53->53/tcp port forward into a 53/udp port forward, and the port stops being accessible.

Strange.
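
A sketch of how one might compare the requested mappings with what the node actually bound (service name as in the example above):

    docker service inspect -f '{{json .Spec.EndpointSpec.Ports}}' pihole   # what was requested
    docker ps --filter name=pihole --format '{{.Names}}  {{.Ports}}'       # what the task got
    sudo ss -lntup | grep -E ':(53|80)\b'                                  # what the host bound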
