containers are unable to communicate on overlay network #2161

Open
ikrel opened this Issue Apr 25, 2016 · 31 comments

ikrel commented Apr 25, 2016

I have Docker UCP installed following the production guide, with 3 controllers and 2 workers.
I finished by running engine-discovery and restarting to set up networking. However, the containers are unable to talk to each other (ping) when spread across the nodes.

docker info
Containers: 31
 Running: 28
 Paused: 0
 Stopped: 3
Images: 85
Server Version: swarm/1.1.3
Role: primary
Strategy: spread
Filters: health, port, dependency, affinity, constraint
Nodes: 5
 docker-dc-node1: 172.16.100.29:12376
  └ Status: Healthy
  └ Containers: 9
  └ Reserved CPUs: 0 / 2
  └ Reserved Memory: 0 B / 2.052 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=4.2.0-35-generic, operatingsystem=Ubuntu 15.10, storagedriver=devicemapper
  └ Error: (none)
  └ UpdatedAt: 2016-04-25T04:20:28Z
 docker-dc-node2: 172.16.100.28:12376
  └ Status: Healthy
  └ Containers: 7
  └ Reserved CPUs: 0 / 2
  └ Reserved Memory: 0 B / 2.052 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=4.2.0-35-generic, operatingsystem=Ubuntu 15.10, storagedriver=devicemapper
  └ Error: (none)
  └ UpdatedAt: 2016-04-25T04:20:20Z
 docker-dc-node3: 172.16.100.213:12376
  └ Status: Healthy
  └ Containers: 7
  └ Reserved CPUs: 0 / 2
  └ Reserved Memory: 0 B / 2.052 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=4.2.0-35-generic, operatingsystem=Ubuntu 15.10, storagedriver=devicemapper
  └ Error: (none)
  └ UpdatedAt: 2016-04-25T04:20:10Z
 docker-dc-node4: 172.16.100.216:12376
  └ Status: Healthy
  └ Containers: 4
  └ Reserved CPUs: 0 / 2
  └ Reserved Memory: 0 B / 2.052 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=4.2.0-35-generic, operatingsystem=Ubuntu 15.10, storagedriver=devicemapper
  └ Error: (none)
  └ UpdatedAt: 2016-04-25T04:20:41Z
 docker-dc-node5: 172.16.100.119:12376
  └ Status: Healthy
  └ Containers: 4
  └ Reserved CPUs: 0 / 2
  └ Reserved Memory: 0 B / 2.052 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=4.2.0-35-generic, operatingsystem=Ubuntu 15.10, storagedriver=devicemapper
  └ Error: (none)
  └ UpdatedAt: 2016-04-25T04:20:31Z
Cluster Managers: 3
 172.16.100.213: Healthy
  └ Orca Controller: https://172.16.100.213:443
  └ Swarm Manager: tcp://172.16.100.213:2376
  └ KV: etcd://172.16.100.213:12379
 172.16.100.28: Healthy
  └ Orca Controller: https://172.16.100.28:443
  └ Swarm Manager: tcp://172.16.100.28:2376
  └ KV: etcd://172.16.100.28:12379
 172.16.100.29: Healthy
  └ Orca Controller: https://172.16.100.29:443
  └ Swarm Manager: tcp://172.16.100.29:2376
  └ KV: etcd://172.16.100.29:12379
Plugins: 
 Volume: 
 Network: 
Kernel Version: 4.2.0-35-generic
Operating System: linux
Architecture: amd64
CPUs: 10
Total Memory: 10.26 GiB
Name: ucp-controller-docker-dc-node1
docker network inspect test_busybox
[
    {
        "Name": "test_busybox",
        "Id": "cbf61da0ff54c97ba97ba3bbc5f0a411d0953f37ffbaefcd9bafa3185644dcd2",
        "Scope": "global",
        "Driver": "overlay",
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1/24"
                }
            ]
        },
        "Containers": {
            "0a7148d387bae61ef217b2262cef426a4a7f522690a0802674c4d1b3a87cc434": {
                "Name": "centos1",
                "EndpointID": "deca45eb356701d04c9b9c43b06074e443a76721654d66fbaadb39bdc24146a7",
                "MacAddress": "02:42:0a:00:00:02",
                "IPv4Address": "10.0.0.2/24",
                "IPv6Address": ""
            },
            "212337dfa7aa160cd5daf197329cff0c60f61037d58f470b67e0ed85835c7c48": {
                "Name": "centos5",
                "EndpointID": "db8791c7dc939aefdc6dccfe5dd40011208cec1ef87a3edaac327146550f5c90",
                "MacAddress": "02:42:0a:00:00:06",
                "IPv4Address": "10.0.0.6/24",
                "IPv6Address": ""
            },
            "78d44843aa807709ba5d60e7d66a3844d1fdaacbb6c138593d8a8a39d6a62773": {
                "Name": "centos3",
                "EndpointID": "fd55f2e98aef620c92f61980fdd9c7b8252966385301b131c0c75a8a15c20d2f",
                "MacAddress": "02:42:0a:00:00:04",
                "IPv4Address": "10.0.0.4/24",
                "IPv6Address": ""
            },
            "a068568658d4391976da9af1971f8816c27c7c3f5b080d11045ac5595c0e372c": {
                "Name": "centos2",
                "EndpointID": "49cdc7af937aac60d83de2bbe9f457a8aaaaab1b8d5843d39f3b876e4ec12855",
                "MacAddress": "02:42:0a:00:00:03",
                "IPv4Address": "10.0.0.3/24",
                "IPv6Address": ""
            },
            "f5d4efb898ac4c473e1b4fb729d6de472026a87784be674125c54764fb106927": {
                "Name": "centos4",
                "EndpointID": "a069b81d4435d2d5c3239786eccdd2035ae59b780ae889b3703d702aec3d360e",
                "MacAddress": "02:42:0a:00:00:05",
                "IPv4Address": "10.0.0.5/24",
                "IPv6Address": ""
            }
        },
        "Options": {}
    }
]
 docker run -it --net=test_busybox --name centos1 centos sh
sh-4.2# ping centos2
PING centos2 (10.0.0.3) 56(84) bytes of data.
^C
--- centos2 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1008ms

sh-4.2# ping 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms

I can ping host IPs from any container to any host, but the containers can only ping each other when they are on the same host.

Checking cat /etc/docker/daemon.json on each host, all IPs are correct (the external IP of each host).

Any help? For reference, here is docker network ls:

NETWORK ID          NAME                              DRIVER
419e742a00ba        docker-dc-node1/bridge            bridge              
407c1b9534fb        docker-dc-node4/host              host                
22895cce9072        docker-dc-node4/docker_gwbridge   bridge              
ae5391918b38        docker-dc-node4/bridge            bridge              
584aef06c203        docker-dc-node5/bridge            bridge              
e57ede0787cf        docker-dc-node3/none              null                
cbf61da0ff54        test_busybox                      overlay             
13ab96b19a37        docker-dc-node4/none              null                
1d5ea4e79296        docker-dc-node5/docker_gwbridge   bridge              
0fa43b5c40f5        docker-dc-node5/host              host                
80fbf31eac5e        docker-dc-node3/bridge            bridge              
981fee098c25        docker-dc-node1/none              null                
613c42834e66        docker-dc-node2/host              host                
f066260244b7        docker-dc-node2/docker_gwbridge   bridge              
cc77c7d4b0ee        docker-dc-node2/none              null                
e2f6e427c613        docker-dc-node1/docker_gwbridge   bridge              
4e8843eea241        docker-dc-node1/host              host                
adc0cd972240        docker-dc-node5/none              null                
e903b5150ad8        docker-dc-node3/docker_gwbridge   bridge              
d3695bb9b940        docker-dc-node3/host              host                
dc8de35b2b5c        docker-dc-node2/bridge            bridge 
nicolaka commented Apr 27, 2016

@ikrel where are you running this setup? Also, could you make sure that the following ports are open? https://docs.docker.com/ucp/production-install/#step-2-configure-your-network-for-ucp
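For a quick check from each host, something like the following (a sketch; the ports are taken from that doc, and the peer IP is just an example from the docker info above):

# Probe a peer node's overlay ports; -u switches nc to UDP. Note that a
# UDP "succeeded" only means no ICMP error came back, so it is a weaker
# signal than the TCP check.
nc -vz  172.16.100.28 7946   # control plane, TCP
nc -vzu 172.16.100.28 7946   # control plane, UDP
nc -vzu 172.16.100.28 4789   # data plane (VXLAN), UDP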

ikrel commented Apr 27, 2016

@nicolaka I am running this on our OpenStack installation with a wide-open security group:

open
ALLOW IPv6 to ::/0
ALLOW IPv4 1-65535/tcp from 0.0.0.0/0
ALLOW IPv4 to 0.0.0.0/0
ALLOW IPv4 1-65535/udp from 0.0.0.0/0
ALLOW IPv4 icmp from 0.0.0.0/0

Image: Ubuntu 15.10 (Wily Werewolf)

Two of the nodes are on the same Nova host and still can't talk, which (I think) rules out inter-node OpenStack issues; plus, as I mentioned, the host systems have no problems communicating with each other.

Here is a netstat from one of the controllers:
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 10.0.3.1:53 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp 0 0 10.10.10.115:52658 172.16.100.28:12379 ESTABLISHED
tcp 0 0 10.10.10.115:54000 172.16.100.28:12379 ESTABLISHED
tcp 0 0 10.10.10.115:60564 172.16.100.28:12379 ESTABLISHED
tcp 0 0 10.10.10.115:33464 172.16.100.28:12379 ESTABLISHED
tcp 0 96 10.10.10.115:22 172.16.9.2:61199 ESTABLISHED
tcp 0 0 10.10.10.115:34048 172.16.100.29:12379 ESTABLISHED
tcp 0 0 10.10.10.115:53874 172.16.100.28:12379 ESTABLISHED
tcp 0 0 10.10.10.115:59764 172.16.100.29:12379 ESTABLISHED
tcp 0 0 10.10.10.115:59404 172.16.100.28:12379 ESTABLISHED
tcp 0 0 10.10.10.115:34278 172.16.100.213:12379 ESTABLISHED
tcp6 0 0 :::12379 :::* LISTEN
tcp6 0 0 :::443 :::* LISTEN
tcp6 0 0 :::12380 :::* LISTEN
tcp6 0 0 :::12381 :::* LISTEN
tcp6 0 0 :::12382 :::* LISTEN
tcp6 0 0 :::2376 :::* LISTEN
tcp6 0 0 fe80::e847:9cff:fea4:53 :::* LISTEN
tcp6 0 0 :::22 :::* LISTEN
tcp6 0 0 :::12376 :::* LISTEN
udp 0 0 0.0.0.0:4789 0.0.0.0:*
udp 0 0 0.0.0.0:41761 0.0.0.0:*
udp 0 0 10.0.3.1:53 0.0.0.0:*
udp 0 0 0.0.0.0:67 0.0.0.0:*
udp 0 0 0.0.0.0:68 0.0.0.0:*
udp6 0 0 :::6732 :::*
udp6 0 0 fe80::e847:9cff:fea4:53 :::*

The private subnet is 10.x and the public one is 172.x.
The entire UCP installation was done specifying the public side.

nicolaka commented Apr 28, 2016

@ikrel can you double-check your cluster-store config? I spoke with the networking team, and it seems like that is the issue. How did you configure multi-host networking? Did you use engine-discovery as described here: https://docs.docker.com/ucp/networking/ ?

ikrel commented Apr 29, 2016

Hi @nicolaka, yes, I used the discovery process on one node at a time, starting from the master, going to the replicas, and then the workers. I have also tried discovery again since then.
What should I be checking, and how, around the cluster-store config? Is there a troubleshooting doc or some pointers?
Thanks!

dongluochen commented Apr 29, 2016

@ikrel You can enable debug on your daemon, e.g. docker daemon -l debug .... The logs may give you an idea. The daemon log location differs between systems: http://stackoverflow.com/questions/30969435/where-is-the-docker-daemon-log
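For an engine managed by systemd (an assumption; the log location differs elsewhere, per the link above), a minimal way to turn this on and follow the log:

# Add "debug": true to /etc/docker/daemon.json (alongside the existing
# cluster-* keys), then restart the engine and tail its journal:
sudo systemctl restart docker
sudo journalctl -u docker.service -f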

nicolaka commented Apr 30, 2016

@ikrel can you check /etc/docker/daemon.json and double-check that cluster-store is pointing to the right controller IPs? If it is, try nc -vz <CONTROLLER_IP> 12379 and see whether you can establish a connection.

ikrel commented May 6, 2016

Sorry about the delay. All daemon.json files are identical except cluster-advertise, which changes to the corresponding IP on each node.

root@docker-dc-node1:~# cat /etc/docker/daemon.json
{
  "cluster-advertise": "172.16.100.29:12376",
  "cluster-store": "etcd://172.16.100.29:12379,172.16.100.28:12379,172.16.100.213:12379",
  "cluster-store-opts": {
    "kv.cacertfile": "/var/lib/docker/discovery_certs/ca.pem",
    "kv.certfile": "/var/lib/docker/discovery_certs/cert.pem",
    "kv.keyfile": "/var/lib/docker/discovery_certs/key.pem"
  }
}
root@docker-dc-node1:~# nc -vz 172.16.100.28 12379
Connection to 172.16.100.28 12379 port [tcp/*] succeeded!
root@docker-dc-node1:~# nc -vz 172.16.100.213 12379
Connection to 172.16.100.213 12379 port [tcp/*] succeeded!

I've upgraded to 1.1.0 today; same issue with ping.

ikrel commented May 6, 2016

I was able to catch this error:
May 06 21:06:57 docker-dc-node1 docker[8750]: time="2016-05-06T21:06:57.739222182Z" level=error msg="Multi-Host overlay networking requires cluster-advertise(172.16.100.29) to be configured with a local ip-address that is reachable within the cluster"
May 06 21:06:57 docker-dc-node1 docker[8750]: time="2016-05-06T21:06:57.739742707Z" level=error msg="initializing serf instance failed: failed to create cluster node: Failed to start TCP listener. Err: listen tcp 172.16.100.29:7946: bind: cannot assign requested address"

which contradicts the production install guide:

docker run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp install -i \
  --host-address <$UCP_PUBLIC_IP>   <<<<<<<<<<<<< public

and the help info:
--host-address Specify the visible IP/hostname for this node (override automatic detection) [$UCP_HOST_ADDRESS]
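One possible workaround (a sketch, not verified against UCP): the engine also accepts an interface name in cluster-advertise, letting the daemon resolve a locally bound address itself instead of trying to bind a floating IP it does not own:

# In /etc/docker/daemon.json ("eth0" is an assumption; use the NIC that
# carries your cluster traffic), then restart the engine:
#   "cluster-advertise": "eth0:12376"
sudo systemctl restart docker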

nicolaka commented May 7, 2016

Thanks for catching this, Ilya! Just as an FYI, the newly released UCP 1.1 takes care of the multi-host networking config, so you don't have to do this manual step anymore. Please give it a try!


ikrel commented May 9, 2016

Just to clarify, I noticed this error after I tore down the old UCP and installed the new 1.1 from scratch. I then reinstalled everything without using public IPs during setup, and that was the only way to get containers on different nodes talking to each other. Of course this does not play well with the UI, since the controller node IPs on the "Nodes" page are now private, so one cannot click on them to manage or access them.
Wouldn't this be the same issue in AWS with private/public IPs?

pilgrim2go commented Jul 21, 2016

@ikrel : I have the same issue with AWS private/public IP.
Same as #1814.

nicolaka commented Jul 21, 2016

@pilgrim2go are you using UCP? If so, can you please double-check that your security group allows, inbound and outbound, everything listed in this doc: https://docs.docker.com/ucp/installation/system-requirements/. Please ensure both TCP and UDP ports are open in both directions.

pilgrim2go commented Jul 22, 2016

I'm using AWS (https://github.com/andypowe11/AWS-Docker-Swarm-with-Consul).
Yes, the ports are open:

[ec2-user@node1 ~]$ nc -z -v -u 192.168.x.x 4789
Connection to 192.168.x.x 4789 port [udp/*] succeeded!
[ec2-user@node1 ~]$ nc -z -v -u 192.168.x.x 7946
Connection to 192.168.x.x 7946 port [udp/*] succeeded!

and

[ec2-user@node2 ~]$ nc -z -v -u 192.168.x.x 4789
Connection to 192.168.x.x 4789 port [udp/*] succeeded!
[ec2-user@node2 ~]$ nc -z -v -u 192.168.x.x 7946
Connection to 192.168.x.x 7946 port [udp/*] succeeded!
kernelt commented Jul 29, 2016

Hi guys, I have the same issue. I can't ping between containers on different nodes, even though they are on the same overlay network. Access via HTTP doesn't work either. Other things, like Swarm, Consul with DNS, and Registrator, work fine. I tried allowing all network traffic for my EC2 instances, but that doesn't solve the issue.

deviantony commented Jul 29, 2016

@kernelt Simple question: when you're unable to ping container B from container A, did you try to ping container A from container B?

kernelt commented Jul 30, 2016

I've just found the cause of the issue: I hadn't opened the UDP 4789 data plane (VXLAN) port.
@ikrel try the solution above.
@deviantony thank you. When I went to check your suggestion, I found it already worked; after that I re-checked the firewall rules in the security group and found the issue was tied to port 4789.

riuvshin commented Aug 23, 2016

I had exactly the same issue. I opened ports, but the issue was still there... the problem was that I had missed a VERY important point while opening them. According to the doc, the ports need to be open as:

udp 4789    Data plane (VXLAN)
tcp/udp 7946    Control plane

On my side I had opened 4789 and 7946 as TCP only, while 4789 must be UDP and 7946 must be both UDP and TCP on each node in the cluster.

So @ikrel, please make sure you open those ports properly; see the iptables sketch below.
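For anyone using plain iptables rather than a cloud security group, that translates roughly to the following (a sketch; run it on every node, and persist it however your distro does):

iptables -A INPUT -p udp --dport 4789 -j ACCEPT   # data plane (VXLAN)
iptables -A INPUT -p tcp --dport 7946 -j ACCEPT   # control plane, TCP
iptables -A INPUT -p udp --dport 7946 -j ACCEPT   # control plane, UDP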

merijnv commented Sep 19, 2016

This isn't tied to UCP, in my opinion.
While containers can communicate at the IP level, they cannot do a successful DNS lookup for a service running on another node. For the microservice setup we have, that is very much what is needed.

dongluochen commented Oct 15, 2016

Judging from multiple threads, most overlay network reachability issues are caused by network ports not being opened.

@ikrel do you still see this problem?

dharmeshptl commented Dec 15, 2016

I had a similar issue. Make sure ports 4789/UDP, 4789/TCP, 7946/UDP, 7946/TCP, and 2377/TCP are open.

If you are using an encrypted network, also make sure protocol 50 (ESP) is allowed:
-A INPUT -p 50 -j ACCEPT
-A OUTPUT -p 50 -j ACCEPT
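If you want to confirm that ESP traffic is actually arriving, a quick check (the interface name here is an assumption; use your cluster-facing NIC):

# Watch for IP protocol 50 (ESP) packets on the interface:
sudo tcpdump -ni eth0 esp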

amitsehgal commented Jan 13, 2017

I have a similar issue if I try to provide a floating IP:

Swarm was unable to auto-configure your advertise address. You must specify which IP or interface to use. Please enter an IP address, or interface name (e.g., 'eth0'):

If I build the UCP/Swarm cluster with the private IP instead, the browser does work (by ignoring certs), but remote docker-compose doesn't work, because the cert CN is the private IP and not the floating one:

error during connect: Get https://$floating_ip:443/v1.25/info: x509: certificate is valid for 127.0.0.1, $private_ip, not $floating_ip

Is this a bug? Why would you validate that the advertise IP is assigned to an interface? That won't be the case with a floating IP...

ucp:2.0.1

dhiltgen commented Jan 18, 2017

@amitsehgal It sounds like you're using self-signed certs. At install time for UCP you can add additional SAN fields via the --san flag, and after deployment you can add them on the node's details screen as well. This will update the server cert to include those alternate names.
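For example, reusing the install command quoted earlier in this thread (a sketch; the placeholder variables are illustrative, and only --san is the new part, per the comment above):

docker run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp install -i \
  --host-address <$UCP_PRIVATE_IP> \
  --san <$UCP_FLOATING_IP>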

adriano-fonseca commented Feb 9, 2017

I haven't read the whole thread, but regarding the security group config: having TCP and UDP traffic open will not allow you to ping machines across cloud providers. Ping uses the ICMP protocol, so you also need to allow that kind of traffic to be able to ping your machines.
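In iptables terms that would be something like the following (a sketch; most cloud security groups have an equivalent ICMP rule):

# Allow ICMP (ping) in and out:
iptables -A INPUT  -p icmp -j ACCEPT
iptables -A OUTPUT -p icmp -j ACCEPT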

jicki commented Feb 23, 2017

I had a similar issue with a Docker Swarm overlay network.
On server 1, server 2, and server 3, ports 4789/UDP, 7946/UDP, and 7946/TCP are open.

server 1 = container A
server 2 = container B
server 3 = container C

Container A pinging container B gets no replies:

PING B-1 (10.0.0.107) 56(84) bytes of data.

Container C pinging container B works fine:

PING B-1 (10.0.0.107) 56(84) bytes of data.
64 bytes from B-1.ovrtest (10.0.0.107): icmp_seq=1 ttl=64 time=0.236 ms
64 bytes from B-1.ovrtest (10.0.0.107): icmp_seq=2 ttl=64 time=0.298 ms
64 bytes from B-1.ovrtest (10.0.0.107): icmp_seq=3 ttl=64 time=0.215 ms

endeepak commented Aug 18, 2017

Facing the same issue running a service in replicated mode in Docker Swarm. There are 3 containers running for the service; containers 1 and 2 are reachable on the network, but container 3 is unreachable.

@jicki did you figure out the cause for your issue?

endeepak commented Aug 18, 2017

Additional info: when I run a traceroute to container 2's IP inside container 3, it prints asterisks for all 30 hops:

 1  * * *
 2  * * *
 3  * * *
....truncated output for brevity....
 29  * * *
 30  * * *
indywidualny commented Aug 29, 2017

Same issue here. I'm running a few services on the manager node and one service on a worker node. It looks like this (compose definition):

version: '3'

services:

  postgres:
    image: docker.example.com:5000/project/postgres:1.0
    networks:
      - test
    deploy:
      placement:
        constraints: [node.role == manager]

  worker:
    image: docker.example.com:5000/project/worker:1.0
    networks:
      - test
    deploy:
      replicas: 1
      placement:
        constraints: [node.role == worker]

networks:
  test:

I'm able to ping from the postgres container to the worker container, but I'm not able to ping from the worker container to the postgres container, which is crazy. So basically, from the manager machine I can access the container on the worker machine; reversed, it doesn't work.

$ docker --version
Docker version 17.05.0-ce, build 89658be

EDIT: Resolved, at least for now.

Solution: when joining the swarm as a worker, it's crucial to specify --advertise-addr. Docker can pick the wrong address by default, even when the machine has only one IP assigned to one network card.

docker swarm join --token xxx --advertise-addr xx.xx.xx.xx xx.xx.xx.xx:2377
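To verify what a node actually advertises after joining (a sketch; run from a manager, and the <node-hostname> placeholder is illustrative):

# Status.Addr should be the routable IP you passed via --advertise-addr:
docker node inspect <node-hostname> --format '{{ .Status.Addr }}'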

nocgp commented Sep 5, 2017

Thanks, --advertise-addr helped me with a swarm on GCP behind 1:1 NAT, which was causing issues with the network rules.

karthick0070 commented Sep 12, 2017

Hi,

I tried the --advertise-addr option while joining a worker node on Windows 2016, but it's not working for me. One doubt: should --advertise-addr be the manager's IP address or the node's?

indywidualny commented Sep 12, 2017

@karthick0070 --advertise-addr is the address of the node you're adding right now. The address with port 2377 is the Swarm manager you're joining as a worker.

karthick0070 commented Sep 13, 2017

I tried the same thing on Windows, but I still can't access anything outside the host and the ports are not published. I am using Docker Server Version 17.06.1-ee-1.
