
docker 1.12 swarm mode: how to connect to another container on overlay network and how to use loadbalance? #24996

Closed
shenshouer opened this issue Jul 25, 2016 · 6 comments


@shenshouer

shenshouer commented Jul 25, 2016

I used docker-machine on macOS and created a swarm mode cluster like this:

➜  docker-machine create --driver virtualbox docker1
➜  docker-machine create --driver virtualbox docker2
➜  docker-machine create --driver virtualbox docker3

➜  config docker-machine ls
NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER        ERRORS
docker1   -        virtualbox   Running   tcp://192.168.99.100:2376           v1.12.0-rc4
docker2   -        virtualbox   Running   tcp://192.168.99.101:2376           v1.12.0-rc4
docker3   -        virtualbox   Running   tcp://192.168.99.102:2376           v1.12.0-rc4


➜  config docker-machine ssh docker1
docker@docker1:~$ docker swarm init
No --secret provided. Generated random secret:
    b0wcyub7lbp8574mk1oknvavq

Swarm initialized: current node (8txt830ivgrxxngddtx7k4xe4) is now a manager.

To add a worker to this swarm, run the following command:
    docker swarm join --secret b0wcyub7lbp8574mk1oknvavq \
    --ca-hash sha256:e06f5213f5c67a708b2fa5b819f441fce8006df41d588ad7823e5d0d94f15f02 \
    10.0.2.15:2377


# On hosts docker2 and docker3, I ran the command to join the cluster:

docker@docker2:~$ docker swarm join --secret b0wcyub7lbp8574mk1oknvavq --ca-hash sha256:e06f5213f5c67a708b2fa5b819f441fce8006df41d588ad7823e5d0d94f15f02 192.168.99.100:2377
This node joined a Swarm as a worker.

docker@docker3:~$ docker swarm join --secret b0wcyub7lbp8574mk1oknvavq --ca-hash sha256:e06f5213f5c67a708b2fa5b819f441fce8006df41d588ad7823e5d0d94f15f02 192.168.99.100:2377
This node joined a Swarm as a worker.
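
(Side note for anyone landing here later: the `--secret`/`--ca-hash` flags above are 1.12 release-candidate syntax. In the final 1.12.0 release, joining uses a join token instead; a minimal sketch, assuming a running 1.12.0+ manager:)

```shell
# On the manager: print the ready-to-paste join command for workers
docker swarm join-token worker

# On each worker: run the printed command, which looks like
# docker swarm join --token <token> 192.168.99.100:2377
```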

# on docker1:
docker@docker1:~$ docker node ls
ID                           HOSTNAME  MEMBERSHIP  STATUS  AVAILABILITY  MANAGER STATUS
8txt830ivgrxxngddtx7k4xe4 *  docker1   Accepted    Ready   Active        Leader
9fliuzb9zl5jcqzqucy9wfl4y    docker2   Accepted    Ready   Active
c4x8rbnferjvr33ff8gh4c6cr    docker3   Accepted    Ready   Active

Then I created the network mynet with the overlay driver on docker1.
The first question: I can't see the network on the other Docker hosts:

docker@docker1:~$ docker network create --driver overlay mynet
a1v8i656el5d3r45k985cn44e
docker@docker1:~$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
5ec55ffde8e4        bridge              bridge              local
83967a11e3dd        docker_gwbridge     bridge              local
7f856c9040b3        host                host                local
bpoqtk71o6qo        ingress             overlay             swarm
a1v8i656el5d        mynet               overlay             swarm
829a614aa278        none                null                local

docker@docker2:~$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
da07b3913bd4        bridge              bridge              local
7a2e627634b9        docker_gwbridge     bridge              local
e8971c2b5b21        host                host                local
bpoqtk71o6qo        ingress             overlay             swarm
c37de5447a14        none                null                local

docker@docker3:~$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
06eb8f0bad11        bridge              bridge              local
fb5e3bcae41c        docker_gwbridge     bridge              local
e167d97cd07f        host                host                local
bpoqtk71o6qo        ingress             overlay             swarm
6540ece8e146        none                null                local

Then, on docker1, I created an nginx service that echoes the container hostname on its index page:

docker@docker1:~$ docker service create --name nginx --network mynet --replicas 1 -p 80:80 dhub.yunpro.cn/shenshouer/nginx:hostname
9d7xxa8ukzo7209r30f0rmcut
docker@docker1:~$ docker service tasks nginx
ID                         NAME     SERVICE  IMAGE                                     LAST STATE              DESIRED STATE  NODE
0dvgh9xfwz7301jmsh8yc5zpe  nginx.1  nginx    dhub.yunpro.cn/shenshouer/nginx:hostname  Running 12 seconds ago  Running        docker3

The second question: I can't reach the service via the IP of host docker1. I only get a response when I access the IP of docker3.

➜  tools curl 192.168.99.100
curl: (52) Empty reply from server
➜  tools curl 192.168.99.102
fda9fb58f9d4

So it seems there is no load balancing. How do I use the built-in load balancing?
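
(For context: publishing a port with `-p 80:80` is meant to use the ingress routing mesh, where every node listens on the published port and IPVS forwards the connection to a node running a task. When the mesh works, all three node IPs should answer; a quick check, using the node IPs from `docker-machine ls` above:)

```shell
# Each node should return the task's hostname if the routing mesh is working
for ip in 192.168.99.100 192.168.99.101 192.168.99.102; do
  curl -s "http://$ip/"
done
```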

Then I created another service on the same network, using the busybox image, to test ping:

docker@docker1:~$ docker service create --name busybox --network mynet --replicas 1 busybox sleep 3000
akxvabx66ebjlak77zj6x1w4h
docker@docker1:~$ docker service tasks busybox
ID                         NAME       SERVICE  IMAGE    LAST STATE              DESIRED STATE  NODE
9yc3svckv98xtmv1d0tvoxbeu  busybox.1  busybox  busybox  Running 11 seconds ago  Running        docker1

# On host docker3, I got the container name and the container IP for a ping test:

docker@docker3:~$ docker ps
CONTAINER ID        IMAGE                                      COMMAND                  CREATED             STATUS              PORTS               NAMES
fda9fb58f9d4        dhub.yunpro.cn/shenshouer/nginx:hostname   "sh -c /entrypoint.sh"   7 minutes ago       Up 7 minutes        80/tcp, 443/tcp     nginx.1.0dvgh9xfwz7301jmsh8yc5zpe

docker@docker3:~$ docker inspect fda9fb58f9d4
...

            "Networks": {
                "ingress": {
                    "IPAMConfig": {
                        "IPv4Address": "10.255.0.7"
                    },
                    "Links": null,
                    "Aliases": [
                        "fda9fb58f9d4"
                    ],
                    "NetworkID": "bpoqtk71o6qor8t2gyfs07yfc",
                    "EndpointID": "98c98a9cc0fcc71511f0345f6ce19cc9889e2958d9345e200b3634ac0a30edbb",
                    "Gateway": "",
                    "IPAddress": "10.255.0.7",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:0a:ff:00:07"
                },
                "mynet": {
                    "IPAMConfig": {
                        "IPv4Address": "10.0.0.3"
                    },
                    "Links": null,
                    "Aliases": [
                        "fda9fb58f9d4"
                    ],
                    "NetworkID": "a1v8i656el5d3r45k985cn44e",
                    "EndpointID": "5f3c5678d40b6a7a2495963c16a873c6a2ba14e94cf99d2aa3fa087b67a46cce",
                    "Gateway": "",
                    "IPAddress": "10.0.0.3",
                    "IPPrefixLen": 24,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:0a:00:00:03"
                }
            }
        }
    }
]


# on host docker1 :
docker@docker1:~$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
b94716e9252e        busybox:latest      "sleep 3000"        2 minutes ago       Up 2 minutes                            busybox.1.9yc3svckv98xtmv1d0tvoxbeu
docker@docker1:~$ docker exec -it b94716e9252e ping nginx.1.0dvgh9xfwz7301jmsh8yc5zpe
ping: bad address 'nginx.1.0dvgh9xfwz7301jmsh8yc5zpe'
docker@docker1:~$ docker exec -it b94716e9252e ping 10.0.0.3
PING 10.0.0.3 (10.0.0.3): 56 data bytes
90 packets transmitted, 0 packets received, 100% packet loss

The third question: how do containers on the same network communicate with each other?
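
(For reference, the usual pattern on a shared overlay network is to address the *service* by name: swarm's embedded DNS resolves the service name to a virtual IP, rather than pinging an individual task name like `nginx.1.0dvgh9xfwz7301jmsh8yc5zpe`. A sketch, using the busybox container ID from above:)

```shell
# Resolve and ping the nginx service by name from inside the busybox task
docker exec -it b94716e9252e nslookup nginx
docker exec -it b94716e9252e ping -c 3 nginx
```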

And the mynet network looks like this:

docker@docker1:~$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
5ec55ffde8e4        bridge              bridge              local
83967a11e3dd        docker_gwbridge     bridge              local
7f856c9040b3        host                host                local
bpoqtk71o6qo        ingress             overlay             swarm
a1v8i656el5d        mynet               overlay             swarm
829a614aa278        none                null                local
docker@docker1:~$ docker network inspect mynet
[
    {
        "Name": "mynet",
        "Id": "a1v8i656el5d3r45k985cn44e",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1"
                }
            ]
        },
        "Internal": false,
        "Containers": {
            "b94716e9252e6616f0f4c81e0c7ef674d7d5f4fafe931953fced9ef059faeb5f": {
                "Name": "busybox.1.9yc3svckv98xtmv1d0tvoxbeu",
                "EndpointID": "794be0e92b34547e44e9a5e697ab41ddd908a5db31d0d31d7833c746395534f5",
                "MacAddress": "02:42:0a:00:00:05",
                "IPv4Address": "10.0.0.5/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "257"
        },
        "Labels": {}
    }
]


docker@docker2:~$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
da07b3913bd4        bridge              bridge              local
7a2e627634b9        docker_gwbridge     bridge              local
e8971c2b5b21        host                host                local
bpoqtk71o6qo        ingress             overlay             swarm
c37de5447a14        none                null                local

docker@docker3:~$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
06eb8f0bad11        bridge              bridge              local
fb5e3bcae41c        docker_gwbridge     bridge              local
e167d97cd07f        host                host                local
bpoqtk71o6qo        ingress             overlay             swarm
a1v8i656el5d        mynet               overlay             swarm
6540ece8e146        none                null                local

docker@docker3:~$ docker network inspect mynet
[
    {
        "Name": "mynet",
        "Id": "a1v8i656el5d3r45k985cn44e",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1"
                }
            ]
        },
        "Internal": false,
        "Containers": {
            "fda9fb58f9d46317ef1df60e597bd14214ec3fac43e32f4b18a39bb92925aa7e": {
                "Name": "nginx.1.0dvgh9xfwz7301jmsh8yc5zpe",
                "EndpointID": "5f3c5678d40b6a7a2495963c16a873c6a2ba14e94cf99d2aa3fa087b67a46cce",
                "MacAddress": "02:42:0a:00:00:03",
                "IPv4Address": "10.0.0.3/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "257"
        },
        "Labels": {}
    }
]
@Ozzyboshi

Ozzyboshi commented Jul 29, 2016

I have the same issue; it seems that the overlay network is created only on the manager node.
The only overlay network present on all nodes is the "ingress" network.
According to this document
https://docs.docker.com/engine/extend/plugins_network/
the overlay driver is supported in docker 1.12 with swarm mode active:

Docker 1.12 adds support for cluster management and orchestration called swarm mode. Docker Engine running in swarm mode currently only supports the built-in overlay driver for networking. Therefore existing networking plugins will not work in swarm mode.

The only way I can get an overlay network working between swarm nodes is the old way, with an external key-value store configured in /etc/default/docker.

Am I missing something about the new swarm mode?

@gazzerh

gazzerh commented Jul 29, 2016

Same issue. When I start a service I can see that all the Docker hosts in the swarm listen on that port. However, only the node(s) actually running the service are able to serve the traffic.

Am I missing something?

@ernestgwilsonii

This was answered in #25004

@shenshouer
Author

I see now that in swarm mode the network is created on a node when a service task attached to that custom network is scheduled there. But there is still no load balancing: I only get a response from the node where the container is running.
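
(This matches the behavior referenced in #25004: a swarm-scoped overlay network is extended to a worker only when a task attached to it lands there. One way to see this, sketched with the service and machine names used above:)

```shell
# Scale the service so tasks land on the other nodes...
docker service scale nginx=3

# ...then mynet should appear in `docker network ls` on docker2/docker3
docker-machine ssh docker2 docker network ls
```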

@shenshouer
Author

shenshouer commented Aug 1, 2016

I found the problem. I use CoreOS as the Docker host, and the ip_vs kernel module is not loaded there by default, so the load balancing had no effect. Load ip_vs on CoreOS like this:

  units:
    - name: systemd-modules-load.service
      command: restart
...
write-files:
  - path: /etc/modules-load.d/ip_vs.conf
    content: ip_vs
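
(A quick way to confirm whether this is the problem on a given Linux host is to check for the `ip_vs` module; a minimal sketch:)

```shell
# Print whether the ip_vs kernel module is currently loaded
# (assumption: a Linux host exposing /proc/modules)
if grep -q '^ip_vs ' /proc/modules 2>/dev/null; then
  echo "ip_vs loaded"
else
  echo "ip_vs not loaded"
fi
```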

@YouriT

YouriT commented Oct 7, 2016

@shenshouer I don't see any answer to your third question in fact. Did you figure that out?

The third question: how do containers on the same network communicate with each other?
