Can't browse service's container on Swarm mode #30052

Closed
Pierpaolo1992 opened this issue Jan 11, 2017 · 9 comments

@Pierpaolo1992 commented Jan 11, 2017

Output of docker version:
Client:
Version: 1.12.3
API version: 1.24
Go version: go1.6.3
Git commit: 6b644ec
Built: Wed Oct 26 22:01:48 2016
OS/Arch: linux/amd64

Server:
Version: 1.12.3
API version: 1.24
Go version: go1.6.3
Git commit: 6b644ec
Built: Wed Oct 26 22:01:48 2016
OS/Arch: linux/amd64

I've created 3 VMs using docker-machine:

docker-machine create -d virtualbox manager1
docker-machine create -d virtualbox worker1
docker-machine create -d virtualbox worker2

These are their IPs:

docker-machine ls
NAME       ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER        ERRORS
manager1   -        virtualbox   Running   tcp://192.168.99.102:2376           v1.12.6
worker1    -        virtualbox   Running   tcp://192.168.99.100:2376           v1.13.0-rc5
worker2    -        virtualbox   Running   tcp://192.168.99.101:2376           v1.13.0-rc5

Then docker-machine ssh manager1

and:

docker swarm init --advertise-addr 192.168.99.102:2377

Then worker1 and worker2 joined the swarm.
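The join step is roughly the following (the actual token comes from the output of docker swarm init; the token below is a placeholder):

docker swarm join --token <worker-join-token> 192.168.99.102:2377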

Now I've created an overlay network:

docker network create -d overlay skynet

and deployed a service in global mode (1 task per node):

docker service create --name http --network skynet --mode global -p 8200:80 katacoda/docker-http-server

And there is effectively 1 container (task) per node.
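The placement can be checked with the service's task list, e.g.:

docker service ps http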

Now I'd like to access my virtual hosts directly, or at least browse to a specific one of my service's containers, because I'd like to develop a load balancer for my service with nginx.
To do that, in my nginx config file, I'd like to point to a specific container of the service (i.e. right now I have 3 nodes (1 manager and 2 workers) in global mode, so I have 3 tasks running, and I'd like to pick one of those 3 containers).
How can I do that?

I can reach my swarm nodes simply by browsing to <IP_VM>:<SERVICE_PORT>, e.g.:

192.168.99.102:8200

but the internal load balancing still applies.
I was expecting that, if I pointed to a specific swarm node, I would hit the container on that specific node. But so far, no luck.

@thaJeztah (Member) commented Jan 11, 2017

Docker 1.13 will have a "host" publish mode that publishes the ports of the containers backing a service directly on the host they're running on, e.g.:

docker service create \
  --publish mode=host,target=80,published=8200,protocol=tcp \
  --name web \
  nginx:alpine

Is there a particular reason you want to use an external nginx load balancer instead of the internal load balancing? (i.e. if the nginx load balancer is a front-end proxy to map domain names to services, you can also achieve this by deploying it as a service, and route traffic over the internal network)

@Pierpaolo1992 (Author) commented Jan 11, 2017

Thanks for your answer, @thaJeztah. For my master's thesis, the goal is to develop a CDN service (with Docker Swarm) with an external load-balancing container (an nginx container) that points directly to the service's containers, i.e.:

upstream backend {
    server ip_service_container1:port weight=3;
    server ip_service_container2:port;
    server ip_service_container3:port;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
    }
}

With

docker service create \
  --publish mode=host,target=80,published=8200,protocol=tcp \
  --name web \
  nginx:alpine

How can I differentiate the service's containers? I mean, externally I have a service with N replicas (N tasks/containers) on port 8200, and only one target port, 80.
Thanks for the support. I'm new to this world, but I'm falling in love with Docker.

@thaJeztah (Member) commented Jan 11, 2017

There are a number of things here:

You're mentioning an external load balancer running in a container. If that container is running as a swarm service (or as a container on one of the swarm nodes), it's not "external".

How can I differentiate the service's containers? I mean, externally I have a service with N replicas (N tasks/containers) on port 8200, and only one target port, 80.

When using mode=host, a task's port is published on the node it is running on. This means that only a single task can run per host (port 8200 can only be bound once), and you can access each task by using the IP address of the host it's running on.
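With the node IPs from earlier in this thread, that looks like:

192.168.99.102:8200   -> task on manager1
192.168.99.100:8200   -> task on worker1
192.168.99.101:8200   -> task on worker2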

If the load balancer is external (i.e., running on an external machine or another VM that is not part of the swarm), those IP addresses can be used as the "upstream" addresses to route traffic to.
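For example, a sketch of the external nginx config using those host-published addresses (node IPs taken from earlier in this thread):

upstream backend {
    server 192.168.99.102:8200;
    server 192.168.99.100:8200;
    server 192.168.99.101:8200;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
    }
}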

From your description ('an nginx container') it looks like you want that nginx container to just route traffic to the other containers.

create a "backend" network for the loadbalancer to connect to the containers

$ docker network create -d overlay backend

Create the "web" service, which runs the tasks/containers to use as "backend". The backend containers, don't have to publish a port as networking will be routed from the loadbalancer container

$ docker service create --network backend --name web --replicas=3 nginx:alpine

Create the load-balancer service. The load balancer is a global service, so it's accessible on any node, and it uses publish mode=host so that traffic to the container never goes through the swarm load balancer:

$ docker service create \
  --network backend \
  --name loadbalancer \
  --mode=global \
  --publish mode=host,target=80,published=80,protocol=tcp \
  nginx:alpine

You can find the IP addresses of the individual containers of a service by resolving the tasks.<servicename> DNS entry from inside a container. The "loadbalancer" and "web" containers are both connected to the "backend" network, so they can communicate.

If you docker exec into a "loadbalancer" container, you can get the IP addresses of the containers of the "web" service. (In the example below, the ID of the load-balancer container on this node is 9a2959e595ee.) See "Use swarm mode service discovery".

$ docker exec -it 9a2959e595ee sh
/ # nslookup tasks.web
nslookup: can't resolve '(null)': Name does not resolve

Name:      tasks.web
Address 1: 10.0.0.4 web.2.lum0gl1x0k5a17iyasytpxix4.backend
Address 2: 10.0.0.5 web.3.wdhdr6tydt5mn633ci5hkrtht.backend
Address 3: 10.0.0.3 web.1.4ifp5sh79i2h2875j7w7ekpp3.backend

IP addresses 10.0.0.3 .. 10.0.0.5 are the addresses of the "web" containers on the "backend" network. Keep in mind that those addresses can change any time a task is replaced (e.g., on docker service update, or if a node goes down).
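One way to avoid hardcoding those changing addresses is to let nginx re-resolve the tasks.web name at runtime via Docker's embedded DNS server (127.0.0.11). A minimal sketch of such a load-balancer config, under that assumption:

resolver 127.0.0.11 valid=10s;

server {
    listen 80;

    location / {
        # Using a variable forces nginx to re-resolve the name per request
        # instead of caching the upstream IPs at startup.
        set $backend tasks.web;
        proxy_pass http://$backend;
    }
}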

Is the load balancer going to do any URL- or domain-based routing? (i.e., should foo.cdn.com go to a different backend/container than bar.cdn.com, or should foo.cdn.com/some/path go to a different backend/container?) If not, then you may not even need the nginx container, as it would only replicate what the Swarm routing mesh already does. In that case you can simply publish the "web" service's port 80, and have the Swarm routing mesh handle the load balancing.
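For example (a sketch; the published port is arbitrary):

$ docker service create --name web --replicas=3 --publish 8200:80 nginx:alpine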

Also keep in mind that the main role of a CDN is to pick a server closest to the location the request comes from (lowest latency); implementing this may require a lot more than just this setup (e.g. geo-based DNS lookups for your swarm).

@Pierpaolo1992 (Author)

Thanks for your awesome explanation, @thaJeztah. I'll keep researching along the lines you suggested; if something doesn't work, I'll post here. Thanks again.

@thaJeztah (Member)

Let me close this issue, but feel free to continue the conversation.

@Pierpaolo1992 (Author)

Another problem:
I have a cluster composed of 1 manager and 2 workers, each one on a VM (created by docker-machine).
I've created a "CDN service" that caches, or passes requests through to, a backend (a Tomcat container, on port 8700).
My Docker version is 1.13-rc2.

This is the config file of my nginx image:

proxy_cache_path /tmp/nginx levels=1:2 keys_zone=my_zone:10m inactive=60m;
proxy_cache_key "$scheme$request_method$host$request_uri";
proxy_cache_methods GET HEAD POST;
proxy_cache_valid 200 206 100m;
proxy_ignore_headers Set-Cookie;
add_header 'Access-Control-Allow-Origin' '*' always;
add_header 'Access-Control-Allow-Headers' 'Range';

server {
    listen       80;
    server_name  172.17.0.1;

    #charset koi8-r;
    #access_log  /var/log/nginx/log/host.access.log  main;

    location = /example-av1.mpd {
        add_header 'Access-Control-Allow-Origin' '*';

        proxy_cache my_zone;
        add_header X-Proxy-Cache         $upstream_cache_status;
        proxy_set_header X-Real-IP       $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host            $host;

        proxy_pass http://172.17.0.1:8700/shaka-player-master/media/example-av1.mpd;
    }
}

The problem is the following:
I've created an overlay network named "mynet".

When I create the service:

docker service create --name nginx-cdn --network mynet --mode global --publish mode=host,target=80,published=9500,protocol=tcp *myimage*

assuming that my cluster nodes are 192.168.99.103, .104, and .105, and the service is published on port 9500:
if I point to 192.168.99.103 (or .104, or .105):9500/example-av1.mpd, I get a 502 Bad Gateway instead of the request being proxied to the backend.

This problem also occurs with Docker version 1.12.

How should I solve this problem?
172.17.0.1 is the docker0 interface.
The CDN service is attached to an overlay network (mynet), while Tomcat is a plain container (started with docker run), not a service, and it is not attached to any network.
I think I could solve the problem by attaching the Tomcat container to the service's network (mynet). But the service runs inside a VM, so from the host (where the Tomcat container is running) I don't see mynet.
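A sketch of that idea, assuming Docker 1.13's attachable overlay networks and a Tomcat container started on one of the swarm nodes (the image name is a placeholder):

# Create the network as attachable, so plain "docker run" containers can join it.
docker network create -d overlay --attachable mynet

# Run Tomcat on a swarm node, attached to the same network.
docker run -d --name tomcat --network mynet my-tomcat-image

The nginx config could then proxy_pass to http://tomcat:8700/... instead of the docker0 address.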

@thaJeztah (Member)

I just answered your question on the other issue.

@Pierpaolo1992 (Author)

@thaJeztah Is there a way, with Docker 1.13, to publish multiple of a task's ports on the node it's running on?

For example: if I want to create a service from an image that exposes multiple ports, I'd like to do something like:

docker service create --name nginx-cdn --network mynet --mode global --publish mode=host,target=8088,8188,published=9500,9600,protocol=tcp *myimage*

Can I do that?

@thaJeztah (Member)

Yes, you can use --publish multiple times on a service to map multiple ports.
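A sketch using the ports and image from the previous comment:

docker service create \
  --name nginx-cdn \
  --network mynet \
  --mode global \
  --publish mode=host,target=8088,published=9500,protocol=tcp \
  --publish mode=host,target=8188,published=9600,protocol=tcp \
  *myimage*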
