Docker Swarm Support or Multiple Node Support #97

Open · usmanismail opened this issue Feb 6, 2015 · 43 comments

@usmanismail commented Feb 6, 2015

Can nginx-proxy run on multiple nodes over something like Docker Swarm? The default setup uses the unix socket to listen for events, so it would not work across nodes.

@jwilder (Owner) commented Feb 6, 2015

I'm not sure... I've never tried it w/ Docker Swarm. You could try starting nginx-proxy w/ -e DOCKER_HOST=tcp://<ip>:<port> and not bind-mounting the unix socket, though. As long as you are passing in the ip:port of the host that nginx-proxy is running on, it might work. If it's a remote host, I'm not sure that the current template will use the correct IPs for the backend entries in nginx.conf, so you might need to use a custom template.
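
For illustration, a minimal sketch of that idea (untested; the host IP, port, and published port are placeholders):

# No socket bind-mount; docker-gen talks to the daemon over TCP instead
docker run -d -p 80:80 \
  -e DOCKER_HOST=tcp://<docker-host-ip>:2375 \
  jwilder/nginx-proxy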

@md5 (Contributor) commented Feb 22, 2015

@usmanismail What sort of topology are you imagining? Would you envision the jwilder/nginx-proxy container running on a single node in the swarm cluster and proxying to HTTP/HTTPS backends on any node in the cluster? Or would you envision an instance of jwilder/nginx-proxy on each node proxying any containers that are running on that node itself?

If you want the former (a single nginx-proxy with upstreams on any cluster node), then either the backends would all need their web ports published on the node's IP address, or there would have to be some more advanced networking in place so that the containers attached to the normally private docker0 bridges on each host can talk to each other (along with measures to avoid IP collisions).
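
To make that first option concrete, a hypothetical backend published on its node's IP might be started like this (the image name, hostname, and ports are made up):

# Run on any cluster node; the web port is published on the node's IP
docker run -d -p 8080:80 -e VIRTUAL_HOST=app.example.com my-web-app

The idea is that a single nginx-proxy elsewhere would then use <node-ip>:8080 as its upstream, which the stock template would need to emit for remote nodes.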

@gorille commented Feb 25, 2015

@usmanismail I stumbled on the following repository while reading the shipyard manual: https://github.com/ehazlett/interlock
I have no idea if it's what you're looking for, but I hope it helps.

@usmanismail (Author) commented Feb 25, 2015

@md5 Yeah, option 2 was closer to my use case. @gorille I will take a look, thanks. I am using a tool called Rancher, which gives me a VPN between remote containers. I just need something to automate the reverse proxying across servers.

@md5 (Contributor) commented Feb 25, 2015

@gorille 👍

I just took a look at ehazlett/interlock and it looks like it unconditionally exposes the first port on every container connected to the swarm master. It also assumes that the port has been published as a host port with -p, as I hinted above: https://github.com/ehazlett/interlock/blob/1b45419dd3658028457773b940ccb0480e5477b6/controller/manager.go#L208

@gorille commented Feb 27, 2015

Always glad to help!

Cheers

@Stellaverse commented Jul 17, 2015

@usmanismail Can you provide your use-case for wanting to proxy across all nodes in a cluster? I've toyed with this idea myself, but I've started to seriously question why I want to do this. Is there a specific problem that this would solve for you?

@klaszlo commented Sep 20, 2015

@Stellaverse My use case is like this:
I bought a single VPS with 1 GB of RAM and started using Docker.
(I had 4 other VPSes, each for its own job.)

At first I only had 4 containers: nginx-proxy, mongodb, dockerui, and a website (a node.js application).
Then months passed, and now I have 16 containers on this VPS. I wanted to add a 17th when the first out-of-memory error occurred (docker ps -> runtime/cgo: pthread_create failed: Resource temporarily unavailable).

Now I would like to buy a second VPS (different IP, different provider), and move some of the containers to that server. So it would be really nice to somehow signal back to the original server if a new container is started on the second server.

I started creating a new Docker container for each job: I have a container for image resizing (node.js + imagemagick), PDF generation (node.js + python + reportlab), zipping files (node.js + jszip), and sending SMS (node.js + a dedicated Android phone). I also have a container for each website.

So for my own hobby use I will have about 40 containers by the end of this year. I would like to move some of them to a secondary VPS and some to a home server.

I have a few options to choose from: Kubernetes, Docker Swarm, or nginx-proxy. I think nginx-proxy is more than enough for me.

I hope this helps, and that it is a valid use case. Sorry for the long message.

@md5 (Contributor) commented Sep 21, 2015

@klaszlo It sounds like you could use Docker Swarm to get the "signal back to the original server if a new container is started on the second server" part, then have nginx-proxy listen to the Swarm event stream instead of listening to a single Docker daemon's event stream. This setup should work fine since #192 was merged a couple months ago. It should just be a matter of setting -e DOCKER_HOST to point to the Swarm master instead of using the default Unix socket.
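
A rough sketch of that idea with classic Swarm (the cluster token, ports, and addresses are placeholders; this assumes each VPS's Docker daemon also listens on TCP):

# On one host: run the Swarm manager
docker run -d -p 3375:2375 swarm manage token://<cluster-token>
# On each VPS: join the local Docker daemon to the cluster
docker run -d swarm join --addr=<vps-ip>:2375 token://<cluster-token>
# Point nginx-proxy at the Swarm manager instead of the local socket
docker run -d -p 80:80 -e DOCKER_HOST=tcp://<manager-ip>:3375 jwilder/nginx-proxy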

@lukemadera commented Sep 23, 2015

I'm having trouble getting the reverse proxy to work with Docker Swarm. It seems swarm is now supported via #192

I followed this tutorial https://blog.docker.com/2015/02/orchestrating-docker-with-machine-swarm-and-compose/

So I'm using docker-machine, docker-swarm, and scaling with docker-compose.

And I have my docker swarm running with 3 swarm nodes / agents.
I can access my public http site on all 3 individual IP addresses for the swarm agents but cannot figure out how to reverse proxy them to be accessible through ONE public, specified IP address.

My docker compose file looks like:

web:
  build: .
  ports:
    - "3000"
  environment:
    VIRTUAL_HOST: 104.236.87.160
  links:
    - droppriceCode
droppriceCode:
  image: lukemadera/dropprice-code

I've set the virtual host to the ip address of the swarm master. Is this the correct one I should be using?

docker ps outputs this:

CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS                                                 NAMES
04de9f99bda9        droppricecompose_web   "/bin/sh -c 'node run"   About an hour ago   Up 25 minutes       3002/tcp, 27017/tcp, 104.236.87.160:32781->3000/tcp   swarm-master/04de9f99bd_04de9f99bd_04de9f99bd_04de9f99bd_04de9f99bd_04de9f99bd_droppricecompose_web_3
3ed097d392be        droppricecompose_web   "/bin/sh -c 'node run"   5 hours ago         Up 25 minutes       3002/tcp, 104.236.204.109:3000->3000/tcp              swarm-02/droppricecompose_web_2
3c8f88b7832b        droppricecompose_web   "/bin/sh -c 'node run"   5 hours ago         Up 25 minutes       3002/tcp, 104.236.204.109:80->3000/tcp                swarm-02/droppricecompose_web_1
8d74da04e2c3        mongo                  "/entrypoint.sh mongo"   31 hours ago        Up 31 hours         27017/tcp                                             swarm-master/droppricecompose_mongo_1

I then run docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy and that works but if I go to the ip address listed in docker ps it gives an nginx 503 error.
If I use port 3000, which is the port I'm using and have exposed, i.e. with docker run -d -p 3000:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy, then I get an error: unable to find a node with port 3000 available.

All my servers are on digitalocean:

  • main server I'm running all the above commands from: 104.236.133.134
  • swarm master is 104.236.87.160
  • and then 3 separate swarm nodes with their own ip addresses.

I think I'm getting lost in all the IP addresses and ports. I'm not sure which ones to use where.

Again everything seems to be working except the final step of linking / proxying all the individual swarm node ip addresses into one public ip address. I can access each node individually just fine.
Any help would be greatly appreciated.

@md5 any ideas?

@Stellaverse were you suggesting in your post that this wouldn't be necessary in the first place? Does Docker Swarm already handle routing all the nodes through one IP address (the swarm master?)? How do I access it on a public website, on port 80 (or 443 for SSL)?

Thanks!

@md5 (Contributor) commented Sep 24, 2015

@lukemadera The first thing I'd probably do is look at the generated /etc/nginx/conf.d/default.conf file and see if it has any of your containers listed as upstream entries. It's possible you're getting a 503 because jwilder/nginx-proxy isn't actually proxying to any containers. Without seeing how you started jwilder/nginx-proxy, it's hard to tell whether you configured it correctly to talk to the Swarm master. You can check that file using a docker exec command.

Once you've ruled that out, the next possibility is that the incoming Host header is not matching any of the server blocks. This would be the result of VIRTUAL_HOST being specified incorrectly.

In general, you're going to have to do something to ensure that your jwilder/nginx-proxy container always runs on the node with the expected IP address. If you're launching it through Swarm, you should be able to use the node== constraint or another constraint to pick the right node. See the constraint docs here. If you're launching nginx-proxy directly, you'll just have to manually launch it on the right node and expose the right port (i.e. 80).
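
For example, launching the proxy through the Swarm API with a node constraint might look roughly like this (the node name and master address are placeholders, not a tested command):

# Run with the docker client pointed at the Swarm master,
# e.g. after `eval "$(docker-machine env --swarm swarm-master)"`
docker run -d -p 80:80 \
  -e constraint:node==<proxy-node-name> \
  -e DOCKER_HOST=tcp://<swarm-master-ip>:2375 \
  jwilder/nginx-proxy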

Regarding your question to @Stellaverse, the answer is that Docker Swarm does not handle proxying all nodes through a single IP address. The only thing it does that resembles that is to present a single Docker API that can be used to query and launch containers on all the underlying nodes.

@md5 (Contributor) commented Sep 24, 2015

I just realized you do say how you're running jwilder/nginx-proxy.

I think the problem is that you need to use VIRTUAL_HOST: 104.236.133.134 (or a DNS name that maps to 104.236.133.134), not VIRTUAL_HOST: 104.236.87.160. This is assuming that jwilder/nginx-proxy is running on 104.236.133.134. You also won't be able to use -v /var/run/docker.sock:/tmp/docker.sock:ro. Instead, you'll need to use -e DOCKER_HOST=tcp://104.236.87.160:2375 (assuming you exposed your Swarm master on port 2375). If you're using TLS to secure your Swarm master, you'll also need to set the DOCKER_TLS_* environment variables to tell docker-gen to connect with TLS.
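
Putting that together, a sketch of what the nginx-proxy invocation might look like (the ports, cert paths, and exact DOCKER_TLS_* variable names are assumptions; check the docker-gen documentation):

# Plain TCP (no TLS on the Swarm master)
docker run -d -p 80:80 \
  -e DOCKER_HOST=tcp://104.236.87.160:2375 \
  jwilder/nginx-proxy

# TLS-secured Swarm master
docker run -d -p 80:80 \
  -e DOCKER_HOST=tcp://104.236.87.160:3376 \
  -e DOCKER_TLS_VERIFY=1 \
  -e DOCKER_CERT_PATH=/certs \
  -v /path/to/swarm/client/certs:/certs:ro \
  jwilder/nginx-proxy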

@md5 (Contributor) commented Sep 24, 2015

One more thing to add is that you'll probably not want to have the internal services listening on public IP addresses (i.e. the individual Docker daemons, the backend app server ports, or the Swarm master). I haven't used DO, but I believe you can enable private networking on your droplets. You'll likely want everything except the port 80 mapping for your nginx-proxy to bind only to private IP addresses.

@md5 (Contributor) commented Sep 24, 2015

I haven't really played with Swarm that much, so I thought I'd try to get this setup working. Turns out I ran into some weird HTTPS read timeout errors and I wasn't able to tell whether it was Compose or Swarm causing the issue. I found quite a few issues for Compose that look related, but they may have been fixed in the yet-to-be-released version of Compose (cf. docker/compose#1963). I was having trouble with docker-compose scale not wanting to scale, as well as times when Swarm seemed to get stuck (which I could fix with a docker restart swarm-agent-master on my swarm-master instance).

Here's what I did to get it (mostly) working: https://gist.github.com/md5/38b0db36267a9456a840

Highlights include:

  • Using --engine-label type=app and --engine-label type=proxy, combined with constraint:type==app and constraint:type==proxy, to control the placement of the app containers and the proxy container (sketched after this list)
  • Using the DOCKER_HOST and DOCKER_TLS_* environment variables to tell docker-gen inside the jwilder/nginx-proxy container to point to Swarm
  • Copying the SSL CA certificate, client certificate, and key to the swarm-proxy node to allow it to connect to Swarm via TLS
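
As a rough sketch of that labeling/constraint idea (flags abbreviated; the full commands are in the gist):

# Give each docker-machine node an engine label when creating it
docker-machine create -d digitalocean --swarm --swarm-discovery token://<token> \
  --engine-label type=app swarm-01
docker-machine create -d digitalocean --swarm --swarm-discovery token://<token> \
  --engine-label type=proxy swarm-proxy

# Then pin containers to node types via Swarm scheduler constraints,
# e.g. in docker-compose.yml:
#   environment:
#     - "constraint:type==proxy"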

I wouldn't really recommend using my scripts from that gist for anything more than playing around, but I was able to see proxied requests from multiple backend containers.

I also didn't play around with the --digitalocean-private-networking flag to docker-machine. I spent enough time banging my head against this to leave that for another day. Off the top of my head, I think that the nginx.tmpl file in this repo would need to be enhanced to take into account .Address.HostIp and to use that instead of .Container.Node.Address.IP if it's present.

@lukemadera commented Sep 24, 2015

@md5 Thanks so much for the prompt and super detailed reply!! Very much appreciated. I'll check it out and let you know how it goes.

@lukemadera commented Sep 25, 2015

@md5 I tried your scripts and they worked nearly perfectly. I got stuck toward the end with no such service: proxy. Did you say you got yours working all the way through?

I'm going to try separating the app and proxy services into 2 different compose files and try again and see if that works - do they need to be combined together?

@md5 (Contributor) commented Sep 25, 2015

I can't say that they worked all the way through, but I didn't get that error. The errors I was getting were during the docker-compose scale command. I was getting timeout errors that I couldn't tell whether they were from Compose bugs or Swarm outages. In some cases, I tried running normal docker commands against the Swarm API and they also hung. After I restarted Swarm, I could run some of these commands (e.g. docker run --rm busybox true), but then running a different Compose command would hang.

Splitting into 2 compose files might help; there isn't any reason they need to be combined.

@lukemadera commented Sep 25, 2015

Ok, I separated them into 2 compose files and got those running fine. Gist here:
https://gist.github.com/lukemadera/0961776310d8deedfbde

However, I can still connect to the 3 individual swarm servers fine, but the swarm-proxy URL now gives "server not available" - not even an nginx page or error code. So does that mean the nginx proxy isn't running at all?

docker ps gives:

CONTAINER ID        IMAGE                       COMMAND                  CREATED             STATUS              PORTS                                                NAMES
320023e50c6b        lukemadera/dropprice-code   "/bin/sh -c 'node run"   4 seconds ago       Up 3 seconds        3002/tcp, 159.203.85.23:3000->3000/tcp, 27017/tcp    swarm-02/code_app_2
96caedc9ba9d        lukemadera/dropprice-code   "/bin/sh -c 'node run"   4 seconds ago       Up 3 seconds        3002/tcp, 45.55.72.15:3000->3000/tcp, 27017/tcp      swarm-03/code_app_3
83695ea3804a        lukemadera/dropprice-code   "/bin/sh -c 'node run"   5 seconds ago       Up 3 seconds        3002/tcp, 27017/tcp, 104.236.87.160:3000->3000/tcp   swarm-01/code_app_1
b28f14313d54        jwilder/nginx-proxy         "/app/docker-entrypoi"   12 seconds ago      Up 10 seconds       80/tcp, 443/tcp, 45.55.36.215:80->3000/tcp           swarm-proxy/code_proxy_1

Everything on the docker / swarm end appears to be working fine. The proxy just isn't working / running.

@md5 you said "I was able to see proxied requests from multiple backend containers" - so you WERE able to go to the swarm-proxy url ( http://45.55.36.215:80 in my case) and access things?

@md5 (Contributor) commented Sep 25, 2015

Yep, I was able to see responses from two different jwilder/whoami backends.

It looks like your port mapping is wrong. It says 45.55.36.215:80->3000/tcp, but it should say 45.55.36.215:80->80/tcp.

@lukemadera commented Sep 25, 2015

Ah, thanks @md5! I get an nginx page now at http://45.55.36.215
However, it says I need to configure it further - is that what I should see, and do I need to take further steps? Or is this image / the proxy supposed to proxy through to the 3 swarm servers, which in this case are web servers? The goal is to see the website at that address, served from one of the 3 swarm nodes.

Also, I can't even find the nginx folder / configuration file. It's supposed to be /etc/nginx/conf.d/default.conf, right? I tried cd /etc/nginx on all 4 machines (the main server, plus SSH-ing in via docker-machine ssh to swarm-proxy, swarm-master, and swarm-01) and it doesn't exist.

Sorry for all the questions and I REALLY appreciate all your incredibly helpful and prompt replies!

@md5 (Contributor) commented Sep 25, 2015

You should probably look at the logs for your nginx-proxy container to see if anything failed. You'll then want to look at the generated /etc/nginx/conf.d/default.conf file to see what it looks like.

@lukemadera commented Sep 26, 2015

Yeah, tons of repeats of the same error in docker logs swarm-proxy:

dockergen.1 | 2015/09/25 23:54:47 Unable to ping docker daemon: Get http://104.131.49.211:3376/_ping: malformed HTTP response "\x15\x03\x01\x00\x02\x02"

And where do I find the conf.d file?

find / -name "nginx" -type d outputs something on all docker machines but only swarm-proxy has a 'mnt' directory:

root@swarm-proxy:~# find / -name "nginx" -type d
/var/lib/docker/aufs/diff/e4e34ee3cba5e1635fc9b9bf8278a83b896269bc8a325d5aa4bba5b6426e2441/etc/nginx
/var/lib/docker/aufs/diff/d67512912afa7f4d9bf88eeeff6946a3e98b23ca747d4fb594ad800cf02c1203/usr/share/nginx
/var/lib/docker/aufs/diff/d67512912afa7f4d9bf88eeeff6946a3e98b23ca747d4fb594ad800cf02c1203/usr/share/doc/nginx
/var/lib/docker/aufs/diff/d67512912afa7f4d9bf88eeeff6946a3e98b23ca747d4fb594ad800cf02c1203/etc/nginx
/var/lib/docker/aufs/diff/d67512912afa7f4d9bf88eeeff6946a3e98b23ca747d4fb594ad800cf02c1203/var/cache/nginx
/var/lib/docker/aufs/diff/d67512912afa7f4d9bf88eeeff6946a3e98b23ca747d4fb594ad800cf02c1203/var/log/nginx
/var/lib/docker/aufs/diff/ce74e35473130297eb0bf8e5837692ffa0410db89dc6e01c253f0327c2509ad6/var/log/nginx
/var/lib/docker/aufs/diff/f96490160e3b2150a00fe3ac4cc34c03d90c3647535209200f2f9108f202f1a8/var/log/nginx
/var/lib/docker/aufs/diff/c964c72d9a0885841dec333e839039da6ff431f61bcc8083cd43f9ba435a051a/etc/nginx
/var/lib/docker/aufs/mnt/c964c72d9a0885841dec333e839039da6ff431f61bcc8083cd43f9ba435a051a/etc/nginx
/var/lib/docker/aufs/mnt/c964c72d9a0885841dec333e839039da6ff431f61bcc8083cd43f9ba435a051a/usr/share/doc/nginx
/var/lib/docker/aufs/mnt/c964c72d9a0885841dec333e839039da6ff431f61bcc8083cd43f9ba435a051a/usr/share/nginx
/var/lib/docker/aufs/mnt/c964c72d9a0885841dec333e839039da6ff431f61bcc8083cd43f9ba435a051a/var/cache/nginx
/var/lib/docker/aufs/mnt/c964c72d9a0885841dec333e839039da6ff431f61bcc8083cd43f9ba435a051a/var/log/nginx

The /etc/nginx file above seems to be very generic.

nginx -t doesn't seem to work anywhere, just prompts to install nginx.

@md5 (Contributor) commented Sep 26, 2015

The problem is that docker-gen is trying to connect to Swarm using HTTP, but docker-machine configures Swarm with TLS authentication by default.

As for the generated default.conf, you'll need to find that file inside the code_proxy_1 container. You should be able to do something like docker exec -it code_proxy_1 cat /etc/nginx/conf.d/default.conf after SSH'ing to the swarm-proxy instance.

@lukemadera commented Sep 26, 2015

Hmm, okay. I added back in the TLS stuff and now am going over https:

CONTAINER ID        IMAGE                       COMMAND                  CREATED             STATUS              PORTS                                                                     NAMES
5424e6c69050        jwilder/nginx-proxy         "/app/docker-entrypoi"   5 seconds ago       Up 4 seconds        80/tcp, 45.55.36.215:443->443/tcp                                         swarm-proxy/code_proxy_1
180d80de591b        lukemadera/dropprice-code   "/bin/sh -c 'node run"   3 minutes ago       Up 3 minutes        159.203.85.23:3000->3000/tcp, 159.203.85.23:3002->3002/tcp, 27017/tcp     swarm-02/code_app_3
65758b5b0bf6        lukemadera/dropprice-code   "/bin/sh -c 'node run"   3 minutes ago       Up 3 minutes        45.55.72.15:3000->3000/tcp, 45.55.72.15:3002->3002/tcp, 27017/tcp         swarm-03/code_app_2
9a2d0c8c13e0        lukemadera/dropprice-code   "/bin/sh -c 'node run"   4 minutes ago       Up 4 minutes        104.236.87.160:3000->3000/tcp, 104.236.87.160:3002->3002/tcp, 27017/tcp   swarm-01/code_app_1

Now neither http://45.55.36.215 nor https://45.55.36.215 works. Clearly I don't quite understand what's going on. How did you know TLS was the issue, and how do I fix it? I was trying to keep SSL out at first to keep things simple, but I do need it in the end, so I might as well just do it now since I can't get http working.

Thanks for explaining how to get to the config file. docker exec -it code_proxy_1 cat /etc/nginx/conf.d/default.conf shows:

server {
    listen       80;
    server_name  localhost;

    #charset koi8-r;
    #access_log  /var/log/nginx/log/host.access.log  main;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root           html;
    #    fastcgi_pass   127.0.0.1:9000;
    #    fastcgi_index  index.php;
    #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    #    include        fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny  all;
    #}
}

Does that look correct? Seems generic - I don't see any docker stuff in there. Do I just need to change 80 to 443 above?

@md5 (Contributor) commented Sep 26, 2015

Get http://104.131.49.211:3376/_ping: malformed HTTP response "\x15\x03\x01\x00\x02\x02"

This error shows that docker-gen is trying to connect to your Swarm master using an HTTP URL, but it's getting what it considers to be garbage back because it's seeing a TLS response.

As far as I know, docker-machine always provisions TLS-enabled Swarm clusters, so I don't think it's something that's easy to opt out of.

The default.conf you show doesn't look like one created from nginx.tmpl by this image. I'd take another look at docker-compose -f docker-production-swarm-proxy.yml logs proxy and you'll probably still see an error from docker-gen and/or nginx.

@md5 (Contributor) commented Sep 26, 2015

As for neither http://45.55.36.215 nor https://45.55.36.215 working, the reason the first doesn't work is that you aren't mapping port 80, only "443:443". The reason https isn't working is likely that you haven't configured SSL at the level of nginx-proxy. The TLS we were discussing before is for the Swarm daemon itself, not for anything at the web level.

Still, the fact that default.conf is not the version generated by this image likely indicates that you still have something else going on.
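
For what it's worth, a hypothetical compose snippet covering both points might look like this (the cert path and constraint label are placeholders; nginx-proxy looks for certificates in /etc/nginx/certs named after the VIRTUAL_HOST, and the DOCKER_HOST / DOCKER_TLS_* settings discussed above are omitted for brevity):

proxy:
  image: jwilder/nginx-proxy
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - /path/to/web/certs:/etc/nginx/certs:ro
  environment:
    - "constraint:type==proxy"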

@almereyda commented Oct 21, 2015

Has anybody here seen http://www.before.no/2015/04/using-docker-gen-with-a-swarm-cluster/ ?

I found it while searching for "docker gen nginx swarm".

Can https://github.com/JustAdam/docker-gen-swarm from @JustAdam be considered a reasonable approach? Its documentation seems much more concise than the agglomeration of comments in this thread.

@md5 (Contributor) commented Oct 21, 2015

The comments in this thread were mainly helping @lukemadera troubleshoot his problems.

My gist looks pretty similar to @JustAdam's approach: https://gist.github.com/md5/38b0db36267a9456a840

If you compare his docker-compose.yml with mine and allow for the fact that my approach is dynamically determining the IP address of the Swarm master and using nginx-proxy instead of a custom nginx.tmpl, they're pretty much the same.

@almereyda commented Oct 21, 2015

I may have to accept that a one-size-fits-all solution for all usage scenarios doesn't exist. In my case, TLS seems mandatory for future scenarios, while there are still limitations on Machine for ARM devices.

Thanks a lot for the thoughtful answer. I will have to condense my requirements at build time, then.

@md5 (Contributor) commented Oct 22, 2015

@almereyda I think you're right. Since Swarm is meant for managing clusters, there's going to be a lot of variation in how people want their topologies to work.

In the simple cases, such as when you're running a single nginx-proxy container on the same node as the Swarm master, the stuff I have in that Gist for determining the Swarm master address for DOCKER_HOST can be replaced with something like link: swarm and the DOCKER_HOST can be hard-coded to https://swarm:3376. You'll still have to deal with getting the TLS stuff right, though.
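
In compose terms, that simple case might look something like this (a sketch only, assuming the Swarm master is reachable as a linked container named swarm and that its TLS certs are mounted at /certs):

proxy:
  image: jwilder/nginx-proxy
  links:
    - swarm
  ports:
    - "80:80"
  volumes:
    - /path/to/swarm/certs:/certs:ro
  environment:
    DOCKER_HOST: https://swarm:3376
    DOCKER_TLS_VERIFY: "1"
    DOCKER_CERT_PATH: /certs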

In more complex cases, there's going to be a lot of moving parts. In the next few Docker releases, as it becomes easier to manage private overlay networks (a la Weave) and service discovery, this stuff will almost certainly get easier. If something like docker secrets becomes a thing, I could see the need to bind-mount SSL keys and credentials with -v eventually going away as well.

Still, you're definitely right that the current status quo is still quite complicated to manage.

@pascalandy commented Sep 21, 2016

Hey folks,

Here is my report using a reverse proxy with Swarm 1.12+
#520 (comment)

@ldshi commented Nov 5, 2016

@jwilder @md5 So for this project, what is the conclusion for the new Docker swarm mode?

  1. I saw this project still force-checks the unix socket binding. If I take the -e DOCKER_HOST swarm manager node approach, is the unix socket binding still required?
  2. I set DOCKER_HOST to the swarm's initial manager node, but in a swarm cluster the manager node could change at any time. How do we handle this?
  3. My setup will be like this:
    • all services run in a Docker swarm cluster on a self-created overlay network and can expose ports to the physical host
    • two or three reverse proxy servers (nginx or haproxy) run on the same overlay network
    • when the real service containers join/leave/restart, docker-gen regenerates the nginx/haproxy configuration and then reloads it

Will this setup work?

@RehanSaeed commented Aug 3, 2017

So does this work with Swarm in 2017?

@andre-brongniart commented Sep 11, 2017

+1 for 2017 swarm.

@jasonchi38 commented Sep 20, 2017

+1 Swarm mode for 2017

@arefaslani commented Mar 10, 2018

@jwilder Is there any plan to support swarm mode? We love the simplicity of this image, and supporting swarm mode would be a cool feature.

@mallchin commented Apr 30, 2018

+1

@arefaslani commented Apr 30, 2018

Take a look at Nginx Autoconf. I've written it in nodejs. You can easily develop and extend it. Pull requests are welcome.

@mallchin commented May 1, 2018

@arefaslani That looks awesome; any plans to add functionality on-par with nginx-proxy?

@arefaslani commented May 1, 2018

@mallchin Unfortunately I'm a Ruby developer who knows nothing about Golang. Because javascript was a good option for creating this image, I chose it to solve my problem... Look at Nginx Autoconf's code. It's written in javascript and is very simple. You could extend it easily. But in the case of nginx-proxy, it's a matter of time...

@mallchin commented May 1, 2018

@arefaslani Roger, thanks, will do.

I don't require much functionality now but that may change as I use Docker more.

@arefaslani commented May 1, 2018

@mallchin I'll try to check this repo to see if I can do the same thing.

@mallchin commented May 1, 2018

@arefaslani Great, thanks :)

I will also look at extending it if I get time, but I have other projects that need attention. It's a shame, as this is one of the last pieces of the puzzle for me.
