
[Swarm Mode 1.12] Improvements #520

Open
mvdstam opened this issue Jul 29, 2016 · 57 comments

@mvdstam

commented Jul 29, 2016

With Docker 1.12 and the new "Swarm Mode", distributed applications can now be deployed with docker service create. This will essentially create a container in a different scope, where the name of the service can be queried for a virtual IP address:

Create a network for the application(s) to live in:
docker network create mynet --driver overlay

Create a service:
docker service create --name nginx-test -e VIRTUAL_HOST=my-virtual-host.local --network mynet --replicas 10 nginx:alpine

Now, we can run ping nginx-test from anywhere within the mynet network. For instance, we can fire up an nginx-proxy container as a service:

docker service create --name nginx-proxy -p 80:80 -p 443:443 --mount type=volume,source=/var/run/docker.sock,target=/tmp/docker.sock --mode global --network mynet jwilder/nginx-proxy

When we attach to the container with docker exec -ti ${containerID} bash, we can ping nginx-test and get a virtual IP address back:

root@f12b4d621565:/app# ping nginx-test
PING nginx-test (10.0.0.2): 56 data bytes
64 bytes from 10.0.0.2: icmp_seq=0 ttl=64 time=0.120 ms
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.121 ms
64 bytes from 10.0.0.2: icmp_seq=3 ttl=64 time=0.186 ms
64 bytes from 10.0.0.2: icmp_seq=4 ttl=64 time=0.126 ms

No matter where and when we ping the service, we'll always get the same virtual IP address back. This is because Docker Swarm Mode load balances internally using ingress load balancing.

Basically, two things need to happen.

Container detection
Currently, the nginx-proxy container listens to a local docker.sock socket. This means that, as seen in the latter docker service create command above, we have to make sure nginx-proxy is actually distributed to each node in the cluster. What it should do, if possible, is listen to global container events at the cluster level. This would probably mean that the nginx-proxy container can only be placed on a manager node, but I would be totally fine with that (it would make sense from a semantic point of view as well). The question is rather whether such events can actually be listened to, and what kind of changes would be needed in fsouza/go-dockerclient to facilitate this, since docker-gen uses said client to connect and listen to container events.
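For reference, the node-local event stream that docker-gen consumes can be inspected directly over the control socket; a minimal sketch, assuming the socket is mounted at its default path (and illustrating exactly the limitation above, since the stream is node-scoped):

```shell
# Stream events from the LOCAL daemon only; at the time of writing there is
# no swarm-wide equivalent of this endpoint.
curl --no-buffer --unix-socket /var/run/docker.sock \
  "http://localhost/v1.24/events"
```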

Template generation based on Virtual IPs
If we let the nginx-proxy container do its thing with the setup above, we'd get the following configuration, for example:

# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
  default $http_x_forwarded_proto;
  ''      $scheme;
}
# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
  default upgrade;
  '' close;
}
gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
log_format vhost '$host $remote_addr - $remote_user [$time_local] '
                 '"$request" $status $body_bytes_sent '
                 '"$http_referer" "$http_user_agent"';
access_log off;
# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";
server {
    server_name _; # This is just an invalid value which will never trigger on a real hostname.
    listen 80;
    access_log /var/log/nginx/access.log vhost;
    return 503;
}
upstream my-virtual-host.local {
                ## Can be connect with "mynet" network
            # nginx-test.5.c34629l2apb297lo0hxa6ddtk
            server 10.0.0.12:80;
                ## Can be connect with "mynet" network
            # nginx-test.2.4wqzljfsyqqzzoyim9n7yp1tp
            server 10.0.0.11:80;
                ## Can be connect with "mynet" network
            # nginx-test.4.5nm2pugmexl59ajjk3yd0tmno
            server 10.0.0.3:80;
}
server {
    server_name my-virtual-host.local;
    listen 80 ;
    access_log /var/log/nginx/access.log vhost;
    location / {
        proxy_pass http://my-virtual-host.local;
    }
}

We definitely don't want this, as we want identical configurations across the cluster for each nginx-proxy container that's running. What we should get, is the virtual IP of the service:

upstream my-virtual-host.local {
                ## Can be connect with "mynet" network
            # service nginx-test
            server 10.0.0.2:80;
}

The virtual IP address for a service can easily be determined with a single command:

$ docker service inspect -f '{{json .Endpoint.VirtualIPs}}' nginx-test
[{"NetworkID":"bd1d9tovm3eheegj01oewepfd","Addr":"10.0.0.2/24"}]
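For templating purposes, the Addr value above still carries its prefix length; a minimal sketch of reducing it to a bare IP with standard tools (using the sample output above; a real template would do this with docker-gen's template functions instead):

```shell
# Sample of `docker service inspect -f '{{json .Endpoint.VirtualIPs}}' nginx-test`
vips='[{"NetworkID":"bd1d9tovm3eheegj01oewepfd","Addr":"10.0.0.2/24"}]'

# Strip the JSON wrapping and the /24 prefix length to get the bare VIP
vip=$(printf '%s' "$vips" | sed -E 's/.*"Addr":"([0-9.]+)\/[0-9]+".*/\1/')
echo "$vip"   # prints 10.0.0.2
```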

Let's get this awesome image even more awesome! 😃

CC @jwilder @fsouza @sgillespie

@emmetog


commented Aug 4, 2016

Thanks @mvdstam for the fantastic write up.

Is there any way around this to get nginx-proxy working on a multi-node 1.12 swarm now? You suggest making sure the proxy is present on each node (--mode=global?), but won't the proxy on each node still only be able to see the events on its own node, and therefore only update its config with the containers that are on the same node?

@mvdstam

Author

commented Aug 4, 2016

Thanks for the kind words @emmetog!

I've been looking into the Docker API v1.24 documentation to determine if it's even possible to watch for swarm-scoped events. In other words, to be able to observe and hook into service-start events that occur on all of the nodes in the current swarm cluster. Ideally, you'd have an nginx-proxy container on all of the master nodes, which should have identical configuration files at all times. Maybe I'm looking in the wrong place, but I can't find any information regarding listening to such events.

@Arachnid


commented Aug 8, 2016

I have something very similar working with nginx-proxy by having it listen to the Swarm master's docker socket, instead of the local machine's docker socket. Isn't that possible with Swarm Mode, too?

@pascalandy


commented Aug 21, 2016

Very excited by this topic!

This is the challenge I would like to resolve. Quote source:

You don’t have to put all your nodes in the DNS (…) The only issue is that there is one single point of failure (if that one server behind that IP goes down, apocalypse now).

(…) you probably want to have a few load balancers in front of the cluster (…) you would point the A records at the load balancers and configure the load balancers to balance over the 1000 nodes.

Now, let's say you have:

  • 3 Swarm masters
  • 100 worker nodes

Would be great if we could run the "jwilder/nginx-proxy" stack as --mode global (or only on Swarm masters via --replicas).

As docker-compose is not supported by Swarm, the docs will need a dedicated section on using the "jwilder/nginx-proxy" stack with docker service ....

Cheers!
Pascal

@nickvanw


commented Aug 27, 2016

I'm very excited by this topic! I have a toy project that does something similar to nginx-proxy, and I'm trying to piece together how it might work with the new swarm mode. I designed my project to work with the previous iteration of Docker Swarm by being able to configure it to watch multiple Docker endpoints over HTTP+TLS, allowing it to point to multiple Swarm masters and watch for events.

In the new Swarm architecture, however, I'm not sure it's possible to know what the master node is at the time a Docker Service is created, and it's not guaranteed to stay static (by design - a new manager can be elected at any time). If any Docker Engine was able to read the Swarm state, this wouldn't be a problem - you could query the list of nodes and find the master at any point in time. Unfortunately, it looks like this is not the case (this happens with docker service ls as well):

root@swarm-node03:~# docker node ls
Error response from daemon: This node is not a swarm manager. Worker nodes can't be used to view or modify cluster state. Please run this command on a manager node or promote the current node to a manager.

A service does have the ability to schedule only to manager nodes using the node.role scheduling constraint, but in testing this will not ensure that the container is always running on a manager. By creating a service with this constraint and manually promoting & demoting two nodes, I am able to have an nginx container running on the non-leader:

# docker service inspect web_test | jq .[].Spec.TaskTemplate.Placement.Constraints
[
  "node.role==manager"
]

root@swarm-node03:~# docker node ls
ID                           HOSTNAME                           STATUS  AVAILABILITY  MANAGER STATUS
0emcew1av540uxifaupg92rq6    swarm-node02.nyc1.internal.nvw.io  Ready   Active
9cvuvmpi9jc1dxo0imxmfc52r *  swarm-node03.nyc1.internal.nvw.io  Ready   Active        Leader
deouq7l2xp5a3xa6qqmzls9ic    swarm-node01.nyc1.internal.nvw.io  Ready   Active

root@swarm-node03:~# docker service ps 4qj9uaho8qtr
ID                         NAME        IMAGE  NODE                               DESIRED STATE  CURRENT STATE          ERROR
5knlz190us1ib026vfze7qfjc  web_test.1  nginx  swarm-node02.nyc1.internal.nvw.io  Running        Running 2 minutes ago

I believe this would result in an nginx-proxy instance that is unable to get information about services from its Docker Engine, resulting in no updates.

If I'm wrong here, I would be very interested in knowing - given that non-managers can't read cluster state, however, I am not sure it will be easy to co-opt the nginx-proxy model to the new Swarm Mode.

My hunch is that, since services are not created or destroyed as often as containers (and their IPs do not change), it may be easier to move to a more manual model of adding/removing services as units behind a load balancer.

@jpetazzo


commented Sep 10, 2016

Hi! A few notes/ideas:

  1. As you noted, placement constraints are not (yet) dynamic; i.e. if you say "this service should run on a manager node", then demote the node on which it's running, it won't automatically be rescheduled.

  2. However, the service can detect that it's no longer on a manager node, and terminate itself. Swarm will then reschedule it.

  3. You might have found it already; but if you want to bind-mount the Docker control socket, you can achieve it like this:

docker service create \
  --mount source=/var/run/docker.sock,type=bind,target=/var/run/docker.sock \
  --name proxy --constraint node.role==manager ...
  4. Swarm doesn't have full support for events yet, so a few ideas come to mind:
  • wait until events are properly supported, so that manager nodes can have a good view of the system
  • poll (yucky and messy but it'll work)
  • run two services: one event collector (running with global scheduling) gathering local events and posting them to the second service; the latter would listen on an internal socket and aggregate events
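Point 2 above could be sketched as a small watchdog in the proxy's entrypoint (a hypothetical sketch; `{{.Swarm.ControlAvailable}}` is the engine's "this node is a manager" flag, and the 10-second interval is arbitrary):

```shell
# Decide whether the task should exit so Swarm reschedules it elsewhere.
# $1 is expected to be the output of:
#   docker info --format '{{.Swarm.ControlAvailable}}'
should_exit() {
  [ "$1" != "true" ]
}

# Hypothetical entrypoint loop (requires the docker CLI and control socket):
#   while sleep 10; do
#     should_exit "$(docker info --format '{{.Swarm.ControlAvailable}}')" && exit 1
#   done
```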

I hope this helps (a bit) !

@pascalandy


commented Sep 13, 2016

I saw a Keynote named "GOTO 2016 • Higher Order Infrastructure - Microservices on the Docker Swarm • Nicola Paolucci".

Good news everyone: Nicola is using jwilder/nginx-proxy with Swarm and it's working now!!!

He was not clear on the how; he used some 'script' to make it happen. See it here: https://youtu.be/eQ-XMDzuvxY?t=23m30s

Jerome, cc @jpetazzo, do you happen to know Nicola? Could you ask him how he was able to use this project with Swarm? I will also try to track him down via Twitter :)

There is light!
Cheers!

@jpetazzo


commented Sep 13, 2016

We can try to ask him through GitHub :) /cc @durdn

@durdn


commented Sep 13, 2016

Hey @pascalandy and @jpetazzo. The talk from GOTO came out recently but was recorded a few months ago. It was recorded before the new Docker 1.12 awesomeness came out. I haven't retried the setup in the talk on the latest Docker yet so unfortunately I can't add useful information :(. If I get to it I'll make sure to ping you.

@pascalandy


commented Sep 14, 2016

@durdn I see. But could you share how you made "jwilder/nginx-proxy" work with Swarm 1.10? To my knowledge, "jwilder/nginx-proxy" was not working with any version of Swarm. Cheers!

@pascalandy


commented Sep 14, 2016

I just found that @vfarcic seems to solve our issue with his project.
Search for serviceDomain. It looks like it acts like the VIRTUAL_HOST flag.

@durdn


commented Sep 14, 2016

@pascalandy I can share all the code/config I had prepared for the talk. It is here: https://bitbucket.org/nicolapaolucci/example-voting-app/src/21e94182aad7?at=swarm

@pascalandy


commented Sep 21, 2016

I want to take the time to report on my research.
I spent about 15 hours messing around with a few solutions other than jwilder's, and here is my conclusion.

  1. Made by a Docker captain. Was unstable for me. Plus, you have to update the discovery after your web container is up.

  2. Made by the Docker team. Still don't know how to use it with Swarm 1.12+. Ticket opened here docker/dockercloud-haproxy#111

  3. This takes the same approach as jwilder (3 micro-service containers). Made for Docker Swarm 1.12+. Very clean, and the clear winner as of today. Check the commands here: https://github.com/tpbowden/swarm-ingress-router/blob/master/bootstrap.sh

Please buzz me on Twitter if I missed something.
Cheers!

@mvdstam

Author

commented Sep 21, 2016

@pascalandy Looks good, I'd like to add Traefik to the list as well:

  • Quite an active repo, there's activity every day
  • Built-in support for Letsencrypt
  • Built specifically to handle front-end proxying for microservices, with full support for Docker
  • An open PR for Swarm 1.12 support
@corradio


commented Oct 3, 2016

@subhashdasyam


commented Oct 20, 2016

@corradio https://github.com/tpbowden/swarm-ingress-router works perfectly (the only downside is that it doesn't support multiple DNS names in a single service command), although it does work with separate service commands :)

@pascalandy


commented Oct 20, 2016

@corradio Yes I did. Can you confirm Interlock is working in Swarm Mode 1.12+?

I also confirm https://github.com/tpbowden/swarm-ingress-router is working great. Cheers!

@bernardomk


commented Nov 22, 2016

Hi @mvdstam

I'm not sure if I understand what you need, but couldn't you have a Consul template updating nginx-proxy? You'd need a Consul server and Registrator running on each node.

Hope I don't sound too dumb.

Kind regards.

@subhashdasyam


commented Nov 25, 2016

@corradio @pascalandy
Update guys: even https://github.com/vfarcic/docker-flow-proxy is working great and is easy to set up :)

@pi0


commented Jan 26, 2017

Hello there. Very interesting discussion! @pascalandy, thanks for your links; I've spent a full day investigating and comparing these solutions, but none of them was stable and flexible enough for our business compared to nginx.
As a first (and successful) attempt, I've added a special VIRTUAL_UPSTREAM config option to our customized fork (@banianhost). The key point is that Docker Swarm allows us to resolve hosts on any node of the cluster just by using the service name, so we can simply point the upstream at the service name instead of its IP address and everything magically works!
There is only one limitation: we need at least one instance of the service's containers on the master node so it can be discovered!! Does/Can/Will docker-gen currently support swarm-wide container discovery? (ping @jwilder)
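In nginx terms, the service-name approach described above amounts to generating an upstream like this instead of per-task IPs (a hypothetical fragment; the name is resolved by Docker's embedded DNS inside the overlay network):

# "nginx-test" resolves to the service's virtual IP via Docker DNS
upstream my-virtual-host.local {
    server nginx-test:80;
}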

Thanks :)

@jwilder

Owner

commented Jan 26, 2017

There is a .Node field on each container exposed to templates. This is a SwarmNode struct.

I haven't tried nginx-proxy on a recent version of swarm in a while. I used some of the early versions and had nginx-proxy running on a swarm master, and that worked with remote backends.

Does/Can/Will docker-gen currently supports swarm-wide containers discovery?

I don't know. I wouldn't be opposed to adding support for it if it doesn't currently work. I haven't had time to look into it though.

@pi0


commented Jan 26, 2017

@jwilder Thanks for the reply, I'll run more tests on that. I don't want to miss out on this awesome project :))

@yanky83


commented Feb 7, 2017

Hey,

just started using this great docker image. Very nice project!

Is there any news on full support for swarm mode? Would be awesome to get this to work!

@ivandir


commented Mar 18, 2017

I'm also interested in using this with Swarm mode, but I understand the hindrances you must first overcome with persistent volumes and the docker socket. Could you let us know when this is anticipated for release?

@wanghaibo


commented Apr 13, 2017

moby/moby#32421 cluster event support is on the way

@amq


commented Oct 22, 2017

Tried traefik 1.4 in a moderately busy production environment, but found it to be much more resource-intensive than nginx...

RES TIME+ COMMAND
2.621g 1247:58 mysqld
561032 1358:33 traefik
118784 171:20 nginx // sum of all 9 backends

4x more RAM
8x more CPU

Granted, nginx backends are not doing TLS, but then again, traefik is not doing filesystem access and fastcgi.

@MichaelErmer


commented Oct 22, 2017

@amq are you sure you didn't run traefik in debug mode?

@amq


commented Oct 22, 2017

@MichaelErmer this is how I run it:

  traefik:
    image: traefik:1.4
    command: |-
      --logLevel=WARN
      --entrypoints='Name:http Address::80 Redirect.EntryPoint:https'
      --entrypoints='Name:https Address::443 TLS'
      --docker
      --docker.swarmmode
      --docker.exposedbydefault=false
      --acme
      --acme.entrypoint=https
      --acme.email=...
      --acme.storage=/acme/acme.json
      --acme.ondemand=false
      --acme.onhostrule=true
    ports:
      - target: 80
        published: 80
        protocol: tcp
        mode: host
      - target: 443
        published: 443
        protocol: tcp
        mode: host
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /dev/null:/traefik.toml
      - acme:/acme
    networks:
      - overlay
      - private
    deploy:
      resources:
        limits:
          memory: 1G
      placement:
        constraints:
          - node.role == manager

Update: 95% of the time is spent on futex, which as far as I understand corresponds to goroutines "fighting" for CPU time. There are 20 traefik threads running (on a 4-core machine; I couldn't find a way to limit them) and probably 100+ goroutines.

strace -cfp pid

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 95.34   42.104657         308    136669      7217 futex
  3.13    1.383145          22     63508           epoll_wait
  1.33    0.587015           6    106654           pselect6
  0.09    0.039938           2     24066       137 write
  0.05    0.023566           0     52763     30054 read
  0.03    0.015365          11      1452           getrandom
  0.01    0.004103           2      2226           sched_yield
  0.00    0.001388           0      2937           close
  0.00    0.001249           0      5180           setsockopt
  0.00    0.000857           3       314       314 connect
  0.00    0.000379           0      6974      3360 accept4
  0.00    0.000327           0      6865           epoll_ctl
  0.00    0.000253           0      3927           getsockname
  0.00    0.000016           0       314           socket
  0.00    0.000000           0       132           madvise
  0.00    0.000000           0       314           getpeername
  0.00    0.000000           0       314           getsockopt
  0.00    0.000000           0         2         1 restart_syscall
------ ----------- ----------- --------- --------- ----------------
100.00   44.162258                414611     41083 total
@jpetazzo


commented Oct 23, 2017

@amq can you clarify:

Granted, nginx backends are not doing TLS, but then again, traefik is not doing filesystem access and fastcgi.

Does that mean that you compared traefik doing TLS vs NGINX serving files and passing traffic to fastcgi?

Or did you compare them side by side doing exactly the same thing?

Thank you!

@MichaelErmer


commented Oct 23, 2017

What I see is 238% CPU usage on an Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz at currently 278k req/min; memory usage is stable at 8 GB, which I attribute to SSL caches (I've seen similar with nginx).
I think the CPU usage is about 2x that of nginx; however, the interaction with Docker Swarm is so much better, especially regarding rolling updates of services etc., so it's justifiable.

@amq


commented Oct 23, 2017

@jpetazzo I compared traefik vs all backends combined (they are all nginx). Yes, it's not a fully direct comparison, but hey, the difference is just huge.

The usage also seems to increase over time. ~200M RAM on start, 250M after a couple of hours (warmed up, I'd expect it to stay here), 600M after 3 days.

Ubuntu 16.04, Linux 4.4.0-97, Docker 17.09

@jpetazzo


commented Oct 27, 2017

@amq if you're comparing traefik doing SSL vs. NGINX not doing SSL, it's normal that you see a big difference. SSL is way cheaper than it used to be, but it's not free either :-)

Regarding RAM usage, you might also want to check which tier is doing the buffering. Let me try to explain: when the HTTP client is slower than the server (which is 99% of the time, since your servers will be able to send content way faster than the client can read it), the data can be buffered on the app server (PHP in your case), on the web server (NGINX), or on the load balancer (traefik). One of these three will be holding the data and sending it at the speed of the client. In your scenario I don't know if NGINX is doing that, or traefik. (Hopefully it is not PHP, because it would hold up precious resources).
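For context, the tier that buffers on the nginx side is controlled per proxy hop; a generic fragment (not taken from this project's template, which actually disables buffering) would look like:

# Absorb the upstream response in nginx and drip-feed slow clients,
# releasing the app server as early as possible.
proxy_buffering on;
proxy_buffers 8 16k;
proxy_buffer_size 16k;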

That being said, you pointed out two very valid points: threads fighting for CPU access, and increasing RAM usage over time. If you happen to have graphs of RAM usage over time, we could bring the problem to the traefik maintainers' attention. As for the CPU issue, Go can be tricky to fine-tune; I have seen a few very interesting presentations on that subject, so let me know if you're interested and I can find the references (but I don't want to bother you if you don't have time for that :))

Thank you for all the testing and reports in any case; that's extremely useful!

@biels


commented Feb 21, 2018

Is this being worked on, or are there any alternatives to this for swarm mode?

@uLan08


commented Feb 22, 2018

I don't know, but an alternative would be to use traefik.

@pattonwebz


commented Mar 8, 2018

After failing to make this repo's image work in swarm mode, I tried Traefik. It works, it's cool and all... but I would love to move back to using this image, as Traefik is still relatively new and lacking in support and documentation. Would love to see some movement on this issue if anyone has any ideas on where to go from here.

@helderco


commented Mar 8, 2018

If you use split containers, you can try my docker-gen extension:
https://store.docker.com/community/images/helder/docker-gen

I've been using it with swarm for quite some time.

@shyd


commented Mar 8, 2018

I am using helder's docker-gen on a single node swarm as well. 👍

@CamoMacdonald


commented Mar 8, 2018

A more manual alternative, for services that won't change name much, would be to fork jwilder's image and strip out docker-gen, then use scripts of some sort to generate the same information for the default.conf, but using the docker service names rather than the IPs to connect. This is something I have been able to get working quite successfully.

@pattonwebz


commented Mar 10, 2018

I had not yet come across @helderco's custom image for this and am very interested in giving it a try. I will be testing it this coming week.

Thanks to all for the help and advice on this!

@arefaslani


commented Mar 11, 2018

@shyd Did you test it in a multi-node swarm cluster?

@shyd


commented Mar 11, 2018

@arefaslani Yes, I did give it a try. But in my test some months back, docker-gen was only able to discover containers running on the same host as the docker-gen container. To my knowledge, it was/is limited by the Docker API.

My guess would be to let docker-gen run on every node and then merge the configs. Another option would be to keep the current docker-gen as a master and implement a satellite-like docker-gen version that only pushes container info from remote nodes to the master.

@arefaslani


commented Mar 11, 2018

@shyd Thank you for the quick response. I'm new to docker, and I'm using docker-compose files to deploy my stack using docker stack deploy --compose-file docker-compose.yml stackname. I'm not using docker-gen directly, but via nginx-proxy, which uses it in the background. What should I write in my compose file to force nginx-proxy to run on every node?

P.S. I did test the constraints option to make both nginx-proxy and my app run on the same node (the manager, which in my case was a single node).

@shyd


commented Mar 11, 2018

@arefaslani To run a service on every node, you can set the deploy mode to global like so:

...
    deploy:
      mode: global
...

But this won't be enough, because docker-gen isn't made for multi-node use, AFAIK.

@amq


commented Mar 18, 2018

Looks like this is the only real alternative to traefik: http://proxy.dockerflow.com/

@jaschaio


commented Apr 4, 2018

So the current status is still that jwilder/nginx-proxy doesn't work with docker swarm mode?

Is nobody currently working on a fix for this?

Is the only alternative traefik, which has the downside of being less performant than nginx?

I will take a look at http://proxy.dockerflow.com/ then. Let me know if I am wrong about the above, or if there is any other solution I am overlooking besides the one mentioned by @shyd, though I am honestly not sure how to implement that one.

@srigi


commented Apr 6, 2018

@jaschaio you don't need to deploy the proxy on every node when using docker-flow. Just deploy the proxy service on a worker that is facing public traffic, and the proxy-listener on a swarm manager node.

I've been using DFP with swarm for quite some time; it is a very good solution.

@arefaslani


commented May 23, 2018

It works well in swarm mode for me. Check out #927 (comment)

@achrjulien


commented Oct 10, 2018

It probably works as @arefaslani tested

@jaschaio


commented Oct 10, 2018

@achrjulien checkout docker flow proxy – works great

@achrjulien


commented Oct 11, 2018

@achrjulien checkout docker flow proxy – works great

Thank you @jaschaio, that seems to work very well so far! Let's hope it holds up performance-wise. I was worried about Traefik, but there are far fewer people using docker flow proxy, so it is hard to find a complaint.

@arefaslani


commented Oct 12, 2018

@achrjulien I tested it with 2 VMs created by docker-machine. It worked...

@zodern


commented Oct 12, 2018

I have used https://github.com/zodern/nginx-proxy-swarm-upstream with nginx-proxy and multiple nodes.

@achrjulien


commented Nov 23, 2018

I was using a hybrid swarm, so I edited my comment. This was probably due to the swarm config, which is broken by default if you try to make a swarm with -ee and -ce docker. Everything seems to work much better when self-compiling 18.06.1-ce for Windows and having the same version on the Linux side.
