
Multicast in Overlay driver #552

Open · nicklaslof opened this issue Sep 21, 2015 · 57 comments

@nicklaslof commented Sep 21, 2015

Multicast network packets don't seem to be transported to all containers in an overlay network configuration.

I have investigated this a bit, trying various manual configurations inside the network namespace, but haven't gotten anywhere.

@sanimej (Contributor) commented Sep 22, 2015

@nicklaslof Yes, this is the current behavior. The overlay driver is implemented using VXLAN unicast, so handling multicast needs some form of packet replication. We are looking into possible options to support multicast on top of overlay.

@nicklaslof (Author) commented Sep 22, 2015

Just to make it clear (especially since I wrote host instead of container in my original text): I mean using multicast between containers while still having VXLAN unicast between the Docker hosts.

@dave-tucker (Contributor) commented Nov 6, 2015

@sanimej @mrjana @mavenugo hey guys, any update on whether there is a solution in sight for this one? Per #740, this impacts use of the official elasticsearch image for creating a cluster.

If we can outline a possible solution here, perhaps someone from the community can attempt a fix if we don't have bandwidth for 1.10

@sanimej (Contributor) commented Nov 6, 2015

@dave-tucker Multiple VXLAN fdb entries can be created for the all-zero MAC, which is the default destination. This gives an option (with some drawbacks) to handle multicast without the complexities of snooping. I have to try this out manually to see if it works.
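
For anyone who wants to experiment with what @sanimej describes, a rough manual sketch follows. The interface name vxlan0 and the peer VTEP addresses are placeholders, and the commands have to be run inside the overlay network's namespace (e.g. via nsenter); this is not the driver's own code path, just the underlying iproute2 mechanism.

# Head-end replication: one all-zero-MAC fdb entry per remote VTEP, so that
# broadcast/unknown-unicast/multicast frames are copied to every listed peer.
$ bridge fdb append 00:00:00:00:00:00 dev vxlan0 dst 192.168.10.11
$ bridge fdb append 00:00:00:00:00:00 dev vxlan0 dst 192.168.10.12
# Check the resulting entries
$ bridge fdb show dev vxlan0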

@dweomer commented Nov 15, 2015

@sanimej, @dave-tucker: any news on this one? We are looking for this in order to containerize, with minimal refactoring, a multicast-based service discovery integration in our stack. We could probably make unicast work, but would prefer not to incur a refactoring of that size, to avoid unintended consequences and/or further refactoring in our stack.

@mavenugo (Contributor) commented Nov 15, 2015

@dweomer this is not planned for the upcoming release, but I have added the help-wanted label to request help from any interested dev. If someone is interested in contributing this feature, we can help with design review and get this moving forward for the upcoming release.

@alvinr commented Dec 2, 2015

+1 - some infrastructure components (like Aerospike) rely on Multicast for cluster discovery.

@oobles commented Dec 2, 2015

+1 - This would be very useful. At the very least the documentation should note that this is not currently supported.

@bboreham (Contributor) commented Dec 10, 2015

Note there are other Docker Network plugins available which do support multicast. For instance the one I work on.

@jainvipin commented Feb 12, 2016

> Note there are other Docker Network plugins available which do support multicast. For instance the one I work on.

Same with Contiv plugin. More here.

@tomqwu commented Apr 14, 2016

+1. In order to adapt Guidewire Application multi-host clustering, this is a must.

@DSTOLF commented Jul 13, 2016

+1 Can't get Wildfly and mod_cluster to work in Swarm Mode, because the overlay network doesn't support multicast.

One could fall back to unicast, but that would also require providing a proxy list with the IP addresses of all httpd load balancers, which would be very difficult to figure out beforehand. So I would say that Wildfly and mod_cluster don't currently work in Swarm Mode. Regards.

@medined commented Jul 21, 2016

+1 to support http://crate.io

@DanyC97 commented Oct 17, 2016

Any update on getting multicast implemented?

@mavenugo (Contributor) commented Oct 17, 2016

@DanyC97 @medined @DSTOLF @tomqwu and others: this issue is currently labeled help-wanted. If someone is interested in contributing this feature, we will accept it.

@ghost commented Oct 17, 2016

@DanyC97 I used a macvlan underlay instead as a quick solution for my needs, and it worked fine.

@jocelynthode commented Oct 19, 2016

@codergr: Were you able to use macvlan in a swarm with services? Can you easily change the scope of a network?

@mavenugo (Contributor) commented Oct 19, 2016

@jocelynthode that is not supported yet, unfortunately. PTAL at moby/moby#27266; it is one option for supporting such a need. But it needs some discussion to get it right.

@mjlodge commented Oct 19, 2016

Another option is to use the Weave Net networking plugin for Docker, which has multicast support. Full disclosure: I work for Weaveworks.

@jonlatorre commented Jan 10, 2017

@mjlodge but there is no support for network plugins in swarm mode, no? So Weave can't be used with swarm mode and Docker services. It's a pity that in such an environment (Docker swarm and services) many of the applications that support clustering can't be used due to the lack of multicast.

@bboreham (Contributor) commented Jan 10, 2017

@jonlatorre local scope plugins do work, so Weave Net can be used with ordinary containers when Docker is in swarm mode. But not services.

With Docker 1.12 it is not possible to use Docker services with plugins.

Docker 1.13 is intended to support network plugins, but as of today, using Docker 1.13-rc5, we have been unable to get a network plugin to do anything with Docker services.

(note I work for Weaveworks)

@jonlatorre commented Jan 10, 2017

@bboreham thanks for the update. I hope that in the final release Weave will work with Docker swarm and services; I'm impatient to use it :)

@wkok commented Jul 11, 2017

+1

3 similar comments

@rpofuk commented Jul 19, 2017

+1

@gkozyryatskyy commented Aug 3, 2017

+1

@rauschbit commented Aug 10, 2017

+1

@tdevopsottawa commented Aug 17, 2017

+1, this issue is the only thing blocking me from using docker in my infrastructure

@fssilva commented Sep 5, 2017

+1

@blop commented Sep 5, 2017

I use a macvlan network for now because of this (macvlan is supported in swarm since version 17.06), but it's clearly less convenient.

@markwylde commented Sep 13, 2017

@mavenugo is there any plan, or are there tips, on how this feature could/should be designed? What would be the starting point for getting it implemented?

I'm guessing the code goes somewhere in here:
https://github.com/docker/libnetwork/tree/master/drivers/overlay

Does the driver contain a list, or a method of fetching, all the IPs within the network? Could it watch for a multicast packet and then replicate it to each of those IPs individually? Would this work, or would it be a performance hit?
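
Not a driver-level answer, but a userspace approximation of the replication idea above can be sketched with socat; the group 239.1.1.1:5000, the interface eth0, and the peer address 10.0.0.3 are all placeholders.

# Inside a container: receive datagrams sent to the multicast group and
# forward each one over unicast to a single peer; run one relay per peer IP.
$ socat UDP4-RECVFROM:5000,ip-add-membership=239.1.1.1:eth0,fork \
        UDP4-SENDTO:10.0.0.3:5000

Running one such relay per peer is essentially the manual form of the head-end replication the driver would otherwise have to perform.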

@gokhansari commented Sep 25, 2017

+1

@tunix commented Sep 26, 2017

I've been relying on Hazelcast's multicast discovery until I found out that the overlay network doesn't support multicast. An external network definition with the macvlan driver (swarm scoped) seems to be working, although it cannot be defined inside the compose file (as part of the stack). There is an issue already filed for this one as well: docker/cli#410

@intershopper commented Oct 30, 2017

+1

@deratzmann commented Nov 9, 2017

+1
@tunix right now I'm trying to install a Hazelcast cluster (running on a Payara Full Profile server) on different nodes via docker service... and am running into the same issue. Could you please describe your macvlan workaround? This issue seems to be a long-lasting one...

@tunix commented Nov 9, 2017

There are 2 solutions to this problem afaik:

  • Using hazelcast-discovery-spi plugin for Docker
  • Using macvlan network driver

It's been a while since my last trial on this but just creating a macvlan network and using it (as an external network) should be sufficient.

$ docker network create -d macvlan --attachable my-network
$ docker service create ... --network my-network ...
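
A hedged note on the commands above: on most hosts a macvlan network also needs a subnet, gateway, and parent interface from the underlying LAN; the values below are placeholders for your own environment.

$ docker network create -d macvlan \
    --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
    -o parent=eth0 --attachable my-network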

@deratzmann commented Nov 13, 2017

@tunix creating a macvlan network with swarm scope doesn't seem to work. The container starts, but it cannot reach any IP address. Running with overlay it works (but then multicast is not available).
Any ideas?

$ docker network create --driver macvlan --scope swarm sa_hazel
$ docker service create --network sa_hazel ...foo
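
One possible cause, offered only as a sketch: a swarm-scoped macvlan usually needs a per-node config-only network that pins the parent interface and subnet, which the command above omits. The interface, subnet, and names below are placeholders.

# On every node: a config-only network with that node's parent NIC and subnet
$ docker network create --config-only --subnet=192.168.1.0/24 \
    -o parent=eth0 sa_hazel-config
# On a manager: the swarm-scoped macvlan network referencing that config
$ docker network create -d macvlan --scope swarm \
    --config-from sa_hazel-config sa_hazel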

@conker84 commented Nov 29, 2017

+1

@dhet commented Nov 30, 2017

@deratzmann Using a network like yours, I can ping containers on remote hosts but multicast still doesn't work.
+1

@blop commented Jan 10, 2018

Found another tradeoff when using the macvlan driver in swarm.

The macvlan driver does not support port mappings, which prevents using "mode=host" published ports as described in https://docs.docker.com/engine/swarm/services/#publish-a-services-ports-directly-on-the-swarm-node

@torokati44 commented Feb 19, 2018

Just asking: Is there any progress on this?

@bbiallowons commented Apr 18, 2018

Is there any chance that this will be implemented sometime?

@dhet commented Apr 27, 2018

For the time being I suggest everyone use Weave Net, which works flawlessly in my setup.

@KylePreuss commented Oct 18, 2018

+1. From my own testing, multicast does not work with the bridge driver either. I'm not talking about routing multicast between the host's network and the containers' NAT'ed network -- I'm talking about two containers deployed side-by-side (same host) using the default bridge or a user-defined bridge network. Said containers cannot communicate via multicast. IMO getting multicast working within the bridge network would be a logical first step before moving on to the overlay network.

Also, the suggestion to use Weave Net will only work with Linux hosts.

I wish I knew earlier that containers cannot use multicast.

Edit: I know multicast should work with "net=host" but, aside from that not being an ideal solution in any sense, it does not work with Windows hosts.
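
For anyone who wants to reproduce the bridge-network observation above, a minimal sketch with socat; the network name, group address, and port are arbitrary.

$ docker network create mcast-test
# Terminal 1: join 239.0.0.1:9999 and print whatever arrives
$ docker run --rm -it --net mcast-test alpine sh -c \
    "apk add -q socat && socat UDP4-RECVFROM:9999,ip-add-membership=239.0.0.1:0.0.0.0,fork -"
# Terminal 2: send a single datagram to the group
$ docker run --rm --net mcast-test alpine sh -c \
    "apk add -q socat && echo hello | socat - UDP4-DATAGRAM:239.0.0.1:9999"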

@davidzwa commented Jan 11, 2020

Any update? We want to run discovery services in a Docker setup, because our images are automagically pulled by a provisioning service (IoT Edge). I can't update any binaries outside the Docker system... or it would take great hackery.
