Multicast in Overlay driver #552
@nicklaslof Yes, this is the current behavior. overlay is implemented using vxlan unicast, so handling multicast needs some form of packet replication. We are looking into possible options to support multicast on top of overlay. |
Just to make it clear (especially since I wrote host instead of container in my original text): I mean using multicast between containers while still having vxlan unicast between the Docker hosts. |
@sanimej @mrjana @mavenugo hey guys, any update on whether there's a solution in sight for this one? Per #740, this impacts use of the official elasticsearch image for creating a cluster. If we can outline a possible solution here, perhaps someone from the community can attempt a fix if we don't have bandwidth for 1.10 |
@dave-tucker Multiple vxlan fdb entries can be created for the all-zero MAC, which is the default destination. This gives an option (with some drawbacks) to handle multicast without the complexities of snooping. I have to try this out manually to see if it works. |
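For anyone wanting to experiment with the all-zero-MAC FDB idea manually, it can be tried with iproute2. This is only a sketch; the VNI, parent interface, and peer IPs are made-up values for illustration, and it needs root:

```shell
# Create a unicast VXLAN device with no default remote.
ip link add vxlan0 type vxlan id 42 dev eth0 dstport 4789

# Append one all-zero-MAC FDB entry per remote host. Frames with an
# unknown destination (including multicast/broadcast) are then
# head-end replicated to every peer listed here.
bridge fdb append 00:00:00:00:00:00 dev vxlan0 dst 192.168.1.11
bridge fdb append 00:00:00:00:00:00 dev vxlan0 dst 192.168.1.12

ip link set vxlan0 up
```

The drawback mentioned above is inherent to this approach: every multicast frame is sent once per peer over unicast, so traffic grows linearly with cluster size.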
@sanimej, @dave-tucker: any news on this one? We are looking for this so as to support containerization, with minimal refactoring, of a multicast-based service discovery integration in our stack. We can probably make unicast work but would prefer to not incur such a refactoring so as to avoid unintended consequences and/or further refactoring in our stack. |
@dweomer this is not planned for the upcoming release. But, have added the |
+1 - some infrastructure components (like Aerospike) rely on Multicast for cluster discovery. |
+1 - This would be very useful. At the very least the documentation should note that this is not currently supported. |
Note there are other Docker Network plugins available which do support multicast. For instance the one I work on. |
+1. In order to adopt Guidewire Application multi-host clustering, this is a must. |
+1 Can't get Wildfly and mod_cluster to work in Swarm mode, because the overlay network doesn't support multicast. One could fall back to unicast, but since one would also need to provide a proxy list with the IP addresses of all httpd load balancers, and it would be very difficult to figure those out beforehand, I would say that Wildfly and mod_cluster don't currently work in Swarm mode. Regards. |
+1 to support http://crate.io |
Any update on getting multicast implemented? |
@DanyC97 I used a macvlan underlay instead as a quick solution for my needs and it worked fine. |
@codergr: Were you able to use macvlan in a swarm with services? Can you easily change the scope of a network? |
@jocelynthode that is not supported yet, unfortunately. PTAL at moby/moby#27266, which is one option for supporting such a need. But it needs some discussion to get it right. |
Another option is to use the Weave Net networking plugin for Docker, which has multicast support. Full disclosure: I work for Weaveworks. |
@mjlodge but there is no support for network plugins in swarm mode, no? So Weave can't be used in swarm mode with docker services. It's a pity that in such an environment (docker swarm and services) many of the applications that support clustering can't be used due to the lack of multicast. |
@jonlatorre local scope plugins do work, so Weave Net can be used with ordinary containers when Docker is in swarm mode. But not services. With Docker 1.12 it is not possible to use Docker services with plugins. Docker 1.13 is intended to support network plugins, but as of today, using Docker 1.13-rc5, we have been unable to get a network plugin to do anything with Docker services. (note I work for Weaveworks) |
@bboreham thanks for the update. I hope that in the final release Weave will work with Docker swarm and services; I'm impatient to use it :) |
+1 |
I use macvlan network for now because of this (macvlan is supported in swarm since version 17.06), but it's clearly less convenient. |
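For anyone trying the macvlan route mentioned above, a local-scope macvlan network can be sketched like this. The subnet, gateway, and parent interface are placeholders and must match your physical LAN:

```shell
# Create a macvlan network bound to the host NIC. Containers attached
# to it get addresses on the physical LAN, so link-local multicast
# between them works like it would for real hosts on that segment.
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 pub_net

# Attach a container and inspect its LAN address.
docker run --rm --network pub_net alpine ip addr
```

Note the usual macvlan caveat: the host itself cannot talk to containers on the macvlan network through the parent interface without extra setup.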
@mavenugo is there any plan or tips on how this feature could/should be designed? What would be the starting point for getting it implemented? I'm guessing the code goes somewhere in here: Does the driver contain a list of, or a method for fetching, all the IPs within the network? Could it watch for a multicast packet and then replicate it to all the IPs individually? Would this work, or would it be a performance hit? |
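The replication idea in the comment above can be prototyped in userspace without touching the driver: join the multicast group, then re-send each datagram as unicast to every known peer. This is only a sketch; peer discovery and loop prevention are left out, and the group, port, and peer IPs are made-up values:

```python
import socket
import struct

GROUP, PORT = "239.1.2.3", 5007          # assumed multicast group/port
PEERS = ["10.0.0.11", "10.0.0.12"]       # assumed remote container IPs
RELAY_PORT = 5008                        # unicast port peers listen on

def replicate(payload: bytes, peers, sock, port=RELAY_PORT):
    """Fan a single multicast payload out as one unicast datagram per peer."""
    for peer in peers:
        sock.sendto(payload, (peer, port))
    return len(peers)

def run_relay():
    # Join the group so the kernel delivers multicast traffic to us.
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    rx.bind(("", PORT))
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        payload, _src = rx.recvfrom(65535)
        replicate(payload, PEERS, tx)
```

The performance question stands: this trades one multicast frame for N unicast sends, which is exactly the head-end replication cost an in-driver solution would also pay.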
+1 |
I've been relying on Hazelcast's multicast discovery until I found out that the overlay network doesn't support multicast. An external network definition with |
+1 |
+1 |
There are 2 solutions to this problem afaik: It's been a while since my last trial on this, but just creating a macvlan network and using it (as an external network) should be sufficient. |
@tunix creating a macvlan in a swarm scope doesn't seem to work. The container starts, but it cannot reach any IP address. Running with overlay it works (but then multicast is not available). docker service create --network sa_hazel ...foo |
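For what it's worth, since 17.06 the documented pattern for swarm-scoped macvlan is a two-step setup: a node-local `--config-only` network on each host, then a swarm-scoped network created `--config-from` it. A sketch, where the subnet, gateway, parent NIC, and image name are placeholders:

```shell
# On every node: a config-only template holding that node's local settings.
docker network create --config-only \
  --subnet 192.168.1.0/24 --gateway 192.168.1.1 \
  -o parent=eth0 sa_hazel-config

# On a manager: the swarm-scoped macvlan network referencing the template.
docker network create -d macvlan --scope swarm \
  --config-from sa_hazel-config sa_hazel

docker service create --network sa_hazel --name foo myimage
```

Creating the macvlan network directly with `--scope swarm` but without per-node configs is a plausible cause of the "container starts but cannot reach any IP address" symptom described above.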
+1 |
@deratzmann Using a network like yours, I can ping containers on remote hosts but multicast still doesn't work. |
Found another tradeoff when using the macvlan driver in swarm: the macvlan driver does not support port mappings, which prevents using "mode=host" published ports as described in https://docs.docker.com/engine/swarm/services/#publish-a-services-ports-directly-on-the-swarm-node |
Just asking: Is there any progress on this? |
Is there any chance, that this will be implemented sometime? |
For the time being I suggest everyone use Weave Net, which works flawlessly in my setup. |
+1. From my own testing, multicast does not work with the bridge driver either. I'm not talking about routing multicast between the host's network and the containers' NAT'ed network -- I'm talking about two containers deployed side-by-side (same host) using the default bridge or a user-defined bridge network. Said containers cannot communicate via multicast. IMO getting multicast working within the bridge network would be a logical first step before moving on to the overlay network. Also, the suggestion to use Weave Net will only work with Linux hosts. I wish I knew earlier that containers cannot use multicast. Edit: I know multicast should work with "net=host" but, aside from that not being an ideal solution in any sense, it does not work with Windows hosts. |
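A quick way to check whether multicast delivery works at all inside a given container or network namespace is a loopback send/receive in a single process. The group and port below are arbitrary test values; this checks the local stack only, not cross-container delivery:

```python
import socket
import struct

GROUP, PORT = "239.255.0.1", 5007   # arbitrary test group/port

# Receiver: bind the port and join the group on the loopback interface.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
rx.bind(("", PORT))
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("127.0.0.1"))
rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
rx.settimeout(5)

# Sender: route the datagram via loopback and let it loop back locally.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF, socket.inet_aton("127.0.0.1"))
tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 1)
tx.sendto(b"ping", (GROUP, PORT))

data, _ = rx.recvfrom(1024)
print(data.decode())
```

Running one copy of this in each of two side-by-side containers (sender in one, receiver in the other, with the interface set to the bridge network's interface instead of loopback) reproduces the failure described above on the default bridge.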
Any update? I want to run discovery services in a Docker setup, because our images are automagically pulled by a provisioning service (IoT-Edge). I can't update any binaries outside the Docker system... or it would be great hackery. |
@KylePreuss it is possible to set up side-by-side (same host) multicast traffic between two or more containers using virtual ethernet:
- Create the
- Next the
- Bring created
- Bring created
- Create Docker network configuration for created
- Create Docker network configuration for created
- Create Docker Swarm network
- Create Docker Swarm network

Use it in Docker compose:

services:
  multicast-sender:
    networks:
      - veth1
  multicast-receiver:
    networks:
      - veth2
networks:
  veth1:
    external: true
  veth2:
    external: true |
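The steps above can be sketched as follows. Everything here is an assumption reconstructed from the network names in the compose file (interface names, subnets, and driver options may differ from the original), and it needs root plus a swarm manager:

```shell
# Create a veth pair and bring both ends up.
ip link add veth1 type veth peer name veth2
ip link set veth1 up
ip link set veth2 up

# One config-only network per veth end, holding node-local settings.
docker network create --config-only --subnet 10.10.1.0/24 \
  -o parent=veth1 veth1-config
docker network create --config-only --subnet 10.10.2.0/24 \
  -o parent=veth2 veth2-config

# Swarm-scoped macvlan networks built from those configs; the compose
# file then references them as external networks veth1 and veth2.
docker network create -d macvlan --scope swarm --config-from veth1-config veth1
docker network create -d macvlan --scope swarm --config-from veth2-config veth2
```

Since the two veth ends form a single L2 link, multicast frames sent by a container on veth1 are visible to containers on veth2, which is what makes the sender/receiver pair in the compose file work.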
The solution to use weave net seems to work on linux hosts, but is there a way to achieve this on windows hosts? I would like to have linux host as the manager and windows host as a worker. I can install the weave plugin, but multicast communication between the hosts does not work |
Weave has not been updated in over 4 years. Are there any implementations now? |
Multicast network packets don't seem to be transported to all containers in an overlay network configuration.
I have investigated this a bit, trying various manual configs inside the network namespace, but haven't gotten anywhere.