
Enable multicast between containers? #3043

Closed
benbooth493 opened this issue Dec 4, 2013 · 75 comments

@benbooth493 commented Dec 4, 2013

I currently have a bunch of containers configured using veth as the network type. The host system has a bridge device and can ping 224.0.0.1 but the containers can't.

Any ideas?
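For anyone trying to reproduce the symptom, a minimal check along these lines (the image is just a common default, not from the report; install iputils-ping first if the image lacks it) should show the difference:

    # on the host (works per the report above)
    ping -c 1 224.0.0.1

    # inside a container on the default docker0 bridge (no replies per the report)
    docker run --rm -it ubuntu:14.04 ping -c 1 224.0.0.1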

@unclejack (Contributor) commented Jan 18, 2014

@benbooth493 Could you try https://github.com/jpetazzo/pipework to set up a dedicated network interface in the container, please? @jpetazzo confirms that this would help with multicast traffic.
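For reference, a minimal pipework invocation along those lines (the host interface, container name, and address are illustrative; see the pipework README for the exact syntax) might be:

    # add an extra, multicast-capable interface to the container, attached to the host's eth0
    pipework eth0 mycontainer 192.168.1.10/24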

@unclejack (Contributor) commented Feb 21, 2014

@benbooth493 pipework should allow you to set something like this up. Docker 0.9/1.0+ will have support for plugins and I believe that'll make it easier to come up with custom networking setups.

I'll close this issue now since it's not immediately actionable and it's going to be possible to do this via a future pipework Docker plugin. Please feel free to comment.

@unclejack closed this Feb 21, 2014

@vincentbernat (Contributor) commented Apr 22, 2014

Is there a reason for the veth to be created without the MULTICAST flag? This would help to get multicast working in Docker.

@jpetazzo (Contributor) commented May 30, 2014

@vincentbernat: it looks like it is created without MULTICAST because it's the default mode; but I think it's fairly safe to change that. Feel free to submit a pull request!

@HackerLlama commented Jun 29, 2014

I had the same problem and confirmed that using pipework to define a dedicated interface did indeed work. That said, I'd very much like to see a way to support multicast out of the box in docker. Can someone point me to where in the docker source the related code lives so I can try a custom build with multicast enabled?

@jpetazzo (Contributor) commented Jun 30, 2014

I recently had a conversation with @spahl, who confirmed that it was necessary (and sufficient) to set the MULTICAST flag if you want to do multicast.

@unclejack: can we reopen that issue, please?

@HackerLlama: I think that the relevant code would be in https://github.com/docker/libcontainer/blob/master/network/veth.go (keep in mind that this code is vendored in the Docker repository).

@vincentbernat (Contributor) commented Jun 30, 2014

Maybe it would be easier to modify this here:
https://github.com/docker/libcontainer/blob/master/netlink/netlink_linux.go#L867

I suppose that adding:

    msg.Flags = syscall.IFF_MULTICAST

would be sufficient (maybe the same thing for the result of newInfomsgChild just below).


@jpetazzo (Contributor) commented Jul 1, 2014

Agreed, it makes more sense to edit the netlink package, since MULTICAST can (and should, IMHO!) be the default.

@bhyde commented Jul 8, 2014

Can we reopen this? It was closed "since it's not immediately actionable". With vincentbernat's comment in mind, it now appears not just actionable but simple. Pretty please?

@unclejack reopened this Jul 8, 2014

@vielmetti commented Jul 9, 2014

Agreed with @bhyde that this looks doable, and that multicast support would have a substantial positive effect on things like autodiscovery of resources provided through docker.

@erikh added the ICC label Jul 16, 2014

@jhuiting commented Jul 23, 2014

This would really help me; it makes e.g. ZMQ pub/sub with Docker much easier. Is anyone already working on this?

@defunctzombie commented Aug 9, 2014

Is rebuilding Docker with

    msg.Flags = syscall.IFF_MULTICAST

and installing that build as the daemon sufficient to get multicast working, or does the docker client (that builds the containers) also need some changes?
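As a side note, the MULTICAST flag can also be inspected and toggled at runtime with iproute2, which makes it easy to test the effect without rebuilding anything (the veth name below is illustrative; look it up with ip link on the host):

    # check whether the container-side veth carries the MULTICAST flag
    ip link show vethabc123

    # set it by hand (needs root / CAP_NET_ADMIN)
    ip link set dev vethabc123 multicast on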

@rhasselbaum commented Aug 11, 2014

Multicast seems to be working fine for me between containers on the same host. In different shells, I start up two containers with:

 docker run -it --name node1 ubuntu:14.04 /bin/bash
 docker run -it --name node2 ubuntu:14.04 /bin/bash

Then in each one, I run:

apt-get update && apt-get install iperf

Then in node 1, I run:

iperf -s -u -B 224.0.55.55 -i 1

And in node 2, I run:

iperf -c 224.0.55.55 -u -T 32 -t 3 -i 1

I can see the packets from node 2 show up in node 1's console, so looks like it's working. The only thing I haven't figured out yet is multicasting among containers on different hosts. I'm sure that'll require forwarding the multicast traffic through some iptables magic.

@ghost commented Sep 18, 2014

Please make it happen, if it is easy to fix! Thank you!

@Lawouach commented Oct 9, 2014

Hi there,

I'm also highly interested in understanding how to enable multicast in containers (between containers and the outside world). Do I have to compile Docker myself for now?

Thanks,

@defunctzombie commented Oct 9, 2014

Using the --net host option works for now, but it's obviously less than ideal in the true isolated-networking container flow.
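For completeness, that workaround is just (image and command are placeholders):

    docker run --net=host -it ubuntu:14.04 /bin/bash

The container then shares the host's network stack, so multicast behaves exactly as it does on the host, at the cost of losing network isolation and per-container port mapping.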

@Lawouach commented Oct 9, 2014

Indeed. That's what I'm using and it does work as expected. I was wondering if there could be an update on this ticket regarding what remains to be done in Docker. There is a mention of a flag to be set; is there more work to it?

Cheers :)

@brunoborges commented Dec 7, 2014

How can we have multicast on Docker 1.3.2?

@defunctzombie commented Dec 7, 2014

@brunoborges use --net host

@brunoborges commented Dec 8, 2014

@defunctzombie yeah, that will work. But are there any known downsides of using --net=host?

@hekaldama commented Dec 9, 2014

@brunoborges, yes, there are significant downsides IMHO, and it should only be used if you know what you are doing.

Take a look at:

https://docs.docker.com/articles/networking/#how-docker-networks-a-container

@hmeerlo commented Jan 26, 2015

Ok, so --net=host is not an option; it cannot be used together with --link. Has anyone tried what @defunctzombie said? Does it work? If so, why not integrate it? IMHO multicast is used by too many applications for discovery to ignore this issue.

@hmeerlo commented Jan 29, 2015

Ok, I gave it a try myself, but to no avail. I modified the code to set the IFF_MULTICAST flag. I see the veth interfaces coming up with MULTICAST enabled, but once the interface is up, MULTICAST is gone (ip monitor all):

[LINK]79: vethe9774fa: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether a2:ae:8c:b8:6c:0a brd ff:ff:ff:ff:ff:ff
[LINK]79: vethe9774fa: <BROADCAST,MULTICAST> mtu 1500 qdisc noop master docker0 state DOWN
    link/ether a2:ae:8c:b8:6c:0a brd ff:ff:ff:ff:ff:ff
[LINK]79: vethe9774fa: <BROADCAST,MULTICAST> mtu 1500 qdisc noop master docker0 state DOWN
    link/ether a2:ae:8c:b8:6c:0a brd ff:ff:ff:ff:ff:ff
[NEIGH]dev vethe9774fa lladdr a2:ae:8c:b8:6c:0a PERMANENT
[LINK]79: vethe9774fa: <BROADCAST,MULTICAST> mtu 1500 master docker0 state DOWN
    link/ether a2:ae:8c:b8:6c:0a
[LINK]79: vethe9774fa: <NO-CARRIER,BROADCAST,UP> mtu 1500 qdisc noqueue master docker0 state DOWN
    link/ether a2:ae:8c:b8:6c:0a brd ff:ff:ff:ff:ff:ff
[LINK]79: vethe9774fa: <NO-CARRIER,BROADCAST,UP> mtu 1500 master docker0 state DOWN
    link/ether a2:ae:8c:b8:6c:0a
[LINK]79: vethe9774fa: <NO-CARRIER,BROADCAST,UP> mtu 1500 master docker0 state DOWN
    link/ether a2:ae:8c:b8:6c:0a
[LINK]Deleted 78: vethf562f68: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 56:28:af:c2:e9:a0 brd ff:ff:ff:ff:ff:ff
[ROUTE]ff00::/8 dev vethe9774fa  table local  metric 256
[ROUTE][ROUTE]fe80::/64 dev vethe9774fa  proto kernel  metric 256
[LINK]79: vethe9774fa: <BROADCAST,UP,LOWER_UP> mtu 1500
    link/ether a2:ae:8c:b8:6c:0a
[LINK]79: vethe9774fa: <BROADCAST,UP,LOWER_UP> mtu 1500 master docker0 state UP
    link/ether a2:ae:8c:b8:6c:0a
[LINK]79: vethe9774fa: <BROADCAST,UP,LOWER_UP> mtu 1500 master docker0 state UP
    link/ether a2:ae:8c:b8:6c:0a
[LINK]79: vethe9774fa: <BROADCAST,UP,LOWER_UP> mtu 1500 master docker0 state UP
    link/ether a2:ae:8c:b8:6c:0a
[LINK]79: vethe9774fa: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP
    link/ether a2:ae:8c:b8:6c:0a brd ff:ff:ff:ff:ff:ff
@Lawouach commented Jan 30, 2015

I'd be interested in helping work on this issue but, at this stage, the multicast support status is unclear to me. Do Docker containers fail to receive a routed multicast stream at all, or just between running containers?

@hmeerlo commented Jan 30, 2015

Well, the plot thickens, because I had overlooked @rhasselbaum's comment. Multicast actually works fine between containers; it is just that the ifconfig or 'ip address show' output doesn't indicate this. I ran the exact same tests as @rhasselbaum and they were successful. After that I tried my own solution with a distributed EHCache that uses multicast for discovery, and that worked as well. So there doesn't seem to be a problem anymore...
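One way to double-check this without relying on the interface flags (the group address and interface name are just the ones from the iperf example above): while a receiver is bound to the group inside the container, the kernel's IGMP state shows the membership.

    # inside the container running: iperf -s -u -B 224.0.55.55
    ip maddr show dev eth0    # lists the multicast groups joined on eth0
    cat /proc/net/igmp        # kernel's per-interface IGMP membership table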

@icecrime removed this from the 1.9.0 milestone Oct 10, 2015

@perrocontodo commented Oct 23, 2015

@icecrime care to comment why this has been removed from 1.9.0?

@cpuguy83 (Contributor) commented Oct 23, 2015

Because it's not ready, and 1.9 has been in code freeze as of last week.

@christianhuening commented Oct 23, 2015

Oddly enough, since Docker 1.8.1 I seem to have the multicast flag and my app is working just fine.
[screenshot: bildschirmfoto 2015-10-23 um 14 19 33]

@eyaldahari commented Oct 26, 2015

Same here. I am running two official Elasticsearch Docker containers on different hosts, which are on the same subnet with the same broadcast address. Elasticsearch multicast discovery just does not work between the two containers, even though multicast is defined on the NIC. The Docker version is 1.8.2.

@WooDzu commented Oct 26, 2015

Same here: an internal app using broadcasts for service discovery, with broadcast/multicast enabled within the Ubuntu container. Currently the containers are on the same host. Docker 1.8.3 and 1.9.

@alvinr (Contributor) commented Nov 20, 2015

+1 for multicast over the overlay network. I use Aerospike, which does self-discovery over multicast.

@oobles commented Dec 1, 2015

+1 for multicast over the overlay network. This should also be listed in the documentation as a limitation.

@thaJeztah (Member) commented Dec 1, 2015

There's an open ticket in libnetwork for supporting multicast in overlay, see: docker/libnetwork#552

@Lawouach commented Dec 2, 2015

As a side note, I've been using weave successfully for multicast, for those who may want to add it to their toolbox.
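For anyone who wants to try that route, the usual pattern with weave's CLI at the time looked roughly like this (a sketch, not an endorsement of any specific version; the image is a placeholder):

    # on each host
    weave launch
    eval $(weave env)

    # containers started via the weave-wrapped Docker endpoint are attached to the weave
    # network and can multicast to each other across hosts
    docker run -it --name node1 ubuntu:14.04 /bin/bash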

@jumanjiman commented Dec 4, 2015

👍 for weave

@mavenugo (Contributor) commented Dec 4, 2015

@Lawouach @jumanjiman I haven't looked into weave myself. Does it support native multicast using IGMP (snooping)?

@bboreham (Contributor) commented Dec 10, 2015

@mavenugo Weave doesn't specifically look at IGMP but it does let containers attached to the Weave network multicast to each other.

@emsi commented Dec 10, 2015

So you actually mean "broadcast"?

@bboreham (Contributor) commented Dec 10, 2015

Assuming @emsi meant that question to refer to Weave Net: multicast packets are transported to every host, but only delivered to the individual containers that are listening for them. So, somewhere in between.

@awh commented Dec 10, 2015

So you actually mean "broadcast"?

It behaves like a switch hierarchy without IGMP snooping enabled - we effectively compute an unweighted minimum spanning tree and use it to forward broadcast and multicast traffic. IGMP snooping is an optimisation which only has any effect on multicast applications that use IGMP subscriptions... weave's multicast support means things like service discovery Just Work (which I would argue is the main use case) but we're not as efficient as we could be if you had subsets of containers wanting to receive multicast traffic. That being said, if you have a requirement for efficient high volume multicast traffic an overlay network would probably not be your tool of choice anyway...

@dreamcat4 commented Dec 10, 2015

Part of the multicast discovery protocols (uPnP / Bonjour / mDNS) requires sending and receiving packets addressed to multicast groups in the 239.* range (SSDP uses 239.255.255.250). That is the SSDP part:

https://en.wikipedia.org/wiki/Simple_Service_Discovery_Protocol


@rcarmo commented Feb 5, 2016

Just saw the 1.10 release notes. Are we there yet?

@dreamcat4 commented Feb 5, 2016

[EDIT]

@rcarmo until we get there, I am using pipework. With that, multicast works well enough. To be clear, multicast's simple service discovery (SSDP, aka Bonjour) is generally provided by avahi-daemon in your container, but that in turn also needs DBus as another service dependency installed right alongside it, where DBus usually communicates with the multicast server application. I have been using s6-overlay to encapsulate all the required services within the same container. Example here: https://github.com/dreamcat4/docker-images/tree/master/forked-daapd/services.d

Anyway, for other reasons it certainly would be nice to retire pipework one day. Pipework is useful because it sets up host-side L2 macvlan bridges. AFAICT seamless L2 networking is pretty much the de-facto requirement for multicast (due to the group-addressed 239.* nature of some of the packets). That is basically the same functionality as VMware / VirtualBox's 'Bridged mode' networking adapter.

Don't know / can't help further.

@mavenugo (Contributor) commented Feb 11, 2016

It seems like this issue is being used to discuss multicast support for both the bridge and overlay drivers. As indicated by this comment: #3043 (comment), multicast works just fine with the bridge driver (I tried 1.10).

Yes, the overlay driver needs multicast support (docker/libnetwork#552).

Given that, should we close this issue and use the above issue to track multicast support in the overlay driver?

@tiborvass (Collaborator) commented Feb 11, 2016

Agreed with @mavenugo. This issue was opened specifically for single-host multicast, which seems to have been resolved long ago. I suggest opening a new issue for multicast in overlay drivers. In the meantime, there is an issue on libnetwork people can follow.

@tiborvass closed this Feb 11, 2016

@combitel commented Jul 5, 2016

Multicast between containers works, but containers still cannot receive multicast from outside. It doesn't work in either bridge or overlay networks. I've created a separate issue, #23659, for this use case. Can somebody please provide more information on why it doesn't work natively?

@dreamcat4 commented Jul 5, 2016

Maybe you should switch over to the new macvlan driver

http://stackoverflow.com/questions/35742807/docker-1-10-containers-ip-in-lan/36470828#36470828
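For reference, a macvlan network is created roughly like this (subnet, gateway, and parent interface are placeholders for your own LAN; check the docker network create documentation for your version):

    # create a macvlan network attached to the host's physical NIC
    docker network create -d macvlan \
      --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
      -o parent=eth0 macnet

    # containers on it get addresses directly on the physical LAN segment
    docker run --net=macnet --ip=192.168.1.100 -it ubuntu /bin/bash

Because the container then sits directly on the L2 segment, multicast from other machines on that LAN can typically reach it.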


@combitel commented Jul 6, 2016

Thanks @dreamcat4, but I need a fully isolated network behind NAT, which means I need to use IPvlan L3 mode, and the authors of the macvlan/ipvlan driver explicitly state here that:

"Ipvlan L3 mode drops all broadcast and multicast traffic."

@sleebapaul commented Jan 10, 2019

(quoting @rhasselbaum's iperf test from Aug 11, 2014, above)

@rhasselbaum Hey, I've tried this and it worked for me. But how can I send packets from the host to this multicast network so that they are received by the containers?
