Enable multicast between containers? #3043

Closed
benbooth493 opened this Issue Dec 4, 2013 · 74 comments

@benbooth493

benbooth493 commented Dec 4, 2013

I currently have a bunch of containers configured using veth as the network type. The host system has a bridge device and can ping 224.0.0.1 but the containers can't.

Any ideas?

@unclejack

unclejack commented Jan 18, 2014

Contributor

@benbooth493 Could you try https://github.com/jpetazzo/pipework to set up a dedicated network interface in the container, please? @jpetazzo confirms that this would help with multicast traffic.

@unclejack

unclejack commented Feb 21, 2014

Contributor

@benbooth493 pipework should allow you to set something like this up. Docker 0.9/1.0+ will have support for plugins and I believe that'll make it easier to come up with custom networking setups.

I'll close this issue now since it's not immediately actionable and it's going to be possible to do this via a future pipework Docker plugin. Please feel free to comment.

@unclejack unclejack closed this Feb 21, 2014

@vincentbernat

vincentbernat commented Apr 22, 2014

Contributor

Is there a reason for veth interfaces to be created without the multicast flag? Setting it would help get multicast working in Docker.

@jpetazzo

jpetazzo commented May 30, 2014

Contributor

@vincentbernat: it looks like it is created without MULTICAST because it's the default mode; but I think it's fairly safe to change that. Feel free to submit a pull request!

@HackerLlama

HackerLlama commented Jun 29, 2014

I had the same problem and confirmed that using pipework to define a dedicated interface did indeed work. That said, I'd very much like to see a way to support multicast out of the box in docker. Can someone point me to where in the docker source the related code lives so I can try a custom build with multicast enabled?

@jpetazzo

jpetazzo commented Jun 30, 2014

Contributor

I recently had a conversation with @spahl, who confirmed that it was necessary (and sufficient) to set the MULTICAST flag if you want to do multicast.

@unclejack: can we reopen that issue, please?

@HackerLlama: I think that the relevant code would be in https://github.com/docker/libcontainer/blob/master/network/veth.go (keep in mind that this code is vendored in the Docker repository).

@vincentbernat

vincentbernat commented Jun 30, 2014

Contributor

❦ 30 June 2014 13:25 -0700, Jérôme Petazzoni notifications@github.com:

@HackerLlama: I think that the relevant code would be in
https://github.com/docker/libcontainer/blob/master/network/veth.go
(keep in mind that this code is vendored in the Docker repository).

Maybe it would be easier to modify this here:
https://github.com/docker/libcontainer/blob/master/netlink/netlink_linux.go#L867

I suppose that adding:

msg.Flags = syscall.IFF_MULTICAST

would be sufficient (and maybe the same thing for the result of newInfomsgChild just below).

@jpetazzo

jpetazzo commented Jul 1, 2014

Contributor

Agreed, it makes more sense to edit the netlink package, since MULTICAST can (and should, IMHO!) be the default.

@bhyde

bhyde commented Jul 8, 2014

Can we reopen this? It was closed "since it's not immediately actionable". With vincentbernat's comment in mind it now appears not just actionable but simple. Pretty please?

@unclejack unclejack reopened this Jul 8, 2014

@vielmetti

vielmetti commented Jul 9, 2014

Agreed with @bhyde that this looks doable, and that multicast support would have a substantial positive effect on things like autodiscovery of resources provided through docker.

@erikh erikh added the ICC label Jul 16, 2014

@jhuiting

jhuiting commented Jul 23, 2014

This would really help me; it would make e.g. ZMQ pub/sub with Docker much easier. Is anyone already working on this?

@defunctzombie

defunctzombie commented Aug 9, 2014

Is rebuilding docker with

msg.Flags = syscall.IFF_MULTICAST

and installing this build as the daemon sufficient to get multicast working, or does the docker client (that builds the containers) also need some changes?

@rhasselbaum

rhasselbaum commented Aug 11, 2014

Multicast seems to be working fine for me between containers on the same host. In different shells, I start up two containers with:

 docker run -it --name node1 ubuntu:14.04 /bin/bash
 docker run -it --name node2 ubuntu:14.04 /bin/bash

Then in each one, I run:

apt-get update && apt-get install iperf

Then in node 1, I run:

iperf -s -u -B 224.0.55.55 -i 1

And in node 2, I run:

iperf -c 224.0.55.55 -u -T 32 -t 3 -i 1

I can see the packets from node 2 show up in node 1's console, so it looks like it's working. The only thing I haven't figured out yet is multicasting among containers on different hosts. I'm sure that'll require forwarding the multicast traffic through some iptables magic.
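
If you'd rather not install iperf in the containers, a minimal Go sketch of an equivalent smoke test is below. It reuses the 224.0.55.55 group from the iperf example above plus an arbitrary port 5001; neither value is Docker-specific. Run it without arguments in node1 (listener) and with "send" in node2 (sender):

// multicast_check.go - minimal multicast smoke test between two containers.
// "go run multicast_check.go" starts a listener; "go run multicast_check.go send"
// sends a few datagrams to the group.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	group := &net.UDPAddr{IP: net.ParseIP("224.0.55.55"), Port: 5001}

	if len(os.Args) > 1 && os.Args[1] == "send" {
		conn, err := net.DialUDP("udp4", nil, group)
		if err != nil {
			panic(err)
		}
		defer conn.Close()
		for i := 0; i < 5; i++ {
			conn.Write([]byte("hello multicast"))
			time.Sleep(time.Second)
		}
		return
	}

	// Listener: joins the group on the default interface.
	conn, err := net.ListenMulticastUDP("udp4", nil, group)
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	buf := make([]byte, 1500)
	for {
		n, src, err := conn.ReadFromUDP(buf)
		if err != nil {
			panic(err)
		}
		fmt.Printf("received %q from %v\n", string(buf[:n]), src)
	}
}

If the listener prints the messages, multicast between the two containers is working, same conclusion as with the iperf test.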

@ghost

ghost commented Sep 18, 2014

Please make it happen, if it is easy to fix! Thank you!

@Lawouach

Lawouach commented Oct 9, 2014

Hi there,

I'm also highly interested in understanding how to enable multicast in containers (between containers and the outside world). Do I have to compile docker myself for now?

Thanks,

@defunctzombie

defunctzombie commented Oct 9, 2014

Using the --net host option works for now, but it's obviously less than ideal if you want truly isolated container networking.

@Lawouach

Lawouach commented Oct 9, 2014

Indeed. That's what I'm using and it does work as expected. I was wondering if there could be an update on this ticket regarding what remains to be done in docker. There is a mention of a flag to be set; is there more work to it?

Cheers :)

@brunoborges

brunoborges commented Dec 7, 2014

How can we have multicast on Docker 1.3.2?

@defunctzombie

defunctzombie commented Dec 7, 2014

@brunoborges use --net host

@brunoborges

brunoborges commented Dec 8, 2014

@defunctzombie yeah, that will work. But are there any known downsides of using --net=host?

@hekaldama

hekaldama commented Dec 9, 2014

@brunoborges, yes, there are significant downsides IMHO, and it should only be used if you know what you are doing.

Take a look at:

https://docs.docker.com/articles/networking/#how-docker-networks-a-container

@hmeerlo

hmeerlo commented Jan 26, 2015

OK, so --net=host is not an option; it cannot be used together with --link. Has anyone tried what @defunctzombie said? Does it work? If so, why not integrate it? IMHO multicast is used by too many applications for discovery to ignore this issue.

@hmeerlo

hmeerlo commented Jan 29, 2015

OK, I gave it a try myself, but to no avail. I modified the code to set the IFF_MULTICAST flag. I see the veth interfaces coming up with MULTICAST enabled, but once the interface is up the MULTICAST flag is gone (ip monitor all):

[LINK]79: vethe9774fa: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether a2:ae:8c:b8:6c:0a brd ff:ff:ff:ff:ff:ff
[LINK]79: vethe9774fa: <BROADCAST,MULTICAST> mtu 1500 qdisc noop master docker0 state DOWN
    link/ether a2:ae:8c:b8:6c:0a brd ff:ff:ff:ff:ff:ff
[LINK]79: vethe9774fa: <BROADCAST,MULTICAST> mtu 1500 qdisc noop master docker0 state DOWN
    link/ether a2:ae:8c:b8:6c:0a brd ff:ff:ff:ff:ff:ff
[NEIGH]dev vethe9774fa lladdr a2:ae:8c:b8:6c:0a PERMANENT
[LINK]79: vethe9774fa: <BROADCAST,MULTICAST> mtu 1500 master docker0 state DOWN
    link/ether a2:ae:8c:b8:6c:0a
[LINK]79: vethe9774fa: <NO-CARRIER,BROADCAST,UP> mtu 1500 qdisc noqueue master docker0 state DOWN
    link/ether a2:ae:8c:b8:6c:0a brd ff:ff:ff:ff:ff:ff
[LINK]79: vethe9774fa: <NO-CARRIER,BROADCAST,UP> mtu 1500 master docker0 state DOWN
    link/ether a2:ae:8c:b8:6c:0a
[LINK]79: vethe9774fa: <NO-CARRIER,BROADCAST,UP> mtu 1500 master docker0 state DOWN
    link/ether a2:ae:8c:b8:6c:0a
[LINK]Deleted 78: vethf562f68: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 56:28:af:c2:e9:a0 brd ff:ff:ff:ff:ff:ff
[ROUTE]ff00::/8 dev vethe9774fa  table local  metric 256
[ROUTE][ROUTE]fe80::/64 dev vethe9774fa  proto kernel  metric 256
[LINK]79: vethe9774fa: <BROADCAST,UP,LOWER_UP> mtu 1500
    link/ether a2:ae:8c:b8:6c:0a
[LINK]79: vethe9774fa: <BROADCAST,UP,LOWER_UP> mtu 1500 master docker0 state UP
    link/ether a2:ae:8c:b8:6c:0a
[LINK]79: vethe9774fa: <BROADCAST,UP,LOWER_UP> mtu 1500 master docker0 state UP
    link/ether a2:ae:8c:b8:6c:0a
[LINK]79: vethe9774fa: <BROADCAST,UP,LOWER_UP> mtu 1500 master docker0 state UP
    link/ether a2:ae:8c:b8:6c:0a
[LINK]79: vethe9774fa: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP
    link/ether a2:ae:8c:b8:6c:0a brd ff:ff:ff:ff:ff:ff

@Lawouach

Lawouach commented Jan 30, 2015

I'd be interested in helping work on this issue, but at this stage the multicast support status is unclear to me. Does a docker container fail to receive routed multicast streams at all, or only between running containers?

@hmeerlo

hmeerlo commented Jan 30, 2015

Well, the plot thickens, because I had overlooked @rhasselbaum's comment. Multicast actually works fine between containers. It is just that the ifconfig or 'ip address show' output doesn't indicate this. I ran the exact same tests as @rhasselbaum and the test was successful. After that I tried my own solution with a distributed EHCache that uses multicast for discovery, and that worked as well. So there doesn't seem to be a problem anymore...

@Lawouach

Lawouach commented Jan 30, 2015

Alright. So the stock docker seems to have multicast working between containers. I'm not sure I understand the last part of your comment, though. More specifically, I'm wondering whether multicast can be forwarded from the host to the container with --net host (which is, well, expected).

@hmeerlo

hmeerlo commented Jan 30, 2015

I cannot answer your question about --net host because I require --link, and that combination is impossible.

@Lawouach

Lawouach commented Jan 30, 2015

As do I. Okay, I will play with it as well and report here.

@rhasselbaum

rhasselbaum commented Jan 30, 2015

@hmeerlo Multicast works fine between containers on the same host. But I think more work is needed (or a HOWTO) on getting it to work across hosts. I'm sure it would work with --net host but that has other drawbacks.

@Lawouach

Lawouach commented Feb 1, 2015

I finally got the time to try it for myself and I was able to use multicast in the various scenarios that I was interested in:

  • container to container
  • host to container
  • container to host

For the last two, just make sure you have the appropriate route, something along the lines of:

$ sudo route add -net 224.0.0.0/4 dev docker0

This worked with the stock docker 1.3.3 without --net host.
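
For the container-to-host direction, a small Go listener run on the host can confirm that traffic is actually crossing docker0. This is only a sketch: it assumes the golang.org/x/net/ipv4 package (not anything used by Docker itself) and reuses the arbitrary 224.0.55.55 group and port 5001 from the earlier iperf-style test:

// host_listener.go - join a multicast group on docker0 from the host and
// print whatever arrives, to verify the container-to-host path.
package main

import (
	"fmt"
	"net"

	"golang.org/x/net/ipv4"
)

func main() {
	ifi, err := net.InterfaceByName("docker0") // the default Docker bridge
	if err != nil {
		panic(err)
	}

	conn, err := net.ListenPacket("udp4", "0.0.0.0:5001")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	p := ipv4.NewPacketConn(conn)
	group := &net.UDPAddr{IP: net.ParseIP("224.0.55.55")}
	if err := p.JoinGroup(ifi, group); err != nil { // issue the IGMP join on docker0
		panic(err)
	}

	buf := make([]byte, 1500)
	for {
		n, _, src, err := p.ReadFrom(buf)
		if err != nil {
			panic(err)
		}
		fmt.Printf("received %q from %v\n", string(buf[:n]), src)
	}
}

The route above covers the host-to-container direction; this sketch only checks the reverse path, without needing --net host.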

@rivallu

rivallu commented Mar 2, 2015

@Lawouach when you say "container to container", is that on the same host?

@dlintw

dlintw commented Mar 23, 2015

Contributor

I need the default docker interface eth0 to provide MULTICAST, and I'm trying this patch, but it does not work:

diff --git a/vendor/src/github.com/docker/libcontainer/netlink/netlink_linux.go     b/vendor/src/github.com/docker/libcontainer/netlink/netlink_linux.go
index 3ecb81f..c78cd14 100644
--- a/vendor/src/github.com/docker/libcontainer/netlink/netlink_linux.go
+++ b/vendor/src/github.com/docker/libcontainer/netlink/netlink_linux.go
@@ -713,7 +713,7 @@ func NetworkCreateVethPair(name1, name2 string, txQueueLen int) error {
        }
        defer s.Close()

-       wb := newNetlinkRequest(syscall.RTM_NEWLINK, syscall.NLM_F_CREATE|syscall.NLM_F_EXCL|syscall.NLM_F_ACK)
+       wb := newNetlinkRequest(syscall.RTM_NEWLINK, syscall.NLM_F_CREATE|syscall.NLM_F_EXCL|syscall.NLM_F_ACK|syscall.IFF_MULTICAST)

        msg := newIfInfomsg(syscall.AF_UNSPEC)
        wb.AddData(msg)

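For what it's worth, the hunk above ORs IFF_MULTICAST into the netlink request header flags (the NLM_F_* values), while IFF_MULTICAST is an interface flag that belongs in the ifinfomsg payload, which would explain why the patch has no effect. Below is an untested sketch of where the flag more plausibly goes, assuming the vendored IfInfomsg exposes the usual Flags/Change fields of syscall.IfInfomsg; note @hmeerlo's earlier ip monitor trace, where the flag was lost once the interface was brought up, so this alone may not be enough:

// Sketch only, relative to NetworkCreateVethPair in netlink_linux.go:
// leave the netlink header flags alone and set IFF_MULTICAST on the
// ifinfomsg payload instead.
wb := newNetlinkRequest(syscall.RTM_NEWLINK, syscall.NLM_F_CREATE|syscall.NLM_F_EXCL|syscall.NLM_F_ACK)

msg := newIfInfomsg(syscall.AF_UNSPEC)
msg.Flags |= syscall.IFF_MULTICAST  // assumed field, inherited from syscall.IfInfomsg
msg.Change |= syscall.IFF_MULTICAST // mark the flag as one we intend to set
wb.AddData(msg)

As @vincentbernat noted earlier, the result of newInfomsgChild just below would presumably need the same treatment for the peer interface.
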
@jessfraz jessfraz removed the Networking label Jul 10, 2015

@ghost

ghost commented Aug 3, 2015

+1, this would make multi-host Elasticsearch clusters easier.

(Note that Elastic themselves don't actually recommend this in prod, citing the case of nodes accidentally joining a cluster, but that seems less likely in a containerized scenario.)

@mavenugo mavenugo added this to the 1.9.0 milestone Aug 3, 2015

@fernandoneto

fernandoneto commented Aug 5, 2015

+1

@christianhuening

christianhuening commented Aug 16, 2015

@mavenugo cool! So we'll see this in 1.9?

@emsi

emsi commented Aug 18, 2015

I'm on 1.9-dev. The veth/eth pair is created as such:

40: veth660ec14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
(...)
39: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default

Yet multicast does not seem to work over the overlay multi-host network.

@Lawouach

Lawouach commented Aug 18, 2015

@emsi: have you properly configured your routes?

@emsi

emsi commented Aug 27, 2015

On the underlay? Yes, since unicast on the overlay is working.
On the overlay there is no need for routing, as it's just one segment.

@ugurarpaci

ugurarpaci commented Sep 9, 2015

Just a workaround for a multicast-enabled Hazelcast cluster:

You can use Hazelcast version 1.6 by setting the trusted subnet block to enabled="true" in the default-cluster.xml configuration as follows:

<interfaces enabled="true">
  <interface>192.168.1.*</interface>
</interfaces>

Then, of course, you need to run your Docker container with the --net=host option.
Anyway, I'm still looking forward to seeing multicast support merged from the libnetwork library.

@icecrime icecrime removed this from the 1.9.0 milestone Oct 10, 2015

@perrocontodo

perrocontodo commented Oct 23, 2015

@icecrime care to comment why this has been removed from 1.9.0?

@cpuguy83

cpuguy83 commented Oct 23, 2015

Contributor

Because it's not ready, and 1.9 has been in code freeze as of last week.

@christianhuening

christianhuening commented Oct 23, 2015

Oddly enough, since Docker 1.8.1 I seem to have the multicast flag and my app is working just fine.

[screenshot: bildschirmfoto 2015-10-23 um 14 19 33]

@eyaldahari

eyaldahari commented Oct 26, 2015

Same here. I am running two official Elasticsearch docker containers on different hosts which are on the same subnet with the same broadcast address. Elasticsearch multicast discovery just does not work between the two containers, even though multicast is defined on the NIC. The docker version is 1.8.2.

@WooDzu

WooDzu commented Oct 26, 2015

And same here. Internal app using broadcasts for service discovery. Broadcast/multicast enabled within the Ubuntu container. Currently the containers are on the same host. Docker 1.8.3 and 1.9.

@alvinr

alvinr commented Nov 20, 2015

Contributor

+1 for multicast over the overlay network. I use Aerospike, which does self-discovery over multicast.

@oobles

oobles commented Dec 1, 2015

+1 for multicast over the overlay network. This should also be listed in the documentation as a limitation.

@thaJeztah

thaJeztah commented Dec 1, 2015

Member

There's an open ticket in libnetwork for supporting multicast in overlay, see: docker/libnetwork#552

@Lawouach

Lawouach commented Dec 2, 2015

As a side note, I've been using Weave successfully for multicast, for those who may want to add it to their toolbox.

@jumanjiman

jumanjiman commented Dec 4, 2015

👍 for weave

@mavenugo

mavenugo commented Dec 4, 2015

Contributor

@Lawouach @jumanjiman I haven't looked into Weave myself. Does it support native multicast using IGMP (snooping)?

@bboreham

bboreham commented Dec 10, 2015

Contributor

@mavenugo Weave doesn't specifically look at IGMP but it does let containers attached to the Weave network multicast to each other.

@emsi

emsi commented Dec 10, 2015

So you actually mean "broadcast"?

@bboreham

bboreham commented Dec 10, 2015

Contributor

Assuming @emsi meant that question to refer to Weave Net: multicast packets are transported to every host but only delivered to the individual containers that are listening for them. So somewhere in between.

@awh

awh commented Dec 10, 2015

So you actually mean "broadcast"?

It behaves like a switch hierarchy without IGMP snooping enabled: we effectively compute an unweighted minimum spanning tree and use it to forward broadcast and multicast traffic. IGMP snooping is an optimisation which only has any effect on multicast applications that use IGMP subscriptions. Weave's multicast support means things like service discovery Just Work (which I would argue is the main use case), but we're not as efficient as we could be if you had subsets of containers wanting to receive multicast traffic. That being said, if you have a requirement for efficient, high-volume multicast traffic, an overlay network would probably not be your tool of choice anyway...

@dreamcat4

dreamcat4 commented Dec 10, 2015

Part of the multicast protocols (UPnP / Bonjour / mDNS) requires sending and receiving broadcast packets to the 239.* listen addresses. That is the SSDP part:

https://en.wikipedia.org/wiki/Simple_Service_Discovery_Protocol

@rcarmo

rcarmo commented Feb 5, 2016

Just saw the 1.10 release notes. Are we there yet?

@dreamcat4

dreamcat4 commented Feb 5, 2016

@rcarmo until we get there I am using pipework. With that, multicast works well enough. To be clear, multicast's Simple Service Discovery Protocol (SSDP), AKA Bonjour, is generally provided by avahi-daemon in your container. But that in turn also needs DBUS as another service dependency installed right alongside it, since DBUS usually communicates with the multicast server application. I have been using s6-overlay to encapsulate all required services within the same container. Example here: https://github.com/dreamcat4/docker-images/tree/master/forked-daapd/services.d

Anyway, for other reasons it certainly would be nice to retire pipework one day. Pipework is useful because it sets up host-side L2 macvlan bridges. AFAICT seamless L2 networking is pretty much the de facto requirement for multicast (due to the 239.* broadcast nature of some of the packets). That is basically the same functionality as VMware/VirtualBox's 'Bridged mode' networking adapter.

Don't know / can't help further.

@mavenugo

mavenugo commented Feb 11, 2016

Contributor

It seems like this issue is being used to discuss multicast support for both the bridge and overlay drivers.
As indicated by this comment: #3043 (comment), multicast works just fine with the bridge driver (I tried 1.10).

Yes, the overlay driver needs multicast support (docker/libnetwork#552).

Given that, should we close this issue and use the above issue to track multicast support in the overlay driver?

@tiborvass

tiborvass commented Feb 11, 2016

Collaborator

Agreed with @mavenugo. This issue was opened specifically for single-host, which seems to have been resolved long ago. I suggest opening a new issue for multicast in overlay drivers. In the meantime there is an issue on libnetwork that people can follow.

@tiborvass tiborvass closed this Feb 11, 2016

@combitel

combitel commented Jul 5, 2016

Multicast between containers works, but containers still cannot receive multicast from outside. It doesn't work in either bridge or overlay networks. I've created a separate issue, #23659, for this use case. Can somebody please provide more information on why it doesn't work natively?

@dreamcat4

dreamcat4 commented Jul 5, 2016

Maybe you should switch over to the new macvlan driver:

http://stackoverflow.com/questions/35742807/docker-1-10-containers-ip-in-lan/36470828#36470828

@combitel

combitel commented Jul 6, 2016

Thanks @dreamcat4, but I need a fully isolated network behind NAT, which means I need to use IPvlan L3 mode, and the authors of the macvlan driver explicitly state here that:

-Ipvlan L3 mode drops all broadcast and multicast traffic.
