
implement MLDv1 #574

Closed · wants to merge 1 commit into from
Conversation

@ghost commented Feb 24, 2014

This patch adds support for MLDv1 to Contiki 3.x. MLD is disabled by default, but can be enabled by defining UIP_CONF_MLD to a non-zero value (a configuration sketch follows the list below). When enabled, a Contiki node responds to MLD queries and multicast address events as required by RFC 2710, i.e.:

  • ff02::1 will never be reported
  • other addresses will be reported within the time frame set by the querier
  • new addresses will be reported a number of times (currently 3), and less often if another node reports the same address
  • when addresses are deconfigured, MLDv1 done messages are sent
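
A minimal sketch of how this is enabled, assuming the usual project-conf.h mechanism (UIP_CONF_MLD is the option introduced by this patch; everything else is standard project configuration):

/* project-conf.h -- sketch: enable the MLDv1 support added by this patch */
#ifndef PROJECT_CONF_H_
#define PROJECT_CONF_H_

#define UIP_CONF_MLD 1   /* 0 (the default) leaves MLD disabled */

#endif /* PROJECT_CONF_H_ */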

When this patch was first written, we decided that MLDv1 is more than sufficient for our needs, and probably for all Contiki nodes. MLDv2 was deemed too complex for too little benefit, and all MLD queriers must understand v1 messages anyway, so we went with MLDv1.

@g-oikonomou mentioned this pull request Feb 24, 2014
@ghost (Author) commented Feb 25, 2014

The Travis tests all pass now. My patch had a slight problem with datatype definitions (it used uN_t types, which don't seem to exist everywhere); those are now fixed. The one test that still fails passed in earlier runs and now seems to fail because of a faulty buildenv setup.
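
(For clarity, the datatype fix amounts to switching to the C99 fixed-width types used elsewhere in Contiki 3.x; the declarations below are only illustrative, not taken from the patch.)

#include <stdint.h>

struct mld_example {          /* hypothetical struct, for illustration */
  uint8_t  flags;             /* was: u8_t  flags;     */
  uint16_t max_delay;         /* was: u16_t max_delay; */
};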

#define PRINTF(...)
#define PRINT6ADDR(addr)
#endif

A Member commented on the diff excerpt above:

You can use uip-debug.h instead of defining these macros.
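
For reference, a sketch of that suggestion, assuming the standard uip-debug.h header (its path is net/uip-debug.h or net/ip/uip-debug.h depending on the Contiki tree):

#define DEBUG DEBUG_NONE        /* or DEBUG_PRINT while developing */
#include "net/ip/uip-debug.h"   /* provides PRINTF() and PRINT6ADDR() */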

@nvt (Member) commented Mar 5, 2014

I think that this could be OK to merge once we get a clean Travis test, and my comment above is addressed. Fortunately, the Travis failure seems to have been caused by problems that are not related to this pull request.

@g-oikonomou (Contributor) commented:

Strongly disagree with merging this as-is. More when I'm not typing on a phone :)

On 5 Mar 2014, at 00:51, Nicolas Tsiftes notifications@github.com wrote:

I think that this could be OK to merge once we get a clean Travis test, and my comment above is addressed.



@g-oikonomou (Contributor) commented:

As per my previous post, here are my concerns:

  • In the original pull (implement MLDv1 #572), I suggested checking out the changes recommended in Add Support for IPv6 Multicast #364. I see that this hasn't happened and this causes issues in a couple of places, as discussed below.
  • Perhaps most crucially, this implementation appears to be sending MLD reports out of the wireless interface. The IETF is pushing MPL, and RPL supports its own multicast group management mechanism. I always thought that MLD would be a great feature for advertising group membership to the Internet, not inside the lowpan. Assuming we decide, for whatever reason, that we want MLD within the lowpan, this should be updated to play nicely with RPL MOP3. Since you are using MLD, can you provide a use-case whereby MLD would be useful within the lowpan instead of RPL MOP3 or MPL/Trickle Multicast?
  • There is some overlap with Add Support for IPv6 Multicast #364, which does things in a more standards-compliant fashion. For instance, uip_is_addr_routable_mcast(a) should be checking the scop bits of (a)->u8[1] and not the entire byte (see the sketch after this list). Furthermore, the comment refers to RFC 3513, but RFC 4291 obsoletes 3513. The check should be based on RFC 4291 and also on draft-ietf-6man-multicast-scopes (which updates 4291). This draft introduces scop 3: Realm-Local and, as far as I know, is moving forward.
  • This implementation only reports our own mcast addresses. Perhaps I am misinterpreting RFC2710 here, but shouldn't we also be reporting multicast addresses present on the link? That's how I'm interpreting Sec 4, page 5, paragraph 3. Add Support for IPv6 Multicast #364 adds multicast forwarding tables and that's how it differentiates between multicast addresses we are subscribed to and multicast addresses present on link. Could you please clarify and point me to the correct location in 2710?
  • uip_icmp6_ml_report_input doesn't appear to be doing anything other than incrementing a counter? IMHO it should be used to cause changes to forwarding tables (see above).
  • Do we really need to define UIP_ICMP6_MLD_BUF, struct uip_icmp6_mld1, struct uip_ext_hdr_rtr_alert_tlv, struct uip_ext_hdr_padn_tlv inside the generic uIP core? I would have preferred all of them to be inside uip-mld.{c,h}.
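
To illustrate the scop point, a minimal sketch (not the exact macro from #364; treating "routable" as wider than link-local is an assumption here):

/* Byte 1 of an IPv6 multicast address is flgs (high nibble) | scop
 * (low nibble), RFC 4291 section 2.7. Mask the flags out before
 * comparing the scope. */
#define uip_is_addr_routable_mcast(a) \
  (((a)->u8[0] == 0xFF) && (((a)->u8[1] & 0x0F) > 0x02))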

To sum it up, as I said already, MLD is a great feature to have - we definitely want it! However, this pull uses MLD to advertise multicast functionality that we don't have yet. I think this is a little too early / we are not ready for MLD. The way forward would be to wait for #364 and then rebase this, taking into account multicast forwarding tables and RPL MOP3, and also making sure we report (with MLD) to the outside world (which is what we should primarily be using MLD for, IMHO).

Somewhat less importantly:

  • I see that the license has been updated to the simplified BSD / Free BSD. As far as I know this is not a problem in terms of compatibility, but I do want to confirm that this is indeed the author's intention.
  • Do we really need the MLD periodic to be inside the tcpip event handler? I would create an MLD process with its own periodic timer and start it somewhere, probably inside uip_init, wrapped in a conditional (see the sketch after this list).
  • I've not seen bit-fields anywhere else in the core. Do we want to start using them for MLD?
  • Do we really want to extend struct uip_ds6_maddr with MLD-specifics? Couldn't MLD state be maintained internally, within the MLD module itself?
  • The code needs formatting fixes at various places.
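
A rough sketch of the kind of process structure meant above (the names mld_process and mld_periodic are hypothetical, not from this pull request):

#include "contiki.h"
#include "sys/etimer.h"

void mld_periodic(void);   /* hypothetical: runs pending report/done timers */

PROCESS(mld_process, "MLDv1 periodic handler");

PROCESS_THREAD(mld_process, ev, data)
{
  static struct etimer periodic;

  PROCESS_BEGIN();

  etimer_set(&periodic, CLOCK_SECOND);
  while(1) {
    PROCESS_WAIT_EVENT_UNTIL(etimer_expired(&periodic));
    mld_periodic();
    etimer_reset(&periodic);
  }

  PROCESS_END();
}

/* Started conditionally, e.g. from uip_init():
 *
 *   #if UIP_CONF_MLD
 *   process_start(&mld_process, NULL);
 *   #endif
 */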

As ever, many thanks for your efforts, but I'm afraid at this stage I'd rather see this put on hold.

@ghost (Author) commented Mar 12, 2014

  • Correct. Your code is also "only" a pull request, so for lack of time I decided to not look at that for too long. When I have more time again, I might do that though.
  • Yes, MLD packets are sent into the wireless network. That's because the current architecture of Contiki, Linux, and related services supports multicast routing only when all participating network segments have some sort of MLD implementation.
    • If the IETF standardizes another MLD-equivalent protocol that works better in these constrained networks and Linux support for that protocol becomes available, MLD in Contiki will probably no longer be necessary. Currently though, we are running networks that rely on multicast transmission and routing of multicast packets, so we do need MLD, at least for the moment.
    • To my understanding, RPL MOP3 and Trickle are multicast propagation methods/protocols, whereas MLD is a network management protocol. MLD will not work properly without MOP3/Trickle in networks with non-total node visibility, but otherwise the two are completely independent.
  • You are correct. Once I find the time to update, I will change that.
  • No. MLD does not manage multicast forwarding tables; that's the job of multicast routing protocols like PIM. MLD is only used to determine which multicast addresses are of interest to a network segment - if you treat the wireless network as multicast-segmented (which MOP3/Trickle explicitly want to avoid, IIRC), you will have to run a proper multicast routing protocol between the nodes to avoid degradation to multicast flooding.
  • Yes, to avoid reporting addresses more often than required. Using flooding multicast propagation protocols like Trickle, a node may assume that every other node in the network sees its own multicasts, so this is a performance optimization that's also described in the relevant RFC. For the forwarding tables, see above.
  • Not necessarily, I just put it where everything else was when I first wrote this. Declaring the buffer pointer somewhere more private is also perfectly fine.
  • It is; we've discussed licensing internally, and the license as declared in the files is what was chosen for this submission.
  • MLD is a core service, so I thought it would be good to put it where other such services already are. Adding a process for MLD would increase RAM usage, and processing time is already insignificant, so I didn't put it into its own process.
  • It saves space, and devices are usually space constrained, thus bit-fields (a minimal illustration follows this list). It would certainly be possible to implement MLD without them, at the cost of increased RAM usage in exchange for a few saved cycles. I think the RAM savings outweigh the processing cost, but whether to use bit-fields in merged code is not my decision.
  • Why not extend it? Putting multicast management info somewhere else will only cost more and obscure the implementation.
  • Where?
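
As a minimal illustration of the bit-field point (the field names are hypothetical, not taken from the patch):

#include <stdint.h>

/* Per-address MLD state packed into one byte; without bit-fields each
 * field would typically occupy its own byte. */
struct mld_addr_state {
  uint8_t report_count:2;   /* unsolicited reports still to send (0..3) */
  uint8_t last_reporter:1;  /* we sent the most recent report */
  uint8_t delaying:1;       /* a delayed report timer is running */
};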

@ghost (Author) commented Apr 7, 2014

It's been a month now since my last message, and #364 has been merged, making MLD even more useful, if not required, for a network to function properly once nodes want to receive multicast packets. Is there still interest in MLD support? I understand that the code needs some changes now, but I'd rather not spend time on those changes only to have the pull request rejected for other reasons afterwards.

@g-oikonomou (Contributor) commented:

Hi.

I've been meaning to reply to your post, but I need to find an opportunity when I'll have enough time to reply to it properly.

In short: Yes, we still want MLD. Contiki multicast support will be (at least IMHO) incomplete without MLD.

However, I want to properly articulate my understanding of what it is we need MLD for, and how I'd visualise it playing nicely with and complementing everything else.

Thus, as I say, I'll definitely champion MLD as a feature. However, the decision whether it gets merged or not is not one that will be taken by me alone. Additionally, that decision does not depend only on what the feature is, but also on how well the feature is implemented. To put this in different words, I can't promise that the pull will get accepted or rejected before I've seen the pull :)

Nevertheless, thanks for your efforts to contribute to Contiki, irrespective of whether you decide to throw more time into this.

@g-oikonomou (Contributor) commented:

For the rest of the discussion, let's consider this very simple topology, with nodes A, B, C and the LBR forming a 6LoWPAN mesh.

+---+        +---+        +-----+
| A | ------ | B | ------ | LBR | -- Non-.15.4-Link -- The Internet
+---+        +---+        +-----+
                             |
                             |
                           +---+
                           | C |
                           +---+

Thus:

To my understanding, RPL MOP3 and Trickle are multicast propagation methods/protocols, whereas MLD is a network management protocol. MLD will not work properly without MOP3/Trickle in networks with non-total node visibility, but otherwise the two are completely independent.

It's slightly more complicated than that, I'm afraid:

  • RPL MOP3 uses DAO messages to relay multicast group registrations. It is not a forwarding/propagation protocol, it's group management. RFC6550 discusses some ideas on how forwarding could also be achieved in a MOP3 network, but this is just that: a discussion.
  • TM/MPL is forwarding, but because of the way it works it doesn't need explicit group management.

Thus, we currently have two options for multicast inside the 6LoWPAN:

  • RPL in MOP3 + something to do the actual forwarding. In Contiki this 'something' is currently SMRF but there could be others.
  • TM/MPL. In this case MOP3 is irrelevant, and currently in Contiki we don't set RPL in MOP3 for TM deployments. There is a corner case whereby MOP3 could be useful (but not required) in a TM/MPL deployment, but this has nothing to do with MLD.

My interpretation of the specs is that RPL MOP3 and MLD serve the same purpose: Notify about the presence of multicast listeners. In a MOP3 deployment, MLD won't add any value that I can think of.

Thus, in summary:

  • Multicast with RPL MOP3 + forwarding engine: I can't see a need for MLD. MOP3 does the same thing.
  • TM/MPL: I can't see a need for MLD, TM handles everything internally.

MLD does not manage multicast forwarding tables; that's the job of multicast routing protocols like PIM.

You are of course right.

... #364 has been merged, making MLD even more useful, if not required for a network to function properly once nodes want to receive multicast packets.

Now that we have MOP3 / TM in place, you say that MLD is even more useful if not required. You seem to be suggesting that MLD requires MOP3 or TM/MPL in the case of a multi-hop network (at the link layer). But assuming the presence of either TM/MPL or MOP3, I can't see what additional purpose MLD would serve.

Please, can you provide a use-case / scenario whereby MLD would be necessary or at least useful? By use-case I mean one that includes a description of the sender's location, multicast datagram destination, number and location of subscribers, protocols in place, DODAG depth, and the reasons why things would fail without MLD.

I've had some discussions with people active in the ROLL WG recently, and I might be able to think of a single corner case where MLD could be useful on the 6LoWPAN side. However, this corner case is not currently supported by Contiki (and would take a lot of effort to support), and ideally I'd prefer to see your own use-case, rather than lead the discussion towards something you may or may not have thought of originally.

If I were to implement MLD support for Contiki, here's the premise / assumptions I'd work with (think the above topology):

  • MPL or MOP3, but not MLD, inside the 6LoWPAN
  • If MOP3, the LBR learns about the presence of listeners in the 6LoWPAN
  • The LBR runs MLD, probably as a non-querier, and sends reports to the outside world over its Non-.15.4-Link about the presence of listeners inside the 6LoWPAN. This is subject to discussion, since the LBR won't necessarily be subscribed to the same groups. There's an alternative approach to achieve this, but it also has problems and it is out of scope here anyway.
  • The LBR receives MLD reports over its non-.15.4 interface for other multicast listeners present on the Non-.15.4-Link and forwards datagrams to the outside world as required.

Multicast datagrams originating within the 6LoWPAN and with a destination scope higher than realm-local are forwarded over the Non-.15.4-Link (or dropped) based on the information learnt through MLD on this link.

@ghost (Author) commented May 21, 2014

Thanks for your reply. I've taken another look at RFC 6550, and it seems that on my first go I read it wrong. I agree with your summary that MOP3 and MLDv1 serve the same purpose inside a 6LoWPAN network, rendering MLDv1 inside such a network effectively useless if MOP3 is in place. I also agree with most of your premises for MLD on a LBR, but as you already said, whether and how a LBR would forward multicasts or propagate reports is another discussion. In our application, we've made it easy for ourselves by running mrd6 on the LBR and MLD within the lowpan, so multicast routing just works. Without MLD, we'll have to get the multicast listener information from RPL into the non-15.4 system by some other means, but I'm not sure right now what that would look like. It would almost certainly require an extension to mrd6 or a partial reimplementation, but I might be wrong.

For TM/MPL, the situation as I see it is similar, but more awkward. To my understanding, a forwarded message in MPL will almost always be an IP-in-IP packet, necessitating decapsulation at each receiver. Is this correct? If so, border routers that want to process multicasts correctly will have to do that somehow. But again, MLDv1 does seem to be of little value in an MPL network, when a LBR never prunes its seed set.

Either way, yes, MLDv1 carries little extra value if the LBR implements MOP3/MPL and does not intend to act as a multicast router. I have no idea how often that would happen; for us, multicast routing is the default.

We want to create a scalable, stable system for home and building automation based on 802.15.4 radios, running 6lowpan and RPL, and on top of that our multicast-based dissemination and control protocol. We haven't yet had deployments large enough to require actual multicast forwarding within a radio domain (since the version of Contiki we're using does not do that, and we don't want to reimplement that wheel), but we do have deployments with multiple lowpan border routers that are also multicast routers. To keep the entire thing simple, these LBRs run a single instance of mrd6, thus our need for MLDv1 in the lowpan. In MOP3 or MPL, that would no longer be an option because mrd6 just doesn't support those protocols (and I doubt it ever will).

From what I understand, it seems that a LBR that also has to support multicast correctly must either contain a full multicast routing component with specific support for MOP3/MPL, or be a smart bridge that snoops MOP3/MPL messages and synthesizes MLD reports at appropriate times. The former seems unlikely to happen anytime soon, while the latter is something that Contiki could implement, as you have mentioned.

For our case, neither would help a lot, simply because we're moving to Linux to run the LBR and leave Contiki out of that part of the network for a number of reasons. While MOP3 and MPL do provide methods to limit and direct multicast traffic in radio networks, they don't provide as much interoperability with the rest of the world as one would wish for. In our case, we've just added MLD with ridiculously long query intervals (on the order of one query per 12 hours) to keep redundant traffic low. That way, a border router that properly implements MOP3 or MPL will discard multicasts it needn't forward, but at any point in time we can be reasonably sure that in a given site, all multicast addresses of interest to lowpan nodes are also forwarded to the LBR in question (and thus the lowpan nodes, via the LBR) through traditional mechanisms.

I guess this pull request has been superseded by other technologies after all. Consider it void from my side and close it if you want to.

@nvt (Member) commented Jun 3, 2014

There seems to be a consensus that the proposed functionality overlaps with that of a recently merged pull request, so I'm closing this as the author proposes.

@nvt closed this Jun 3, 2014