Add Support for IPv6 Multicast #364
Conversation
Wow! Very, very cool and something that will be very useful. A quick question: from looking at the examples, joining a multi-hop multicast group is done in the same way as joining a local group (

That's right

👍
@g-oikonomou With suppression enabled (K=1) and Tactive < consistency timer < Tdwell for all the members of the domain: should the addition of a new node cause the forwarding of all the messages in the window, or should the fact that Tactive was reached cause the multicast messages to be suppressed? The behavior I am currently observing is that the multicast messages get re-broadcast as long as the consistency timer < Tdwell. In my interpretation this is incorrect. Could you confirm? [1] http://tools.ietf.org/html/draft-ietf-roll-trickle-mcast-01
core/net/uip-mcast6/roll-tm.c
Outdated
This conditional must also check that the cached datagram's age is < Tactive before including it in the sequence list.
Note to self: This is likely as simple as replacing the if block with something like:

    if(MCAST_PACKET_IS_USED(locmpptr) &&
       (locmpptr->active < TRICKLE_ACTIVE(&t[(sl->flags & SEQUENCE_LIST_M_BIT) != 0]))) {
Melvin, that's a very nice catch. This is a combination of what happens in three different locations of the code (see the various inline comments). You made me read the draft again: even though the draft talks about 'suppression', it is unclear to me whether Tactive is part of the mechanism referred to as 'suppression'. My understanding/interpretation of the draft is that 'suppression' only refers to K, and that a forwarder may only transmit a datagram with age < Tactive, regardless of whether K is infinity or not. See my comment under L623. So here is what happens in your situation:
Upon reception of the new node's Trickle ICMP message, the older node will flag an inconsistency on the expired datagram. This is because its Seed ID will not be listed in the received ICMP (which was empty). This happens under the checks for "we have new" (the block starting at L1256). In Section 6.4, Trickle ICMP Processing, the draft says: "The receiver has a new multicast message to offer if any buffered messages does not have an associated SeedID entry in the Trickle ICMP message." In the next round of transmissions, because of what happens near L623, the expired message gets sent. Bingo
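The explanation above boils down to an age check before a buffered datagram is listed or retransmitted. A minimal, self-contained C sketch of that check follows; all names here (mcast_packet, T_ACTIVE, PKT_FLAG_USED) are purely illustrative, not the actual roll-tm.c identifiers:

```c
#include <stdint.h>

#define T_ACTIVE      3     /* illustrative: max datagram age, in Trickle intervals */
#define PKT_FLAG_USED 0x01  /* buffer slot holds a valid datagram */

struct mcast_packet {
  uint8_t flags;
  uint8_t active;           /* datagram age, in Trickle intervals */
};

/*
 * Nonzero iff the cached datagram may be listed in the ICMP sequence
 * list (and therefore re-broadcast). An expired datagram (age >=
 * Tactive) must not trigger retransmission, even when a new node joins.
 */
static int
packet_must_be_listed(const struct mcast_packet *p)
{
  return (p->flags & PKT_FLAG_USED) && p->active < T_ACTIVE;
}
```

With this predicate, the scenario above no longer re-broadcasts the expired datagram: the new node's empty ICMP still flags an inconsistency, but the datagram fails the age check and is dropped from the sequence list instead of being sent.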
George, I will try to tackle the changes, test them out and report back. Thanks for your thorough comments above. Melvin
core/net/uip-mcast6/roll-tm.c
Outdated
Taking into account the fact that we now need to take note of the current Tactive timer, we cannot simply assign the count to sl->seq_len. Instead, we need to count only the stored messages whose lifetime is < Tactive.
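The counting change described above can be sketched as follows. This is a hypothetical, self-contained illustration with placeholder names (count_active_datagrams, mcast_packet, T_ACTIVE), not the actual roll-tm.c code:

```c
#include <stddef.h>
#include <stdint.h>

#define T_ACTIVE      3     /* illustrative: max datagram age, in Trickle intervals */
#define PKT_FLAG_USED 0x01  /* buffer slot holds a valid datagram */

struct mcast_packet {
  uint8_t flags;
  uint8_t active;           /* datagram age, in Trickle intervals */
};

/*
 * Count only the buffered datagrams still within Tactive. This is the
 * value that would be assigned to sl->seq_len, instead of the raw
 * number of occupied buffer slots.
 */
static uint8_t
count_active_datagrams(const struct mcast_packet *buf, size_t n)
{
  uint8_t count = 0;
  size_t i;

  for(i = 0; i < n; i++) {
    if((buf[i].flags & PKT_FLAG_USED) && buf[i].active < T_ACTIVE) {
      count++;
    }
  }
  return count;
}
```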
@g-oikonomou
Yeah, I was playing around with it today too. Open a pull request on top of g-oikonomou/multicast-push if you have made good progress.
What's the status of this patch? Is it ready enough for us to merge it? Personally, I'd really like to have it in, but I'm not sure what the status is (considering the discussion above).
TM works, but it has a couple of non-critical bugs (those discussed above). It would be great if we could start looking at the integration into the core (the uIP hooks, and the code that puts RPL in MOP3 and sends multicast DAO advertisements) while I'm fixing those. Now that we have Cooja PCAPs back, it won't take long at all. I'll also need to rebase and make sure it wasn't broken by #454 or #460 (and make the necessary adjustments, if any).
On second thoughts, this really needs a little rebase before we can consider merging it. Closing; I will reopen in the short term.
I'm re-opening this:
Looks like this needs a rebase against master, or is there a problem with GitHub?
Yeah, it's rebased, but I don't know why the history display is all messed up; it shows up correctly on g-oikonomou/contiki.
The history cleaned itself up with the last push. Overall, I'm very happy with this pull's current state, and I would therefore kindly request that it be brought to the foreground again (shameless ping @nvt, @adamdunkels).
👍 |
@g-oikonomou OK, I will try to find some time to test it and go through the source this week. |
Pending IANA allocation, we currently use private experimentation
We store multicast routes in a separate table since we don't need as much information as we need for normal routes
…lticast datagrams
- init() - process incoming multicast datagram - Pass ICMPv6 trickle messages to the engine
…hops) in a line configuration
…hops) in a line configuration
…it for any packet to arrive.
Don't include a sliding window in the ICMPv6 datagram unless the window has at least one active datagram associated with it
The source code looks good and is properly #ifdef'ed. The Cooja test seems to work correctly as well, so I think this one is ready to merge after a too-long wait. 👍
👍 |
Add Support for IPv6 Multicast
This Pull Request adds multicast support, with an example and some regression tests. It introduces a NETSTACK-style multicast engine driver, with two such engines already implemented.
In a nutshell, this pull does the following:
I must start with the big gotcha: currently, we only support multicast traffic originating and destined within the same 6LoWPAN. In other words, traffic cannot cross 6LoWPAN boundaries in either direction. In order to support this, we'd need the following additions to border routers or other gateway devices:
These are on the ToDo list.
I am intentionally calling the Trickle Multicast engine TM and not MPL, since it was implemented when the draft was still at version 1 (i.e. before the name MPL came along). As I've discussed on the mailing list in the past, my personal take is that to implement MPL one would benefit from starting from scratch, since the differences are quite significant. During discussions it was agreed that it's better to have support for the old draft soon and implement the new version in due course, rather than have nothing while working on MPL.
SMRF was originally written when we had support for neither the RPL HBHO nor multiple instances. I'm currently working on SMRFv2, which will take advantage of both techniques.
As some of you already know, this implementation has been around for circa 2 years now. I recently modified it to use the new neighbour tables and uip-ds6-route-style forwarding tables.
Looking forward to feedback
Enjoy