
RPL global DODAG repair #11113

Closed
FruechteBini opened this issue Mar 5, 2019 · 1 comment
FruechteBini commented Mar 5, 2019

Hi guys!

I am currently working on implementing RPL on native in RIOT and need help doing a global DODAG repair.

My Setup:

  • I have a root (tap0) and two nodes (tap1 & tap2) all running on native.

Currently working:

I initialize RPL as shown in the RPL tutorial (https://github.com/RIOT-OS/RIOT/wiki/Tutorial:-RIOT-and-Multi-Hop-Routing-with-RPL):

  • On the root (tap0) & all nodes (tap1 & tap2):
    gnrc_rpl_init(6);

  • On the root (tap0) only:

    ipv6_addr_t dodag_id;
    ipv6_addr_from_str(&dodag_id, "2001:db8::1");
    const gnrc_netif_t* netif = gnrc_netif_get_by_pid(6);
    gnrc_netif_ipv6_addr_add(netif, &dodag_id, 64, GNRC_NETIF_IPV6_ADDRS_FLAGS_STATE_VALID);
    gnrc_rpl_root_init(1, &dodag_id, false, false);

So far this works: the DODAG is built correctly, and parents can be shown using the "rpl" shell command.

My Issue:

My goal is to later disable connections as described in https://github.com/RIOT-OS/RIOT/wiki/Virtual-riot-network (using ebtables DROP).
So I tried to trigger a global repair by simply incrementing the RPL instance ID. For this I looked into dodag.h (https://riot-os.org/api/dodag_8h.html) and tried the following:

uint8_t dodag_instance = 1;

gnrc_rpl_instance_remove_by_id(dodag_instance);
dodag_instance++;
gnrc_rpl_instance_t* current_instance = gnrc_rpl_root_instance_init(dodag_instance, &dodag_id, 2);
gnrc_rpl_dodag_init(current_instance, &dodag_id, 6);

This works as well, BUT only on tap0, the root ("rpl" shows the incremented RPL instance). It seems the function gnrc_rpl_instance_remove_by_id does not notify the other nodes that the DODAG is being shut down.

The other nodes (tap1 and tap2) still run on RPL instance 1. I have tried initializing them again, but that did not do the trick.

As I am pretty new to network programming & RIOT, I am slightly confused about which function to use to completely reset all nodes & set up a new DODAG.

Every hint & reference is appreciated, thanks in advance!

FruechteBini (Author) commented:

Figured out that the RIOT RPL implementation does a global repair by default (I think so, at least), after 5 minutes. I just edited sys/include/net/gnrc/rpl.h lines 291-295:


 #ifndef GNRC_RPL_DEFAULT_LIFETIME
 #define GNRC_RPL_DEFAULT_LIFETIME (5)
 #endif
 #ifndef GNRC_RPL_LIFETIME_UNIT
 #define GNRC_RPL_LIFETIME_UNIT (60)
 #endif
and changed the values (in my case to 1 and 5, i.e. a lifetime of 1 unit of 5 seconds, so it refreshes every 5 seconds).
