Virtual RIOT network
RIOT features a native port with networking support. This allows you to run any RIOT application as a process on your Linux or macOS computer and set up virtual connections between these processes. To do this, the nativenet module uses TAP devices (https://www.kernel.org/doc/Documentation/networking/tuntap.txt) and provides a simple script that configures an Ethernet bridge connecting these devices.
Your first topology (2 nodes, 1 hop)
The easiest way is to use the shell script
RIOT/dist/tools/tapsetup/tapsetup -c 2, which sets up a bridge for two TAP devices (say tap0 and tap1) to communicate over.
Under Ubuntu or Debian, if the script fails because it cannot find the
ip command, install the iproute2 package, for example using apt:
sudo apt install iproute2
Once this is in place, run one RIOT instance using tap0 and another using tap1. For example, in examples/default/ open a terminal and run
make term PORT=tap0. Then open a second terminal and run
make term PORT=tap1. In each RIOT shell, set the node address using the
addr shell command. Then you can send messages between the two RIOT instances using the
txtsnd shell command (and you can view the available shell commands as usual, by typing help).
Once you are done with your tests, remove the bridge by running the tapsetup script again with its delete option.
For the virtualization of the DES-Testbed we developed a Python framework called desvirt (Virtualizing a testbed). Desvirt allows you to set up a virtual network by starting QEMU instances, connecting them over TAP devices, and implementing packet loss rates between the interfaces with ebtables and tc.
Desvirt also supports RIOT nativenet. You can use the topology_creator script to define a virtual network. For example:
./topology_creator -e /tmp/default-native.elf -n riot_native -r ieee802154 -s2 -tline -l50 -f line2
will create an XML configuration file for a very basic network containing two connected RIOT native instances and a 50% packet loss rate. Copy the resulting XML file to .desvirt/, read in the configuration by running
./vnet -n line2 -d
and start the network:
./vnet -n line2 -s
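Desvirt applies the loss rate itself with tc/ebtables; the following standalone Python sketch only illustrates what a 50% loss rate (the -l50 option above) means for packet delivery over the virtual link. The function name and packet counts are illustrative, not part of desvirt.

```python
# Illustrative simulation of a lossy virtual link (NOT desvirt code):
# each packet independently survives with probability 1 - loss/100.
import random

def deliver(loss_percent, n_packets, rng):
    """Count packets surviving a link with the given loss rate."""
    return sum(1 for _ in range(n_packets)
               if rng.random() >= loss_percent / 100)

rng = random.Random(42)        # fixed seed for reproducibility
got = deliver(50, 10_000, rng)
print(got)                     # roughly half of the 10000 packets survive
```

With -l50, each direction of the link drops packets independently, so a request/response exchange across the link succeeds only about 25% of the time.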
Manual topology control
On Linux ebtables can be used to manually restrict traffic between interfaces of a network bridge. With this, connections between nodes on the same bridge can be severed or restricted to only certain types of traffic.
ebtables is used in these examples. ebtables uses ordered lists of rules to apply verdicts to packets. Every packet is matched against the rules, starting at the first and continuing until one matches or the last rule has been tried. Once a matching rule is found, that rule's verdict is executed and no further rules are tried; in other words, only the first matching rule applies to a packet.
In these examples only the FORWARD chain of the filter table is used. The FORWARD chain handles traffic that is forwarded from one interface of a bridge to another interface of the same bridge, for example from tap0 to tap1. The filter table contains chains that filter packets (accept or drop). By default all packets are accepted (forwarded to their destination); this can be changed by setting the chain's policy to DROP.
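The first-match semantics described above can be sketched in a few lines of Python. This is a simplified model for illustration, not ebtables code: rules are tried in order, the first matching rule's verdict is final, and the chain policy applies when nothing matches.

```python
# Simplified model of an ebtables chain (illustration only):
# an ordered list of (match_predicate, verdict) pairs.

def verdict(rules, packet, policy="ACCEPT"):
    """Return the verdict for `packet` against an ordered rule list."""
    for match, action in rules:
        if match(packet):
            return action      # first match wins; later rules are ignored
    return policy              # no rule matched: the chain policy decides

# Two DROP rules severing direct tap0 <-> tap2 traffic in both directions.
rules = [
    (lambda p: p["in"] == "tap0" and p["out"] == "tap2", "DROP"),
    (lambda p: p["in"] == "tap2" and p["out"] == "tap0", "DROP"),
]

print(verdict(rules, {"in": "tap0", "out": "tap2"}))  # DROP
print(verdict(rules, {"in": "tap0", "out": "tap1"}))  # ACCEPT (falls through)
```

Note how traffic between tap0 and tap1 is untouched: it matches no rule and falls through to the default ACCEPT policy.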
Note: These rules are applied to all traffic on all bridge interfaces of your system; keep that in mind if your system has multiple bridges.
Keep in mind that most of these examples assume that the current ruleset is empty.
Restricting traffic between two nodes:
In this scenario a bridge interface is used with 3 native instances attached. The traffic between the tap0 interface and the tap2 interface is dropped to force the traffic flow through the node attached to the tap1 interface. This can be used to test a multihop scenario.
$ sudo ebtables -A FORWARD --in-interface tap0 --out-interface tap2 -j DROP
$ sudo ebtables -A FORWARD --in-interface tap2 --out-interface tap0 -j DROP
The first rule restricts traffic originating from the tap0 interface and destined for the tap2 interface. The second rule restricts traffic in the opposite direction, from the tap2 interface to the tap0 interface.
Mimicking l2filter module functionality
The l2filter module can be mimicked with ebtables by restricting traffic from one MAC address to a certain interface:
$ sudo ebtables -A FORWARD --source aa:bb:cc:dd:ee:ff --out-interface tap0 -j DROP
Traffic from the node with aa:bb:cc:dd:ee:ff as MAC address is filtered from reaching the node attached to the tap0 interface.
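The rule above can be expressed as the same kind of match predicate used in the first-match sketch earlier; this is only an illustration of what the rule matches, not ebtables internals.

```python
# Illustration of the l2filter-style rule above: drop frames whose
# source MAC is aa:bb:cc:dd:ee:ff and whose output interface is tap0.

def mac_filter_rule(packet):
    return (packet["src_mac"] == "aa:bb:cc:dd:ee:ff"
            and packet["out"] == "tap0")

blocked = {"src_mac": "aa:bb:cc:dd:ee:ff", "out": "tap0"}
passing = {"src_mac": "aa:bb:cc:dd:ee:ff", "out": "tap1"}
print(mac_filter_rule(blocked))   # True  -> the rule's DROP verdict applies
print(mac_filter_rule(passing))   # False -> falls through to the policy
```

The same node's traffic toward other interfaces (tap1 here) is unaffected, which is what distinguishes this from simply unplugging the node.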
Restricting neighbor solicitations between two nodes
In this scenario two nodes need to have their neighbor solicitations filtered, for example to test the scenario from #8097. This ruleset is a bit ugly, as it restricts traffic from one IPv6 address to the solicited-node multicast address of the other node instead of matching on the ICMPv6 type of the packet. However, it shows that it is possible to match on network-layer protocols (IPv6) to drop packets.
$ sudo ebtables -t filter -A FORWARD -p ip6 --ip6-source fe80::801a:35ff:fe46:ea37 --ip6-destination ff02::1:ff5e:1fa2 -j DROP
Temporarily allow all traffic
In this scenario a ruleset has previously been added to restrict some traffic. At some point in time all traffic flows between nodes have to be allowed, and at a later point the restrictions of the initial situation have to be functional again. To achieve this, a rule is inserted at the top of the chain to allow all traffic. All traffic matches this rule, so the other filtering rules are effectively ignored. Later, this rule is deleted to make the original filtering rules functional again.
First, insert a rule at the first position to allow all traffic:
$ sudo ebtables -I FORWARD 1 -j ACCEPT
Then remove this rule (that is, whatever rule currently sits at the first position of the FORWARD chain) again:
$ sudo ebtables -D FORWARD 1
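The insert/delete dance above can be pictured by modelling the FORWARD chain as an ordered Python list, where list index 0 corresponds to rule position 1. The rule strings are placeholders, not ebtables syntax.

```python
# Model of the FORWARD chain as an ordered list (illustration only).
chain = ["drop tap0->tap2", "drop tap2->tap0"]   # existing restrictions

# `ebtables -I FORWARD 1 -j ACCEPT`: insert an accept-all rule at position 1.
chain.insert(0, "accept all")
print(chain[0])   # "accept all" -> every packet now matches here first

# `ebtables -D FORWARD 1`: delete the rule at position 1 again.
chain.pop(0)
print(chain[0])   # "drop tap0->tap2" -> the restrictions are active again
```

Because ebtables stops at the first matching rule, the later DROP rules never run while the accept-all rule sits at position 1, and they become effective again the moment it is deleted.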
Flushing the forwarding filter rules
This command can be used to remove all rules in the FORWARD chain, clearing all restrictions on traffic:
$ sudo ebtables -F FORWARD
There is currently a hard limit of 1024 interfaces per bridge in Linux. If you need more instances in one network, you will have to modify and recompile your kernel; see https://github.com/dotcloud/docker/issues/1320#issuecomment-22346764 .
On the bright side, 1000 instances only need about 200 MB of RAM (examples/default with ltc disabled) and almost no CPU while they are idle, so at least the trouble pays off.
Due to a bug in desvirt, it is not possible to define grid networks with more than 26 nodes per line; see https://github.com/des-testbed/desvirt/issues/10 .