Our exit node is currently a single relay and exit server. All of the public traffic on access points is routed over an l2tp tunnel with tunneldigger through our exit server. In this way, creating a new exit server would essentially create a new "mesh". For the time being, all sudomesh/ traffic must travel over a single exit server in order to remain on the same network.

work in progress

Tested on DigitalOcean servers running Ubuntu 16.04, and on Debian Stretch.


Get access to a server (e.g. on DigitalOcean or some other place) running Ubuntu 16.04 or Debian Stretch. Then:

1. Clone this repo on your local computer.

2. Copy the script onto your soon-to-be-exitnode server:

   ```
   scp <user>@<exitnode-ip>
   ```

3. Log into the server:

   ```
   ssh <user>@<exitnode-ip>
   ```

4. Determine the server's default network interface:

   ```
   ip route
   default via dev eth0
   ```

   In the above example, this would be eth0.

5. Execute the script as root, where `<exitnode-ip>` is the public IP of your server, and `<default-interface-name>` is the interface name you found in step 4:

   ```
   sudo <exitnode-ip> <default-interface-name>
   ```

Expected output should be something like:

```
Get:1 xenial-security InRelease [102 kB]
Hit:2 xenial InRelease
Get:3 xenial-security/main Sources [108 kB]
Get:5 xenial-security/restricted Sources [2,116 B]
Cloning into '/opt/exitnode'...
tunneldigger.service is not a native service, redirecting to systemd-sysv-install
Executing /lib/systemd/systemd-sysv-install enable tunneldigger
babeld.service is not a native service, redirecting to systemd-sysv-install
Executing /lib/systemd/systemd-sysv-install enable babeld
```

Check /var/log/syslog for evidence that the tunneldigger broker is running:

```
[INFO/] Initializing the tunneldigger broker.
[INFO/] Registered script '/opt/tunneldigger/broker/scripts/' for hook 'session.up'.
[INFO/] Registered script '/opt/tunneldigger/broker/scripts/' for hook 'session.down'.
[INFO/] Maximum number of tunnels is 1024.
[INFO/] Tunnel identifier base is 100.
[INFO/] Tunnel port base is 20000.
[INFO/] Namespace is experiments.
[INFO/] Listening on <exitnode-ip>:443.
[INFO/] Listening on <exitnode-ip>:8942.
[INFO/] Broker initialized.
```
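The default-interface lookup from the install steps can be scripted rather than read off by eye. A minimal sketch, using a made-up route line so it runs anywhere (on a real server you would pipe `ip route` into the awk instead):

```shell
# Parse the interface name that follows "dev" on the default route line.
# The sample line below is made up so the snippet is self-contained.
route_line="default via 203.0.113.1 dev eth0 proto static"
iface=$(echo "$route_line" | awk '{for (i = 1; i <= NF; i++) if ($i == "dev") print $(i + 1)}')
echo "$iface"
```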

## Configure Home Node to use exit node

This section assumes you have a working home node (there is a separate walkthrough for setting one up).

Connect to your node's private SSID, and ssh in:

```
ssh root@
```

Now edit the tunneldigger configuration:

```
vi /etc/config/tunneldigger
```

and change the address line from `list address ''` to `list address '[exit node ip]:8942'`.

Then execute `/etc/init.d/tunneldigger restart` to apply the change.
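If you'd rather make the config change non-interactively, a sed substitution does the same edit. A sketch against a sample line (the broker address here is a placeholder; on the node you would run `sed -i` against /etc/config/tunneldigger instead):

```shell
# Replace the empty broker address with the exit node's address.
# The echoed line stands in for the real config file so this runs anywhere.
updated=$(echo "list address ''" | sed "s/list address ''/list address '203.0.113.10:8942'/")
echo "$updated"
```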

## Troubleshooting Tunneldigger

If you don't see an l2tp interface on your home node, it might be having issues digging a tunnel. There might be some clues in the tunneldigger logs. This is an active area of debugging, related to sudomesh/bugs#8.

On the home node:

```
cat /var/log/messages
```

The tunneldigger client prefixes its logs with "td-client":

```
cat /var/log/messages | grep td-client
```

It might look something like this:

```
td-client: Performing broker selection...
td-client: Broker usage of [exitnode-ip]:8942: 127
td-client: Selected [exitnode-ip]:8942 as the best broker.
td-client: Tunnel successfully established.
td-client: Setting MTU to 1446
```
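The MTU of 1446 reflects the tunnel encapsulation overhead: roughly speaking, the outer IP and UDP headers, the L2TP header, and the tunneled Ethernet frame header together consume 54 bytes of the underlying 1500-byte Ethernet MTU:

```shell
# Tunnel MTU = physical MTU minus encapsulation overhead.
physical_mtu=1500
overhead=54   # outer IP + UDP + L2TP headers + tunneled Ethernet header
tunnel_mtu=$((physical_mtu - overhead))
echo "$tunnel_mtu"
```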

On the exit node:

```
sudo journalctl -u tunneldigger
```

TODO: Show some healthy log examples here.


## Testing Tunnel Digger

In order to check whether a client can establish a functioning tunnel using tunneldigger, assign an IP address to the l2tp0 interface on the client, and create a static route to the exit node address.


Step 1. Create a tunnel using the tunneldigger client.

Step 2. Assign an IP to the tunneldigger client interface.

Once the tunnel has been established, an interface l2tp0 should appear when listing interfaces using `ip addr`. To assign an IP to that interface, do something like `sudo ip addr add dev l2tp0`. Now, your `ip addr` output should include:

```
l2tp0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1446 qdisc pfifo_fast state UNKNOWN group default qlen 1000
    link/ether 62:cb:8a:c9:27:17 brd ff:ff:ff:ff:ff:ff
    inet scope global l2tp0
    valid_lft forever preferred_lft forever
    inet6 fe80::60cb:8aff:fec9:2717/64 scope link 
    valid_lft forever preferred_lft forever
```
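The interface check can be scripted, e.g. for repeated use while debugging. A minimal sketch against a sample `ip addr` line (on the client you would pipe real `ip addr` output through the grep instead):

```shell
# Report whether an l2tp interface shows up in `ip addr` output.
# A sample line stands in for real output so this is self-contained.
sample="3: l2tp0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1446"
if echo "$sample" | grep -q "l2tp0"; then
  status="tunnel interface present"
else
  status="no l2tp interface - check tunneldigger logs"
fi
echo "$status"
```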

Step 3. Establish a static route from client to tunneldigger broker.

Now, for the client to route packets to the tunneldigger broker using the l2tp0 interface, install a route using:

```
sudo ip r add dev l2tp0
```

Step 4. Establish a static route from tunneldigger broker to client.

After logging into the exitnode/tunneldigger broker, install a static route to the client using `sudo ip r add dev l2tp1001`, where l2tp1001 is the interface that was created when the client established the tunnel. It can be found using `ip addr | grep l2`.
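Picking the l2tp interface name out of `ip addr` output can also be automated. A sketch against a sample line (the interface index and name here are hypothetical):

```shell
# Extract the interface name from an `ip addr` heading line like
# "7: l2tp1001: <...>"; fields are separated by ": ".
line="7: l2tp1001: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1446"
tunnel_if=$(echo "$line" | awk -F': ' '/l2tp/ {print $2}')
echo "$tunnel_if"
```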

Step 5. Ping from client to broker.

Now, on the client, ping the broker:

```
ping -I l2tp0
```

If all goes well, and the tunnel is working as expected, you should see:

```
$ ping -I l2tp0
PING ( from l2tp0: 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=228 ms
64 bytes from icmp_seq=2 ttl=64 time=214 ms
```

If you can ping the broker via the tunnel interface, tunneldigger is doing its job.

## Testing Routing with Babeld Through Tunnel Digger

This assumes that you have an active and functioning tunnel on interface l2tp0 with ip (see previous test).

Now that we have a functioning tunnel, we can test babeld routing as follows:

Step 1. Install and build babeld.

Follow the install instructions in the babeld repository. Make sure you remove any existing babeld before installing this one.

Step 2. Start babeld on l2tp0.

Execute `sudo babeld l2tp0` and keep it running in a separate window.

Step 3. Check routes.

After running `ip route`, you should see entries like:

```
via dev l2tp0  proto babel onlink 
```
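The route check can be scripted too. A sketch that counts babel-installed routes in sample `ip route` output (the addresses shown are placeholders; on the client you would pipe real `ip route` output in):

```shell
# Count routes that babeld installed (they carry "proto babel").
routes="203.0.113.50 via 10.0.0.1 dev l2tp0 proto babel onlink"
babel_count=$(echo "$routes" | grep -c "proto babel")
echo "$babel_count"
```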

Step 4. Ping the mesh routing IP.

Now, execute `ping` and you should see something like:

```
$ ping
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=207 ms
64 bytes from icmp_seq=2 ttl=64 time=204 ms
```

Step 5. Stop the babeld process using ctrl-c.

Step 6. Repeat steps 3 and 4 and confirm that the routes are gone and the ping no longer succeeds.

PS: If you'd like to see the traffic in the tunnel, you can run `sudo tcpdump -i l2tp0`. When running the ping, you should see ICMP ECHO messages as well as babeld "hello" and "hello ihu" (ihu = I hear you) messages.

Step 7. Route to the internet.

After restarting babeld (step 2), add a route via the mesh router using `sudo ip r add via dev l2tp0 proto babel onlink`.

Now, when you run the ping, you should see the traffic going through the tunnel. As seen from the broker/server:

```
04:12:49.900483 IP > ICMP echo reply, id 2324, seq 49, length 64
04:12:50.777621 IP6 fe80::fc16:44ff:fe04:e0eb.6696 > ff02::1:6.6696: babel 2 (24) hello ihu
04:12:50.891593 IP > ICMP echo request, id 2324, seq 50, length 64
04:12:50.891873 IP > ICMP echo reply, id 2324, seq 50, length 64
04:12:51.154965 IP6 fe80::9007:afff:fe6a:aa9.6696 > ff02::1:6.6696: babel 2 (24) hello ihu
04:12:54.767561 IP6 fe80::fc16:44ff:fe04:e0eb.6696 > ff02::1:6.6696: babel 2 (44) hello nh router-id update
04:12:55.697947 IP6 fe80::9007:afff:fe6a:aa9.6696 > ff02::1:6.6696: babel 2 (8) hello
04:12:58.646455 IP6 fe80::fc16:44ff:fe04:e0eb.6696 > ff02::1:6.6696: babel 2 (8) hello
04:12:59.443288 IP6 fe80::9007:afff:fe6a:aa9.6696 > ff02::1:6.6696: babel 2 (8) hello
04:13:02.167520 IP6 fe80::fc16:44ff:fe04:e0eb.6696 > ff02::1:6.6696: babel 2 (24) hello ihu
04:13:03.402486 IP6 fe80::9007:afff:fe6a:aa9.6696 > ff02::1:6.6696: babel 2 (156) hello ihu router-id update/prefix update/prefix nh update update up
```

## Test Domain Name Service (DNS)

To test DNS, connect to your home node's SSID using a laptop. Now, on the command line, execute something like `dig @[ip of exit node]` to check whether domain name resolution works. DNS translates domain names into IP addresses.

```
$ dig @

; <<>> DiG 9.9.5-9+deb8u15-Debian <<>> @
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 39878
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

; EDNS: version: 0, flags:; udp: 512
;          IN  A

;; ANSWER SECTION:       2216    IN  A

;; Query time: 25 msec
;; WHEN: Sat Feb 17 21:15:57 EST 2018
;; MSG SIZE  rcvd: 57
```
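A scripted version of this check can look for the NOERROR status in dig's answer header. A minimal sketch against a sample header line (on the laptop you would capture real `dig` output instead):

```shell
# Check a dig response header for the NOERROR status.
header=";; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 39878"
case "$header" in
  *"status: NOERROR"*) dns_status="DNS resolution OK" ;;
  *) dns_status="DNS lookup failed" ;;
esac
echo "$dns_status"
```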


Configuration, script and instructions for exit nodes.





