
Prerequisites

You need a basic development environment (make, gcc, base libraries, etc.). On Debian-based Linux distributions (e.g., Ubuntu) this is conveniently installed through the 'build-essential' package. When getting the source directly from the GitHub repository, you also need autoconf and automake.

Other dependencies include (together with corresponding development packages/headers):

  • libtool
  • optional: Linux kernel >= 2.6.35 + kernel headers (for compiling kernel module)
  • optional: Java JDK (for Java bindings)
  • optional: libssl (for some test apps)

Compiling

From GitHub source:

$ ./autogen.sh
$ ./configure [ OPTIONS ]
$ make

When compiling from a release tarball, only the configure and make steps are necessary.

Running Serval

$ insmod ./src/stack/serval.ko
$ ./src/test/tcp_server -s 76568 -f send_file

This will start a Serval TCP server that serves a file. Another Serval host can download this file by calling:

$ ./src/test/tcp_client -s 76568 -f receive_file

Note that the server listens on the given serviceID (76568) that the client connects to. This works automatically when both the server and the client are in the same IP broadcast domain. If they are on different subnets, you need to add a rule to the client's service table:

$ ./src/tools/serv service add 76568 IP_SERVER

This will redirect requests for serviceID '76568' to the server's IP address. In a full Serval deployment scenario, clients would configure a default "catch-all" service table rule that redirects all service requests to an upstream service resolver, which then forwards packets onward towards a service instance. A default rule would typically be assigned automatically, e.g., via DHCP.
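
To illustrate what this means at the sockets level, the sketch below shows roughly how a client like tcp_client addresses the service: it connects to the serviceID (76568) rather than to an IP address and port, and the service table decides where the request goes. This is a minimal sketch, assuming the Serval definitions (AF_SERVAL, struct sockaddr_sv, and its sv_srvid field) are exported by a header such as netinet/serval.h; the exact field names may differ, so consult the tcp_client source under src/test for the authoritative usage.

/* Minimal sketch of a Serval client connecting to serviceID 76568.
 * Assumes AF_SERVAL and struct sockaddr_sv from netinet/serval.h; the
 * serviceID field layout (s_sid32 below) is an assumption, so see the
 * tcp_client source under src/test for the real code. */
#include <netinet/serval.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <string.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    struct sockaddr_sv srvaddr;
    char buf[4096];
    ssize_t n;
    int sock;

    sock = socket(AF_SERVAL, SOCK_STREAM, 0);
    if (sock == -1) {
        perror("socket");
        return 1;
    }

    memset(&srvaddr, 0, sizeof(srvaddr));
    srvaddr.sv_family = AF_SERVAL;
    /* Name the service, not a host: the stack resolves the serviceID
     * through its service table (broadcast on the local subnet, or a
     * rule added with 'serv service add'). */
    srvaddr.sv_srvid.s_sid32[0] = htonl(76568);

    if (connect(sock, (struct sockaddr *)&srvaddr, sizeof(srvaddr)) == -1) {
        perror("connect");
        close(sock);
        return 1;
    }

    /* Receive the served file and write it to stdout. */
    while ((n = recv(sock, buf, sizeof(buf), 0)) > 0)
        fwrite(buf, 1, n, stdout);

    close(sock);
    return 0;
}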

Setting up a simple load balancer

With Serval, it is easy to run multiple service instances and balance the load between them, without a dedicated load balancer or the deep packet inspection a layer-7 switch would need. Load balancing is typically implemented by adding a dedicated host that spreads service requests across a number of service instances. However, the balancing can also be handled by the client itself, simply by adding another service table entry on the client:

$ ./src/tools/serv service add 76568 IP_SERVER_2

Then run an identical 'tcp_server' on a second server machine with IP_SERVER_2. The client will balance its requests between the two server machines.

Adding a dedicated load balancing machine requires a bit more configuration. A typical setup would look as follows:

               Server1
              /
Client ----- LB
              \
               Server2

All machines should run the Serval stack, although additional IP routers that need not run Serval could be added along the paths. The load balancer (LB), also called a service router, should have the following configuration:

$ ./src/tools/serv service add 76568 IP_SERVER_1
$ ./src/tools/serv service add 76568 IP_SERVER_2
$ echo 1 > /proc/sys/net/ipv4/ip_forward
$ echo 1 > /proc/sys/net/serval/sal_forward

This will map the serviceID 76568 to the two service instances and also enable IP and SAL forwarding, respectively. Note that the load balancer only needs to route the first packet through its SAL, while the rest of the packets are IP forwarded directly between the client and the selected server instance. This means the load balancer need not sit on the data path, resulting in a network configuration similar to the following:

           LB   Server1
          /  \ /
Client ----- SW 
               \
                Server2

Automatic registration of service instances

Adding service table rules manually is tedious, especially if the service comprises many instances. The Serval stack can automatically generate events when service processes are launched or shut down on a host. A bind(serviceID) call generates a service registration event for the bound serviceID, while a close() call unregisters that service. These events can be relayed to an upstream service router acting, for example, as a load balancer, as in the scenario above.
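
To make the registration trigger concrete, here is a rough sketch of the server side: binding a Serval socket to a serviceID is what generates the registration event, and closing the socket generates the corresponding unregistration. As in the client sketch above, this is a hedged example assuming AF_SERVAL and struct sockaddr_sv from netinet/serval.h; the tcp_server source under src/test is the authoritative reference.

/* Sketch of a Serval service binding serviceID 76568. bind() triggers the
 * service registration event described above; close() unregisters it.
 * Assumes AF_SERVAL and struct sockaddr_sv from netinet/serval.h; field
 * names (e.g., s_sid32) may differ in your checkout. */
#include <netinet/serval.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <string.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    struct sockaddr_sv svaddr;
    int sock, client;

    sock = socket(AF_SERVAL, SOCK_STREAM, 0);
    if (sock == -1) {
        perror("socket");
        return 1;
    }

    memset(&svaddr, 0, sizeof(svaddr));
    svaddr.sv_family = AF_SERVAL;
    svaddr.sv_srvid.s_sid32[0] = htonl(76568); /* serviceID to serve */

    /* Registers the serviceID with the local stack; servd can relay the
     * registration to an upstream service router. */
    if (bind(sock, (struct sockaddr *)&svaddr, sizeof(svaddr)) == -1) {
        perror("bind");
        close(sock);
        return 1;
    }

    listen(sock, 10);
    client = accept(sock, NULL, NULL);
    /* ... serve the client connection ... */
    if (client != -1)
        close(client);

    /* Closing the listening socket unregisters the serviceID. */
    close(sock);
    return 0;
}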

Serval has a service daemon (servd) that can forward such end-host events to an upstream service router. Servd can also serve as the daemon receiving these events on the service router, updating the router's service table automatically. Service instances should launch servd as follows:

$ ./src/servd/servd [-rip SERVICE_ROUTER_IP ]

Note that SERVICE_ROUTER_IP is optional. When the end-host is on the same subnet as a service router, servd will normally find that service router through broadcast rules in the service table (it is also possible to add a service table rule mapping the service router's serviceID to SERVICE_ROUTER_IP).

The service router should launch servd as follows to be able to receive registrations:

$ ./src/servd/servd -r

The service router should also enable IP and SAL forwarding, as instructed above. When a service is now launched on a service instance running servd (Server1 or Server2 in the picture above), its registration will automatically be forwarded to the service router. The service router will balance incoming service requests over the registered instances. When services on a service instance are shut down, or crash, the Serval stack will automatically unregister them with the service router. Note that the balancing is done using a weighted proportional split, not round-robin, which means subsequent requests might sometimes go to the same server.
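
To illustrate the difference: with a weighted proportional split, each request independently picks an instance with probability proportional to its weight, so two consecutive requests can land on the same server, whereas round-robin would always alternate. The snippet below only illustrates that selection policy with made-up instances and weights; it is not Serval's actual resolver code.

/* Illustration of a weighted proportional split (not Serval's code).
 * Each request picks an instance with probability proportional to its
 * weight, so consecutive requests may hit the same server, unlike
 * strict round-robin. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

struct instance {
    const char *name;
    unsigned weight;
};

/* Hypothetical instances and weights. */
static const struct instance instances[] = {
    { "IP_SERVER_1", 1 },
    { "IP_SERVER_2", 1 },
};

#define NUM_INSTANCES (sizeof(instances) / sizeof(instances[0]))

static const char *pick_instance(void)
{
    unsigned total = 0, r;
    size_t i;

    for (i = 0; i < NUM_INSTANCES; i++)
        total += instances[i].weight;

    r = (unsigned)rand() % total;

    for (i = 0; i < NUM_INSTANCES; i++) {
        if (r < instances[i].weight)
            return instances[i].name;
        r -= instances[i].weight;
    }
    return instances[0].name; /* not reached */
}

int main(void)
{
    int i;

    srand((unsigned)time(NULL));
    for (i = 0; i < 5; i++)
        printf("request %d -> %s\n", i, pick_instance());
    return 0;
}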

Flow migration and mobility

Both clients and servers may have multiple interfaces. Mobile clients typically have both a fixed and a wireless interface (Ethernet and WiFi), or, in the case of mobile phones, two wireless interfaces (e.g., 3G and WiFi). Similarly, many blade servers have multiple interfaces, although it is rare that more than one is used at a time. Serval can migrate flows from one interface to another (for balancing load across interfaces or switching from, e.g., 3G to WiFi), or reconnect flows on an interface that changes its address (when hosts move from one attachment point to another). The currently established flows are visible under /proc/net/serval/flow_table:

$ cat /proc/net/serval/flow_table 
srcFlowID  dstFlowID  srcIP             dstIP             state      dev
4          5          192.168.1.2       192.168.1.3       CONNECTED  eth0

This shows one flow on interface eth0 with source and destination flowIDs 4 and 5, respectively. This flow can be migrated to another interface (e.g., eth1) by issuing the following command:

$ ./src/tools/serv migrate flow 4 eth1

It is also possible to migrate all flows on a specific interface to another:

$ ./src/tools/serv migrate eth0 eth1

When an interface changes its address, the Serval stack will automatically try to migrate any flows on that interface to the new address. In that case, there is no need to manually migrate flows.

Running the Serval stack in user mode

The Serval stack can optionally run as a user-space daemon on top of raw IP sockets, instead of running in the kernel. However, the user-mode stack has a number of limitations, mostly due to the rather complex IPC required between the stack daemon and applications to emulate a BSD-style sockets API. Do not expect the full functionality of the kernel stack when running in user mode. User-mode Serval can be launched as follows:

$ ./src/stack/serval [-i INTERFACE ]

Service and flow tables can be viewed by connecting to the Serval daemon using a telnet client:

$ telnet localhost 9999

Applications need to link against the libserval library to use the user-mode API. Most of the test applications are also compiled into user-mode equivalents with an added _user suffix. For example, to launch the user-mode TCP file server, issue the following:

$ ./src/test/tcp_server_user -f send_file