# Running iSDX with OF-DPA switches

In order for iSDX to be truly useful, it must run at line speed on hardware switches. We have taken the following steps to demonstrate this.

  1. Configure three hardware switches and one software switch (OVS) in the iSDX multi-switch (MS) configuration as shown below.
    The switches we chose are Quanta LY2. To support iSDX's requirement of OpenFlow 1.3, we installed OF-DPA 2.0 running on Open Network Linux. The switches are connected to two Linux machines (Dell R630s) via 10 Gb/s links as shown.

Hardware Testbed Wiring Configuration:
![Current Wiring Configuration](https://docs.google.com/drawings/d/1Z31EwSiUv8rKb9EqCd8TIS5-N9iTvfsiXOKnah7KToA/pub?w=960&h=720)

  2. Verify that the types of flow rules generated by iSDX can be supported by OF-DPA on the Quanta switches.
    We performed this verification by studying the [OF-DPA documentation](https://drive.google.com/file/d/0B_ng-xn3c5pjdjhiajhicUl2Z1E/view?usp=sharing) and experimenting with installing flow rules on one of the switches. iSDX rules require:
    Matching on various combinations of:
  - input port
  - eth_type
  - eth_src
  - eth_dst with arbitrary bit masking
  - TCP dst port
  - TCP src port

Actions to:

  - forward to a port
  - drop packet
  - set eth_src
  - set eth_dst
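
For concreteness, a representative rule of the kind iSDX needs might look as follows in the JSON syntax of Ryu's ofctl_rest REST API (which we use later in this document); the field values here are illustrative, not taken from an actual iSDX policy:

```
# Illustrative only: match TCP traffic arriving on port 7 whose eth_dst falls in a
# masked (virtual) MAC range, rewrite eth_dst, and forward out port 8.
curl -X POST -d '{
  "dpid": 1,
  "priority": 10,
  "match": {
    "in_port": 7,
    "dl_type": 2048,
    "dl_dst": "a2:00:00:00:00:00/ff:ff:ff:00:00:00",
    "nw_proto": 6,
    "tp_dst": 179
  },
  "actions": [
    {"type": "SET_FIELD", "field": "eth_dst", "value": "44:a8:42:32:5d:59"},
    {"type": "OUTPUT", "port": 8}
  ]
}' http://localhost:8080/stats/flowentry/add
```

As described in the next sections, OF-DPA cannot install such a rule as-is: the match must go to the Policy ACL table and include a VLAN tag, and the actions must be moved into groups.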

iSDX originally also used two features that we found are not supported by OF-DPA: matching on the ARP_TPA field, and multicast combined with setting eth_dst (for ARP). To work around these limitations, we moved these features to a software switch (OVS), as shown in the figure.

## OF-DPA Matching
[OF-DPA defines a pipeline of flow tables](https://drive.google.com/file/d/0B_ng-xn3c5pjdjhiajhicUl2Z1E/view?usp=sharing), each of which supports different types of rules. (The formal definition of these rules is given [here](https://drive.google.com/file/d/0B_ng-xn3c5pjdXE2ZVFRdDg1Y0E/view?usp=sharing) using [OpenFlow Table Type Patterns](https://www.opennetworking.org/images/stories/downloads/sdn-resources/onf-specifications/openflow/OpenFlow%20Table%20Type%20Patterns%20v1.0.pdf).) Only one of these tables, the Policy ACL table (table 60), is relevant to iSDX. The Policy ACL table supports matching on the fields listed above; however, it also requires that all matched packets be VLAN tagged. The documentation leads one to believe that the VLAN table (earlier in the pipeline) must be programmed to add a VLAN tag to untagged traffic; in practice, however, a VLAN tag of '1' appears to be added to untagged traffic by default, and no rules are needed in the VLAN table.

## OF-DPA Actions
We found that the OF rules generated by iSDX are not directly supported by OF-DPA; in particular, OF-DPA rules cannot directly include the actions that iSDX requires. It is, however, possible to generate equivalent OF rules that OF-DPA does support: actions can be placed into an OF group, which can then be referenced by rules in the Policy ACL table. For iSDX, there are two relevant types of groups. An instance of the L2 Interface Group holds an output port number, so when a Policy ACL rule needs to forward a packet to an output port, the rule references the appropriate L2 Interface Group instance. The L2 Rewrite Group is used to overwrite the MAC src or dst address: if a Policy ACL rule needs to overwrite either address, it references the appropriate L2 Rewrite Group instance, which in turn references an L2 Interface Group in order to forward the modified packet to an output port.
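
As a concrete illustration of this chaining, the following sketch (assuming Ryu's ofctl_rest group syntax; the group IDs follow the OF-DPA encoding visible in the dumps later in this document, and the MAC address is the port-7 host from the examples below) creates an L2 Interface Group for port 7 and an L2 Rewrite Group that references it:

```
# L2 Interface Group for VLAN 1, port 7. OF-DPA encodes the group ID as
# (VLAN << 16) | port, i.e. 0x00010007 = 65543.
curl -X POST -d '{
  "dpid": 1, "type": "INDIRECT", "group_id": 65543,
  "buckets": [{"actions": [{"type": "OUTPUT", "port": 7}, {"type": "POP_VLAN"}]}]
}' http://localhost:8080/stats/groupentry/add

# L2 Rewrite Group 0x10000007 = 268435463: set eth_dst, then chain to the
# L2 Interface Group above for output.
curl -X POST -d '{
  "dpid": 1, "type": "INDIRECT", "group_id": 268435463,
  "buckets": [{"actions": [
    {"type": "SET_FIELD", "field": "eth_dst", "value": "44:a8:42:32:5d:59"},
    {"type": "GROUP", "group_id": 65543}
  ]}]
}' http://localhost:8080/stats/groupentry/add
```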

  3. Test the functionality and performance of the matching and action features described above using the Quanta switches.
    For the initial testing, we used the configuration shown below.

Perf Testing Configuration

On the Quanta switch (ssh into it via the management interface), start the OF-DPA service and agent. The agent will try to connect to the Ryu controller, which in our setup is at 10.0.0.100:6633. The datapath ID (--dpid=1) uniquely identifies this switch to the controller; when additional switches are added to the configuration, they will need to use different dpids. The OF-DPA agent that we run on the switch is brcm-indigo-ofdpa-ofagent. There is an alternative agent called indigo-ofdpa-agent; however, in our experimentation we found that the latter does not properly handle the ethertype for ARP. [This forum post](https://groups.google.com/a/openflowhub.org/forum/#!msg/floodlight-dev/VA78jF9LYjg/xhccoffDeMoJ) briefly discusses the two implementations.

```
# service ofdpa restart
# sleep 20    # delay until the service is really up
# brcm-indigo-ofdpa-ofagent --controller=10.0.0.100:6633 --dpid=1
```

On the controller VM, start Ryu with the REST interface. (Make sure to wait until the agent in the previous step prints a few 'Connection refused' messages.)

```
$ bin/ryu-manager --verbose ryu.app.ofctl_rest
```

## Installing Test Flow Rules (and Related Groups)

Use the provided 'ofdpa_*' scripts to set up a test flow. These scripts use curl to send messages to the Ryu REST interface (ryu.app.ofctl_rest). First, create an L2 Interface Group for port 7. If a rule references this group, any packets that match the rule will be sent to port 7.

```
$ ofdpa_add_group_intf 7
```

Then create an L2 Rewrite Group that overwrites the destination MAC address and forwards the packet out on port 7 via the L2 Interface Group. In this case, the MAC address we write corresponds to the host interface connected to port 7.

```
$ ofdpa_add_group_rewrite 7 44:a8:42:32:5d:59
```

Do the same thing for port 8:

```
$ ofdpa_add_group_intf 8
$ ofdpa_add_group_rewrite 8 44:a8:42:32:5d:5b
```

Finally, create simple flow rules that forward packets from port 7 to port 8 and from port 8 to port 7. These flow rules reference the L2 Interface Groups created above. (At the moment, these scripts do not use the L2 Rewrite Groups that we just created.)

```
$ ofdpa_add_flow_basic 7 8
$ ofdpa_add_flow_basic 8 7
```
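
Under the hood, ofdpa_add_flow_basic presumably boils down to a flowentry/add call along these lines (a sketch; the payload is inferred from the flow dump in the next section):

```
# Match VLAN-1-tagged traffic arriving on port 7 in the Policy ACL table (60)
# and hand it to the L2 Interface Group for port 8 (0x00010008 = 65544).
curl -X POST -d '{
  "dpid": 1, "table_id": 60, "priority": 4,
  "match": {"in_port": 7, "dl_vlan": 1},
  "actions": [{"type": "GROUP", "group_id": 65544}]
}' http://localhost:8080/stats/flowentry/add
```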

## Dumping Groups and Rules

You can dump the groups and rules either using commands on the switch or via the OF interface.
Using commands on the switch:

```
# client_grouptable_dump
groupId = 0x00010007 (L2 Interface, VLAN ID = 1, Port ID = 7): duration: 171, refCount:2
   bucketIndex = 0: outputPort = 7 popVlanTag = 1
groupId = 0x00010008 (L2 Interface, VLAN ID = 1, Port ID = 8): duration: 168, refCount:2
   bucketIndex = 0: outputPort = 8 popVlanTag = 1
groupId = 0x10000007 (L2 Rewrite, Index = 7): duration: 159, refCount:0
   bucketIndex = 0: referenceGroupId = 0x00010007 vlanId = 0 srcMac: 00:00:00:00:00:00 dstMac: 44:A8:42:32:5D:59
groupId = 0x10000008 (L2 Rewrite, Index = 8): duration: 156, refCount:0
   bucketIndex = 0: referenceGroupId = 0x00010008 vlanId = 0 srcMac: 00:00:00:00:00:00 dstMac: 44:A8:42:32:5D:5B
```

```
# client_flowtable_dump
Table ID 60 (ACL Policy):   Retrieving all entries. Max entries = 1792, Current entries = 2.
-- inPort:mask = 7:0xffffffff srcMac:mask = 0000.0000.0000:0000.0000.0000 destMac:mask = 0000.0000.0000:0000.0000.0000 etherType = 0000 vlanId:mask = 1:0xfff srcIp4 = 0.0.0.0/0.0.0.0 dstIp4 = 0.0.0.0/0.0.0.0 srcIp6 = ::/:: dstIp6 = ::/:: DSCP = 0 VRF = 0 DEI = 0 ECN = 0 IP Protocol = 0x00 Source L4 Port = 0 Destination L4 Port = 0 ICMP Type = 0 ICMP Code = 0 | Set output group ID = 0x   10008 outPort = 0 | priority = 4 hard_time = 0 idle_time = 0 cookie = 1
-- inPort:mask = 8:0xffffffff srcMac:mask = 0000.0000.0000:0000.0000.0000 destMac:mask = 0000.0000.0000:0000.0000.0000 etherType = 0000 vlanId:mask = 1:0xfff srcIp4 = 0.0.0.0/0.0.0.0 dstIp4 = 0.0.0.0/0.0.0.0 srcIp6 = ::/:: dstIp6 = ::/:: DSCP = 0 VRF = 0 DEI = 0 ECN = 0 IP Protocol = 0x00 Source L4 Port = 0 Destination L4 Port = 0 ICMP Type = 0 ICMP Code = 0 | Set output group ID = 0x   10007 outPort = 0 | priority = 4 hard_time = 0 idle_time = 0 cookie = 2
```

Using scripts based on the OF interface (in the ~/iSDX/bin directory):

```
$ of_show_groups
"group_id": 65544	    (0x10008)	    "actions": ["OUTPUT:8", "POP_VLAN"]
"group_id": 268435464	(0x10000008)	"actions": ["GROUP:65544", "SET_FIELD: {eth_dst:44:a8:42:32:5d:5b}"]
"group_id": 65543	    (0x10007)	    "actions": ["OUTPUT:7", "POP_VLAN"]
"group_id": 268435463	(0x10000007)	"actions": ["GROUP:65543", "SET_FIELD: {eth_dst:44:a8:42:32:5d:59}"]
```

```
$ of_show_flows
"match": {"dl_vlan": "1", "in_port": 7}, "actions": ["GROUP:65544"], "priority": 4, "packet_count": 300
"match": {"dl_vlan": "1", "in_port": 8}, "actions": ["GROUP:65543"], "priority": 4, "packet_count": 299
```

## Testing End-to-End Functionality and Performance

We created and installed rules of all the different types needed by iSDX -- i.e., rules with various combinations of matches and actions -- and verified that they worked correctly by sending traffic through the switch with ping and iperf (not shown).
We tested performance with iperf as shown below. The client and server iperf processes ran in different containers to force the packets to go through the switch. We also dumped the packet counts from the switch to verify that the traffic was truly going through it. Note that the transfer rate is 9.4 Gb/s, close to the limit of the 10 Gb/s links.

```
em2:~# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 192.168.1.2 port 5001 connected with 192.168.1.1 port 50006
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  11.0 GBytes  9.41 Gbits/sec

em1:~# iperf -c 192.168.1.2
------------------------------------------------------------
Client connecting to 192.168.1.2, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.1 port 50006 connected with 192.168.1.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  11.0 GBytes  9.42 Gbits/sec
```

  4. Port iSDX to the OF-DPA-based hardware configuration.
    The 'Refmon' (a.k.a. 'flanc', or 'Fabric Manager') component of iSDX has been modified to support OF-DPA-based switches, and the modified component has been tested on the Quanta switches.

To use iSDX with an OF-DPA switch, specify it in the global configuration file. For example, the following indicates that main, inbound, and outbound are OF-DPA switches:

   "RefMon Settings" : {
       "fabric options": {
               "ofdpa": ["main", "inbound", "outbound"],
   	...

In order to leverage the mininet-based system configuration as much as possible, we reused the mininet-based system's Vagrant VM, making a few changes to adapt it to the hardware switch environment. These changes include:

  - remove mininet
  - add a script (init_arp_switch.sh) to create and configure the OVS instance for the ARP switch; this script is invoked from /etc/rc.local at boot (see the sketch below)
  - modify the network parameters in the Vagrantfile so that the correct interfaces are used for the ARP and BGP links from the main switch
  - create a separate configuration directory -- test -- corresponding to the hardware configuration. The differences in this configuration are in the file `sdx_global.cfg` and include:
    - specify the OF-DPA switches as shown above
    - change the switch port numbers to correspond to the hardware configuration
    - change the name of the interface used by the ARP Proxy

Other than these changes, the configuration remains the same as in the mininet-based example; i.e., it uses the same IP and MAC addresses.
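
For reference, a minimal sketch of what init_arp_switch.sh might do; the bridge name, physical interface, and controller address below are assumptions, not the exact contents of the script:

```
#!/bin/sh
# Create an OVS bridge to serve as the ARP switch (names/addresses are placeholders).
ovs-vsctl --may-exist add-br arpswitch
ovs-vsctl set bridge arpswitch protocols=OpenFlow13
# Attach the physical interface carrying the ARP link from the main switch.
ovs-vsctl --may-exist add-port arpswitch eth1
# Point the OVS instance at the iSDX controller.
ovs-vsctl set-controller arpswitch tcp:10.0.0.100:6633
```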

The modified Vagrantfile used for this test configuration is iSDX/examples/test-ms/ofdpa/test/Vagrantfile. To launch this VM, run the following on the host for the control software:

```
$ cd iSDX/examples/test-ms/ofdpa/test
$ vagrant up
```

## Test ASes

The ASes for the experiment are implemented as four Docker containers on a single host machine. Each container runs the Quagga routing software, using a slightly modified version of the Docker image alectolytic/quagga-bgp-tutorial. The network interface of each container is connected to a different physical interface on the host, which is in turn connected to a port on the main switch.
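
One way to achieve this wiring is to start each container without networking and move the corresponding physical interface into the container's network namespace; the following sketch illustrates the idea (the container name, interface name, and address are placeholders, not the exact commands we used):

```
# Start the container with no networking, then hand it a physical NIC.
docker run -d --name as1 --net=none alectolytic/quagga-bgp-tutorial
pid=$(docker inspect -f '{{.State.Pid}}' as1)
sudo mkdir -p /var/run/netns
sudo ln -sf /proc/$pid/ns/net /var/run/netns/as1   # expose the netns to 'ip netns'
sudo ip link set eth2 netns as1                    # eth2: NIC wired to the main switch
sudo ip netns exec as1 ip addr add 172.0.0.1/16 dev eth2   # AS address (placeholder)
sudo ip netns exec as1 ip link set eth2 up
```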