
PreciseTrafGen

manya edited this page Nov 9, 2017 · 1 revision


Precise and Closed-loop Traffic Generation with Caliper

Caliper is a precise traffic generator based on the NetFPGA platform with highly-accurate packet injection times that can be easily integrated with various software-based traffic generation tools. Caliper has the same accuracy level as the NetFPGA-based packet generator, but provides a key additional feature that makes it useful in a larger range of network experiments: Caliper injects dynamically created packets, and thus, it can react to feedback and model the closed-loop behavior of TCP and other protocols. The ability to produce live traffic makes Caliper useful to explore a variety of what-if scenarios by tuning user, application, and network parameters.

Caliper is built on NetThreads, a platform we have created for developing packet processing applications on FPGA-based devices and the NetFPGA in particular. Caliper's main objective is to precisely control the transmission times of packets which are created in the host computer, continually streamed to the NetFPGA, and transmitted on the wire. The generated packets are sent out of a single Ethernet port of the NetFPGA, according to any given sequence of requested inter-transmission times. Unlike previous works that replay packets with prerecorded transmission times from a trace file, Caliper generates live packets and supports closed-loop traffic. Therefore, Caliper is easily coupled with existing traffic generators (such as Iperf) to improve their accuracy at small time scales. Some knowledge of NetThreads is necessary to build and run Caliper.

One-page project description, 12-page paper submitted to ANCS 2010, 20-page project documentation, 3-page demo abstract at SIGCOMM 2010.

Status :
Released
Version :
1.1
Authors :
Monia Ghobadi, Geoffrey Salmon, Yashar Ganjali, Martin Labrecque, J. Gregory Steffan
Base source :
NetThreads 1.0

Download

Get Dependencies

  1. Download the NetFPGA Base Package
  2. Download NetThreads

Download Caliper

  • Download Caliper, the Precise and Closed-loop Traffic Generator tarball caliper_1.1.tar.gz.

Install

Preparation

  • Ensure NetThreads is correctly installed. Before proceeding, you should be able to build and run the example programs that come with NetThreads on the NetFPGA.

Installation

Extract the file caliper_1.1.tar.gz in the same directory that you extracted the NetThreads package in.

The file caliper_1.1.tar.gz contains two directories. One will be extracted in the current directory and the other will be extracted within the NetThreads directory. After the command

 tar xzvf caliper_1.1.tar.gz

the directory structure should be

  caliper/  <-- New directory
     bitfiles/
     common/
     doc/
     driver/
        common/
        kernel/
     pktgen/
     lib/
     regress/
  netthreads/
     compiler/
     src/
        bench/
           ping/
           pipe/
           template/
           common/
           precisegen/  <-- New directory
     loader/

Project Contents

Caliper actually comprises three separate components:

  1. The NetThreads program
    • The C program that runs on the NetThreads platform
    • Located in the netthreads/src/bench/precisegen directory
  2. The modified NetFPGA driver
    • A modified version of the driver distributed with NetFPGA
    • Located in the caliper/driver directory
  3. pktgen
    • A userspace C++ program that tells the driver when to send packets
    • Located in the caliper/pktgen directory

All three components are required to generate packets. However, the pktgen program is just one example of a program that instructs the driver to send packets; it could be replaced, for example, by a network simulator that decides when packets should be sent.

Compilation

Each component has its own Makefile and is compiled separately. Either go to each individual directory and type make, or compile them all from the directory you extracted the tarball in with the following commands:

 make -C caliper/pktgen
 make -C caliper/driver/kernel
 make -C netthreads/src/bench/precisegen

The NetFPGA port that the packet generator sends packets out of is a compile-time option of the precisegen program. The default is to send packets on port 0, although this can be changed by recompiling:

 make -C netthreads/src/bench/precisegen clean
 # build with OUTPUT_PORT=4, which sends packets out of MAC 2
 make -C netthreads/src/bench/precisegen OUTPUT_PORT=4

Valid values of OUTPUT_PORT are 0, 2, 4, and 6, corresponding to the four NetFPGA MACs.

Install Driver

Install the modified nf2 driver using modprobe. The included Makefile has a modprobe target that will remove the existing driver and install the new one.

 make -C caliper/driver/kernel modprobe

Note: Remember to reinstall the original nf2 driver from the NF2 package whenever you want to use NetFPGA for other purposes. The modified version is not compatible with most uses of the NetFPGA.

Load NetFPGA

As with any NetThreads program, the first steps are to run cpci_reprogrammer.pl and nf2download to start the NetThreads platform. The normal NetThreads bitfile will work with the packet generator, but see the next section for a description of another version of NetThreads that is packaged with the precise packet generator.

Once NetThreads is ready, load the precisegen program using the standard NetThreads loader program.

 # netthreads/loader/loader -i netthreads/src/bench/precisegen/precisegen.instr.mif
 # netthreads/loader/loader -d netthreads/src/bench/precisegen/precisegen.data.mif -nodebug

Alternate Bitfiles

The caliper/bitfiles directory contains two NetFPGA bitfiles.

netthreads_nooutqueues.bit :
This is a modified version of netthreads.bit, which is distributed with NetThreads itself. In this bitfile, the output queues of the standard NetFPGA pipeline are not used. Without the extra buffering of the output queues, the software running in NetThreads receives improved feedback of when packets are actually transmitted and can control the times between packet transmissions more accurately. This is useful in the precise traffic generator's use case, but it hasn't been tested with more general NetThreads applications. You should think twice about using it with anything but the packet generator.
measurement_router.bit :
This is a modified version of router_buffer_sizing.bit which is a part of the standard NF2 distribution. This version's purpose is to measure the arrival time of packets more accurately. In addition to routing packets, router_buffer_sizing.bit instruments the output queues and generates events recording when packets arrive at the output queues. To better use these events to measure when packets arrive at the router itself, we have removed most of the logic between the input ports and output ports in the normal NetFPGA router pipeline and synthesized measurement_router.bit. Specifically, we removed the input arbiter and output port lookup modules and hardcoded that packets from port 0 (MAC 0) will always be sent to output port 2 (MAC 1). Packets that arrive at any port beside 0 are ignored. Obviously, this bitfile is useless as a router, but it provides more accurate measurements for packets arriving at port 0 because there is a smaller, more deterministic delay between when packets arrive from the wire and when they are enqueued in the output queue.

Usage

Before using the traffic generator, always bring the nf2c0 interface up:

 ifconfig nf2c0 up

This prepares the generator to send packets.

The generator can be reset by bringing the interface down and up

 ifconfig nf2c0 down
 ifconfig nf2c0 up

This is useful if you want it to stop generating packets immediately, e.g. if you have accidentally set an inter-transmission time of minutes and don't want to wait for all of the packets to be sent.

Userspace Packet Generator (aka pktgen)

This is a normal userspace process that tells Caliper what packets to send and when to send them. It communicates with the NetFPGA driver using NetLink. It is written in C++ and is located in the caliper/pktgen directory. The Makefile in that directory produces the pktgen executable, which has the following command-line options:

-n NUM :
Controls how many packets will be sent in total.
-t TIME :
Controls how long packets will be sent for.
-d DELAY :
Sets the delay between sending each packet, in nanoseconds.
-D FILE :
Sets a file to read delays from. FILE is a text file with delay amounts in nanoseconds separated by white space. The delays are used sequentially, and once the last delay is used pktgen will continue from the beginning of the file.
-l PAYLOAD :
Sets the size of the UDP payloads in the packets sent, in bytes.
-L FILE :
Sets a file to read payload sizes from. FILE is a text file with UDP payload sizes in bytes separated by white space. The sizes are used sequentially, and once the last size is used pktgen will continue from the beginning of the file.
-i DST_IP :
Chooses the destination IP to send packets to.
-I SRC_IP :
Chooses the source IP to set in packets.
-m DST_MAC :
Sets the destination MAC address to send packets to.
-N :
Decides whether packets will be sent through the NetFPGA. If this option is not present, pktgen will send packets through the normal network stack and use busy-waiting to time the delays between packets. Some options, such as -m and -I, do not work unless this option is present.
-h :
Prints help message.

Some of the above options conflict with each other: -n and -t cannot both be present. The inter-transmission delay must be set with either -d or -D, but not both. Similarly, -l and -L cannot both be used to set the payload length.

Example

Here's a simple example of running pktgen on one host and running tcpdump on another host to see the generated packets.

 host2:~/# ifconfig eth1 10.0.0.2 netmask 255.255.255.0
 host2:~/# tcpdump -i eth1 -n
 21:41:38.827986 IP 10.0.0.1.2000 > 10.0.0.2.2001: UDP, length 500
 21:41:39.077988 IP 10.0.0.1.2000 > 10.0.0.2.2001: UDP, length 500
 21:41:39.327997 IP 10.0.0.1.2000 > 10.0.0.2.2001: UDP, length 500
 21:41:39.578005 IP 10.0.0.1.2000 > 10.0.0.2.2001: UDP, length 500
 21:41:39.828009 IP 10.0.0.1.2000 > 10.0.0.2.2001: UDP, length 500
 21:41:40.078018 IP 10.0.0.1.2000 > 10.0.0.2.2001: UDP, length 500
 21:41:40.328022 IP 10.0.0.1.2000 > 10.0.0.2.2001: UDP, length 500

Regression Test

In this section we explain a simple regression test; the regress directory contains more examples. Create the following topology, as illustrated in Figure 1 below.

http://www.cs.toronto.edu/~monia/topology.jpg

Figure 1: Regression test topology

host_A (nf2c0) ------- (nf2c0) host_B (nf2c1) ------- (eth1) host_C

(Note: host_B and host_C can be the same host)

host_A is the sender, host_B is a router to measure the packet inter-arrival times, and host_C is the receiver.

Step 1: Host setup

on host_B run:

cd pathtocaliper/regress/UDP_caliperAPI
./host_B.sh

on host_A run:

cd pathtocaliper/regress/UDP_caliperAPI
./host_A.sh

on host_C run:

cd pathtocaliper/regress/UDP_caliperAPI
./host_C.sh

Step 2: Generate traffic

To capture the event packets, on host_B (the router), run:

pathtocaliper/lib/gulp -i nf2c0 > 1.pcap

To generate traffic using Caliper API on host_A run:

pathtocaliper/pktgen/pktgen -N -i 10.0.0.2 -I 10.0.0.4 -m MAC_ADDRESS -d 13560 -t 10 -l 1470

Where: 10.0.0.2 is the IP address of eth1 on host_C (the receiver), 10.0.0.4 is the IP address of nf2c0 on host_A (the sender), MAC_ADDRESS is the MAC address of eth1 on host_C, 13560 is the requested packet inter-transmission time in nanoseconds, 10 is the duration of traffic generation, and 1470 is the packet payload size in bytes.

This corresponds to roughly 860 Mbps traffic between host_A and host_C.

Step 3: Verify packet inter-arrival times

After the transmission is finished, exit the gulp program on host_B. Then on host_B run:

 pathtocaliper/lib/fixed_interval_error.sh 1.pcap 13560

Where: 1.pcap is the name of the capture file from Step 2, and 13560 is the requested packet inter-transmission time in nanoseconds from Step 2.

The output should be a line like:

 mean 13560.00 var 32.59 avgerr 0.36

Where: mean is the measured mean packet inter-transmission time in ns, var is the variance of the measured inter-transmission times relative to the requested inter-transmission time, and avgerr is the average absolute error between the requested and measured inter-transmission times.

To observe the absolute packet inter-transmissions on host_B run:

 pathtocaliper/lib/rcv_evts2 -o 1.pcap | egrep '^S ' | pathtocaliper/lib/delta.py -f 1 -q | awk '{print $1*8}'

to see the exact packet inter-arrival times and verify that they are all within 8 ns of the requested inter-transmission time.

Details on how to transmit TCP traffic are provided in the regress folder.

Evaluation

Sending UDP Packets at Fixed Intervals

The simplest test case for Caliper is to generate UDP packets with a fixed inter-transmission time. Comparing the requested inter-transmission time with the observed inter-arrival times demonstrates Caliper’s degree of precision. As explained above, Caliper leverages software running on what has previously been a hardware-only network device, the NetFPGA. Even though it executes software, NetThreads should provide sufficient performance and control for precise packet generation. To evaluate this, we compare Caliper’s transmission times against those of the Stanford Packet Generator (hereafter called SPG), which is implemented on the NetFPGA solely in hardware. Moreover, we demonstrate the lack of precision when using a commodity NIC transmitting Iperf traffic. Figure 2 shows the 95th percentile of absolute error between the measured inter-arrival times (D_M) and the requested inter-transmission times (D_R) for various packet transmission rates (T_R). It is important to note that the 95th percentile error is a more conservative metric than the average error, as it captures the 5% largest errors.

For each transmission rate, we send 1,500,000 UDP packets of size 1518 bytes (including Ethernet headers) using Caliper, SPG, or an Intel commodity Ethernet NIC. To generate constant bit rate traffic over the commodity NIC we use the Iperf traffic generator with rate T_R . We then capture a portion of traffic in a trace file and replay it with SPG while configuring SPG with the exact same packet inter-arrival time that we used with Caliper, D_R .

http://www.cs.toronto.edu/~monia/udp_color.jpg http://www.cs.toronto.edu/~monia/tcp_2_color.jpg
Figure 2: The 95th percentile error when injecting UDP packets. Figure 3: The 95th percentile error when injecting TCP packets.

As Figure 2 illustrates, across the full range of transmission rates up to 1 Gbps, the 95th percentile absolute error is around 8 ns for both Caliper and SPG. The clock period of the sending and measuring NetFPGA systems is 8 ns, and hence an error of 8 ns implies that most of the inter-transmission times are within one clock cycle (the measurement resolution). This shows that even though NetThreads executes software, it still allows precise control over when packets are transmitted. On the other hand, note that the commodity NIC’s error is almost three orders of magnitude higher than that of both Caliper and SPG. At the 1 Gbps rate, the error is minimal for Caliper, SPG, and the commodity NIC alike, because the network is operating at its maximum utilization and packets are sent and received back-to-back.

Although Caliper and SPG offer similar accuracy, SPG has a limitation that makes it unsuitable for the role we intend for Caliper. The packets sent by SPG must first be loaded onto the NetFPGA as a pcap file before they can be transmitted. This two-stage process means that SPG can only replay relatively short traces that have been previously captured. Although SPG can optionally replay the same short trace multiple times, it cannot dynamically be instructed to send packets by a software packet generator or network emulator using a series of departure times that are not known a priori. Caliper, on the other hand, can be used to improve the precision of packet transmissions streamed by any existing packet generation software.

Sending TCP Packets at Fixed Intervals

As explained above, Caliper can receive packets from the Linux network stack, and hence it can be used to produce live TCP connections and closed-loop sessions. In this section, we evaluate Caliper's ability to inject smoothed TCP packets (paced TCP) at precise time intervals, and compare the precision of TCP Iperf traffic sent through Caliper with that sent through a commodity NIC. Note that in this set of experiments we are unable to include SPG due to its open-loop limitation. We use the Precise Software Pacer (PSPacer) package as a loadable kernel module to enforce pacing when using the commodity NIC.

Similar to the UDP experiments above, we calculate the absolute error (|D_R - D_M|) between the measured inter-arrival times (D_M) and the requested inter-transmission times (D_R) corresponding to the requested packet transmission rate (T_R) of Iperf. As illustrated in Figure 3, Caliper improves the 95th percentile of absolute error by almost three orders of magnitude compared to the commodity NIC. Hence, Caliper's accuracy enables researchers to perform live and time-sensitive network experiments with confidence in the accuracy of packet injection times. As Figure 3 shows, at the 1 Gbps rate the error of both Caliper and the commodity NIC is minimal because packets are sent and received back-to-back.

For more results regarding variable packet sizes and inter-transmission times, we refer the interested reader to our paper.

Implementation Details

This section describes the implementation of the Caliper driver and NetThreads program in greater detail. This information is useful if you intend to modify Caliper.

NetFPGA Driver

The driver used by the packet generator is a modified version of the original NetFPGA driver. It is located in the directory caliper/driver/kernel . The Makefile in that directory will build the kernel module nf2.ko. It also contains the target modprobe which will build, install and replace the currently inserted kernel module with the new one.

The main task of Caliper's NetFPGA driver is to receive packet descriptions over NetLink, assemble real packet headers, combine them, and copy them to the NetFPGA card using DMA over the PCI bus. It avoids overflowing buffers within the NetFPGA by waiting for explicit feedback before sending the packet headers.

Getting familiar with the driver code takes some time. Here are the three most important source files (all are in the caliper/driver/kernel directory):

nf2kernel.h :
Defines important structs that hold the driver's state.
nf2main.c :
Contains the module's entry and exit functions, sets up hooks for the PCI bus, and handles receiving NetLink messages.
nf2_control.c :
Initializes and communicates with the NetFPGA card. This is where most of the real work is done.

All communication between the driver and the NetThreads packet generator goes over the PCI bus. Both the driver and the NetFPGA can DMA transfer messages to each other. The types of messages are defined in drivercomm.h. Note there is an identical drivercomm.h file in the source code of the NetThreads packet generator (see the netthreads/src/bench/precisegen directory). If one is changed, the other must be updated.

Here are the main concepts and execution paths present in the driver.

Transmit Buffers: txbuff

At initialization, the driver allocates a fixed number of txbuff structs. Each one stores multiple packet headers that are waiting to be sent to the NetFPGA. As soon as a DMA transfer of a txbuff to the NetFPGA has completed, the txbuff is added back to the free list.

Headers for multiple packets are combined because each DMA transfer incurs some overhead; it is not possible to send each packet header individually fast enough. For example, when using the NetFPGA as a normal NIC, the total throughput is only 260 Mb/s when transferring 1518-byte packets individually.

Each of the allocated txbuffs is always in one of three states: (i) it is unused and is in the free_txbuffs list, (ii) it contains some packet headers but still has room for more, in which case it is pointed to by the txbuff_building pointer, or (iii) it is waiting to be copied to the NetFPGA card and is in the transmit_txbuffs list. The txbuff at the head of the transmit_txbuffs list is either currently being copied or will be copied at the first opportunity.

Receiving Packets to Send Via NetLink

Each arriving NetLink message is passed to the function nl_receive_msg in nf2main.c. If it contains packet descriptions it is passed to nf2c_inject_pkt in nf2_control.c, which builds the actual packet header and calls nf2c_tx_pkt. This function adds the packet header to a txbuff struct: if there is a partially filled txbuff with room for the packet header, it is used; otherwise, the function gets a new txbuff from the free_txbuffs list. Next, nf2c_attempt_send is called, which starts a DMA transfer to the NetFPGA if none is currently in progress. At most one DMA transfer can be occurring at any one time.

Handling Interrupts

To communicate with the driver the NetFPGA card can raise an interrupt, which causes nf2c_intr to be called. This function reads a status register of the NetFPGA to determine why the interrupt was raised and handles it appropriately. Here are the most interesting interrupts:

INT_DMA_TX_COMPLETE :
Tells the driver that a DMA to the NetFPGA has completed. The driver will call nf2c_attempt_send to start sending the next waiting txbuff, if appropriate.
INT_PKT_AVAIL :
Tells the driver that the NetFPGA has a packet ready to DMA to the driver.
INT_DMA_RX_COMPLETE :
Occurs when the NetFPGA has finished sending a packet to the driver. This then calls nf2c_rx with the actual packet.

Transmit Quota

NetThreads and the NetFPGA have a limited amount of buffering available. If the driver sends too many packets to the NetFPGA, then they will be dropped at the input queues in the NetFPGA. To avoid this, the driver maintains a quota of the number of packets it can send to the NetFPGA. The allowed_send variable stores the number of packets that the driver can send. If it reaches zero, then the driver will stop doing DMA transfers. When the driver receives a quota message from the NetFPGA, it increases allowed_send . The logic for this is in the function nf2c_rx .

Marking Packets

Marking is a feature added to avoid using all of the available txbuffs in the driver and dropping packets. It provides explicit acknowledgement back to a process that is sending packets to the driver.

A process can send a NetLink message to the driver to set a mark. When the driver receives the message, the function nf2c_mark is called which sets some fields of the txbuff containing the last packet that the driver received over NetLink. When that txbuff is eventually copied to the NetFPGA, the driver will send a NetLink message to the process that originally set the mark (see the notify_of_mark function).

Starting and Resetting

When the nf2 module is first inserted, either when Linux boots or when you run modprobe explicitly, the network interface it creates will be down; the driver will not send packets to the NetFPGA and will ignore interrupts.

Bring the interface up by running ifconfig nf2c0 up, which causes the driver function nf2c_open to be called. This function initializes the NetFPGA card and sends a reset command message to the NetFPGA, and the NetThreads packet generator replies with its own reset message. Until that reply is received, no packets will be sent to or from the NetFPGA.

Bring down the interface with ifconfig nf2c0 down . This clears all packets that are waiting in the driver to be sent and ignores any future interrupts from the NetFPGA.

Multiple Network Interfaces and NetFPGA Cards

The original NetFPGA driver creates four network interfaces in Linux and supports multiple NetFPGA cards in a single computer. For the first card, it would create interfaces nf2c0, nf2c1, nf2c2, and nf2c3. For the second card, it would create nf2c4, nf2c5, and so on. The modified driver for packet generation creates only a single network interface. Also the modified driver only supports a single NetFPGA card in the machine. This is partly because it simplified the API for sending packets (you don't need to pick an interface when sending a packet, there is only one), and partly because I did not test with multiple NetFPGA cards in a single computer.

Sending packets received over NetLink requires the driver to access the state for the NetFPGA. Currently this is simply stored in the global variable inject_dev. To support multiple NetFPGA cards, the driver would have to select and use the state for the correct NetFPGA instead.

NetThreads Packet Generator

This section describes Caliper's NetThreads application, hereafter called the precisegen. The precisegen receives packets from the NetFPGA driver, each containing multiple packet headers, and builds and sends packets out of one Ethernet port at the appropriate times. The precisegen code is in the directory netthreads/src/bench/precisegen.

Thread Roles

There are eight threads in NetThreads. To easily send packets in the correct order and at the correct time, only one thread ever sends packets in the precisegen. This thread has thread id 0 and is called the sending thread. Near the end of the main function the sending thread calls manage_sending, which never returns, while the other seven threads serve jobs from a work queue.

 if (nf_tid() == 0) {
   manage_sending();
 } else {
   workq_serve(&jobs_queue, -1);
 }

There are only two types of jobs in the work queue:

  • Receive a packet by calling the pop_work function;
  • Prepare a packet in the output memory by calling the prepare_work function.
Each packet that the NetFPGA driver sends to the NetFPGA contains the descriptions of multiple packets to send. Eventually a NetThreads thread pops the packet and calls process_pkt_pkts, which examines the packet descriptions and adds a job for each description to the jobs_queue. Threads then pop the jobs from the queue and copy the packet headers from the descriptions in the input memory to free slots in the output memory. Once the packets to send are prepared, they are handed to the sending thread in the order they should be sent. Because the time it takes to prepare each packet may vary, only the thread that has finished preparing the packet scheduled to be sent first notifies the sending thread. See the variable next_pkt_tosend and the circular array prepared_pkts for how this is done.

The sending thread spends all its time looping in the manage_sending function. It continually checks for newly prepared packets to send. Once it has a packet ready, it continually calculates how much time is left before the packet's scheduled departure. When the departure is imminent (currently, within 5000 clock cycles) it calls tight_send_loop to actually send the packet. As its name suggests, this function originally looped continually checking the time. However, it now uses a hardware-triggered send by calling nf_pktout_send_schedule. This tells the hardware when to send the packet and is more accurate than the previous method. If the time has already passed, the packet is sent immediately. Otherwise, the hardware waits until the clock reaches the scheduled time before passing the packet to the rest of the NetFPGA pipeline.

Work Queues and Multi-threaded Deques

The precisegen uses two handy data structures: the mtdeque and the work_queue. Both are in the bench/common directory. In brief, the mtdeque is a singly linked list that multiple threads can push or pop from at the same time. The work_queue uses an mtdeque to store function pointers and data which represent a piece of work for a thread to do. Different threads serve the queue by popping off work and passing the data to the specified function. See the header files mtdeque.h and workqueue.h for documentation of each relevant function.

Managing Input and Output Memory

The input and output memories in NetThreads are 16KB each. In the precisegen, as in most NetThreads applications, both memories are divided into ten 1600-byte slots. The precisegen must carefully manage slots in both.

As described in the driver section above, the NetFPGA driver avoids filling all of the input memory slots by waiting for explicit feedback from the precisegen. The feedback is a quota packet sent by the sending thread from the function send_special .

Once a packet is popped, its input memory slot cannot be freed until the packet is no longer needed. Multiple threads can be reading different packet descriptions from the same packet at the same time. To avoid freeing the packet too early, or multiple times, there is a reference count in the struct associated with each input memory slot. See the end of the function prepare_work, where it decrements the count and optionally calls free_input_pkt.

Each slot in the output memory used to send packets is represented by a pkt_out struct, which holds information about the slot and is used to schedule the prepare_work jobs in the work queue. Empty slots are stored in the free_pkts queue. Only 8 slots are used to send packets to the Ethernet ports; the remaining 2 are used to send feedback quota packets back to the NetFPGA driver. The following code in the main function allocates the output memory slots for these two purposes:

 quota_pkt = nf_pktout_alloc(1520);
 /* Add all pkt_out structs to the free list */
 mtdeque_lock(&free_pkts);
 for (i = 0; i < NUM_PKT_OUTS; i++) {
   struct pkt_out *po = pkt_outs + i;
   po->output_mem = nf_pktout_alloc(1520);
   mtdeque_push_nolock(&free_pkts, &po->item);
 }
 mtdeque_unlock(&free_pkts);

One slot is assigned to the quota_pkt and 8 are attached to pkt_out structs and added to the free_pkts queue. The number 1520 passed to nf_pktout_alloc is meaningless and ignored. Note that this is the only time the nf_pktout_alloc function is called in the precisegen. Unlike most NetThreads applications, the precisegen explicitly collects the previously sent packet when sending a new packet. Compare nf_pktout_send_finish_nocollect() in precisegen.c with nf_pktout_send_finish() in support.c. Collecting the free packet directly is more efficient than going through the normal nf_pktout_alloc mechanism, and if the packets are scheduled to depart close together the sending thread cannot afford to waste clock cycles.

Time

The sending thread maintains the current time in two global variables: time_hi and time_lo . It stores the last value returned by nf_time() in time_lo and counts the number of times that the 32-bit time value wraps in time_hi . Together these variables store a 64-bit time value with a granularity of 8ns which moves ahead to the current time whenever update_time() is called. The time_until function returns the time between the current time and a given time.

There are two ways to specify a packet's departure time: either an absolute time, or a time relative to the last packet transmission. Currently, the userspace packet generator (pktgen) sends the first packet with an absolute time and all subsequent packets with relative delays, but nothing in the driver or precisegen requires this.

Receiving Packets

The precisegen does not handle receiving packets. If packets do arrive at the NetFPGA's Ethernet ports they will fill slots in the input memory and could cause packets sent by the driver to be dropped. To support receiving packets and avoid dropping packets, a thread must continually call nf_pktin_pop() to learn of incoming packets. Each incoming packet must be quickly dealt with and freed to make room for packets that may arrive later.

There are several options for processing incoming packets. The precisegen could copy the entire packet to output memory and send it over the PCI bus to the driver, though the bandwidth and overhead of the PCI bus limit how many packets can be received this way. To mitigate the bandwidth problem, instead of sending the entire packet to the driver, the precisegen could send only the packet headers, either individually or by combining headers from multiple packets together. Another option is avoiding the PCI bus entirely and sending incoming packets out another Ethernet port, which could be connected to another NIC.

Regardless of which method is chosen, some slots in both the input and output memories need to be reserved for incoming packets. Currently, at most 8 slots in the incoming memory can be used by packets sent to the NetFPGA from the driver (this is controlled by the RCV_QUOTA_START definition). This leaves only 2 slots for incoming packets, which may not be enough depending on how quickly incoming packets are processed. Also, currently 8 slots in output memory are used for packets being transmitted (controlled by NUM_PKT_OUTS), and one output slot is used to send feedback quota messages to the driver. Some of these slots would instead be needed to process and send received packets; how many depends on where the packets are to be sent and whether entire packets are kept or only the packet headers. Choosing the best numbers requires some experimentation. It may be necessary to dynamically change the allocation of packet slots to adapt to different traffic conditions.
