
Reference NIC


Name

reference_nic

Location

hw/projects/reference_nic

IP Cores

Software

Description

The division of the hardware into modules was hinted at in the previous section. Understanding these modules is essential to making the most of the available designs. The reference projects on the NetFPGA platform, including the NIC, all follow the same modular structure: a pipeline in which each stage is a separate module. A diagram of the pipeline is shown in the next section.

The CMAC subsystem on the Xilinx open-nic-shell sends received packets to the input arbiter module via nf_mac_attachment. The input arbiter has three input interfaces: two from the CMAC subsystem on open-nic-shell and one from the QDMA subsystem on open-nic-shell. Each input to the arbiter connects to an input queue, which is in fact a small fall-through FIFO. The simple arbiter rotates over the input queues in a round-robin manner, each time selecting a non-empty queue and writing one full packet from it to the next stage in the data path, which is the output port lookup module.
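
The arbitration behavior can be summarized with a small software model. The Python sketch below is illustrative only: the actual module is Verilog RTL, and the queue and packet representation here is assumed, not taken from the design.

    # Behavioral sketch of the round-robin input arbiter described above:
    # rotate over the input queues, skip empty ones, and forward one full
    # packet at a time to the next pipeline stage.
    from collections import deque

    def round_robin_arbiter(input_queues, forward):
        """input_queues: list of deques, each holding whole packets.
        forward: callable that hands one packet to the next stage."""
        last = 0
        while any(input_queues):                      # until every queue drains
            for offset in range(1, len(input_queues) + 1):
                idx = (last + offset) % len(input_queues)
                if input_queues[idx]:                 # next non-empty queue wins
                    forward(input_queues[idx].popleft())
                    last = idx
                    break

    # Example: two CMAC inputs and one QDMA input, as in the reference pipeline.
    queues = [deque(["cmac0_pkt"]), deque(), deque(["dma_pkt1", "dma_pkt2"])]
    round_robin_arbiter(queues, lambda pkt: print("to output_port_lookup:", pkt))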

The output port lookup module is responsible for deciding which port a packet goes out of. After that decision is made, the packet is handed to the output queues module. The lookup module implements a very basic scheme: based on the source port indicated in the packet's header, all packets from the 100G ports are sent to the CPU and vice versa. Note that although there is only one physical DMA module in Verilog, there are 4 virtual DMA ports, distinguished by the SRC_PORT/DST_PORT field.
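
As a rough illustration of this lookup rule, the Python sketch below steers traffic from a physical port to its paired DMA port and back again. The "phy:N"/"dma:N" naming is a stand-in for the real SRC_PORT/DST_PORT encoding in TUSER, which is not reproduced here.

    # Illustrative model of the basic NIC lookup: packets from the wire go to
    # the host (DMA), packets from the host go out on the wire.
    def output_port_lookup(src_port: str) -> str:
        kind, idx = src_port.split(":")      # e.g. "phy:0" or "dma:3"
        if kind == "phy":                    # received on a physical port -> CPU
            return f"dma:{idx}"
        if kind == "dma":                    # sent by the CPU -> physical port
            return f"phy:{idx}"
        raise ValueError(f"unknown source port {src_port}")

    assert output_port_lookup("phy:0") == "dma:0"
    assert output_port_lookup("dma:1") == "phy:1"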

Once a packet arrives at the output_queues module, it already has a marked destination, provided on a side channel (the TUSER field). According to that destination, it is placed into a dedicated output queue. There are five such output queues: one per 100G port and one for the DMA block. Note that a packet may be dropped if its output queue is full or almost full. When a packet reaches the head of its output queue, it is sent to the corresponding output port, either the CMAC subsystem or the QDMA subsystem. The output queues are arranged in an interleaved order: one physical Ethernet port, one DMA port, and so on. Even queues are therefore assigned to physical Ethernet ports, and odd queues are assigned to the virtual DMA ports.
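
The interleaved queue assignment and the drop-on-full behavior can also be sketched in a few lines of Python. The queue depth and the number of destinations below are illustrative, not the values used in the RTL.

    # Sketch of the output_queues behavior described above: even queue indices
    # serve physical Ethernet ports, odd indices serve virtual DMA ports, and a
    # packet is dropped when its queue is full.
    from collections import deque

    QUEUE_DEPTH = 16                          # illustrative, not the RTL depth

    def queue_index(dst_kind: str, dst_idx: int) -> int:
        return 2 * dst_idx if dst_kind == "phy" else 2 * dst_idx + 1

    def enqueue(queues, dst_kind, dst_idx, packet):
        q = queues[queue_index(dst_kind, dst_idx)]
        if len(q) >= QUEUE_DEPTH:             # full (or almost full): drop
            return False
        q.append(packet)
        return True

    queues = [deque() for _ in range(4)]      # e.g. 2 physical + 2 DMA destinations
    enqueue(queues, "phy", 0, "pkt-a")        # goes to queue 0
    enqueue(queues, "dma", 0, "pkt-b")        # goes to queue 1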

The QDMA subsystem on open-nic-shell serves as the DMA engine for the reference NIC design. It includes the QDMA IP, a DMA engine, and an AXI4 Interconnect module.

Testing

  1. Make sure you clone the latest version of the NetFPGA-PLUS package and that you have the necessary packages installed. The current testing infrastructure is Python based.
git clone https://github.com/NetFPGA/NetFPGA-PLUS.git
  2. Make sure to update the following environment variables in the file {user-path}/NetFPGA-PLUS-dev/tools/settings.sh:
  • PLUS_FOLDER
  • BOARD_NAME
  • NF_PROJECT_NAME

To set the environment variables, source both relevant settings files:

source {user-path}/NetFPGA-PLUS-dev/tools/settings.sh
source /tools/Xilinx/Vivado/2020.2/settings64.sh
  3. Compile the library of IP cores. (It is not necessary to recompile the library for every new project unless you have made changes to the IP cores.)
$ cd $NFPLUS_FOLDER 
$ make 
  4. Program the FPGA
  • If you want to run the Hardware tests with the pre-existing bitfile provided in the base repo:
$ cd $NF_DESIGN_DIR/bitfiles
$ xsdb

On the xsdb console, use connect and then fpga -f reference_nic.bit to program the FPGA with the bitfile, then exit to close the xsdb console. Reboot the machine.

  • If you want to create your own bitfile and run the Hardware tests:
$ cd $NF_DESIGN_DIR
$ make
$ cd bitfiles
$ xsct

On the xsct console, use connect and then fpga -f reference_nic.bit to program the FPGA with the bitfile, then exit to close the xsct console. Reboot the machine.

  5. Check that the bitfile is loaded using the following command.
$ lspci -vxx | grep Xilinx

If the host machine doesn't detect the Xilinx device, you need to reprogram the FPGA and reboot as mentioned in the previous step.

  6. Build the driver for the NetFPGA PLUS board and check that the built kernel module is loaded. When building as a non-root user, you need to prefix the installation commands (such as insmod) with sudo.
# cd sw/driver/
# make
# insmod onic.ko

Then run ip a to check if you are able to see the 'nfX' interfaces.

  7. Running the test

The top-level file nf_test.py can be found inside NetFPGA-PLUS-dev/tools/scripts. Tests are run with the nf_test.py command, followed by arguments that indicate whether it is a hardware or simulation test (hw or sim) and which specific test to run. For instance:

# ./nf_test.py sim --major loopback --minor minsize 

or

# sudo -E env PYTHONPATH=`echo $PYTHONPATH` zsh -c 'source $XILINX_PATH/settings64.sh && ./nf_test.py hw --major loopback --minor minsize '

For a complete list of arguments type ./nf_test.py --help.

You can find more information related to hardware and simulation tests on the corresponding wiki pages.

The test infrastructure is Python based. You can find the tests inside the hw/projects/{project_name}/test folder.

The 100G NIC on NetFPGA is similar to other NICs. In the following sections, we show how to run an iperf test between the NetFPGA and another machine.

Testing Hardware using two or more machines

To run the test, you need two machines, A and B. Let's say Machine A is equipped with a NetFPGA board and Machine B is equipped with a third-party 100G NIC.

Download the reference_nic bitfile from hw/projects/reference_nic/bitfiles/reference_nic.bit. (Refer to Hardware Tests if you don't know how to download the bitfile or have not yet set up the JTAG cable.)

Connect Machine A and Machine B using a 100G cable. Assume we use nf0 (the port farthest from the PCI Express) on Machine A and eth1 on Machine B.

Build and Install the open-nic-driver

A Quick Start guide is available in the open-nic-driver repository.

Setup IP address

On Machine A

    sudo ip addr add 192.168.0.1/24 dev nf0

On Machine B

    sudo ip addr add 192.168.0.2/24 dev eth1

Test 1: Ping

On Machine A

    [user@machine_A ~]$ ping 192.168.0.2
    PING 192.168.0.2 (192.168.0.2) 56(84) bytes of data.
    64 bytes from 192.168.0.2: icmp_req=1 ttl=50 time=1.04 ms
    64 bytes from 192.168.0.2: icmp_req=2 ttl=50 time=1.04 ms
    64 bytes from 192.168.0.2: icmp_req=3 ttl=50 time=1.04 ms
    64 bytes from 192.168.0.2: icmp_req=4 ttl=50 time=1.04 ms

Test 2: iperf

iperf is a utility to measure the performance over an IP link.

First, make sure you have iperf installed on both machines. If not,

sudo apt install iperf 

Setup iperf server on Machine A.

iperf -s

Setup iperf client on Machine B.
[user@machine_B ~]$ iperf -c 192.168.0.1    
------------------------------------------------------------ 
Client connecting to 192.168.0.1, TCP port 5001 
TCP window size:  132 KByte (default) 
------------------------------------------------------------ 
[  3] local 192.168.0.2 port 52787 connected with 192.168.0.1 port 5001 
[ ID] Interval       Transfer     Bandwidth 
[  3]  0.0-10.0 sec  9.35 GBytes  935 Mbits/sec