NetFPGA-10G Reference NIC (1G)
- NetFPGA-10G Specific cores
- Xilinx AXI Peripheral
- Microblaze Subsystem
The division of the hardware into modules was hinted at in the previous section. Understanding these modules is essential to making the most of the available designs. The distributed projects in the NFP, including the NIC, all follow the same modular structure: a pipeline in which each stage is a separate module. A diagram of the pipeline is shown in the next section.
The first stage in the pipeline consists of several queues which we call the Rx queues. These queues receive packets from IO ports such as the Ethernet ports and the DMA port, and provide a unified interface (AXI) to the rest of the system. The current design has four 1G Ethernet Rx queues and one CPU DMA queue over PCIe (OPED RX). Notice that although there is only one physical DMA queue in Verilog, the four virtual DMA queues are distinguished by the SRC_PORT/DST_PORT fields in TUSER. See the standard IP interface specification for more information. Packets that arrive at CPU DMA virtual Rx queue X are packets that the software has sent out of interface nf10cX.
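To make the virtual-queue numbering concrete, here is a minimal software sketch of how a one-hot SRC_PORT value in TUSER might encode which queue a packet came from. The even/odd bit interleaving (even one-hot bits for Ethernet queues, odd bits for CPU DMA queues) follows the port interleaving described in this document; the exact TUSER field offsets are not shown here, so consult the standard IP interface specification for the authoritative layout.

```python
def src_port_onehot(queue_index, is_dma):
    """One-hot SRC_PORT value for a queue.

    Ethernet Rx queue i maps to one-hot bit 2*i; CPU DMA virtual
    queue i maps to bit 2*i + 1 (an assumption consistent with the
    interleaving described in the text).
    """
    return 1 << (2 * queue_index + (1 if is_dma else 0))


def decode_src_port(onehot):
    """Inverse mapping: one-hot value -> (queue_index, is_dma)."""
    bit = onehot.bit_length() - 1
    return bit // 2, bool(bit % 2)
```

For example, `src_port_onehot(0, True)` gives the one-hot value for CPU DMA virtual queue 0 (the queue fed by interface nf10c0).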
In the main datapath, the first module a packet passes through is the Input Arbiter. The Input Arbiter decides which Rx queue to service next, pulls the packet from that Rx queue, and hands it to the next module in the pipeline: the Output Port Lookup module. The Output Port Lookup module is responsible for deciding which port a packet goes out of. After that decision is made, the packet is handed to the Output Queues module, which stores the packet in the output queue corresponding to the chosen output port until the Tx queue is ready to accept the packet for transmission. For more information, see the Output Port Lookup documentation.
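The arbitration step above can be sketched in software as a simple round-robin scheduler over the Rx queues. This is a hedged illustration of the general technique, not the Verilog implementation: the real Input Arbiter's exact servicing policy should be checked against its source.

```python
from collections import deque


def round_robin_arbiter(rx_queues):
    """Yield (queue_index, packet) pairs by servicing non-empty Rx
    queues in round-robin order, continuing from the queue after the
    one serviced last. rx_queues is a list of deques of packets."""
    last = -1
    while any(rx_queues):
        # Scan queues starting just after the last one serviced.
        for i in range(1, len(rx_queues) + 1):
            idx = (last + i) % len(rx_queues)
            if rx_queues[idx]:
                last = idx
                yield idx, rx_queues[idx].popleft()
                break
```

With queues `[deque(['a', 'b']), deque(), deque(['c'])]`, the arbiter services queue 0, skips the empty queue 1, services queue 2, then wraps back to queue 0.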
The Tx queues are analogous to the Rx queues, except that they send packets out of the IO ports instead of receiving them. Tx queues are also interleaved, so that packets sent out of User Data Path port 0 go to Ethernet Tx queue 0, packets sent out of User Data Path port 1 go to CPU DMA Tx queue 0, and so on. Packets that are handed to virtual DMA Tx queue X pop out of interface nf10cX through OPED TX.
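The interleaving above can be summarized as a small mapping. This is a hedged sketch grounded only in the numbering scheme described in the text (even User Data Path ports are Ethernet queues, odd ports are CPU DMA queues, and queue i of one kind pairs with queue i of the other); the helper names are illustrative, not part of the NetFPGA codebase.

```python
def udp_port(queue_index, is_dma):
    """User Data Path port number for a queue.

    Even ports are Ethernet queues, odd ports are CPU DMA queues,
    per the interleaving described in the text.
    """
    return 2 * queue_index + (1 if is_dma else 0)


def paired_port(port):
    """The paired port with the same queue index: Ethernet port
    2*i <-> CPU DMA port 2*i + 1 (e.g. traffic arriving on Ethernet
    queue i is delivered up to the host via DMA queue i)."""
    return port + 1 if port % 2 == 0 else port - 1
```

For example, Ethernet Tx queue 1 sits on User Data Path port 2, and its paired CPU DMA queue 1 sits on port 3.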
The 10G NIC on NetFPGA is similar to other NICs. In the following sections, we will show how to run an iperf test between NetFPGA and another machine.
To run the test, you need two machines, A and B. Let's say Machine A is equipped with NetFPGA and Machine B is equipped with a third-party 1G NIC.
Download the reference_nic bitfile from projects/reference_nic/bitfiles/reference_nic.bit. (Refer to the Production Test Manual if you don't know how to download the bitfile or have not yet set up the JTAG cable.)
Connect Machine A and Machine B using a 1G cable (you will need an SFP-CAT5 converter if you use a regular CAT5/6 copper cable). Assume we use nf0 (the port nearest the PCI Express connector) on Machine A and eth1 on Machine B.
Build and Install the NetFPGA-10G NIC Driver
Here is a Quick Start.
Setup IP address
On Machine A
sudo ifconfig nf0 192.168.0.1
On Machine B
sudo ifconfig eth1 192.168.0.2
Test 1: Ping
On Machine A
[hyzeng@machine_A ~]$ ping 192.168.0.2
PING 192.168.0.2 (192.168.0.2) 56(84) bytes of data.
64 bytes from 192.168.0.2: icmp_req=1 ttl=50 time=1.04 ms
64 bytes from 192.168.0.2: icmp_req=2 ttl=50 time=1.04 ms
64 bytes from 192.168.0.2: icmp_req=3 ttl=50 time=1.04 ms
64 bytes from 192.168.0.2: icmp_req=4 ttl=50 time=1.04 ms
Test 2: iperf
iperf is a utility to measure the performance over an IP link.
First, make sure you have iperf installed on both machines. If not,
sudo yum install iperf
Set up an iperf server on Machine A.
[hyzeng@machine_A ~]$ iperf -s
Set up an iperf client on Machine B.
[hyzeng@machine_B ~]$ iperf -c 192.168.0.1
------------------------------------------------------------
Client connecting to 192.168.0.1, TCP port 5001
TCP window size: 132 KByte (default)
------------------------------------------------------------
[  3] local 192.168.0.2 port 52787 connected with 192.168.0.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.09 GBytes   935 Mbits/sec