
NetFPGA 1G CML Reference NIC


Name

reference_nic_nf1_cml

Location

projects/reference_nic_nf1_cml

IP Cores

Software


Description

The division of the hardware into modules was hinted at in the previous section. Understanding these modules is essential to making the most of the available designs. The distributed projects in the NFP, including the NIC, all follow the same modular structure: a pipeline in which each stage is a separate module. A diagram of the pipeline is shown in the Block Diagram section below.

Packets first enter the device through the nf1_cml_interface module. On receive, this core translates the signals from the Ethernet PHY into an AXI Stream of packets; on transmit, it translates the AXI Stream of packets into the RGMII signals that the PHY understands.

The RX side of each nf1_cml_interface module connects next to the input arbiter module. The input arbiter has five input interfaces: four from the nf1_cml_interface modules and one from the DMA module (described later). Each input to the arbiter connects to an input queue, which is in fact a small fall-through FIFO. The simple arbiter rotates between all the input queues in a round-robin manner, each time selecting a non-empty queue and writing one full packet from it to the next stage in the data path, the output port lookup module.

The output port lookup module is responsible for deciding which port a packet goes out of. After that decision is made, the packet is handed to the output queues module. The lookup module implements a very basic scheme, sending all packets from the 1G ports to the CPU and vice versa, based on the source port indicated in the packet's header. Note that although there is only one physical DMA module in Verilog, there are four virtual DMA ports, distinguished by the SRC_PORT/DST_PORT field.

Once a packet arrives at the nf10_bram_output_queues module, it already has a marked destination (provided on a side channel). According to that destination, it is placed in a dedicated output queue. There are five such output queues: one per 1G port and one for the DMA block. Note that a packet may be dropped if its output queue is full or almost full. When a packet reaches the head of its output queue, it is sent to the corresponding output port, either an nf1_cml_interface module or the DMA module. The output queues are arranged in an interleaved order: one physical Ethernet port, one DMA port, and so on. Even queues are therefore assigned to physical Ethernet ports, and odd queues to the virtual DMA ports.

The DMA module serves as the DMA engine for the reference NIC design. It includes Xilinx's PCIe core and an AXI4-Lite master module. To the other NetFPGA modules it exposes AXI Stream (master and slave) interfaces for sending and receiving packets, as well as an AXI4-Lite master interface through which all AXI registers can be accessed from the host (over PCIe). To this end it connects to the axi_interconnect module.
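As a purely illustrative sketch, a register exposed through this AXI4-Lite path can be read or written from the host with a small register-access helper. The tool name rwaxi, its flags, and the address below are placeholders and assumptions, not confirmed for this release; use whatever register-access utility and address map ship with your distribution.

rwaxi -a 0x44020000          # read one AXI register (placeholder address, hypothetical tool)
rwaxi -a 0x44020000 -w 0x1   # write 0x1 to the same register (placeholder)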

The reference NIC design also implements a Xilinx MicroBlaze subsystem, including a BRAM memory block and its controller. For more information, please refer to the MicroBlaze reference links provided above.

Block Diagram



Testing

The Reference NIC on NetFPGA is similar to other NICs. In the following sections, we will show how to run an iperf test between the NetFPGA machine and another machine.

Each project has features that are verified by simulation tests and hardware (HW) tests. The test infrastructure is based on Python. You can find the tests in the projects/{project_name}/test folder.
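As a rough sketch of how an individual test is usually invoked (the script location and the --major/--minor test names below are assumptions based on the common NetFPGA test harness; consult the tools documentation in your release):

cd tools/bin
./nf_test.py sim --major loopback --minor minimum    # run one simulation test
./nf_test.py hw  --major loopback --minor minimum    # run the same test against hardware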

Testing Hardware using two or more machines

To run the test, you need two machines, A and B. Let's say Machine A is equipped with a NetFPGA-1G-CML card and Machine B is equipped with a third-party 1G NIC.

Download the reference_nic_nf1_cml bitfile from projects/reference_nic_nf1_cml/bitfiles/reference_nic_nf1_cml.bit. (Refer to the Production Test Manual if you don't know how to download the bitfile and/or have not set up the JTAG cable yet.)

Connect Machine A and Machine B using a Cat 5e or better cable. Assume we use nf0 (the port nearest to the PCI Express connector) on Machine A and eth1 on Machine B.

Build and Install the NetFPGA-1G-CML NIC Driver

Here is a Quick Start.
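As a minimal sketch of the usual flow, assuming the driver sources sit under the project's software tree and build into a kernel module named nf10.ko (both the path and the module name are assumptions; follow the Quick Start for the authoritative steps):

cd projects/reference_nic_nf1_cml/sw/host/driver    # assumed driver source location
make                                                # build the kernel module
sudo insmod nf10.ko                                 # load the driver
ifconfig -a | grep nf                               # the nf0-nf3 interfaces should now appear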

Setup IP address

On Machine A

sudo ifconfig nf0 192.168.0.1

On Machine B

sudo ifconfig eth1 192.168.0.2
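On systems where ifconfig is deprecated, the equivalent iproute2 commands can be used instead (same interface names and addresses as above):

sudo ip addr add 192.168.0.1/24 dev nf0     # on Machine A
sudo ip link set nf0 up

sudo ip addr add 192.168.0.2/24 dev eth1    # on Machine B
sudo ip link set eth1 up
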
Test 1: Ping

On Machine A

[hyzeng@machine_A ~]$ ping 192.168.0.2

PING 192.168.0.2 (192.168.0.2) 56(84) bytes of data.

64 bytes from 192.168.0.2: icmp_req=1 ttl=50 time=1.04 ms

64 bytes from 192.168.0.2: icmp_req=2 ttl=50 time=1.04 ms

64 bytes from 192.168.0.2: icmp_req=3 ttl=50 time=1.04 ms

64 bytes from 192.168.0.2: icmp_req=4 ttl=50 time=1.04 ms
Test 2: iperf

iperf is a utility for measuring performance over an IP link.

First, make sure you have iperf installed on both machines. If not,

sudo yum install iperf 
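On Debian or Ubuntu systems, the equivalent package install is:

sudo apt-get install iperf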

Set up an iperf server on Machine A.

iperf -s

Set up an iperf client on Machine B.

[hyzeng@machine_B ~]$ iperf -c 192.168.0.1

------------------------------------------------------------

Client connecting to 192.168.0.1, TCP port 5001

TCP window size:  132 KByte (default)

------------------------------------------------------------

[  3] local 192.168.0.2 port 52787 connected with 192.168.0.1 port 5001

[ ID] Interval       Transfer     Bandwidth

[  3]  0.0-10.0 sec  9.35 GBytes  935 Mbits/sec
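The run above measures traffic received by the NetFPGA card, since the client on Machine B transmits to the server on Machine A. To exercise the transmit path as well, simply swap the roles using the same commands:

On Machine B

iperf -s

On Machine A

iperf -c 192.168.0.2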