Summary of networking issues found with NF testing #44
Google doc with summary of networking issues found with NF testing
Network configuration requirements for Packet

For the initial box-by-box benchmark and comparison we are only interested in the performance of individual VNFs and CNFs, with a focus on data plane performance (throughput) and memory usage (resident set size, RSS). For these tests the data plane network should be as simple as possible, which can be realized by attaching VFs (Virtual Functions) directly to the VNF or CNF being tested. The traffic generator (Pktgen) runs on a separate instance and can be connected via either PFs (Physical Functions) or VFs, depending on the network configuration provided by Packet. Given that the current configuration runs both the data plane and the management/external networks through the same NIC, the connections will likely be based on VFs created from a single port/PF, as the other port will be handling the management and external networks.

Below is a small diagram showing how this implementation can be realized using two Packet instances. Note that the data plane network will need to be configured with a VLAN to act as an L2 connection between instances. The main requirement for this to work is that the necessary flags for SR-IOV are set in the BIOS (it should be possible to configure this via the Packet.net customer portal). There are other configurations for the “System Under Test” instance that involve the use of VPP between the NIC and the VNFs/CNFs, but these only change the software requirements and should not change the requirements for Packet.
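The VF-based attachment described above can be sketched as a couple of host commands. This is a dry-run sketch only: the PF interface name, VF count, and VLAN ID are placeholders, and the real commands require SR-IOV-capable hardware and root privileges, so the script prints the commands rather than executing them.

```shell
#!/bin/sh
# Sketch of VF creation and VLAN tagging for the data plane network.
# PF name, VF count and VLAN ID are assumptions -- adjust per instance.
PF=eth1        # PF carrying the data plane (placeholder)
NUM_VFS=2      # one VF per NF under test (placeholder)
VLAN=1000      # VLAN acting as the L2 link between instances (placeholder)

# Print commands instead of running them (dry run).
run() { echo "+ $*"; }

# Create the VFs on the chosen PF via sysfs.
run "echo $NUM_VFS > /sys/class/net/$PF/device/sriov_numvfs"

# Tag each VF with the data plane VLAN so traffic between instances
# is switched as a plain L2 segment.
i=0
while [ "$i" -lt "$NUM_VFS" ]; do
    run "ip link set $PF vf $i vlan $VLAN"
    i=$((i + 1))
done
```

Dropping the `run` wrapper turns the sketch into the actual setup, assuming SR-IOV is enabled in the BIOS as noted above.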
Network configuration requirements for fd.io

The requirements for fd.io are very similar to those for Packet. The biggest difference is in the connections between instances, as the fd.io CSIT testbeds have NICs dedicated to data plane traffic using point-to-point connections, which removes the need for configuring the data plane network. The diagram below shows the configuration that has been used for benchmarks. By default the testbeds don’t fully support IOMMU. This can be fixed by enabling Intel VT for Directed I/O (VT-d) in the BIOS (listed under Chipset -> North Bridge -> IIO Configuration).
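In addition to the VT-d BIOS setting, the kernel generally also needs the IOMMU enabled on its command line before vfio-pci can use it. A typical GRUB fragment (a sketch, assuming a GRUB-based distro) looks like:

```
# /etc/default/grub -- enable the Intel IOMMU for device passthrough.
# "iommu=pt" keeps host-side DMA in passthrough mode for performance.
GRUB_CMDLINE_LINUX="intel_iommu=on iommu=pt"
```

After editing, regenerate the GRUB config (e.g. `update-grub` on Debian/Ubuntu) and reboot; `dmesg | grep -i dmar` can then confirm the IOMMU is active.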
Networking requirements for VPP/DPDK NFs

Focusing on the “System Under Test” instance, there are several ways it can be configured to support multiple VNFs and CNFs. Examples of how this can be done are shown in the diagram below. Most of these connections have been partially tested on Packet hardware.
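One of the configurations above, VPP switching between the NIC and the NFs, can be sketched with a minimal VPP setup. The PCI address, core list, and interface names below are placeholders, not the actual testbed values:

```
# /etc/vpp/startup.conf (fragment) -- bind VPP to the data plane NIC.
unix { nodaemon cli-listen /run/vpp/cli.sock }
cpu  { main-core 1 corelist-workers 2-3 }   # placeholder cores
dpdk { dev 0000:03:00.0 }                   # placeholder PCI address

# vppctl commands -- create a vhost-user port for an NF and cross-connect
# it to the physical interface (interface names are placeholders).
create vhost-user socket /var/run/vpp/sock0.sock server
set interface state VirtualEthernet0/0/0 up
set interface l2 xconnect TenGigabitEthernet3/0/0 VirtualEthernet0/0/0
```

An L2 cross-connect is the simplest wiring for a single NF; a bridge domain would be used instead when fanning out to multiple NFs.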
Issues seen during deployment and testing

Initial deployments were done on a single “all-in-one” instance, meaning the traffic generator and the NF were running side by side. The data plane network was implemented using the default bridge implementations available in the frameworks used for virtualization: Vagrant (libvirt) for VMs and Docker for containers. Both of these work in a similar fashion, as can be seen in the diagram below. While both of these deployments did work, the amount of traffic that can be handled by these host bridges is very limited, to the point where the VNFs/CNFs would only be utilizing a few percent of their available resources. Variations on these configurations were also tested, e.g. using TCP tunnels between the traffic generator and the NF, but the results were similar to what was observed with the host bridges.

A different “all-in-one” approach was also tested, this one using VPP as the data plane network inside a single instance. The diagram below shows the configuration differences when testing either VNFs or CNFs. The traffic generator is deployed as a VNF in both scenarios, as it currently only supports attaching to PCI devices, which is done through the vhost to “virtio PCI” mapping that happens in the VM. This solution removes the bottleneck that was seen previously with the host bridges.

List of issues with references
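The vhost to “virtio PCI” mapping mentioned above is done on the QEMU side roughly as follows. This is a sketch of the relevant flags only (socket path, IDs, and memory size are placeholders); vhost-user requires the guest memory to be a shared hugepage-backed file, hence the memory-backend lines:

```
# QEMU flags (fragment) wiring a VPP vhost-user socket into the VM
# as a virtio-net PCI device that Pktgen/DPDK can attach to.
-object memory-backend-file,id=mem0,size=2G,mem-path=/dev/hugepages,share=on
-numa node,memdev=mem0
-chardev socket,id=char0,path=/var/run/vpp/sock0.sock
-netdev type=vhost-user,id=net0,chardev=char0
-device virtio-net-pci,netdev=net0
```

Inside the guest the device shows up as an ordinary virtio-net PCI device, which DPDK applications such as Pktgen can then bind to.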
Write up on networking issues with NF testing, including but not limited to: