2 changes: 2 additions & 0 deletions _topic_map.yml
@@ -1555,6 +1555,8 @@ Topics:
File: optimizing-storage
- Name: Optimizing routing
File: routing-optimization
- Name: Optimizing networking
File: optimizing-networking
- Name: What huge pages do and how they are consumed by apps
File: what-huge-pages-do-and-how-they-are-consumed-by-apps
- Name: Performance Addon Operator for low latency nodes
10 changes: 10 additions & 0 deletions modules/ipsec-impact-networking.adoc
@@ -0,0 +1,10 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/optimizing-networking.adoc

[id="ipsec-impact_{context}"]
= Impact of IPsec

Because encrypting and decrypting traffic between node hosts uses CPU power, performance is affected both in throughput and in CPU usage on the nodes when encryption is enabled, regardless of the IP security system being used.

IPsec encrypts traffic at the IP payload level, before it hits the NIC, protecting fields that would otherwise be used for NIC offloading. This means that some NIC acceleration features might not be usable when IPsec is enabled, which leads to decreased throughput and increased CPU usage.
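
For illustration, a minimal sketch of how IPsec encryption is typically enabled, assuming a cluster that uses the OVN-Kubernetes network provider and a {product-title} version that exposes the `ipsecConfig` field in the cluster Network operator configuration:

[source,yaml]
----
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      ipsecConfig: {} # an empty object enables IPsec for pod-to-pod traffic between nodes
----

When IPsec is enabled this way, budget for the additional CPU usage and the reduced effectiveness of NIC offloads described above.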
19 changes: 19 additions & 0 deletions modules/optimizing-mtu-networking.adoc
@@ -0,0 +1,19 @@
// Module included in the following assemblies:
//
// * scalability_and_performance/optimizing-networking.adoc

[id="optimizing-mtu_{context}"]
= Optimizing the MTU for your network

There are two important maximum transmission units (MTUs): the network interface card (NIC) MTU and the cluster network MTU.

The NIC MTU is only configured at the time of {product-title} installation. The MTU must be less than or equal to the maximum supported value of the NIC of your network. If you are optimizing for throughput, choose the largest possible value. If you are optimizing for lowest latency, choose a lower value.

The SDN overlay's MTU must be less than the NIC MTU by 50 bytes at a minimum. This accounts for the SDN overlay header. So, on a normal ethernet network, set this to `1450`. On a jumbo frame ethernet network, set this to `8950`.

And I believe it's minus 100 bytes for OVN. We can also include it as a reference for OVN, since Geneve/OVN is already being discussed in this doc. WDYT?

Contributor Author
@anuragthehatter I applied your suggestion. Can you please review the latest change? Thanks!

@ahardin-rh Thank you for applying the change. I think we can put the line "For OVN and Geneve, the MTU must be less than the NIC MTU by 100 bytes at a minimum." after we finish the SDN discussion, that is, after "On a jumbo frame ethernet network, set this to `8950`." Currently it seems like we are sandwiching the OVN details in between the continued SDN discussion. Will leave it up to you to decide. Thank you. Rest LGTM.

Contributor Author
Thanks! I moved the OVN discussion to the right spot.

/LGTM


For OVN and Geneve, the MTU must be less than the NIC MTU by 100 bytes at a minimum.

[NOTE]
====
This 50-byte overlay header is relevant to the OpenShift SDN. Other SDN solutions might require the value to be more or less.
====
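
For illustration, a minimal sketch of where the cluster network MTU is set, assuming the OpenShift SDN default CNI network provider and a standard 1500 byte NIC MTU; the equivalent field for OVN-Kubernetes lives under `ovnKubernetesConfig`:

[source,yaml]
----
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OpenShiftSDN
    openshiftSDNConfig:
      mtu: 1450 # NIC MTU (1500) minus the 50 byte VXLAN overlay header
----

For OVN-Kubernetes on the same NIC, the corresponding value would be `1400`, reserving 100 bytes for the Geneve header.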
37 changes: 37 additions & 0 deletions scalability_and_performance/optimizing-networking.adoc
@@ -0,0 +1,37 @@
[id="optimizing-networking"]
= Optimizing networking
include::modules/common-attributes.adoc[]
:context: optimizing-networking

toc::[]

The xref:../networking/openshift_sdn/about-openshift-sdn.adoc#about-openshift-sdn[OpenShift SDN] uses Open vSwitch, virtual extensible LAN (VXLAN) tunnels, OpenFlow rules, and iptables. This network can be tuned by using jumbo frames, network interface card (NIC) offloads, multi-queue, and ethtool settings.
Contributor
It probably makes sense to include OVN-Kubernetes as well.

Contributor
However, it uses Geneve / GENEVE instead of VXLAN as the tunnel protocol.


xref:../networking/ovn_kubernetes_network_provider/about-ovn-kubernetes.adoc#about-ovn-kubernetes[OVN-Kubernetes] uses Geneve (Generic Network Virtualization Encapsulation) instead of VXLAN as the tunnel protocol.

VXLAN provides benefits over VLANs, such as an increase in the number of networks from 4096 to over 16 million, and layer 2 connectivity across physical networks. This allows all of the pods behind a service to communicate with each other, even if they are running on different systems.

VXLAN encapsulates all tunneled traffic in user datagram protocol (UDP) packets. However, this leads to increased CPU utilization. Both the outer and inner packets are subject to normal checksumming rules to guarantee that data is not corrupted during transit. Depending on CPU performance, this additional processing overhead can cause a reduction in throughput and increased latency when compared to traditional, non-overlay networks.

Cloud, VM, and bare metal CPUs can be capable of handling much more than one Gbps of network throughput. When using higher bandwidth links, such as 10 or 40 Gbps, reduced performance can occur. This is a known issue in VXLAN-based environments and is not specific to containers or {product-title}. Any network that relies on VXLAN tunnels performs similarly because of the VXLAN implementation.

If you are looking to push beyond one Gbps, you can:

* Evaluate network plug-ins that implement different routing techniques, such as border gateway protocol (BGP).
* Use VXLAN-offload capable network adapters. VXLAN-offload moves the packet checksum calculation and associated CPU overhead off of the system CPU and onto dedicated hardware on the network adapter. This frees up CPU cycles for use by pods and applications, and allows users to utilize the full bandwidth of their network infrastructure.

VXLAN-offload does not reduce latency. However, CPU utilization is reduced even in latency tests.
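
As a hedged illustration of checking whether an adapter supports VXLAN offload (assuming an interface named `eth0`; exact feature names vary by driver), you can inspect the offload features that `ethtool` reports:

[source,terminal]
----
$ ethtool -k eth0 | grep udp_tnl
----

On an adapter with VXLAN offload, features such as `tx-udp_tnl-segmentation` and `tx-udp_tnl-csum-segmentation` are reported as `on`.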

include::modules/optimizing-mtu-networking.adoc[leveloffset=+1]

include::modules/recommended-install-practices.adoc[leveloffset=+1]

include::modules/ipsec-impact-networking.adoc[leveloffset=+1]

.Additional resources

* xref:../installing/installing_aws/installing-aws-network-customizations.adoc#modifying-nwoperator-config-startup_installing-aws-network-customizations[Modifying advanced network configuration parameters]
* xref:../networking/cluster-network-operator.adoc#nw-operator-configuration-parameters-for-ovn-sdn_cluster-network-operator[Configuration parameters for the OVN-Kubernetes default CNI network provider]
* xref:../networking/cluster-network-operator.adoc#nw-operator-configuration-parameters-for-openshift-sdn_cluster-network-operator[Configuration parameters for the OpenShift SDN default CNI network provider]