Bug 1787281, Added Optimizing networking section #26917
Merged
modules/ipsec-impact-networking.adoc (new file):

// Module included in the following assemblies:
//
// * scalability_and_performance/optimizing-networking.adoc

[id="ipsec-impact_{context}"]
= Impact of IPsec

Because encrypting and decrypting traffic on node hosts uses CPU power, enabling encryption affects both throughput and CPU usage on the nodes, regardless of the IP security system being used.

IPsec encrypts traffic at the IP payload level, before it hits the NIC, protecting fields that would otherwise be used for NIC offloading. This means that some NIC acceleration features might not be usable when IPsec is enabled, which leads to decreased throughput and increased CPU usage.
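As an editorial sketch (not part of the module as merged), you can list which offload features a NIC currently advertises to see what IPsec may bypass; the interface name `eth0` is an assumption:

[source,terminal]
----
# Minimal sketch: list the NIC offload features most affected by IPsec.
# Checksum and segmentation offloads rely on header fields that IPsec
# encrypts before the packet reaches the NIC.
$ ethtool --show-offload eth0 | grep -E 'checksum|segmentation'
----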
modules/optimizing-mtu-networking.adoc (new file):

// Module included in the following assemblies:
//
// * scalability_and_performance/optimizing-networking.adoc

[id="optimizing-mtu_{context}"]
= Optimizing the MTU for your network

There are two important maximum transmission units (MTUs): the network interface card (NIC) MTU and the cluster network MTU.

The NIC MTU is only configured at the time of {product-title} installation. The MTU must be less than or equal to the maximum supported value of the NIC on your network. If you are optimizing for throughput, choose the largest possible value. If you are optimizing for lowest latency, choose a lower value.

The SDN overlay's MTU must be less than the NIC MTU by 50 bytes at a minimum. This accounts for the SDN overlay header. So, on a normal Ethernet network, set this to `1450`. On a jumbo frame Ethernet network, set this to `8950`.

For OVN and Geneve, the MTU must be less than the NIC MTU by 100 bytes at a minimum.

[NOTE]
====
This 50-byte overlay header is relevant to the OpenShift SDN. Other SDN solutions might require the value to be more or less.
====
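For illustration only (not part of this PR), the cluster network MTU is specified through the Cluster Network Operator configuration referenced under Additional resources below; a minimal sketch, assuming a jumbo frame network with a NIC MTU of 9000:

[source,yaml]
----
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OpenShiftSDN
    openshiftSDNConfig:
      mtu: 8950 # NIC MTU of 9000 minus the 50-byte VXLAN overlay header
----

For OVN-Kubernetes, the corresponding field is `spec.defaultNetwork.ovnKubernetesConfig.mtu`, which would be `8900` here to leave room for the 100-byte Geneve overhead.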
scalability_and_performance/optimizing-networking.adoc (new file):

[id="optimizing-networking"]
= Optimizing networking
include::modules/common-attributes.adoc[]
:context: optimizing-networking

toc::[]

The xref:../networking/openshift_sdn/about-openshift-sdn.adoc#about-openshift-sdn[OpenShift SDN] uses Open vSwitch, virtual extensible LAN (VXLAN) tunnels, OpenFlow rules, and iptables. This network can be tuned by using jumbo frames, network interface card (NIC) offloads, multi-queue, and ethtool settings, as sketched below.
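For example (an illustrative sketch, not part of the assembly as merged), multi-queue settings can be inspected and adjusted with `ethtool` on a node; the interface name `eth0` and the queue count of 8 are assumptions:

[source,terminal]
----
# Show the current and maximum number of NIC queues (channels).
$ ethtool -l eth0

# Spread interrupt load across 8 combined queues, hardware permitting.
$ ethtool -L eth0 combined 8
----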
xref:../networking/ovn_kubernetes_network_provider/about-ovn-kubernetes.adoc#about-ovn-kubernetes[OVN-Kubernetes] uses Geneve (Generic Network Virtualization Encapsulation) instead of VXLAN as the tunnel protocol.

VXLAN provides benefits over VLANs, such as an increase in networks from 4096 (the 12-bit VLAN ID) to over 16 million (the 24-bit VXLAN network identifier), and layer 2 connectivity across physical networks. This allows all pods behind a service to communicate with each other, even if they are running on different systems.

VXLAN encapsulates all tunneled traffic in user datagram protocol (UDP) packets. However, this leads to increased CPU utilization. Both these outer and inner packets are subject to normal checksumming rules to guarantee that data is not corrupted during transit. Depending on CPU performance, this additional processing overhead can cause a reduction in throughput and increased latency when compared to traditional, non-overlay networks.

Cloud, VM, and bare metal CPU performance can be capable of handling much more than 1 Gbps of network throughput. When using higher bandwidth links, such as 10 or 40 Gbps, reduced performance can occur. This is a known issue in VXLAN-based environments and is not specific to containers or {product-title}. Any network that relies on VXLAN tunnels performs similarly because of the VXLAN implementation.

If you are looking to push beyond 1 Gbps, you can:

* Evaluate network plug-ins that implement different routing techniques, such as border gateway protocol (BGP).
* Use VXLAN-offload capable network adapters. VXLAN-offload moves the packet checksum calculation and associated CPU overhead off of the system CPU and onto dedicated hardware on the network adapter. This frees up CPU cycles for use by pods and applications, and allows users to utilize the full bandwidth of their network infrastructure.

VXLAN-offload does not reduce latency. However, CPU utilization is reduced even in latency tests. A quick way to check for this capability is sketched below.
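To check whether an adapter supports VXLAN-offload, you can look for the UDP tunnel segmentation feature (an illustrative sketch; the interface name `eth0` is an assumption):

[source,terminal]
----
# "on" indicates the NIC can segment and checksum VXLAN-encapsulated
# packets in hardware, offloading that work from the system CPU.
$ ethtool -k eth0 | grep tx-udp_tnl-segmentation
----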
include::modules/optimizing-mtu-networking.adoc[leveloffset=+1]

include::modules/recommended-install-practices.adoc[leveloffset=+1]

include::modules/ipsec-impact-networking.adoc[leveloffset=+1]

.Additional resources

* xref:../installing/installing_aws/installing-aws-network-customizations.adoc#modifying-nwoperator-config-startup_installing-aws-network-customizations[Modifying advanced network configuration parameters]
* xref:../networking/cluster-network-operator.adoc#nw-operator-configuration-parameters-for-ovn-sdn_cluster-network-operator[Configuration parameters for the OVN-Kubernetes default CNI network provider]
* xref:../networking/cluster-network-operator.adoc#nw-operator-configuration-parameters-for-openshift-sdn_cluster-network-operator[Configuration parameters for the OpenShift SDN default CNI network provider]
Review discussion:

And I believe it's minus 100 bytes for OVN. We can also include it as a reference for OVN, since Geneve/OVN is already being discussed in this doc. wdyt
@anuragthehatter I applied your suggestion. Can you please review the latest change? Thanks!
@ahardin-rh Thank you for applying the change. I guess we can put the line "For OVN and Geneve, the MTU must be less than the NIC MTU by 100 bytes at a minimum." after we finish the SDN discussion, I mean after "On a jumbo frame ethernet network, set this to `8950`." Currently it seems like we are sandwiching the OVN details in between the continued SDN discussion. Will leave it up to you to decide. Thank you. Rest LGTM
Thanks! I moved the OVN discussion to the right spot.
/LGTM