From de9bd9838878abb5b839993518f95271c9d5500c Mon Sep 17 00:00:00 2001 From: Ashley Hardin Date: Mon, 6 May 2019 17:26:23 -0400 Subject: [PATCH] Move networking optimization content to installation topic --- _topic_map.yml | 2 - modules/configuring-network-subnets.adoc | 48 ------------------- modules/nw-install-config-parameters.adoc | 2 +- .../network-optimization.adoc | 44 ----------------- 4 files changed, 1 insertion(+), 95 deletions(-) delete mode 100644 modules/configuring-network-subnets.adoc delete mode 100644 scalability_and_performance/network-optimization.adoc diff --git a/_topic_map.yml b/_topic_map.yml index 69a01160da8b..58b698fb2502 100644 --- a/_topic_map.yml +++ b/_topic_map.yml @@ -418,8 +418,6 @@ Topics: File: scaling-cluster-monitoring-operator - Name: Planning your environment according to object limits File: planning-your-environment-according-to-object-limits -- Name: Optimizing networking - File: network-optimization - Name: Optimizing storage File: optimizing-storage - Name: Optimizing routing diff --git a/modules/configuring-network-subnets.adoc b/modules/configuring-network-subnets.adoc deleted file mode 100644 index 2da574b38e84..000000000000 --- a/modules/configuring-network-subnets.adoc +++ /dev/null @@ -1,48 +0,0 @@ -// Module included in the following assemblies: -// -// networking/network-optimization.adoc - -[id="Configuring-network-subnets_{context}"] -= Configuring network subnets - -All Pods are assigned IPs. This enables Pod to Pod communication and Pod to node -communication without network address translation (NAT). For a CIDR in the range -of 10.128.0.0/14, IPs in this range are, by default, assigned to the Pods. If -you want to change this, you must adjust the `ClusterNetwork`. If you want to -customize the IPs rolled out to services, then you must adjust the -`ServiceNetwork`. - -.Procedure - -To configure a custom IP range, also called a _subnet_, complete the following. - -. 
Run: -+ ----- -$ ./openshift-install --dir=new-install create install-config ----- - -. Change to the `new-install` directory: -+ ----- -$ cd new-install ----- - -. Edit the `install-config.yaml` file, setting the required fields under -`networking`: -+ ----- -networking: - clusterNetwork: - - cidr: 10.128.0.0/14 - hostPrefix: 23 - machineCIDR: 10.0.0.0/16 - serviceNetwork: 172.30.0.0/16 - networkType: OpenshiftSDN ----- - -. Consume the customized `install-config.yaml` file to deploy the cluster: -+ ----- -$ ./openshift-install --dir=new-install create cluster ----- diff --git a/modules/nw-install-config-parameters.adoc b/modules/nw-install-config-parameters.adoc index 53578624e863..d17da0e943ba 100644 --- a/modules/nw-install-config-parameters.adoc +++ b/modules/nw-install-config-parameters.adoc @@ -6,7 +6,7 @@ = Network configuration parameters You can modify your cluster network configuration parameters in the -`install-cluster.yaml` configuration file. The following table describes the +`install-config.yaml` configuration file. The following table describes the parameters. [NOTE] diff --git a/scalability_and_performance/network-optimization.adoc b/scalability_and_performance/network-optimization.adoc deleted file mode 100644 index 2688af21a335..000000000000 --- a/scalability_and_performance/network-optimization.adoc +++ /dev/null @@ -1,44 +0,0 @@ -[id="network-optimization"] -= Optimizing networking -include::modules/common-attributes.adoc[] -:context: networking - -toc::[] - -The OpenShift SDN uses Open vSwitch, Virtual Extensible LAN (VXLAN) tunnels, -OpenFlow rules, and iptables. This network can be tuned by using jumbo frames, -network interface cards (NIC) offloads, multi-queue, and ethtool settings. - -VXLAN provides benefits over VLANs, such as an increase in networks from 4096 to -over 16 million, and layer 2 connectivity across physical networks. 
This allows -for all Pods behind a service to communicate with each other, even if they are -running on different systems. - -VXLAN encapsulates all tunneled traffic in user datagram protocol (UDP) packets. -However, this leads to increased CPU utilization. Both these outer- and -inner-packets are subject to normal checksumming rules to guarantee data has not -been corrupted during transit. Depending on CPU performance, this additional -processing overhead can cause a reduction in throughput and increased latency -when compared to traditional, non-overlay networks. - -Cloud, VM, and bare metal CPU performance can be capable of handling much more -than one Gbps network throughput. When using higher bandwidth links such as 10 -or 40 Gbps, reduced performance can occur. This is a known issue in VXLAN-based -environments and is not specific to containers or {product-title}. Any network -that relies on VXLAN tunnels performs similarly because of the VXLAN -implementation. - -If you are looking to push beyond one Gbps, you can: - -* Evaluate network plug-ins that implement different routing techniques, such as -border gateway protocol (BGP). -* Use VXLAN-offload capable network adapters. VXLAN-offload moves the packet -checksum calculation and associated CPU overhead off of the system CPU and onto -dedicated hardware on the network adapter. This frees up CPU cycles for use by -Pods and applications, and allows users to utilize the full bandwidth of their -network infrastructure. - -VXLAN-offload does not reduce latency. However, CPU utilization is reduced even -in latency tests. - -include::modules/configuring-network-subnets.adoc[leveloffset=+1]
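
For context: the second hunk above corrects the filename `install-cluster.yaml` to `install-config.yaml`, and the deleted `configuring-network-subnets.adoc` module documented the `networking` stanza of that same file. A minimal sketch of that stanza, reusing the illustrative CIDR values from the removed module (note that the removed file spelled the network type `OpenshiftSDN`; the installer expects `OpenShiftSDN`, and `serviceNetwork` is a list in the install-config schema):

```yaml
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14     # Pod IP range; Pods are assigned addresses from this CIDR
    hostPrefix: 23          # size of the per-node subnet carved out of clusterNetwork
  machineCIDR: 10.0.0.0/16  # IP range for the cluster machines (nodes)
  serviceNetwork:
  - 172.30.0.0/16           # IP range from which service IPs are assigned
  networkType: OpenShiftSDN
```

This is an illustrative fragment only, not part of the patch; the authoritative parameter descriptions live in the `nw-install-config-parameters.adoc` module that this patch edits.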