From 85354bcfb0d61303423c8d260e478ddbea4f679f Mon Sep 17 00:00:00 2001
From: John Wilkins
Date: Wed, 18 Sep 2024 15:01:53 -0700
Subject: [PATCH] Cherry pick for HCIDOCS-348-4.12

Signed-off-by: John Wilkins
---
 modules/ipi-install-network-requirements.adoc | 53 ++++++++++++++++---
 1 file changed, 46 insertions(+), 7 deletions(-)

diff --git a/modules/ipi-install-network-requirements.adoc b/modules/ipi-install-network-requirements.adoc
index 3564052e324c..669fca62c83a 100644
--- a/modules/ipi-install-network-requirements.adoc
+++ b/modules/ipi-install-network-requirements.adoc
@@ -6,10 +6,47 @@
 [id="network-requirements_{context}"]
 = Network requirements
 
-Installer-provisioned installation of {product-title} involves several network requirements. First, installer-provisioned installation involves an optional non-routable `provisioning` network for provisioning the operating system on each bare metal node. Second, installer-provisioned installation involves a routable `baremetal` network.
+Installer-provisioned installation of {product-title} involves multiple network requirements. First, installer-provisioned installation involves an optional non-routable `provisioning` network for provisioning the operating system on each bare-metal node. Second, installer-provisioned installation involves a routable `baremetal` network.
 
 image::210_OpenShift_Baremetal_IPI_Deployment_updates_0122_2.png[Installer-provisioned networking]
 
+[id="network-requirements-ensuring-required-ports-are-open_{context}"]
+== Ensuring required ports are open
+
+Certain ports must be open between cluster nodes for installer-provisioned installations to complete successfully. In some situations, such as when you use separate subnets for far edge worker nodes, you must ensure that the nodes in these subnets can communicate with nodes in the other subnets on the following required ports.
+
+.Required ports
+[options="header"]
+|====
+|Port|Description
+
+| `67`,`68` | When using a provisioning network, cluster nodes access the `dnsmasq` DHCP server over their provisioning network interfaces using ports `67` and `68`.
+
+| `69` | When using a provisioning network, cluster nodes communicate with the TFTP server on port `69` using their provisioning network interfaces. The TFTP server runs on the bootstrap VM. The bootstrap VM runs on the provisioner node.
+
+| `80` | When not using the image caching option or when using virtual media, the provisioner node must have port `80` open on the `baremetal` machine network interface to stream the {op-system-first} image from the provisioner node to the cluster nodes.
+
+| `123` | The cluster nodes must access the NTP server on port `123` using the `baremetal` machine network.
+
+| `5050` | The Ironic Inspector API runs on the control plane nodes and listens on port `5050`. The Inspector API is responsible for hardware introspection, which collects information about the hardware characteristics of the bare-metal nodes.
+
+| `5051` | Port `5050` uses port `5051` as a proxy.
+
+| `6180` | When deploying with virtual media and not using TLS, the provisioner node and the control plane nodes must have port `6180` open on the `baremetal` machine network interface so that the baseboard management controller (BMC) of the worker nodes can access the {op-system} image. Starting with {product-title} 4.13, the default HTTP port is `6180`.
+
+| `6183` | When deploying with virtual media and using TLS, the provisioner node and the control plane nodes must have port `6183` open on the `baremetal` machine network interface so that the BMC of the worker nodes can access the {op-system} image.
+
+| `6385` | The Ironic API server runs initially on the bootstrap VM and later on the control plane nodes and listens on port `6385`. The Ironic API allows clients to interact with Ironic for bare-metal node provisioning and management, including operations such as enrolling new nodes, managing their power state, deploying images, and cleaning the hardware.
+
+| `6388` | Port `6385` uses port `6388` as a proxy.
+
+| `8080` | When using image caching without TLS, port `8080` must be open on the provisioner node and accessible by the BMC interfaces of the cluster nodes.
+
+| `8083` | When using the image caching option with TLS, port `8083` must be open on the provisioner node and accessible by the BMC interfaces of the cluster nodes.
+
+| `9999` | By default, the Ironic Python Agent (IPA) listens on TCP port `9999` for API calls from the Ironic conductor service. Communication between the bare-metal node where IPA is running and the Ironic conductor service uses this port.
+
+|====
 
 [id="network-requirements-increase-mtu_{context}"]
 == Increase the network MTU
 
@@ -53,6 +90,8 @@ test-cluster.example.com
 {product-title} includes functionality that uses cluster membership information to generate A/AAAA records. This resolves the node names to their IP addresses. After the nodes are registered with the API, the cluster can disperse node information without using CoreDNS-mDNS. This eliminates the network traffic associated with multicast DNS.
 
+CoreDNS requires both TCP and UDP connections to the upstream DNS server to function correctly. Ensure the upstream DNS server can receive both TCP and UDP connections from {product-title} cluster nodes.
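+
+For example, you can check name resolution over both protocols from a cluster node. The following commands are an illustrative check only, where `<upstream_dns_ip>` is a placeholder for your upstream DNS server:
+
+[source,terminal]
+----
+$ dig +short @<upstream_dns_ip> api.<cluster_name>.<base_domain> <1>
+$ dig +tcp +short @<upstream_dns_ip> api.<cluster_name>.<base_domain> <2>
+----
+<1> Sends the query over UDP, the default transport.
+<2> The `+tcp` option forces the same query over TCP.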
+
 In {product-title} deployments, DNS name resolution is required for the following components:
 
 * The Kubernetes API
@@ -97,7 +136,7 @@ Network administrators must reserve IP addresses for each node in the {product-t
 [id="network-requirements-reserving-ip-addresses_{context}"]
 == Reserving IP addresses for nodes with the DHCP server
 
-For the `baremetal` network, a network administrator must reserve a number of IP addresses, including:
+For the `baremetal` network, a network administrator must reserve several IP addresses, including:
 
 . Two unique virtual IP addresses.
 +
@@ -111,7 +150,7 @@ For the `baremetal` network, a network administrator must reserve a number of IP
 [IMPORTANT]
 .Reserving IP addresses so they become static IP addresses
 ====
-Some administrators prefer to use static IP addresses so that each node's IP address remains constant in the absence of a DHCP server. To configure static IP addresses with NMState, see "(Optional) Configuring host network interfaces" in the "Setting up the environment for an OpenShift installation" section.
+Some administrators prefer to use static IP addresses so that each node's IP address remains constant in the absence of a DHCP server. To configure static IP addresses with NMState, see "(Optional) Configuring node network interfaces" in the "Setting up the environment for an OpenShift installation" section.
 ====
 
 [IMPORTANT]
 ====
 External load balancing services and the control plane nodes must run on the same network.
 
 The storage interface requires a DHCP reservation or a static IP.
 ====
 
-The following table provides an exemplary embodiment of fully qualified domain names. The API and Nameserver addresses begin with canonical name extensions. The hostnames of the control plane and worker nodes are exemplary, so you can use any host naming convention you prefer.
+The following table provides examples of fully qualified domain names. The API and name server addresses begin with canonical name extensions. The hostnames of the control plane and worker nodes are examples, so you can use any host naming convention you prefer.
 
 [width="100%", cols="3,5,2", options="header"]
 |=====
@@ -143,7 +182,7 @@ The following table provides an exemplary embodiment of fully qualified domain n
 [NOTE]
 ====
-If you do not create DHCP reservations, the installer requires reverse DNS resolution to set the hostnames for the Kubernetes API node, the provisioner node, the control plane nodes, and the worker nodes.
+If you do not create DHCP reservations, the installation program requires reverse DNS resolution to set the hostnames for the Kubernetes API node, the provisioner node, the control plane nodes, and the worker nodes.
 ====
 
 [id="network-requirements-provisioner_{context}"]
 == Provisioner node
@@ -156,7 +195,7 @@ The provisioner node requires layer 2 connectivity for network booting, DHCP and
 [id="network-requirements-ntp_{context}"]
 == Network Time Protocol (NTP)
 
-Each {product-title} node in the cluster must have access to an NTP server. {product-title} nodes use NTP to synchronize their clocks. For example, cluster nodes use SSL certificates that require validation, which might fail if the date and time between the nodes are not in sync.
+Each {product-title} node in the cluster must have access to an NTP server. {product-title} nodes use NTP to synchronize their clocks. For example, cluster nodes use SSL/TLS certificates that require validation, which might fail if the date and time between the nodes are not in sync.
 
 [IMPORTANT]
 ====
@@ -168,4 +207,4 @@ You can reconfigure the control plane nodes to act as NTP servers on disconnecte
 [id="network-requirements-out-of-band_{context}"]
 == Port access for the out-of-band management IP address
 
-The out-of-band management IP address is on a separate network from the node. To ensure that the out-of-band management can communicate with the provisioner during installation, the out-of-band management IP address must be granted access to port `80` on the bootstrap host and port `6180` on the {product-title} control plane hosts. TLS port `6183` is required for virtual media installation, for example, via Redfish.
+The out-of-band management IP address is on a separate network from the node. To ensure that the out-of-band management can communicate with the provisioner node during installation, the out-of-band management IP address must be granted access to port `6180` on the provisioner node and on the {product-title} control plane nodes. TLS port `6183` is required for virtual media installation, for example, by using Redfish.
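+
+For example, from a host on the out-of-band management network, you can verify that these ports are reachable. The following commands are an illustrative check only, where `<provisioner_ip>` is a placeholder for the address of the provisioner node:
+
+[source,terminal]
+----
+$ nc -zv <provisioner_ip> 6180 <1>
+$ nc -zv <provisioner_ip> 6183 <2>
+----
+<1> Tests the HTTP virtual media port.
+<2> Tests the TLS virtual media port.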