release_notes/ocp-4-18-release-notes.adoc: 43 changes (39 additions & 4 deletions)
@@ -554,7 +554,7 @@ For more information about configuring machine types, see xref:../installing/ins

[id="ocp-4-18-installation-and-update-gcp-byo-vpc-phz_{context}"]
==== Provide your own private hosted zone when installing a cluster on {gcp-full}
With this release, you can provide your own private hosted zone when installing a cluster on {gcp-short} into a shared VPC. If you do, the requirements for the bring your own (BYO) zone are that the zone must use a DNS name such as `<cluster_name>.<base_domain>.` and that you bind the zone to the VPC network of the cluster.

For more information, see xref:../installing/installing_gcp/installing-gcp-shared-vpc.adoc#installation-gcp-shared-vpc-prerequisites_installing-gcp-shared-vpc[Prerequisites for installing a cluster on GCP into a shared VPC] and xref:../installing/installing_gcp/installing-gcp-user-infra-vpc.adoc#prerequisites[Prerequisites for installing a cluster into a shared VPC on GCP using Deployment Manager templates].
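
For example, a private zone that meets these requirements could be created in the host project with a `gcloud` command similar to the following sketch; the zone name, project ID, and network URL are placeholders for your environment:

[source,terminal]
----
$ gcloud dns managed-zones create <zone_name> \
    --project <host_project_id> \
    --description "BYO private zone for <cluster_name>" \
    --dns-name "<cluster_name>.<base_domain>." \
    --visibility private \
    --networks <shared_vpc_network_url>
----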

@@ -572,12 +572,20 @@ You can now deploy single-stack IPv6 clusters on {rh-openstack}.
You must configure {rh-openstack} prior to deploying your {product-title} cluster. For more information, see xref:../installing/installing_openstack/installing-openstack-installer-custom.adoc#installation-configuring-shiftstack-single-ipv6_installing-openstack-installer-custom[Configuring a cluster with single-stack IPv6 networking].
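
As one illustration only, single-stack IPv6 implies IPv6 CIDRs throughout the `networking` stanza of the `install-config.yaml` file. The following sketch uses example prefixes that you would replace with ranges from your {rh-openstack} environment:

[source,yaml]
----
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: fd01::/48
    hostPrefix: 64
  serviceNetwork:
  - fd02::/112
  machineNetwork:
  - cidr: fd2e:6f44:5dd8:c956::/64
----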

[id="ocp-4-18-installation-and-update-nutanix-multiple-nics_{context}"]
==== Installing a cluster on Nutanix with multiple subnets
With this release, you can install an {product-title} cluster on Nutanix with more than one subnet for the Prism Element into which you deploy the cluster.

For more information, see xref:../installing/installing_nutanix/installing-nutanix-installer-provisioned.adoc#installation-configuring-nutanix-failure-domains_installing-nutanix-installer-provisioned[Configuring failure domains] and xref:../installing/installing_nutanix/installation-config-parameters-nutanix.adoc#installation-configuration-parameters-additional-nutanix_installation-config-parameters-nutanix[Additional Nutanix configuration parameters].
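
A minimal sketch of what this might look like in the `install-config.yaml` file, assuming the `platform.nutanix.subnetUUIDs` list parameter and placeholder UUID values:

[source,yaml]
----
platform:
  nutanix:
    subnetUUIDs:
    - <subnet_1_uuid>
    - <subnet_2_uuid>
----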

For an existing Nutanix cluster, you can add multiple subnets by using xref:../machine_management/creating_machinesets/creating-machineset-nutanix.adoc#machineset-yaml-nutanix_creating-machineset-nutanix[compute] or xref:../machine_management/control_plane_machine_management/cpmso_provider_configurations/cpmso-config-options-nutanix.adoc#cpmso-yaml-provider-spec-nutanix_cpmso-config-options-nutanix[control plane] machine sets.
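
For illustration, the `subnets` list in the provider specification of a compute machine set might reference multiple subnets as in the following excerpt, where the UUID values are placeholders:

[source,yaml]
----
providerSpec:
  value:
    subnets:
    - type: uuid
      uuid: <subnet_1_uuid>
    - type: uuid
      uuid: <subnet_2_uuid>
----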

[id="ocp-4-18-installation-and-update-vsphere-multiple-nics_{context}"]
==== Installing a cluster on {vmw-full} with multiple network interface controllers (Technology Preview)
With this release, you can install a {vmw-full} cluster with multiple network interface controllers (NICs) per node.

For more information, see xref:../installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-network-customizations.adoc#installation-vsphere-multiple-nics_installing-vsphere-installer-provisioned-network-customizations[Configuring multiple NICs].
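
A minimal sketch of the idea in the `install-config.yaml` file, assuming that multiple port groups can be listed under the `networks` field of a failure domain topology; the values are placeholders:

[source,yaml]
----
platform:
  vsphere:
    failureDomains:
    - name: <failure_domain_name>
      topology:
        networks:
        - <vm_network_1>
        - <vm_network_2>
----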

For an existing {vmw-short} cluster, you can add multiple subnets by using xref:../machine_management/creating_machinesets/creating-machineset-vsphere.adoc#machineset-vsphere-multiple-nics_creating-machineset-vsphere[compute machine sets].
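
For an existing cluster, the equivalent idea in a compute machine set provider specification might resemble the following excerpt, which lists two port groups under `network.devices` with placeholder names:

[source,yaml]
----
providerSpec:
  value:
    network:
      devices:
      - networkName: <vm_network_1>
      - networkName: <vm_network_2>
----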

[id="ocp-release-notes-agent-5-node-control-plane_{context}"]
==== Configuring 4 and 5 node control planes with the Agent-based Installer
@@ -1922,6 +1930,11 @@ In the following tables, features are marked with the following statuses:
|Not Available
|Not Available
|General Availability

|Installing a cluster on {vmw-full} with multiple network interface controllers
|Not Available
|Not Available
|Technology Preview
|====

[discrete]
@@ -1999,6 +2012,11 @@ In the following tables, features are marked with the following statuses:
|Removed
|Removed

|Adding multiple subnets to an existing {vmw-full} cluster by using compute machine sets
|Not Available
|Not Available
|Technology Preview

|====

[discrete]
@@ -2416,6 +2434,23 @@ In the following tables, features are marked with the following statuses:

* A regression in the behavior of `libreswan` caused some nodes with IPsec enabled to lose communication with pods on other nodes in the same cluster. To resolve this issue, consider disabling IPsec for your cluster. (link:https://issues.redhat.com/browse/OCPBUGS-43713[*OCPBUGS-43713*])
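+
If you choose to disable IPsec, the change is made in the cluster `Network` operator configuration (`networks.operator.openshift.io/cluster`); the following excerpt is a minimal sketch of the relevant fields, not a complete object:
+
[source,yaml]
----
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    ovnKubernetesConfig:
      ipsecConfig:
        mode: Disabled
----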

* There is a known issue in {product-title} version 4.18 that prevents configuring multiple subnets in the failure domain of a Nutanix cluster during installation.
There is no workaround for this issue.
(link:https://issues.redhat.com/browse/OCPBUGS-49885[*OCPBUGS-49885*])

* The following known issues exist for configuring multiple subnets for an existing Nutanix cluster by using a control plane machine set:
+
--
** Adding subnets above the existing subnet in the `subnets` stanza causes a control plane node to become stuck in the `Deleting` state.
As a workaround, add new subnets only below the existing subnet in the `subnets` stanza, as shown in the sketch after this list.

** Sometimes, after adding a subnet, the updated control plane machines appear in the Nutanix console but the {product-title} cluster is unreachable.
There is no workaround for this issue.
--
+
These issues occur on clusters that use a control plane machine set to configure subnets, regardless of whether the subnets are specified in a failure domain or in the provider specification.
(link:https://issues.redhat.com/browse/OCPBUGS-50904[*OCPBUGS-50904*])
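+
For example, a safe edit to the `subnets` stanza in the control plane machine set provider specification might look like the following sketch, where the second entry is the newly added subnet and the UUID values are placeholders:
+
[source,yaml]
----
subnets:
- type: uuid
  uuid: <existing_subnet_uuid>
- type: uuid
  uuid: <new_subnet_uuid>
----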

* There is a known issue with {op-base-system} 8 worker nodes that use `cgroupv1` Linux Control Groups (cgroup). The following is an example of the error message displayed for impacted nodes: `UDN are not supported on the node ip-10-0-51-120.us-east-2.compute.internal as it uses cgroup v1.` As a workaround, users should migrate worker nodes from `cgroupv1` to `cgroupv2`. (link:https://issues.redhat.com/browse/OCPBUGS-49933[*OCPBUGS-49933*])
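+
A minimal sketch of that migration, assuming the cluster-scoped `Node` configuration object (`nodes.config.openshift.io/cluster`) is edited to switch the cgroup mode:
+
[source,yaml]
----
apiVersion: config.openshift.io/v1
kind: Node
metadata:
  name: cluster
spec:
  cgroupMode: "v2"
----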

* The current PTP grandmaster clock (T-GM) implementation has a single National Marine Electronics Association (NMEA) sentence generator sourced from the GNSS without a backup NMEA sentence generator. If NMEA sentences are lost before reaching the E810 NIC, the T-GM cannot synchronize the devices in the network synchronization chain and the PTP Operator reports an error. A proposed fix is to report a `FREERUN` event when the NMEA string is lost. Until this limitation is addressed, T-GM does not support PTP clock holdover state. (link:https://issues.redhat.com/browse/OCPBUGS-19838[*OCPBUGS-19838*])