release_notes/ocp-4-12-release-notes.adoc (34 additions, 0 deletions)
[id="ocp-4-12-installation-and-upgrade"]
=== Installation and upgrade

[id="nutanix-machine-api-post-installation-configuration"]
==== Assisted Installer SaaS provides platform integration support for Nutanix
{ai-full} SaaS on link:https://console.redhat.com[console.redhat.com] supports installing {product-title} on the Nutanix platform with Machine API integration by using either the {ai-full} user interface or the REST API. The integration enables Nutanix Prism users to manage their infrastructure from a single interface and enables auto-scaling. A few additional installation steps are required to enable Nutanix integration with {ai-full} SaaS. See the {ai-full} documentation for details.

[id="ocp-4-12-aws-load-balancer-customization"]
==== Specify the load balancer type in AWS during installation
Beginning with {product-title} {product-version}, you can specify either Network Load Balancer (NLB) or Classic as the persistent load balancer type in AWS during installation. Afterward, if an Ingress Controller is deleted, the load balancer type that you configured with `lbType` during installation persists.
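
A minimal sketch of the relevant `install-config.yaml` stanza, assuming hypothetical values for the base domain, cluster name, and region:

[source,yaml]
----
# Sketch only: baseDomain, metadata.name, and region are placeholder values.
apiVersion: v1
baseDomain: example.com
metadata:
  name: example-cluster
platform:
  aws:
    region: us-east-1
    lbType: NLB # or Classic; this choice persists if an Ingress Controller is later deleted
----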
[id="ocp-4-12-networking"]
=== Networking

[id="support-for-dual-stack-addressing-for-the-API-VIP-and-Ingress-VIP"]
==== Support for dual-stack addressing for the API VIP and Ingress VIP
{ai-full} supports installing {product-title} 4.12 and later versions with dual-stack networking for the API VIP and Ingress VIP on bare metal only. This support introduces two new configuration settings, `api_vips` and `ingress_vips`, each of which takes a list of IP addresses. The legacy settings, `api_vip` and `ingress_vip`, must also be set in {product-title} 4.12; however, because they each take only one IP address, you must set them to the IPv4 address when configuring dual-stack networking for the API VIP and Ingress VIP.

When using dual-stack networking, the API VIP address and the Ingress VIP address must be of the primary IP address family. Currently, Red Hat does not support dual-stack VIPs or dual-stack networking with IPv6 as the primary IP address family; dual-stack networking with IPv4 as the primary IP address family is supported. Therefore, you must place the IPv4 entries before the IPv6 entries. See the {ai-full} documentation for details.
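
As an illustration, a hedged sketch of the new and legacy settings together, assuming the list form accepts one address per IP family and using placeholder addresses; the IPv4 entries come first:

[source,yaml]
----
# Placeholder addresses for illustration only.
api_vip: 192.0.2.10 # legacy setting: a single IPv4 address
ingress_vip: 192.0.2.11
api_vips: # IPv4 entry must precede the IPv6 entry
- 192.0.2.10
- 2001:db8::10
ingress_vips:
- 192.0.2.11
- 2001:db8::11
----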

[id="ocp-4-12-redhat-openshift-networking"]
==== Red Hat OpenShift Networking

Any registry on this list is not required to have an entry in the pull secret used for the spoke cluster installation.

For more information, see xref:../scalability_and_performance/ztp_far_edge/ztp-preparing-the-hub-cluster.adoc#ztp-configuring-the-hub-cluster-to-use-unauthenticated-registries_ztp-preparing-the-hub-cluster[Configuring the hub cluster to use unauthenticated registries].

[id="ocp-4-12-0-ironic-agent-image-ztp"]
==== Ironic agent mirroring in disconnected GitOps ZTP installations

For disconnected installations that use GitOps ZTP, if you are deploying {product-title} version 4.11 or earlier to a spoke cluster with converged flow enabled, you must mirror the default Ironic agent images to the local image repository. The default Ironic agent images are the following:

* AMD64 Ironic agent image: `quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3f1d4d3cd5fbcf1b9249dd71d01be4b901d337fdc5f8f66569eb71df4d9d446`

* AArch64 Ironic agent image: `quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb0edf19fffc17f542a7efae76939b1e9757dc75782d4727fb0aa77ed5809b43`
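
For example, a hedged sketch of mirroring the AMD64 image with `oc image mirror`, assuming a hypothetical local registry at `local.registry:5000`:

[source,terminal]
----
$ oc image mirror \
  quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3f1d4d3cd5fbcf1b9249dd71d01be4b901d337fdc5f8f66569eb71df4d9d446 \
  local.registry:5000/openshift-release-dev/ocp-v4.0-art-dev
----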

For more information about mirroring images, see xref:../installing/disconnected_install/installing-mirroring-installation-images.adoc#installation-mirror-repository_installing-mirroring-installation-images[Mirroring the OpenShift Container Platform image repository].

[id="configuring-kernel-arguments-for-the-Discovery-ISO-by-using-GitOps-ZTP"]
==== Configuring kernel arguments for the Discovery ISO by using GitOps ZTP
{product-title} now supports specifying kernel arguments for the Discovery ISO in GitOps ZTP deployments. In both manual and automated GitOps ZTP deployments, the Discovery ISO is part of the {product-title} installation process on managed bare-metal hosts. You can now edit the `InfraEnv` resource to specify kernel arguments for the Discovery ISO. This is useful for cluster installations with specific environmental requirements. For example, you can define the `rd.net.timeout.carrier` kernel argument to help configure the cluster for static networking.
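
For example, a hedged sketch of an `InfraEnv` resource that appends the argument, with placeholder names:

[source,yaml]
----
apiVersion: agent-install.openshift.io/v1beta1
kind: InfraEnv
metadata:
  name: example-infraenv # placeholder name
  namespace: example-namespace
spec:
  kernelArguments:
  - operation: append # add the argument to the default kernel arguments
    value: rd.net.timeout.carrier=30
----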

For more information about how to specify kernel arguments, see xref:../scalability_and_performance/ztp_far_edge/ztp-deploying-far-edge-sites.adoc#setting-managed-bare-metal-host-kernel-arguments_ztp-deploying-far-edge-sites[Configuring kernel arguments for the Discovery ISO by using GitOps ZTP] and xref:../scalability_and_performance/ztp_far_edge/ztp-manual-install.adoc#setting-managed-bare-metal-host-kernel-arguments_ztp-manual-install[Configuring kernel arguments for the Discovery ISO for manual installations by using GitOps ZTP].

[id="ocp-4-12-1-assisted-installer-api-mixed-arch-clusters"]
==== Deploy heterogeneous spoke clusters from a hub cluster

With this update, you can create {product-title} mixed-architecture clusters, also known as heterogeneous clusters, that feature hosts with both AMD64 and AArch64 CPU architectures. You can deploy a heterogeneous spoke cluster from a hub cluster managed by Red Hat Advanced Cluster Management (RHACM). To create a heterogeneous spoke cluster, add an AArch64 worker node to a deployed AMD64 cluster.

To add an AArch64 worker node to a deployed AMD64 cluster, you can specify the AArch64 architecture, the multi-architecture release image, and the operating system required for the node by using an `InfraEnv` custom resource (CR). You can then provision the AArch64 worker node to the AMD64 cluster by using the {ai-full} API and the `InfraEnv` CR.
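
For illustration, a hedged sketch of such an `InfraEnv` CR, with placeholder names and assuming the `cpuArchitecture` field selects the node architecture:

[source,yaml]
----
apiVersion: agent-install.openshift.io/v1beta1
kind: InfraEnv
metadata:
  name: example-cluster-arm64 # placeholder name
  namespace: example-cluster
spec:
  clusterRef:
    name: example-cluster # the deployed AMD64 cluster
    namespace: example-cluster
  cpuArchitecture: arm64 # provision AArch64 (arm64) worker nodes
  pullSecretRef:
    name: example-pull-secret # placeholder pull secret
----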

[id="ocp-4-12-insights-operator"]
=== Insights Operator
