40 changes: 39 additions & 1 deletion release_notes/ocp-4-14-release-notes.adoc
@@ -1065,6 +1065,13 @@ With this release, control plane machine sets are supported for Nutanix clusters

For more information, see xref:../machine_management/control_plane_machine_management/cpmso-getting-started.adoc#cpmso-getting-started[Getting started with the Control Plane Machine Set Operator].

[id="ocp-4-14-mapi-cpms-shiftstack-support"]
==== Support for control plane machine sets on {rh-openstack} clusters

With this release, control plane machine sets are supported for clusters that run on {rh-openstack}.

For more information, see xref:../machine_management/control_plane_machine_management/cpmso-getting-started.adoc#cpmso-getting-started[Getting started with the Control Plane Machine Set Operator].
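For illustration, an abridged `ControlPlaneMachineSet` custom resource might look like the following sketch; the `providerSpec` contents are platform-specific and elided here, and the labels shown are the standard machine-role selectors:

[source,yaml]
----
apiVersion: machine.openshift.io/v1
kind: ControlPlaneMachineSet
metadata:
  name: cluster
  namespace: openshift-machine-api
spec:
  replicas: 3
  state: Active
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machine-role: master
      machine.openshift.io/cluster-api-machine-type: master
  template:
    machineType: machines_v1beta1_machine_openshift_io
    machines_v1beta1_machine_openshift_io:
      metadata:
        labels:
          machine.openshift.io/cluster-api-machine-role: master
          machine.openshift.io/cluster-api-machine-type: master
      spec:
        providerSpec: {} # platform-specific machine spec for {rh-openstack}, elided
----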

[id="ocp-4-14-mapi-aws-placement-groups"]
==== Support for assigning AWS machines to placement groups

@@ -1231,6 +1238,22 @@ For further information about controlling pod C-states, see xref:../scalability_
==== Support for provisioning IPv6 spoke clusters from dual-stack hub clusters
With this update, you can provision single-stack IPv6 spoke clusters from dual-stack hub clusters. In a zero touch provisioning (ZTP) environment, the HTTP server on the hub cluster that hosts the boot ISO now listens on both IPv4 and IPv6 networks. The provisioning service also checks the baseboard management controller (BMC) address scheme on the target spoke cluster and provides a matching URL for the installation media.
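The address-family matching described above can be sketched in a few lines. This is a conceptual illustration only, not the actual provisioning service code; the helper name and hub listener addresses are hypothetical placeholders:

```python
import ipaddress
from urllib.parse import urlparse

# Placeholder listener addresses for the hub's HTTP server; the real
# provisioning service discovers these from the hub configuration.
HUB_V4 = "http://192.0.2.10"
HUB_V6 = "http://[fd2e:6f44:5dd8::10]"

def boot_iso_url(bmc_address: str, iso_path: str = "/images/boot.iso") -> str:
    """Pick the boot ISO URL whose IP family matches the BMC address."""
    host = urlparse(bmc_address).hostname or bmc_address
    # Strip brackets in case a bare IPv6 literal was passed without a scheme.
    addr = ipaddress.ip_address(host.strip("[]"))
    return (HUB_V6 if addr.version == 6 else HUB_V4) + iso_path
```

An IPv6 BMC address such as `redfish://[fd00::5]/redfish/v1` would thus be offered the IPv6 URL, while an IPv4 address gets the IPv4 one.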

[id="ocp-4-14-nw-shiftstack-dual-stack"]
==== Support for dual-stack networking for {rh-openstack} clusters (Technology Preview)

Dual-stack network configuration is now available for clusters that run on {rh-openstack}. You can configure dual-stack networking during the deployment of a cluster on installer-provisioned infrastructure.

For more information, see xref:../installing/installing_openstack/installing-openstack-installer-custom.adoc#install-osp-dualstack_installing-openstack-installer-custom[Configuring a cluster with dual-stack networking].
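As a sketch, the dual-stack portion of an `install-config.yaml` might resemble the following; all CIDR values here are example placeholders, with the IPv4 entry listed first as the primary family:

[source,yaml]
----
networking:
  networkType: OVNKubernetes
  machineNetwork:
  - cidr: 192.168.25.0/24
  - cidr: fd2e:6f44:5dd8:c956::/64
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  - cidr: fd01::/48
    hostPrefix: 64
  serviceNetwork:
  - 172.30.0.0/16
  - fd02::/112
----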

[id="ocp-4-14-nw-shiftstack-manage-security-groups"]
==== Security group management for {rh-openstack} clusters

In {product-title} 4.14, security for clusters that run on {rh-openstack} is enhanced. By default, the OpenStack cloud provider now sets the `manage-security-groups` option for load balancers to `true`, ensuring that only node ports that are required for cluster operation are open. Previously, security groups for both compute and control plane machines were configured to open a wide range of node ports for all incoming traffic.

You can opt to use the previous configuration by setting the `manage-security-groups` option to `false` in the load balancer configuration and ensuring that the security group rules permit traffic from `0.0.0.0/0` on node ports 30000 through 32767.
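Assuming the INI-style `cloud.conf` format that the OpenStack cloud provider consumes, the opt-out might be expressed as follows (a sketch, not a complete configuration):

[source,ini]
----
[LoadBalancer]
# Revert to the pre-4.14 behavior: do not let the cloud provider
# manage security group rules for load balancer node ports.
manage-security-groups = false
----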

For clusters that are upgraded to 4.14, you must manually remove permissive security group rules that open the deployment to all traffic. For example, remove a rule that permits traffic from `0.0.0.0/0` on node ports 30000 through 32767.
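One way to locate and delete such a rule with the `openstack` CLI; the security group name and rule ID below are placeholders:

[source,terminal]
----
$ openstack security group rule list <security-group-name>
$ openstack security group rule delete <rule-id>
----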

[id="ocp-custom-crs-with-pgt-ztp"]
==== Using custom CRs with PolicyGenTemplate CRs in the {ztp-first} pipeline

@@ -2640,6 +2663,17 @@ In the following tables, features are marked with the following statuses:
[cols="4,1,1,1",options="header"]
|====
|Feature |4.12 |4.13 |4.14

|External load balancers with installer-provisioned infrastructure
|Not Available
|Technology Preview
|General Availability

|Dual-stack networking with installer-provisioned infrastructure
|Not Available
|Not Available
|Technology Preview

|====


@@ -2942,7 +2976,11 @@ It is anticipated that an upcoming z-stream release will include a fix for this

* Creating pods with Microsoft Azure File NFS volumes that are scheduled to the control plane node causes the mount to be denied.
+
To work around this issue: if your control plane nodes are schedulable and the pods can run on worker nodes, use `nodeSelector` or affinity to schedule the pods on worker nodes. (link:https://issues.redhat.com/browse/OCPBUGS-18581[*OCPBUGS-18581*])
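+
A minimal pod spec sketch for that workaround, pinning the pod to worker nodes; the pod name, image, and claim name are placeholders:
+
[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: azure-file-nfs-app
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: "" # schedule only on worker nodes
  containers:
  - name: app
    image: registry.example.com/app:latest
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: azure-file-nfs-pvc
----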

* For clusters that run on {rh-openstack} 17.1 and use network function virtualization (NFV), a known issue in {rh-openstack} prevents successful cluster deployment. There is no workaround for this issue. Contact Red Hat Support to request a hotfix. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2228643[*BZ#2228643*])

* There is no support for Kuryr installations on {rh-openstack} 17.1.

[id="ocp-4-14-asynchronous-errata-updates"]
== Asynchronous errata updates