19 changes: 18 additions & 1 deletion release_notes/ocp-4-6-release-notes.adoc
@@ -71,7 +71,7 @@ If you are creating machine configs for day 1 or day 2 operations that use Ignition ...
[id="ocp-4-6-additional-steps-to-add-nodes-to-clusters"]
==== Additional steps to add nodes to existing clusters

For clusters that have been upgraded to {product-title} 4.6, you can add more nodes to your {product-title} cluster. These instructions are only applicable if you originally installed a cluster prior to {product-title} 4.6 and have since upgraded to 4.6.

If you installed a user-provisioned cluster on bare metal or vSphere, you must ensure that your boot media or OVA image matches the version that your cluster was upgraded to. Additionally, your Ignition configuration file must be modified to be spec v3 compatible. For more detailed instructions and an example Ignition config file, see the link:https://access.redhat.com/solutions/5514051[Adding new nodes to UPI cluster fails after upgrading to OpenShift 4.6+] Knowledgebase Solution article.
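
For orientation only, the following is a minimal sketch of what a spec v3 (Ignition 3.1.0) config for an added worker node might look like. It is an assumption for illustration, not the example from the linked Knowledgebase Solution: the machine config server address is a placeholder, and production files typically also embed the cluster's CA certificate, so follow the linked article for the authoritative steps.

----
{
  "ignition": {
    "version": "3.1.0",
    "config": {
      "merge": [
        {
          "source": "https://<machine_config_server>:22623/config/worker"
        }
      ]
    }
  }
}
----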

@@ -2421,6 +2421,23 @@ This caused image pulls to fail against the inaccessible private registry, ...

* Currently, a Kubernetes port collision issue can cause a breakdown in pod-to-pod communication, even after pods are redeployed. For detailed information and a workaround, see the Red Hat Knowledge Base solution link:https://access.redhat.com/solutions/5940711[Port collisions between pod and cluster IPs on OpenShift 4 with OVN-Kubernetes]. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1939676[*BZ#1939676*], link:https://bugzilla.redhat.com/show_bug.cgi?id=1939045[*BZ#1939045*])

[id="ocp-4-6-devdocs-3418"]
* There is currently a known issue: by default, build pods do not pull images from the `registry.redhat.io`, `registry.access.redhat.com`, and `quay.io` registries when an image is specified by a short name. This happens because the link:https://github.com/openshift/builder/commit/32f5b57382cedf4329039752f0e15a14b7f98366[pull request] for link:https://bugzilla.redhat.com/show_bug.cgi?id=1826183[BZ#1826183] introduced a default `registries.conf` file for buildah that does not include those Red Hat registries in its default search list. As a result, the Red Hat registries are not included in searches for image references.
+
Workaround: If you have `<tbd>` privileges, update `registries.conf` with the following lines:
+
----
tbd
----

Contributor:
Before you ask, @rolfedh :-) I am not an SME when it comes to updating `registries.conf` on a cluster (builds just consume this).

I see this doc link from the RHCOS / MCO section, https://docs.openshift.com/container-platform/4.9/architecture/architecture-rhcos.html, references `registries.conf`.

But I think this link, https://docs.openshift.com/container-platform/4.9/openshift_images/image-configuration.html#images-configuration-shortname_image-configuration, has the instructions for setting them up.

That doc again mentions that the MCO watches that config object, so I would consider them the SMEs for that.

That said, I think for the workaround you can just cite https://docs.openshift.com/container-platform/4.9/openshift_images/image-configuration.html#images-configuration-shortname_image-configuration as the instructions for configuring things so those Red Hat registries are included.

Contributor:
The short name feature is brand new and only applies to CRI-O, so we shouldn't reference it as a known issue in a prior release.

Builds do not support this because we couldn't guarantee that buildah sent the right credentials to the right container registry. The workaround is "use fully qualified names in image pull specifications."

Contributor Author:
Thanks, @adambkaplan and @gabemontero. I'll update the known issue to reflect your comments. In BZ 2011293, the Version is 4.7. Should it be 4.9?
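
For illustration only, here is a minimal sketch of the kind of lines the `tbd` block above might contain, based on the reviewers' pointers rather than on a confirmed fix: in a version 2 `registries.conf`, the short-name search list is set with `unqualified-search-registries`. As noted above, on a cluster this file is managed through the image configuration and the MCO rather than edited by hand.

----
# Hypothetical sketch only; not the confirmed workaround for this issue.
# Adds the Red Hat registries to the short-name search list.
unqualified-search-registries = ["registry.access.redhat.com", "registry.redhat.io", "quay.io", "docker.io"]
----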
+
Otherwise, if you cannot update `registries.conf`, include the `registry.redhat.io`, `registry.access.redhat.com`, or `quay.io` registry name when you specify an image, so that the image reference is fully qualified. For example, in the `<filename>` file, specify an image like this:
+
----
<example of fully qualified image name>
----

Contributor:
registry.redhat.io/ubi8/ubi:latest and registry.access.redhat.com/rhel7.7:latest are examples of fully qualified image refs.

If you do the configuration above to add those registries to `registries.conf`, then you can have builds with Dockerfiles that have

FROM rhel7.7:latest

or

FROM ubi8/ubi:latest
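
As a concrete illustration of the reviewer's examples above, here is a minimal Dockerfile sketch that uses a fully qualified image reference instead of a short name; the choice of base image is illustrative, not prescribed by this issue.

----
# Fully qualified reference: registry host, namespace, repository, and tag
# are all explicit, so no registry search list is needed to resolve it.
FROM registry.redhat.io/ubi8/ubi:latest
----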
+
(link:https://bugzilla.redhat.com/show_bug.cgi?id=2011293[*BZ#2011293*])

[id="ocp-4-6-asynchronous-errata-updates"]
== Asynchronous errata updates
