RHDEVDOCS-3418 Create known issue for 2011293 #37846
@@ -71,7 +71,7 @@ If you are creating machine configs for day 1 or day 2 operations that use Ignit

[id="ocp-4-6-additional-steps-to-add-nodes-to-clusters"]
==== Additional steps to add nodes to existing clusters

For clusters that have been upgraded to {product-title} 4.6, you can add more nodes to your {product-title} cluster. These instructions are only applicable if you originally installed a cluster prior to {product-title} 4.6 and have since upgraded to 4.6.

If you installed a user-provisioned cluster on bare metal or vSphere, you must ensure that your boot media or OVA image matches the version that your cluster was upgraded to. Additionally, your Ignition configuration file must be modified to be spec v3 compatible. For more detailed instructions and an example Ignition config file, see the link:https://access.redhat.com/solutions/5514051[Adding new nodes to UPI cluster fails after upgrading to OpenShift 4.6+] Knowledgebase Solution article.
@@ -2421,6 +2421,23 @@ This caused image pulls to fail against the inaccessible private registry, norma

* Currently, a Kubernetes port collision issue can cause a breakdown in pod-to-pod communication, even after pods are redeployed. For detailed information and a workaround, see the Red Hat Knowledge Base solution link:https://access.redhat.com/solutions/5940711[Port collisions between pod and cluster IPs on OpenShift 4 with OVN-Kubernetes]. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1939676[*BZ#1939676*], link:https://bugzilla.redhat.com/show_bug.cgi?id=1939045[*BZ#1939045*])

[id="ocp-4-6-devdocs-3418"]
* There is currently a known issue: by default, build pods do not pull images from the following Red Hat registries: `registry.redhat.io`, `registry.access.redhat.com`, and `quay.io`. This happens because the link:https://github.com/openshift/builder/commit/32f5b57382cedf4329039752f0e15a14b7f98366[pull request] for link:https://bugzilla.redhat.com/show_bug.cgi?id=1826183[BZ#1826183] introduced a default `registries.conf` file for buildah that does not include those Red Hat registries in the default search list. As a result, the Red Hat registries are not included in searches for image references.
+
Workaround: If you have `<tbd>` privileges, update `registries.conf` with the following lines:
+
----
tbd
----
+
Otherwise, if you cannot update `registries.conf`, include the `registry.redhat.io`, `registry.access.redhat.com`, or `quay.io` registry name when you specify an image. For example, in the `<filename>` file, specify an image like this:
+
----
<example of fully qualified image name>
----
Contributor
If you do the configuration above to add those registries to `registries.conf`, then you can have builds whose Dockerfiles reference images by short name instead of by a fully qualified name.
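For illustration only (the image names below are hypothetical, not the snippets originally attached to this comment), such short-name references in a Dockerfile would look roughly like this:

----
# Short-name references (hypothetical examples); these resolve only if the
# Red Hat registries are in the registries.conf search list:
FROM ubi8/ubi
# or
FROM ubi8/nodejs-14
----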
+
(link:https://bugzilla.redhat.com/show_bug.cgi?id=2011293[*BZ#2011293*])
[id="ocp-4-6-asynchronous-errata-updates"]
== Asynchronous errata updates
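A note on the `tbd` placeholder in the workaround above: the following is an illustrative sketch only, not the author's final wording. In the containers `registries.conf` version 2 format, the short-name search list is the `unqualified-search-registries` setting, so the added lines would presumably resemble:

----
# Illustrative sketch of /etc/containers/registries.conf (v2 format).
# Adds the Red Hat registries to the short-name search list:
unqualified-search-registries = ["registry.redhat.io", "registry.access.redhat.com", "quay.io"]
----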
Before you ask, @rolfedh :-) I am not an SME when it comes to updating registries.conf on a cluster (builds just consume it).
I see that this doc link from the RHCOS / MCO section, https://docs.openshift.com/container-platform/4.9/architecture/architecture-rhcos.html, references registries.conf.
But I think this link, https://docs.openshift.com/container-platform/4.9/openshift_images/image-configuration.html#images-configuration-shortname_image-configuration, has the instructions for setting them up.
That doc also mentions that the MCO watches that config object, so I would consider them the SMEs for that.
That said, for the workaround I think you can just cite https://docs.openshift.com/container-platform/4.9/openshift_images/image-configuration.html#images-configuration-shortname_image-configuration as the instructions for configuring things so those Red Hat registries are included.
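As a rough illustration of what that linked image-configuration section covers (an assumption, so verify the exact field names against that document), the cluster-wide short-name search registries are configured on the `image.config.openshift.io/cluster` resource, along these lines:

----
# Sketch only; confirm field names against the image-configuration doc.
apiVersion: config.openshift.io/v1
kind: Image
metadata:
  name: cluster
spec:
  registrySources:
    # Registries searched when an image is referenced by a short name
    containerRuntimeSearchRegistries:
    - registry.redhat.io
    - registry.access.redhat.com
    - quay.io
----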
The short name feature is brand new and only applies to cri-o - we shouldn't reference it as a known issue in a prior release.
Builds do not support this because we couldn't guarantee that buildah sent the right credentials to the right container registry. The work-around is "use fully qualified names in image pull specifications."
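To illustrate that workaround (a hypothetical excerpt, not taken from this discussion): a fully qualified pull spec names the registry host explicitly, for example in a `BuildConfig` that uses the Docker strategy:

----
# Hypothetical BuildConfig excerpt; the image and repository names are examples only.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: example-build
spec:
  source:
    git:
      uri: https://github.com/example/repo.git
  strategy:
    type: Docker
    dockerStrategy:
      from:
        kind: DockerImage
        # Fully qualified: registry host + namespace + repository + tag
        name: registry.redhat.io/ubi8/ubi:latest
----

The same principle applies to `FROM` lines in a Dockerfile: `registry.redhat.io/ubi8/ubi:latest` rather than `ubi8/ubi`.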
Thanks, @adambkaplan and @gabemontero. I'll update the known issue to reflect your comments. In BZ 2011293, the Version is 4.7. Should it be 4.9?