OCPBUGS-84528: clarify pull secret in-place update behavior and add CP watches#8327
enxebre wants to merge 2 commits into openshift:main
Conversation
Pipeline controller notification: For optional jobs, comment … This repository is configured in LGTM mode.
@enxebre: This pull request explicitly references no jira issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the relevant setting.
📝 Walkthrough
The pull request introduces comprehensive documentation and controller instrumentation for pull secret cycling in HyperShift. Changes include documentation of how …
🚥 Pre-merge checks: ✅ 11 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (11 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 1
🧹 Nitpick comments (2)
docs/content/how-to/common/global-pull-secret.md (2)
106-107: Clarify the pull secret propagation path terminology.
The documentation alternates between "HostedControlPlane pull secret" and `HostedCluster.spec.pullSecret`, which may confuse readers about the actual data flow. Consider adding a brief clarification early in this section explaining the path:
- `HostedCluster.spec.pullSecret` (field referencing a Secret in the management cluster)
- Secret reconciled into the HostedControlPlane namespace by the HyperShift operator
- HCCO copies that Secret's data into `kube-system/original-pull-secret` in the data plane

This would help readers understand that "HostedControlPlane pull secret" and "the Secret referenced by `HostedCluster.spec.pullSecret`" have the same content but exist at different points in the reconciliation chain.

Also applies to: 112-112, 134-134
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/content/how-to/common/global-pull-secret.md` around lines 106 - 107, The doc alternates terms and needs an explicit clarification of the pull-secret propagation path: add a short paragraph early in this section that states the chain as: the field HostedCluster.spec.pullSecret (a reference to a Secret in the management cluster) → the Secret reconciled into the HostedControlPlane namespace by the HyperShift operator → HCCO copies that Secret's data into kube-system/original-pull-secret in the data plane (and note that “HostedControlPlane pull secret” and the Secret referenced by HostedCluster.spec.pullSecret contain the same content but live in different namespaces/steps); reference HCCO, HostedControlPlane namespace, HostedCluster.spec.pullSecret, kube-system/original-pull-secret, additional-pull-secret and global-pull-secret so readers can map the terminology used later in lines ~106, ~112 and ~134.
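For readers following the propagation chain discussed in the comment above, here is a minimal controller-runtime sketch of the copy step. It is illustrative only, not the PR's actual HCCO code: the function name `copyOriginalPullSecret`, the two-client setup, and the create-or-update handling are assumptions.

```go
package globalps

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// copyOriginalPullSecret (hypothetical name) mirrors the HostedControlPlane
// pull secret, itself reconciled from HostedCluster.spec.pullSecret, into
// kube-system/original-pull-secret in the data plane, so the DaemonSet can
// mount it on eligible nodes.
func copyOriginalPullSecret(ctx context.Context, cpClient, guestClient client.Client, hcpNamespace string) error {
	// 1. Read the control-plane copy of the pull secret.
	src := &corev1.Secret{}
	if err := cpClient.Get(ctx, types.NamespacedName{Namespace: hcpNamespace, Name: "pull-secret"}, src); err != nil {
		return fmt.Errorf("failed to get control-plane pull secret: %w", err)
	}

	// 2. Upsert kube-system/original-pull-secret in the guest cluster with the same data.
	dst := &corev1.Secret{ObjectMeta: metav1.ObjectMeta{Namespace: "kube-system", Name: "original-pull-secret"}}
	existing := &corev1.Secret{}
	err := guestClient.Get(ctx, client.ObjectKeyFromObject(dst), existing)
	switch {
	case apierrors.IsNotFound(err):
		dst.Type = corev1.SecretTypeDockerConfigJson
		dst.Data = src.Data
		return guestClient.Create(ctx, dst)
	case err != nil:
		return err
	default:
		existing.Data = src.Data
		return guestClient.Update(ctx, existing)
	}
}
```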
335-335: Minor phrasing ambiguity in MCD conflict explanation.
Line 335 states "without colliding with MCD on InPlace upgrade paths", which might be read as referring to InPlace paths on Replace workers. Consider rephrasing for clarity:
"so the DaemonSet can run there without the MCD conflicts that would occur on InPlace workers"
This makes it clearer that the conflict avoidance is about not running on InPlace workers, rather than about Replace workers having some relationship to InPlace upgrade paths.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/content/how-to/common/global-pull-secret.md` at line 335, Replace the ambiguous phrase "without colliding with MCD on InPlace upgrade paths" (near the sentence mentioning hypershift.openshift.io/nodepool-globalps-enabled=true) with a clearer wording such as "so the DaemonSet can run there without the MCD conflicts that would occur on InPlace workers" to make explicit that conflict avoidance refers to not running on InPlace workers; update the sentence in the same paragraph to use that phrasing.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@api/hypershift/v1beta1/hostedcluster_types.go`:
- Around line 682-683: The API comment contains a commit-pinned GitHub URL to
the Hypershift docs (the link with SHA 3e0e06f7...), which will drift as docs
change; update the comment in hostedcluster_types.go to point to a non-pinned
docs location instead — either the published docs path or a branch-relative URL
(e.g., the docs path under the repository root or a /tree/main or /blob/main
link) so the CRD/godoc reference stays current; replace the SHA-pinned URL in
the comment block with the chosen stable URL.
---
Nitpick comments:
In `@docs/content/how-to/common/global-pull-secret.md`:
- Around line 106-107: The doc alternates terms and needs an explicit
clarification of the pull-secret propagation path: add a short paragraph early
in this section that states the chain as: the field
HostedCluster.spec.pullSecret (a reference to a Secret in the management
cluster) → the Secret reconciled into the HostedControlPlane namespace by the
HyperShift operator → HCCO copies that Secret's data into
kube-system/original-pull-secret in the data plane (and note that
“HostedControlPlane pull secret” and the Secret referenced by
HostedCluster.spec.pullSecret contain the same content but live in different
namespaces/steps); reference HCCO, HostedControlPlane namespace,
HostedCluster.spec.pullSecret, kube-system/original-pull-secret,
additional-pull-secret and global-pull-secret so readers can map the terminology
used later in lines ~106, ~112 and ~134.
- Line 335: Replace the ambiguous phrase "without colliding with MCD on InPlace
upgrade paths" (near the sentence mentioning
hypershift.openshift.io/nodepool-globalps-enabled=true) with a clearer wording
such as "so the DaemonSet can run there without the MCD conflicts that would
occur on InPlace workers" to make explicit that conflict avoidance refers to not
running on InPlace workers; update the sentence in the same paragraph to use
that phrasing.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited)
Review profile: CHILL
Plan: Enterprise
Run ID: b724835d-8ccc-4f33-bd5d-703b48df6202
⛔ Files ignored due to path filters (19)
- api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/{AAA_ungated, AutoNodeKarpenter, ClusterUpdateAcceptRisks, ClusterVersionOperatorConfiguration, ExternalOIDC, ExternalOIDCWithUIDAndExtraClaimMappings, ExternalOIDCWithUpstreamParity, GCPPlatform, HCPEtcdBackup, HyperShiftOnlyDynamicResourceAllocation, ImageStreamImportMode, KMSEncryptionProvider, OpenStack}.yaml (excluded by `!**/zz_generated.featuregated-crd-manifests/**`)
- cmd/install/assets/crds/hypershift-operator/zz_generated.crd-manifests/hostedclusters-Hypershift-{CustomNoUpgrade, Default, TechPreviewNoUpgrade}.crd.yaml (excluded by `!**/zz_generated.crd-manifests/**`, `!cmd/install/assets/**/*.yaml`)
- docs/content/reference/aggregated-docs.md (excluded by `!docs/content/reference/aggregated-docs.md`)
- docs/content/reference/api.md (excluded by `!docs/content/reference/api.md`)
- vendor/github.com/openshift/hypershift/api/hypershift/v1beta1/hostedcluster_types.go (excluded by `!vendor/**`, `!**/vendor/**`)
📒 Files selected for processing (5)
- AGENTS.md
- api/hypershift/v1beta1/hostedcluster_types.go
- control-plane-operator/hostedclusterconfigoperator/controllers/globalps/setup.go
- control-plane-operator/hostedclusterconfigoperator/controllers/resources/resources.go
- docs/content/how-to/common/global-pull-secret.md
// See https://github.com/openshift/hypershift/blob/3e0e06f78460ac1100dd8432dd7eee4e980a87d0/docs/content/how-to/common/global-pull-secret.md
// for details.
Avoid a commit-pinned docs link in the API comment.
Hardcoding a blob URL to a specific SHA means the CRD/godoc description will drift as the docs evolve or get backported. Prefer the published docs path or a branch-relative reference.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@api/hypershift/v1beta1/hostedcluster_types.go` around lines 682 - 683, The
API comment contains a commit-pinned GitHub URL to the Hypershift docs (the link
with SHA 3e0e06f7...), which will drift as docs change; update the comment in
hostedcluster_types.go to point to a non-pinned docs location instead — either
the published docs path or a branch-relative URL (e.g., the docs path under the
repository root or a /tree/main or /blob/main link) so the CRD/godoc reference
stays current; replace the SHA-pinned URL in the comment block with the chosen
stable URL.
@jparrill, thanks for confirming! Here's the fix to replace the commit-pinned URL with a stable main-branch reference:
- Replace the SHA-pinned blob URL in the `PullSecret` godoc with a `/blob/main/` URL
that's intentional so the behaviour description is versioned. If you point to a floating link, a consumer reading this for version N will see the description of whatever the latest behaviour is (if that link still exists), which doesn't necessarily apply
Seems like the humans are having a chat. I'll hop back into my burrow for now. If you need me again, just tag @coderabbitai in a new comment, and I'll come hopping out!
eb4f4ec to 0c991ce (Compare)
0c991ce to 88bfa0c (Compare)
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/content/how-to/common/global-pull-secret.md`:
- Around line 139-143: The docs text says that deleting additional-pull-secret
causes HCCO to remove global-pull-secret, revert to syncing only
original-pull-secret, and that the DaemonSet continues to run and keeps
/var/lib/kubelet/config.json aligned; update the architecture diagram to match
by removing the "Remove DaemonSet" cleanup path (or change it to a "DaemonSet
remains" state), and ensure arrows/labels show reversion to original-pull-secret
and retention of the DaemonSet rather than deletion.
- Around line 104-109: Update the paragraph describing HCCO watches to
explicitly list both Secret watch sources: the data-plane `kube-system` Secrets
(e.g., `kube-system/original-pull-secret`) and the
HostedControlPlane/HostedCluster namespace `pull-secret` (the
HostedCluster.spec.pullSecret source) so readers know HCCO watches both the
management-cluster Secret and the in-place HCP pull-secret for prompt updates;
keep references to `global-pull-secret`, `additional-pull-secret`, and the
DaemonSet sync behavior unchanged while adding the explicit mention that
reconciliation watches the HCP namespace `pull-secret` in addition to
`kube-system`.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited)
Review profile: CHILL
Plan: Enterprise
Run ID: 5bbc574d-9eca-4a3e-a997-901b97d9f287
⛔ Files ignored due to path filters (19)
📒 Files selected for processing (5)
- AGENTS.md
- api/hypershift/v1beta1/hostedcluster_types.go
- control-plane-operator/hostedclusterconfigoperator/controllers/globalps/setup.go
- control-plane-operator/hostedclusterconfigoperator/controllers/resources/resources.go
- docs/content/how-to/common/global-pull-secret.md
✅ Files skipped from review due to trivial changes (2)
- control-plane-operator/hostedclusterconfigoperator/controllers/resources/resources.go
- AGENTS.md
🚧 Files skipped from review as they are similar to previous changes (1)
- api/hypershift/v1beta1/hostedcluster_types.go
Actionable comments posted: 1
♻️ Duplicate comments (2)
docs/content/how-to/common/global-pull-secret.md (2)
104-109: ⚠️ Potential issue | 🟡 Minor
Document both watch sources for completeness.
Line 105 mentions only `kube-system` namespace watches, but the PR objectives state that a control-plane pull-secret watch was added to the globalps HCCO controller. To give readers the complete picture of how in-place Secret updates trigger reconciliation, please mention both:
- the data-plane `kube-system` Secrets, and
- the control-plane pull-secret (the `HostedCluster.spec.pullSecret` source in the HCP namespace).

This aligns with the "pull secret cycling" model where in-place management-cluster Secret data updates flow through HCCO reconciliation into the guest cluster.
📝 Suggested doc update
 ### Automatic detection and baseline sync
-- The Hosted Cluster Config Operator (HCCO) continuously reconciles Global Pull Secret resources and watches Secrets in the `kube-system` namespace of the data plane.
+- The Hosted Cluster Config Operator (HCCO) continuously reconciles Global Pull Secret resources and watches:
+  - Secrets in the `kube-system` namespace of the data plane, and
+  - the control-plane pull-secret Secret in the HostedControlPlane namespace, ensuring in-place pull-secret updates trigger prompt reconciliation.
 - On every reconcile, HCCO copies the HostedControlPlane pull secret (sourced from **`HostedCluster.spec.pullSecret`**) into `kube-system/original-pull-secret` so the DaemonSet can mount it on the node.

Based on learnings: Treat HostedCluster spec.pullSecret, management-cluster Secret data, HCCO reconciliation, and optional Global Pull Secret as one system when changing worker authentication to registries.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/content/how-to/common/global-pull-secret.md` around lines 104 - 109, Update the paragraph under "Automatic detection and baseline sync" to mention that HCCO watches both the data-plane kube-system Secrets and the control-plane HostedCluster pull-secret (HostedCluster.spec.pullSecret / HostedControlPlane pull secret in the HCP namespace), so clarify that reconciliation is triggered by in-place updates to either the kube-system secrets or the control-plane pull-secret and that HCCO copies the HostedControlPlane pull secret into kube-system/original-pull-secret, removes or merges the global-pull-secret depending on the presence of additional-pull-secret, and then the DaemonSet syncs the resulting secret into /var/lib/kubelet/config.json on eligible nodes.
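The AI prompt above also notes that HCCO "removes or merges the global-pull-secret depending on the presence of additional-pull-secret". A rough, self-contained Go sketch of what merging two `.dockerconfigjson` payloads could look like follows; the type and function names are hypothetical, and the conflict policy (additional wins) is an assumption rather than a statement of what the controller actually does.

```go
package globalps

import (
	"encoding/json"
	"fmt"
)

// dockerConfigJSON mirrors the ".dockerconfigjson" payload layout.
type dockerConfigJSON struct {
	Auths map[string]json.RawMessage `json:"auths"`
}

// mergePullSecrets sketches the "merge" branch: entries from the user-provided
// additional pull secret are laid over the original pull secret, and the result
// becomes the global pull secret. On registry conflicts the additional secret
// wins in this sketch; the real controller's conflict policy may differ.
func mergePullSecrets(original, additional []byte) ([]byte, error) {
	var orig, add dockerConfigJSON
	if err := json.Unmarshal(original, &orig); err != nil {
		return nil, fmt.Errorf("parsing original pull secret: %w", err)
	}
	if err := json.Unmarshal(additional, &add); err != nil {
		return nil, fmt.Errorf("parsing additional pull secret: %w", err)
	}
	if orig.Auths == nil {
		orig.Auths = map[string]json.RawMessage{}
	}
	for registry, auth := range add.Auths {
		orig.Auths[registry] = auth
	}
	return json.Marshal(orig)
}
```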
301-302: ⚠️ Potential issue | 🟡 Minor
Update diagram to match documented cleanup behavior.
Line 302 shows the cleanup process removing the DaemonSet, but lines 139-143 clearly state: "The DaemonSet continues to run on eligible nodes and keeps `/var/lib/kubelet/config.json` aligned with that HostedCluster pull secret." The code (context snippet 1) confirms that the DaemonSet is reconciled even when `additional-pull-secret` doesn't exist—it simply syncs `original-pull-secret` instead of the merged `global-pull-secret`. Please update the diagram to reflect that cleanup deletes `global-pull-secret` but the DaemonSet remains active and reverts to syncing only `original-pull-secret`.
🎨 Suggested diagram fix
 GlobalPSController --> |If additional PS deleted| CleanupProcess[Cleanup Process]
 CleanupProcess --> |Deletes global PS| DeleteGlobalPS[Delete global-pull-secret]
- CleanupProcess --> |Removes DaemonSet| RemoveDaemonSet[Remove DaemonSet]
+ CleanupProcess --> |DaemonSet remains active| RevertToOriginal[Revert to syncing original-pull-secret only]
Verify each finding against the current code and only fix it if needed. In `@docs/content/how-to/common/global-pull-secret.md` around lines 301 - 302, The diagram incorrectly shows CleanupProcess removing the DaemonSet; update it so CleanupProcess only deletes the global-pull-secret (DeleteGlobalPS/Delete global-pull-secret) while the DaemonSet (RemoveDaemonSet node should be removed or changed) remains active and continues reconciling by syncing the original-pull-secret when additional-pull-secret is absent. Specifically, remove the arrow/statement that deletes the DaemonSet (RemoveDaemonSet) and instead add/label a connection indicating the DaemonSet stays running and reverts to syncing original-pull-secret (not global-pull-secret) when additional-pull-secret is not present.
🧹 Nitpick comments (2)
docs/content/how-to/common/global-pull-secret.md (2)
333-336: Minor wording improvement for positive selection explanation.
Lines 333-336 accurately describe the positive-selection model, but "not NodeAffinity on an InPlace label" could be clearer. Consider rephrasing to emphasize it's an opt-in approach where only labeled nodes are eligible, rather than defining it by what it's not.
✏️ Suggested rewording
-### How scheduling avoids InPlace conflicts
+### How node scheduling works (positive selection)
-Eligibility is **positive selection**, not NodeAffinity on an InPlace label: InPlace workers simply **never** receive `hypershift.openshift.io/nodepool-globalps-enabled=true`, so the sync DaemonSet does not place pods on them. Replace workers on AWS/Azure **do** receive the label so the DaemonSet can run there without colliding with MCD on InPlace upgrade paths.
+The DaemonSet uses **positive selection** via `nodeSelector`: only nodes labeled `hypershift.openshift.io/nodepool-globalps-enabled=true` are eligible to run sync pods. InPlace workers **never** receive this label (avoiding conflicts with Machine Config Daemon), while AWS/Azure Replace workers **do** receive it (enabling kubelet config sync via the DaemonSet).
Verify each finding against the current code and only fix it if needed. In `@docs/content/how-to/common/global-pull-secret.md` around lines 333 - 336, Rephrase the sentence about positive selection to make it explicit that scheduling is opt-in: replace the fragment "not NodeAffinity on an InPlace label" with wording that states only nodes carrying the hypershift.openshift.io/nodepool-globalps-enabled=true label are eligible (i.e., the sync DaemonSet only targets labeled nodes), and clarify that InPlace workers never receive that label while Replace workers on AWS/Azure do so to avoid colliding with the MCD daemonset.
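To make the positive-selection model concrete, here is a hedged Go sketch of a DaemonSet whose pod template selects only labeled nodes. The DaemonSet name, image, and the label constant are placeholders; only the `hypershift.openshift.io/nodepool-globalps-enabled=true` label comes from the discussion above.

```go
package globalps

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Hypothetical constant; the real label key is defined in the HyperShift API package.
const globalPSEnabledLabel = "hypershift.openshift.io/nodepool-globalps-enabled"

// syncDaemonSet sketches the positive-selection scheduling model: the pod
// template carries a nodeSelector on the eligibility label, so the DaemonSet
// only ever lands on nodes that were explicitly labeled (AWS/Azure Replace
// NodePools). InPlace workers never get the label and are therefore never
// scheduled, which avoids fighting with the Machine Config Daemon.
func syncDaemonSet() *appsv1.DaemonSet {
	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "global-pull-secret-syncer", Namespace: "kube-system"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": "global-pull-secret-syncer"}},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": "global-pull-secret-syncer"}},
				Spec: corev1.PodSpec{
					// Positive selection: only labeled nodes are eligible.
					NodeSelector: map[string]string{globalPSEnabledLabel: "true"},
					Containers: []corev1.Container{{
						Name:  "sync-global-pullsecret",
						Image: "example.invalid/sync-global-pullsecret:latest", // placeholder image
					}},
				},
			},
		},
	}
}
```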
274-277: Clarify conditional Secret reading in diagram.
Lines 276-277 show arrows suggesting both `global-pull-secret` and `original-pull-secret` are read concurrently, but the actual behavior (documented at lines 134-135) is: read `global-pull-secret` when present, otherwise read `original-pull-secret`.
💡 Suggested diagram improvement
Modify the flow to show the choice more clearly:
SyncCommand --> |Reads mounted files| SyncController[sync-global-pullsecret loop]
SyncController --> |Check presence| CheckGlobalPS{global-pull-secret exists?}
CheckGlobalPS --> |Yes| ReadGlobalPS[Read global-pull-secret mount]
CheckGlobalPS --> |No| ReadOriginalPS[Read original-pull-secret mount]

Or add labels to existing arrows to indicate the conditional nature:
SyncController --> |Reads if present| ReadGlobalPS[Read global-pull-secret mount]
SyncController --> |Reads as fallback| ReadOriginalPS[Read original-pull-secret mount]
Verify each finding against the current code and only fix it if needed. In `@docs/content/how-to/common/global-pull-secret.md` around lines 274 - 277, The diagram incorrectly implies concurrent reads of global-pull-secret and original-pull-secret; update the flow so SyncController (sync-global-pullsecret loop) performs a conditional check: add a decision node (e.g., CheckGlobalPS{global-pull-secret exists?}) between SyncController and the read nodes, routing Yes -> ReadGlobalPS[Read global-pull-secret mount] and No -> ReadOriginalPS[Read original-pull-secret mount]; alternatively, if you prefer minimal change, keep SyncController -> ReadGlobalPS and SyncController -> ReadOriginalPS but change the arrow labels to "Reads if present" and "Reads as fallback" respectively to clearly indicate the conditional/fallback behavior.
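A small Go sketch of the conditional read/fallback behavior the diagram should convey, written as plain file operations. The mount paths are hypothetical; only the `/var/lib/kubelet/config.json` target and the global-then-original fallback order come from the docs being reviewed.

```go
package syncglobalps

import (
	"bytes"
	"errors"
	"io/fs"
	"os"
)

const (
	// Hypothetical mount paths; the real paths are defined by the DaemonSet manifest.
	globalPSMount   = "/etc/global-pull-secret/.dockerconfigjson"
	originalPSMount = "/etc/original-pull-secret/.dockerconfigjson"
	kubeletConfig   = "/var/lib/kubelet/config.json"
)

// desiredPullSecret implements the conditional/fallback read: use
// global-pull-secret when it is mounted and non-empty, otherwise fall back to
// original-pull-secret.
func desiredPullSecret() ([]byte, error) {
	if data, err := os.ReadFile(globalPSMount); err == nil && len(data) > 0 {
		return data, nil
	} else if err != nil && !errors.Is(err, fs.ErrNotExist) {
		return nil, err
	}
	return os.ReadFile(originalPSMount)
}

// syncOnce writes the desired content to the kubelet config only when it drifted.
func syncOnce() error {
	want, err := desiredPullSecret()
	if err != nil {
		return err
	}
	have, err := os.ReadFile(kubeletConfig)
	if err != nil && !errors.Is(err, fs.ErrNotExist) {
		return err
	}
	if bytes.Equal(have, want) {
		return nil // already in sync
	}
	return os.WriteFile(kubeletConfig, want, 0o600)
}
```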
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/content/how-to/common/global-pull-secret.md`:
- Line 215: The "Node eligibility assessment" bullet incorrectly says "Labels
nodes from InPlace NodePools"—update this to reflect the actual behavior:
InPlace NodePools are intentionally NOT labeled with
hypershift.openshift.io/nodepool-globalps-enabled=true and only AWS/Azure
Replace NodePool nodes receive that label; change the text under "Node
eligibility assessment" to state that InPlace nodes are excluded from labeling
and that the DaemonSet scheduling restrictions apply only to nodes labeled
hypershift.openshift.io/nodepool-globalps-enabled=true (i.e., Replace NodePool
nodes).
---
Duplicate comments:
In `@docs/content/how-to/common/global-pull-secret.md`:
- Around line 104-109: Update the paragraph under "Automatic detection and
baseline sync" to mention that HCCO watches both the data-plane kube-system
Secrets and the control-plane HostedCluster pull-secret
(HostedCluster.spec.pullSecret / HostedControlPlane pull secret in the HCP
namespace), so clarify that reconciliation is triggered by in-place updates to
either the kube-system secrets or the control-plane pull-secret and that HCCO
copies the HostedControlPlane pull secret into kube-system/original-pull-secret,
removes or merges the global-pull-secret depending on the presence of
additional-pull-secret, and then the DaemonSet syncs the resulting secret into
/var/lib/kubelet/config.json on eligible nodes.
- Around line 301-302: The diagram incorrectly shows CleanupProcess removing the
DaemonSet; update it so CleanupProcess only deletes the global-pull-secret
(DeleteGlobalPS/Delete global-pull-secret) while the DaemonSet (RemoveDaemonSet
node should be removed or changed) remains active and continues reconciling by
syncing the original-pull-secret when additional-pull-secret is absent.
Specifically, remove the arrow/statement that deletes the DaemonSet
(RemoveDaemonSet) and instead add/label a connection indicating the DaemonSet
stays running and reverts to syncing original-pull-secret (not
global-pull-secret) when additional-pull-secret is not present.
---
Nitpick comments:
In `@docs/content/how-to/common/global-pull-secret.md`:
- Around line 333-336: Rephrase the sentence about positive selection to make it
explicit that scheduling is opt-in: replace the fragment "not NodeAffinity on an
InPlace label" with wording that states only nodes carrying the
hypershift.openshift.io/nodepool-globalps-enabled=true label are eligible (i.e.,
the sync DaemonSet only targets labeled nodes), and clarify that InPlace workers
never receive that label while Replace workers on AWS/Azure do so to avoid
colliding with the MCD daemonset.
- Around line 274-277: The diagram incorrectly implies concurrent reads of
global-pull-secret and original-pull-secret; update the flow so SyncController
(sync-global-pullsecret loop) performs a conditional check: add a decision node
(e.g., CheckGlobalPS{global-pull-secret exists?}) between SyncController and the
read nodes, routing Yes -> ReadGlobalPS[Read global-pull-secret mount] and No ->
ReadOriginalPS[Read original-pull-secret mount]; alternatively, if you prefer
minimal change, keep SyncController -> ReadGlobalPS and SyncController ->
ReadOriginalPS but change the arrow labels to "Reads if present" and "Reads as
fallback" respectively to clearly indicate the conditional/fallback behavior.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited)
Review profile: CHILL
Plan: Enterprise
Run ID: 96dc9db1-2b71-4010-8917-9daf5195b483
⛔ Files ignored due to path filters (19)
📒 Files selected for processing (5)
- AGENTS.md
- api/hypershift/v1beta1/hostedcluster_types.go
- control-plane-operator/hostedclusterconfigoperator/controllers/globalps/setup.go
- control-plane-operator/hostedclusterconfigoperator/controllers/resources/resources.go
- docs/content/how-to/common/global-pull-secret.md
✅ Files skipped from review due to trivial changes (2)
- control-plane-operator/hostedclusterconfigoperator/controllers/resources/resources.go
- AGENTS.md
🚧 Files skipped from review as they are similar to previous changes (2)
- control-plane-operator/hostedclusterconfigoperator/controllers/globalps/setup.go
- api/hypershift/v1beta1/hostedcluster_types.go
88bfa0c to bd5fa9f (Compare)
Codecov Report
❌ Patch coverage is …
Additional details and impacted files
@@ Coverage Diff @@
## main #8327 +/- ##
=======================================
Coverage 36.46% 36.47%
=======================================
Files 765 765
Lines 93256 93278 +22
=======================================
+ Hits 34010 34022 +12
- Misses 56532 56542 +10
Partials 2714 2714
Flags with carried forward coverage won't be shown.
bd5fa9f to c635b8b (Compare)
♻️ Duplicate comments (2)
docs/content/how-to/common/global-pull-secret.md (2)
104-106: ⚠️ Potential issue | 🟡 Minor
Document both Secret watch sources in the baseline-sync section.
Line 105 currently mentions only `kube-system` watches, but the implementation also watches the control-plane namespace `pull-secret` for prompt reconciliation on in-place data updates. Please list both watch sources explicitly.
📝 Suggested doc patch
-### Automatic detection and baseline sync
-- The Hosted Cluster Config Operator (HCCO) continuously reconciles Global Pull Secret resources and watches Secrets in the `kube-system` namespace of the data plane.
+### Automatic detection and baseline sync
+- The Hosted Cluster Config Operator (HCCO) continuously reconciles Global Pull Secret resources and watches:
+  - Secrets in the `kube-system` namespace of the data plane, and
+  - the control-plane `pull-secret` Secret in the HostedControlPlane namespace.
Verify each finding against the current code and only fix it if needed. In `@docs/content/how-to/common/global-pull-secret.md` around lines 104 - 106, The baseline-sync section should explicitly list both Secret watch sources: update the paragraph under "Automatic detection and baseline sync" to state that HCCO watches Secrets in the data plane `kube-system` namespace and also watches the control-plane namespace `pull-secret` (the HostedControlPlane source) so it reconciles promptly on in-place updates; keep references to HostedControlPlane.spec.pullSecret and the copied target `kube-system/original-pull-secret` to make the flow clear.
139-142: ⚠️ Potential issue | 🟡 Minor
Cleanup prose conflicts with the architecture diagram cleanup path.
Line 142 says the DaemonSet keeps running, but the diagram still shows a “Remove DaemonSet” branch. Please align the diagram with the documented behavior.
📝 Suggested diagram adjustment
- CleanupProcess --> |Removes DaemonSet| RemoveDaemonSet[Remove DaemonSet]
+ CleanupProcess --> |DaemonSet remains scheduled by node eligibility| KeepDaemonSet[DaemonSet remains]
Verify each finding against the current code and only fix it if needed. In `@docs/content/how-to/common/global-pull-secret.md` around lines 139 - 142, The prose and diagram conflict: the text under "Automatic cleanup" says deleting `additional-pull-secret` leaves the DaemonSet running (syncing `/var/lib/kubelet/config.json` from `original-pull-secret`), but the architecture diagram shows a "Remove DaemonSet" branch; pick one behavior and make both consistent. Either update the diagram to remove the "Remove DaemonSet" branch and show the DaemonSet remaining active (annotate that `global-pull-secret` is removed and the DaemonSet continues using `original-pull-secret`), or change the prose to state that deleting `additional-pull-secret` triggers DaemonSet removal; ensure references to `additional-pull-secret`, `global-pull-secret`, `original-pull-secret`, and the DaemonSet are consistent across text and diagram.
🧹 Nitpick comments (1)
api/.golangci.yml (1)
1529-1532: Replace the old PullSecret variant with the fully qualified field name to keep exclusions clean.
Lines 1526–1528 contain a PullSecret omitempty exclusion that is now superseded by the new variant at lines 1529–1532. The new entry uses the fully qualified field name (`HostedClusterSpec.PullSecret`), which matches the current kubeapilinter message format. Remove the old variant to avoid stale entries in the exclusion list.
Proposed cleanup
Replace lines 1526–1528 (old: `field PullSecret`) with the new fully qualified variant (lines 1529–1532: `field HostedClusterSpec.PullSecret`).
Verify each finding against the current code and only fix it if needed. In `@api/.golangci.yml` around lines 1529 - 1532, Remove the stale kubeapilinter exclusion that references the short field name "PullSecret" and keep the new fully-qualified exclusion using "HostedClusterSpec.PullSecret"; specifically, delete the old entry for field PullSecret and ensure only the fully qualified text 'requiredfields: field HostedClusterSpec.PullSecret should have the omitempty tag' remains in the linters exclusions so they match the current kubeapilinter message format.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@docs/content/how-to/common/global-pull-secret.md`:
- Around line 104-106: The baseline-sync section should explicitly list both
Secret watch sources: update the paragraph under "Automatic detection and
baseline sync" to state that HCCO watches Secrets in the data plane
`kube-system` namespace and also watches the control-plane namespace
`pull-secret` (the HostedControlPlane source) so it reconciles promptly on
in-place updates; keep references to HostedControlPlane.spec.pullSecret and the
copied target `kube-system/original-pull-secret` to make the flow clear.
- Around line 139-142: The prose and diagram conflict: the text under "Automatic
cleanup" says deleting `additional-pull-secret` leaves the DaemonSet running
(syncing `/var/lib/kubelet/config.json` from `original-pull-secret`), but the
architecture diagram shows a "Remove DaemonSet" branch; pick one behavior and
make both consistent. Either update the diagram to remove the "Remove DaemonSet"
branch and show the DaemonSet remaining active (annotate that
`global-pull-secret` is removed and the DaemonSet continues using
`original-pull-secret`), or change the prose to state that deleting
`additional-pull-secret` triggers DaemonSet removal; ensure references to
`additional-pull-secret`, `global-pull-secret`, `original-pull-secret`, and the
DaemonSet are consistent across text and diagram.
---
Nitpick comments:
In `@api/.golangci.yml`:
- Around line 1529-1532: Remove the stale kubeapilinter exclusion that
references the short field name "PullSecret" and keep the new fully-qualified
exclusion using "HostedClusterSpec.PullSecret"; specifically, delete the old
entry for field PullSecret and ensure only the fully qualified text
'requiredfields: field HostedClusterSpec.PullSecret should have the omitempty
tag' remains in the linters exclusions so they match the current kubeapilinter
message format.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited)
Review profile: CHILL
Plan: Enterprise
Run ID: 6114ac10-94b6-4911-8f79-b327d7ea779c
⛔ Files ignored due to path filters (19)
📒 Files selected for processing (6)
- AGENTS.md
- api/.golangci.yml
- api/hypershift/v1beta1/hostedcluster_types.go
- control-plane-operator/hostedclusterconfigoperator/controllers/globalps/setup.go
- control-plane-operator/hostedclusterconfigoperator/controllers/resources/resources.go
- docs/content/how-to/common/global-pull-secret.md
✅ Files skipped from review due to trivial changes (1)
- AGENTS.md
🚧 Files skipped from review as they are similar to previous changes (1)
- api/hypershift/v1beta1/hostedcluster_types.go
jparrill left a comment
Dropped a minor thing and agreeing with bot about one thing to fix. Otherwise lgtm
// See https://github.com/openshift/hypershift/blob/3e0e06f78460ac1100dd8432dd7eee4e980a87d0/docs/content/how-to/common/global-pull-secret.md
// for details.
@@ -197,16 +215,15 @@ The implementation consists of several key components working together:
- **Node eligibility assessment**: Labels nodes from InPlace NodePools and configures DaemonSet scheduling restrictions
9987cf7 to 0c704a1 (Compare)
7065e93 to 87b1ccf (Compare)
@enxebre: This pull request references Jira Issue OCPBUGS-84528, which is invalid:
Comment … The bug has been updated to refer to the pull request using the external bug tracker.
/jira refresh
@enxebre: This pull request references Jira Issue OCPBUGS-84528, which is valid. The bug has been moved to the POST state. 3 validation(s) were run on this bug
/hold cancel
Test Results: e2e-aws
3a0541b to 668c6dc (Compare)
Scheduling tests matching the …
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: bryan-cox, enxebre
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Details: Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing …
/test e2e-aws
AI Test Failure Analysis
Job: …
Generated by hypershift-analyze-e2e-failure post-step using Claude claude-opus-4-6
/pipline required
/pipeline required
Scheduling tests matching the …
/retest
Clarify spec.pullSecret API godoc, add CP pull-secret watches to HCCO resources and globalps controllers, extract testable predicates with unit tests, and overhaul global-pull-secret docs.
Add subtest to EnsureGlobalPullSecret validating the full propagation chain when management-cluster pull secret data is updated in-place without triggering a NodePool rollout.
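The "extract testable predicates with unit tests" part of the first commit above might look roughly like the following; the predicate name, namespace, and test cases are illustrative assumptions rather than the PR's actual code.

```go
package globalps

import (
	"testing"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/event"
	"sigs.k8s.io/controller-runtime/pkg/predicate"
)

// isControlPlanePullSecret is a hypothetical extracted predicate: it admits
// only the "pull-secret" Secret in the HostedControlPlane namespace.
func isControlPlanePullSecret(hcpNamespace string) predicate.Funcs {
	return predicate.NewPredicateFuncs(func(obj client.Object) bool {
		return obj.GetNamespace() == hcpNamespace && obj.GetName() == "pull-secret"
	})
}

func TestIsControlPlanePullSecret(t *testing.T) {
	p := isControlPlanePullSecret("clusters-example")
	tests := []struct {
		name      string
		namespace string
		secret    string
		want      bool
	}{
		{"hcp pull-secret matches", "clusters-example", "pull-secret", true},
		{"other secret in hcp namespace ignored", "clusters-example", "etcd-client-tls", false},
		{"pull-secret in another namespace ignored", "openshift-config", "pull-secret", false},
	}
	for _, tc := range tests {
		t.Run(tc.name, func(t *testing.T) {
			s := &corev1.Secret{ObjectMeta: metav1.ObjectMeta{Namespace: tc.namespace, Name: tc.secret}}
			if got := p.Update(event.UpdateEvent{ObjectOld: s, ObjectNew: s}); got != tc.want {
				t.Errorf("Update() = %v, want %v", got, tc.want)
			}
		})
	}
}
```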
668c6dc to 9f69bce (Compare)
New changes are detected. LGTM label has been removed.
/pipeline required
Scheduling tests matching the …
Test Failure Analysis Complete
Job Information
Test Failure Analysis
Error
Summary
This is a transient Konflux infrastructure failure, not a code or test issue. The Tekton PipelineRun was created but failed to execute — the entire pipeline completed in exactly 1 second, indicating the pipeline never ran any build tasks. All other CI checks on the same commit (unit tests, envtests across 12 Kubernetes versions, lint, verify/build, codespell) passed. The same …
Root Cause
The root cause is a transient Konflux/Tekton PipelineRun scheduling failure. The specific sequence of events:
This is confirmed as infrastructure-only because:
Recommendations
Evidence
Summary
- Clarify the `HostedCluster.spec.pullSecret` API godoc to accurately describe in-place Secret update propagation: no NodePool rollout, but on AWS/Azure Replace NodePools credentials still reach the guest cluster and kubelet via the Global Pull Secret DaemonSet path
- Add a control-plane pull-secret watch to the `globalps` HCCO controller so `kube-system/original-pull-secret` reconciles promptly on in-place updates (the `resources` controller already had a watch added for `openshift-config/pull-secret`)
- Overhaul `global-pull-secret.md` to document platform/NodePool eligibility, the positive-selection scheduling model, and baseline sync behavior without `additional-pull-secret`
- Update `AGENTS.md` so AI agents treat `spec.pullSecret`, management-cluster Secret data, HCCO reconciliation, and optional Global Pull Secret as one system when making changes

Test plan
- `make verify` passes (API generation, CRD manifests, lint)
- `openshift-config/pull-secret`, `kube-system/original-pull-secret`, and kubelet `config.json` all update promptly without NodePool rollout

🤖 Generated with Claude Code
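A sketch of how the e2e verification described in the test plan above could poll for propagation, assuming a controller-runtime client for the guest cluster; the helper name and timeouts are made up for illustration and are not the actual `EnsureGlobalPullSecret` code.

```go
package e2e

import (
	"bytes"
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/apimachinery/pkg/util/wait"
	crclient "sigs.k8s.io/controller-runtime/pkg/client"
)

// waitForPullSecretPropagation polls until kube-system/original-pull-secret in
// the guest cluster carries the same ".dockerconfigjson" bytes that were just
// written to the management-cluster pull secret, without any NodePool rollout.
func waitForPullSecretPropagation(ctx context.Context, guest crclient.Client, want []byte) error {
	return wait.PollUntilContextTimeout(ctx, 10*time.Second, 10*time.Minute, true, func(ctx context.Context) (bool, error) {
		secret := &corev1.Secret{}
		if err := guest.Get(ctx, types.NamespacedName{Namespace: "kube-system", Name: "original-pull-secret"}, secret); err != nil {
			return false, nil // keep polling on transient errors
		}
		return bytes.Equal(secret.Data[corev1.DockerConfigJsonKey], want), nil
	})
}
```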
Summary by CodeRabbit