
OCPBUGS-84528: clarify pull secret in-place update behavior and add CP watches#8327

Open
enxebre wants to merge 2 commits into openshift:main from enxebre:enxebre/pull-secret-docs-and-watch

Conversation

@enxebre
Member

@enxebre enxebre commented Apr 24, 2026

Summary

  • Clarify HostedCluster.spec.pullSecret API godoc to accurately describe in-place Secret update propagation: no NodePool rollout, but on AWS/Azure Replace NodePools credentials still reach guest cluster and kubelet via the Global Pull Secret DaemonSet path
  • Add CP pull-secret watch to the globalps HCCO controller so kube-system/original-pull-secret reconciles promptly on in-place updates (the resources controller already had a watch added for openshift-config/pull-secret)
  • Overhaul global-pull-secret.md to document platform/NodePool eligibility, positive-selection scheduling model, and baseline sync behavior without additional-pull-secret
  • Add "Pull secret cycling" section to AGENTS.md so AI agents treat spec.pullSecret, management-cluster Secret data, HCCO reconciliation, and optional Global Pull Secret as one system when making changes

Test plan

  • Verify make verify passes (API generation, CRD manifests, lint)
  • Verify HCCO starts cleanly with both new watches (resources + globalps)
  • Test in-place pull secret update on AWS Replace NodePool: confirm openshift-config/pull-secret, kube-system/original-pull-secret, and kubelet config.json all update promptly without NodePool rollout
  • Test on a non-AWS/Azure platform: confirm secrets update in-cluster but no DaemonSet pods schedule (positive-selection label absent)

🤖 Generated with Claude Code

Summary by CodeRabbit

  • Documentation
    • Clarified pull secret propagation semantics and when node pool rollouts are triggered
    • Added guidance on platform and node pool eligibility for global pull secret synchronization
    • Updated global pull secret reconciliation behavior and merge mechanics

@openshift-merge-bot
Contributor

Pipeline controller notification
This repo is configured to use the pipeline controller. Second-stage tests will be triggered either automatically or after the lgtm label is added, depending on the repository configuration. The pipeline controller automatically detects which contexts are required and uses /test Prow commands to trigger the second stage.

For optional jobs, comment /test ? to see a list of all defined jobs. To trigger manually all jobs from second stage use /pipeline required command.

This repository is configured in: LGTM mode

@openshift-ci-robot openshift-ci-robot added the jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. label Apr 24, 2026
@openshift-ci-robot

@enxebre: This pull request explicitly references no jira issue.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci openshift-ci Bot added do-not-merge/needs-area needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. approved Indicates a PR has been approved by an approver from all required OWNERS files. labels Apr 24, 2026
@openshift-ci openshift-ci Bot requested a review from Nirshal April 24, 2026 08:25
@coderabbitai
Contributor

coderabbitai Bot commented Apr 24, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

The pull request introduces comprehensive documentation and controller instrumentation for pull secret cycling in HyperShift. Changes include documentation of how HostedCluster.spec.pullSecret modifications trigger NodePool rollouts while in-place Secret data updates do not, clarification of propagation behavior across control-plane and guest namespaces, and updated global pull secret mechanics showing baseline reconciliation with optional merge logic. Two new controller watches are added to monitor pull-secret changes in the control-plane namespace and respond with reconciliation requests upon Secret modification events.

🚥 Pre-merge checks | ✅ 11 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00%, which is insufficient; the required threshold is 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (11 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled. |
| Linked Issues check | ✅ Passed | Check skipped because no linked issues were found for this pull request. |
| Out of Scope Changes check | ✅ Passed | Check skipped because no linked issues were found for this pull request. |
| Stable And Deterministic Test Names | ✅ Passed | This PR contains no Ginkgo test code or test file modifications; it only includes documentation updates and controller implementation changes. |
| Test Structure And Quality | ✅ Passed | The custom check for Ginkgo test quality is not applicable; this PR contains standard Go unit tests with Gomega assertions, not Ginkgo tests. |
| Microshift Test Compatibility | ✅ Passed | The pull request does not add any new Ginkgo e2e tests. Modifications include documentation updates, API type comments, and controller watch registration code only. |
| Single Node Openshift (Sno) Test Compatibility | ✅ Passed | This PR contains no new Ginkgo e2e tests, only documentation and controller code updates. |
| Topology-Aware Scheduling Compatibility | ✅ Passed | PR introduces no scheduling constraints; changes are watch registrations and documentation updates only. |
| Ote Binary Stdout Contract | ✅ Passed | This PR does not contain OTE test binary code; it only modifies production controller code and documentation files. |
| Ipv6 And Disconnected Network Test Compatibility | ✅ Passed | PR contains no new Ginkgo e2e tests; only documentation and API type updates. |
| Title check | ✅ Passed | The pull request title accurately summarizes the main changes: clarifying pull secret in-place update behavior and adding control-plane watches to the codebase. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@openshift-ci openshift-ci Bot requested a review from sjenning April 24, 2026 08:25
@openshift-ci openshift-ci Bot added area/api Indicates the PR includes changes for the API area/cli Indicates the PR includes changes for CLI area/control-plane-operator Indicates the PR includes changes for the control plane operator - in an OCP release area/documentation Indicates the PR includes changes for documentation and removed do-not-merge/needs-area labels Apr 24, 2026
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
docs/content/how-to/common/global-pull-secret.md (2)

106-107: Clarify the pull secret propagation path terminology.

The documentation alternates between "HostedControlPlane pull secret" and "HostedCluster.spec.pullSecret" which may confuse readers about the actual data flow. Consider adding a brief clarification early in this section explaining the path:

  1. HostedCluster.spec.pullSecret (field referencing a Secret in the management cluster)
  2. Secret reconciled into the HostedControlPlane namespace by HyperShift operator
  3. HCCO copies that Secret's data into kube-system/original-pull-secret in the data plane

This would help readers understand that "HostedControlPlane pull secret" and "the Secret referenced by HostedCluster.spec.pullSecret" have the same content but exist at different points in the reconciliation chain.

Also applies to: 112-112, 134-134


335-335: Minor phrasing ambiguity in MCD conflict explanation.

Line 335 states "without colliding with MCD on InPlace upgrade paths" which might be read as referring to InPlace paths on Replace workers. Consider rephrasing for clarity:

"so the DaemonSet can run there without the MCD conflicts that would occur on InPlace workers"

This makes it clearer that the conflict avoidance is about not running on InPlace workers, rather than about Replace workers having some relationship to InPlace upgrade paths.



📥 Commits

Reviewing files that changed from the base of the PR and between 3e0e06f and eb4f4ec.

⛔ Files ignored due to path filters (19)
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/AAA_ungated.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/AutoNodeKarpenter.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ClusterUpdateAcceptRisks.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ClusterVersionOperatorConfiguration.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ExternalOIDC.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ExternalOIDCWithUIDAndExtraClaimMappings.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ExternalOIDCWithUpstreamParity.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/GCPPlatform.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/HCPEtcdBackup.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/HyperShiftOnlyDynamicResourceAllocation.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ImageStreamImportMode.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/KMSEncryptionProvider.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/OpenStack.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • cmd/install/assets/crds/hypershift-operator/zz_generated.crd-manifests/hostedclusters-Hypershift-CustomNoUpgrade.crd.yaml is excluded by !**/zz_generated.crd-manifests/**, !cmd/install/assets/**/*.yaml
  • cmd/install/assets/crds/hypershift-operator/zz_generated.crd-manifests/hostedclusters-Hypershift-Default.crd.yaml is excluded by !**/zz_generated.crd-manifests/**, !cmd/install/assets/**/*.yaml
  • cmd/install/assets/crds/hypershift-operator/zz_generated.crd-manifests/hostedclusters-Hypershift-TechPreviewNoUpgrade.crd.yaml is excluded by !**/zz_generated.crd-manifests/**, !cmd/install/assets/**/*.yaml
  • docs/content/reference/aggregated-docs.md is excluded by !docs/content/reference/aggregated-docs.md
  • docs/content/reference/api.md is excluded by !docs/content/reference/api.md
  • vendor/github.com/openshift/hypershift/api/hypershift/v1beta1/hostedcluster_types.go is excluded by !vendor/**, !**/vendor/**
📒 Files selected for processing (5)
  • AGENTS.md
  • api/hypershift/v1beta1/hostedcluster_types.go
  • control-plane-operator/hostedclusterconfigoperator/controllers/globalps/setup.go
  • control-plane-operator/hostedclusterconfigoperator/controllers/resources/resources.go
  • docs/content/how-to/common/global-pull-secret.md

Comment on lines +682 to +683
// See https://github.com/openshift/hypershift/blob/3e0e06f78460ac1100dd8432dd7eee4e980a87d0/docs/content/how-to/common/global-pull-secret.md
// for details.
Contributor

@coderabbitai coderabbitai Bot Apr 24, 2026


⚠️ Potential issue | 🟡 Minor

Avoid a commit-pinned docs link in the API comment.

Hardcoding a blob URL to a specific SHA means the CRD/godoc description will drift as the docs evolve or get backported. Prefer the published docs path or a branch-relative reference.


Contributor


Agree

Contributor


@jparrill, thanks for confirming! Here's the fix to replace the commit-pinned URL with a stable main-branch reference:

  • Replace the SHA-pinned blob URL in the PullSecret godoc with a /blob/main/ URL


Member Author

@enxebre enxebre Apr 24, 2026


That's intentional so the behaviour is versioned. If you point to a floating link, a consumer reading this for version N will see the description of whatever the latest behaviour is (if that link still exists), which doesn't necessarily apply to version N.

Contributor


Seems like the humans are having a chat. I'll hop back into my burrow for now. If you need me again, just tag @coderabbitai in a new comment, and I'll come hopping out!

@enxebre enxebre force-pushed the enxebre/pull-secret-docs-and-watch branch from eb4f4ec to 0c991ce Compare April 24, 2026 08:43
@openshift-ci openshift-ci Bot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Apr 24, 2026
@enxebre enxebre force-pushed the enxebre/pull-secret-docs-and-watch branch from 0c991ce to 88bfa0c Compare April 24, 2026 08:47
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@docs/content/how-to/common/global-pull-secret.md`:
- Around line 139-143: The docs text says that deleting additional-pull-secret
causes HCCO to remove global-pull-secret, revert to syncing only
original-pull-secret, and that the DaemonSet continues to run and keeps
/var/lib/kubelet/config.json aligned; update the architecture diagram to match
by removing the "Remove DaemonSet" cleanup path (or change it to a "DaemonSet
remains" state), and ensure arrows/labels show reversion to original-pull-secret
and retention of the DaemonSet rather than deletion.
- Around line 104-109: Update the paragraph describing HCCO watches to
explicitly list both Secret watch sources: the data-plane `kube-system` Secrets
(e.g., `kube-system/original-pull-secret`) and the
HostedControlPlane/HostedCluster namespace `pull-secret` (the
HostedCluster.spec.pullSecret source) so readers know HCCO watches both the
management-cluster Secret and the in-place HCP pull-secret for prompt updates;
keep references to `global-pull-secret`, `additional-pull-secret`, and the
DaemonSet sync behavior unchanged while adding the explicit mention that
reconciliation watches the HCP namespace `pull-secret` in addition to
`kube-system`.


📥 Commits

Reviewing files that changed from the base of the PR and between eb4f4ec and 0c991ce.

✅ Files skipped from review due to trivial changes (2)
  • control-plane-operator/hostedclusterconfigoperator/controllers/resources/resources.go
  • AGENTS.md
🚧 Files skipped from review as they are similar to previous changes (1)
  • api/hypershift/v1beta1/hostedcluster_types.go

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (2)
docs/content/how-to/common/global-pull-secret.md (2)

104-109: ⚠️ Potential issue | 🟡 Minor

Document both watch sources for completeness.

Line 105 mentions only kube-system namespace watches, but the PR objectives state that a control-plane pull-secret watch was added to the globalps HCCO controller. To give readers the complete picture of how in-place Secret updates trigger reconciliation, please mention both:

  • the data-plane kube-system Secrets, and
  • the control-plane pull-secret (the HostedCluster.spec.pullSecret source in the HCP namespace).

This aligns with the "pull secret cycling" model where in-place management-cluster Secret data updates flow through HCCO reconciliation into the guest cluster.

📝 Suggested doc update
 ### Automatic detection and baseline sync
-- The Hosted Cluster Config Operator (HCCO) continuously reconciles Global Pull Secret resources and watches Secrets in the `kube-system` namespace of the data plane.
+- The Hosted Cluster Config Operator (HCCO) continuously reconciles Global Pull Secret resources and watches:
+  - Secrets in the `kube-system` namespace of the data plane, and
+  - the control-plane pull-secret Secret in the HostedControlPlane namespace, ensuring in-place pull-secret updates trigger prompt reconciliation.
 - On every reconcile, HCCO copies the HostedControlPlane pull secret (sourced from **`HostedCluster.spec.pullSecret`**) into `kube-system/original-pull-secret` so the DaemonSet can mount it on the node.

Based on learnings: Treat HostedCluster spec.pullSecret, management-cluster Secret data, HCCO reconciliation, and optional Global Pull Secret as one system when changing worker authentication to registries.


301-302: ⚠️ Potential issue | 🟡 Minor

Update diagram to match documented cleanup behavior.

Line 302 shows the cleanup process removing the DaemonSet, but lines 139-143 clearly state: "The DaemonSet continues to run on eligible nodes and keeps /var/lib/kubelet/config.json aligned with that HostedCluster pull secret." The code (context snippet 1) confirms that the DaemonSet is reconciled even when additional-pull-secret doesn't exist—it simply syncs original-pull-secret instead of the merged global-pull-secret.

Please update the diagram to reflect that cleanup deletes global-pull-secret but the DaemonSet remains active and reverts to syncing only original-pull-secret.

🎨 Suggested diagram fix
     GlobalPSController --> |If additional PS deleted| CleanupProcess[Cleanup Process]
     CleanupProcess --> |Deletes global PS| DeleteGlobalPS[Delete global-pull-secret]
-    CleanupProcess --> |Removes DaemonSet| RemoveDaemonSet[Remove DaemonSet]
+    CleanupProcess --> |DaemonSet remains active| RevertToOriginal[Revert to syncing original-pull-secret only]
🧹 Nitpick comments (2)
docs/content/how-to/common/global-pull-secret.md (2)

333-336: Minor wording improvement for positive selection explanation.

Lines 333-336 accurately describe the positive-selection model, but "not NodeAffinity on an InPlace label" could be clearer. Consider rephrasing to emphasize it's an opt-in approach where only labeled nodes are eligible, rather than defining it by what it's not.

✏️ Suggested rewording
-### How scheduling avoids InPlace conflicts
+### How node scheduling works (positive selection)

-Eligibility is **positive selection**, not NodeAffinity on an InPlace label: InPlace workers simply **never** receive `hypershift.openshift.io/nodepool-globalps-enabled=true`, so the sync DaemonSet does not place pods on them. Replace workers on AWS/Azure **do** receive the label so the DaemonSet can run there without colliding with MCD on InPlace upgrade paths.
+The DaemonSet uses **positive selection** via `nodeSelector`: only nodes labeled `hypershift.openshift.io/nodepool-globalps-enabled=true` are eligible to run sync pods. InPlace workers **never** receive this label (avoiding conflicts with Machine Config Daemon), while AWS/Azure Replace workers **do** receive it (enabling kubelet config sync via the DaemonSet).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/content/how-to/common/global-pull-secret.md` around lines 333 - 336,
Rephrase the sentence about positive selection to make it explicit that
scheduling is opt-in: replace the fragment "not NodeAffinity on an InPlace
label" with wording that states only nodes carrying the
hypershift.openshift.io/nodepool-globalps-enabled=true label are eligible (i.e.,
the sync DaemonSet only targets labeled nodes), and clarify that InPlace workers
never receive that label while Replace workers on AWS/Azure do so to avoid
colliding with the MCD daemonset.
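The opt-in eligibility rule being discussed can be condensed into a small decision function. This is a hedged sketch — `eligibleForGlobalPSLabel` is a hypothetical name, and the real labeling is performed by the HCCO against NodePool platform and upgrade-type fields:

```go
package main

import "fmt"

// eligibleForGlobalPSLabel sketches the documented positive-selection rule:
// only nodes from Replace NodePools on AWS or Azure receive
// hypershift.openshift.io/nodepool-globalps-enabled=true, so only they can
// schedule the sync DaemonSet pods.
func eligibleForGlobalPSLabel(platform, upgradeType string) bool {
	if upgradeType != "Replace" {
		return false // InPlace workers never get the label (avoids MCD conflicts)
	}
	return platform == "AWS" || platform == "Azure"
}

func main() {
	fmt.Println(eligibleForGlobalPSLabel("AWS", "Replace"))
	fmt.Println(eligibleForGlobalPSLabel("AWS", "InPlace"))
	fmt.Println(eligibleForGlobalPSLabel("GCP", "Replace"))
}
```

Because the DaemonSet's `nodeSelector` requires the label, unlabeled nodes are simply never considered — there is no anti-affinity rule to maintain.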

274-277: Clarify conditional Secret reading in diagram.

Lines 276-277 show arrows suggesting both global-pull-secret and original-pull-secret are read concurrently, but the actual behavior (documented at lines 134-135) is: read global-pull-secret when present, otherwise read original-pull-secret.

Consider updating the diagram flow to show this is a conditional choice rather than parallel reads—perhaps by routing through a decision node first, or adding "(fallback)" to the original-pull-secret arrow.

💡 Suggested diagram improvement

Modify the flow to show the choice more clearly:

    SyncCommand --> |Reads mounted files| SyncController[sync-global-pullsecret loop]
    SyncController --> |Check presence| CheckGlobalPS{global-pull-secret exists?}
    CheckGlobalPS --> |Yes| ReadGlobalPS[Read global-pull-secret mount]
    CheckGlobalPS --> |No| ReadOriginalPS[Read original-pull-secret mount]

Or add labels to existing arrows to indicate the conditional nature:

    SyncController --> |Reads if present| ReadGlobalPS[Read global-pull-secret mount]
    SyncController --> |Reads as fallback| ReadOriginalPS[Read original-pull-secret mount]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/content/how-to/common/global-pull-secret.md` around lines 274 - 277, The
diagram incorrectly implies concurrent reads of global-pull-secret and
original-pull-secret; update the flow so SyncController (sync-global-pullsecret
loop) performs a conditional check: add a decision node (e.g.,
CheckGlobalPS{global-pull-secret exists?}) between SyncController and the read
nodes, routing Yes -> ReadGlobalPS[Read global-pull-secret mount] and No ->
ReadOriginalPS[Read original-pull-secret mount]; alternatively, if you prefer
minimal change, keep SyncController -> ReadGlobalPS and SyncController ->
ReadOriginalPS but change the arrow labels to "Reads if present" and "Reads as
fallback" respectively to clearly indicate the conditional/fallback behavior.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@docs/content/how-to/common/global-pull-secret.md`:
- Line 215: The "Node eligibility assessment" bullet incorrectly says "Labels
nodes from InPlace NodePools"—update this to reflect the actual behavior:
InPlace NodePools are intentionally NOT labeled with
hypershift.openshift.io/nodepool-globalps-enabled=true and only AWS/Azure
Replace NodePool nodes receive that label; change the text under "Node
eligibility assessment" to state that InPlace nodes are excluded from labeling
and that the DaemonSet scheduling restrictions apply only to nodes labeled
hypershift.openshift.io/nodepool-globalps-enabled=true (i.e., Replace NodePool
nodes).

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Enterprise

Run ID: 96dc9db1-2b71-4010-8917-9daf5195b483

📥 Commits

Reviewing files that changed from the base of the PR and between 0c991ce and 88bfa0c.

⛔ Files ignored due to path filters (19)
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/AAA_ungated.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/AutoNodeKarpenter.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ClusterUpdateAcceptRisks.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ClusterVersionOperatorConfiguration.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ExternalOIDC.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ExternalOIDCWithUIDAndExtraClaimMappings.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ExternalOIDCWithUpstreamParity.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/GCPPlatform.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/HCPEtcdBackup.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/HyperShiftOnlyDynamicResourceAllocation.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ImageStreamImportMode.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/KMSEncryptionProvider.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/OpenStack.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • cmd/install/assets/crds/hypershift-operator/zz_generated.crd-manifests/hostedclusters-Hypershift-CustomNoUpgrade.crd.yaml is excluded by !**/zz_generated.crd-manifests/**, !cmd/install/assets/**/*.yaml
  • cmd/install/assets/crds/hypershift-operator/zz_generated.crd-manifests/hostedclusters-Hypershift-Default.crd.yaml is excluded by !**/zz_generated.crd-manifests/**, !cmd/install/assets/**/*.yaml
  • cmd/install/assets/crds/hypershift-operator/zz_generated.crd-manifests/hostedclusters-Hypershift-TechPreviewNoUpgrade.crd.yaml is excluded by !**/zz_generated.crd-manifests/**, !cmd/install/assets/**/*.yaml
  • docs/content/reference/aggregated-docs.md is excluded by !docs/content/reference/aggregated-docs.md
  • docs/content/reference/api.md is excluded by !docs/content/reference/api.md
  • vendor/github.com/openshift/hypershift/api/hypershift/v1beta1/hostedcluster_types.go is excluded by !vendor/**, !**/vendor/**
📒 Files selected for processing (5)
  • AGENTS.md
  • api/hypershift/v1beta1/hostedcluster_types.go
  • control-plane-operator/hostedclusterconfigoperator/controllers/globalps/setup.go
  • control-plane-operator/hostedclusterconfigoperator/controllers/resources/resources.go
  • docs/content/how-to/common/global-pull-secret.md
✅ Files skipped from review due to trivial changes (2)
  • control-plane-operator/hostedclusterconfigoperator/controllers/resources/resources.go
  • AGENTS.md
🚧 Files skipped from review as they are similar to previous changes (2)
  • control-plane-operator/hostedclusterconfigoperator/controllers/globalps/setup.go
  • api/hypershift/v1beta1/hostedcluster_types.go

Comment thread docs/content/how-to/common/global-pull-secret.md Outdated
@enxebre enxebre force-pushed the enxebre/pull-secret-docs-and-watch branch from 88bfa0c to bd5fa9f Compare April 24, 2026 09:04
@codecov

codecov Bot commented Apr 24, 2026

Codecov Report

❌ Patch coverage is 50.00000% with 12 lines in your changes missing coverage. Please review.
✅ Project coverage is 36.47%. Comparing base (60802b1) to head (9f69bce).

Files with missing lines Patch % Lines
...lusterconfigoperator/controllers/globalps/setup.go 53.33% 7 Missing ⚠️
...rconfigoperator/controllers/resources/resources.go 44.44% 5 Missing ⚠️
Additional details and impacted files
@@           Coverage Diff           @@
##             main    #8327   +/-   ##
=======================================
  Coverage   36.46%   36.47%           
=======================================
  Files         765      765           
  Lines       93256    93278   +22     
=======================================
+ Hits        34010    34022   +12     
- Misses      56532    56542   +10     
  Partials     2714     2714           
Files with missing lines Coverage Δ
...rconfigoperator/controllers/resources/resources.go 50.50% <44.44%> (-0.02%) ⬇️
...lusterconfigoperator/controllers/globalps/setup.go 10.25% <53.33%> (+10.25%) ⬆️
Flag Coverage Δ
cmd-support 30.34% <ø> (ø)
cpo-hostedcontrolplane 37.05% <ø> (ø)
cpo-other 35.72% <50.00%> (+0.03%) ⬆️
hypershift-operator 47.89% <ø> (ø)
other 28.37% <ø> (ø)

Flags with carried forward coverage won't be shown. Click here to find out more.

🚀 New features to boost your workflow:
  • ❄️ Test Analytics: Detect flaky tests, report on failures, and find test suite problems.

@enxebre enxebre force-pushed the enxebre/pull-secret-docs-and-watch branch from bd5fa9f to c635b8b Compare April 24, 2026 09:07
Contributor

@coderabbitai coderabbitai Bot left a comment


♻️ Duplicate comments (2)
docs/content/how-to/common/global-pull-secret.md (2)

104-106: ⚠️ Potential issue | 🟡 Minor

Document both Secret watch sources in the baseline-sync section.

Line 105 currently mentions only kube-system watches, but the implementation also watches the control-plane namespace pull-secret for prompt reconciliation on in-place data updates. Please list both watch sources explicitly.

📝 Suggested doc patch
-### Automatic detection and baseline sync
-- The Hosted Cluster Config Operator (HCCO) continuously reconciles Global Pull Secret resources and watches Secrets in the `kube-system` namespace of the data plane.
+### Automatic detection and baseline sync
+- The Hosted Cluster Config Operator (HCCO) continuously reconciles Global Pull Secret resources and watches:
+  - Secrets in the `kube-system` namespace of the data plane, and
+  - the control-plane `pull-secret` Secret in the HostedControlPlane namespace.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/content/how-to/common/global-pull-secret.md` around lines 104 - 106, The
baseline-sync section should explicitly list both Secret watch sources: update
the paragraph under "Automatic detection and baseline sync" to state that HCCO
watches Secrets in the data plane `kube-system` namespace and also watches the
control-plane namespace `pull-secret` (the HostedControlPlane source) so it
reconciles promptly on in-place updates; keep references to
HostedControlPlane.spec.pullSecret and the copied target
`kube-system/original-pull-secret` to make the flow clear.

139-142: ⚠️ Potential issue | 🟡 Minor

Cleanup prose conflicts with the architecture diagram cleanup path.

Line 142 says the DaemonSet keeps running, but the diagram still shows a “Remove DaemonSet” branch. Please align the diagram with the documented behavior.

📝 Suggested diagram adjustment
-    CleanupProcess --> |Removes DaemonSet| RemoveDaemonSet[Remove DaemonSet]
+    CleanupProcess --> |DaemonSet remains scheduled by node eligibility| KeepDaemonSet[DaemonSet remains]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/content/how-to/common/global-pull-secret.md` around lines 139 - 142, The
prose and diagram conflict: the text under "Automatic cleanup" says deleting
`additional-pull-secret` leaves the DaemonSet running (syncing
`/var/lib/kubelet/config.json` from `original-pull-secret`), but the
architecture diagram shows a "Remove DaemonSet" branch; pick one behavior and
make both consistent. Either update the diagram to remove the "Remove DaemonSet"
branch and show the DaemonSet remaining active (annotate that
`global-pull-secret` is removed and the DaemonSet continues using
`original-pull-secret`), or change the prose to state that deleting
`additional-pull-secret` triggers DaemonSet removal; ensure references to
`additional-pull-secret`, `global-pull-secret`, `original-pull-secret`, and the
DaemonSet are consistent across text and diagram.
🧹 Nitpick comments (1)
api/.golangci.yml (1)

1529-1532: Replace the old PullSecret variant with the fully qualified field name to keep exclusions clean.

Lines 1526–1528 contain a PullSecret omitempty exclusion that is now superseded by the new variant at lines 1529–1532. The new entry uses the fully qualified field name (HostedClusterSpec.PullSecret) which matches the current kubeapilinter message format. Remove the old variant to avoid stale entries in the exclusion list.

Proposed cleanup

Replace lines 1526–1528 (old: field PullSecret) with the new fully qualified variant (lines 1529–1532: field HostedClusterSpec.PullSecret).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@api/.golangci.yml` around lines 1529 - 1532, Remove the stale kubeapilinter
exclusion that references the short field name "PullSecret" and keep the new
fully-qualified exclusion using "HostedClusterSpec.PullSecret"; specifically,
delete the old entry for field PullSecret and ensure only the fully qualified
text 'requiredfields: field HostedClusterSpec.PullSecret should have the
omitempty tag' remains in the linters exclusions so they match the current
kubeapilinter message format.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Enterprise

Run ID: 6114ac10-94b6-4911-8f79-b327d7ea779c

📥 Commits

Reviewing files that changed from the base of the PR and between 88bfa0c and bd5fa9f.

⛔ Files ignored due to path filters (19)
  • Same 19 generated CRD manifests, vendored files, and aggregated reference docs as in the previous review run.
📒 Files selected for processing (6)
  • AGENTS.md
  • api/.golangci.yml
  • api/hypershift/v1beta1/hostedcluster_types.go
  • control-plane-operator/hostedclusterconfigoperator/controllers/globalps/setup.go
  • control-plane-operator/hostedclusterconfigoperator/controllers/resources/resources.go
  • docs/content/how-to/common/global-pull-secret.md
✅ Files skipped from review due to trivial changes (1)
  • AGENTS.md
🚧 Files skipped from review as they are similar to previous changes (1)
  • api/hypershift/v1beta1/hostedcluster_types.go

Contributor

@jparrill jparrill left a comment


Dropped a minor thing and agreeing with bot about one thing to fix. Otherwise lgtm

Comment on lines +682 to +683
// See https://github.com/openshift/hypershift/blob/3e0e06f78460ac1100dd8432dd7eee4e980a87d0/docs/content/how-to/common/global-pull-secret.md
// for details.
Contributor


Agree

@@ -197,16 +215,15 @@ The implementation consists of several key components working together:
- **Node eligibility assessment**: Labels nodes from InPlace NodePools and configures DaemonSet scheduling restrictions
Contributor


This is not accurate

@enxebre enxebre force-pushed the enxebre/pull-secret-docs-and-watch branch 2 times, most recently from 9987cf7 to 0c704a1 Compare April 24, 2026 10:36
@openshift-ci openshift-ci Bot added the area/testing Indicates the PR includes changes for e2e testing label Apr 24, 2026
@enxebre enxebre force-pushed the enxebre/pull-secret-docs-and-watch branch 2 times, most recently from 7065e93 to 87b1ccf Compare April 27, 2026 07:46
@openshift-ci-robot

@enxebre: This pull request references Jira Issue OCPBUGS-84528, which is invalid:

  • expected the bug to target either version "5.0." or "openshift-5.0.", but it targets "4.23" instead

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

The bug has been updated to refer to the pull request using the external bug tracker.

Details

In response to this:

Summary

  • Clarify HostedCluster.spec.pullSecret API godoc to accurately describe in-place Secret update propagation: no NodePool rollout, but on AWS/Azure Replace NodePools credentials still reach guest cluster and kubelet via the Global Pull Secret DaemonSet path
  • Add CP pull-secret watch to the globalps HCCO controller so kube-system/original-pull-secret reconciles promptly on in-place updates (the resources controller already had a watch added for openshift-config/pull-secret)
  • Overhaul global-pull-secret.md to document platform/NodePool eligibility, positive-selection scheduling model, and baseline sync behavior without additional-pull-secret
  • Add "Pull secret cycling" section to AGENTS.md so AI agents treat spec.pullSecret, management-cluster Secret data, HCCO reconciliation, and optional Global Pull Secret as one system when making changes

Test plan

  • Verify make verify passes (API generation, CRD manifests, lint)
  • Verify HCCO starts cleanly with both new watches (resources + globalps)
  • Test in-place pull secret update on AWS Replace NodePool: confirm openshift-config/pull-secret, kube-system/original-pull-secret, and kubelet config.json all update promptly without NodePool rollout
  • Test on a non-AWS/Azure platform: confirm secrets update in-cluster but no DaemonSet pods schedule (positive-selection label absent)

🤖 Generated with Claude Code

Summary by CodeRabbit

  • Documentation
  • Clarified pull secret propagation semantics and when node pool rollouts are triggered
  • Added guidance on platform and node pool eligibility for global pull secret synchronization
  • Updated global pull secret reconciliation behavior and merge mechanics

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@enxebre
Member Author

enxebre commented Apr 28, 2026

/jira refresh

@openshift-ci-robot openshift-ci-robot added jira/valid-bug Indicates that a referenced Jira bug is valid for the branch this PR is targeting. and removed jira/invalid-bug Indicates that a referenced Jira bug is invalid for the branch this PR is targeting. labels Apr 28, 2026
@openshift-ci-robot

@enxebre: This pull request references Jira Issue OCPBUGS-84528, which is valid. The bug has been moved to the POST state.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (5.0.0) matches configured target version for branch (5.0.0)
  • bug is in the state New, which is one of the valid states (NEW, ASSIGNED, POST)
Details

In response to this:

/jira refresh

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@enxebre
Member Author

enxebre commented Apr 28, 2026

/hold cancel
/test e2e-aws

@openshift-ci openshift-ci Bot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Apr 28, 2026
@cwbotbot

cwbotbot commented Apr 28, 2026

Test Results

e2e-aws

@enxebre enxebre force-pushed the enxebre/pull-secret-docs-and-watch branch 3 times, most recently from 3a0541b to 668c6dc Compare April 28, 2026 12:31
Member

@bryan-cox bryan-cox left a comment


/lgtm

@openshift-ci openshift-ci Bot added the lgtm Indicates that a PR is ready to be merged. label Apr 28, 2026
@openshift-merge-bot
Contributor

Scheduling tests matching the pipeline_run_if_changed or not excluded by pipeline_skip_if_only_changed parameters:
/test e2e-aks-4-22
/test e2e-aws-4-22
/test e2e-aks
/test e2e-aws
/test e2e-aws-upgrade-hypershift-operator
/test e2e-azure-self-managed
/test e2e-kubevirt-aws-ovn-reduced
/test e2e-v2-aws

@openshift-ci
Contributor

openshift-ci Bot commented Apr 28, 2026

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: bryan-cox, enxebre

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Details

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@enxebre
Member Author

enxebre commented Apr 28, 2026

/test e2e-aws

@hypershift-jira-solve-ci

AI Test Failure Analysis

Job: pull-ci-openshift-hypershift-main-e2e-azure-self-managed | Build: 2049113873465741312 | Cost: $4.45818325 | Failed step: hypershift-azure-run-e2e-self-managed

View full analysis report


Generated by hypershift-analyze-e2e-failure post-step using Claude claude-opus-4-6

@enxebre
Member Author

enxebre commented Apr 29, 2026

/pipline required

@enxebre
Member Author

enxebre commented Apr 29, 2026

/pipeline required

@openshift-merge-bot
Contributor

Scheduling tests matching the pipeline_run_if_changed or not excluded by pipeline_skip_if_only_changed parameters:
/test e2e-aks-4-22
/test e2e-aws-4-22
/test e2e-aks
/test e2e-aws
/test e2e-aws-upgrade-hypershift-operator
/test e2e-azure-self-managed
/test e2e-kubevirt-aws-ovn-reduced
/test e2e-v2-aws

@bryan-cox
Member

/retest

enxebre added 2 commits April 29, 2026 14:33

1. Clarify spec.pullSecret API godoc, add CP pull-secret watches to HCCO resources and globalps controllers, extract testable predicates with unit tests, and overhaul global-pull-secret docs.
2. Add subtest to EnsureGlobalPullSecret validating the full propagation chain when management-cluster pull secret data is updated in-place without triggering a NodePool rollout.
@enxebre enxebre force-pushed the enxebre/pull-secret-docs-and-watch branch from 668c6dc to 9f69bce Compare April 29, 2026 12:33
@openshift-ci openshift-ci Bot removed the lgtm Indicates that a PR is ready to be merged. label Apr 29, 2026
@openshift-ci
Contributor

openshift-ci Bot commented Apr 29, 2026

New changes are detected. LGTM label has been removed.

@enxebre
Member Author

enxebre commented Apr 29, 2026

/pipeline required

@openshift-merge-bot
Contributor

Scheduling tests matching the pipeline_run_if_changed or not excluded by pipeline_skip_if_only_changed parameters:
/test e2e-aks-4-22
/test e2e-aws-4-22
/test e2e-aks
/test e2e-aws
/test e2e-aws-upgrade-hypershift-operator
/test e2e-azure-self-managed
/test e2e-kubevirt-aws-ovn-reduced
/test e2e-v2-aws

@hypershift-jira-solve-ci

Test Failure Analysis Complete

Job Information

  • Prow Job: Red Hat Konflux / hypershift-cli-mce-50-on-pull-request
  • Build ID: hypershift-cli-mce-50-on-pull-request-f8mx8
  • Pipeline Run: Konflux UI
  • Commit: 668c6dc83c7f2d9b795ff2e3ac0b2ef8a0e72ade
  • Type: Konflux (Tekton Pipelines-as-Code) container image build (Containerfile.cli)

Test Failure Analysis

Error

Konflux PipelineRun hypershift-cli-mce-50-on-pull-request-f8mx8 failed immediately
(duration: 1 second — started 2026-04-28T13:08:09Z, completed 2026-04-28T13:08:10Z).
GitHub check run title remained "CI has Started" with no error details propagated.

Summary

This is a transient Konflux infrastructure failure, not a code or test issue. The Tekton PipelineRun was created but failed to execute — the entire pipeline completed in exactly 1 second, indicating the pipeline never ran any build tasks. All other CI checks on the same commit (unit tests, envtests across 12 Kubernetes versions, lint, verify/build, codespell) passed. The same hypershift-cli-mce-50-on-pull-request pipeline succeeded on multiple other PRs (#8324, #8342, #8355, #8357, #8366, #8368, #8369) around the same time period, confirming the code compiles correctly. The PR has since been force-pushed to a new commit (9f69bce), and all Konflux pipelines — including hypershift-cli-mce-50-on-pull-request — are now running normally on the new commit.

Root Cause

The root cause is a transient Konflux/Tekton PipelineRun scheduling failure. The specific sequence of events:

  1. Force push at 12:26:14 UTC on 4/28 triggered the first Konflux pipeline runs for the PR.
  2. Second force push at 12:31:20 UTC on 4/28 (5 minutes later) created commit 668c6dc. The Pipelines-as-Code controller's cancel-in-progress: "true" annotation caused the previous pipeline runs to be cancelled, and new ones were scheduled.
  3. PipelineRun created at 13:08:09 UTC — the new hypershift-cli-mce-50-on-pull-request-f8mx8 PipelineRun was created by the PaC controller approximately 37 minutes after the push (normal Konflux scheduling latency).
  4. PipelineRun failed at 13:08:10 UTC — the PipelineRun failed within 1 second. This duration is not enough time for any Tekton task to start, let alone clone the repository or compile Go code. The failure occurred at the Tekton controller level — the PipelineRun CRD was created but the Tekton controller failed to schedule it (possible causes: transient API server error, resource quota pressure, controller reconciliation race condition from the cancel-in-progress flow, or a PaC controller issue during the cancellation-and-recreate cycle).
  5. No error details propagated — the GitHub check run summary was never updated beyond "CI has Started", meaning the PaC controller's failure reporting did not capture the Tekton-level error reason.
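The cancel-in-progress behavior described in step 2 is driven by a Pipelines-as-Code annotation on the PipelineRun definition. A minimal sketch for illustration (resource names are hypothetical; only the annotation key is the PaC setting in question):

```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: hypershift-cli-mce-50-on-pull-request  # illustrative name
  annotations:
    # Pipelines-as-Code cancels any still-running PipelineRun for the same
    # pull request when a newer push arrives, then schedules a fresh run.
    pipelinesascode.tekton.dev/cancel-in-progress: "true"
spec:
  pipelineRef:
    name: build-cli  # illustrative
```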

This is confirmed as infrastructure-only because:

  • All code-level CI checks (Go build, unit tests, 12 envtest matrix, linting) passed on the same commit.
  • The identical pipeline succeeded on 5+ other PRs within the same 24-hour window.
  • The pipeline is now running normally on the new commit (9f69bce).
Recommendations
  1. No code changes needed — this failure is unrelated to the PR's changes (API doc clarifications, HCCO controller watches, and pull secret behavior).
  2. Wait for the current run — the hypershift-cli-mce-50-on-pull-request pipeline is currently in_progress on the new commit 9f69bce (started 2026-04-29T12:33:45Z) and should complete normally (~20 minutes typical duration).
  3. If the issue recurs, consider:
    • Avoiding rapid successive force-pushes (the two pushes 5 minutes apart on 4/28 may have triggered a race condition in PaC's cancel-in-progress logic).
    • Requesting the Konflux/PaC team investigate the crt-redhat-acm-tenant namespace for PipelineRun scheduling issues around 2026-04-28T13:08Z.
    • Using /retest to re-trigger the specific pipeline rather than force-pushing if only a CI retry is needed.
Evidence

| Evidence | Detail |
| --- | --- |
| Pipeline duration | 1 second (13:08:09Z → 13:08:10Z) — normal runs take ~20 minutes |
| Check run title | Stuck at "CI has Started" — never progressed to build output |
| Same pipeline on PR #8355 | ✅ Succeeded (12:05:44 → 12:25:29 on same day 4/28) |
| Same pipeline on PR #8357 | ✅ Succeeded (12:06:29 → 12:26:38 on 4/28) |
| Same pipeline on PR #8366 | ✅ Succeeded (09:55:16 → 10:15:14 on 4/29) |
| Same pipeline on PR #8368 | ✅ Succeeded (12:00:12 → 12:21:54 on 4/29) |
| Same pipeline on PR #8369 | ✅ Succeeded (12:05:21 → 12:27:05 on 4/29) |
| Go build (Verify check) | ✅ Passed on same commit 668c6dc |
| Unit tests (5 suites) | ✅ All passed on same commit |
| Envtests (12 K8s versions) | ✅ All passed on same commit |
| Lint | ✅ Passed on same commit |
| Force push sequence | 12:26:14 → 12:31:20 on 4/28 (5 min apart, cancel-in-progress race) |
| Current state (new commit) | hypershift-cli-mce-50-on-pull-request is IN_PROGRESS on 9f69bce |
| PaC cancel-in-progress | Enabled in .tekton/hypershift-cli-mce-50-pull-request.yaml |


Labels

- approved: Indicates a PR has been approved by an approver from all required OWNERS files.
- area/api: Indicates the PR includes changes for the API
- area/cli: Indicates the PR includes changes for CLI
- area/control-plane-operator: Indicates the PR includes changes for the control plane operator - in an OCP release
- area/documentation: Indicates the PR includes changes for documentation
- area/testing: Indicates the PR includes changes for e2e testing
- jira/valid-bug: Indicates that a referenced Jira bug is valid for the branch this PR is targeting.
- jira/valid-reference: Indicates that this PR references a valid Jira ticket of any type.
