
Merge https://github.com/kubernetes-sigs/cluster-api:v1.13.1 (16d0a65) into master#288

Open
cloud-team-rebase-bot[bot] wants to merge 600 commits into openshift:master from openshift-cloud-team:rebase-bot-master

Conversation


cloud-team-rebase-bot (bot) commented Apr 30, 2026

Summary by CodeRabbit

  • New Features

    • Node taint propagation and management
    • Disk partition layout customization for machine bootstrap
    • ClusterClass upgrade configuration and extensibility
    • MachineHealthCheck: unhealthy-condition rules
    • Topology rollout timing controls
    • Configurable certificate encryption algorithm support
  • Bug Fixes

    • Improved certificate validity/rotation handling
    • Reduced noisy control-plane init logging
  • Documentation

    • Updated many release notes (v1.10.9→v1.13.0) and contributing link
  • Chores

    • Tooling/version bumps, CRD cleanup (removed deprecated alpha versions), CI/workflow updates

k8s-ci-robot and others added 30 commits February 24, 2026 11:33
…bot/go_modules/all-go-mod-patch-and-minor-fcb864dbd7

🌱 Bump the all-go-mod-patch-and-minor group across 2 directories with 2 updates
…bot/go_modules/sigs.k8s.io/structured-merge-diff/v6-6.3.2

🌱 Bump sigs.k8s.io/structured-merge-diff/v6 from 6.3.2-0.20260122202528-d9cc6641c482 to 6.3.2
Signed-off-by: Stefan Büringer buringerst@vmware.com
…/sdk pkg) (kubernetes-sigs#13372)

* GO-2026-4394: CVE fix for go.opentelemetry.io/otel/sdk pkg

Signed-off-by: Adarsh Agrawal <adarsh.agrawal1@ibm.com>

* Updating otlp pkgs to latest

Signed-off-by: Adarsh Agrawal <adarsh.agrawal1@ibm.com>

* Updating remaining opentelemetry pkg

Signed-off-by: Adarsh Agrawal <adarsh.agrawal1@ibm.com>

---------

Signed-off-by: Adarsh Agrawal <adarsh.agrawal1@ibm.com>
…-limiting-beta

✨ Promote ReconcileRateLimiting to beta (enabled per default)
…toscaler-v1.35.0

🌱 Bump autoscaler version used for testing to v1.35.0
Signed-off-by: Troy Connor <troy0820@users.noreply.github.com>
…e-cert-manager-1.19.4

 🌱 Bump cert-manager v1.19.4
Signed-off-by: Stefan Büringer buringerst@vmware.com
…panic

🐛 Fix panic in Cluster conversion
Signed-off-by: Stefan Büringer buringerst@vmware.com
🐛 e2e: only retry creating objects that failed
Signed-off-by: Stefan Büringer buringerst@vmware.com
🌱 Bump golang.org/x/net to v0.51 to fix CVE
Signed-off-by: Stefan Büringer buringerst@vmware.com
…rbosity

🌱 Remove stack traces from ClusterCache errors
Bumps the all-github-actions group with 1 update: [actions/setup-go](https://github.com/actions/setup-go).


Updates `actions/setup-go` from 6.2.0 to 6.3.0
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](actions/setup-go@7a3fe6c...4b73464)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-version: 6.3.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: all-github-actions
...

Signed-off-by: dependabot[bot] <support@github.com>
… 8 updates

Bumps the all-go-mod-patch-and-minor group with 3 updates in the / directory: [k8s.io/api](https://github.com/kubernetes/api), [k8s.io/apiextensions-apiserver](https://github.com/kubernetes/apiextensions-apiserver) and [k8s.io/cluster-bootstrap](https://github.com/kubernetes/cluster-bootstrap).
Bumps the all-go-mod-patch-and-minor group with 3 updates in the /hack/tools directory: [k8s.io/api](https://github.com/kubernetes/api), [k8s.io/apiextensions-apiserver](https://github.com/kubernetes/apiextensions-apiserver) and [google.golang.org/api](https://github.com/googleapis/google-api-go-client).
Bumps the all-go-mod-patch-and-minor group with 2 updates in the /test directory: [k8s.io/api](https://github.com/kubernetes/api) and [k8s.io/apiextensions-apiserver](https://github.com/kubernetes/apiextensions-apiserver).


Updates `k8s.io/api` from 0.35.1 to 0.35.2
- [Commits](kubernetes/api@v0.35.1...v0.35.2)

Updates `k8s.io/apiextensions-apiserver` from 0.35.1 to 0.35.2
- [Release notes](https://github.com/kubernetes/apiextensions-apiserver/releases)
- [Commits](kubernetes/apiextensions-apiserver@v0.35.1...v0.35.2)

Updates `k8s.io/apimachinery` from 0.35.1 to 0.35.2
- [Commits](kubernetes/apimachinery@v0.35.1...v0.35.2)

Updates `k8s.io/apiserver` from 0.35.1 to 0.35.2
- [Commits](kubernetes/apiserver@v0.35.1...v0.35.2)

Updates `k8s.io/client-go` from 0.35.1 to 0.35.2
- [Changelog](https://github.com/kubernetes/client-go/blob/master/CHANGELOG.md)
- [Commits](kubernetes/client-go@v0.35.1...v0.35.2)

Updates `k8s.io/cluster-bootstrap` from 0.35.1 to 0.35.2
- [Commits](kubernetes/cluster-bootstrap@v0.35.1...v0.35.2)

Updates `k8s.io/component-base` from 0.35.1 to 0.35.2
- [Commits](kubernetes/component-base@v0.35.1...v0.35.2)

Updates `k8s.io/api` from 0.35.1 to 0.35.2
- [Commits](kubernetes/api@v0.35.1...v0.35.2)

Updates `k8s.io/apiextensions-apiserver` from 0.35.1 to 0.35.2
- [Release notes](https://github.com/kubernetes/apiextensions-apiserver/releases)
- [Commits](kubernetes/apiextensions-apiserver@v0.35.1...v0.35.2)

Updates `k8s.io/apimachinery` from 0.35.1 to 0.35.2
- [Commits](kubernetes/apimachinery@v0.35.1...v0.35.2)

Updates `k8s.io/client-go` from 0.35.1 to 0.35.2
- [Changelog](https://github.com/kubernetes/client-go/blob/master/CHANGELOG.md)
- [Commits](kubernetes/client-go@v0.35.1...v0.35.2)

Updates `google.golang.org/api` from 0.268.0 to 0.269.0
- [Release notes](https://github.com/googleapis/google-api-go-client/releases)
- [Changelog](https://github.com/googleapis/google-api-go-client/blob/main/CHANGES.md)
- [Commits](googleapis/google-api-go-client@v0.268.0...v0.269.0)

Updates `k8s.io/api` from 0.35.1 to 0.35.2
- [Commits](kubernetes/api@v0.35.1...v0.35.2)

Updates `k8s.io/apiextensions-apiserver` from 0.35.1 to 0.35.2
- [Release notes](https://github.com/kubernetes/apiextensions-apiserver/releases)
- [Commits](kubernetes/apiextensions-apiserver@v0.35.1...v0.35.2)

Updates `k8s.io/apimachinery` from 0.35.1 to 0.35.2
- [Commits](kubernetes/apimachinery@v0.35.1...v0.35.2)

Updates `k8s.io/apiserver` from 0.35.1 to 0.35.2
- [Commits](kubernetes/apiserver@v0.35.1...v0.35.2)

Updates `k8s.io/client-go` from 0.35.1 to 0.35.2
- [Changelog](https://github.com/kubernetes/client-go/blob/master/CHANGELOG.md)
- [Commits](kubernetes/client-go@v0.35.1...v0.35.2)

Updates `k8s.io/component-base` from 0.35.1 to 0.35.2
- [Commits](kubernetes/component-base@v0.35.1...v0.35.2)

---
updated-dependencies:
- dependency-name: k8s.io/api
  dependency-version: 0.35.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all-go-mod-patch-and-minor
- dependency-name: k8s.io/apiextensions-apiserver
  dependency-version: 0.35.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all-go-mod-patch-and-minor
- dependency-name: k8s.io/apimachinery
  dependency-version: 0.35.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all-go-mod-patch-and-minor
- dependency-name: k8s.io/apiserver
  dependency-version: 0.35.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all-go-mod-patch-and-minor
- dependency-name: k8s.io/client-go
  dependency-version: 0.35.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all-go-mod-patch-and-minor
- dependency-name: k8s.io/cluster-bootstrap
  dependency-version: 0.35.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all-go-mod-patch-and-minor
- dependency-name: k8s.io/component-base
  dependency-version: 0.35.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all-go-mod-patch-and-minor
- dependency-name: k8s.io/api
  dependency-version: 0.35.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all-go-mod-patch-and-minor
- dependency-name: k8s.io/apiextensions-apiserver
  dependency-version: 0.35.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all-go-mod-patch-and-minor
- dependency-name: k8s.io/apimachinery
  dependency-version: 0.35.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all-go-mod-patch-and-minor
- dependency-name: k8s.io/client-go
  dependency-version: 0.35.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all-go-mod-patch-and-minor
- dependency-name: google.golang.org/api
  dependency-version: 0.269.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: all-go-mod-patch-and-minor
- dependency-name: k8s.io/api
  dependency-version: 0.35.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all-go-mod-patch-and-minor
- dependency-name: k8s.io/apiextensions-apiserver
  dependency-version: 0.35.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all-go-mod-patch-and-minor
- dependency-name: k8s.io/apimachinery
  dependency-version: 0.35.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all-go-mod-patch-and-minor
- dependency-name: k8s.io/apiserver
  dependency-version: 0.35.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all-go-mod-patch-and-minor
- dependency-name: k8s.io/client-go
  dependency-version: 0.35.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all-go-mod-patch-and-minor
- dependency-name: k8s.io/component-base
  dependency-version: 0.35.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: all-go-mod-patch-and-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
…bot/github_actions/all-github-actions-2c6e677ddc

🌱 Bump actions/setup-go from 6.2.0 to 6.3.0 in the all-github-actions group
…bot/go_modules/all-go-mod-patch-and-minor-2fc94a814f

🌱 Bump the all-go-mod-patch-and-minor group across 3 directories with 8 updates
…es-status-addresses-even-further

🌱  api: relax validation for Machine .status.addresses to maximum of 256 instead of 128 items
* Postpone date when we stop serving v1beta1

* Address comments
Signed-off-by: Stefan Büringer buringerst@vmware.com
* Add rolloutAfter to cluster.spec.topology

* Address comments
…eout-unset

🌱 Avoid unsetting nodeDeletionTimeoutSeconds during Machine deletion
apiserver

Signed-off-by: Stefan Büringer buringerst@vmware.com
openshift-ci (bot) requested a review from RadekManak, Apr 30, 2026 12:32

openshift-ci (bot) commented Apr 30, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: cloud-team-rebase-bot[bot]
Once this PR has been reviewed and has the lgtm label, please assign mdbooth for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

damdo (Member) commented Apr 30, 2026

/hold

/ok-to-test

openshift-ci (bot) added the do-not-merge/hold label (indicates that a PR should not merge because someone has issued a /hold command) and the ok-to-test label (indicates a non-member PR verified by an org member that is safe to test), and removed the needs-ok-to-test label (indicates a PR that requires an org member to verify it is safe to test), Apr 30, 2026
damdo (Member) commented Apr 30, 2026

/retest

cloud-team-rebase-bot and others added 20 commits May 4, 2026 13:07
Squash follow-up OWNERS sync into the initial OpenShift-specific carry since it
updates the same initial ownership surface.

# Conflicts:
#	.github/workflows/pr-dependabot.yaml
#	.github/workflows/pr-golangci-lint.yaml
#	.github/workflows/pr-verify.yaml
#	OWNERS_ALIASES
Squash the OWNERS-only carries into a single update to keep ownership churn in
one focused commit.
Squash adjacent changes that iterate on OpenShift manifest tooling and
metadata sync behavior in the same Makefile-driven flow.
Squash adjacent Dockerfile updates that refine the 4.21 image carry and
manager binary naming.
Squash adjacent toolchain updates touching openshift/tools so kustomize
alignment and IPAM pinning are applied together.
…olicy: Ignore

Add functions to set the failurePolicy to Ignore for both mutating and validating webhooks handling IPAM resources.

During bootstrap, the bootstrap node's Kube API Server receives IPAM create requests but is unable
to reach the webhooks in the Cluster API namespace.

This is because the bootstrap node doesn't have a route to the pods as it doesn't have access to the pod networks.
If failurePolicy is set to Fail, the KAS cannot reach the webhook endpoints and the request fails, preventing creation of IPAddress and IPAddressClaim resources.

This causes a chicken-and-egg problem as it prevents IPAM provisioning
for the workers which won't start without their IP addresses being allocated.

Setting failurePolicy to Ignore allows the resources to be created even when the webhooks are
unreachable during bootstrap, matching what Machine API also does.

More context: https://redhat-internal.slack.com/archives/C0A2M43S199/p1765540108488539
Squash ART image consistency updates into a single carry commit.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Squash adjacent carries that iterate over OpenShift manifests generation,
IPAM kustomization, and Dockerfile image consistency.
… upstream rebase

Squash the post-rebase regeneration steps into a single carry commit so the
PR keeps one coherent update for generated manifests and dependency vendoring.
cloud-team-bot (bot) force-pushed the rebase-bot-master branch from b87308b to 5c92e88, May 4, 2026 13:09

coderabbitai (bot) left a comment


Actionable comments posted: 7

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
config/crd/bases/addons.cluster.x-k8s.io_clusterresourcesetbindings.yaml (1)

18-237: ⚠️ Potential issue | 🟠 Major

Verify that v1alpha3 and v1alpha4 were safely migrated before removing them from spec.versions.

This change removes v1alpha3 and v1alpha4 from the CRD versions. According to Kubernetes CRD versioning requirements, a version cannot be safely removed from spec.versions while it still appears in the live cluster's status.storedVersions. Before removing these versions, confirm that:

  1. All existing ClusterResourceSetBinding objects stored in these versions have been migrated to the new storage version (v1beta2)
  2. v1alpha3 and v1alpha4 have been removed from the CRD's status.storedVersions on all affected clusters

If this migration was not completed before applying this manifest, the CRD update will fail on upgraded clusters.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@config/crd/bases/addons.cluster.x-k8s.io_clusterresourcesetbindings.yaml`
around lines 18 - 237, The CRD removal of v1alpha3 and v1alpha4 from
spec.versions for ClusterResourceSetBinding can fail if stored objects still
exist in those versions; verify migration by ensuring all
ClusterResourceSetBinding objects previously stored as v1alpha3/v1alpha4 have
been migrated to the storage version v1beta2 and that v1alpha3 and v1alpha4 no
longer appear in the CRD's status.storedVersions on every cluster before
removing them from spec.versions; if you find remaining storedVersions or
resources, perform the Kubernetes CRD version migration (or re-add the versions
temporarily) so storedVersions is cleared, confirm spec.required and the v1beta2
schema supports the migrated objects, then remove v1alpha3/v1alpha4 from
spec.versions only after status.storedVersions no longer lists them.
api/bootstrap/kubeadm/v1beta1/kubeadmconfig_types.go (1)

777-806: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Reject contradictory layout/diskLayout combinations.

diskLayout adds a second way to define partition layout, but layout is still a required boolean on the same struct. As written, layout: false plus a non-empty diskLayout can still be admitted, so the boolean is effectively ignored by the renderer and the persisted spec becomes misleading. Please add schema/webhook validation requiring layout=true when diskLayout is set, or make the two fields conditionally exclusive.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@api/bootstrap/kubeadm/v1beta1/kubeadmconfig_types.go` around lines 777 - 806,
The Partition struct allows contradictory definitions (Layout bool vs DiskLayout
slice); add validation so DiskLayout may only be set when Layout is true.
Implement this by adding a kubebuilder XValidation on the Partition type (or
immediately above DiskLayout) such as an XValidation rule that enforces "if
diskLayout is present then layout == true" (e.g. rule like
"!has(self.diskLayout) || self.layout == true" with a clear message), or if you
prefer, implement an equivalent validating webhook that rejects Partition
entries where DiskLayout is non-empty but Layout is false; reference the
Partition struct and its fields Layout and DiskLayout when making the change.
controllers/clustercache/cluster_cache.go (1)

322-328: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Wire options.ClusterFilter into the cache instance.

Reconcile reads cc.clusterFilter, but this constructor never copies options.ClusterFilter into the struct, so the new filter is always nil and filtered clusters will still be connected.

Suggested fix
 	cc := &clusterCache{
 		client:                mgr.GetClient(),
 		clusterAccessorConfig: buildClusterAccessorConfig(mgr.GetScheme(), options, controllerPodMetadata),
 		clusterAccessors:      make(map[client.ObjectKey]*clusterAccessor),
 		cacheCtx:              cacheCtx,
 		cacheCtxCancel:        cacheCtxCancel,
+		clusterFilter:         options.ClusterFilter,
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@controllers/clustercache/cluster_cache.go` around lines 322 - 328, The
clusterCache constructor is not setting the clusterFilter field, so
cc.clusterFilter is nil when Reconcile reads it; update the struct literal that
creates the clusterCache (the cc := &clusterCache{...} block) to include
clusterFilter: options.ClusterFilter so the newly created clusterCache carries
the provided filter into Reconcile.
🧹 Nitpick comments (3)
CHANGELOG/v1.13.0-rc.0.md (3)

236-236: ⚡ Quick win

Maintain consistent spelling: use "Optimize" instead of "Optimise".

The document consistently uses American spelling "optimize" elsewhere (lines 65, 73, 82), but line 236 uses British spelling "Optimise." As per coding guidelines, maintain consistency within a single document.

📝 Suggested fix
-- util: Optimise patch calls (`#13367`)
+- util: Optimize patch calls (`#13367`)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@CHANGELOG/v1.13.0-rc.0.md` at line 236, Change the British spelling
"Optimise" to American "Optimize" in the changelog entry "- util: Optimise patch
calls (`#13367`)" so it matches the document's existing use of "optimize"; update
the string exactly to "- util: Optimize patch calls (`#13367`)" in the
CHANGELOG/v1.13.0-rc.0.md content.

37-38: ⚡ Quick win

Fix subject-verb agreement.

"Provider" should be plural "Providers" to match the verb "should."

📝 Suggested fix
-    - Reminder: Provider should start implementing the v1beta2 contract ASAP.
+    - Reminder: Providers should start implementing the v1beta2 contract ASAP.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@CHANGELOG/v1.13.0-rc.0.md` around lines 37 - 38, Change the noun in the
sentence "Reminder: Provider should start implementing the v1beta2 contract
ASAP." to plural so it agrees with the verb; replace "Provider" with "Providers"
in the CHANGELOG entry so the line reads "Reminder: Providers should start
implementing the v1beta2 contract ASAP."

13-13: ⚡ Quick win

Use singular "performance" instead of "performances".

In this technical context, "performance" is typically used as an uncountable noun.

📝 Suggested fix
-CAPI v1.13 is a release focused on stability, reliability and performances:
+CAPI v1.13 is a release focused on stability, reliability and performance:
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@CHANGELOG/v1.13.0-rc.0.md` at line 13, Replace the plural word "performances"
in the sentence "CAPI v1.13 is a release focused on stability, reliability and
performances:" with the uncountable singular "performance" so it reads "CAPI
v1.13 is a release focused on stability, reliability and performance:"; update
only that token in the line containing "stability, reliability and
performances".
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.github/workflows/weekly-security-scan.yaml:
- Line 16: The workflow's matrix branches list is invalid (it lists main and
release-1.*); update the branches array used in the weekly-security-scan
workflow so actions/checkout uses existing repo branches (replace "branch: [
main, release-1.12, release-1.11 ]" with a valid list such as "branch: [ master
]" or the actual release branch names), ensuring the matrix contains only real
branch names so the job runs successfully.

In `@api/controlplane/kubeadm/v1beta1/conversion.go`:
- Around line 420-427: The conversion functions are appending to destination
taint slices (e.g., out.Spec.Taints, out.Taints) without clearing them, causing
duplicates when objects are reused; change each conversion that iterates over
in.Taints (and the reverse/template variants) to allocate a slice of exact
length (len(in.Taints)), assign it to the destination (out.Spec.Taints =
make(..., len(in.Taints))), and populate entries by index using
clusterv1.MachineTaint{...} instead of append; apply the same pattern for all
occurrences noted (the blocks around the in.Taints iterations and their
reverse/template counterparts).

In `@api/runtime/hooks/v1alpha1/lifecyclehooks_types.go`:
- Around line 22-23: Revert the import change so the embedded Cluster schema in
the v1alpha1 hook contract remains the original core v1beta1 type: replace the
current import "sigs.k8s.io/cluster-api/api/core/v1beta2" used as clusterv1 with
the original "sigs.k8s.io/cluster-api/api/core/v1beta1" and ensure every struct
in this file that embeds or references clusterv1.Cluster (the v1alpha1 request
types) continues to use the v1beta1 shape; if you actually need v1beta2
semantics, instead create a new hook version (e.g., v1beta1 hook API) and
perform explicit conversion between versions rather than changing the v1alpha1
contract in-place.

In `@bootstrap/util/configowner_test.go`:
- Around line 66-68: The test currently discards the error returned by
clusterv1.AddToScheme, which can hide scheme registration failures; update both
places where runtime.NewScheme() and clusterv1.AddToScheme(...) are used to
capture the returned error (e.g., err := clusterv1.AddToScheme(scheme)) and
explicitly fail the test if err != nil (use t.Fatalf or a test assertion helper
like require.NoError) before building the fake client
(fake.NewClientBuilder().WithScheme(scheme)...), so any scheme registration
failure surfaces immediately.

In `@CHANGELOG/v1.13.0-rc.0.md`:
- Line 186: Replace the malformed changelog line "e2e: 0 in e2e tests" with a
clear, complete description for PR `#13429`: locate the entry referencing PR
`#13429` and update it to a concise sentence like "e2e: <brief summary of the
fix/feature introduced by PR `#13429`> (PR `#13429`)" so it explains what changed in
e2e tests and includes the PR number for traceability; confirm the wording
matches the PR title/description and preserves the changelog format.

In `@config/crd/bases/cluster.x-k8s.io_machines.yaml`:
- Around line 293-302: The spec.taints[].key schema currently allows a name
segment longer than 63 chars because maxLength: 317 was left without the
split-length enforcement; restore the original qualified-name validation by
updating the key schema (the pattern and length checks for spec.taints[].key) so
the optional DNS subdomain prefix is limited to 253 chars and the name segment
is limited to 63 chars (i.e. reinstate the regex that enforces the name segment
max 63 and prefix max 253 rather than relying only on a 317 overall max),
update/remove the incorrect maxLength if needed to match that regex, and then
regenerate the CRDs so the corrected validation is applied.

In `@controllers/clustercache/cluster_cache.go`:
- Around line 469-478: When a cluster is filtered out we currently disconnect
and delete the accessor (getClusterAccessor, accessor.Disconnect,
deleteClusterAccessor, cleanupClusterSourcesForCluster) then return early, which
prevents notifying consumers; move or add a call to
cc.sendEventsToClusterSources (or invoke the existing method that enqueues a
handled→filtered-out disconnect event for GetClusterSource consumers)
immediately after Disconnect and before
deleteClusterAccessor/cleanupClusterSourcesForCluster (or at least before
returning) so the disconnect/requeue is sent; ensure you reference clusterKey
and the same transition payload used elsewhere so consumers see the
handled→filtered-out event.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: a65565de-5e44-46d8-b1b1-bc5a111226ef

📥 Commits

Reviewing files that changed from the base of the PR and between 656dec8 and 5c92e88.

⛔ Files ignored due to path filters (36)
  • api/bootstrap/kubeadm/v1beta1/zz_generated.conversion.go is excluded by !**/zz_generated*
  • api/bootstrap/kubeadm/v1beta1/zz_generated.deepcopy.go is excluded by !**/zz_generated*
  • api/bootstrap/kubeadm/v1beta2/zz_generated.deepcopy.go is excluded by !**/zz_generated*
  • api/controlplane/kubeadm/v1beta1/zz_generated.conversion.go is excluded by !**/zz_generated*
  • api/controlplane/kubeadm/v1beta1/zz_generated.deepcopy.go is excluded by !**/zz_generated*
  • api/controlplane/kubeadm/v1beta2/zz_generated.deepcopy.go is excluded by !**/zz_generated*
  • api/core/v1beta1/zz_generated.conversion.go is excluded by !**/zz_generated*
  • api/core/v1beta1/zz_generated.deepcopy.go is excluded by !**/zz_generated*
  • api/core/v1beta1/zz_generated.openapi.go is excluded by !**/zz_generated*
  • api/core/v1beta2/zz_generated.deepcopy.go is excluded by !**/zz_generated*
  • api/core/v1beta2/zz_generated.openapi.go is excluded by !**/zz_generated*
  • api/runtime/hooks/v1alpha1/zz_generated.deepcopy.go is excluded by !**/zz_generated*
  • api/runtime/hooks/v1alpha1/zz_generated.openapi.go is excluded by !**/zz_generated*
  • docs/book/src/images/clusterclass-crd-relationships.svg is excluded by !**/*.svg
  • docs/book/src/images/kubeadm-control-plane-machines-resources.png is excluded by !**/*.png
  • docs/book/src/images/worker-machines-resources.png is excluded by !**/*.png
  • go.sum is excluded by !**/*.sum
  • hack/tools/go.sum is excluded by !**/*.sum
  • hack/tools/vendor/cloud.google.com/go/auth/CHANGES.md is excluded by !**/vendor/**
  • hack/tools/vendor/cloud.google.com/go/auth/credentials/detect.go is excluded by !**/vendor/**
  • hack/tools/vendor/cloud.google.com/go/auth/credentials/filetypes.go is excluded by !**/vendor/**
  • hack/tools/vendor/cloud.google.com/go/auth/credentials/internal/gdch/gdch.go is excluded by !**/vendor/**
  • hack/tools/vendor/cloud.google.com/go/auth/grpctransport/grpctransport.go is excluded by !**/vendor/**
  • hack/tools/vendor/cloud.google.com/go/auth/httptransport/httptransport.go is excluded by !**/vendor/**
  • hack/tools/vendor/cloud.google.com/go/auth/httptransport/transport.go is excluded by !**/vendor/**
  • hack/tools/vendor/cloud.google.com/go/auth/internal/credsfile/credsfile.go is excluded by !**/vendor/**
  • hack/tools/vendor/cloud.google.com/go/auth/internal/credsfile/filetype.go is excluded by !**/vendor/**
  • hack/tools/vendor/cloud.google.com/go/auth/internal/credsfile/parse.go is excluded by !**/vendor/**
  • hack/tools/vendor/cloud.google.com/go/auth/internal/internal.go is excluded by !**/vendor/**
  • hack/tools/vendor/cloud.google.com/go/auth/internal/jwt/jwt.go is excluded by !**/vendor/**
  • hack/tools/vendor/cloud.google.com/go/auth/internal/transport/transport.go is excluded by !**/vendor/**
  • hack/tools/vendor/cloud.google.com/go/auth/internal/version.go is excluded by !**/vendor/**
  • hack/tools/vendor/cloud.google.com/go/iam/.repo-metadata.json is excluded by !**/vendor/**
  • hack/tools/vendor/cloud.google.com/go/iam/CHANGES.md is excluded by !**/vendor/**
  • hack/tools/vendor/cloud.google.com/go/iam/README.md is excluded by !**/vendor/**
  • hack/tools/vendor/cloud.google.com/go/iam/apiv1/iampb/iam_policy.pb.go is excluded by !**/*.pb.go, !**/vendor/**
📒 Files selected for processing (264)
  • .dockerignore
  • .github/workflows/pr-gh-workflow-approve.yaml
  • .github/workflows/pr-md-link-check.yaml
  • .github/workflows/release.yaml
  • .github/workflows/weekly-md-link-check.yaml
  • .github/workflows/weekly-security-scan.yaml
  • .github/workflows/weekly-test-release.yaml
  • .golangci-kal.yml
  • .golangci.yml
  • .trivyignore
  • CHANGELOG/v1.10.10.md
  • CHANGELOG/v1.10.9.md
  • CHANGELOG/v1.11.4.md
  • CHANGELOG/v1.11.5.md
  • CHANGELOG/v1.11.6.md
  • CHANGELOG/v1.11.7.md
  • CHANGELOG/v1.12.0-rc.1.md
  • CHANGELOG/v1.12.0.md
  • CHANGELOG/v1.12.1.md
  • CHANGELOG/v1.12.2.md
  • CHANGELOG/v1.12.3.md
  • CHANGELOG/v1.12.4.md
  • CHANGELOG/v1.13.0-beta.0.md
  • CHANGELOG/v1.13.0-beta.1.md
  • CHANGELOG/v1.13.0-rc.0.md
  • CONTRIBUTING.md
  • Dockerfile
  • Makefile
  • Tiltfile
  • api/bootstrap/kubeadm/v1beta1/conversion.go
  • api/bootstrap/kubeadm/v1beta1/kubeadm_types.go
  • api/bootstrap/kubeadm/v1beta1/kubeadmconfig_types.go
  • api/bootstrap/kubeadm/v1beta2/kubeadmconfig_types.go
  • api/controlplane/kubeadm/v1beta1/conversion.go
  • api/controlplane/kubeadm/v1beta1/kubeadm_control_plane_types.go
  • api/controlplane/kubeadm/v1beta1/kubeadmcontrolplanetemplate_types.go
  • api/controlplane/kubeadm/v1beta2/kubeadm_control_plane_types.go
  • api/controlplane/kubeadm/v1beta2/kubeadmcontrolplanetemplate_types.go
  • api/core/v1beta1/cluster_types.go
  • api/core/v1beta1/clusterclass_types.go
  • api/core/v1beta1/common_types.go
  • api/core/v1beta1/conversion.go
  • api/core/v1beta1/conversion_test.go
  • api/core/v1beta1/machine_types.go
  • api/core/v1beta1/machinehealthcheck_types.go
  • api/core/v1beta2/cluster_types.go
  • api/core/v1beta2/clusterclass_types.go
  • api/core/v1beta2/common_types.go
  • api/core/v1beta2/condition_types.go
  • api/core/v1beta2/machine_types.go
  • api/ipam/v1alpha1/conversion.go
  • api/runtime/hooks/v1alpha1/common_types.go
  • api/runtime/hooks/v1alpha1/lifecyclehooks_types.go
  • api/runtime/hooks/v1alpha1/topologymutation_types.go
  • api/runtime/hooks/v1alpha1/topologymutation_variable_types.go
  • bootstrap/kubeadm/config/crd/bases/bootstrap.cluster.x-k8s.io_kubeadmconfigs.yaml
  • bootstrap/kubeadm/config/crd/bases/bootstrap.cluster.x-k8s.io_kubeadmconfigtemplates.yaml
  • bootstrap/kubeadm/config/crd/patches/webhook_in_kubeadmconfigs.yaml
  • bootstrap/kubeadm/config/crd/patches/webhook_in_kubeadmconfigtemplates.yaml
  • bootstrap/kubeadm/config/manager/manager.yaml
  • bootstrap/kubeadm/config/webhook/manifests.yaml
  • bootstrap/kubeadm/internal/cloudinit/cloudinit_test.go
  • bootstrap/kubeadm/internal/cloudinit/disk_setup.go
  • bootstrap/kubeadm/internal/cloudinit/utils.go
  • bootstrap/kubeadm/internal/controllers/kubeadmconfig_controller.go
  • bootstrap/kubeadm/internal/controllers/suite_test.go
  • bootstrap/kubeadm/internal/locking/control_plane_init_mutex.go
  • bootstrap/kubeadm/internal/setup/setup.go
  • bootstrap/kubeadm/internal/webhooks/kubeadmconfig.go
  • bootstrap/kubeadm/internal/webhooks/kubeadmconfig_test.go
  • bootstrap/kubeadm/internal/webhooks/kubeadmconfigtemplate.go
  • bootstrap/kubeadm/internal/webhooks/kubeadmconfigtemplate_test.go
  • bootstrap/kubeadm/main.go
  • bootstrap/util/configowner_test.go
  • bootstrap/util/suite_test.go
  • cmd/clusterctl/Dockerfile
  • cmd/clusterctl/client/cluster/cert_manager.go
  • cmd/clusterctl/client/cluster/cert_manager_test.go
  • cmd/clusterctl/client/cluster/mover.go
  • cmd/clusterctl/client/cluster/template.go
  • cmd/clusterctl/client/cluster/template_test.go
  • cmd/clusterctl/client/cluster/upgrader.go
  • cmd/clusterctl/client/cluster/upgrader_test.go
  • cmd/clusterctl/client/config/imagemeta_client.go
  • cmd/clusterctl/client/config/imagemeta_client_test.go
  • cmd/clusterctl/client/config/providers_client.go
  • cmd/clusterctl/client/config_test.go
  • cmd/clusterctl/client/repository/repository_github.go
  • cmd/clusterctl/client/repository/repository_github_test.go
  • cmd/clusterctl/client/upgrade.go
  • cmd/clusterctl/cmd/config_repositories_test.go
  • cmd/clusterctl/cmd/describe_cluster.go
  • cmd/clusterctl/cmd/upgrade_apply.go
  • cmd/clusterctl/cmd/version_checker.go
  • cmd/clusterctl/config/crd/bases/clusterctl.cluster.x-k8s.io_metadata.yaml
  • cmd/clusterctl/config/crd/bases/clusterctl.cluster.x-k8s.io_providers.yaml
  • cmd/clusterctl/config/manifest/clusterctl-api.yaml
  • cmd/clusterctl/hack/create-local-repository.py
  • cmd/clusterctl/internal/test/fake_github.go
  • cmd/clusterctl/internal/test/fake_reader.go
  • config/crd/bases/addons.cluster.x-k8s.io_clusterresourcesetbindings.yaml
  • config/crd/bases/addons.cluster.x-k8s.io_clusterresourcesets.yaml
  • config/crd/bases/cluster.x-k8s.io_clusterclasses.yaml
  • config/crd/bases/cluster.x-k8s.io_clusters.yaml
  • config/crd/bases/cluster.x-k8s.io_machinedeployments.yaml
  • config/crd/bases/cluster.x-k8s.io_machinedrainrules.yaml
  • config/crd/bases/cluster.x-k8s.io_machinehealthchecks.yaml
  • config/crd/bases/cluster.x-k8s.io_machinepools.yaml
  • config/crd/bases/cluster.x-k8s.io_machines.yaml
  • config/crd/bases/cluster.x-k8s.io_machinesets.yaml
  • config/crd/bases/ipam.cluster.x-k8s.io_ipaddressclaims.yaml
  • config/crd/bases/ipam.cluster.x-k8s.io_ipaddresses.yaml
  • config/crd/bases/runtime.cluster.x-k8s.io_extensionconfigs.yaml
  • config/crd/patches/webhook_in_clusterclasses.yaml
  • config/crd/patches/webhook_in_clusterresourcesetbindings.yaml
  • config/crd/patches/webhook_in_clusterresourcesets.yaml
  • config/crd/patches/webhook_in_clusters.yaml
  • config/crd/patches/webhook_in_extensionconfigs.yaml
  • config/crd/patches/webhook_in_ipaddressclaims.yaml
  • config/crd/patches/webhook_in_ipaddresses.yaml
  • config/crd/patches/webhook_in_machinedeployments.yaml
  • config/crd/patches/webhook_in_machinedrainrules.yaml
  • config/crd/patches/webhook_in_machinehealthchecks.yaml
  • config/crd/patches/webhook_in_machinepools.yaml
  • config/crd/patches/webhook_in_machines.yaml
  • config/crd/patches/webhook_in_machinesets.yaml
  • config/manager/manager.yaml
  • config/metrics/crd-metrics-config.yaml
  • config/webhook/manifests.yaml
  • controllers/clustercache/cluster_accessor.go
  • controllers/clustercache/cluster_accessor_client.go
  • controllers/clustercache/cluster_accessor_test.go
  • controllers/clustercache/cluster_cache.go
  • controllers/clustercache/cluster_cache_test.go
  • controllers/crdmigrator/crd_migrator.go
  • controllers/crdmigrator/test/t1/crd/test.cluster.x-k8s.io_testclusters.yaml
  • controllers/crdmigrator/test/t2/crd/test.cluster.x-k8s.io_testclusters.yaml
  • controllers/crdmigrator/test/t3/crd/test.cluster.x-k8s.io_testclusters.yaml
  • controllers/crdmigrator/test/t4/crd/test.cluster.x-k8s.io_testclusters.yaml
  • controlplane/kubeadm/config/crd/bases/controlplane.cluster.x-k8s.io_kubeadmcontrolplanes.yaml
  • controlplane/kubeadm/config/crd/bases/controlplane.cluster.x-k8s.io_kubeadmcontrolplanetemplates.yaml
  • controlplane/kubeadm/config/crd/patches/webhook_in_kubeadmcontrolplanes.yaml
  • controlplane/kubeadm/config/crd/patches/webhook_in_kubeadmcontrolplanetemplates.yaml
  • controlplane/kubeadm/config/manager/manager.yaml
  • controlplane/kubeadm/config/webhook/manifests.yaml
  • controlplane/kubeadm/internal/cluster.go
  • controlplane/kubeadm/internal/cluster_test.go
  • controlplane/kubeadm/internal/clustercache_utils.go
  • controlplane/kubeadm/internal/control_plane.go
  • controlplane/kubeadm/internal/control_plane_test.go
  • controlplane/kubeadm/internal/controllers/controller.go
  • controlplane/kubeadm/internal/controllers/controller_test.go
  • controlplane/kubeadm/internal/controllers/fakes_test.go
  • controlplane/kubeadm/internal/controllers/helpers.go
  • controlplane/kubeadm/internal/controllers/inplace.go
  • controlplane/kubeadm/internal/controllers/inplace_canupdatemachine.go
  • controlplane/kubeadm/internal/controllers/inplace_trigger.go
  • controlplane/kubeadm/internal/controllers/remediation.go
  • controlplane/kubeadm/internal/controllers/remediation_test.go
  • controlplane/kubeadm/internal/controllers/scale.go
  • controlplane/kubeadm/internal/controllers/scale_test.go
  • controlplane/kubeadm/internal/controllers/status.go
  • controlplane/kubeadm/internal/controllers/status_test.go
  • controlplane/kubeadm/internal/controllers/suite_test.go
  • controlplane/kubeadm/internal/controllers/update_test.go
  • controlplane/kubeadm/internal/desiredstate/desired_state.go
  • controlplane/kubeadm/internal/desiredstate/desired_state_test.go
  • controlplane/kubeadm/internal/etcd/etcd_test.go
  • controlplane/kubeadm/internal/etcd/fake/client.go
  • controlplane/kubeadm/internal/setup/setup.go
  • controlplane/kubeadm/internal/suite_test.go
  • controlplane/kubeadm/internal/webhooks/kubeadmcontrolplane.go
  • controlplane/kubeadm/internal/webhooks/kubeadmcontrolplane_test.go
  • controlplane/kubeadm/internal/webhooks/kubeadmcontrolplanetemplate.go
  • controlplane/kubeadm/internal/webhooks/scale.go
  • controlplane/kubeadm/internal/workload_cluster.go
  • controlplane/kubeadm/internal/workload_cluster_conditions.go
  • controlplane/kubeadm/internal/workload_cluster_conditions_test.go
  • controlplane/kubeadm/internal/workload_cluster_coredns.go
  • controlplane/kubeadm/internal/workload_cluster_etcd.go
  • controlplane/kubeadm/internal/workload_cluster_etcd_test.go
  • controlplane/kubeadm/internal/workload_cluster_test.go
  • controlplane/kubeadm/main.go
  • docs/book/src/SUMMARY.md
  • docs/book/src/clusterctl/configuration.md
  • docs/book/src/developer/core/logging.md
  • docs/book/src/developer/core/tilt.md
  • docs/book/src/developer/providers/contracts/bootstrap-config.md
  • docs/book/src/developer/providers/contracts/clusterctl.md
  • docs/book/src/developer/providers/contracts/control-plane.md
  • docs/book/src/developer/providers/contracts/infra-cluster.md
  • docs/book/src/developer/providers/contracts/infra-machine.md
  • docs/book/src/developer/providers/contracts/infra-machinepool.md
  • docs/book/src/developer/providers/getting-started/webhooks.md
  • docs/book/src/developer/providers/migrations/v1.10-to-v1.11.md
  • docs/book/src/developer/providers/migrations/v1.12-to-v1.13.md
  • docs/book/src/developer/providers/migrations/v1.9-to-v1.10.md
  • docs/book/src/images/kubeadm-control-plane-machines-resources.plantuml
  • docs/book/src/images/worker-machines-resources.plantuml
  • docs/book/src/introduction.md
  • docs/book/src/reference/api/crd-api-reference-v1beta1.md
  • docs/book/src/reference/api/crd-api-reference.md
  • docs/book/src/reference/api/crd-relationships.md
  • docs/book/src/reference/api/reference.md
  • docs/book/src/reference/versions.md
  • docs/book/src/tasks/automated-machine-management/healthchecking.md
  • docs/book/src/tasks/automated-machine-management/scaling.md
  • docs/book/src/tasks/cluster-resource-set.md
  • docs/book/src/tasks/diagnostics.md
  • docs/book/src/tasks/experimental-features/cluster-class/write-clusterclass.md
  • docs/book/src/tasks/experimental-features/experimental-features.md
  • docs/book/src/tasks/experimental-features/machine-pools.md
  • docs/book/src/tasks/experimental-features/runtime-sdk/implement-extensions.md
  • docs/book/src/tasks/experimental-features/runtime-sdk/index.md
  • docs/book/src/tasks/external-etcd.md
  • docs/book/src/tasks/using-kustomize.md
  • docs/book/src/user/quick-start.md
  • docs/proposals/20200506-conditions.md
  • docs/proposals/20210310-opt-in-autoscaling-from-zero.md
  • docs/proposals/20220330-topology-mutation-hook.md
  • docs/proposals/20240916-improve-status-in-CAPI-resources.md
  • docs/proposals/20250124-From CAPD(docker) to CAPD(dev) .md
  • docs/release/releases/release-1.13.md
  • docs/release/role-handbooks/ci-signal/README.md
  • docs/release/role-handbooks/release-lead/README.md
  • exp/topology/desiredstate/desired_state.go
  • exp/topology/desiredstate/desired_state_test.go
  • exp/topology/desiredstate/lifecycle_hooks.go
  • exp/topology/desiredstate/lifecycle_hooks_test.go
  • exp/topology/desiredstate/upgrade_plan.go
  • exp/topology/desiredstate/upgrade_plan_test.go
  • feature/feature.go
  • go.mod
  • hack/crd-ref-docs-config-v1beta1.yaml
  • hack/crd-ref-docs-config-v1beta2.yaml
  • hack/ensure-go.sh
  • hack/gogcflags.sh
  • hack/kind-install.sh
  • hack/observability/alloy/kustomization.yaml
  • hack/observability/grafana/chart/kustomization.yaml
  • hack/observability/grafana/dashboards/cluster-api-mgmt-apiserver-requests.json
  • hack/observability/grafana/dashboards/cluster-api-performance.json
  • hack/observability/grafana/dashboards/cluster-api-state.json
  • hack/observability/grafana/dashboards/cluster-api-wl-apiserver-requests.json
  • hack/observability/grafana/dashboards/controller-runtime.json
  • hack/observability/grafana/dashboards/runtime-extensions.json
  • hack/observability/kube-state-metrics/kustomization.yaml
  • hack/observability/loki/kustomization.yaml
  • hack/observability/loki/values.yaml
  • hack/observability/metrics-server/kustomization.yaml
  • hack/observability/parca/values.yaml
  • hack/observability/prometheus/kustomization.yaml
  • hack/observability/prometheus/values.yaml
  • hack/observability/tempo/kustomization.yaml
  • hack/observability/visualizer/kustomization.yaml
  • hack/tools/go.mod
  • hack/tools/internal/tilt-prepare/main.go
  • hack/tools/prowjob-gen/config.go
  • hack/tools/prowjob-gen/test/test-configuration.yaml
  • hack/tools/prowjob-gen/test/test-main.yaml.golden
  • hack/tools/prowjob-gen/test/test.yaml.tpl
  • hack/tools/release/internal/update_providers/provider_issues.go
  • hack/tools/release/weekly/main.go
  • hack/tools/runtime-openapi-gen/main.go
💤 Files with no reviewable changes (7)
  • config/webhook/manifests.yaml
  • controllers/clustercache/cluster_accessor_test.go
  • .trivyignore
  • bootstrap/kubeadm/config/webhook/manifests.yaml
  • cmd/clusterctl/client/cluster/upgrader_test.go
  • bootstrap/util/suite_test.go
  • .github/workflows/pr-gh-workflow-approve.yaml

  fail-fast: false
  matrix:
-   branch: [ main, release-1.11, release-1.10, release-1.9 ]
+   branch: [ main, release-1.12, release-1.11 ]
⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

for b in main master release-1.12 release-1.11; do
  if git ls-remote --exit-code --heads origin "$b" >/dev/null 2>&1; then
    echo "exists: $b"
  else
    echo "missing: $b"
  fi
done

Repository: openshift/cluster-api

Length of output: 138


Update workflow matrix branches to match actual repository branches.

The workflow matrix references main, release-1.12, and release-1.11, but only master exists in this repository. The actions/checkout action will fail on all three matrix entries, preventing the security scan from running. Update line 16 to reference valid branches.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/weekly-security-scan.yaml at line 16, The workflow's
matrix branches list is invalid (it lists main and release-1.*); update the
branches array used in the weekly-security-scan workflow so actions/checkout
uses existing repo branches (replace "branch: [ main, release-1.12, release-1.11
]" with a valid list such as "branch: [ master ]" or the actual release branch
names), ensuring the matrix contains only real branch names so the job runs
successfully.
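As a sketch of the suggested fix, the matrix can be reduced to the branch that actually exists in this fork (assuming `master` is the only long-lived branch here; adjust the list if release branches are added later):

```yaml
strategy:
  # Keep running the remaining entries even if one branch's scan fails.
  fail-fast: false
  matrix:
    # Only branches that exist in this repository; actions/checkout fails otherwise.
    branch: [ master ]
```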

Comment on lines +420 to +427
for _, c := range in.Taints {
	out.Spec.Taints = append(out.Spec.Taints, clusterv1.MachineTaint{
		Key:         c.Key,
		Value:       c.Value,
		Effect:      c.Effect,
		Propagation: clusterv1.MachineTaintPropagation(c.Propagation),
	})
}
⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Reset destination taint slices before appending.

These conversions append into out.Spec.Taints / out.Taints without clearing them first. If the destination object is reused, you'll retain stale taints or duplicate entries across conversions. Allocate the exact-length slice and assign by index instead of appending.

💡 Suggested pattern
- for _, c := range in.Taints {
- 	out.Spec.Taints = append(out.Spec.Taints, clusterv1.MachineTaint{
- 		Key:         c.Key,
- 		Value:       c.Value,
- 		Effect:      c.Effect,
- 		Propagation: clusterv1.MachineTaintPropagation(c.Propagation),
- 	})
- }
+ if len(in.Taints) == 0 {
+ 	out.Spec.Taints = nil
+ } else {
+ 	out.Spec.Taints = make([]clusterv1.MachineTaint, len(in.Taints))
+ 	for i, c := range in.Taints {
+ 		out.Spec.Taints[i] = clusterv1.MachineTaint{
+ 			Key:         c.Key,
+ 			Value:       c.Value,
+ 			Effect:      c.Effect,
+ 			Propagation: clusterv1.MachineTaintPropagation(c.Propagation),
+ 		}
+ 	}
+ }

Apply the same pattern to the reverse/template conversions as well.

Also applies to: 446-453, 464-471, 482-489

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@api/controlplane/kubeadm/v1beta1/conversion.go` around lines 420 - 427, The
conversion functions are appending to destination taint slices (e.g.,
out.Spec.Taints, out.Taints) without clearing them, causing duplicates when
objects are reused; change each conversion that iterates over in.Taints (and the
reverse/template variants) to allocate a slice of exact length (len(in.Taints)),
assign it to the destination (out.Spec.Taints = make(..., len(in.Taints))), and
populate entries by index using clusterv1.MachineTaint{...} instead of append;
apply the same pattern for all occurrences noted (the blocks around the
in.Taints iterations and their reverse/template counterparts).

Comment on lines +22 to 23
clusterv1 "sigs.k8s.io/cluster-api/api/core/v1beta2"
runtimecatalog "sigs.k8s.io/cluster-api/exp/runtime/catalog"
⚠️ Potential issue | 🟠 Major | 🏗️ Heavy lift

Avoid changing the embedded Cluster schema in the v1alpha1 hook contract.

These request types are the wire contract for Runtime SDK extensions. Swapping cluster from core v1beta1 to core v1beta2 changes that payload shape without a hook API version bump, so existing extensions can start deserializing a different object after upgrading the management cluster. Keep the old payload in v1alpha1 or introduce a new hook version and convert explicitly.

Also applies to: 34-36, 61-63, 89-91, 141-143, 185-187, 225-227, 269-271, 309-311, 341-343

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@api/runtime/hooks/v1alpha1/lifecyclehooks_types.go` around lines 22 - 23,
Revert the import change so the embedded Cluster schema in the v1alpha1 hook
contract remains the original core v1beta1 type: replace the current import
"sigs.k8s.io/cluster-api/api/core/v1beta2" used as clusterv1 with the original
"sigs.k8s.io/cluster-api/api/core/v1beta1" and ensure every struct in this file
that embeds or references clusterv1.Cluster (the v1alpha1 request types)
continues to use the v1beta1 shape; if you actually need v1beta2 semantics,
instead create a new hook version (e.g., v1beta1 hook API) and perform explicit
conversion between versions rather than changing the v1alpha1 contract in-place.

Comment on lines +66 to +68
scheme := runtime.NewScheme()
_ = clusterv1.AddToScheme(scheme)
c := fake.NewClientBuilder().WithScheme(scheme).WithObjects(myMachine).Build()
⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Handle scheme registration failures explicitly.

At lines 67 and 121, discarding the AddToScheme error can hide test setup failures and make downstream assertions misleading.

Suggested fix
 			scheme := runtime.NewScheme()
-			_ = clusterv1.AddToScheme(scheme)
+			g.Expect(clusterv1.AddToScheme(scheme)).To(Succeed())
 			c := fake.NewClientBuilder().WithScheme(scheme).WithObjects(myMachine).Build()
@@
 			scheme := runtime.NewScheme()
-			_ = clusterv1.AddToScheme(scheme)
+			g.Expect(clusterv1.AddToScheme(scheme)).To(Succeed())
 			c := fake.NewClientBuilder().WithScheme(scheme).WithObjects(myPool).Build()

Also applies to: 120-123

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bootstrap/util/configowner_test.go` around lines 66 - 68, The test currently
discards the error returned by clusterv1.AddToScheme, which can hide scheme
registration failures; update both places where runtime.NewScheme() and
clusterv1.AddToScheme(...) are used to capture the returned error (e.g., err :=
clusterv1.AddToScheme(scheme)) and explicitly fail the test if err != nil (use
t.Fatalf or a test assertion helper like require.NoError) before building the
fake client (fake.NewClientBuilder().WithScheme(scheme)...), so any scheme
registration failure surfaces immediately.

Comment thread CHANGELOG/v1.13.0-rc.0.md
- Devtools: Enable native histograms in Grafana / Prometheus (#13304)
- Devtools: Make kind image configurable via env var for make tilt-up (#13333)
- Devtools: Updated dev observability stack (#13044)
- e2e: 0 in e2e tests (#13429)
⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Fix malformed changelog entry.

This entry is incomplete and unclear: "0 in e2e tests" does not describe what change was made. Please verify the correct description for PR #13429.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@CHANGELOG/v1.13.0-rc.0.md` at line 186, Replace the malformed changelog line
"e2e: 0 in e2e tests" with a clear, complete description for PR `#13429`: locate
the entry referencing PR `#13429` and update it to a concise sentence like "e2e:
<brief summary of the fix/feature introduced by PR `#13429`> (PR `#13429`)" so it
explains what changed in e2e tests and includes the PR number for traceability;
confirm the wording matches the PR title/description and preserves the changelog
format.

Comment on lines +293 to +302
key:
description: |-
key is the taint key to be applied to a node.
Must be a valid qualified name of maximum size 63 characters
with an optional subdomain prefix of maximum size 253 characters,
separated by a `/`.
maxLength: 317
minLength: 1
pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*\/)?([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]$
type: string
⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Restore qualified-name validation on spec.taints[].key.

After dropping the split-length validation, this schema now accepts keys whose name segment is longer than 63 characters as long as the overall string stays under 317 chars. That no longer matches Kubernetes qualified-name rules, so invalid taints can be admitted here and then fail later when Cluster API tries to apply them to Nodes. Please restore the source-level validation and regenerate the CRDs.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@config/crd/bases/cluster.x-k8s.io_machines.yaml` around lines 293 - 302, The
spec.taints[].key schema currently allows a name segment longer than 63 chars
because maxLength: 317 was left without the split-length enforcement; restore
the original qualified-name validation by updating the key schema (the pattern
and length checks for spec.taints[].key) so the optional DNS subdomain prefix is
limited to 253 chars and the name segment is limited to 63 chars (i.e. reinstate
the regex that enforces the name segment max 63 and prefix max 253 rather than
relying only on a 317 overall max), update/remove the incorrect maxLength if
needed to match that regex, and then regenerate the CRDs so the corrected
validation is applied.

Comment on lines +469 to +478
// Apply cluster filter if set
if cc.clusterFilter != nil && !cc.clusterFilter(cluster) {
log.V(6).Info("Cluster filtered out by ClusterFilter, not connecting")
accessor := cc.getClusterAccessor(clusterKey)
if accessor != nil {
accessor.Disconnect(ctx)
}
cc.deleteClusterAccessor(clusterKey)
cc.cleanupClusterSourcesForCluster(clusterKey)
return ctrl.Result{}, nil
⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Don't drop disconnect events when a cluster becomes filtered out.

If this branch disconnects an existing accessor, the early return skips sendEventsToClusterSources below, so GetClusterSource consumers never see the disconnect requeue for handled → filtered-out transitions.

Suggested fix
 	// Apply cluster filter if set
 	if cc.clusterFilter != nil && !cc.clusterFilter(cluster) {
 		log.V(6).Info("Cluster filtered out by ClusterFilter, not connecting")
 		accessor := cc.getClusterAccessor(clusterKey)
+		didDisconnect := false
 		if accessor != nil {
-			accessor.Disconnect(ctx)
+			if accessor.Connected(ctx) {
+				accessor.Disconnect(ctx)
+				didDisconnect = true
+			}
 		}
+		if didDisconnect {
+			cc.sendEventsToClusterSources(ctx, cluster, time.Now(), accessor.GetHealthCheckingState(ctx).LastProbeSuccessTime, false, true)
+		}
 		cc.deleteClusterAccessor(clusterKey)
 		cc.cleanupClusterSourcesForCluster(clusterKey)
 		return ctrl.Result{}, nil
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@controllers/clustercache/cluster_cache.go` around lines 469 - 478, When a
cluster is filtered out we currently disconnect and delete the accessor
(getClusterAccessor, accessor.Disconnect, deleteClusterAccessor,
cleanupClusterSourcesForCluster) then return early, which prevents notifying
consumers; move or add a call to cc.sendEventsToClusterSources (or invoke the
existing method that enqueues a handled→filtered-out disconnect event for
GetClusterSource consumers) immediately after Disconnect and before
deleteClusterAccessor/cleanupClusterSourcesForCluster (or at least before
returning) so the disconnect/requeue is sent; ensure you reference clusterKey
and the same transition payload used elsewhere so consumers see the
handled→filtered-out event.

@openshift-ci

openshift-ci Bot commented May 4, 2026

@cloud-team-rebase-bot[bot]: all tests passed!

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.


Labels

do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. ok-to-test Indicates a non-member PR verified by an org member that is safe to test.
