
CNTRLPLANE-2204: improve AllNodesHealthy and AllMachinesReady conditions#8247

Merged
openshift-merge-bot[bot] merged 4 commits into openshift:main from sdminonne:CNTRLPLANE-2204 on Apr 23, 2026

Conversation

@sdminonne
Contributor

@sdminonne sdminonne commented Apr 15, 2026

Summary

  • Introduce findMachineStatusCondition to look up conditions on a CAPI Machine and normalize them into a common machineConditionResult struct, replacing the previous findCAPIStatusCondition helper (see the sketch after this list)
  • Fix setAllNodesHealthyCondition to correctly treat machines with no NodeHealthy condition as not healthy (reason WaitingForNodeRef) instead of silently ignoring them
  • Refactor setAllNodesHealthyCondition to use the same aggregateMachineReasonsAndMessages pattern as setAllMachinesReadyCondition, reporting e.g. "2 of 3 machines are not healthy"
  • Fix setAllMachinesReadyCondition to treat machines with no Ready condition as not ready (reason WaitingForInfrastructure) instead of silently ignoring them
  • Include CAPI condition Message in the Ready condition fallback path when present and meaningful
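
A rough sketch of the helper's shape follows. The function signature and the iteration over machine.Status.Conditions appear verbatim in the review snippets later in this thread; the struct field names, the nil guard, and the wiring are assumptions for illustration only, not the merged code.

package nodepool

import (
	corev1 "k8s.io/api/core/v1"
	capiv1 "sigs.k8s.io/cluster-api/api/v1beta1"
)

// machineConditionResult is the normalized view of a CAPI Machine condition
// used by the NodePool controller (field names assumed for this sketch).
type machineConditionResult struct {
	Status  corev1.ConditionStatus
	Reason  string
	Message string
}

// findMachineStatusCondition returns the condition of the given type from
// machine.Status.Conditions, or nil when the machine or the condition is absent.
func findMachineStatusCondition(machine *capiv1.Machine, conditionType string) *machineConditionResult {
	if machine == nil {
		return nil
	}
	for i := range machine.Status.Conditions {
		c := machine.Status.Conditions[i]
		if string(c.Type) != conditionType {
			continue
		}
		return &machineConditionResult{
			Status:  c.Status,
			Reason:  c.Reason,
			Message: c.Message,
		}
	}
	return nil
}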

Test plan

  • Unit tests for findMachineStatusCondition covering condition found, not found, and specific values
  • Unit tests for setAllNodesHealthyCondition when machines have no NodeHealthy condition
  • Unit tests for setAllMachinesReadyCondition when machines have no Ready condition
  • Updated existing test expectations to match new aggregation output format
  • Run make test to verify all unit tests pass
  • Run make verify to check formatting and generation

Ref: CNTRLPLANE-2204

🤖 Generated with Claude Code

Summary by CodeRabbit

  • Bug Fixes

    • NodePool health now treats missing or non-true machine conditions as not healthy/ready and reflects counts/messages accordingly.
    • Aggregated per-machine reasons/messages are deterministic, omit empty/duplicate fragments (including certain setup-counter-like messages), and enforce per-reason and global truncation limits with clear suffixing.
  • Tests

    • Added extensive tests covering condition resolution, aggregation, ordering, omission rules, and truncation behavior.

@openshift-merge-bot
Contributor

Pipeline controller notification
This repo is configured to use the pipeline controller. Second-stage tests will be triggered either automatically or after the lgtm label is added, depending on the repository configuration. The pipeline controller will automatically detect which contexts are required and will use /test Prow commands to trigger the second stage.

For optional jobs, comment /test ? to see a list of all defined jobs. To manually trigger all second-stage jobs, use the /pipeline required command.

This repository is configured in: LGTM mode

@openshift-ci openshift-ci Bot added the do-not-merge/needs-area and area/hypershift-operator (Indicates the PR includes changes for the hypershift operator and API - outside an OCP release) labels on Apr 15, 2026
@openshift-ci openshift-ci Bot requested review from devguyio and sjenning April 15, 2026 12:31
@sdminonne sdminonne changed the title from "feat(nodepool): improve CAPI Machine condition handling" to "feat(nodepool): improve AllNodesHealthy and AllMachinesReady conditions" on Apr 15, 2026
@codecov

codecov Bot commented Apr 15, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 35.66%. Comparing base (1d9c83c) to head (e977a39).
⚠️ Report is 42 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #8247      +/-   ##
==========================================
+ Coverage   35.62%   35.66%   +0.03%     
==========================================
  Files         767      767              
  Lines       93333    93387      +54     
==========================================
+ Hits        33248    33302      +54     
  Misses      57396    57396              
  Partials     2689     2689              
Files with missing lines | Coverage Δ
...rshift-operator/controllers/nodepool/conditions.go | 53.93% <100.00%> (+1.82%) ⬆️
...erator/controllers/nodepool/nodepool_controller.go | 41.30% <100.00%> (+1.64%) ⬆️
...ypershift-operator/controllers/nodepool/version.go | 95.08% <100.00%> (ø)

@coderabbitai
Contributor

coderabbitai Bot commented Apr 15, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough


Introduced findMachineStatusCondition(machine, conditionType) which returns a normalized {Status, Reason, Message} or nil if absent. Controllers now use it in place of the prior CAPI lookup: missing MachineNodeHealthy marks machines as not healthy (aggregated under WaitingForNodeRefReason), and missing Ready marks machines as not ready (aggregated under WaitingForInfrastructureFallbackReason). Ready condition messages are appended only when non-empty and not matching setup-counter patterns. Aggregation now sorts per-reason messages deterministically and enforces per-reason and global message-length caps with truncation suffixes. Tests were added/expanded to cover lookup, aggregation, ordering, and truncation behaviors.
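
A minimal sketch of the aggregation behavior described above, assuming the shape of the data. The constant values, the exact signature, and the message wording are assumptions; only the grouping by reason, the deterministic ordering, the "N of M machines" summary, and the truncation-suffix idea come from the walkthrough and the review snippets in this thread.

package nodepool

import (
	"fmt"
	"sort"
	"strings"
)

// Assumed caps and suffix for this sketch; the merged code defines its own.
const (
	sketchMaxGlobalMessageLength = 1024
	sketchEndOfGlobalMessage     = "... (message truncated)"
)

// aggregateMachineReasonsAndMessages groups per-machine failure messages by
// reason, emits a summary line such as "2 of 3 machines are not healthy",
// sorts reasons and per-reason messages for deterministic output, and stops
// appending once the global length cap would be exceeded.
func aggregateMachineReasonsAndMessages(messageMap map[string][]string, total, failed int, state string) (reason, message string) {
	reasons := make([]string, 0, len(messageMap))
	for r := range messageMap {
		reasons = append(reasons, r)
	}
	sort.Strings(reasons)

	var msg strings.Builder
	fmt.Fprintf(&msg, "%d of %d machines are not %s\n", failed, total, state)
	for _, r := range reasons {
		sort.Strings(messageMap[r]) // keep per-reason lines stable across reconciles
		block := strings.Join(messageMap[r], "\n")
		if msg.Len()+len(block)+len(sketchEndOfGlobalMessage) > sketchMaxGlobalMessageLength {
			msg.WriteString(sketchEndOfGlobalMessage)
			break
		}
		msg.WriteString(block)
		msg.WriteString("\n")
	}
	return strings.Join(reasons, ","), msg.String()
}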

Sequence Diagram(s)

sequenceDiagram
  participant Controller as NodePoolController
  participant API as MachineAPI
  participant Finder as findMachineStatusCondition
  participant Aggregator as aggregateMachineReasonsAndMessages
  participant NP as NodePoolStatus

  Controller->>API: list Machines
  API-->>Controller: Machines[]
  loop for each Machine
    Controller->>Finder: findMachineStatusCondition(machine, conditionType)
    Finder-->>Controller: {Status, Reason, Message} or nil
  end
  Controller->>Aggregator: aggregateMachineReasonsAndMessages(per-machine results)
  Aggregator-->>Controller: aggregatedReasons, aggregatedMessage (truncated)
  Controller->>NP: set NodePool conditions (Healthy/Ready) with aggregated info
  NP-->>Controller: status updated
🚥 Pre-merge checks | ✅ 9 | ❌ 1

❌ Failed checks (1 warning)

Check name | Status | Explanation | Resolution
Docstring Coverage | ⚠️ Warning | Docstring coverage is 30.00%, which is insufficient; the required threshold is 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (9 passed)
Check name | Status | Explanation
Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled.
Stable And Deterministic Test Names | ✅ Passed | Pull request adds Go standard testing convention tests with static descriptive names; no dynamic values in test titles.
Test Structure And Quality | ✅ Passed | Test code demonstrates good structure with single responsibility through table-driven tests, inline setup using fake.NewClientBuilder(), meaningful assertions with custom helpers and Gomega matchers, no indefinite waits, and consistency with established codebase patterns.
Microshift Test Compatibility | ✅ Passed | PR modifies only standard Go unit tests in the hypershift-operator controller directory, not Ginkgo e2e tests.
Single Node Openshift (Sno) Test Compatibility | ✅ Passed | This pull request does not introduce any Ginkgo e2e tests. The test file modifications add only unit tests using Go's standard testing package with table-driven test patterns, which are not e2e tests.
Topology-Aware Scheduling Compatibility | ✅ Passed | PR modifies the NodePool controller to evaluate and report CAPI Machine status conditions without introducing scheduling constraints, affinity rules, topology assumptions, or replica logic.
Ote Binary Stdout Contract | ✅ Passed | PR modifies controller-level code in the hypershift operator nodepool package, not OTE binary entry points. No stdout writes detected in modified files.
Ipv6 And Disconnected Network Test Compatibility | ✅ Passed | This PR contains only unit tests using standard Go testing, not new Ginkgo e2e tests.
Title check | ✅ Passed | The PR title directly and accurately describes the main changes: introducing improvements to both AllNodesHealthy and AllMachinesReady condition handling through helper refactoring and aggregation logic.



Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
hypershift-operator/controllers/nodepool/nodepool_controller.go (1)

1123-1128: Sort each reason bucket before building the message.

The reason keys are deterministic now, but messageMap[reason] still reflects machine list order. Since Kubernetes list order is not guaranteed, the per-machine lines inside a single reason can still reshuffle between reconciles and trigger needless status patches.

💡 Proposed fix
 	for _, reason := range reasons {
+		sort.Strings(messageMap[reason])
 		reasonBlock := aggregateMachineMessages(messageMap[reason])
 		if msgBuilder.Len()+len(reasonBlock) > maxGlobalMessageLength {
 			msgBuilder.WriteString(endOfGlobalMessage)
 			break
 		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@hypershift-operator/controllers/nodepool/nodepool_controller.go` around lines
1123 - 1128, The per-reason message bucket (messageMap[reason]) is using machine
list order which can vary; before calling aggregateMachineMessages and appending
to msgBuilder you should sort the items in that bucket deterministically (e.g.
by machine name or another stable key) so aggregateMachineMessages receives a
stable, ordered slice; update the code around reasonBlock creation (where
reasonBlock := aggregateMachineMessages(messageMap[reason]) is called) to sort
messageMap[reason] in-place or pass a sorted copy, keeping references to
messageMap, reason, aggregateMachineMessages, msgBuilder,
maxGlobalMessageLength, and endOfGlobalMessage to locate the change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@hypershift-operator/controllers/nodepool/nodepool_controller.go`:
- Around line 1124-1126: The truncation suffix is appended after checking if
adding reasonBlock would overflow, which can still push the final message past
maxGlobalMessageLength; update the logic around msgBuilder, reasonBlock,
maxGlobalMessageLength and endOfGlobalMessage so the check considers the suffix
size too (e.g., test msgBuilder.Len()+len(reasonBlock)+len(endOfGlobalMessage)
against maxGlobalMessageLength), and if it would overflow either truncate
reasonBlock to leave room for endOfGlobalMessage or write only
endOfGlobalMessage and break so the global cap is never exceeded.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Pro Plus

Run ID: a7e8efa0-4f57-4e56-b0d9-ab1911343902

📥 Commits

Reviewing files that changed from the base of the PR and between 916b455 and 4040c92.

📒 Files selected for processing (4)
  • hypershift-operator/controllers/nodepool/conditions.go
  • hypershift-operator/controllers/nodepool/nodepool_controller.go
  • hypershift-operator/controllers/nodepool/nodepool_controller_test.go
  • hypershift-operator/controllers/nodepool/version.go

Comment thread on hypershift-operator/controllers/nodepool/nodepool_controller.go (Outdated)
Contributor

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (1)
hypershift-operator/controllers/nodepool/conditions.go (1)

111-123: Consider adding a nil guard for defensive coding.

The function doesn't check if machine is nil before accessing machine.Status.Conditions. While current callers iterate over slices that shouldn't contain nil pointers, a nil check would make this helper more robust for future use.

🛡️ Optional defensive fix
 func findMachineStatusCondition(machine *capiv1.Machine, conditionType string) *machineConditionResult {
+	if machine == nil {
+		return nil
+	}
 	for i := range machine.Status.Conditions {
 		if string(machine.Status.Conditions[i].Type) == conditionType {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@hypershift-operator/controllers/nodepool/conditions.go` around lines 111 -
123, Add a nil guard at the start of findMachineStatusCondition so it returns
nil immediately if the input machine is nil; this prevents dereferencing a nil
pointer when accessing machine.Status.Conditions and keeps callers safe. Update
the function findMachineStatusCondition to check "if machine == nil { return nil
}" before iterating, while leaving the existing construction of
machineConditionResult (Status, Reason, Message) unchanged.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Pro Plus

Run ID: f582d6b2-f86e-4725-b26e-211aec3904dd

📥 Commits

Reviewing files that changed from the base of the PR and between 4040c92 and 4b86074.

📒 Files selected for processing (4)
  • hypershift-operator/controllers/nodepool/conditions.go
  • hypershift-operator/controllers/nodepool/nodepool_controller.go
  • hypershift-operator/controllers/nodepool/nodepool_controller_test.go
  • hypershift-operator/controllers/nodepool/version.go
✅ Files skipped from review due to trivial changes (1)
  • hypershift-operator/controllers/nodepool/version.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • hypershift-operator/controllers/nodepool/nodepool_controller.go

@csrwng
Contributor

csrwng commented Apr 15, 2026

/approve

@openshift-ci
Contributor

openshift-ci Bot commented Apr 15, 2026

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: csrwng, sdminonne

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Details: Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci Bot added the approved label (Indicates a PR has been approved by an approver from all required OWNERS files.) on Apr 15, 2026
},
expectedAllNodes: &testCondition{
Status: corev1.ConditionFalse,
Reason: capiv1.NodeConditionsFailedReason + "," + capiv1.WaitingForNodeRefReason,
Member

Is this truncated in any way so we don't break the MaxLength=1024 of the Reason field?
Same question for the Message MaxLength.

Contributor Author

Oh cool! Thanks @enxebre .

While we truncate the messages in this PR, there's no specific logic to keep the Reason under 1024. Even though I'm not sure there are enough Reasons to reach 1024, it's a theoretical gap, I agree.

Reworking it

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2


ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Pro Plus

Run ID: 12a91ca8-7b68-4451-b490-dc61bd20cbd9

📥 Commits

Reviewing files that changed from the base of the PR and between 4b86074 and 8167954.

📒 Files selected for processing (2)
  • hypershift-operator/controllers/nodepool/nodepool_controller.go
  • hypershift-operator/controllers/nodepool/nodepool_controller_test.go

Comment on lines +2054 to +2055
expectExactLen int // if > 0, expect exact length
}{
Contributor

⚠️ Potential issue | 🟡 Minor

Empty-reasons case is not actually asserted.

Line 2137 checks expectExactLen > 0, so the "When empty reasons..." case (expectExactLen: 0) skips its exact-length assertion.

💡 Proposed fix
-		expectExactLen int // if > 0, expect exact length
+		expectExactLen *int // if non-nil, expect exact length
@@
-			expectExactLen: 0,
+			expectExactLen: ptr.To(0),
@@
-			if tc.expectExactLen > 0 {
-				g.Expect(len(result)).To(Equal(tc.expectExactLen))
+			if tc.expectExactLen != nil {
+				g.Expect(len(result)).To(Equal(*tc.expectExactLen))
 			}

Also applies to: 2137-2139

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@hypershift-operator/controllers/nodepool/nodepool_controller_test.go` around
lines 2054 - 2055, The test currently skips the exact-length assertion for the
"When empty reasons..." case because the check uses `if expectExactLen > 0`
which ignores `expectExactLen == 0`; update the assertion guard so zero-length
expectations are asserted (e.g., change the condition to `if expectExactLen >=
0` or adopt a sentinel like `-1` for "no-check" and adjust cases accordingly),
and ensure the test case named "When empty reasons..." (where `expectExactLen:
0`) triggers the `assert.Len(..., expectExactLen)` branch; reference the
`expectExactLen` field and the assertion block in the
nodepool_controller_test.go test function to implement this change.

Comment on lines +1156 to +1158
if builder.Len()+len(separator)+len(reason)+len(endOfReasons) > maxReasonLength {
builder.WriteString(endOfReasons)
break
Contributor

@coderabbitai coderabbitai Bot Apr 16, 2026

⚠️ Potential issue | 🟡 Minor

Avoid leading comma when only truncation suffix fits.

If no reason can fit (e.g., an exceptionally long first reason), truncateReasons returns ,ReasonsTruncated. That produces a malformed comma-separated list.

💡 Proposed fix
 		if builder.Len()+len(separator)+len(reason)+len(endOfReasons) > maxReasonLength {
-			builder.WriteString(endOfReasons)
+			if builder.Len() == 0 {
+				builder.WriteString(strings.TrimPrefix(endOfReasons, ","))
+			} else {
+				builder.WriteString(endOfReasons)
+			}
 			break
 		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@hypershift-operator/controllers/nodepool/nodepool_controller.go` around lines
1156 - 1158, In truncateReasons the logic that writes endOfReasons can produce a
leading separator when nothing else fits; change the branch that appends
endOfReasons (using builder, separator, endOfReasons, maxReasonLength) to detect
if builder.Len() == 0 and, in that case, write endOfReasons without the
separator (or otherwise trim the leading separator) so you never produce a
comma-only prefix like ",ReasonsTruncated". Ensure the check still enforces
maxReasonLength and preserves existing behavior when builder already contains
entries.

Contributor Author

@CodeRabbit I have a hard time imagining a 1024-character-long reason. I would say it is very, very unlikely.

Contributor

@sdminonne, you're right — a single CAPI condition reason exceeding 1006 characters (i.e. maxReasonLength - len(",ReasonsTruncated")) is essentially impossible in practice given how CAPI reasons are structured (short identifiers like WaitingForInfrastructure, NodeNotFound, etc.). The concern is purely theoretical. Happy to leave the code as-is.
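
For reference, the reason-side truncation discussed in this thread ends up along these lines. This is a sketch reconstructed from the review snippets above: only the overflow check and the endOfReasons suffix are quoted there; the constant values and the surrounding loop are assumptions.

package nodepool

import "strings"

// Assumed values for this sketch; maxReasonLength mirrors the API's
// MaxLength=1024 on the Reason field, endOfReasons marks dropped entries.
const (
	maxReasonLength = 1024
	endOfReasons    = ",ReasonsTruncated"
	separator       = ","
)

// truncateReasons joins reasons with commas while keeping the result within
// maxReasonLength; once the next reason (plus the suffix) would not fit, it
// appends the truncation marker and stops.
func truncateReasons(reasons []string) string {
	var builder strings.Builder
	for _, reason := range reasons {
		if builder.Len()+len(separator)+len(reason)+len(endOfReasons) > maxReasonLength {
			builder.WriteString(endOfReasons)
			break
		}
		if builder.Len() > 0 {
			builder.WriteString(separator)
		}
		builder.WriteString(reason)
	}
	return builder.String()
}

As noted above, in the theoretical case where the very first reason already exceeds the cap this would emit a leading comma; the thread concluded that case is not worth guarding against.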

@sdminonne
Contributor Author

@enxebre ptal

@enxebre
Member

enxebre commented Apr 16, 2026

/lgtm

@openshift-ci openshift-ci Bot added the lgtm label (Indicates that a PR is ready to be merged.) on Apr 16, 2026
@openshift-merge-bot
Contributor

Scheduling tests matching the pipeline_run_if_changed or not excluded by pipeline_skip_if_only_changed parameters:
/test e2e-aks-4-22
/test e2e-aws-4-22
/test e2e-aks
/test e2e-aws
/test e2e-aws-upgrade-hypershift-operator
/test e2e-azure-self-managed
/test e2e-kubevirt-aws-ovn-reduced
/test e2e-v2-aws

@sdminonne
Contributor Author

/retest

@cwbotbot

cwbotbot commented Apr 16, 2026

Test Results

e2e-aws

e2e-aks

@hypershift-jira-solve-ci

AI Test Failure Analysis

Job: pull-ci-openshift-hypershift-main-e2e-azure-self-managed | Build: 2044731489655984128 | Cost: $1.6323884999999998 | Failed step: hypershift-azure-run-e2e-self-managed

View full analysis report


Generated by hypershift-analyze-e2e-failure post-step using Claude claude-opus-4-6

@hypershift-jira-solve-ci

AI Test Failure Analysis

Job: pull-ci-openshift-hypershift-main-e2e-aws | Build: 2044731477077266432 | Cost: $2.1176352499999997 | Failed step: hypershift-aws-run-e2e-nested

View full analysis report


Generated by hypershift-analyze-e2e-failure post-step using Claude claude-opus-4-6

@openshift-merge-bot
Contributor

Scheduling tests matching the pipeline_run_if_changed or not excluded by pipeline_skip_if_only_changed parameters:
/test e2e-aks-4-22
/test e2e-aws-4-22
/test e2e-aks
/test e2e-aws
/test e2e-aws-upgrade-hypershift-operator
/test e2e-azure-self-managed
/test e2e-kubevirt-aws-ovn-reduced
/test e2e-v2-aws

@mgencur
Contributor

mgencur commented Apr 20, 2026

/verified by nodepool_test.go

@openshift-ci-robot

@mgencur: This PR has been marked as verified by nodepool_test.go.


@sdminonne
Contributor Author

/test e2e-aws-4-22

@sdminonne
Contributor Author

/retest

@sdminonne
Contributor Author

/test e2e-kubevirt-aws-ovn-reduced

@sdminonne
Contributor Author

/retest

@sdminonne
Contributor Author

/test e2e-kubevirt-aws-ovn-reduced

@sdminonne sdminonne changed the title from "feat(nodepool): improve AllNodesHealthy and AllMachinesReady conditions" to "CNTRLPLANE-2204: improve AllNodesHealthy and AllMachinesReady conditions" on Apr 21, 2026
@openshift-ci-robot openshift-ci-robot added the jira/valid-reference label (Indicates that this PR references a valid Jira ticket of any type.) on Apr 21, 2026
@openshift-ci-robot

openshift-ci-robot commented Apr 21, 2026

@sdminonne: This pull request references CNTRLPLANE-2204 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target either version "5.0." or "openshift-5.0.", but it targets "openshift-4.21" instead.


@sdminonne
Contributor Author

/jira refresh

@openshift-ci-robot

openshift-ci-robot commented Apr 21, 2026

@sdminonne: This pull request references CNTRLPLANE-2204 which is a valid jira issue.


@sdminonne
Contributor Author

/rerun

@openshift-merge-bot
Contributor

/retest-required

Remaining retests: 0 against base HEAD 9a6b857 and 2 for PR HEAD e977a39 in total

@sdminonne
Contributor Author

/retest-required

@openshift-merge-bot
Contributor

/retest-required

Remaining retests: 0 against base HEAD d0a4024 and 1 for PR HEAD e977a39 in total

@openshift-merge-bot
Contributor

/retest-required

Remaining retests: 0 against base HEAD 3ec580c and 0 for PR HEAD e977a39 in total

@openshift-merge-bot
Contributor

/hold

Revision e977a39 was retested 3 times: holding

@openshift-ci openshift-ci Bot added the do-not-merge/hold label (Indicates that a PR should not merge because someone has issued a /hold command.) on Apr 21, 2026
@sdminonne
Contributor Author

/test e2e-kubevirt-aws-ovn-reduced

@sdminonne
Contributor Author

/retest

1 similar comment
@sdminonne
Contributor Author

/retest

@hypershift-jira-solve-ci

hypershift-jira-solve-ci Bot commented Apr 22, 2026

Now I have all the evidence needed for a comprehensive report. Here it is:

Test Failure Analysis Complete

Job Information

Test Failure Analysis

Error

util.go:3023: Failed to wait for HostedCluster to have valid conditions in 10m0s: context deadline exceeded
eventually.go:225: observed *v1beta1.HostedCluster invalid: controlPlaneVersion state is Partial, expected Completed

Summary

Both TestCreateCluster and TestAutoscaling failed identically: HostedClusters rolled out successfully (nodes ready, CVO completed, all operators available) but the new controlPlaneVersion status field remained stuck at Partial instead of transitioning to Completed. The blocker is the cluster-network-operator ControlPlaneComponent, which is the only component out of ~37 with RolloutComplete=False. It fails because the v2 CNO component reconciler unconditionally checks for the cloud-network-config-controller deployment when NetworkType=OVNKubernetes, but this deployment is never created on KubeVirt platforms — it only exists on cloud platforms (AWS, Azure, GCP, OpenStack). This is a pre-existing bug in the v2 control-plane-operator framework, not caused by PR #8247's NodePool condition changes.

Root Cause

The root cause is a platform-awareness bug in control-plane-operator/controllers/hostedcontrolplane/v2/cno/component.go, specifically in the checkOperandsRolloutStatus() function.

The bug: When NetworkType == OVNKubernetes, the function adds cloud-network-config-controller to its list of required operand deployments to verify. However, cloud-network-config-controller is only deployed on cloud platforms (AWS, Azure, GCP, OpenStack) where cloud-specific network credentials exist. KubeVirt does not deploy this controller, so the check permanently fails with:

failed to get deployment cloud-network-config-controller: Deployment.apps "cloud-network-config-controller" not found

This causes the cluster-network-operator ControlPlaneComponent to report RolloutComplete=False with reason WaitingForOperands. Since the controlPlaneVersion field requires ALL ControlPlaneComponents to have RolloutComplete=True before transitioning from Partial to Completed, the HostedCluster is stuck forever.

Why both tests failed identically: Both TestCreateCluster and TestAutoscaling create KubeVirt-based HostedClusters, so both hit the same CNO rollout check bug. The CVO (old version tracking) completed successfully (status.version.history[0].state: Completed), but the new controlPlaneVersion field (used by the E2E validation) remained at Partial.

PR #8247 is NOT the cause: The PR modifies only NodePool condition logic (AllNodesHealthy, AllMachinesReady). ControlPlaneComponent rollout status is computed independently of NodePool conditions — it depends solely on deployment availability in the hosted control plane namespace.

Recommendations
  1. Fix the CNO rollout check: Add platform-awareness to checkOperandsRolloutStatus() in control-plane-operator/controllers/hostedcontrolplane/v2/cno/component.go. Only check for cloud-network-config-controller on cloud platforms (AWS, Azure, GCP, OpenStack, IBMCloud), not on KubeVirt, Agent, or None platforms (a minimal sketch of this guard follows this list).

  2. PR #8247 (CNTRLPLANE-2204) can be retested: This failure is unrelated to the NodePool condition improvements. A /retest should be attempted after the CNO fix lands, or the job can be considered a known flake for this platform combination.

  3. Check if other KubeVirt e2e jobs are affected: Any KubeVirt job that validates controlPlaneVersion reaching Completed will hit this same failure. The older status.version (CVO-based) check passes because it doesn't depend on ControlPlaneComponent status.

  4. There is a partial fix in progress (commit 1f4b7c9) that skips OVN deployment checks when NetworkType=Other, but this does not cover the KubeVirt case where NetworkType=OVNKubernetes. The fix needs to be extended to also check the platform type.
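
A minimal sketch of the platform guard described in recommendation 1. The function name, parameters, and platform set below are illustrative assumptions; the real checkOperandsRolloutStatus works with hypershift's own platform and network types.

package cno

// requiresCloudNetworkConfigController reports whether the operand rollout
// check should expect the cloud-network-config-controller deployment: only
// OVN-Kubernetes clusters on cloud platforms deploy it, so KubeVirt, Agent,
// and None platforms must be skipped (assumed platform names for this sketch).
func requiresCloudNetworkConfigController(platformType, networkType string) bool {
	if networkType != "OVNKubernetes" {
		return false
	}
	switch platformType {
	case "AWS", "Azure", "GCP", "OpenStack", "IBMCloud":
		return true
	default:
		return false
	}
}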

Evidence

Evidence | Detail
Failing condition | controlPlaneVersion.history[0].state = Partial (never reaches Completed)
Blocking component | cluster-network-operator ControlPlaneComponent — the only 1 of 37 components with RolloutComplete=False
CNO error message | failed to get deployment cloud-network-config-controller: Deployment.apps "cloud-network-config-controller" not found
CNO RolloutComplete reason | WaitingForOperands (since 14:52:08, never cleared)
CVO status (old field) | status.version.history[0].state = Completed at 15:06:08 — cluster was actually healthy
Platform | KubeVirt (no cloud-network-config-controller expected)
All other components | 36/37 ControlPlaneComponents have RolloutComplete=True with correct version
TestCreateCluster | HC e2e-clusters-nr4bg/create-cluster-xpzzz — same CNO failure
TestAutoscaling | HC e2e-clusters-j4mpv/autoscaling-x7np2 — same CNO failure
PR #8247 changed files | nodepool_controller.go, conditions.go, version.go, nodepool_controller_test.go, nodepool_test.go — no control plane component logic
Buggy code location | control-plane-operator/controllers/hostedcontrolplane/v2/cno/component.go, checkOperandsRolloutStatus()

@sdminonne
Contributor Author

/test e2e-kubevirt-aws-ovn-reduced

@sdminonne
Contributor Author

/hold cancel

@openshift-ci openshift-ci Bot removed the do-not-merge/hold label (Indicates that a PR should not merge because someone has issued a /hold command.) on Apr 23, 2026
@openshift-ci
Contributor

openshift-ci Bot commented Apr 23, 2026

@sdminonne: all tests passed!

Full PR test history. Your PR dashboard.


@openshift-merge-bot openshift-merge-bot Bot merged commit 6daa9ce into openshift:main Apr 23, 2026
36 checks passed

Labels

  • approved: Indicates a PR has been approved by an approver from all required OWNERS files.
  • area/hypershift-operator: Indicates the PR includes changes for the hypershift operator and API - outside an OCP release
  • area/testing: Indicates the PR includes changes for e2e testing
  • jira/valid-reference: Indicates that this PR references a valid Jira ticket of any type.
  • lgtm: Indicates that a PR is ready to be merged.
  • verified: Signifies that the PR passed pre-merge verification criteria
