
fix: OCPBUGS-78575: create virt-launcher NetworkPolicy on external infra cluster #8056

Open
dpateriya wants to merge 1 commit into openshift:main from dpateriya:fix/virt-launcher-netpol-external-infra

Conversation

@dpateriya

@dpateriya dpateriya commented Mar 24, 2026

Summary

  • Create the virt-launcher NetworkPolicy on the external infrastructure cluster when deploying HCP KubeVirt with workers on a separate cluster (Credentials != nil)
  • Add reconcileVirtLauncherNetworkPolicyExternalInfra function that builds an adapted policy for external infra (uses infra cluster CIDRs, omits control-plane pod selectors that only exist on the management cluster)
  • Update the documented minimum RBAC role (kv-external-infra-role) to include networkpolicies permissions (a sketch of the kind of rule involved follows this list)
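
For illustration only, a rough sketch (using the Go rbacv1 types) of the kind of rule the role needs; the verb list and namespace here are assumptions, not copied from the updated docs:

package main

import (
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical sketch of the extra rule the namespaced infra role needs so the
	// operator can manage the virt-launcher NetworkPolicy; the verbs and the
	// namespace shown here are assumptions for illustration.
	role := rbacv1.Role{
		ObjectMeta: metav1.ObjectMeta{Name: "kv-external-infra-role", Namespace: "guest-infra-namespace"},
		Rules: []rbacv1.PolicyRule{{
			APIGroups: []string{"networking.k8s.io"},
			Resources: []string{"networkpolicies"},
			Verbs:     []string{"get", "list", "watch", "create", "update", "patch", "delete"},
		}},
	}
	fmt.Printf("%s grants %v on %v\n", role.Name, role.Rules[0].Verbs, role.Rules[0].Resources)
}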

Problem

When deploying a Hosted Control Plane with KubeVirt platform using external infrastructure (workers on a separate cluster from the control plane), the virt-launcher NetworkPolicy is never created on the infrastructure cluster. The code in reconcileNetworkPolicies explicitly skips it when hcluster.Spec.Platform.Kubevirt.Credentials != nil. This leaves guest VMs with unrestricted network access to all pods and services on the infrastructure cluster, breaking tenant isolation in multi-tenant environments.

Fix

The fix uses the existing infra cluster client (KubevirtInfraClientMap) -- the same client already used for VM creation, route management, and version validation -- to also create the NetworkPolicy on the infra cluster (a sketch of where the blocked CIDRs come from follows the list below). The policy:

  • Blocks egress to the infra cluster clusterNetwork and serviceNetwork CIDRs
  • Allows inter-VM communication (same infra-id), DNS, ingress controller, and external traffic
  • Omits kube-apiserver, oauth, and ignition-server-proxy pod selectors (those run on the management cluster and are reached via external IPs, already permitted by the 0.0.0.0/0 rule)
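
For illustration only, a rough sketch (not the actual patch) of where the blocked ranges come from on the external-infra path; the helper name and example CIDRs are assumptions:

package main

import (
	"fmt"

	configv1 "github.com/openshift/api/config/v1"
)

// collectInfraBlockedCIDRs is a hypothetical helper: it gathers the infra
// cluster's clusterNetwork and serviceNetwork ranges from its
// networks.config.openshift.io/cluster object so they can feed the policy's
// CIDR-based egress exceptions. A nil network (the best-effort fallback when
// the infra kubeconfig lacks the cluster-scoped get) simply yields no
// CIDR-based blocking.
func collectInfraBlockedCIDRs(network *configv1.Network) []string {
	if network == nil {
		return nil
	}
	var blocked []string
	for _, entry := range network.Status.ClusterNetwork {
		blocked = append(blocked, entry.CIDR)
	}
	blocked = append(blocked, network.Status.ServiceNetwork...)
	return blocked
}

func main() {
	// Example values only.
	net := &configv1.Network{}
	net.Status.ClusterNetwork = []configv1.ClusterNetworkEntry{{CIDR: "10.128.0.0/14"}}
	net.Status.ServiceNetwork = []string{"172.30.0.0/16"}
	fmt.Println(collectInfraBlockedCIDRs(net)) // [10.128.0.0/14 172.30.0.0/16]
}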

Test plan

  • Deploy HCP KubeVirt with external infra (--infra-kubeconfig-file + --infra-namespace)
  • Verify virt-launcher NetworkPolicy is created in the infra namespace on the workers cluster
  • Verify the policy egress rules block infra cluster CIDRs and allow inter-VM, DNS, ingress
  • Verify centralized infra (same-cluster) deployments continue to work unchanged
  • Verify guest VMs can still reach their control plane (kube-apiserver, oauth, ignition) via external routes
  • Verify guest VMs cannot reach arbitrary pods/services on the infra cluster

Made with Cursor

Summary by CodeRabbit

  • Documentation

    • Expanded RBAC guidance: added networkpolicies and events, adjusted volumesnapshot verbs, updated Role examples, and documented cluster-scoped RBAC needed to read cluster network configuration.
  • New Features

    • Network policy handling for external infrastructure clusters: discovers infra network config and applies refined virt-launcher ingress/egress policies with optional CIDR-based egress restrictions.
  • Bug Fixes / Observability

    • Added a hosted cluster condition, infra-cluster warning events, and status update behavior when infra kubeconfig cannot read cluster network config.

@openshift-ci-robot

Pipeline controller notification
This repo is configured to use the pipeline controller. Second-stage tests will be triggered either automatically or after the lgtm label is added, depending on the repository configuration. The pipeline controller will automatically detect which contexts are required and will use /test Prow commands to trigger the second stage.

For optional jobs, comment /test ? to see a list of all defined jobs. To manually trigger all second-stage jobs, use the /pipeline required command.

This repository is configured in: LGTM mode

@coderabbitai
Contributor

coderabbitai bot commented Mar 24, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

KubeVirt NetworkPolicy reconciliation now branches by whether external infra credentials exist. If external credentials are present the controller discovers an infra-cluster client, attempts to GET the cluster-scoped configv1.Network "cluster", updates the HostedCluster condition ValidKubeVirtInfraNetworkPolicyRBAC (True/False), emits a warning Event on RBAC/read failure, and reconciles the virt-launcher NetworkPolicy in the infra namespace using infra CIDRs when available. If credentials are absent it reconciles the virt-launcher NetworkPolicy in the management control-plane namespace. Documentation and example RBAC were updated to reflect required networkpolicies and cluster-scoped read access.

Sequence Diagram

sequenceDiagram
    participant MC as Management Cluster Controller
    participant Creds as KubeVirt Credentials
    participant IC as Infrastructure Cluster API
    participant NCfg as configv1.Network ("cluster")
    participant Status as HostedCluster Status
    participant Event as Infra Warning Event
    participant NP as NetworkPolicy Resource

    rect rgba(70, 130, 180, 0.5)
    Note over MC,NP: External Infrastructure Flow
    MC->>Creds: Check if Credentials != nil
    alt External Infrastructure
        Creds-->>MC: Credentials present
        MC->>IC: Discover infra cluster client
        IC->>NCfg: GET configv1.Network "cluster"
        NCfg-->>IC: Return network CIDRs or error
        IC-->>MC: Provide network config or error
        MC->>Status: Update ValidKubeVirtInfraNetworkPolicyRBAC condition
        MC->>Event: Emit infra-cluster RBAC warning (on read failure)
        MC->>NP: Create/Update virt-launcher NetworkPolicy in infra namespace (use CIDRs if available)
    else Centralized Infrastructure
        Creds-->>MC: No credentials
        MC->>NP: Create/Update virt-launcher NetworkPolicy in control-plane namespace
    end
    end

@openshift-ci openshift-ci bot added the area/documentation, area/hypershift-operator, and area/platform/kubevirt labels and removed the do-not-merge/needs-area label Mar 24, 2026
@openshift-ci openshift-ci bot requested review from orenc1 and qinqon March 24, 2026 13:22
@dpateriya dpateriya force-pushed the fix/virt-launcher-netpol-external-infra branch from adca61c to b3411d4 on March 24, 2026 13:29
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
hypershift-operator/controllers/hostedcluster/network_policies.go (1)

719-842: Please extract the shared virt-launcher policy builder before these two paths drift.

This function is nearly a copy of reconcileVirtLauncherNetworkPolicy; only the blocked CIDR source and a few egress peers differ. Pull the common ingress/egress construction into a helper and pass the external-infra deltas in, otherwise future policy changes are easy to apply to one path and miss in the other.

As per coding guidelines, "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."
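
For illustration, the shared helper could take a shape like this (function name, label keys, and parameters are hypothetical, not the repository's actual code):

package hostedcluster

import (
	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildVirtLauncherPolicyBase is a hypothetical shape for the shared helper: it
// owns the selector, policy types, and CIDR carve-outs, and callers append only
// their path-specific egress peers.
func buildVirtLauncherPolicyBase(infraID string, blockedIPv4, blockedIPv6 []string, extraEgressPeers []networkingv1.NetworkPolicyPeer) networkingv1.NetworkPolicySpec {
	spec := networkingv1.NetworkPolicySpec{
		PodSelector: metav1.LabelSelector{
			MatchLabels: map[string]string{
				// Label keys here are assumptions for illustration.
				"kubevirt.io":                      "virt-launcher",
				"hypershift.openshift.io/infra-id": infraID,
			},
		},
		PolicyTypes: []networkingv1.PolicyType{networkingv1.PolicyTypeIngress, networkingv1.PolicyTypeEgress},
	}
	if len(blockedIPv4) > 0 {
		spec.Egress = append(spec.Egress, networkingv1.NetworkPolicyEgressRule{
			To: []networkingv1.NetworkPolicyPeer{{IPBlock: &networkingv1.IPBlock{CIDR: "0.0.0.0/0", Except: blockedIPv4}}},
		})
	}
	if len(blockedIPv6) > 0 {
		spec.Egress = append(spec.Egress, networkingv1.NetworkPolicyEgressRule{
			To: []networkingv1.NetworkPolicyPeer{{IPBlock: &networkingv1.IPBlock{CIDR: "::/0", Except: blockedIPv6}}},
		})
	}
	if len(extraEgressPeers) > 0 {
		spec.Egress = append(spec.Egress, networkingv1.NetworkPolicyEgressRule{To: extraEgressPeers})
	}
	return spec
}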

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@hypershift-operator/controllers/hostedcluster/network_policies.go` around
lines 719 - 842, The two functions
reconcileVirtLauncherNetworkPolicyExternalInfra and
reconcileVirtLauncherNetworkPolicy share almost identical ingress/egress
construction; extract a helper (e.g., buildVirtLauncherPolicyBase or
newVirtLauncherPolicyBuilder) that builds the common policy.Spec.PolicyTypes,
PodSelector, common Ingress rules and base Egress rules and accepts parameters
for the variable pieces (blockedIPv4Networks, blockedIPv6Networks, and a slice
of extra egress peers or a callback to append external-infra-specific peers).
Replace the duplicated blocks in both
reconcileVirtLauncherNetworkPolicyExternalInfra and
reconcileVirtLauncherNetworkPolicy to call the new helper, then apply the
external-specific deltas (adding service NodePort IPBlocks and the
infra-specific Pod/Namespace peers) to the returned policy or builder before
returning; keep existing symbol names (policy, hcluster, infraClusterNetwork) to
make locating sites straightforward.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@hypershift-operator/controllers/hostedcluster/network_policies.go`:
- Around line 166-168: The code performs a cluster-scoped lookup via
infraClient.Get on infraClusterNetwork (configv1.Network{Name:"cluster"}) which
requires cluster-level RBAC; change this by either removing the live
cluster-scoped lookup (avoid calling infraClient.Get in the network policy
reconciliation path and use a safer local/default value or accept a passed-in
configuration) or explicitly require and document ClusterRole/ClusterRoleBinding
granting get on networks.config.openshift.io for the controller; update the code
around infraClusterNetwork/infraClient.Get to implement the chosen approach and
add a RBAC manifest entry (ClusterRole + ClusterRoleBinding) if you opt to keep
the live lookup, ensuring it grants verbs=get on
resource=networks.config.openshift.io.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Organization UI (inherited)

Review profile: CHILL

Plan: Pro

Run ID: 72ef2682-9485-4808-b0a3-ac41022f2c6d

📥 Commits

Reviewing files that changed from the base of the PR and between c8ae120 and adca61c.

📒 Files selected for processing (2)
  • docs/content/how-to/kubevirt/external-infrastructure.md
  • hypershift-operator/controllers/hostedcluster/network_policies.go

Comment thread hypershift-operator/controllers/hostedcluster/network_policies.go Outdated
@dpateriya dpateriya force-pushed the fix/virt-launcher-netpol-external-infra branch 2 times, most recently from 6b6911d to a599aa8 on March 24, 2026 13:52
@dpateriya dpateriya changed the title Bug: Create virt-launcher NetworkPolicy on external infra cluster fix: OCPBUGS-78575: create virt-launcher NetworkPolicy on external infra cluster Mar 24, 2026
@openshift-ci-robot openshift-ci-robot added the jira/valid-reference and jira/invalid-bug labels Mar 24, 2026
@openshift-ci-robot

@dpateriya: This pull request references Jira Issue OCPBUGS-78575, which is invalid:

  • expected the bug to target the "4.22.0" version, but no target version was set

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

The bug has been updated to refer to the pull request using the external bug tracker.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
hypershift-operator/controllers/hostedcluster/network_policies.go (2)

181-183: Reuse the injected createOrUpdate in the external-infra branch.

Constructing a fresh upsert helper here bypasses the caller-supplied createOrUpdate, so this path no longer shares the same wrapper/test seam as the rest of reconcileNetworkPolicies.

Suggested change
-			infraCreateOrUpdate := upsert.New(r.EnableCIDebugOutput).CreateOrUpdate
-			if _, err := infraCreateOrUpdate(ctx, infraClient, policy, func() error {
+			if _, err := createOrUpdate(ctx, infraClient, policy, func() error {
 				return reconcileVirtLauncherNetworkPolicyExternalInfra(log, policy, hcluster, infraClusterNetwork)
 			}); err != nil {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@hypershift-operator/controllers/hostedcluster/network_policies.go` around
lines 181 - 183, This code creates a new upsert helper (infraCreateOrUpdate :=
upsert.New(r.EnableCIDebugOutput).CreateOrUpdate) instead of using the injected
caller-supplied createOrUpdate, breaking the test/wrapper seam; change the
branch to call the injected createOrUpdate (the same createOrUpdate used
elsewhere in reconcileNetworkPolicies/external-infra path) when applying policy
= networkpolicy.VirtLauncherNetworkPolicy(infraNamespace) so the call becomes
createOrUpdate(ctx, infraClient, policy, func() error { ... }) and remove the
local upsert.New(...) construction.

733-857: Extract the shared virt-launcher policy builder.

This helper copies most of reconcileVirtLauncherNetworkPolicy: selector setup, ingress rules, CIDR blocking, and NodePort exception handling. Only a small subset of egress peers differs, so keeping two near-identical implementations will drift quickly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@hypershift-operator/controllers/hostedcluster/network_policies.go` around
lines 733 - 857, The two virt-launcher reconciler functions are largely
duplicated; extract the shared logic into a helper (e.g.,
buildVirtLauncherPolicyBase or new function buildVirtLauncherPolicySpec) that
accepts the logger, policy pointer or returns a prepared
networkingv1.NetworkPolicySpec, the HostedCluster (hcluster) and
infraClusterNetwork and performs: setting PolicyTypes, PodSelector
(hyperv1.InfraIDLabel and "kubevirt.io":"virt-launcher"), ingress ports,
building blockedIPv4/IPv6 via addToBlockedNetworks, and the NodePort exception
handling loop over hcluster.Spec.Services (preserving netip.ParseAddr, utilsnet
IPv4/IPv6 checks and error returns). Have
reconcileVirtLauncherNetworkPolicyExternalInfra and the other
reconcileVirtLauncherNetworkPolicy call this helper to get the base Spec (or
mutate policy) and then append only their differing egress peers (the
DNS/ingress/peer differences), returning errors from the helper as needed.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@hypershift-operator/controllers/hostedcluster/network_policies.go`:
- Around line 170-179: The infra cluster Network lookup currently swallows
errors (infraClient.Get) leaving infraClusterNetwork nil which causes fallback
to unrestricted egress; instead, when infraClient.Get fails, either return a
reconciliation error from the surrounding reconcile function (so the controller
retries) or mark the HostedCluster as degraded via the existing status/condition
helper (e.g., setHostedClusterDegraded or SetProgressing/SetDegraded) with a
clear message referencing the failed infra network lookup; update the code paths
around infraClusterNetwork, networkObj and the virt-launcher NetworkPolicy
creation to rely on that error flow so we do not silently emit 0.0.0.0/0 and
::/0 rules when the CIDR cannot be retrieved.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Organization UI (inherited)

Review profile: CHILL

Plan: Pro

Run ID: 37a29eba-4fda-4644-8e36-3c43d3834bdc

📥 Commits

Reviewing files that changed from the base of the PR and between adca61c and 6b6911d.

📒 Files selected for processing (2)
  • docs/content/how-to/kubevirt/external-infrastructure.md
  • hypershift-operator/controllers/hostedcluster/network_policies.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • docs/content/how-to/kubevirt/external-infrastructure.md

Comment thread hypershift-operator/controllers/hostedcluster/network_policies.go
@dpateriya dpateriya force-pushed the fix/virt-launcher-netpol-external-infra branch from a599aa8 to 5c389e3 on March 24, 2026 14:18
@openshift-ci openshift-ci bot added the area/api label Mar 24, 2026
Contributor

@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (1)
hypershift-operator/controllers/hostedcluster/network_policies.go (1)

172-177: ⚠️ Potential issue | 🟠 Major

Don't fall back on every infra-network lookup error.

This path currently treats transient API/auth/connectivity failures the same as missing RBAC and silently creates the weaker no-CIDR-blocking policy instead of retrying. Reserve the fallback for the expected permission / unsupported-resource cases and return the rest.

Possible fix
-			if err := infraClient.Get(ctx, client.ObjectKeyFromObject(networkObj), networkObj); err != nil {
-				log.Info("unable to read networks.config.openshift.io/cluster from the infrastructure cluster; "+
-					"virt-launcher NetworkPolicy will be created without CIDR-based egress restrictions. "+
-					"Grant get permission on networks.config.openshift.io via a ClusterRole for full network isolation",
-					"error", err)
-			} else {
-				infraClusterNetwork = networkObj
-			}
+			if err := infraClient.Get(ctx, client.ObjectKeyFromObject(networkObj), networkObj); err != nil {
+				if apierrors.IsForbidden(err) || apierrors.IsNotFound(err) || meta.IsNoMatchError(err) {
+					log.Info("unable to read networks.config.openshift.io/cluster from the infrastructure cluster; "+
+						"virt-launcher NetworkPolicy will be created without CIDR-based egress restrictions. "+
+						"Grant get permission on networks.config.openshift.io via a ClusterRole for full network isolation",
+						"error", err)
+				} else {
+					return fmt.Errorf("failed to get infrastructure cluster network config: %w", err)
+				}
+			} else {
+				infraClusterNetwork = networkObj
+			}

This needs k8s.io/apimachinery/pkg/api/errors and k8s.io/apimachinery/pkg/api/meta imports above.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@hypershift-operator/controllers/hostedcluster/network_policies.go` around
lines 172 - 177, The current error handling around infraClient.Get(networkObj)
in network_policies.go treats all errors as "missing RBAC" and silently falls
back; change it to only swallow expected permission/unsupported-resource errors
(e.g., apierrors.IsNotFound(err), apierrors.IsForbidden(err), or
meta.IsNoMatchError(err)) and for any other error return/propagate it so the
reconcile will retry; update the branch around the Get call (the networkObj /
infraClient.Get(...) handling) to import and use
k8s.io/apimachinery/pkg/api/errors and k8s.io/apimachinery/pkg/api/meta and only
perform the no-CIDR-blocking fallback on those specific conditions, otherwise
return the original error.
🧹 Nitpick comments (1)
hypershift-operator/controllers/hostedcluster/network_policies.go (1)

181-183: Reuse the injected createOrUpdate here.

reconcileNetworkPolicies already receives a CreateOrUpdateFN that takes the target client. Spinning up a fresh upserter only for this branch bypasses that seam and makes tests/observability wrappers easier to miss on the external-infra path.

Small cleanup
-			infraCreateOrUpdate := upsert.New(r.EnableCIDebugOutput).CreateOrUpdate
-			if _, err := infraCreateOrUpdate(ctx, infraClient, policy, func() error {
+			if _, err := createOrUpdate(ctx, infraClient, policy, func() error {
 				return reconcileVirtLauncherNetworkPolicyExternalInfra(log, policy, hcluster, infraClusterNetwork)
 			}); err != nil {

As per coding guidelines, "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@hypershift-operator/controllers/hostedcluster/network_policies.go` around
lines 181 - 183, The code creates a new upserter for the infra path
(infraCreateOrUpdate := upsert.New(r.EnableCIDebugOutput).CreateOrUpdate)
instead of using the injected CreateOrUpdateFN passed into
reconcileNetworkPolicies; remove the upsert.New(...) call and invoke the
existing injected createOrUpdate function with the infraClient (i.e., call
createOrUpdate(ctx, infraClient, policy, func() error { ... })) so the
external-infra path uses the same create/update wrapper, tests and observability
hooks.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Organization UI (inherited)

Review profile: CHILL

Plan: Pro

Run ID: eae1fb84-55eb-47d1-a51c-1034c5cf8414

📥 Commits

Reviewing files that changed from the base of the PR and between 6b6911d and a599aa8.

📒 Files selected for processing (2)
  • docs/content/how-to/kubevirt/external-infrastructure.md
  • hypershift-operator/controllers/hostedcluster/network_policies.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • docs/content/how-to/kubevirt/external-infrastructure.md

@dpateriya dpateriya force-pushed the fix/virt-launcher-netpol-external-infra branch from 5c389e3 to f33bbc9 on March 24, 2026 14:54
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@api/hypershift/v1beta1/hostedcluster_conditions.go`:
- Around line 125-132: The new condition type
ValidKubeVirtInfraNetworkPolicyRBAC is defined but not registered in the
ExpectedHCConditions initialization, so add ValidKubeVirtInfraNetworkPolicyRBAC
to the KubevirtPlatform case in the ExpectedHCConditions slice/map in
support/conditions/conditions.go (the same place where
ValidKubeVirtInfraNetworkMTU and KubeVirtNodesLiveMigratable are listed) so the
reconciliation logic will initialize and maintain this condition; update the
KubevirtPlatform entry to include the ValidKubeVirtInfraNetworkPolicyRBAC
ConditionType.

In `@hypershift-operator/controllers/hostedcluster/hostedcluster_controller.go`:
- Around line 2015-2022: The ValidKubeVirtInfraNetworkPolicyRBAC condition set
by reconcileNetworkPolicies must be persisted immediately instead of waiting
until after reconcileKubevirtCSIClusterRBAC; update the HostedCluster status
(call r.Client.Status().Update(ctx, hcluster) and handle apierrors.IsConflict
the same way) right after the condition is set (or at least before any
subsequent paths that may return an error such as
reconcileKubevirtCSIClusterRBAC), so the infra-network RBAC condition is never
lost or left stale even if later KubeVirt steps fail.

In `@hypershift-operator/controllers/hostedcluster/network_policies.go`:
- Around line 721-730: Update the NamespaceSelector for the peer that targets
ingress router pods so it uses the standard namespace label key
"kubernetes.io/metadata.name" instead of "name"; locate the block that has
PodSelector matching
"ingresscontroller.operator.openshift.io/deployment-ingresscontroller":
"default" and replace the NamespaceSelector MatchLabels key so it matches
"kubernetes.io/metadata.name": "openshift-ingress" to ensure the selector
actually matches the openshift-ingress namespace.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Organization UI (inherited)

Review profile: CHILL

Plan: Pro

Run ID: c57503d0-c095-4f5e-8422-9ea2faf6a6e3

📥 Commits

Reviewing files that changed from the base of the PR and between a599aa8 and f33bbc9.

⛔ Files ignored due to path filters (1)
  • vendor/github.com/openshift/hypershift/api/hypershift/v1beta1/hostedcluster_conditions.go is excluded by !vendor/**, !**/vendor/**
📒 Files selected for processing (4)
  • api/hypershift/v1beta1/hostedcluster_conditions.go
  • docs/content/how-to/kubevirt/external-infrastructure.md
  • hypershift-operator/controllers/hostedcluster/hostedcluster_controller.go
  • hypershift-operator/controllers/hostedcluster/network_policies.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • docs/content/how-to/kubevirt/external-infrastructure.md

Comment thread api/hypershift/v1beta1/hostedcluster_conditions.go
Comment thread hypershift-operator/controllers/hostedcluster/network_policies.go
@dpateriya dpateriya force-pushed the fix/virt-launcher-netpol-external-infra branch from f33bbc9 to 04deba2 on March 24, 2026 15:28
@openshift-ci openshift-ci bot added the area/control-plane-operator and area/testing labels Mar 24, 2026
Contributor

@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (2)
hypershift-operator/controllers/hostedcluster/network_policies.go (1)

721-730: ⚠️ Potential issue | 🟠 Major

Use the namespace label key that this router peer can actually match.

The selector at Line 729 still depends on name=openshift-ingress, while this file otherwise matches standard namespaces via kubernetes.io/metadata.name. If the namespace only has the standard label, this peer never matches and virt-launcher egress to router pods stays blocked.

Suggested fix
 			NamespaceSelector: &metav1.LabelSelector{
 				MatchLabels: map[string]string{
-					"name": "openshift-ingress",
+					"kubernetes.io/metadata.name": "openshift-ingress",
 				},
 			},
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@hypershift-operator/controllers/hostedcluster/network_policies.go` around
lines 721 - 730, The NamespaceSelector in the NetworkPolicy peer uses the
non-standard label key "name" which prevents matching namespaces that only have
the standard label; update the NamespaceSelector.MatchLabels key from "name" to
"kubernetes.io/metadata.name" in the NetworkPolicy block (the peer with
PodSelector matching
"ingresscontroller.operator.openshift.io/deployment-ingresscontroller":
"default") so the selector correctly matches the openshift-ingress namespace.
hypershift-operator/controllers/hostedcluster/hostedcluster_controller.go (1)

2011-2022: ⚠️ Potential issue | 🟠 Major

Persist ValidKubeVirtInfraNetworkPolicyRBAC before the CSI RBAC reconcile.

The extra status write at Line 2016 still runs only after reconcileKubevirtCSIClusterRBAC. If that call fails, the new infra-network RBAC condition can still be left stale or missing, which removes the main diagnostic signal for the external-infra path.

Suggested fix
 case hyperv1.KubevirtPlatform:
+	if hcluster.Spec.Platform.Kubevirt != nil && hcluster.Spec.Platform.Kubevirt.Credentials != nil {
+		if err := r.Client.Status().Update(ctx, hcluster); err != nil {
+			if apierrors.IsConflict(err) {
+				return ctrl.Result{Requeue: true}, nil
+			}
+			return ctrl.Result{}, fmt.Errorf("failed to update status after network policy RBAC check: %w", err)
+		}
+	}
 	err = r.reconcileKubevirtCSIClusterRBAC(ctx, createOrUpdate, hcluster)
 	if err != nil {
 		return ctrl.Result{}, fmt.Errorf("failed to reconcile kubevirt CSI cluster wide RBAC: %w", err)
 	}
-	if hcluster.Spec.Platform.Kubevirt != nil && hcluster.Spec.Platform.Kubevirt.Credentials != nil {
-		if err := r.Client.Status().Update(ctx, hcluster); err != nil {
-			if apierrors.IsConflict(err) {
-				return ctrl.Result{Requeue: true}, nil
-			}
-			return ctrl.Result{}, fmt.Errorf("failed to update status after network policy RBAC check: %w", err)
-		}
-	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@hypershift-operator/controllers/hostedcluster/hostedcluster_controller.go`
around lines 2011 - 2022, The status update for the infra-network RBAC condition
is happening after reconcileKubevirtCSIClusterRBAC and can be skipped if that
call errors; move/persist the ValidKubeVirtInfraNetworkPolicyRBAC status write
so it runs before or independently of reconcileKubevirtCSIClusterRBAC.
Specifically, ensure the code that sets/updates the
ValidKubeVirtInfraNetworkPolicyRBAC condition on hcluster and calls
r.Client.Status().Update(ctx, hcluster) executes (and handles
apierrors.IsConflict) prior to invoking r.reconcileKubevirtCSIClusterRBAC,
leaving reconcileKubevirtCSIClusterRBAC to run afterwards so the infra-network
RBAC diagnostic state cannot be lost if CSI RBAC reconciliation fails.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Organization UI (inherited)

Review profile: CHILL

Plan: Pro

Run ID: e6fa90e7-dc22-429b-9381-bf3a998f498e

📥 Commits

Reviewing files that changed from the base of the PR and between f33bbc9 and 04deba2.

⛔ Files ignored due to path filters (3)
  • docs/content/reference/aggregated-docs.md is excluded by !docs/content/reference/aggregated-docs.md
  • docs/content/reference/api.md is excluded by !docs/content/reference/api.md
  • vendor/github.com/openshift/hypershift/api/hypershift/v1beta1/hostedcluster_conditions.go is excluded by !vendor/**, !**/vendor/**
📒 Files selected for processing (6)
  • api/hypershift/v1beta1/hostedcluster_conditions.go
  • docs/content/how-to/kubevirt/external-infrastructure.md
  • hypershift-operator/controllers/hostedcluster/hostedcluster_controller.go
  • hypershift-operator/controllers/hostedcluster/network_policies.go
  • support/conditions/conditions.go
  • test/e2e/util/util.go
🚧 Files skipped from review as they are similar to previous changes (2)
  • api/hypershift/v1beta1/hostedcluster_conditions.go
  • docs/content/how-to/kubevirt/external-infrastructure.md

@dpateriya dpateriya force-pushed the fix/virt-launcher-netpol-external-infra branch from 04deba2 to d7f90ca on March 26, 2026 15:06
@dpateriya
Author

/retest

@orenc1
Contributor

orenc1 commented Mar 31, 2026

/lgtm

@openshift-ci openshift-ci bot added the lgtm label Mar 31, 2026
@openshift-ci-robot

Scheduling tests matching the pipeline_run_if_changed or not excluded by pipeline_skip_if_only_changed parameters:
/test e2e-aks-4-21
/test e2e-aws-4-21
/test e2e-aks
/test e2e-aws
/test e2e-aws-upgrade-hypershift-operator
/test e2e-azure-self-managed
/test e2e-kubevirt-aws-ovn-reduced
/test e2e-v2-aws

@dpateriya
Author

/test e2e-aws

@dpateriya
Author

/jira refresh

@openshift-ci-robot openshift-ci-robot added the jira/valid-bug label and removed the jira/invalid-bug label Mar 31, 2026
@openshift-ci-robot

@dpateriya: This pull request references Jira Issue OCPBUGS-78575, which is valid. The bug has been moved to the POST state.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.22.0) matches configured target version for branch (4.22.0)
  • bug is in the state New, which is one of the valid states (NEW, ASSIGNED, POST)
Details

In response to this:

/jira refresh

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@dpateriya
Author

@enxebre @sjenning could one of you add /approve when you have a moment? @orenc1 has already /lgtm, CI is green, and @csrwng is on PTO until mid-April so we need another approver to satisfy OWNERS + api/OWNERS.

This change adds the virt-launcher NetworkPolicy on the external KubeVirt infra cluster (OCPBUGS-78575), including the follow-up fixes for ingress namespace labels and persisting the RBAC status before CSI reconcile.

CC: @celebdor

@JoelSpeed
Contributor

/approve

Changes under api LGTM

@qinqon
Contributor

qinqon commented Apr 7, 2026

/lgtm
/approve

@openshift-ci
Contributor

openshift-ci bot commented Apr 7, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: dpateriya, JoelSpeed, qinqon
Once this PR has been reviewed and has the lgtm label, please assign csrwng for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Details

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

Contributor

@csrwng csrwng left a comment


My main concern is the impact of this change on existing kubevirt deployments that are already using external infra and don't have the updated RBAC to allow creating/updating network policies.

Other than that, looks good.

Comment thread hypershift-operator/controllers/hostedcluster/network_policies.go Outdated
Comment thread hypershift-operator/controllers/hostedcluster/network_policies.go
@dpateriya
Author

@csrwng The ingress namespace selector change from "name": "openshift-ingress" to kubernetes.io/metadata.name was a CodeRabbit-driven fix so the selector matches the namespace's actual labels.

Regarding existing hosted clusters that already use external infra, we can:

  • Treat infra NetworkPolicy create/update failures (e.g. 403 Forbidden) as an operator-visible outcome: set ValidKubeVirtInfraNetworkPolicyRBAC to False with a concrete message (missing networking.k8s.io/networkpolicies verbs / namespace scope) and emit a warning Event on the HostedCluster so support and customers can see why isolation is not yet enforced (a sketch follows this list).

  • Keep the docs / kv-external-infra-role path as the supported fix and add an explicit “before upgrading HyperShift” note that existing infra roles must gain networkpolicies permissions so reconciliation can apply the policy after upgrade.
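
A rough sketch of what option 1 could look like, reusing the condition this PR already adds; the helper name, reason string, and event reason below are illustrative assumptions rather than existing constants:

package hostedcluster

import (
	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/record"

	hyperv1 "github.com/openshift/hypershift/api/hypershift/v1beta1"
)

// reportInfraNetworkPolicyRBACFailure is a hypothetical helper sketching option 1:
// on a Forbidden error from the infra cluster, flip the condition this PR adds to
// False and emit a warning Event so the missing networkpolicies RBAC is visible.
func reportInfraNetworkPolicyRBACFailure(recorder record.EventRecorder, hcluster *hyperv1.HostedCluster, err error) {
	if err == nil || !apierrors.IsForbidden(err) {
		return
	}
	meta.SetStatusCondition(&hcluster.Status.Conditions, metav1.Condition{
		Type:               string(hyperv1.ValidKubeVirtInfraNetworkPolicyRBAC),
		Status:             metav1.ConditionFalse,
		Reason:             "NetworkPolicyRBACMissing", // hypothetical reason constant
		Message:            "infra kubeconfig cannot create or update networkpolicies in the infra namespace: " + err.Error(),
		ObservedGeneration: hcluster.Generation,
	})
	recorder.Eventf(hcluster, corev1.EventTypeWarning, "KubeVirtInfraNetworkPolicyRBAC",
		"virt-launcher NetworkPolicy not applied on the infra cluster: %v", err)
}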

If you have any other suggestion let me know.

@csrwng
Contributor

csrwng commented Apr 15, 2026

Of the options above, I prefer 1. (nobody reads docs)

Also, for the ingress label, if we're going to change it from what it was in the past, we should use what the networking team recommends:

network.openshift.io/policy-group: ingress

See:
https://github.com/openshift/enhancements/blob/master/enhancements/network/allow-from-router-networkpolicy.md?plain=1
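
For illustration, a peer built with that label would look roughly like this (a sketch, not the PR's code):

package main

import (
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Peer selecting ingress-capable namespaces by the policy-group label the
	// linked enhancement recommends, instead of a namespace-name label.
	routerPeer := networkingv1.NetworkPolicyPeer{
		NamespaceSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"network.openshift.io/policy-group": "ingress"},
		},
	}
	fmt.Println(routerPeer.NamespaceSelector.MatchLabels)
}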

@dpateriya dpateriya force-pushed the fix/virt-launcher-netpol-external-infra branch from d7f90ca to 1bb7f9f on April 15, 2026 17:19
@openshift-ci openshift-ci bot removed the lgtm label Apr 15, 2026
@openshift-ci
Contributor

openshift-ci bot commented Apr 15, 2026

New changes are detected. LGTM label has been removed.

@codecov

codecov bot commented Apr 15, 2026

Codecov Report

❌ Patch coverage is 55.39906% with 95 lines in your changes missing coverage. Please review.
⚠️ Please upload report for BASE (main@c8ae120). Learn more about missing BASE report.

Files with missing lines Patch % Lines
...ator/controllers/hostedcluster/network_policies.go 57.35% 82 Missing and 5 partials ⚠️
...trollers/hostedcluster/hostedcluster_controller.go 16.66% 3 Missing and 2 partials ⚠️
support/conditions/conditions.go 0.00% 3 Missing ⚠️
Additional details and impacted files
@@           Coverage Diff           @@
##             main    #8056   +/-   ##
=======================================
  Coverage        ?   35.60%           
=======================================
  Files           ?      767           
  Lines           ?    93459           
  Branches        ?        0           
=======================================
  Hits            ?    33280           
  Misses          ?    57484           
  Partials        ?     2695           
Files with missing lines Coverage Δ
support/conditions/conditions.go 0.00% <0.00%> (ø)
...trollers/hostedcluster/hostedcluster_controller.go 43.23% <16.66%> (ø)
...ator/controllers/hostedcluster/network_policies.go 48.81% <57.35%> (ø)

Contributor

@csrwng csrwng left a comment


A couple of comments

meta.SetStatusCondition(&hcluster.Status.Conditions, metav1.Condition{
	Type:   string(hyperv1.ValidKubeVirtInfraNetworkPolicyRBAC),
	Status: metav1.ConditionFalse,
	Reason: hyperv1.InfraClusterNetworkReadFailedReason,
Contributor


The reason should be different.

ValidKubeVirtInfraNetworkMTU ConditionType = "ValidKubeVirtInfraNetworkMTU"

// ValidKubeVirtInfraNetworkPolicyRBAC indicates whether the external infra
// kubeconfig has sufficient permissions to read the infrastructure cluster's
Contributor


This description should be updated to reflect that the condition now applies to insufficient RBAC for networkpolicies in addition to the network configuration.
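
For example, the comment could be broadened along these lines (wording is only a suggestion):

// ValidKubeVirtInfraNetworkPolicyRBAC indicates whether the external infra
// kubeconfig has sufficient RBAC both to read the infrastructure cluster's
// network configuration (networks.config.openshift.io) and to create or
// update the virt-launcher NetworkPolicy in the infra namespace.
ValidKubeVirtInfraNetworkPolicyRBAC ConditionType = "ValidKubeVirtInfraNetworkPolicyRBAC"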

@dpateriya dpateriya force-pushed the fix/virt-launcher-netpol-external-infra branch from 1bb7f9f to 2cfbc6c on April 16, 2026 18:38
…fra cluster

When deploying HCP KubeVirt with external infrastructure (workers on a
separate cluster), the virt-launcher NetworkPolicy was never created on
the infrastructure cluster. The reconcileNetworkPolicies function
explicitly skipped it when Credentials != nil, leaving guest VMs with
unrestricted network access to all pods and services on the infra
cluster.

This patch adds an else branch that uses the existing infra cluster
client (from KubevirtInfraClientMap) to create the virt-launcher
NetworkPolicy in the infra namespace on the infrastructure cluster.

A new reconcileVirtLauncherNetworkPolicyExternalInfra function builds
the policy adapted for external infra:
- Blocks infra cluster's clusterNetwork/serviceNetwork CIDRs
- Allows inter-VM, DNS, and ingress controller traffic
- Omits control-plane pod selectors (kube-apiserver, oauth,
  ignition-server-proxy) since those pods run on the management
  cluster and are reached via external IPs

The Network config lookup is best-effort: if the infra kubeconfig
lacks cluster-scoped get on networks.config.openshift.io, the policy
is still created but without CIDR-based egress blocking.

Also updates the documented minimum RBAC role for external infra to
include networkpolicies (networking.k8s.io) and documents the
optional ClusterRole for full network isolation.

Made-with: Cursor
@dpateriya dpateriya force-pushed the fix/virt-launcher-netpol-external-infra branch from 2cfbc6c to 7b03ca8 on April 16, 2026 18:47
@hypershift-jira-solve-ci

hypershift-jira-solve-ci bot commented Apr 16, 2026

Now I have the complete root cause analysis. Here's my final report:

Test Failure Analysis Complete

Job Information

Test Failure Analysis

Error

[FAILED] [hypershift.openshift.io/v1beta1, Resource=hcpetcdbackups]
  [FeatureSet="CustomNoUpgrade"][File=hcpetcdbackups-CustomNoUpgrade.crd.yaml]
  HCPEtcdBackup immutability [BeforeEach] On Update
  When S3 spec storage is changed it should fail

Expected success, but got an error:
    <context.deadlineExceededError>:
    context deadline exceeded
    {}
In [BeforeEach] at: test/envtest/generator.go:196

Summary

This is a known flaky test unrelated to PR #8056. The HCPEtcdBackup immutability envtest fails intermittently when WaitForCRDs() times out because a preceding test suite (GenerateCRDInstallTest) uninstalls CRDs without waiting for the API server to fully remove them. When the next test suite starts and tries to re-install the same CRD, it encounters a stale CRD in a transitional deletion state, causing the 30-second WaitForCRDs timeout to expire. PR #8056 does not modify any test infrastructure or CRD files — its changes are limited to network_policies.go, hostedcluster_controller.go, and API condition definitions. The flake hits different Kube versions non-deterministically: in the prior run of this same PR, Kube 1.33.0 passed but 1.35.0 failed; in the current run, the opposite occurred. An unrelated PR (AUTOSCALE-615) also hit the identical failure on the same day.

Root Cause

The root cause is a race condition in test/envtest/generator.go in the GenerateCRDInstallTest function (line ~249-252). This function:

  1. Installs all HyperShift CRDs into the envtest API server
  2. Validates them with WaitForCRDs
  3. Calls UninstallCRDs to clean up
  4. Returns immediately without waiting for the API server to fully delete the CRDs

When the subsequent per-suite test (GenerateTestSuite for hcpetcdbackups-CustomNoUpgrade.crd.yaml) runs its BeforeEach (line 177), it:

  1. Calls InstallCRDs — this may succeed because the CRD object can be re-created even while deletion finalizers are still processing
  2. Calls WaitForCRDs (line 194-196) with a 30-second timeout — this times out because the API server is still processing the deletion/re-creation transition, and the CRD never reaches a fully "ready" state within the window

The non-deterministic nature (different Kube versions fail on different runs) is because the race depends on timing of the API server's CRD deletion controller, which varies with envtest server startup time and runner load.

An open fix exists: PR #8261 (OCPBUGS-83585) adds a wait-for-removal loop after UninstallCRDs in GenerateCRDInstallTest, matching the pattern already used in GenerateTestSuite's AfterEach. That PR's own envtest runs all passed across all 5 Kube versions.

Recommendations
  1. Re-run the failed workflow — This is a transient flake; a retry will very likely pass (as demonstrated by the prior run where 1.33.0 succeeded).
  2. Merge PR #8261 (OCPBUGS-83585: Wait for CRD removal in GenerateCRDInstallTest to fix flaky envtest) to permanently fix the flaky test. That PR adds the missing wait-for-removal loop in GenerateCRDInstallTest, which is the actual root cause of this intermittent failure.
  3. No changes needed in PR #8056 (fix: OCPBUGS-78575: create virt-launcher NetworkPolicy on external infra cluster). The PR's code (network policy reconciliation for external infra KubeVirt clusters) is entirely unrelated to the CRD envtest infrastructure.
Evidence
Evidence

  • Failed test: HCPEtcdBackup immutability / On Update / When S3 spec storage is changed it should fail
  • Failure location: test/envtest/generator.go:196 (WaitForCRDs() timeout after 30s)
  • Error: context deadline exceeded
  • PR #8056 files changed: network_policies.go, hostedcluster_controller.go, hostedcluster_conditions.go, docs — no test/envtest changes
  • Same flake in prior run: Run 24527522885 (same PR) — Kube 1.35.0 failed with identical error, Kube 1.33.0 passed
  • Same flake in unrelated PR: Run 24528897363 (AUTOSCALE-615) — Kube 1.35.0 failed with identical error
  • Fix: PR #8261 (OCPBUGS-83585) — adds Eventually wait-for-removal after UninstallCRDs in GenerateCRDInstallTest
  • Fix PR test results: all 5 Kube versions passed in PR #8261's envtest run (24520020839)
  • Recent main branch runs: all passing (24527591801, 24520961974, 24519503954) — flake is intermittent

@dpateriya
Author

/retest

@dpateriya
Author

/retest-required

@openshift-ci
Contributor

openshift-ci bot commented Apr 17, 2026

@dpateriya: all tests passed!

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.


Labels

area/api, area/control-plane-operator, area/documentation, area/hypershift-operator, area/platform/kubevirt, area/testing, jira/valid-bug, jira/valid-reference
