Add VM file restore feature integration #2003
Conversation
Skipping CI for Draft Pull Request.
Walkthrough

This pull request introduces VM file restore feature support to OADP. Changes include new API types with configuration structs, two new Kubernetes CRDs (VirtualMachineBackupsDiscovery and VirtualMachineFileRestore), a controller implementation for managing VM file restore deployments, RBAC configurations, updated manifests, and documentation. Build configuration and manager environment variables are updated to support VM file restore components.
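For orientation, here is a minimal sketch of what enabling the feature from Go might look like, assuming the DPA.Spec.VMFileRestore.Enable field described in this PR; the surrounding type and constant names (ApplicationConfig, VeleroConfig, DefaultPlugin*) are assumptions based on the existing OADP API, not verified against the final code.

```go
package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	oadpv1alpha1 "github.com/openshift/oadp-operator/api/v1alpha1"
)

// newVMFileRestoreDPA sketches a DataProtectionApplication that turns the
// feature on. VMFileRestore.Enable comes from this PR; the other type and
// constant names are assumed to match the existing OADP API.
func newVMFileRestoreDPA() *oadpv1alpha1.DataProtectionApplication {
	enable := true
	return &oadpv1alpha1.DataProtectionApplication{
		ObjectMeta: metav1.ObjectMeta{Name: "dpa-sample", Namespace: "openshift-adp"},
		Spec: oadpv1alpha1.DataProtectionApplicationSpec{
			Configuration: &oadpv1alpha1.ApplicationConfig{
				Velero: &oadpv1alpha1.VeleroConfig{
					// Both plugins are required by the new validation.
					DefaultPlugins: []oadpv1alpha1.DefaultPlugin{
						oadpv1alpha1.DefaultPluginOpenShift,
						oadpv1alpha1.DefaultPluginKubeVirt,
					},
				},
			},
			VMFileRestore: &oadpv1alpha1.VMFileRestore{
				Enable: &enable,
			},
		},
	}
}

func main() { _ = newVMFileRestoreDPA() }
```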
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Actionable comments posted: 6
🧹 Nitpick comments (1)
bundle/manifests/oadp.openshift.io_dataprotectionapplications.yaml (1)
2672-2676: CRD keys/surface LGTM; align resources schema with existing sections.
- New image override keys and vmFileRestore field look consistent.
- Consider adding nullable: true to .spec.vmFileRestore.resources.{limits,requests} to match other sections and avoid rejecting explicit nulls:

```diff
-  limits:
+  limits:
+    nullable: true
   ...
-  requests:
+  requests:
+    nullable: true
```

Please also confirm the controller honors these defaults if users omit resources.
Also applies to: 2680-2746
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting
📒 Files selected for processing (25)
Makefile (1 hunks), README.md (1 hunks), api/v1alpha1/dataprotectionapplication_types.go (4 hunks), api/v1alpha1/zz_generated.deepcopy.go (2 hunks), bundle/manifests/oadp-operator.clusterserviceversion.yaml (6 hunks), bundle/manifests/oadp.openshift.io_dataprotectionapplications.yaml (1 hunks), bundle/manifests/oadp.openshift.io_virtualmachinebackupsdiscoveries.yaml (1 hunks), bundle/manifests/oadp.openshift.io_virtualmachinefilerestores.yaml (1 hunks), config/crd/bases/oadp.openshift.io_dataprotectionapplications.yaml (1 hunks), config/crd/bases/oadp.openshift.io_virtualmachinebackupsdiscoveries.yaml (1 hunks), config/crd/bases/oadp.openshift.io_virtualmachinefilerestores.yaml (1 hunks), config/crd/kustomization.yaml (1 hunks), config/manager/manager.yaml (1 hunks), config/manifests/kustomization.yaml (1 hunks), config/rbac/role.yaml (6 hunks), config/vm-file-restore-controller_rbac/kustomization.yaml (1 hunks), config/vm-file-restore-controller_rbac/role.yaml (1 hunks), config/vm-file-restore-controller_rbac/role_binding.yaml (1 hunks), config/vm-file-restore-controller_rbac/service_account.yaml (1 hunks), docs/config/vm_file_restore.md (1 hunks), internal/controller/dataprotectionapplication_controller.go (1 hunks), internal/controller/validator.go (1 hunks), internal/controller/validator_test.go (1 hunks), internal/controller/vmfilerestore_controller.go (1 hunks), internal/controller/vmfilerestore_controller_test.go (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**
⚙️ CodeRabbit configuration file
- Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity.
Files:
README.md, Makefile, config/vm-file-restore-controller_rbac/service_account.yaml, internal/controller/validator.go, internal/controller/dataprotectionapplication_controller.go, docs/config/vm_file_restore.md, config/crd/kustomization.yaml, config/crd/bases/oadp.openshift.io_virtualmachinefilerestores.yaml, config/crd/bases/oadp.openshift.io_virtualmachinebackupsdiscoveries.yaml, bundle/manifests/oadp.openshift.io_virtualmachinefilerestores.yaml, api/v1alpha1/zz_generated.deepcopy.go, bundle/manifests/oadp.openshift.io_virtualmachinebackupsdiscoveries.yaml, internal/controller/validator_test.go, config/vm-file-restore-controller_rbac/role_binding.yaml, config/crd/bases/oadp.openshift.io_dataprotectionapplications.yaml, config/manifests/kustomization.yaml, config/manager/manager.yaml, internal/controller/vmfilerestore_controller_test.go, config/vm-file-restore-controller_rbac/kustomization.yaml, internal/controller/vmfilerestore_controller.go, bundle/manifests/oadp-operator.clusterserviceversion.yaml, config/vm-file-restore-controller_rbac/role.yaml, bundle/manifests/oadp.openshift.io_dataprotectionapplications.yaml, api/v1alpha1/dataprotectionapplication_types.go, config/rbac/role.yaml
🪛 golangci-lint (2.5.0)
internal/controller/validator_test.go
[error] 2842-2842: File is not properly formatted
(gci)
internal/controller/vmfilerestore_controller_test.go
[error] 30-30: File is not properly formatted
(gci)
internal/controller/vmfilerestore_controller.go
[error] 14-14: File is not properly formatted
(gci)
🔇 Additional comments (13)
README.md (1)
100-100: README link addition is fine. Good cross-link to feature docs.
config/crd/bases/oadp.openshift.io_virtualmachinebackupsdiscoveries.yaml (1)
19-35: Field naming vs docs: ensure examples match spec. This CRD supports a time range and requestedBackups[]; there is no backupName field. Update docs/examples accordingly (see docs comment).

Also applies to: 59-91
docs/config/vm_file_restore.md (1)
92-97: Documented default resources: cite source or adjust. You list default CPU/memory. Please point to code (controller defaults) or remove to avoid drift.
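One way to avoid that drift is for the docs to point at a single source of truth in the controller code. A hedged sketch of such a default; the quantities below are placeholders, not necessarily the values this PR ships:

```go
package controller

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// defaultVMFileRestoreResources is a single place the docs could reference for
// the controller's default requests/limits. The quantities are illustrative
// placeholders, not the values shipped by this PR.
func defaultVMFileRestoreResources() corev1.ResourceRequirements {
	return corev1.ResourceRequirements{
		Requests: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("10m"),
			corev1.ResourceMemory: resource.MustParse("64Mi"),
		},
		Limits: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("500m"),
			corev1.ResourceMemory: resource.MustParse("128Mi"),
		},
	}
}
```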
Makefile (1)
50-50: CSV wiring verified; all requirements present. The Makefile change correctly adds the new service account to bundle generation. Verification confirms:
- ServiceAccount oadp-vm-file-restore-controller-manager present in install.spec (line 986)
- All four RELATED_IMAGE env vars present: CONTROLLER, ACCESS, SSH, BROWSER (lines 1297–1303)
- All four relatedImages entries present (lines 1461–1475)
No further action needed.
config/vm-file-restore-controller_rbac/service_account.yaml (1)
1-12: Review comment contains inaccuracy about namespace configuration mechanism.The original claim that "system" is a kustomize placeholder is incorrect. The namespace is hardcoded as a literal value and will not be overridden by the config/default overlay since
vm-file-restore-controller_rbacis not included there (it's only referenced inconfig/manifests/kustomization.yaml).However, the ServiceAccount and ClusterRoleBinding binding is correct: the subjects section properly references the SA by name and namespace.
Verify whether the "system" namespace is intentional or if this RBAC should be deployed in the operator namespace (openshift-adp) alongside other components.
config/crd/bases/oadp.openshift.io_dataprotectionapplications.yaml (1)
2672-2746: LGTM - CRD schema extension follows Kubernetes conventions. The additions properly extend the DPA CRD to support VM file restore:
- Image override keys are documented in the unsupportedOverrides description
- The vmFileRestore spec follows standard Kubernetes ResourceRequirements structure
- Field descriptions are clear and match the feature requirements
config/manifests/kustomization.yaml (1)
10-10: LGTM - RBAC resource correctly added. The VM file restore controller RBAC resource follows the same pattern as the non-admin controller above it.
config/crd/kustomization.yaml (1)
25-26: LGTM - CRD resources correctly registered. The two new VM-related CRDs are properly added to the kustomization resource list.
config/vm-file-restore-controller_rbac/kustomization.yaml (1)
1-9: LGTM - Standard RBAC kustomization structure. The kustomization correctly references the three required RBAC resources with clear comments explaining the purpose.
config/manager/manager.yaml (1)
91-98: Verify image versioning strategy. The new VM file restore image environment variables use the latest tag. While this follows the existing pattern in this file, ensure your deployment strategy accounts for image version pinning in production environments to avoid unexpected behavior from image updates.

internal/controller/dataprotectionapplication_controller.go (1)
114-114: LGTM - Reconcile function properly integrated. The VM file restore controller reconciliation is correctly added to the batch sequence after non-admin controller setup.
internal/controller/validator.go (1)
308-360: LGTM - Comprehensive validation logic. The VM file restore validation correctly enforces the following (a compact sketch follows the list below):
- Single controller instance across the cluster
- Required Velero configuration presence
- Mandatory kubevirt and openshift plugins
The validation follows the established pattern used for non-admin controller validation and provides clear error messages.
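A compact sketch of those checks, minus the cluster-wide single-instance lookup; function and field names here are illustrative, not the PR's exact code in internal/controller/validator.go, and the plugin constants are assumed from the existing OADP API:

```go
package controller

import (
	"errors"
	"fmt"

	oadpv1alpha1 "github.com/openshift/oadp-operator/api/v1alpha1"
)

// validateVMFileRestoreSketch mirrors the checks described above; names and
// field paths are assumptions, not the PR's literal implementation.
func validateVMFileRestoreSketch(dpa *oadpv1alpha1.DataProtectionApplication) error {
	if dpa.Spec.VMFileRestore == nil || dpa.Spec.VMFileRestore.Enable == nil || !*dpa.Spec.VMFileRestore.Enable {
		return nil // feature disabled: nothing to validate
	}
	if dpa.Spec.Configuration == nil || dpa.Spec.Configuration.Velero == nil {
		return errors.New("vmFileRestore requires spec.configuration.velero to be set")
	}
	// Both the kubevirt and openshift default plugins must be present.
	required := map[oadpv1alpha1.DefaultPlugin]bool{
		oadpv1alpha1.DefaultPluginKubeVirt:  false,
		oadpv1alpha1.DefaultPluginOpenShift: false,
	}
	for _, plugin := range dpa.Spec.Configuration.Velero.DefaultPlugins {
		if _, ok := required[plugin]; ok {
			required[plugin] = true
		}
	}
	for plugin, found := range required {
		if !found {
			return fmt.Errorf("vmFileRestore requires the %q default plugin", plugin)
		}
	}
	return nil
}
```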
config/vm-file-restore-controller_rbac/role_binding.yaml (1)
1-19: LGTM - ClusterRoleBinding correctly configured. The binding properly associates the VM file restore controller's ClusterRole with its ServiceAccount using standard Kubernetes RBAC patterns and appropriate labels.
annotations:
  controller-gen.kubebuilder.io/version: v0.18.0
🧩 Analysis chain
controller-gen version drift.
This CRD declares v0.18.0 while Makefile uses CONTROLLER_TOOLS_VERSION v0.16.5. Standardize to avoid non-reproducible bundles and CI diffs (regenerate with one version or bump Makefile/tooling).
🏁 Script executed:
#!/bin/bash
# Check controller-gen versions stamped in CRDs
rg -n "controller-gen.kubebuilder.io/version" config/crd/bases bundle/manifestsLength of output: 4991
---
🏁 Script executed:
```shell
#!/bin/bash
# Extract CONTROLLER_TOOLS_VERSION from Makefile
rg -n "CONTROLLER_TOOLS_VERSION" Makefile
Length of output: 491
Regenerate VirtualMachine CRDs with controller-gen v0.16.5 to match the project standard.
Four CRDs are stamped v0.18.0 while Makefile enforces v0.16.5 across the project. This creates bundle inconsistency and non-reproducible builds:
- config/crd/bases/oadp.openshift.io_virtualmachinebackupsdiscoveries.yaml (line 6)
- config/crd/bases/oadp.openshift.io_virtualmachinefilerestores.yaml (line 6)
- Their corresponding bundle/manifests entries
Either regenerate these two CRD families with CONTROLLER_TOOLS_VERSION=v0.16.5 or update the Makefile and all CRDs to v0.18.0.
🤖 Prompt for AI Agents
In config/crd/bases/oadp.openshift.io_virtualmachinebackupsdiscoveries.yaml
(lines 5-6) and the paired virtualmachinefilerestores.yaml (line 6) the
controller-gen annotation is v0.18.0 which mismatches the project Makefile
expecting v0.16.5; regenerate these two CRD families and their bundle/manifests
entries with CONTROLLER_TOOLS_VERSION=v0.16.5 so the annotation becomes v0.16.5
(run the controller-gen target from the Makefile after setting
CONTROLLER_TOOLS_VERSION=v0.16.5), or if you prefer to upgrade, update the
Makefile and all CRD files and bundle/manifests to consistently use v0.18.0;
ensure the chosen approach is applied to both CRD files and their
bundle/manifests entries so annotations and build tooling are consistent.
/retest
@shubham-pampattiwar lint :)
This commit integrates the VM file restore feature into OADP operator, following the same pattern as the non-admin controller integration.

Features:
- Enable/disable VM file restore via DPA.Spec.VMFileRestore.Enable
- Automatic deployment of oadp-vm-file-restore-controller when enabled
- Support for custom resource limits via DPA configuration
- Image override support for all 4 required images:
  * VM file restore controller
  * File access container
  * SSH sidecar
  * FileBrowser sidecar

Changes:
- API: Added VMFileRestore struct to DataProtectionApplication spec
- API: Added 4 image key constants for unsupportedOverrides
- Controller: Created vmfilerestore_controller.go with reconciliation logic
- Validation: Added VM file restore validation requiring kubevirt and openshift plugins
- CRDs: Added VirtualMachineBackupsDiscovery and VirtualMachineFileRestore
- RBAC: Added ClusterRole, binding, and ServiceAccount for controller
- Bundle: Updated CSV with new CRDs, environment variables, and related images
- Documentation: Created comprehensive user guide at docs/config/vm_file_restore.md
- Tests: Added 33 unit test scenarios with full coverage

Prerequisites:
- KubeVirt must be installed in the cluster
- kubevirt-velero-plugin must be configured in defaultPlugins (required)
- openshift-velero-plugin must be configured in defaultPlugins (required)

Implements: migtools/oadp-vm-file-restore#10

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
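Based on this description and the CRD review comment above (which references .spec.vmFileRestore.resources.{limits,requests}), the new API struct plausibly looks like the sketch below; the authoritative definition lives in api/v1alpha1/dataprotectionapplication_types.go, so treat this as a hedged approximation.

```go
package v1alpha1

import corev1 "k8s.io/api/core/v1"

// VMFileRestore sketches the configuration struct this commit adds to the
// DataProtectionApplication spec. Enable and Resources are confirmed by the PR
// discussion; any other fields in the real type are not shown here.
type VMFileRestore struct {
	// Enable turns the VM file restore controller deployment on or off.
	// +optional
	Enable *bool `json:"enable,omitempty"`

	// Resources overrides the controller container's requests and limits.
	// +optional
	Resources *corev1.ResourceRequirements `json:"resources,omitempty"`
}
```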
This commit fixes issues found during live cluster testing:

1. Add missing RBAC permissions:
   - events permission for controller to create events
   - coordination.k8s.io/leases for leader election

2. Fix reconciliation loop in OADP operator:
   - Only update dynamic container fields (Image, ImagePullPolicy, Env, Resources)
   - Make PodSecurityContext conditional (set only if nil)
   - Prevents continuous reconciliation by leaving static fields unchanged

3. Bundle generation fixes:
   - Add vm-file-restore RBAC kustomization.yaml
   - Add oadp-vm-file-restore-controller-manager to Makefile BUNDLE_GEN_FLAGS
   - Reference vm-file-restore RBAC in config/manifests/kustomization.yaml
   - Clean unwanted RBAC from CSV (buckets, velero-privileged SCC, wildcards)

Tested on live cluster - controller now deploys successfully, performs leader election, and no longer causes reconciliation loops.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
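The reconciliation-loop fix in item 2 translates naturally into a controllerutil.CreateOrUpdate mutate function that rewrites only the dynamic container fields. A hedged sketch, not the PR's literal code; names, namespaces, and the security-context values are illustrative:

```go
package controller

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
)

// reconcileVMFileRestoreDeploymentSketch shows the shape of the fix: only
// Image, ImagePullPolicy, Env, and Resources are rewritten every reconcile,
// and the PodSecurityContext is set only while it is still nil.
func reconcileVMFileRestoreDeploymentSketch(ctx context.Context, c client.Client, image string, env []corev1.EnvVar, res corev1.ResourceRequirements) error {
	deploy := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "oadp-vm-file-restore-controller-manager", Namespace: "openshift-adp"},
	}
	_, err := controllerutil.CreateOrUpdate(ctx, c, deploy, func() error {
		if len(deploy.Spec.Template.Spec.Containers) == 0 {
			deploy.Spec.Template.Spec.Containers = []corev1.Container{{Name: "manager"}}
		}
		container := &deploy.Spec.Template.Spec.Containers[0]
		// Dynamic fields: always reconciled from the DPA.
		container.Image = image
		container.ImagePullPolicy = corev1.PullAlways
		container.Env = env
		container.Resources = res
		// Static field: set once, so server defaults do not cause churn.
		if deploy.Spec.Template.Spec.SecurityContext == nil {
			nonRoot := true
			deploy.Spec.Template.Spec.SecurityContext = &corev1.PodSecurityContext{RunAsNonRoot: &nonRoot}
		}
		return nil
	})
	return err
}
```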
Force-pushed from c2c551e to a7441f6 (Compare)
Actionable comments posted: 1
♻️ Duplicate comments (3)
config/crd/bases/oadp.openshift.io_virtualmachinebackupsdiscoveries.yaml (1)
5-6: Restore controller-gen version alignment. This CRD is stamped with controller-gen v0.18.0, but the repo still pins CONTROLLER_TOOLS_VERSION=v0.16.5. Running make generate will keep churning these manifests. Regenerate both VirtualMachine* CRDs (and their bundle copies) with 0.16.5, or bump the tooling consistently across Makefile, bases, and bundles so builds stay reproducible.

internal/controller/vmfilerestore_controller.go (1)
3-25: Fix import grouping to satisfy gci (duplicate issue). This issue was already flagged in previous reviews. The golangci-lint gci check is failing. Please run make fmt or gci write internal/controller/vmfilerestore_controller.go to fix the import ordering. Based on learnings.
internal/controller/vmfilerestore_controller_test.go (1)
3-20: Tests don't compile: replace the undefined helper with strings.Contains. contains isn't defined anywhere, so the test build fails, and golangci-lint's gci check can't pass until the stdlib helper is used and imported.

```diff
@@
 import (
 	"context"
 	"os"
+	"strings"
 	"testing"
@@
-	} else if tt.errorContains != "" && !contains(err.Error(), tt.errorContains) {
+	} else if tt.errorContains != "" && !strings.Contains(err.Error(), tt.errorContains) {
```

Also applies to: 809-815
🧹 Nitpick comments (2)
internal/controller/validator_test.go (1)
3114-3125: Use the standard library strings.Contains instead. The custom contains and containsHelper functions duplicate functionality already available in Go's standard library; strings.Contains(s, substr) provides the same behavior. Apply this diff to use the standard library:

```diff
+import (
+	"strings"
+	// ... other imports
+)

-func contains(s, substr string) bool {
-	return len(s) >= len(substr) && (s == substr || len(substr) == 0 || (len(s) > 0 && len(substr) > 0 && containsHelper(s, substr)))
-}
-
-func containsHelper(s, substr string) bool {
-	for i := 0; i <= len(s)-len(substr); i++ {
-		if s[i:i+len(substr)] == substr {
-			return true
-		}
-	}
-	return false
-}
```

Then replace contains(err.Error(), tt.errorContains) at line 3095 with strings.Contains(err.Error(), tt.errorContains).

internal/controller/vmfilerestore_controller.go (1)
353-353: Consider pinning default image tags instead of "latest" (see the sketch after this comment). The default image references use the latest tag, which can lead to unpredictable behavior in production environments when images are updated. Consider pinning to specific version tags that align with the operator release. Example:

```diff
-	return "quay.io/konveyor/oadp-vm-file-restore:latest"
+	return "quay.io/konveyor/oadp-vm-file-restore:v1.5.0"
```

Note: This is typically set via environment variables in production deployments, so the hardcoded default may only affect development/testing scenarios.
Also applies to: 368-368, 383-383, 398-398
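A pinned fallback can coexist with the RELATED_IMAGE override mentioned above; a minimal sketch, where the environment variable name and the pinned tag are placeholders rather than the PR's actual values:

```go
package controller

import "os"

// vmFileRestoreControllerImage resolves the controller image: the operator's
// env var wins, otherwise a pinned fallback is used instead of :latest.
// The env var name and tag below are illustrative placeholders.
func vmFileRestoreControllerImage() string {
	if img := os.Getenv("RELATED_IMAGE_VM_FILE_RESTORE_CONTROLLER"); img != "" {
		return img
	}
	return "quay.io/konveyor/oadp-vm-file-restore:v1.0.0" // pinned placeholder tag
}
```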
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting
📒 Files selected for processing (25)
Makefile (1 hunks), README.md (1 hunks), api/v1alpha1/dataprotectionapplication_types.go (4 hunks), api/v1alpha1/zz_generated.deepcopy.go (2 hunks), bundle/manifests/oadp-operator.clusterserviceversion.yaml (6 hunks), bundle/manifests/oadp.openshift.io_dataprotectionapplications.yaml (1 hunks), bundle/manifests/oadp.openshift.io_virtualmachinebackupsdiscoveries.yaml (1 hunks), bundle/manifests/oadp.openshift.io_virtualmachinefilerestores.yaml (1 hunks), config/crd/bases/oadp.openshift.io_dataprotectionapplications.yaml (1 hunks), config/crd/bases/oadp.openshift.io_virtualmachinebackupsdiscoveries.yaml (1 hunks), config/crd/bases/oadp.openshift.io_virtualmachinefilerestores.yaml (1 hunks), config/crd/kustomization.yaml (1 hunks), config/manager/manager.yaml (1 hunks), config/manifests/kustomization.yaml (1 hunks), config/rbac/role.yaml (6 hunks), config/vm-file-restore-controller_rbac/kustomization.yaml (1 hunks), config/vm-file-restore-controller_rbac/role.yaml (1 hunks), config/vm-file-restore-controller_rbac/role_binding.yaml (1 hunks), config/vm-file-restore-controller_rbac/service_account.yaml (1 hunks), docs/config/vm_file_restore.md (1 hunks), internal/controller/dataprotectionapplication_controller.go (1 hunks), internal/controller/validator.go (1 hunks), internal/controller/validator_test.go (1 hunks), internal/controller/vmfilerestore_controller.go (1 hunks), internal/controller/vmfilerestore_controller_test.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (11)
- config/vm-file-restore-controller_rbac/kustomization.yaml
- config/crd/kustomization.yaml
- Makefile
- config/crd/bases/oadp.openshift.io_virtualmachinefilerestores.yaml
- internal/controller/dataprotectionapplication_controller.go
- api/v1alpha1/dataprotectionapplication_types.go
- config/manager/manager.yaml
- config/rbac/role.yaml
- README.md
- config/vm-file-restore-controller_rbac/role_binding.yaml
- docs/config/vm_file_restore.md
🧰 Additional context used
📓 Path-based instructions (1)
**
⚙️ CodeRabbit configuration file
- Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity.
Files:
config/manifests/kustomization.yaml, internal/controller/validator_test.go, internal/controller/vmfilerestore_controller.go, api/v1alpha1/zz_generated.deepcopy.go, bundle/manifests/oadp.openshift.io_virtualmachinebackupsdiscoveries.yaml, config/crd/bases/oadp.openshift.io_dataprotectionapplications.yaml, internal/controller/vmfilerestore_controller_test.go, config/vm-file-restore-controller_rbac/service_account.yaml, bundle/manifests/oadp.openshift.io_virtualmachinefilerestores.yaml, config/vm-file-restore-controller_rbac/role.yaml, bundle/manifests/oadp.openshift.io_dataprotectionapplications.yaml, internal/controller/validator.go, config/crd/bases/oadp.openshift.io_virtualmachinebackupsdiscoveries.yaml, bundle/manifests/oadp-operator.clusterserviceversion.yaml
🪛 golangci-lint (2.5.0)
internal/controller/validator_test.go
[error] 2842-2842: File is not properly formatted
(gci)
internal/controller/vmfilerestore_controller.go
[error] 14-14: File is not properly formatted
(gci)
internal/controller/vmfilerestore_controller_test.go
[error] 30-30: File is not properly formatted
(gci)
🔇 Additional comments (6)
api/v1alpha1/zz_generated.deepcopy.go (1)
467-471: LGTM! Auto-generated DeepCopy implementation is correct. The generated DeepCopy methods for VMFileRestore properly handle pointer fields and nested structures.
Also applies to: 1267-1290
config/vm-file-restore-controller_rbac/role.yaml (1)
1-147: LGTM! ClusterRole permissions are appropriate. The ClusterRole grants necessary permissions for the VM file restore controller to:
- Manage file serving resources (deployments, services, routes)
- Access Velero backup/restore resources
- Create access control (rolebindings)
- Perform leader election (leases)
- Manage credentials (secrets)
All permissions align with the controller's documented functionality.
internal/controller/validator_test.go (1)
2840-3112: LGTM! Comprehensive test coverage for VM file restore validation. The test scenarios cover all critical validation paths (a table-driven sketch follows this list):
- Feature disabled/enabled states
- Missing prerequisites (Velero config, required plugins)
- Cluster-wide controller conflicts
- Custom resource configurations
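These scenarios fit the usual table-driven shape; a hedged sketch that reuses the validator sketch shown earlier in this review (not the PR's actual 33-case table, and the error substrings are only those used by that sketch):

```go
package controller

import (
	"strings"
	"testing"

	oadpv1alpha1 "github.com/openshift/oadp-operator/api/v1alpha1"
)

func boolRef(b bool) *bool { return &b }

// TestValidateVMFileRestoreSketch mirrors the table-driven shape of the PR's
// real tests, driving the validateVMFileRestoreSketch function from earlier.
func TestValidateVMFileRestoreSketch(t *testing.T) {
	tests := []struct {
		name          string
		dpa           *oadpv1alpha1.DataProtectionApplication
		errorContains string
	}{
		{
			name: "feature disabled is a no-op",
			dpa:  &oadpv1alpha1.DataProtectionApplication{},
		},
		{
			name: "enabled without velero config is rejected",
			dpa: &oadpv1alpha1.DataProtectionApplication{
				Spec: oadpv1alpha1.DataProtectionApplicationSpec{
					VMFileRestore: &oadpv1alpha1.VMFileRestore{Enable: boolRef(true)},
				},
			},
			errorContains: "spec.configuration.velero",
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			err := validateVMFileRestoreSketch(tt.dpa)
			if tt.errorContains == "" && err != nil {
				t.Fatalf("unexpected error: %v", err)
			}
			if tt.errorContains != "" && (err == nil || !strings.Contains(err.Error(), tt.errorContains)) {
				t.Fatalf("expected error containing %q, got %v", tt.errorContains, err)
			}
		})
	}
}
```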
internal/controller/vmfilerestore_controller.go (1)
52-125: LGTM! Well-structured deployment reconciliation logic. The controller properly does the following (the ownership wiring is sketched after this comment):
- Handles enable/disable lifecycle with cleanup
- Uses CreateOrUpdate pattern for idempotent reconciliation
- Sets controller ownership for proper garbage collection
- Configures appropriate security contexts and health probes
- Supports custom resource requirements via DPA
Also applies to: 127-139, 154-332
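The ownership point above is what makes cleanup cheap: once the DPA is the controller owner, deleting the DPA lets Kubernetes garbage collection remove the deployment. A minimal sketch, assuming the reconciler has access to the manager's scheme; the PR wires this inside its own reconcile path:

```go
package controller

import (
	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"

	oadpv1alpha1 "github.com/openshift/oadp-operator/api/v1alpha1"
)

// setVMFileRestoreOwner marks the DPA as the controller owner of the
// deployment so garbage collection handles removal when the DPA goes away.
func setVMFileRestoreOwner(dpa *oadpv1alpha1.DataProtectionApplication, deploy *appsv1.Deployment, scheme *runtime.Scheme) error {
	return controllerutil.SetControllerReference(dpa, deploy, scheme)
}
```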
bundle/manifests/oadp.openshift.io_virtualmachinefilerestores.yaml (1)
1-444: LGTM! Well-defined CRD for VirtualMachineFileRestore. The CRD schema is comprehensive and includes:
- Clear field descriptions and validation rules
- Appropriate validation (e.g., at least one file access method required)
- Status subresource for proper condition tracking
- Useful printer columns for CLI operations
This is auto-generated by controller-gen and follows Kubernetes API conventions.
bundle/manifests/oadp.openshift.io_virtualmachinebackupsdiscoveries.yaml (1)
1-417: LGTM! Well-defined CRD for VirtualMachineBackupsDiscovery. The CRD provides a comprehensive schema for VM backup discovery, including:
- Time-based filtering support
- Detailed progress tracking
- Clear status reporting with conditions
- Useful statistics and printer columns
This is auto-generated by controller-gen and follows Kubernetes API conventions.
```go
vmfrDpaResourceVersion             = ""
previousVMFileRestoreConfiguration *oadpv1alpha1.VMFileRestore = nil
```
Critical: Package-level mutable state causes race conditions.
The package-level variables vmfrDpaResourceVersion and previousVMFileRestoreConfiguration are mutated during reconciliation (lines 204-208) without any synchronization. Since the controller runtime can invoke ReconcileVMFileRestoreController concurrently for different resources or requeues, this creates race conditions.
Recommended solution: Move this state into the reconciler struct or use an annotation on the Deployment to track the DPA resource version:
-var (
- vmFileRestoreDeploymentLabels = map[string]string{
- "app.kubernetes.io/component": "manager",
- "app.kubernetes.io/created-by": common.OADPOperator,
- "app.kubernetes.io/instance": vmFileRestoreObjectName,
- "app.kubernetes.io/managed-by": "kustomize",
- "app.kubernetes.io/name": "deployment",
- "app.kubernetes.io/part-of": common.OADPOperator,
- }
-
- vmfrDpaResourceVersion = ""
- previousVMFileRestoreConfiguration *oadpv1alpha1.VMFileRestore = nil
-)
+var (
+ vmFileRestoreDeploymentLabels = map[string]string{
+ "app.kubernetes.io/component": "manager",
+ "app.kubernetes.io/created-by": common.OADPOperator,
+ "app.kubernetes.io/instance": vmFileRestoreObjectName,
+ "app.kubernetes.io/managed-by": "kustomize",
+ "app.kubernetes.io/name": "deployment",
+ "app.kubernetes.io/part-of": common.OADPOperator,
+ }
+)

Then at lines 204-212, always use the current DPA resource version:
- // Track DPA resource version for change detection
- if len(vmfrDpaResourceVersion) == 0 ||
- !reflect.DeepEqual(dpa.Spec.VMFileRestore, previousVMFileRestoreConfiguration) {
- vmfrDpaResourceVersion = dpa.GetResourceVersion()
- previousVMFileRestoreConfiguration = dpa.Spec.VMFileRestore
- }
-
podAnnotations := map[string]string{
- vmfrDpaResourceVersionAnnotation: vmfrDpaResourceVersion,
+ vmfrDpaResourceVersionAnnotation: dpa.GetResourceVersion(),
}

This ensures each reconciliation uses the current DPA's resource version without shared mutable state.
Also applies to: 204-208
🤖 Prompt for AI Agents
internal/controller/vmfilerestore_controller.go lines 47-48 and reconciliation
around lines 204-212: package-level mutable variables vmfrDpaResourceVersion and
previousVMFileRestoreConfiguration are causing race conditions; move these
fields into the reconciler struct (e.g., add dpaResourceVersion string and
previousVMFileRestoreConfiguration *oadpv1alpha1.VMFileRestore) and update all
accesses to reference r.dpaResourceVersion and
r.previousVMFileRestoreConfiguration, or alternatively persist the DPA
resourceVersion as an annotation on the Deployment and read/write that
annotation during reconcile; then, at lines ~204-212 always read the current DPA
resourceVersion from the reconciler instance (or from the Deployment annotation)
instead of the package-level variables, and remove the package-level mutable
variables to eliminate shared state.
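If per-DPA change tracking were still wanted after dropping the package globals, it would need to be keyed and locked on the reconciler, as the prompt above suggests. A hedged sketch of that alternative; the simpler route, as the diff shows, is to annotate with dpa.GetResourceVersion() directly:

```go
package controller

import (
	"reflect"
	"sync"

	"k8s.io/apimachinery/pkg/types"

	oadpv1alpha1 "github.com/openshift/oadp-operator/api/v1alpha1"
)

// vmFileRestoreState keeps the previously seen configuration per DPA, keyed by
// namespaced name and guarded by a mutex so concurrent reconciles cannot race.
type vmFileRestoreState struct {
	mu       sync.Mutex
	previous map[types.NamespacedName]*oadpv1alpha1.VMFileRestore
}

// changed reports whether the configuration differs from the last one seen for
// this DPA and records the new value.
func (s *vmFileRestoreState) changed(key types.NamespacedName, current *oadpv1alpha1.VMFileRestore) bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.previous == nil {
		s.previous = map[types.NamespacedName]*oadpv1alpha1.VMFileRestore{}
	}
	if prev, ok := s.previous[key]; ok && reflect.DeepEqual(prev, current) {
		return false
	}
	s.previous[key] = current.DeepCopy()
	return true
}
```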
- Fix import grouping (gci) in validator_test.go, vmfilerestore_controller.go, and vmfilerestore_controller_test.go
- Replace custom contains helper with strings.Contains from stdlib
- Remove unwanted RBAC from config/rbac/role.yaml:
  - Full wildcard permissions (apiGroups: *, resources: *, verbs: *)
  - buckets resources (3 locations)
  - velero-privileged SCC reference
  - Duplicate privileged SCC entry
  - Extra velero.io resource listings
- Remove unwanted RBAC from bundle CSV:
  - Full wildcard from velero ServiceAccount

All tests pass.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
internal/controller/validator_test.go (1)
3-20: Fix gci import grouping to unblock lint. golangci-lint is still failing on this file's import block (gci check). Please regroup the imports (stdlib / third-party / module-local) or run the project formatter (e.g. make fmt or gci write internal/controller/validator_test.go) so CI can pass.
♻️ Duplicate comments (1)
internal/controller/vmfilerestore_controller.go (1)
3-24: gci still flags the import block. The imports are again out of order for gci (module-local paths are mixed with third-party ones). Please rerun the project's formatter (e.g. make fmt or gci write internal/controller/vmfilerestore_controller.go) so lint stops failing; the expected grouping is sketched below.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting
📒 Files selected for processing (4)
bundle/manifests/oadp-operator.clusterserviceversion.yaml (6 hunks), internal/controller/validator_test.go (2 hunks), internal/controller/vmfilerestore_controller.go (1 hunks), internal/controller/vmfilerestore_controller_test.go (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**
⚙️ CodeRabbit configuration file
- Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity.
Files:
internal/controller/validator_test.go, internal/controller/vmfilerestore_controller_test.go, bundle/manifests/oadp-operator.clusterserviceversion.yaml, internal/controller/vmfilerestore_controller.go
🪛 golangci-lint (2.5.0)
internal/controller/validator_test.go
[error] 10-10: File is not properly formatted
(gci)
internal/controller/vmfilerestore_controller_test.go
[error] 12-12: File is not properly formatted
(gci)
internal/controller/vmfilerestore_controller.go
[error] 10-10: File is not properly formatted
(gci)
```go
vmFileRestoreControlPlaneLabel = map[string]string{
	vmFileRestoreControlPlaneKey: vmFileRestoreControlPlaneValue,
}
vmFileRestoreDeploymentLabels = map[string]string{
	"app.kubernetes.io/component":  "manager",
	"app.kubernetes.io/created-by": common.OADPOperator,
	"app.kubernetes.io/instance":   vmFileRestoreObjectName,
	"app.kubernetes.io/managed-by": "kustomize",
	"app.kubernetes.io/name":       "deployment",
	"app.kubernetes.io/part-of":    common.OADPOperator,
}
```
Avoid mutating shared maps during reconcile.
maps.Copy(vmFileRestoreDeploymentLabels, …) and the SetLabels/SetAnnotations calls assign the shared package-level maps directly to deployment/template metadata. On each reconcile you mutate those global maps (and keep reusing the same map instance), so concurrent reconciles can panic with “concurrent map writes” or bleed labels/annotations between DPAs. Clone the base maps before modifying them and assign those clones to the objects instead of the shared globals.
Apply this diff:
@@
var (
vmFileRestoreControlPlaneLabel = map[string]string{
vmFileRestoreControlPlaneKey: vmFileRestoreControlPlaneValue,
}
vmFileRestoreDeploymentLabels = map[string]string{
@@
func ensureVMFileRestoreRequiredLabels(deploymentObject *appsv1.Deployment) {
- maps.Copy(vmFileRestoreDeploymentLabels, vmFileRestoreControlPlaneLabel)
- deploymentObjectLabels := deploymentObject.GetLabels()
- if deploymentObjectLabels == nil {
- deploymentObject.SetLabels(vmFileRestoreDeploymentLabels)
- } else {
- for key, value := range vmFileRestoreDeploymentLabels {
- deploymentObjectLabels[key] = value
- }
- deploymentObject.SetLabels(deploymentObjectLabels)
- }
+ combined := maps.Clone(vmFileRestoreDeploymentLabels)
+ maps.Copy(combined, vmFileRestoreControlPlaneLabel)
+
+ if deploymentObject.GetLabels() == nil {
+ deploymentObject.SetLabels(combined)
+ return
+ }
+ labels := maps.Clone(deploymentObject.GetLabels())
+ for key, value := range combined {
+ labels[key] = value
+ }
+ deploymentObject.SetLabels(labels)
}
@@
- if templateObjectLabels == nil {
- deploymentObject.Spec.Template.SetLabels(vmFileRestoreControlPlaneLabel)
- } else {
- templateObjectLabels[vmFileRestoreControlPlaneKey] = vmFileRestoreControlPlaneLabel[vmFileRestoreControlPlaneKey]
- deploymentObject.Spec.Template.SetLabels(templateObjectLabels)
- }
+ if templateObjectLabels == nil {
+ deploymentObject.Spec.Template.SetLabels(maps.Clone(vmFileRestoreControlPlaneLabel))
+ } else {
+ templateCopy := maps.Clone(templateObjectLabels)
+ templateCopy[vmFileRestoreControlPlaneKey] = vmFileRestoreControlPlaneLabel[vmFileRestoreControlPlaneKey]
+ deploymentObject.Spec.Template.SetLabels(templateCopy)
+ }
@@
- templateObjectAnnotations := deploymentObject.Spec.Template.GetAnnotations()
- if templateObjectAnnotations == nil {
- deploymentObject.Spec.Template.SetAnnotations(podAnnotations)
- } else {
- templateObjectAnnotations[vmfrDpaResourceVersionAnnotation] = podAnnotations[vmfrDpaResourceVersionAnnotation]
- deploymentObject.Spec.Template.SetAnnotations(templateObjectAnnotations)
- }
+ templateObjectAnnotations := deploymentObject.Spec.Template.GetAnnotations()
+ if templateObjectAnnotations == nil {
+ deploymentObject.Spec.Template.SetAnnotations(maps.Clone(podAnnotations))
+ } else {
+ annotationCopy := maps.Clone(templateObjectAnnotations)
+ annotationCopy[vmfrDpaResourceVersionAnnotation] = podAnnotations[vmfrDpaResourceVersionAnnotation]
+ deploymentObject.Spec.Template.SetAnnotations(annotationCopy)
+ }Also applies to: 141-238
🤖 Prompt for AI Agents
internal/controller/vmfilerestore_controller.go around lines 34 to 45 (and
similarly apply to ranges 141-238): currently package-level maps
(vmFileRestoreControlPlaneLabel, vmFileRestoreDeploymentLabels, etc.) are being
assigned directly to object metadata and then mutated via maps.Copy / SetLabels
/ SetAnnotations, which can cause concurrent map writes and cross-instance
label/annotation leakage; fix by creating shallow copies of the base maps before
any modification or assignment (e.g., allocate a new map and copy entries from
the package-level map), perform any maps.Copy or modifications on those local
copies, and then call SetLabels/SetAnnotations with the cloned maps so the
global package-level maps remain immutable and reconcilers operate on isolated
instances.
```go
	vmfrDpaResourceVersion             = ""
	previousVMFileRestoreConfiguration *oadpv1alpha1.VMFileRestore = nil
)
```
Eliminate package-level mutable state to avoid races.
vmfrDpaResourceVersion / previousVMFileRestoreConfiguration are package globals that you mutate during reconcile. Controller-runtime may run this reconciler concurrently for different DPAs, so these globals are shared across goroutines and even across namespaces. That creates data races and leaks one DPA’s resourceVersion into another deployment. Remove the globals and rely on the DPA you are reconciling—e.g. just annotate with dpa.GetResourceVersion() each time.
[state corruption, race condition]
Apply this diff:
@@
-var (
- vmfrDpaResourceVersion = ""
- previousVMFileRestoreConfiguration *oadpv1alpha1.VMFileRestore = nil
-)
+var (
+)
@@
- // Track DPA resource version for change detection
- if len(vmfrDpaResourceVersion) == 0 ||
- !reflect.DeepEqual(dpa.Spec.VMFileRestore, previousVMFileRestoreConfiguration) {
- vmfrDpaResourceVersion = dpa.GetResourceVersion()
- previousVMFileRestoreConfiguration = dpa.Spec.VMFileRestore
- }
-
- podAnnotations := map[string]string{
- vmfrDpaResourceVersionAnnotation: vmfrDpaResourceVersion,
- }
+ podAnnotations := map[string]string{
+ vmfrDpaResourceVersionAnnotation: dpa.GetResourceVersion(),
+ }Also applies to: 202-211
🤖 Prompt for AI Agents
In internal/controller/vmfilerestore_controller.go around lines 46-48 (and
similarly at 202-211), remove the package-level mutable globals
vmfrDpaResourceVersion and previousVMFileRestoreConfiguration; they cause races
when reconciles run concurrently and leak state across DPAs. Instead, stop
mutating package globals: use the reconciled DPA instance directly (e.g.
dpa.GetResourceVersion()) when annotating or persisting the resourceVersion, and
keep any previous-configuration comparison local to the current reconcile (or
store per-DPA state in the DPA’s annotations/status or a thread-safe per-DPA
cache keyed by namespaced name). Replace reads/writes of the globals with local
variables or DPA-scoped annotations/status and ensure no shared mutable state
remains at package scope.
/retest ai-retester: The
❯ gh pr checkout 2003 --recurse-submodules remote: Enumerating objects: 77, done. remote: Counting objects: 100% (29/29), done. remote: Compressing objects: 100% (14/14), done. remote: Total 77 (delta 15), reused 18 (delta 14), pack-reused 48 (from 1) Unpacking objects: 100% (77/77), 118.21 KiB | 1.01 MiB/s, done. From https://github.com/shubham-pampattiwar/oadp-operator * [new branch] vm-file-restore-integration -> shub/vm-file-restore-integration Previous HEAD position was 24e3758 update prow notes, operator-config (openshift#2029) branch 'vm-file-restore-integration' set up to track 'shub/vm-file-restore-integration'. Switched to a new branch 'vm-file-restore-integration' ~/oadp-operator vm-file-restore-integration ❯ make bundle Using Container Tool: docker [ -f /Users/tkaovila/oadp-operator/bin/oadp-dev/controller-gen ] || { set -e ; mkdir -p /Users/tkaovila/oadp-operator/bin/oadp-dev/ ; TMP_DIR=$(mktemp -d) ; cd $TMP_DIR ; go mod init tmp ; echo "Downloading sigs.k8s.io/controller-tools/cmd/controller-gen@v0.16.5 to branch directory" ; GOBIN=/Users/tkaovila/oadp-operator/bin/oadp-dev/ go install -mod=mod sigs.k8s.io/controller-tools/cmd/controller-gen@v0.16.5 ; rm -rf $TMP_DIR ; } GOFLAGS="-mod=mod" /Users/tkaovila/oadp-operator/bin/oadp-dev/controller-gen rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases [ -f /Users/tkaovila/oadp-operator/bin/oadp-dev/kustomize ] || { set -e ; mkdir -p /Users/tkaovila/oadp-operator/bin/oadp-dev/ ; TMP_DIR=$(mktemp -d) ; cd $TMP_DIR ; go mod init tmp ; echo "Downloading sigs.k8s.io/kustomize/kustomize/v5@v5.2.1 to branch directory" ; GOBIN=/Users/tkaovila/oadp-operator/bin/oadp-dev/ go install -mod=mod sigs.k8s.io/kustomize/kustomize/v5@v5.2.1 ; rm -rf $TMP_DIR ; } GOFLAGS="-mod=mod" /Users/tkaovila/oadp-operator/bin/oadp-dev/operator-sdk generate kustomize manifests -q cd config/manager && GOFLAGS="-mod=mod" /Users/tkaovila/oadp-operator/bin/oadp-dev/kustomize edit set image controller=quay.io/konveyor/oadp-operator:latest GOFLAGS="-mod=mod" /Users/tkaovila/oadp-operator/bin/oadp-dev/kustomize build config/manifests | GOFLAGS="-mod=mod" /Users/tkaovila/oadp-operator/bin/oadp-dev/operator-sdk generate bundle -q --extra-service-accounts "velero,non-admin-controller,oadp-vm-file-restore-controller-manager" --overwrite --version 99.0.0 --channels="dev" --default-channel="dev" WARN[0000] ClusterServiceVersion validation: [OperationFailed] provided API should have an example annotation WARN[0000] ClusterServiceVersion validation: [OperationFailed] provided API should have an example annotation WARN[0000] ClusterServiceVersion validation: [OperationFailed] provided API should have an example annotation WARN[0000] ClusterServiceVersion validation: [OperationFailed] provided API should have an example annotation WARN[0000] ClusterServiceVersion validation: [OperationFailed] provided API should have an example annotation WARN[0000] ClusterServiceVersion validation: [OperationFailed] provided API should have an example annotation WARN[0000] ClusterServiceVersion validation: [CSVFileNotValid] (oadp-operator.v99.0.0) csv.Spec.minKubeVersion is not informed. It is recommended you provide this information. Otherwise, it would mean that your operator project can be distributed and installed in any cluster version available, which is not necessarily the case for all projects. 
INFO[0000] Creating bundle.Dockerfile INFO[0000] Creating bundle/metadata/annotations.yaml INFO[0000] Bundle metadata generated successfully Using Container Tool: docker Using Container Tool: docker [ -f /Users/tkaovila/oadp-operator/bin/yq ] || { set -e ; TMP_DIR=$(mktemp -d) ; cd $TMP_DIR ; go mod init tmp ; echo "Downloading github.com/mikefarah/yq/v4@v4.28.1" ; GOBIN=/Users/tkaovila/oadp-operator/bin go install -mod=mod github.com/mikefarah/yq/v4@v4.28.1 ; rm -rf $TMP_DIR ; } go: creating new go.mod: module tmp Downloading github.com/mikefarah/yq/v4@v4.28.1 Using Container Tool: docker [ -f /Users/tkaovila/oadp-operator/bin/yq ] || { set -e ; TMP_DIR=$(mktemp -d) ; cd $TMP_DIR ; go mod init tmp ; echo "Downloading github.com/mikefarah/yq/v4@v4.28.1" ; GOBIN=/Users/tkaovila/oadp-operator/bin go install -mod=mod github.com/mikefarah/yq/v4@v4.28.1 ; rm -rf $TMP_DIR ; } cp bundle.Dockerfile build/Dockerfile.bundle GOFLAGS="-mod=mod" /Users/tkaovila/oadp-operator/bin/oadp-dev/operator-sdk bundle validate ./bundle WARN[0000] Warning: Value velero.io/v1, Kind=BackupRepository: provided API should have an example annotation WARN[0000] Warning: Value oadp.openshift.io/v1alpha1, Kind=VirtualMachineBackupsDiscovery: provided API should have an example annotation WARN[0000] Warning: Value oadp.openshift.io/v1alpha1, Kind=CloudStorage: provided API should have an example annotation WARN[0000] Warning: Value velero.io/v2alpha1, Kind=DataUpload: provided API should have an example annotation WARN[0000] Warning: Value velero.io/v2alpha1, Kind=DataDownload: provided API should have an example annotation WARN[0000] Warning: Value oadp.openshift.io/v1alpha1, Kind=VirtualMachineFileRestore: provided API should have an example annotation WARN[0000] Warning: Value : (oadp-operator.v99.0.0) csv.Spec.minKubeVersion is not informed. It is recommended you provide this information. Otherwise, it would mean that your operator project can be distributed and installed in any cluster version available, which is not necessarily the case for all projects. INFO[0000] All validation tests have completed successfully gsed -e 's/ createdAt: .*/ createdAt: "2025-02-28T20:03:54Z"/' bundle/manifests/oadp-operator.clusterserviceversion.yaml > bundle/manifests/oadp-operator.clusterserviceversion.yaml.tmp mv bundle/manifests/oadp-operator.clusterserviceversion.yaml.tmp bundle/manifests/oadp-operator.clusterserviceversion.yaml Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
Actionable comments posted: 0
🧹 Nitpick comments (1)
bundle/manifests/oadp-operator.clusterserviceversion.yaml (1)
691-696: Add descriptions and displayNames to new CRD entries for consistency and discoverability. The new VirtualMachineBackupsDiscovery and VirtualMachineFileRestore CRD entries lack descriptions and displayNames, which is inconsistent with other CRDs in this file (e.g., BackupRepository at line 304, DataProtectionApplication at line 440). This omission may impact OLM UI rendering and user discoverability.
Consider adding metadata similar to other CRD entries to maintain consistency and improve user experience.
```diff
~  - kind: VirtualMachineBackupsDiscovery
~    name: virtualmachinebackupsdiscoveries.oadp.openshift.io
~    version: v1alpha1
~  - kind: VirtualMachineFileRestore
~    name: virtualmachinefilerestores.oadp.openshift.io
~    version: v1alpha1
+  - description: VirtualMachineBackupsDiscovery represents discovery of VM backups for file restore operations.
+    displayName: Virtual Machine Backups Discovery
+    kind: VirtualMachineBackupsDiscovery
+    name: virtualmachinebackupsdiscoveries.oadp.openshift.io
+    version: v1alpha1
+  - description: VirtualMachineFileRestore represents a file restore operation for virtual machines.
+    displayName: Virtual Machine File Restore
+    kind: VirtualMachineFileRestore
+    name: virtualmachinefilerestores.oadp.openshift.io
+    version: v1alpha1
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting
📒 Files selected for processing (1)
bundle/manifests/oadp-operator.clusterserviceversion.yaml (4 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**
⚙️ CodeRabbit configuration file
- Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity.
Files:
bundle/manifests/oadp-operator.clusterserviceversion.yaml
🔇 Additional comments (2)
bundle/manifests/oadp-operator.clusterserviceversion.yaml (2)
844-986: Verify RBAC permissions scope for vm-file-restore-controller-manager. The new clusterPermissions block (lines 844-986) grants broad access to multiple resource types across several API groups. While the permissions appear necessary for controller operation (managing deployments, routes, velero resources, VM file restore CRDs), consider verifying:
- Are all listed resources and verbs required for the controller's reconciliation logic?
- Could leases (lines 883-893) and rolebindings (lines 924-934) access be scoped more narrowly?
- Are write permissions (create, patch, update) necessary for all velero resources listed (lines 948-985)?
Principle of least privilege should be maintained even as feature scope expands.
1287-1294: Ensure image environment variables align with actual deployment and configuration. Four new RELATED_IMAGE environment variables are added for VM file restore components. Verify that:
- These image names and registries (quay.io/konveyor/*) match the actual images used in the vm-file-restore-controller deployment
- The image tag latest is appropriate for production, or whether pinned versions should be used instead
- The environment variable names match what the vm-file-restore-controller expects
/retest ai-retester: The e2e-test-aws-e2e step failed because the MySQL application DATAMOVER test timed out after 540 seconds, and the 'todolist' container in a related pod reached 'PodInitializing' status, preventing the full test suite from completing. The mysql container was also shown to have liveness probe failures and was repeatedly restarting.
/retest ai-retester: The
@weshayutin @mpryc PR is ripe for review and testing!
/woof
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/retest ai-retester: The
@shubham-pampattiwar: The following tests failed, say
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
/retest ai-retester: The
/hold
On OCP 4.20, deployed by jenkins mtv. Enable VMFR: SUCCESS!
ERRORS: due to namespace.. I think
/unhold
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: kaovilai, shubham-pampattiwar, weshayutin

The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Squashed commits:
* Add VM file restore feature integration
* Fix VM file restore controller deployment and permissions
* Fix code formatting and clean unwanted RBAC permissions
* lint fix
* make bundle
* `make bundle` after `gh pr checkout 2003 --recurse-submodules`

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Tiger Kaovilai <tkaovila@redhat.com>
Summary
This PR integrates the VM file restore feature into OADP operator, following the same pattern as the non-admin controller integration.
Features
- Enable/disable VM file restore via DPA.Spec.VMFileRestore.Enable

Changes
Prerequisites
- kubevirt-velero-plugin must be configured in defaultPlugins (required)
- openshift-velero-plugin must be configured in defaultPlugins (required)

Test Plan
Implements
migtools/oadp-vm-file-restore#10
🤖 Generated with Claude Code