
Conversation

@weshayutin
Contributor

Why the changes were made

Tiger informs me I'm doing it wrong :)


coderabbitai bot commented Nov 12, 2025

Walkthrough

Documentation updated to replace two separate follow-up PROW CI commands with a single `make update` step, run after creating the CI configuration files.
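
The updated flow the walkthrough describes would look roughly like the following. This is a minimal sketch, assuming the repository the doc points at exposes an `update` Make target; the config file path below is illustrative and not taken from this PR.

```shell
# Create or edit the PROW CI configuration files first (illustrative path;
# the exact files are listed in docs/developer/PROW_CI.md, not on this page).
$EDITOR ci-operator/config/openshift/oadp-operator/openshift-oadp-operator-oadp-dev.yaml

# Previously the docs listed two separate follow-up commands; they are now
# consolidated into a single step:
make update
```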

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| Documentation Update: `docs/developer/PROW_CI.md` | Replaced guidance that previously instructed running two separate commands to update the PROW CI configuration; the docs now call for a single consolidated `make update`, executed after creating the CI files. |

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~3 minutes

  • Focus review on the single documentation file to ensure the command substitution is accurate and any surrounding instructions remain consistent.

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between 4d8fff6 and 4b9a136.

📒 Files selected for processing (1)
  • docs/developer/PROW_CI.md (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**

⚙️ CodeRabbit configuration file

- Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity.

Files:

  • docs/developer/PROW_CI.md

Comment @coderabbitai help to get the list of available commands and usage tips.

Co-authored-by: Tiger Kaovilai <passawit.kaovilai@gmail.com>
openshift-ci bot added the `approved` label (indicates a PR has been approved by an approver from all required OWNERS files) on Nov 13, 2025
@shubham-pampattiwar (Member) left a comment


/lgtm

openshift-ci bot added the `lgtm` label (indicates that a PR is ready to be merged) on Nov 18, 2025

openshift-ci bot commented Nov 18, 2025

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: kaovilai, mpryc, shubham-pampattiwar, weshayutin

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:
  • OWNERS [kaovilai,mpryc,shubham-pampattiwar]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment


openshift-ci bot commented Nov 18, 2025

@weshayutin: all tests passed!

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

openshift-merge-bot merged commit 24e3758 into openshift:oadp-dev on Nov 18, 2025
6 checks passed
kaovilai added a commit to shubham-pampattiwar/oadp-operator that referenced this pull request Nov 18, 2025
❯ gh pr checkout 2003 --recurse-submodules
remote: Enumerating objects: 77, done.
remote: Counting objects: 100% (29/29), done.
remote: Compressing objects: 100% (14/14), done.
remote: Total 77 (delta 15), reused 18 (delta 14), pack-reused 48 (from 1)
Unpacking objects: 100% (77/77), 118.21 KiB | 1.01 MiB/s, done.
From https://github.com/shubham-pampattiwar/oadp-operator
 * [new branch]        vm-file-restore-integration -> shub/vm-file-restore-integration
Previous HEAD position was 24e3758 update prow notes, operator-config (openshift#2029)
branch 'vm-file-restore-integration' set up to track 'shub/vm-file-restore-integration'.
Switched to a new branch 'vm-file-restore-integration'

~/oadp-operator vm-file-restore-integration
❯ make bundle
Using Container Tool: docker
[ -f /Users/tkaovila/oadp-operator/bin/oadp-dev/controller-gen ] || { set -e ; mkdir -p /Users/tkaovila/oadp-operator/bin/oadp-dev/ ; TMP_DIR=$(mktemp -d) ; cd $TMP_DIR ; go mod init tmp ; echo "Downloading sigs.k8s.io/controller-tools/cmd/controller-gen@v0.16.5 to branch directory" ; GOBIN=/Users/tkaovila/oadp-operator/bin/oadp-dev/ go install -mod=mod sigs.k8s.io/controller-tools/cmd/controller-gen@v0.16.5 ; rm -rf $TMP_DIR ; }
GOFLAGS="-mod=mod" /Users/tkaovila/oadp-operator/bin/oadp-dev/controller-gen rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
[ -f /Users/tkaovila/oadp-operator/bin/oadp-dev/kustomize ] || { set -e ; mkdir -p /Users/tkaovila/oadp-operator/bin/oadp-dev/ ; TMP_DIR=$(mktemp -d) ; cd $TMP_DIR ; go mod init tmp ; echo "Downloading sigs.k8s.io/kustomize/kustomize/v5@v5.2.1 to branch directory" ; GOBIN=/Users/tkaovila/oadp-operator/bin/oadp-dev/ go install -mod=mod sigs.k8s.io/kustomize/kustomize/v5@v5.2.1 ; rm -rf $TMP_DIR ; }
GOFLAGS="-mod=mod" /Users/tkaovila/oadp-operator/bin/oadp-dev/operator-sdk generate kustomize manifests -q
cd config/manager && GOFLAGS="-mod=mod" /Users/tkaovila/oadp-operator/bin/oadp-dev/kustomize edit set image controller=quay.io/konveyor/oadp-operator:latest
GOFLAGS="-mod=mod" /Users/tkaovila/oadp-operator/bin/oadp-dev/kustomize build config/manifests | GOFLAGS="-mod=mod" /Users/tkaovila/oadp-operator/bin/oadp-dev/operator-sdk generate bundle -q --extra-service-accounts "velero,non-admin-controller,oadp-vm-file-restore-controller-manager" --overwrite --version 99.0.0  --channels="dev" --default-channel="dev"
WARN[0000] ClusterServiceVersion validation: [OperationFailed] provided API should have an example annotation
WARN[0000] ClusterServiceVersion validation: [OperationFailed] provided API should have an example annotation
WARN[0000] ClusterServiceVersion validation: [OperationFailed] provided API should have an example annotation
WARN[0000] ClusterServiceVersion validation: [OperationFailed] provided API should have an example annotation
WARN[0000] ClusterServiceVersion validation: [OperationFailed] provided API should have an example annotation
WARN[0000] ClusterServiceVersion validation: [OperationFailed] provided API should have an example annotation
WARN[0000] ClusterServiceVersion validation: [CSVFileNotValid] (oadp-operator.v99.0.0) csv.Spec.minKubeVersion is not informed. It is recommended you provide this information. Otherwise, it would mean that your operator project can be distributed and installed in any cluster version available, which is not necessarily the case for all projects.
INFO[0000] Creating bundle.Dockerfile
INFO[0000] Creating bundle/metadata/annotations.yaml
INFO[0000] Bundle metadata generated successfully
Using Container Tool: docker
Using Container Tool: docker
[ -f /Users/tkaovila/oadp-operator/bin/yq ] || { set -e ; TMP_DIR=$(mktemp -d) ; cd $TMP_DIR ; go mod init tmp ; echo "Downloading github.com/mikefarah/yq/v4@v4.28.1" ; GOBIN=/Users/tkaovila/oadp-operator/bin go install -mod=mod github.com/mikefarah/yq/v4@v4.28.1 ; rm -rf $TMP_DIR ; }
go: creating new go.mod: module tmp
Downloading github.com/mikefarah/yq/v4@v4.28.1
Using Container Tool: docker
[ -f /Users/tkaovila/oadp-operator/bin/yq ] || { set -e ; TMP_DIR=$(mktemp -d) ; cd $TMP_DIR ; go mod init tmp ; echo "Downloading github.com/mikefarah/yq/v4@v4.28.1" ; GOBIN=/Users/tkaovila/oadp-operator/bin go install -mod=mod github.com/mikefarah/yq/v4@v4.28.1 ; rm -rf $TMP_DIR ; }
cp bundle.Dockerfile build/Dockerfile.bundle
GOFLAGS="-mod=mod" /Users/tkaovila/oadp-operator/bin/oadp-dev/operator-sdk bundle validate ./bundle
WARN[0000] Warning: Value velero.io/v1, Kind=BackupRepository: provided API should have an example annotation
WARN[0000] Warning: Value oadp.openshift.io/v1alpha1, Kind=VirtualMachineBackupsDiscovery: provided API should have an example annotation
WARN[0000] Warning: Value oadp.openshift.io/v1alpha1, Kind=CloudStorage: provided API should have an example annotation
WARN[0000] Warning: Value velero.io/v2alpha1, Kind=DataUpload: provided API should have an example annotation
WARN[0000] Warning: Value velero.io/v2alpha1, Kind=DataDownload: provided API should have an example annotation
WARN[0000] Warning: Value oadp.openshift.io/v1alpha1, Kind=VirtualMachineFileRestore: provided API should have an example annotation
WARN[0000] Warning: Value : (oadp-operator.v99.0.0) csv.Spec.minKubeVersion is not informed. It is recommended you provide this information. Otherwise, it would mean that your operator project can be distributed and installed in any cluster version available, which is not necessarily the case for all projects.
INFO[0000] All validation tests have completed successfully
gsed -e 's/    createdAt: .*/    createdAt: "2025-02-28T20:03:54Z"/' bundle/manifests/oadp-operator.clusterserviceversion.yaml > bundle/manifests/oadp-operator.clusterserviceversion.yaml.tmp
mv bundle/manifests/oadp-operator.clusterserviceversion.yaml.tmp bundle/manifests/oadp-operator.clusterserviceversion.yaml

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
weshayutin pushed a commit that referenced this pull request Nov 20, 2025
* Add VM file restore feature integration

This commit integrates the VM file restore feature into OADP operator,
following the same pattern as the non-admin controller integration.

Features:
- Enable/disable VM file restore via DPA.Spec.VMFileRestore.Enable
- Automatic deployment of oadp-vm-file-restore-controller when enabled
- Support for custom resource limits via DPA configuration
- Image override support for all 4 required images:
  * VM file restore controller
  * File access container
  * SSH sidecar
  * FileBrowser sidecar

Changes:
- API: Added VMFileRestore struct to DataProtectionApplication spec
- API: Added 4 image key constants for unsupportedOverrides
- Controller: Created vmfilerestore_controller.go with reconciliation logic
- Validation: Added VM file restore validation requiring kubevirt and openshift plugins
- CRDs: Added VirtualMachineBackupsDiscovery and VirtualMachineFileRestore
- RBAC: Added ClusterRole, binding, and ServiceAccount for controller
- Bundle: Updated CSV with new CRDs, environment variables, and related images
- Documentation: Created comprehensive user guide at docs/config/vm_file_restore.md
- Tests: Added 33 unit test scenarios with full coverage

Prerequisites:
- KubeVirt must be installed in the cluster
- kubevirt-velero-plugin must be configured in defaultPlugins (required)
- openshift-velero-plugin must be configured in defaultPlugins (required)

Implements: migtools/oadp-vm-file-restore#10

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
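
The feature and prerequisite lists in the commit message above suggest a DataProtectionApplication with the kubevirt and openshift default plugins plus the new VM file restore toggle. The sketch below is a hedged illustration only: the `vmFileRestore.enable` spelling is inferred from `DPA.Spec.VMFileRestore.Enable`, and the metadata values are placeholders; the authoritative example lives in docs/config/vm_file_restore.md.

```shell
# Hypothetical DPA enabling VM file restore (field casing inferred from the
# commit message; verify against docs/config/vm_file_restore.md before use).
cat <<'EOF' | oc apply -f -
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: example-dpa          # placeholder name
  namespace: openshift-adp   # assumed OADP install namespace
spec:
  configuration:
    velero:
      defaultPlugins:
        - openshift          # required per the prerequisites above
        - kubevirt           # required per the prerequisites above
  vmFileRestore:
    enable: true             # maps to DPA.Spec.VMFileRestore.Enable
EOF
```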

* Fix VM file restore controller deployment and permissions

This commit fixes issues found during live cluster testing:

1. Add missing RBAC permissions:
   - events permission for controller to create events
   - coordination.k8s.io/leases for leader election

2. Fix reconciliation loop in OADP operator:
   - Only update dynamic container fields (Image, ImagePullPolicy, Env, Resources)
   - Make PodSecurityContext conditional (set only if nil)
   - Prevents continuous reconciliation by leaving static fields unchanged

3. Bundle generation fixes:
   - Add vm-file-restore RBAC kustomization.yaml
   - Add oadp-vm-file-restore-controller-manager to Makefile BUNDLE_GEN_FLAGS
   - Reference vm-file-restore RBAC in config/manifests/kustomization.yaml
   - Clean unwanted RBAC from CSV (buckets, velero-privileged SCC, wildcards)

Tested on live cluster - controller now deploys successfully, performs
leader election, and no longer causes reconciliation loops.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
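
The first fix above adds events and leader-election lease permissions to the controller's RBAC. A hedged sketch of what such rules typically look like in a kubebuilder-style operator; the role name and exact verb lists are assumptions, and the authoritative rules are generated into config/rbac/ of the operator repo.

```shell
cat <<'EOF' | oc apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: oadp-vm-file-restore-extra-permissions   # hypothetical name
rules:
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "patch"]        # lets the controller emit Events
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]   # leader election
EOF
```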

* Fix code formatting and clean unwanted RBAC permissions

- Fix import grouping (gci) in validator_test.go, vmfilerestore_controller.go, and vmfilerestore_controller_test.go
- Replace custom contains helper with strings.Contains from stdlib
- Remove unwanted RBAC from config/rbac/role.yaml:
  - Full wildcard permissions (apiGroups: *, resources: *, verbs: *)
  - buckets resources (3 locations)
  - velero-privileged SCC reference
  - Duplicate privileged SCC entry
  - Extra velero.io resource listings
- Remove unwanted RBAC from bundle CSV:
  - Full wildcard from velero ServiceAccount

All tests pass.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* lint fix

* make bundle

* `make bundle` after `gh pr checkout 2003 --recurse-submodules`

❯ gh pr checkout 2003 --recurse-submodules
remote: Enumerating objects: 77, done.
remote: Counting objects: 100% (29/29), done.
remote: Compressing objects: 100% (14/14), done.
remote: Total 77 (delta 15), reused 18 (delta 14), pack-reused 48 (from 1)
Unpacking objects: 100% (77/77), 118.21 KiB | 1.01 MiB/s, done.
From https://github.com/shubham-pampattiwar/oadp-operator
 * [new branch]        vm-file-restore-integration -> shub/vm-file-restore-integration
Previous HEAD position was 24e3758 update prow notes, operator-config (#2029)
branch 'vm-file-restore-integration' set up to track 'shub/vm-file-restore-integration'.
Switched to a new branch 'vm-file-restore-integration'

~/oadp-operator vm-file-restore-integration
❯ make bundle
Using Container Tool: docker
[ -f /Users/tkaovila/oadp-operator/bin/oadp-dev/controller-gen ] || { set -e ; mkdir -p /Users/tkaovila/oadp-operator/bin/oadp-dev/ ; TMP_DIR=$(mktemp -d) ; cd $TMP_DIR ; go mod init tmp ; echo "Downloading sigs.k8s.io/controller-tools/cmd/controller-gen@v0.16.5 to branch directory" ; GOBIN=/Users/tkaovila/oadp-operator/bin/oadp-dev/ go install -mod=mod sigs.k8s.io/controller-tools/cmd/controller-gen@v0.16.5 ; rm -rf $TMP_DIR ; }
GOFLAGS="-mod=mod" /Users/tkaovila/oadp-operator/bin/oadp-dev/controller-gen rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
[ -f /Users/tkaovila/oadp-operator/bin/oadp-dev/kustomize ] || { set -e ; mkdir -p /Users/tkaovila/oadp-operator/bin/oadp-dev/ ; TMP_DIR=$(mktemp -d) ; cd $TMP_DIR ; go mod init tmp ; echo "Downloading sigs.k8s.io/kustomize/kustomize/v5@v5.2.1 to branch directory" ; GOBIN=/Users/tkaovila/oadp-operator/bin/oadp-dev/ go install -mod=mod sigs.k8s.io/kustomize/kustomize/v5@v5.2.1 ; rm -rf $TMP_DIR ; }
GOFLAGS="-mod=mod" /Users/tkaovila/oadp-operator/bin/oadp-dev/operator-sdk generate kustomize manifests -q
cd config/manager && GOFLAGS="-mod=mod" /Users/tkaovila/oadp-operator/bin/oadp-dev/kustomize edit set image controller=quay.io/konveyor/oadp-operator:latest
GOFLAGS="-mod=mod" /Users/tkaovila/oadp-operator/bin/oadp-dev/kustomize build config/manifests | GOFLAGS="-mod=mod" /Users/tkaovila/oadp-operator/bin/oadp-dev/operator-sdk generate bundle -q --extra-service-accounts "velero,non-admin-controller,oadp-vm-file-restore-controller-manager" --overwrite --version 99.0.0  --channels="dev" --default-channel="dev"
WARN[0000] ClusterServiceVersion validation: [OperationFailed] provided API should have an example annotation
WARN[0000] ClusterServiceVersion validation: [OperationFailed] provided API should have an example annotation
WARN[0000] ClusterServiceVersion validation: [OperationFailed] provided API should have an example annotation
WARN[0000] ClusterServiceVersion validation: [OperationFailed] provided API should have an example annotation
WARN[0000] ClusterServiceVersion validation: [OperationFailed] provided API should have an example annotation
WARN[0000] ClusterServiceVersion validation: [OperationFailed] provided API should have an example annotation
WARN[0000] ClusterServiceVersion validation: [CSVFileNotValid] (oadp-operator.v99.0.0) csv.Spec.minKubeVersion is not informed. It is recommended you provide this information. Otherwise, it would mean that your operator project can be distributed and installed in any cluster version available, which is not necessarily the case for all projects.
INFO[0000] Creating bundle.Dockerfile
INFO[0000] Creating bundle/metadata/annotations.yaml
INFO[0000] Bundle metadata generated successfully
Using Container Tool: docker
Using Container Tool: docker
[ -f /Users/tkaovila/oadp-operator/bin/yq ] || { set -e ; TMP_DIR=$(mktemp -d) ; cd $TMP_DIR ; go mod init tmp ; echo "Downloading github.com/mikefarah/yq/v4@v4.28.1" ; GOBIN=/Users/tkaovila/oadp-operator/bin go install -mod=mod github.com/mikefarah/yq/v4@v4.28.1 ; rm -rf $TMP_DIR ; }
go: creating new go.mod: module tmp
Downloading github.com/mikefarah/yq/v4@v4.28.1
Using Container Tool: docker
[ -f /Users/tkaovila/oadp-operator/bin/yq ] || { set -e ; TMP_DIR=$(mktemp -d) ; cd $TMP_DIR ; go mod init tmp ; echo "Downloading github.com/mikefarah/yq/v4@v4.28.1" ; GOBIN=/Users/tkaovila/oadp-operator/bin go install -mod=mod github.com/mikefarah/yq/v4@v4.28.1 ; rm -rf $TMP_DIR ; }
cp bundle.Dockerfile build/Dockerfile.bundle
GOFLAGS="-mod=mod" /Users/tkaovila/oadp-operator/bin/oadp-dev/operator-sdk bundle validate ./bundle
WARN[0000] Warning: Value velero.io/v1, Kind=BackupRepository: provided API should have an example annotation
WARN[0000] Warning: Value oadp.openshift.io/v1alpha1, Kind=VirtualMachineBackupsDiscovery: provided API should have an example annotation
WARN[0000] Warning: Value oadp.openshift.io/v1alpha1, Kind=CloudStorage: provided API should have an example annotation
WARN[0000] Warning: Value velero.io/v2alpha1, Kind=DataUpload: provided API should have an example annotation
WARN[0000] Warning: Value velero.io/v2alpha1, Kind=DataDownload: provided API should have an example annotation
WARN[0000] Warning: Value oadp.openshift.io/v1alpha1, Kind=VirtualMachineFileRestore: provided API should have an example annotation
WARN[0000] Warning: Value : (oadp-operator.v99.0.0) csv.Spec.minKubeVersion is not informed. It is recommended you provide this information. Otherwise, it would mean that your operator project can be distributed and installed in any cluster version available, which is not necessarily the case for all projects.
INFO[0000] All validation tests have completed successfully
gsed -e 's/    createdAt: .*/    createdAt: "2025-02-28T20:03:54Z"/' bundle/manifests/oadp-operator.clusterserviceversion.yaml > bundle/manifests/oadp-operator.clusterserviceversion.yaml.tmp
mv bundle/manifests/oadp-operator.clusterserviceversion.yaml.tmp bundle/manifests/oadp-operator.clusterserviceversion.yaml

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>

---------

Signed-off-by: Tiger Kaovilai <tkaovila@redhat.com>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Tiger Kaovilai <tkaovila@redhat.com>
weshayutin pushed a commit to weshayutin/oadp-operator that referenced this pull request Nov 22, 2025