OPRUN-4509: Synchronize From Upstream Repositories#669
openshift-merge-bot[bot] merged 73 commits into openshift:main from
Conversation
Provides an API which allows custom probe definitions to determine readiness of the ClusterExtensionRevision (CER) phases. Objects can be selected in one of two ways: by GroupKind, or by label (matchLabels and matchExpressions). They can then be tested with any of ConditionEqual, FieldsEqual, and FieldValue. ConditionEqual checks that the object has a condition matching the provided type and status. FieldsEqual takes two field paths and checks them for equality. FieldValue takes a field path and checks that its value equals the provided expected value. Signed-off-by: Daniel Franz <dfranz@redhat.com>
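The matching semantics of the three probe types can be sketched in a few lines of Go. The helper names and the nested-map object below are hypothetical stand-ins for the actual API types, shown only to make the comparison rules concrete:

```go
package main

import "fmt"

// getPath resolves a dotted field path against a nested map, returning nil
// when any intermediate step is missing.
func getPath(obj map[string]any, path ...string) any {
	var cur any = obj
	for _, p := range path {
		m, ok := cur.(map[string]any)
		if !ok {
			return nil
		}
		cur = m[p]
	}
	return cur
}

// conditionEqual mirrors the ConditionEqual probe: the object must carry a
// status condition with the given type and status.
func conditionEqual(obj map[string]any, condType, condStatus string) bool {
	conds, _ := getPath(obj, "status", "conditions").([]any)
	for _, c := range conds {
		if m, ok := c.(map[string]any); ok && m["type"] == condType && m["status"] == condStatus {
			return true
		}
	}
	return false
}

// fieldsEqual mirrors FieldsEqual: the values at two field paths must match.
func fieldsEqual(obj map[string]any, a, b []string) bool {
	return getPath(obj, a...) == getPath(obj, b...)
}

// fieldValue mirrors FieldValue: the value at a field path must equal want.
func fieldValue(obj map[string]any, path []string, want any) bool {
	return getPath(obj, path...) == want
}

func main() {
	deploy := map[string]any{
		"status": map[string]any{
			"conditions":    []any{map[string]any{"type": "Available", "status": "True"}},
			"replicas":      3,
			"readyReplicas": 3,
		},
	}
	fmt.Println(conditionEqual(deploy, "Available", "True"))
	fmt.Println(fieldsEqual(deploy, []string{"status", "replicas"}, []string{"status", "readyReplicas"}))
	fmt.Println(fieldValue(deploy, []string{"status", "readyReplicas"}, 3))
}
```

All three checks print `true` for this object; a missing path or mismatched condition simply yields `false`, which is what lets the probes gate phase readiness.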
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.79.1 to 1.79.3. - [Release notes](https://github.com/grpc/grpc-go/releases) - [Commits](grpc/grpc-go@v1.79.1...v1.79.3) --- updated-dependencies: - dependency-name: google.golang.org/grpc dependency-version: 1.79.3 dependency-type: indirect ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Signed-off-by: grokspawn <jordan@nimblewidget.com>
Add a test to ensure that OLM is not reverting user changes like kubectl rollout restart. Assisted-by: Cursor/Claude
Signed-off-by: dtfranz <dfranz@redhat.com> UPSTREAM: <carry>: Update generate-manifests to handle new directory The `default` directory was renamed `base`. Signed-off-by: Todd Short <todd.short@me.com> The `base` directory was moved to `base/operator-controller`. Signed-off-by: Todd Short <todd.short@me.com> UPSTREAM: <carry>: Drop commitchecker Signed-off-by: Alexander Greene <greene.al1991@gmail.com> UPSTREAM: <carry>: Updating ose-olm-operator-controller-container image to be consistent with ART Reconciling with https://github.com/openshift/ocp-build-data/tree/4022cd290f00a44d667dda03f2d78d84a488c7ed/images/ose-olm-operator-controller.yml UPSTREAM: <carry>: update owners * Remove alumni from owners * Add m1kola to approvers Signed-off-by: Mikalai Radchuk <mradchuk@redhat.com> UPSTREAM: <carry>: Add pointer to tooling README UPSTREAM: <carry>: Disable Validating Admission Policy APIs downstream Signed-off-by: Mikalai Radchuk <mradchuk@redhat.com> UPSTREAM: <carry>: Updating ose-olm-operator-controller-container image to be consistent with ART for 4.16 Reconciling with https://github.com/openshift/ocp-build-data/tree/6250d54c4686a708ca5985afb73080e8ca9a1f7f/images/ose-olm-operator-controller.yml UPSTREAM: <carry>: Enable Validating Admission Policy APIs downstream * This reverts commit 3f079c4. * Includes Validating Admission Policy manifests Signed-off-by: Mikalai Radchuk <mradchuk@redhat.com> UPSTREAM: <carry>: manifests: set required-scc for openshift workloads UPSTREAM: <carry>: Updating ose-olm-operator-controller-container image to be consistent with ART for 4.17 Reconciling with https://github.com/openshift/ocp-build-data/tree/4c1326094222f9209876f06833179a1b9178faf7/images/ose-olm-operator-controller.yml UPSTREAM: <carry>: add everettraven to approvers+reviewers Signed-off-by: everettraven <everettraven@gmail.com> UPSTREAM: <carry>: add openshift kustomize overlay to enable TLS communication with catalogd.
Configure the CA certs using the configmap injection method via service-ca-operator Signed-off-by: everettraven <everettraven@gmail.com> UPSTREAM: <carry>: Add tmshort to approvers Also `s/runtime/framework/g` in the DOWNSTREAM_OWNERS Signed-off-by: Todd Short <todd.short@me.com> UPSTREAM: <carry>: Updating ose-olm-operator-controller-container image to be consistent with ART for 4.18 Reconciling with https://github.com/openshift/ocp-build-data/tree/dd68246f3237db5db458127566fc7b05b55e1660/images/ose-olm-operator-controller.yml UPSTREAM: <carry>: Properly copy and call kustomize Signed-off-by: Todd Short <todd.short@me.com> UPSTREAM: <carry>: manifests: add hostPath mount for /etc/containers Signed-off-by: Joe Lanford <joe.lanford@gmail.com> UPSTREAM: <carry>: Add test-e2e target for downstream Makefile to be run by openshift/release. Signed-off-by: dtfranz <dfranz@redhat.com> UPSTREAM: <carry>: Add downstream verify makefile target Signed-off-by: dtfranz <dfranz@redhat.com> UPSTREAM: <carry>: openshift: template log verbosity to be managed by cluster-olm-operator Signed-off-by: Joe Lanford <joe.lanford@gmail.com> UPSTREAM: <carry>: Add global-pull-secret flag Pass global-pull-secret to the manager container. Signed-off-by: Mikalai Radchuk <mradchuk@redhat.com> UPSTREAM: <carry>: Update openshift CAs to operator-controller The /run/secrets/kubernetes.io/serviceaccount/ directory is projected into the pod and contains the following CA certificates: * configmap/kube-root-ca.crt as ca.crt * configmap/openshift-service-ca.crt as service-ca.crt Update the --ca-certs-dir argument to reference the directory. 
Signed-off-by: Todd Short <todd.short@me.com> UPSTREAM: <carry>: Add HowTo for origin tests Signed-off-by: Todd Short <todd.short@me.com> UPSTREAM: <carry>: Add e2e registry Dockerfile Signed-off-by: dtfranz <dfranz@redhat.com> UPSTREAM: <carry>: add nodeSelector and tolerations to operator-controller deployment via kustomize patch Signed-off-by: everettraven <everettraven@gmail.com> UPSTREAM: <carry>: namespace: use privileged PSA for audit and warn levels Signed-off-by: Joe Lanford <joe.lanford@gmail.com> UPSTREAM: <carry>: Enable downstream e2e Signed-off-by: dtfranz <dfranz@redhat.com> UPSTREAM: <carry>: Remove m1kola from owners Signed-off-by: Mikalai Radchuk <mradchuk@redhat.com> UPSTREAM: <carry>: Updating ose-olm-operator-controller-container image to be consistent with ART for 4.19 Reconciling with https://github.com/openshift/ocp-build-data/tree/a39508c86497b4e5e463d7b2c78e51e577be9e7d/images/ose-olm-operator-controller.yml UPSTREAM: <carry>: generate and mount service-ca server cert Signed-off-by: Joe Lanford <joe.lanford@gmail.com> UPSTREAM: <carry>: Add support for proxy trustedCAs Just map the list of trusted ca certs into the deployment Signed-off-by: Todd Short <todd.short@me.com> UPSTREAM: <carry>: Fix error to build the image Copy correct (new) executable name for operator-controller Signed-off-by: Todd Short <todd.short@me.com> UPSTREAM: <carry>: Fix make verify for mac os envs Joe Lanford <joe.lanford@gmail.com> UPSTREAM: <carry>: Move operator-controller openshift files to its own dir UPSTREAM: <carry>: Upgrade OCP images from 4.18 to 4.19 UPSTREAM: <carry>: Add Openshift's catalogd manifests - Move to openshift/catalogd the specific manifest under: https://github.com/openshift/operator-framework-catalogd/tree/main/openshift - Add call to generate catalogd manifest to 'make manifest'. 
The `make verify` test now also covers the catalogd and operator-controller OpenShift manifests UPSTREAM: <carry>: resolve issue with premature mounting of trusted CA configmap Signed-off-by: Joe Lanford <joe.lanford@gmail.com> UPSTREAM: <carry>: Add /etc/docker to the operator-controller and catalogd deployments This allows for use of any image.config.openshift.io trusted CAs Signed-off-by: Todd Short <todd.short@me.com> UPSTREAM: <carry>: fixup catalogd.Dockerfile paths Signed-off-by: Joe Lanford <joe.lanford@gmail.com> UPSTREAM: <carry>: Resolve issue with premature mounting of service CA configmap Signed-off-by: Todd Short <todd.short@me.com> UPSTREAM: <carry>: use projected volume for CAs to avoid subPath limitations Signed-off-by: Joe Lanford <joe.lanford@gmail.com> UPSTREAM: <carry>: Revert "UPSTREAM: <carry>: use projected volume for CAs to avoid subPath limitations" This reverts commit 548caa4. UPSTREAM: <carry>: use projected volume for CAs to avoid subPath limitations Signed-off-by: Joe Lanford <joe.lanford@gmail.com> UPSTREAM: <carry>: Remove vet from openshift verify The `vet` target was removed upstream. Signed-off-by: Todd Short <todd.short@me.com> UPSTREAM: <carry>: Skip another upstream test Signed-off-by: Todd Short <todd.short@me.com> UPSTREAM: <carry>: Cleanup openshift/Makefile by removing no longer required comments regarding catalogd e2e tests UPSTREAM: <carry>: Enable OCP metrics collection by default Enables OCP to collect Prometheus metrics for both catalogd and operator-controller by default. This is accomplished via ServiceMonitor CRs which are now created for both projects.
UPSTREAM: <carry>: Fix catalogd.Dockerfile to use new paths The root catalogd directory has been removed Signed-off-by: Todd Short <todd.short@me.com> UPSTREAM: <carry>: Update DOWNSTREAM_OWNERS_ALIASES Signed-off-by: Todd Short <todd.short@me.com> UPSTREAM: <carry>: Add openshift node selector annotation Signed-off-by: Catherine Chan-Tse <cchantse@redhat.com> (cherry picked from commit 9b4a113) UPSTREAM: <carry>: Add catalogd-cas-dir option to op-con Signed-off-by: Todd Short <todd.short@me.com> UPSTREAM: <carry>: set the SELinux type Signed-off-by: Jian Zhang <jiazha@redhat.com> UPSTREAM: <carry>: Add initial stack to run tests to validate the catalogs UPSTREAM: <carry>: Add vendor files for the catalog-sync tests UPSTREAM: <carry>: Bump catalog versions to 4.19 Signed-off-by: Todd Short <todd.short@me.com> UPSTREAM: <carry>: revert "Bump catalog versions to 4.19" This reverts commit a98980b. UPSTREAM: <carry>: Update HOWTO-origin-tests techpreview is no longer a required option. Signed-off-by: Todd Short <todd.short@me.com> UPSTREAM: <carry>: [DefaultCatalogTests]: Allow passing an auth path for docker credentials UPSTREAM: <carry>: fix: set NoLchown=true to allow image unpack on OCP CI UPSTREAM: <carry>: [DefaultCatalogTests]: Moving parse of ENVVAR to the caller (follow-up 345) UPSTREAM: <carry>: [Default Catalog]: Create tmp dir to extract layers with the right permissions to avoid issue scenarios UPSTREAM: <carry>: [Default Catalog] (cleanup) Remove hack directory which is not used UPSTREAM: <carry>: Change code implementation to extract layers in OCP env UPSTREAM: <carry>: Add vendor files for change in the extract code implementation UPSTREAM: <carry>: [Default Catalog Tests]: Final cleanups and enhancements of initial implementation UPSTREAM: <carry>: SELinux type for operator-controller Signed-off-by: Jian Zhang <jiazha@redhat.com> UPSTREAM: <carry>: Bump catalog versions to 4.19 Signed-off-by: Todd Short <todd.short@me.com> UPSTREAM: <carry>: [Default Catalog 
Consistency Test] (feat) add check for executable files in filesystem Checks if given paths exist and point to executable files or valid symlinks. UPSTREAM: <carry>: [Default Catalog Consistency Test]: fix junit output format to allow generating XML UPSTREAM: <carry>: [Default Catalog Consistency Test] (feat) add check to validate multi-arch support UPSTREAM: <carry>: [Default Catalog Consistency Test]: Enable CatalogChecks UPSTREAM: <carry>: [Default Catalog Consistency Test]: Rename Tests suite and small cleanups UPSTREAM: <carry>: Updating ose-olm-operator-controller-container image to be consistent with ART for 4.20 Reconciling with https://github.com/openshift/ocp-build-data/tree/dfb5c7d531490cfdc61a3b88bc533702b9624997/images/ose-olm-operator-controller.yml UPSTREAM: <carry>: Updating ose-olm-catalogd-container image to be consistent with ART for 4.20 Reconciling with https://github.com/openshift/ocp-build-data/tree/dfb5c7d531490cfdc61a3b88bc533702b9624997/images/ose-olm-catalogd.yml UPSTREAM: <carry>: Update e2e registry to use 1.24/4.20 Update the e2e registry Dockerfile to use golang 1.24/OCP 4.20 Signed-off-by: Todd Short <todd.short@me.com> UPSTREAM: <carry>: [Catalog Default Tests]: Upgrade go version to 1.24.3, dependencies and fix new lint issue UPSTREAM: <carry>: Add structure to allow moving the origin tests using OTE This commit introduces a binary and supporting structure to enable the execution of OpenShift origin (olmv1) tests using the Open Test Environment (OTE). It lays the groundwork for moving origin tests in openshift/origin to be executed from this repository using OTE. UPSTREAM: <carry>: Add support for experimental manifests Update the openshift kustomize configuration for both operator-controller and catalogd. Update the manifest generation scripts to put the core generation code into a function (ignore-whitespace will help with the review), so that it can be called twice; once for standard, and once for experimental. 
Move around some of the kustomization directives: * Create a patch kustomization (Component) file and move the patch directives from olmv1-ns there. This allows it to be referenced from a different directory. * Add a kustomization file for trusted-ca. This allows it to be referenced from a different directory. * Move the setting of the namePrefix for operator-controller; this makes the generation compatible with upstream feature components. * Define experimental kustomization files that reference existing components. * Reference the correct CRDs (standard or experimental). * Add references to upstream feature components into the experimental manifests. This *will* add `--feature-gates` options from the upstream feature components to the experimental manifests. The cluster-olm-operator will strip those arguments from the deployments before adding the enabled feature gates. Update the Dockerfiles to include the experimental manifests and a copy script (`cp-manifests`) into the image containers. The complexity of having multiple sets of manifests means that the simple initContainer copy mechanism found in cluster-olm-operator is no longer sufficient. This attempts to keep backwards compatibility with older versions of cluster-olm-operator, specifically by keeping the original (standard) manifests in the original location, and adding the experimental manifests in a new directory. The new `cp-manifests` script is used by newer versions of cluster-olm-operator. Signed-off-by: Todd Short <todd.short@me.com> UPSTREAM: <carry>: [OTE] - chore: follow up openshift#383 – remove unreachable target call UPSTREAM: <carry>: Remove build of test image registry Upstream now uses a different image Signed-off-by: Todd Short <todd.short@me.com> UPSTREAM: <carry>: Add test-experimental-e2e target to openshift Makefile This adds a test-experimental-e2e target to allow the CI to run the experimental e2e test. 
Signed-off-by: Todd Short <todd.short@me.com> UPSTREAM: <carry>: [OTE]: Add binary in the operator controller image to allow proper integration with OCP tests UPSTREAM: <carry>: Fix experimental manifest copying The standard manifest was being copied rather than the experimental manifest. This meant that the expected feature-flags are not present. This is failing now that we are doing a check for those feature-flags. Signed-off-by: Todd Short <todd.short@me.com> UPSTREAM: <carry>: Update manifest generation for upstream rbac/webhooks Signed-off-by: Todd Short <todd.short@me.com> UPSTREAM: <carry>: [OTE] - Add tracking mechanism UPSTREAM: <carry>: Update OTE dep to get fix UPSTREAM: <carry>: [OTE] Add Readme UPSTREAM: <carry>: set GIT_COMMIT env from SOURCE_GIT_COMMIT in Dockerfiles for operator-controller and catalogd Signed-off-by: Rashmi Gottipati <chowdary.grashmi@gmail.com> UPSTREAM: <carry>: add openshift specific build target to pass commit info downstream Signed-off-by: Ankita Thomas <ankithom@redhat.com> UPSTREAM: <carry>: add source commit into binaries when linking - Removes extra GIT_COMMIT set - fixup Dockerfiles after rebase - consider "" unset so build-info can fill commit/date - double quote go flags & honor GIT_COMMIT if set - improve robustness of build-info parsing - Trim whitespace on all version fields - isUnset and valueOrUnknown now call strings.TrimSpace - Avoid clobbering values injected via ldflags - set repoState from build-info only when repoState is still unset - set version from build-info only when unset and build-info value is non-empty UPSTREAM: <carry>: OTE add first test from openshift/origin olmv1.go UPSTREAM: <carry>: Migrate tasks from openshift/origin olm v1.go file which are remaining This commit moves the final OLMv1 tests from openshift/origin/test/extended/olm/olmv1.go to their proper location in this repository. 
This migration is part of a larger effort to streamline development by co-locating tests with the component they validate. This will reduce CI overhead and allow for faster, more atomic changes. Assisted-by: Gemini UPSTREAM: <carry>: OTE - How to test locally with OCP instances UPSTREAM: <carry>: [OTE] Refactor: refactor helper and olmv1 test to create a namespace instead of using a pre-existing one UPSTREAM: <carry>: [OTE] add webhook tests Migrates OLMv1 webhook operator tests from using external YAML files to defining resources in Go structs. This change removes file dependencies, improving test reliability and simplifying test setup. The migration is a refactoring of code from openshift/origin#30059. The new code uses better naming conventions and adapts the tests to work with a controller-runtime client, enhancing test consistency and maintainability. The migration covers all core test scenarios: - Validating, mutating, and conversion webhooks. - Certificate and secret rotation tolerance. Assisted-by: Gemini UPSTREAM: <carry>: OTE: rewrite the upgrade incompatible operator test This test replaces the existing upgrade incompatible test. The main change is that operator and catalog bundles are created on-the-fly to support OCP 4.20. This means we are no longer dependent on public operators for this test. This creates new bundles in the OCP ImageRegistry, which requires using a number of OCP APIs, including using a raw API URL to invoke the build. This is done by invoking an external k8s client (either `oc` or `kubectl`), and passing it a tarball of the bundle to be created. So, it can't be done by the golang k8sClient normally available (i.e. the create input is a tarball not a YAML file). This introduces the use of go-bindata to store the bundle contents. It also pulls in openshift image, build, and operator APIs. 
Signed-off-by: Todd Short <todd.short@me.com> UPSTREAM: <carry>: Handle service-ca cert availability/rotation There is a problem when the service-ca certificate is not available at pod start. This is an issue because the SystemCertPool is created from SSL_CERT_DIR, which may include the empty service-ca. The SystemCertPool is never regenerated during the lifetime of the program execution, so it will never get updated when the service-ca is filled. Thus, we need to use --pull-cas-dir to reference the CAs that we want to use. This will also allow OLMv1 to reload the service-ca when it is rotated (after 2 years, mind you). Removing the SSL_CERT_DIR setting, and adding the --pull-cas-dir flag ought to be equivalent to what we have now (i.e. SSL_CERT_DIR and no --pull-cas-dir), except that rotation will be handled better. Signed-off-by: Todd Short <todd.short@me.com> UPSTREAM: <carry>: [OTE] add webhook tests Revert "UPSTREAM: <carry>: [OTE] add webhook tests" This reverts commit 9963614. UPSTREAM: <carry>: Upgrade OCP Catalog images from 4.19 to 4.20 UPSTREAM: <carry>: Remove bindata generation from build Using go-bindata is causing problems with ART builds. This removes the use of go-bindata from the builds. This will subsequently require that users MANUALLY run the `bindata` target to refresh the bindata, or use the `build-update` target. This is a quickfix to put out the fire. Signed-off-by: Todd Short <todd.short@me.com> UPSTREAM: <carry>: [OTE] Add webhook tests - Add dumping of container logs and `kubectl describe pods` output for better diagnostics. - Include targeted certificate details dump (`tls.crt` parse) when failures occur. - Add additional check to verify webhook responsiveness after certificate rotation. This change is a refactor of code from openshift/origin#30059. 
Assisted-by: Gemini UPSTREAM: <carry>: OTE add logs and dumps for olmv1 test and fix helper for clusterextensions UPSTREAM: <carry>: [OTE] Migrate preflight checks from openshift/origin Migrated OLMv1 operator preflight checks from using external YAML files to defining ClusterRole permissions directly in Go structs. This improves test reliability and simplifies test setup by removing file dependencies. The changes ensure precise replication of original test scenarios, including specific permission omissions for services, create verbs, ClusterRoleBindings, ConfigMap resourceNames, and escalate/bind verbs. Assisted-by: Gemini UPSTREAM: <carry>: [OTE] Add webhook to validate openshift-service-ca certificate rotation This change is a refactor of code from openshift/origin#30059. Assisted-by: Gemini UPSTREAM: <carry>: Adds ResourceVersion checks to the tls secret deletion test, mirroring the logic used in the certificate rotation test. This makes the test more robust by ensuring a new secret is created, not just that an existing one is still present. UPSTREAM: <carry>: [OTE] - Readme: Add info to help use payload-aggregate with new tests UPSTREAM: <carry>: remove obsolete owners Signed-off-by: grokspawn <jordan@nimblewidget.com> UPSTREAM: <carry>: [OTE] add catalog tests from openshift/origin This commit migrates the olmv1_catalog set of tests from openshift/origin to OTE as part of the broad effort to migrate all tests. Assisted-by: Gemini UPSTREAM: <carry>: Migrate single/own namespace tests This commit migrates the OLMv1 single and own namespace watch mode tests from openshift/origin/test/extended/olm/olmv1-singleownnamespace.go to this repository. This is part of the effort to move component-specific tests into their respective downstream locations. Assisted-by: Gemini UPSTREAM: <carry>: Adds ResourceVersion checks to the tls secret deletion test, mirroring the logic used in the certificate rotation test. 
This makes the test more robust by ensuring a new secret is created, not just that an existing one is still present. This reverts commit 0bb1953. UPSTREAM: <carry>: [OTE] Add webhook to validate openshift-service-ca certificate rotation This reverts commit e9e3220. UPSTREAM: <carry>: Ensure unique name for bad-catalog tests UPSTREAM: <carry>: Revert "Handle service-ca cert availability/rotation" This reverts commit 9cc13d8. UPSTREAM: <carry>: grant QE approver permission for OTE UPSTREAM: <carry>: Update webhook ote tests to use latest webhook-operator Signed-off-by: Per Goncalves da Silva <pegoncal@redhat.com> UPSTREAM: <carry>: update operator-controller to v1.5.1 UPSTREAM: <carry>: configure watchnamespace using spec.config for OTE tests UPSTREAM: <carry>: add jiazha to approvers UPSTREAM: <carry>: Create combined manifests for comparison Signed-off-by: Todd Short <todd.short@me.com> UPSTREAM: <carry>: Use Helm charts for openshift manifests Signed-off-by: Todd Short <todd.short@me.com> UPSTREAM: <carry>: add support for tests-private cases and add the case UPSTREAM: <carry>: Fix cp-manifests copying of helm charts The method used to copy the helm charts is including an extra `helm` directory in the destination path, that is making the cluster-olm-operator code just a bit more complicated than it needs to be. This fixes the copy location. Signed-off-by: Todd Short <todd.short@me.com> UPSTREAM: <carry>: Remove kustomize manifests from images and repo Now that helm manifests are being used to dynamically generate the manifests, the pre-generated manifests are no longer needed. So, we can remove them from the repo and the images. However, because we still want to verify the manifests are "good", we are still creating a "single-file" version of the manifests for verification purposes, and to allow us to see what changes are happening to the manifests (from upstream and/or downstream sources). 
Signed-off-by: Todd Short <todd.short@me.com> UPSTREAM: <carry>: Add pedjak and trgeiger as reviewers UPSTREAM: <carry>: migrate more cases from tests-private and enhance suites with filters UPSTREAM: <carry>: Updating ose-olm-operator-controller-container image to be consistent with ART for 4.21 Reconciling with https://github.com/openshift/ocp-build-data/tree/4fbe3fab45239dc4be6f5d9d98a0bf36e0274ec9/images/ose-olm-operator-controller.yml UPSTREAM: <carry>: Updating ose-olm-catalogd-container image to be consistent with ART for 4.21 Reconciling with https://github.com/openshift/ocp-build-data/tree/4fbe3fab45239dc4be6f5d9d98a0bf36e0274ec9/images/ose-olm-catalogd.yml UPSTREAM: <carry>: OTE: Enable disconnected environment and build test operator controller image Signed-off-by: Per Goncalves da Silva <pegoncal@redhat.com> UPSTREAM: <carry>: for incompatible test add func to wait builder and deployer SA creation by OCP controller UPSTREAM: <carry>: Fix VERSION replacement in catalog bindata Signed-off-by: Todd Short <todd.short@me.com> UPSTREAM: <carry>: check kubeconfig only run-test and run-suite UPSTREAM: <carry>: Clean up cp-manifests There is no longer a need to copy conditionally Signed-off-by: Todd Short <todd.short@me.com> UPSTREAM: <carry>: Update does-not-exist and simple install to work in a disconnected environment Signed-off-by: Todd Short <todd.short@me.com> UPSTREAM: <carry>: support webhook case in disconnected UPSTREAM: <carry>: Consolidate build API This consolidates the in-cluster building of a bundle and catalog. The catalog and bundle bindata are inputs, along with a set of replacements so that catalog and bundle templates can be used to create the images. This can be done in the BeforeEach() for a set of tests that use the same data. Signed-off-by: Todd Short <todd.short@me.com>
…images from openshift/catalogd/manifests.yaml
Signed-off-by: Todd Short <todd.short@me.com>
…oss to avoid flakes
Signed-off-by: Todd Short <todd.short@me.com>
…uess and waiting for k8s cleanups Co-Author: kuiwang@redhat.com
…nts ( Follow-Up of: 714977c )
… uninstall Assisted-by: Cursor
… format Fix k8s.io/kubernetes replace version from v1.30.1-0... to v0.0.0-... format to resolve bumper tool verification failures. Add hack/ocp-replace.sh script to manage OCP fork replaces properly. Assisted-by: Cursor
…row job for migrated qe cases
🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude <noreply@anthropic.com>
The current pod simply does a `sleep 1000`, which means that the startup, liveness and readiness probes all fail. Use a busybox container to run a simple script and httpd server to emulate the probes.
Signed-off-by: Todd Short <todd.short@me.com>
The test operator's httpd script uses python3's http.server which binds to 0.0.0.0 (IPv4 only) by default. On IPv6-only networks (e.g. metal-ipi-ovn-ipv6-techpreview), the startup/liveness/readiness probes connect to the pod's IPv6 address but nothing is listening, causing the operator pod to never become Ready and the OLMv1 ClusterExtension install test to time out. Adding --bind :: makes python3 http.server listen on all interfaces including IPv6, fixing the test on dual-stack and IPv6-only clusters. This resolves the 0% pass rate on: - periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-ipv6-techpreview Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
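The same pitfall exists in Go: a listen address with an explicit `0.0.0.0` host is IPv4-only, while an empty host (the analogue of `python3 -m http.server --bind ::`) gives the kernel a dual-stack socket where available. A small sketch, with a hypothetical helper name:

```go
package main

import (
	"fmt"
	"net"
)

// listenAll binds the given port on all interfaces. Leaving the host empty
// lets Go open a dual-stack socket on kernels that support it, so probes
// arriving over the pod's IPv6 address are answered too. Passing
// "0.0.0.0:port" instead restricts the socket to IPv4, reproducing the
// never-Ready behavior described above on IPv6-only clusters.
func listenAll(port string) (net.Listener, error) {
	return net.Listen("tcp", net.JoinHostPort("", port))
}

func main() {
	ln, err := listenAll("0") // port 0: let the kernel pick a free port
	if err != nil {
		fmt.Println("listen failed:", err)
		return
	}
	defer ln.Close()
	fmt.Println("listening on", ln.Addr())
}
```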
…g for CE install tests With BoxcutterRuntime, Installed=True is only set after all availability probes pass, which can take longer on TechPreview clusters (IPv6, multi-arch). Increases install-specific timeout from 5m to 10m and logs condition state on each poll to aid debugging flaky failures.
@openshift-bot: This pull request explicitly references no jira issue.

Details

In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
@openshift-bot: GitHub didn't allow me to request PR reviews from the following users: openshift/openshift-team-operator-framework. Note that only openshift members and repo collaborators can review this PR, and authors cannot review their own PRs.

Details

In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
[APPROVALNOTIFIER] This PR is APPROVED

Approval requirements bypassed by manually added approval. This pull-request has been approved by: openshift-bot

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Details

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing
Actionable comments posted: 5
🧹 Nitpick comments (1)
test/e2e/steps/steps.go (1)
1651-1663: Use the rollout command's exit status instead of parsing its message.

Because line 152 makes the CLI configurable, matching `"successfully rolled out"` here turns this waiter into a dependency on client-specific, human-readable output from `kubectl`/`oc`. The exit code from `rollout status --watch=false` is the more stable contract.

Suggested change
```diff
 waitFor(ctx, func() bool {
-    out, err := k8sClient("rollout", "status", "deployment/"+deploymentName, "-n", sc.namespace, "--watch=false")
+    _, err := k8sClient("rollout", "status", "deployment/"+deploymentName, "-n", sc.namespace, "--watch=false")
     if err != nil {
         logger.V(1).Info("Failed to get rollout status", "deployment", deploymentName, "error", err)
         return false
     }
-    // Successful rollout shows "successfully rolled out"
-    if strings.Contains(out, "successfully rolled out") {
-        logger.V(1).Info("Rollout completed successfully", "deployment", deploymentName)
-        return true
-    }
-    logger.V(1).Info("Rollout not yet complete", "deployment", deploymentName, "status", out)
-    return false
+    logger.V(1).Info("Rollout completed successfully", "deployment", deploymentName)
+    return true
 })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/e2e/steps/steps.go` around lines 1651 - 1663, The waiter currently inspects the human-readable output from k8sClient("rollout", "status", ...) to detect completion; instead use the command's exit status (the returned error) as the stable success signal. In the waitFor closure (the call that invokes k8sClient), replace the strings.Contains(out, "successfully rolled out") check with a check that err == nil to indicate rollout success, keep the existing logger.V(1).Info calls but log the error when err != nil (and include deploymentName and sc.namespace for context), and avoid depending on the CLI output string to determine success.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cmd/catalogd/main.go`:
- Around line 280-294: The code loads file-based kubeconfig into restConfig when
cfg.kubeconfig is set but still unconditionally requires extracting an
in-cluster service account JWT (causing startup to fail outside the cluster);
modify the startup sequence so the service-account-dependent pull-secret setup
is only attempted when running in-cluster (e.g., when ctrl.GetConfigOrDie() was
used or when SA detection succeeds) — gate the logic that reads the SA/JWT and
configures pull secrets behind an "in-cluster" check or successful SA detection,
using the existing restConfig/cfg.kubeconfig/ctrl.GetConfigOrDie and the
service-account extraction routine (the code that currently fails when
extracting the SA) to decide whether to run that setup before calling
ctrl.NewManager.
In `@cmd/operator-controller/main.go`:
- Around line 331-343: The startup currently attempts in-cluster service-account
/ pull-secret wiring before honoring cfg.kubeconfig which can cause off-cluster
runs to fail; make all code that reads the service-account token or wires
pull-secrets (e.g., any calls to functions like loadServiceAccountToken,
readServiceAccountJWT, reconcilePullSecrets, or direct
os.ReadFile("/var/run/secrets/...")) conditional so it only runs when
cfg.kubeconfig is empty (i.e., running in-cluster). Ensure the kubeconfig branch
(cfg.kubeconfig != "") short-circuits any in-cluster-only logic and proceeds to
use clientcmd.BuildConfigFromFlags, and avoid calling ctrl.GetConfigOrDie or
other in-cluster dependent functions before that guard.
In `@docs/api-reference/olmv1-api-reference.md`:
- Around line 27-29: The docs reference an undefined heading "ProgressionProbe"
(anchors to `#progressionprobe`) which breaks intra-doc links; either add a proper
"ProgressionProbe" section documenting the probe API (include description,
fields, methods, examples and an H2/H3 heading that exactly matches
"ProgressionProbe") or remove/replace the anchors that point to
`#progressionprobe`; update any references at the same places that mention
ProgressionProbe so they point to the new heading or to an existing section, and
ensure the anchor text/heading slug matches exactly.
In
`@helm/olmv1/base/operator-controller/crd/experimental/olm.operatorframework.io_clusterextensionrevisions.yaml`:
- Around line 226-484: The spec.progressionProbes array must be made immutable
like spec.phases; add the immutability marker to the source API type (the
ProgressionProbes field on the ClusterExtensionRevision spec struct) using the
same kubebuilder validation tag you used for phases (e.g.
+kubebuilder:validation:Immutable or the project's equivalent), then regenerate
the CRD so the generated YAML for spec.progressionProbes contains
x-kubernetes-immutable: true (matching the existing immutability behavior for
spec.phases).
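For reference, controller-gen has no single `Immutable` marker; CRD-level immutability is normally expressed as a CEL transition rule, e.g. `// +kubebuilder:validation:XValidation:rule="self == oldSelf",message="progressionProbes is immutable"` on the Go field. Under that assumption, the regenerated schema fragment would look roughly like:

```yaml
progressionProbes:
  # CEL transition rule: oldSelf is bound only on UPDATE, so any update
  # that changes this field is rejected; creation is unaffected.
  x-kubernetes-validations:
  - rule: self == oldSelf
    message: progressionProbes is immutable
  type: array
```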
In `@test/e2e/features/revision.feature`:
- Line 140: The scenario title has incorrect subject-verb agreement: update the
Scenario line containing "Phases does not progress when user-provided
progressionProbes do not pass" to "Phases do not progress when user-provided
progressionProbes do not pass" (search for that exact scenario string in the
feature to locate and replace it).
---
⛔ Files ignored due to path filters (20)
- `go.sum` is excluded by `!**/*.sum`
- `openshift/tests-extension/go.sum` is excluded by `!**/*.sum`
- `openshift/tests-extension/vendor/github.com/operator-framework/operator-controller/api/v1/clusterextensionrevision_types.go` is excluded by `!**/vendor/**`
- `openshift/tests-extension/vendor/github.com/operator-framework/operator-controller/api/v1/zz_generated.deepcopy.go` is excluded by `!**/vendor/**`
- `openshift/tests-extension/vendor/google.golang.org/grpc/internal/envconfig/envconfig.go` is excluded by `!**/vendor/**`
- `openshift/tests-extension/vendor/google.golang.org/grpc/internal/transport/client_stream.go` is excluded by `!**/vendor/**`
- `openshift/tests-extension/vendor/google.golang.org/grpc/internal/transport/http2_client.go` is excluded by `!**/vendor/**`
- `openshift/tests-extension/vendor/google.golang.org/grpc/internal/transport/transport.go` is excluded by `!**/vendor/**`
- `openshift/tests-extension/vendor/google.golang.org/grpc/server.go` is excluded by `!**/vendor/**`
- `openshift/tests-extension/vendor/google.golang.org/grpc/stream.go` is excluded by `!**/vendor/**`
- `openshift/tests-extension/vendor/google.golang.org/grpc/version.go` is excluded by `!**/vendor/**`
- `openshift/tests-extension/vendor/modules.txt` is excluded by `!**/vendor/**`
- `vendor/google.golang.org/grpc/internal/envconfig/envconfig.go` is excluded by `!vendor/**`, `!**/vendor/**`
- `vendor/google.golang.org/grpc/internal/transport/client_stream.go` is excluded by `!vendor/**`, `!**/vendor/**`
- `vendor/google.golang.org/grpc/internal/transport/http2_client.go` is excluded by `!vendor/**`, `!**/vendor/**`
- `vendor/google.golang.org/grpc/internal/transport/transport.go` is excluded by `!vendor/**`, `!**/vendor/**`
- `vendor/google.golang.org/grpc/server.go` is excluded by `!vendor/**`, `!**/vendor/**`
- `vendor/google.golang.org/grpc/stream.go` is excluded by `!vendor/**`, `!**/vendor/**`
- `vendor/google.golang.org/grpc/version.go` is excluded by `!vendor/**`, `!**/vendor/**`
- `vendor/modules.txt` is excluded by `!vendor/**`, `!**/vendor/**`
📒 Files selected for processing (27)
- `api/v1/clusterextensionrevision_types.go`
- `api/v1/zz_generated.deepcopy.go`
- `applyconfigurations/api/v1/assertion.go`
- `applyconfigurations/api/v1/clusterextensionrevisionspec.go`
- `applyconfigurations/api/v1/conditionequalprobe.go`
- `applyconfigurations/api/v1/fieldsequalprobe.go`
- `applyconfigurations/api/v1/fieldvalueprobe.go`
- `applyconfigurations/api/v1/objectselector.go`
- `applyconfigurations/api/v1/progressionprobe.go`
- `applyconfigurations/utils.go`
- `cmd/catalogd/main.go`
- `cmd/operator-controller/main.go`
- `commitchecker.yaml`
- `docs/api-reference/olmv1-api-reference.md`
- `go.mod`
- `hack/api-lint-diff/run.sh`
- `helm/olmv1/base/operator-controller/crd/experimental/olm.operatorframework.io_clusterextensionrevisions.yaml`
- `internal/operator-controller/applier/phase.go`
- `internal/operator-controller/applier/phase_test.go`
- `internal/operator-controller/controllers/clusterextensionrevision_controller.go`
- `manifests/experimental-e2e.yaml`
- `manifests/experimental.yaml`
- `openshift/tests-extension/go.mod`
- `test/e2e/features/revision.feature`
- `test/e2e/features/user-managed-fields.feature`
- `test/e2e/steps/steps.go`
- `test/e2e/steps/testdata/pvc-probe-sa-boxcutter-rbac-template.yaml`
```go
// Create manager with kubeconfig support for non-default kubeconfig
var restConfig *rest.Config
if cfg.kubeconfig != "" {
	setupLog.Info("loading kubeconfig from file", "path", cfg.kubeconfig)
	restConfig, err = clientcmd.BuildConfigFromFlags("", cfg.kubeconfig)
	if err != nil {
		setupLog.Error(err, "unable to load kubeconfig")
		return err
	}
} else {
	restConfig = ctrl.GetConfigOrDie()
}

mgr, err := ctrl.NewManager(restConfig, ctrl.Options{
	Scheme: scheme,
```
--kubeconfig mode can still fail outside the cluster due to unconditional SA JWT dependency.
Even when Line 282 loads file-based kubeconfig, Line 266 still requires extracting a service account from in-cluster JWT and returns on failure. This can block startup for local/off-cluster kubeconfig usage. Please gate service-account-dependent pull-secret setup behind successful SA detection (or in-cluster mode), so kubeconfig mode is actually usable.
```go
// Load REST config with kubeconfig support for non-default kubeconfig
var restConfig *rest.Config
if cfg.kubeconfig != "" {
	setupLog.Info("loading kubeconfig from file", "path", cfg.kubeconfig)
	restConfig, err = clientcmd.BuildConfigFromFlags("", cfg.kubeconfig)
	if err != nil {
		setupLog.Error(err, "unable to load kubeconfig")
		return err
	}
} else {
	restConfig = ctrl.GetConfigOrDie()
}
mgr, err := ctrl.NewManager(restConfig, ctrl.Options{
```
Kubeconfig path is undermined by earlier mandatory in-cluster service-account lookup.
Although Line 333 supports explicit kubeconfig, startup can still fail earlier at Line 272 when service-account JWT is unavailable (typical off-cluster run). Consider making SA/pull-secret wiring conditional so kubeconfig-based execution works outside the cluster.
```markdown
_Appears in:_
- [ProgressionProbe](#progressionprobe)
```
Add the missing ProgressionProbe section or drop these anchors.
Line 28 and Line 469 link to #progressionprobe, but this file never defines that heading. The new probe API is therefore only partially documented and the generated reference ships with broken intra-doc navigation.
Also applies to: 468-469
🧰 Tools
🪛 markdownlint-cli2 (0.21.0)
[warning] 28-28: Link fragments should be valid
(MD051, link-fragments)
```yaml
progressionProbes:
  description: |-
    progressionProbes is an optional field which provides the ability to define custom readiness probes
    for objects defined within spec.phases. As documented in that field, most kubernetes-native objects
    within the phases already have some kind of readiness check built-in, but this field allows for checks
    which are tailored to the objects being rolled out - particularly custom resources.

    Probes defined within the progressionProbes list will apply to every phase in the revision. However, the probes will only
    execute against phase objects which are a match for the provided selector type. For instance, a probe using a GroupKind selector
    for ConfigMaps will automatically be considered to have passed for any non-ConfigMap object, but will halt any phase containing
    a ConfigMap if that particular object does not pass the probe check.

    The maximum number of probes is 20.
  items:
    description: ProgressionProbe provides a custom probe definition, consisting
      of an object selection method and assertions.
    properties:
      assertions:
        description: |-
          assertions is a required list of checks which will run against the objects selected by the selector. If
          one or more assertions fail then the phase within which the object lives will not be considered
          'Ready', blocking rollout of all subsequent phases.
        items:
          description: Assertion is a discriminated union which defines the probe
            type and definition used as an assertion.
          properties:
            conditionEqual:
              description: conditionEqual contains the expected condition type
                and status.
              properties:
                status:
                  description: |-
                    status sets the expected condition status.

                    Allowed values are "True" and "False".
                  enum:
                  - "True"
                  - "False"
                  type: string
                type:
                  description: type sets the expected condition type, i.e. "Ready".
                  maxLength: 200
                  minLength: 1
                  type: string
              required:
              - status
              - type
              type: object
            fieldValue:
              description: fieldValue contains the expected field path and value
                found within.
              properties:
                fieldPath:
                  description: |-
                    fieldPath sets the field path for the field to check, i.e. "status.phase". The probe will fail
                    if the path does not exist.
                  maxLength: 200
                  minLength: 1
                  type: string
                  x-kubernetes-validations:
                  - message: must contain a valid field path. valid fields contain
                      upper or lower-case alphanumeric characters separated by
                      the "." character.
                    rule: self.matches('^[a-zA-Z0-9]+(?:\\.[a-zA-Z0-9]+)*$')
                value:
                  description: value sets the expected value found at fieldPath,
                    i.e. "Bound".
                  maxLength: 200
                  minLength: 1
                  type: string
              required:
              - fieldPath
              - value
              type: object
            fieldsEqual:
              description: fieldsEqual contains the two field paths whose values
                are expected to match.
              properties:
                fieldA:
                  description: |-
                    fieldA sets the field path for the first field, i.e. "spec.replicas". The probe will fail
                    if the path does not exist.
                  maxLength: 200
                  minLength: 1
                  type: string
                  x-kubernetes-validations:
                  - message: must contain a valid field path. valid fields contain
                      upper or lower-case alphanumeric characters separated by
                      the "." character.
                    rule: self.matches('^[a-zA-Z0-9]+(?:\\.[a-zA-Z0-9]+)*$')
                fieldB:
                  description: |-
                    fieldB sets the field path for the second field, i.e. "status.readyReplicas". The probe will fail
                    if the path does not exist.
                  maxLength: 200
                  minLength: 1
                  type: string
                  x-kubernetes-validations:
                  - message: must contain a valid field path. valid fields contain
                      upper or lower-case alphanumeric characters separated by
                      the "." character.
                    rule: self.matches('^[a-zA-Z0-9]+(?:\\.[a-zA-Z0-9]+)*$')
              required:
              - fieldA
              - fieldB
              type: object
            type:
              description: |-
                type is a required field which specifies the type of probe to use.

                The allowed probe types are "ConditionEqual", "FieldsEqual", and "FieldValue".

                When set to "ConditionEqual", the probe checks objects that have reached a condition of specified type and status.
                When set to "FieldsEqual", the probe checks that the values found at two provided field paths are matching.
                When set to "FieldValue", the probe checks that the value found at the provided field path matches what was specified.
              enum:
              - ConditionEqual
              - FieldsEqual
              - FieldValue
              type: string
          required:
          - type
          type: object
          x-kubernetes-validations:
          - message: conditionEqual is required when type is ConditionEqual,
              and forbidden otherwise
            rule: 'self.type == ''ConditionEqual'' ? has(self.conditionEqual)
              : !has(self.conditionEqual)'
          - message: fieldsEqual is required when type is FieldsEqual, and forbidden
              otherwise
            rule: 'self.type == ''FieldsEqual'' ? has(self.fieldsEqual) : !has(self.fieldsEqual)'
          - message: fieldValue is required when type is FieldValue, and forbidden
              otherwise
            rule: 'self.type == ''FieldValue'' ? has(self.fieldValue) : !has(self.fieldValue)'
        maxItems: 20
        minItems: 1
        type: array
        x-kubernetes-list-type: atomic
      selector:
        description: |-
          selector is a required field which defines the method by which we select objects to apply the below
          assertions to. Any object which matches the defined selector will have all the associated assertions
          applied against it.

          If no objects within a phase are selected by the provided selector, then all assertions defined here
          are considered to have succeeded.
        properties:
          groupKind:
            description: |-
              groupKind specifies the group and kind of objects to select.

              Required when type is "GroupKind".

              Uses the Kubernetes format specified here:
              https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#GroupKind
            properties:
              group:
                type: string
              kind:
                type: string
            required:
            - group
            - kind
            type: object
          label:
            description: |-
              label is the label selector definition.

              Required when type is "Label".

              A probe using a Label selector will be executed against every object matching the labels or expressions; you must use care
              when using this type of selector. For example, if multiple Kind objects are selected via labels then the probe is
              likely to fail because the values of different Kind objects rarely share the same schema.

              The LabelSelector field uses the following Kubernetes format:
              https://pkg.go.dev/k8s.io/apimachinery/pkg/apis/meta/v1#LabelSelector
              Requires exactly one of matchLabels or matchExpressions.
            properties:
              matchExpressions:
                description: matchExpressions is a list of label selector requirements.
                  The requirements are ANDed.
                items:
                  description: |-
                    A label selector requirement is a selector that contains values, a key, and an operator that
                    relates the key and values.
                  properties:
                    key:
                      description: key is the label key that the selector applies
                        to.
                      type: string
                    operator:
                      description: |-
                        operator represents a key's relationship to a set of values.
                        Valid operators are In, NotIn, Exists and DoesNotExist.
                      type: string
                    values:
                      description: |-
                        values is an array of string values. If the operator is In or NotIn,
                        the values array must be non-empty. If the operator is Exists or DoesNotExist,
                        the values array must be empty. This array is replaced during a strategic
                        merge patch.
                      items:
                        type: string
                      type: array
                      x-kubernetes-list-type: atomic
                  required:
                  - key
                  - operator
                  type: object
                type: array
                x-kubernetes-list-type: atomic
              matchLabels:
                additionalProperties:
                  type: string
                description: |-
                  matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
                  map is equivalent to an element of matchExpressions, whose key field is "key", the
                  operator is "In", and the values array contains only "value". The requirements are ANDed.
                type: object
            type: object
            x-kubernetes-map-type: atomic
            x-kubernetes-validations:
            - message: exactly one of matchLabels or matchExpressions must be
                set
              rule: (has(self.matchExpressions) && !has(self.matchLabels)) ||
                (!has(self.matchExpressions) && has(self.matchLabels))
          type:
            description: |-
              type is a required field which specifies the type of selector to use.

              The allowed selector types are "GroupKind" and "Label".

              When set to "GroupKind", all objects which match the specified group and kind will be selected.
              When set to "Label", all objects which match the specified labels and/or expressions will be selected.
            enum:
            - GroupKind
            - Label
            type: string
        required:
        - type
        type: object
        x-kubernetes-validations:
        - message: groupKind is required when type is GroupKind, and forbidden
            otherwise
          rule: 'self.type == ''GroupKind'' ? has(self.groupKind) : !has(self.groupKind)'
        - message: label is required when type is Label, and forbidden otherwise
          rule: 'self.type == ''Label'' ? has(self.label) : !has(self.label)'
    required:
    - assertions
    - selector
    type: object
  maxItems: 20
  minItems: 1
  type: array
  x-kubernetes-list-type: atomic
```
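As a concrete illustration of this schema, a hypothetical `spec.progressionProbes` fragment exercising both selector types could look like the following (object kinds are real, but the label key and values are placeholders):

```yaml
progressionProbes:
- selector:
    type: GroupKind
    groupKind:
      group: ""                    # core API group
      kind: PersistentVolumeClaim
  assertions:
  - type: FieldValue
    fieldValue:
      fieldPath: status.phase
      value: Bound
- selector:
    type: Label
    label:
      matchLabels:
        app.example.com/probe: "true"   # placeholder label
  assertions:
  - type: ConditionEqual
    conditionEqual:
      type: Ready
      status: "True"
```

The first probe halts any phase containing a PVC that is not yet `Bound`; the second applies only to objects carrying the label, per the Label-selector caveat in the description above.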
Make spec.progressionProbes immutable.
Line 226 introduces rollout-gating state on ClusterExtensionRevision, but unlike spec.phases this field has no update validation. Because ClusterExtensionRevision is documented as an immutable snapshot, allowing in-place edits here lets users retroactively change rollout behavior for an existing revision instead of forcing a new one. Please add the same immutability guard at the source type so both generated CRDs stay consistent.
```gherkin
    And resource "persistentvolumeclaim/test-pvc" is installed
    And resource "configmap/test-configmap" is installed

  Scenario: Phases does not progress when user-provided progressionProbes do not pass
```
Minor grammatical issue in scenario name.
"Phases does not progress" should be "Phases do not progress" for correct subject-verb agreement.
Suggested fix

```diff
- Scenario: Phases does not progress when user-provided progressionProbes do not pass
+ Scenario: Phases do not progress when user-provided progressionProbes do not pass
```
Hi @Xia-Zhao-rh, could you help verify it? Thanks!
/retitle OPRUN-4509: Synchronize From Upstream Repositories
@openshift-bot: This pull request references OPRUN-4509, which is a valid Jira issue. Warning: the referenced Jira issue has an invalid target version for the branch this PR targets: the story was expected to target the "4.22.0" version, but no target version was set.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
/assign @Xia-Zhao-rh
@openshift-bot: all tests passed! Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here; if you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
/verified by @Xia-Zhao-rh
@Xia-Zhao-rh: This PR has been marked as verified.
The downstream repository has been updated with the following upstream commits.

The `vendor/` directory has been updated and the following commits were carried: @catalogd-update. This pull request is expected to merge without any human intervention. If tests are failing here, changes must land upstream to fix any issues so that future downstreaming efforts succeed.
/cc @openshift/openshift-team-operator-framework