
Add OLM upgrade test to file-integrity-operator #30613

Merged

Conversation


@mrogers950 (Contributor, author) commented Jul 19, 2022:

Adds bundle-install and bundle-upgrade AWS tests

@openshift-ci bot requested review from jhrozek and xiaojiey (July 19, 2022 18:42).
@openshift-ci bot added the approved label: indicates a PR has been approved by an approver from all required OWNERS files (Jul 19, 2022).

Review comment on the following hunk:
test:
- as: e2e-upgrade
cli: latest
commands: make e2e

@mrogers950 (author):

expecting that we'll need adjustments in make e2e to work against a catalog-installed operator.
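
(For orientation: a complete bundle-install test in ci-operator config takes roughly the following shape. This is a sketch assuming the optional-operators-ci-aws workflow, with illustrative from/resources values; it is not the exact hunk from this PR.)

tests:
- as: e2e-bundle-aws
  steps:
    cluster_profile: aws
    env:
      OO_INSTALL_NAMESPACE: '!create'    # sentinel: the step creates a fresh namespace
      OO_PACKAGE: file-integrity-operator
      OO_TARGET_NAMESPACES: '!install'   # sentinel: the OperatorGroup targets the install namespace
    test:
    - as: e2e
      cli: latest
      commands: make e2e
      from: src                          # assumption: tests run from the source image
      resources:
        requests:
          cpu: 100m
          memory: 200Mi
    workflow: optional-operators-ci-aws

(The '!create' and '!install' sentinels let the step generate and target its own namespace, so the test does not depend on a pre-existing one.)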

A reviewer (Contributor) replied:

I started kicking the tires here: openshift/file-integrity-operator#269

@mrogers950 force-pushed the fio_upgrade_test branch 3 times, most recently from ab2eae3 to b758b2d (July 19, 2022 20:05).
@mrogers950:
/retest


OO_CHANNEL: release-v0.1
OO_INSTALL_NAMESPACE: '!create'
OO_PACKAGE: file-integrity-operator
OO_TARGET_NAMESPACES: '!install'

A reviewer (Contributor) commented on the hunk above:

I'm wondering if we can specify the version without hard-coding it. The docs I've read use version strings.
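
(One way to drop the hard-coded version would be a version-agnostic channel. A sketch, assuming a hypothetical "stable" channel existed for this package:)

env:
  OO_CHANNEL: stable                 # assumption: a version-agnostic channel, not confirmed for this package
  OO_INSTALL_NAMESPACE: '!create'
  OO_PACKAGE: file-integrity-operator
  OO_TARGET_NAMESPACES: '!install'

(The trade-off: release-v0.1 pins rehearsals to a known stream, while a floating channel silently follows whatever the maintainers publish next.)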

@mrogers950:
Updated to hopefully fix the bundle build part.

@mrogers950:
/test all

@mrogers950:
Can't currently use 4.11 index for the bundle base: https://issues.redhat.com/browse/DPTP-2969

@mrogers950:
I split it into bundle install and bundle upgrade tests.

On the upgrade tests, I wasn't able to specify OO_INITIAL_CHANNEL and OO_INITIAL_CSV:

    resolve MultiStageTestConfiguration: [test/e2e-bundle-aws-upgrade: workflow/optional-operators-ci-aws-upgrade: parameter "OO_INITIAL_CHANNEL" is overridden in [test/e2e-bundle-aws-upgrade] but not declared in any step, test/e2e-bundle-aws-upgrade: workflow/optional-operators-ci-aws-upgrade: parameter "OO_INITIAL_CSV" is overridden in [test/e2e-bundle-aws-upgrade] but not declared in any step]
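
(For reference, this is the shape the upgrade test was aiming for. A sketch only: it assumes the optional-operators-ci-aws-upgrade workflow declared the initial-version parameters, and the CSV name is illustrative, not taken from this PR.)

- as: e2e-bundle-aws-upgrade
  steps:
    cluster_profile: aws
    env:
      OO_CHANNEL: release-v0.1
      OO_INITIAL_CHANNEL: release-v0.1                 # rejected at the time: not declared in any step
      OO_INITIAL_CSV: file-integrity-operator.v0.1.13  # illustrative CSV name
      OO_INSTALL_NAMESPACE: '!create'
      OO_PACKAGE: file-integrity-operator
      OO_TARGET_NAMESPACES: '!install'
    test:
    - as: e2e-upgrade
      cli: latest
      commands: make e2e
      from: src
      resources:
        requests:
          cpu: 100m
          memory: 200Mi
    workflow: optional-operators-ci-aws-upgrade

(Until a step in the workflow declares those parameters, ci-operator's resolver rejects the override, which is why the split tests stick to the parameters the steps already declare.)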

@mrogers950:
/retest

@openshift-ci bot commented Jul 29, 2022:

@mrogers950: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

- ci/rehearse/openshift/file-integrity-operator/master/ci-index-upgrade-bundle (commit 7468b240a1aa5d51bd9819a04950c3071fc3dfb9, required: unknown, rerun: /test pj-rehearse)
- ci/rehearse/openshift/file-integrity-operator/master/e2e-aws-upgrade (commit f8e6f0969921046f5e2966e0efc245b4cb9964b5, required: unknown, rerun: /test pj-rehearse)


@mrogers950 commented Jul 29, 2022:

Saw this failure on ci/rehearse/openshift/file-integrity-operator/master/e2e-bundle-aws. Hadn't seen it before; it may be a flake.

=== RUN   TestFileIntegrityPruneBackup
I0729 16:42:56.838815   12206 request.go:665] Waited for 1.171002199s due to client-side throttling, not priority and fairness, request: GET:https://api.ci-op-dfd7f8b0-6d54d.origin-ci-int-aws.dev.rhcloud.com:6443/apis/monitoring.coreos.com/v1beta1?timeout=32s
I0729 16:43:06.838963   12206 request.go:665] Waited for 11.171114877s due to client-side throttling, not priority and fairness, request: GET:https://api.ci-op-dfd7f8b0-6d54d.origin-ci-int-aws.dev.rhcloud.com:6443/apis/operator.openshift.io/v1?timeout=32s
I0729 16:43:17.038891   12206 request.go:665] Waited for 8.196964036s due to client-side throttling, not priority and fairness, request: GET:https://api.ci-op-dfd7f8b0-6d54d.origin-ci-int-aws.dev.rhcloud.com:6443/apis/node.k8s.io/v1beta1?timeout=32s
    helpers.go:270: Using existing cluster resources in namespace file-integrity
    wait_util.go:59: Deployment available (1/1)
    client.go:47: resource type  with namespace/name (file-integrity/e2e-test-prune-backup) created
    helpers.go:372: Created FileIntegrity: &{TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:e2e-test-prune-backup GenerateName: Namespace:file-integrity SelfLink: UID:fd78b450-f3be-4218-b2dd-351796d1278d ResourceVersion:34428 Generation:1 CreationTimestamp:2022-07-29 16:43:29 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[] OwnerReferences:[] Finalizers:[] ClusterName: ManagedFields:[{Manager:e2e.test Operation:Update APIVersion:fileintegrity.openshift.io/v1alpha1 Time:2022-07-29 16:43:29 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:spec":{".":{},"f:config":{".":{},"f:gracePeriod":{},"f:maxBackups":{}},"f:debug":{},"f:nodeSelector":{".":{},"f:node-role.kubernetes.io/worker":{}},"f:tolerations":{}}} Subresource:}]} Spec:{NodeSelector:map[node-role.kubernetes.io/worker:] Config:{Name: Namespace: Key: GracePeriod:20 MaxBackups:5} Debug:true Tolerations:[{Key:node-role.kubernetes.io/master Operator:Exists Value: Effect:NoSchedule TolerationSeconds:<nil>}]} Status:{Phase:}}
    helpers.go:856: Got (Active) result #1 out of 0 needed.
    helpers.go:867: FileIntegrity ready (Active)
    helpers.go:408: FileIntegrity deployed successfully
    helpers.go:856: Got (Active) result #1 out of 0 needed.
    helpers.go:867: FileIntegrity ready (Active)
    e2e_test.go:225: Asserting that the FileIntegrity check is in a SUCCESS state after deploying it
    helpers.go:1003: ip-10-0-128-4.ec2.internal Succeeded
    helpers.go:1003: ip-10-0-153-154.ec2.internal Succeeded
    helpers.go:1003: ip-10-0-239-105.ec2.internal Succeeded
    e2e_test.go:228: Asserting that we have OK node condition events
    e2e_test.go:231: Setting MaxBackups to 1
    helpers.go:856: Got (Active) result #1 out of 5 needed.
    helpers.go:856: Got (Active) result #2 out of 5 needed.
    helpers.go:856: Got (Active) result #3 out of 5 needed.
    helpers.go:856: Got (Active) result #4 out of 5 needed.
    helpers.go:856: Got (Active) result #5 out of 5 needed.
    helpers.go:856: Got (Active) result #6 out of 5 needed.
    helpers.go:867: FileIntegrity ready (Active)
    e2e_test.go:251: Asserting that the FileIntegrity check is in a SUCCESS state after updating config
    helpers.go:1003: ip-10-0-128-4.ec2.internal Succeeded
    helpers.go:1003: ip-10-0-153-154.ec2.internal Succeeded
    helpers.go:1003: ip-10-0-239-105.ec2.internal Succeeded
    helpers.go:856: Got (Active) result #1 out of 5 needed.
    helpers.go:856: Got (Active) result #2 out of 5 needed.
    helpers.go:856: Got (Active) result #3 out of 5 needed.
    helpers.go:856: Got (Active) result #4 out of 5 needed.
    helpers.go:856: Got (Active) result #5 out of 5 needed.
    helpers.go:856: Got (Active) result #6 out of 5 needed.
    helpers.go:867: FileIntegrity ready (Active)
    e2e_test.go:262: Asserting that the FileIntegrity check is in a SUCCESS state after re-initializing the database
    helpers.go:1003: ip-10-0-128-4.ec2.internal Succeeded
    helpers.go:1003: ip-10-0-153-154.ec2.internal Succeeded
    helpers.go:1003: ip-10-0-239-105.ec2.internal Succeeded
    helpers.go:856: Got (Active) result #1 out of 5 needed.
    helpers.go:856: Got (Active) result #2 out of 5 needed.
    helpers.go:856: Got (Active) result #3 out of 5 needed.
    helpers.go:856: Got (Active) result #4 out of 5 needed.
    helpers.go:856: Got (Active) result #5 out of 5 needed.
    helpers.go:856: Got (Active) result #6 out of 5 needed.
    helpers.go:867: FileIntegrity ready (Active)
    e2e_test.go:273: Asserting that the FileIntegrity check is in a SUCCESS state after re-initializing the database
    helpers.go:1003: ip-10-0-128-4.ec2.internal Succeeded
    helpers.go:1003: ip-10-0-153-154.ec2.internal Succeeded
    helpers.go:1003: ip-10-0-239-105.ec2.internal Succeeded
    e2e_test.go:285: Verifying that there's only a single DB and log backup
    client.go:47: resource type  with namespace/name (file-integrity/test-backup-pod) created
    helpers.go:1244: container in pod test-backup-pod not finished yet
    helpers.go:1228: {test-backup-pod {nil nil &ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2022-07-29 17:01:34 +0000 UTC,FinishedAt:2022-07-29 17:01:34 +0000 UTC,ContainerID:cri-o://843fdd52de6b4a2e5e5357bcad3e9c7ad950bab83daa6a00ac96b438b7c8eb83,}} {nil nil nil} false 0 quay.io/prometheus/busybox:latest quay.io/prometheus/busybox@sha256:60ded79a99eb70aa36d57c598707c961ffa4f9f7b63237823a780eaf6d437a78 cri-o://843fdd52de6b4a2e5e5357bcad3e9c7ad950bab83daa6a00ac96b438b7c8eb83 0xc000426905}
    e2e_test.go:288: the expected exit code did not match 0
    helpers.go:1590: wrote logs for file-integrity-operator-bd45865bc-kxtb8/self
time="2022-07-29T17:01:47Z" level=info msg="Skipping cleanup function since --skip-cleanup-error is true"
--- FAIL: TestFileIntegrityPruneBackup (1131.61s)

https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_release/30613/rehearse-30613-pull-ci-openshift-file-integrity-operator-master-e2e-bundle-aws/1553036505608359936

@mrogers950:
/retest

@jhrozek left a review:

/lgtm
/approve

@openshift-ci bot added the lgtm label: indicates that a PR is ready to be merged (Aug 1, 2022).
@openshift-ci bot commented Aug 1, 2022:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: jhrozek, mrogers950

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-merge-robot merged commit cc460a8 into openshift:master (Aug 1, 2022).
@openshift-ci bot commented Aug 1, 2022:

@mrogers950: Updated the following 2 configmaps:

  • job-config-master-presubmits configmap in namespace ci at cluster app.ci using the following files:
    • key openshift-file-integrity-operator-master-presubmits.yaml using file ci-operator/jobs/openshift/file-integrity-operator/openshift-file-integrity-operator-master-presubmits.yaml
  • ci-operator-master-configs configmap in namespace ci at cluster app.ci using the following files:
    • key openshift-file-integrity-operator-master.yaml using file ci-operator/config/openshift/file-integrity-operator/openshift-file-integrity-operator-master.yaml

In response to this:

Adds bundle-install and bundle-upgrade AWS tests

