Bug 2004596: bump RHCOS 4.10 boot image metadata #5280

Merged
1 commit merged into openshift:master on Oct 20, 2021

Conversation

miabbott
Member

@miabbott miabbott commented Oct 8, 2021

This updates the RHCOS 4.10 boot image metadata in the installer. This
change includes fixes in the boot media for the following BZs:

2002215 - Multipath day1 not working on s390x
2004391 - RHCOS-4.9 failed to boot in FIPS mode on s390x
2004449 - Boot option recovery menu prevents image boot
2006690 - OS boot failure "x64 Exception Type 06 - Invalid Opcode Exception"
2007085 - Intermittent failure mounting /run/media/iso when booting live ISO from USB stick
2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods
2011960 - [tracker] Storage operator is not available after reboot cluster instances

Changes generated with:

```
./hack/update-rhcos-bootimage.py https://rhcos-redirector.apps.art.xq1c.p1.openshiftapps.com/art/storage/releases/rhcos-4.10-aarch64/410.84.202110141117-0/aarch64/meta.json aarch64
./hack/update-rhcos-bootimage.py https://rhcos-redirector.apps.art.xq1c.p1.openshiftapps.com/art/storage/releases/rhcos-4.10-ppc64le/410.84.202110141003-0/ppc64le/meta.json ppc64le
./hack/update-rhcos-bootimage.py https://rhcos-redirector.apps.art.xq1c.p1.openshiftapps.com/art/storage/releases/rhcos-4.10-s390x/410.84.202110141003-0/s390x/meta.json s390x
./hack/update-rhcos-bootimage.py https://rhcos-redirector.apps.art.xq1c.p1.openshiftapps.com/art/storage/releases/rhcos-4.10/410.84.202110140201-0/x86_64/meta.json amd64
plume cosa2stream --target data/data/rhcos-stream.json --distro rhcos --url https://rhcos-redirector.apps.art.xq1c.p1.openshiftapps.com/art/storage/releases aarch64=410.84.202110141117-0 ppc64le=410.84.202110141003-0 s390x=410.84.202110141003-0 x86_64=410.84.202110140201-0
```
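
As a quick sanity check (not part of the PR), the regenerated stream metadata can be spot-checked for the new build IDs. This sketch assumes the standard CoreOS stream-metadata layout (`.architectures.<arch>.artifacts.<platform>.release`) and that a `qemu` platform entry exists for every architecture:

```
# Hypothetical check, not from the PR: confirm each architecture in the
# regenerated stream metadata reports the expected RHCOS build ID.
# Assumes the standard CoreOS stream-metadata layout.
for arch in aarch64 ppc64le s390x x86_64; do
  echo -n "${arch}: "
  jq -r ".architectures.\"${arch}\".artifacts.qemu.release" data/data/rhcos-stream.json
done
```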

Verification Steps:

  1. Install a new 4.10 cluster
  2. oc debug node/<node name> -- chroot /host rpm-ostree status
  3. Verify that the deployment version matches the version from this PR for the
     architecture you are testing on (i.e. aarch64 should have version
     410.84.202110141117-0). A rough sketch of this check across all nodes follows
     below.
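
A rough sketch of steps 2-3 (not from the PR; assumes cluster-admin `oc` access from a machine with a shell) that checks the reported deployment version on every node at once:

```
# Hypothetical helper, not from the PR: print the rpm-ostree deployment
# Version lines reported on each node, then compare the booted deployment
# against the build ID for that node's architecture
# (e.g. 410.84.202110141117-0 on aarch64).
for node in $(oc get nodes -o name); do
  echo "== ${node} =="
  oc debug "${node}" -- chroot /host rpm-ostree status 2>/dev/null | grep 'Version:'
done
```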

@openshift-ci openshift-ci bot added bugzilla/severity-medium Referenced Bugzilla bug's severity is medium for the branch this PR is targeting. bugzilla/valid-bug Indicates that a referenced Bugzilla bug is valid for the branch this PR is targeting. labels Oct 8, 2021
@openshift-ci
Contributor

openshift-ci bot commented Oct 8, 2021

@miabbott: This pull request references Bugzilla bug 2004596, which is valid. The bug has been moved to the POST state. The bug has been updated to refer to the pull request using the external bug tracker.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target release (4.10.0) matches configured target release for branch (4.10.0)
  • bug is in the state ASSIGNED, which is one of the valid states (NEW, ASSIGNED, ON_DEV, POST, POST)

Requesting review from QA contact:
/cc @mike-nguyen

In response to this:

Bug 2004596: bump RHCOS 4.10 boot image metadata

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@miabbott
Member Author

miabbott commented Oct 8, 2021

/test e2e-azure
/test e2e-gcp
/test e2e-vsphere
/test e2e-metal-ipi-ovn-dualstack
/test e2e-metal-ipi-ovn-ipv6
/test e2e-metal-ipi-virtualmedia

@miabbott miabbott marked this pull request as draft October 11, 2021 14:11
@openshift-ci openshift-ci bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Oct 11, 2021
@openshift-ci
Contributor

openshift-ci bot commented Oct 14, 2021

@miabbott: This pull request references Bugzilla bug 2004596, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target release (4.10.0) matches configured target release for branch (4.10.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, ON_DEV, POST, POST)

Requesting review from QA contact:
/cc @mike-nguyen

In response to this:

Bug 2004596: bump RHCOS 4.10 boot image metadata

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@miabbott miabbott marked this pull request as ready for review October 14, 2021 15:00
@openshift-ci openshift-ci bot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Oct 14, 2021
@miabbott
Member Author

/retest

Member

@mike-nguyen mike-nguyen left a comment


All BZs dependent on the boot image bump have been pre-verified. This is good to go when CI passes

@miabbott
Member Author

/assign @patrickdillon

Required CI tests are passing...will try to triage any of the other failures

@miabbott
Member Author

e2e-vsphere seems to be having infra issues related to VMC:

Error: POST https://vcenter.sddc-44-236-21-251.vmwarevmc.com/rest/com/vmware/cis/session: 503 Service Unavailable

@miabbott
Member Author

e2e-openstack-kuryr is hitting quota issues:

level=fatal msg=failed to fetch Cluster: failed to fetch dependency of "Cluster": failed to generate asset "Platform Quota Check": error(MissingQuota): RAM is not available because the required number of resources (98304) is more than remaining quota of 95312

@miabbott
Member Author

e2e-ovirt only uses 2 worker nodes? Regardless, the 3 control plane/2 worker nodes booted and got the kubelet running successfully. Failures look to be related to SCCs...

```
Error creating: pods "ovirt-csi-driver-controller-5d88d46574-" is forbidden: unable to validate against any security context constraint: [provider "anyuid": Forbidden: not usable by user or serviceaccount, provider restricted: .spec.securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used, spec.initContainers[0].securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used, spec.initContainers[0].securityContext.containers[0].hostPort: Invalid value: 10301: Host ports are not allowed to be used, spec.initContainers[0].securityContext.containers[2].hostPort: Invalid value: 9202: Host ports are not allowed to be used, spec.initContainers[0].securityContext.containers[4].hostPort: Invalid value: 9203: Host ports are not allowed to be used, spec.containers[0].securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used, spec.containers[0].securityContext.containers[0].hostPort: Invalid value: 10301: Host ports are not allowed to be used, spec.containers[0].securityContext.containers[2].hostPort: Invalid value: 9202: Host ports are not allowed to be used, spec.containers[0].securityContext.containers[4].hostPort: Invalid value: 9203: Host ports are not allowed to be used, spec.containers[1].securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used, spec.containers[1].securityContext.containers[0].hostPort: Invalid value: 10301: Host ports are not allowed to be used, spec.containers[1].securityContext.containers[2].hostPort: Invalid value: 9202: Host ports are not allowed to be used, spec.containers[1].securityContext.containers[4].hostPort: Invalid value: 9203: Host ports are not allowed to be used, spec.containers[2].securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used, spec.containers[2].securityContext.containers[0].hostPort: Invalid value: 10301: Host ports are not allowed to be used, spec.containers[2].securityContext.containers[2].hostPort: Invalid value: 9202: Host ports are not allowed to be used, spec.containers[2].securityContext.containers[4].hostPort: Invalid value: 9203: Host ports are not allowed to be used, spec.containers[3].securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used, spec.containers[3].securityContext.containers[0].hostPort: Invalid value: 10301: Host ports are not allowed to be used, spec.containers[3].securityContext.containers[2].hostPort: Invalid value: 9202: Host ports are not allowed to be used, spec.containers[3].securityContext.containers[4].hostPort: Invalid value: 9203: Host ports are not allowed to be used, spec.containers[4].securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used, spec.containers[4].securityContext.containers[0].hostPort: Invalid value: 10301: Host ports are not allowed to be used, spec.containers[4].securityContext.containers[2].hostPort: Invalid value: 9202: Host ports are not allowed to be used, spec.containers[4].securityContext.containers[4].hostPort: Invalid value: 9203: Host ports are not allowed to be used, spec.containers[5].securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used, spec.containers[5].securityContext.containers[0].hostPort: Invalid value: 10301: Host ports are not allowed to be used, spec.containers[5].securityContext.containers[2].hostPort: Invalid value: 9202: Host ports are not allowed to be used, spec.containers[5].securityContext.containers[4].hostPort: 
Invalid value: 9203: Host ports are not allowed to be used, spec.containers[6].securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used, spec.containers[6].securityContext.containers[0].hostPort: Invalid value: 10301: Host ports are not allowed to be used, spec.containers[6].securityContext.containers[2].hostPort: Invalid value: 9202: Host ports are not allowed to be used, spec.containers[6].securityContext.containers[4].hostPort: Invalid value: 9203: Host ports are not allowed to be used, provider "nonroot": Forbidden: not usable by user or serviceaccount, provider "hostmount-anyuid": Forbidden: not usable by user or serviceaccount, provider "machine-api-termination-handler": Forbidden: not usable by user or serviceaccount, provider "hostnetwork": Forbidden: not usable by user or serviceaccount, provider "hostaccess": Forbidden: not usable by user or serviceaccount, provider "privileged": Forbidden: not usable by user or serviceaccount] for ReplicaSet.apps/v1/ovirt-csi-driver-controller-5d88d46574 -n openshift-cluster-csi-drivers happened 11 times
Error creating: pods "ovirt-csi-driver-node-" is forbidden: unable to validate against any security context constraint: [provider "anyuid": Forbidden: not usable by user or serviceaccount, provider restricted: .spec.securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used, spec.volumes[0]: Invalid value: "hostPath": hostPath volumes are not allowed to be used, spec.volumes[1]: Invalid value: "hostPath": hostPath volumes are not allowed to be used, spec.volumes[2]: Invalid value: "hostPath": hostPath volumes are not allowed to be used, spec.volumes[3]: Invalid value: "hostPath": hostPath volumes are not allowed to be used, spec.volumes[4]: Invalid value: "hostPath": hostPath volumes are not allowed to be used, spec.initContainers[0].securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used, spec.initContainers[0].securityContext.containers[0].hostPort: Invalid value: 10300: Host ports are not allowed to be used, spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed, spec.containers[0].securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used, spec.containers[0].securityContext.containers[0].hostPort: Invalid value: 10300: Host ports are not allowed to be used, spec.containers[1].securityContext.privileged: Invalid value: true: Privileged containers are not allowed, spec.containers[1].securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used, spec.containers[1].securityContext.containers[0].hostPort: Invalid value: 10300: Host ports are not allowed to be used, spec.containers[2].securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used, spec.containers[2].securityContext.containers[0].hostPort: Invalid value: 10300: Host ports are not allowed to be used, provider "nonroot": Forbidden: not usable by user or serviceaccount, provider "hostmount-anyuid": Forbidden: not usable by user or serviceaccount, provider "machine-api-termination-handler": Forbidden: not usable by user or serviceaccount, provider "hostnetwork": Forbidden: not usable by user or serviceaccount, provider "hostaccess": Forbidden: not usable by user or serviceaccount, provider "privileged": Forbidden: not usable by user or serviceaccount] for DaemonSet.apps/v1/ovirt-csi-driver-node -n openshift-cluster-csi-drivers happened 11 times
Error creating: pods "console-operator-59dffc8b6f-" is forbidden: unable to validate against any security context constraint: [provider "node-exporter": Forbidden: not usable by user or serviceaccount, provider "privileged": Forbidden: not usable by user or serviceaccount] for ReplicaSet.apps/v1/console-operator-59dffc8b6f -n openshift-console-operator happened 13 times
Error creating: pods "console-operator-59dffc8b6f-" is forbidden: unable to validate against any security context constraint: [provider "node-exporter": Forbidden: not usable by user or serviceaccount, provider "privileged": Forbidden: not usable by user or serviceaccount] for ReplicaSet.apps/v1/console-operator-59dffc8b6f -n openshift-console-operator happened 15 times


failed: (700ms) 2021-10-15T13:41:55 "[sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel]"
```

@miabbott
Member Author

e2e-aws-workers-rhel7 shows that there were 3 control/3 worker nodes running RHCOS 4.10, but adding the RHEL 7 workers failed because the playbook couldn't install openshift-clients:

"msg": "No package matching 'openshift-clients-4.10*' found available, installed or updated",

@miabbott
Member Author

e2e-aws-workers-rhel8 shows that there were 3 control/3 worker nodes running RHCOS 4.10, but adding the RHEL 8 workers failed because the playbook couldn't install some packages:

    "failures": [
        "No package openshift-clients-4.10* available.", 
        "No package openshift-hyperkube-4.10* available."
    ], 

@miabbott
Member Author

e2e-openstack looks like 3 control plane Machines were provisioned, but only 2 control plane Nodes showed up. Not sure what happened there, but the 2 control/3 worker nodes booted and got the kubelet running successfully.

@miabbott
Member Author

e2e-crc looks like the cluster came up, but when CRC was started it hit what appear to be local DNS issues:

level=info msg="Resizing /dev/vda4 filesystem"
Failed internal DNS query: ;; connection timed out; no servers could be reached
: Temporary error: ssh command error:
command : host -R 3 foo.apps-crc.testing
err     : Process exited with status 1
 (x2)

@miabbott
Member Author

/retest

2 similar comments
@bgilbert
Contributor

/retest

@bgilbert
Contributor

/retest

@miabbott
Member Author

/test e2e-vpshere

@openshift-ci
Contributor

openshift-ci bot commented Oct 18, 2021

@miabbott: The specified target(s) for /test were not found.
The following commands are available to trigger required jobs:

  • /test e2e-aws
  • /test e2e-aws-upgrade
  • /test e2e-gcp-upgrade
  • /test e2e-metal-ipi-ovn-ipv6-required
  • /test gofmt
  • /test golint
  • /test govet
  • /test images
  • /test okd-images
  • /test okd-unit
  • /test okd-verify-codegen
  • /test openstack-manifests
  • /test shellcheck
  • /test tf-lint
  • /test unit
  • /test verify-codegen
  • /test verify-vendor
  • /test yaml-lint

The following commands are available to trigger optional jobs:

  • /test e2e-aws-disruptive
  • /test e2e-aws-fips
  • /test e2e-aws-proxy
  • /test e2e-aws-rhel8
  • /test e2e-aws-shared-vpc
  • /test e2e-aws-single-node
  • /test e2e-aws-upi
  • /test e2e-aws-workers-rhel7
  • /test e2e-aws-workers-rhel8
  • /test e2e-azure
  • /test e2e-azure-resourcegroup
  • /test e2e-azure-shared-vpc
  • /test e2e-azure-upi
  • /test e2e-azurestack
  • /test e2e-azurestack-upi
  • /test e2e-crc
  • /test e2e-gcp
  • /test e2e-gcp-shared-vpc
  • /test e2e-gcp-upi
  • /test e2e-gcp-upi-xpn
  • /test e2e-kubevirt
  • /test e2e-libvirt
  • /test e2e-metal
  • /test e2e-metal-assisted
  • /test e2e-metal-ipi
  • /test e2e-metal-ipi-ovn-dualstack
  • /test e2e-metal-ipi-ovn-ipv6
  • /test e2e-metal-ipi-virtualmedia
  • /test e2e-metal-single-node-live-iso
  • /test e2e-openstack
  • /test e2e-openstack-kuryr
  • /test e2e-openstack-parallel
  • /test e2e-openstack-proxy
  • /test e2e-openstack-upi
  • /test e2e-ovirt
  • /test e2e-vsphere
  • /test e2e-vsphere-upi
  • /test okd-e2e-aws
  • /test okd-e2e-aws-upgrade
  • /test okd-e2e-gcp
  • /test okd-e2e-gcp-upgrade
  • /test okd-e2e-vsphere
  • /test tf-fmt

Use /test all to run the following jobs that were automatically triggered:

  • pull-ci-openshift-installer-master-e2e-aws
  • pull-ci-openshift-installer-master-e2e-aws-fips
  • pull-ci-openshift-installer-master-e2e-aws-single-node
  • pull-ci-openshift-installer-master-e2e-aws-upgrade
  • pull-ci-openshift-installer-master-e2e-aws-workers-rhel7
  • pull-ci-openshift-installer-master-e2e-aws-workers-rhel8
  • pull-ci-openshift-installer-master-e2e-crc
  • pull-ci-openshift-installer-master-e2e-libvirt
  • pull-ci-openshift-installer-master-e2e-metal-ipi-ovn-ipv6
  • pull-ci-openshift-installer-master-e2e-metal-single-node-live-iso
  • pull-ci-openshift-installer-master-e2e-openstack
  • pull-ci-openshift-installer-master-e2e-openstack-kuryr
  • pull-ci-openshift-installer-master-e2e-ovirt
  • pull-ci-openshift-installer-master-gofmt
  • pull-ci-openshift-installer-master-golint
  • pull-ci-openshift-installer-master-govet
  • pull-ci-openshift-installer-master-images
  • pull-ci-openshift-installer-master-okd-images
  • pull-ci-openshift-installer-master-okd-unit
  • pull-ci-openshift-installer-master-okd-verify-codegen
  • pull-ci-openshift-installer-master-openstack-manifests
  • pull-ci-openshift-installer-master-shellcheck
  • pull-ci-openshift-installer-master-tf-fmt
  • pull-ci-openshift-installer-master-tf-lint
  • pull-ci-openshift-installer-master-unit
  • pull-ci-openshift-installer-master-verify-codegen
  • pull-ci-openshift-installer-master-verify-vendor
  • pull-ci-openshift-installer-master-yaml-lint

In response to this:

/test e2e-vpshere

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@miabbott
Member Author

/test e2e-vsphere

@rphillips
Contributor

/lgtm

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Oct 18, 2021
@e-tienne
Contributor

/test e2e-azure
/test e2e-gcp

@miabbott
Member Author

e2e-azure got the cluster up and running, but the e2e tests failed on:

[sig-instrumentation][Late] Alerts shouldn't report any alerts in firing or pending state apart from Watchdog and AlertmanagerReceiversNotConfigured and have no gaps in Watchdog firing [Skipped:Disconnected] [Suite:openshift/conformance/parallel]

@patrickdillon
Contributor

/approve

RHEL worker failure BZ is captured here (just FYI): https://bugzilla.redhat.com/show_bug.cgi?id=2003646

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Oct 20, 2021
@openshift-bot
Contributor

/retest-required

Please review the full test history for this PR and help us cut down flakes.

1 similar comment

Contributor

@e-tienne e-tienne left a comment


/lgtm

@openshift-ci
Contributor

openshift-ci bot commented Oct 20, 2021

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: e-tienne, mike-nguyen, patrickdillon

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-bot
Contributor

/retest-required

Please review the full test history for this PR and help us cut down flakes.

4 similar comments

@openshift-ci
Contributor

openshift-ci bot commented Oct 20, 2021

@miabbott: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

| Test name | Commit | Details | Required | Rerun command |
| --- | --- | --- | --- | --- |
| ci/prow/e2e-aws-workers-rhel7 | ab33000 | link | false | /test e2e-aws-workers-rhel7 |
| ci/prow/e2e-aws-workers-rhel8 | ab33000 | link | false | /test e2e-aws-workers-rhel8 |
| ci/prow/e2e-crc | ab33000 | link | false | /test e2e-crc |
| ci/prow/e2e-azure | ab33000 | link | false | /test e2e-azure |

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@openshift-bot
Contributor

/retest-required

Please review the full test history for this PR and help us cut down flakes.

7 similar comments

@openshift-merge-robot openshift-merge-robot merged commit d641288 into openshift:master Oct 20, 2021
@openshift-ci
Contributor

openshift-ci bot commented Oct 20, 2021

@miabbott: All pull requests linked via external trackers have merged:

Bugzilla bug 2004596 has been moved to the MODIFIED state.

In response to this:

Bug 2004596: bump RHCOS 4.10 boot image metadata

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
