[WIP] Revert "Bug 2000216: Image policy should mutate DeploymentConfigs, StatefulSets, and new CronJobs" #1032
Conversation
@stbenjam: the contents of this pull request could not be automatically validated. The following commits could not be validated and must be approved by a top-level approver:
@stbenjam: This pull request references Bugzilla bug 2000216, which is valid. 3 validation(s) were run on this bug
Requesting review from QA contact.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@stbenjam: An error was encountered querying GitHub for users with public email (xiuwang@redhat.com) for bug 2000216 on the Bugzilla server at https://bugzilla.redhat.com. No known errors were detected; please see the full error message for details.
non-200 OK status code: 403 Forbidden, body:
{
  "documentation_url": "https://docs.github.com/en/free-pro-team@latest/rest/overview/resources-in-the-rest-api#secondary-rate-limits",
  "message": "You have exceeded a secondary rate limit. Please wait a few minutes before you try again."
}
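The 403 above is GitHub's documented secondary rate limit: the client is expected to back off and retry rather than treat it as a hard failure. A minimal sketch of detecting that case from a response, assuming we only have the status code and raw body (the helper name is hypothetical, not part of the actual bot code):

```python
import json

def is_secondary_rate_limit(status: int, body: str) -> bool:
    """Hypothetical helper (not from the Prow bot's real code): return True
    when a GitHub API response looks like the secondary rate limit 403."""
    if status != 403:
        return False
    try:
        message = json.loads(body).get("message", "")
    except ValueError:
        # Body was not JSON, so it cannot be the structured rate-limit reply.
        return False
    return "secondary rate limit" in message.lower()

# The body quoted in the error above, as a plain JSON string:
body = ('{"message": "You have exceeded a secondary rate limit. '
        'Please wait a few minutes before you try again."}')
print(is_secondary_rate_limit(403, body))  # True
```

A caller that sees True would typically sleep with exponential backoff before retrying, instead of surfacing the error to the user as happened here.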
Please contact an administrator to resolve this issue, then request a bug refresh.
/test e2e-aws-upgrade
@stbenjam: This pull request references Bugzilla bug 2000216, which is valid. 3 validation(s) were run on this bug
Requesting review from QA contact.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: stbenjam. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing
@stbenjam: The following tests failed:
/close
@stbenjam: Closed this PR.
@stbenjam: This pull request references Bugzilla bug 2000216. The bug has been updated to no longer refer to the pull request using the external bug tracker.
Reverts #1014
Starting with 4.10.0-0.nightly-2021-11-03-064540, we are failing release payload acceptance on SDN upgrades. Out of all the PRs, this one looks like possibly the most relevant. Reverting to test the hunch.
I'm not entirely clear what's wrong, but upgrades are failing with
the "master" pool should be updated before the CVO reports available at the new version
and the authentication operator failing to upgrade.
Example failure: https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-nightly-4.10-e2e-aws-upgrade/1455719487931158528
The timeline I can piece together from the above looks like this:
- @ 03:36:09, etcd reported unhealthy members: ip-10-0-169-115.us-west-1.compute.internal, which is a master
- @ 03:36:27, Node ip-10-0-169-115.us-west-1.compute.internal status is now: NodeSchedulable
- @ 03:36:33, Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver
- @ 03:36:33, Created container oauth-apiserver on the ip-10-0-169-115 master
- @ 03:36:36, master was updating after cluster version reached level: the "master" pool should be updated before the CVO reports available at the new version