Bug 1889540: manifests: Allow 'for: 20m' for CloudCredentialOperatorDown #262
Conversation
The alert was born with a 5m 'for' as CCOperatorDown in 63af2de (add alert for when operator is down, 2019-10-28, openshift#132). But folks are unlikely to be churning their creds so quickly that the occasional longer operator outage is worth waking an admin with a midnight alarm. This is true in general, although folks have been revisiting this alert in the context of 4.5->4.6 updates, where a shift in leader leasing has led to a risk of an 8-minute delay as a 4.6 operator waits patiently before picking up a lease abandoned by a 4.5 operator [1]. The new 20m threshold allows for two such delays with room to spare, and also ensures we aren't waking folks up if there's a brief network or registry outage while pods are being rescheduled, or anything minor like that. Some things are worth more aggressive thresholds, but I don't think the cred operator is one of them. [1]: https://bugzilla.redhat.com/show_bug.cgi?id=1889540#c4
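The change itself is a one-line bump of the rule's 'for' clause. A minimal sketch of what such a PrometheusRule entry looks like (the expr, labels, and annotations here are illustrative assumptions, not the operator's exact manifest):

```yaml
groups:
- name: cloud-credential-operator.rules
  rules:
  - alert: CloudCredentialOperatorDown
    # Assumed expression: fire when no cloud-credential-operator target is up.
    expr: absent(up{job="cloud-credential-operator"} == 1)
    # Bumped from 5m: tolerates two ~8m leader-lease handoff delays during a
    # 4.5->4.6 update, plus brief network/registry blips, with room to spare.
    for: 20m
    labels:
      severity: critical
    annotations:
      message: cloud-credential-operator pod not running
```

The 'for' clause means the expression must stay true continuously for that long before the alert fires, so a transient outage shorter than 20m never pages anyone.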
@wking: This pull request references Bugzilla bug 1889540, which is invalid:
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/bugzilla refresh
@wking: This pull request references Bugzilla bug 1889540, which is valid. The bug has been moved to the POST state. The bug has been updated to refer to the pull request using the external bug tracker. 3 validation(s) were run on this bug.
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I am in agreement and I really wish this was in 4.6.0 now. :( Thanks Trevor! /lgtm
/retest
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: dgoodwin, wking The full list of commands accepted by this bot can be found here. The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/retest Please review the full test history for this PR and help us cut down flakes.
Hah, I seem to have broken the operator :p. Upgrade:
$ curl -s https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/pr-logs/pull/openshift_cloud-credential-operator/262/pull-ci-openshift-cloud-credential-operator-master-e2e-upgrade/1320873201856679936/artifacts/e2e-upgrade/gather-extra/clusterversion.json | jq -r '.items[].status.history[] | .startedTime + " " + .completionTime + " " + .version + " " + .state + " " + (.verified | tostring)'
2020-10-27T00:22:38Z 4.7.0-0.ci.test-2020-10-26-234654-ci-op-h7f9j7x7 Partial false
2020-10-26T23:54:37Z 2020-10-27T00:19:53Z 4.7.0-0.ci.test-2020-10-26-234143-ci-op-h7f9j7x7 Completed false
$ curl -s https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/pr-logs/pull/openshift_cloud-credential-operator/262/pull-ci-openshift-cloud-credential-operator-master-e2e-upgrade/1320873201856679936/artifacts/e2e-upgrade/gather-extra/clusterversion.json | jq -r '.items[].status.conditions[] | .lastTransitionTime + " " + .type + "=" + .status + " " + .reason + ": " + .message' | sort
2020-10-26T23:54:37Z RetrievedUpdates=False NoChannel: The update channel has not been configured.
2020-10-27T00:19:53Z Available=True : Done applying 4.7.0-0.ci.test-2020-10-26-234143-ci-op-h7f9j7x7
2020-10-27T00:22:38Z Progressing=True ClusterOperatorNotAvailable: Unable to apply 4.7.0-0.ci.test-2020-10-26-234654-ci-op-h7f9j7x7: the cluster operator cloud-credential has not yet successfully rolled out
2020-10-27T00:57:11Z Failing=True ClusterOperatorNotAvailable: Cluster operator cloud-credential is still updating
$ curl -s https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/pr-logs/pull/openshift_cloud-credential-operator/262/pull-ci-openshift-cloud-credential-operator-master-e2e-upgrade/1320873201856679936/artifacts/e2e-upgrade/gather-extra/clusteroperators.json | jq -r '.items[] | select(.metadata.name == "cloud-credential").status.conditions[] | .lastTransitionTime + " " + .type + "=" + .status + " " + .reason + ": " + .message' | sort
2020-10-26T23:55:39Z Available=True :
2020-10-26T23:55:39Z Upgradeable=True :
2020-10-26T23:55:42Z Degraded=False :
2020-10-27T00:09:31Z Progressing=False :
$ curl -s https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/pr-logs/pull/openshift_cloud-credential-operator/262/pull-ci-openshift-cloud-credential-operator-master-e2e-upgrade/1320873201856679936/artifacts/e2e-upgrade/gather-extra/clusteroperators.json | jq -r '.items[] | select(.metadata.name == "cloud-credential").status.versions'
[
{
"name": "operator",
"version": "4.7.0-0.ci.test-2020-10-26-234143-ci-op-h7f9j7x7"
}
]

Ah, the operator pod is crash-looping:

$ curl -s https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/pr-logs/pull/openshift_cloud-credential-operator/262/pull-ci-openshift-cloud-credential-operator-master-e2e-upgrade/1320873201856679936/artifacts/e2e-upgrade/gather-extra/pods/openshift-cloud-credential-operator_cloud-credential-operator-649f878d55-6fp8r_cloud-credential-operator.log
Copying system trust bundle
cp: cannot remove '/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem': Permission denied
/hold
@akhil-rane is working on the problems introduced into the build cluster.
openshift/release#13491 should fix the failures.
/hold cancel
/hold
Turns out openshift/release#13491 only moved postsubmit builds, not presubmit builds.
Trying again after openshift/release#13499:
/hold cancel
And indeed:

$ curl -s https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/pr-logs/pull/openshift_cloud-credential-operator/262/pull-ci-openshift-cloud-credential-operator-master-e2e-aws/1325872495944798208/artifacts/e2e-aws/gather-extra/clusteroperators.json | jq -r '.items[] | select(.metadata.name == "marketplace").status.conditions[] | .lastTransitionTime + " " + .type + "=" + .status + " " + (.reason // "-") + ": " + (.message // "-")'
2020-11-09T19:02:47Z Progressing=False OperatorAvailable: Successfully progressed to release version: 4.7.0-0.ci.test-2020-11-09-184839-ci-op-zbzw3qs6
2020-11-09T19:02:47Z Available=True OperatorAvailable: Available release version: 4.7.0-0.ci.test-2020-11-09-184839-ci-op-zbzw3qs6

Dunno why it isn't setting
So that is Docker's new throttling, rhbz#1895107.
We will eventually break through all the flakes ;).
/cherrypick release-4.6
@wking: once the present PR merges, I will cherry-pick it on top of release-4.6 in a new PR and assign it to you. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@wking: All pull requests linked via external trackers have merged: Bugzilla bug 1889540 has been moved to the MODIFIED state. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@wking: new pull request created: #267 In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.