OCPBUGS-33018: pkg/daemon: Only re-bootstrap kubelet on X.509 errors #4351

Merged

Conversation

@wking (Member) commented May 6, 2024


Since 4d447c5 (backportable version of api-int cert work,
2024-01-09, openshift#4106) landed the initial re-bootstrap work, we had been
triggering re-bootstraps on a number of errors:

* "failed to watch"
* "unknown authority"
* "error on the server"

But "failed to watch" message is about what failed (a watch), and not
about how/why it failed.  Which leads to situations like [1]:

  $ curl -s https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/test-platform-results/logs/periodic-ci-openshift-release-master-nightly-4.16-upgrade-from-stable-4.15-e2e-metal-ipi-upgrade-ovn-ipv6/1787177729708265472/artifacts/e2e-metal-ipi-upgrade-ovn-ipv6/gather-extra/artifacts/nodes/master-0.ostest.test.metalkube.org/journal | zgrep -A1 'Re-bootstrapping kubelet'
  May 05 20:30:04.956832 master-0.ostest.test.metalkube.org root[197839]: machine-config-daemon[191096]: Re-bootstrapping kubelet in response to deferred kubeconfig changes and github.com/openshift/client-go/config/informers/externalversions/factory.go:125: Failed to watch *v1.FeatureGate: Get "https://api-int.ostest.test.metalkube.org:6443/apis/config.openshift.io/v1/featuregates?allowWatchBookmarks=true&resourceVersion=83332&timeout=9m14s&timeoutSeconds=554&watch=true": dial tcp: lookup api-int.ostest.test.metalkube.org on [fd2e:6f44:5dd8:c956::1]:53: no such host
  May 05 20:30:04.966735 master-0.ostest.test.metalkube.org systemd[1]: Stopping Kubernetes Kubelet...

But "dial tcp: lookup api-int... :53: no such host" is a DNS issue,
and not an X.509 issue, so re-bootstrapping the kubelet will not help.
And without [2] making graceful shutdown more reliable (vs. leaving a
dirty disk after a partial rollout attempt).  The errors we do want to
re-bootstrap on currently look like:

  W0104 01:14:10.660589  675519 reflector.go:533] k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://api-int.c-rh-c-eph.8p0c.p1.openshiftapps.com:6443/api/v1/nodes?resourceVersion=2971418211": tls: failed to verify certificate: x509: certificate signed by unknown authority

I'm not sure where "error on the server" came from; 4d447c5 doesn't
discuss that choice.  I'm dropping it for now, and we can restore it
if we get more clarity on what it was doing.  And, as I pointed out
above, "failed to watch" doesn't say why we failed.  With this
commit, we just want to match against the X.509 failures.

I'm picking "x509" as a hopefully-reliable substring, because I expect
that will turn up in any trust-handshake error.  I'd also be ok with
"certificate signed by unknown authority", or other long substring.
I'm not as excited about the outgoing "unknown authority", because I
don't understand why we'd get that without having the whole
"certificate signed by unknown authority" substring.  But practically,
any of the "x509: certificate signed by unknown authority" substrings
should be somewhat-reliable triggers, and we can look at more robust
backstopping in later work.

[1]: https://prow.ci.openshift.org/view/gs/test-platform-results/logs/periodic-ci-openshift-release-master-nightly-4.16-upgrade-from-stable-4.15-e2e-metal-ipi-upgrade-ovn-ipv6/1787177729708265472
[2]: https://issues.redhat.com/browse/MCO-1154
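
To make the new trigger concrete, here is a minimal, hypothetical Go sketch of the substring check described above. The helper name and the surrounding wiring are illustrative assumptions, not the actual pkg/daemon code:

  package main

  import (
      "fmt"
      "strings"
  )

  // shouldRebootstrapKubelet is a hypothetical helper name; it reports whether
  // an informer error message looks like an X.509 trust failure that a kubelet
  // re-bootstrap could actually fix.
  func shouldRebootstrapKubelet(errMsg string) bool {
      // "x509" is expected to appear in any Go certificate-verification
      // failure, e.g. "x509: certificate signed by unknown authority".
      return strings.Contains(errMsg, "x509")
  }

  func main() {
      // DNS failure from the journal excerpt above: re-bootstrapping will not help.
      dnsErr := `Failed to watch *v1.FeatureGate: ... dial tcp: lookup api-int.ostest.test.metalkube.org on [fd2e:6f44:5dd8:c956::1]:53: no such host`
      // Trust failure like the reflector warning above: re-bootstrapping can pick up the rotated CA.
      tlsErr := `failed to list *v1.Node: ... tls: failed to verify certificate: x509: certificate signed by unknown authority`

      fmt.Println(shouldRebootstrapKubelet(dnsErr)) // false
      fmt.Println(shouldRebootstrapKubelet(tlsErr)) // true
  }

Matching on the broad "x509" substring keeps the trigger narrow enough to skip DNS and connectivity failures while still catching "x509: certificate signed by unknown authority" and related certificate-verification errors.
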
@openshift-ci-robot added the jira/severity-critical, jira/valid-reference, and jira/valid-bug labels May 6, 2024
@openshift-ci-robot (Contributor)

@wking: This pull request references Jira Issue OCPBUGS-33018, which is valid. The bug has been moved to the POST state.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.16.0) matches configured target version for branch (4.16.0)
  • bug is in the state ASSIGNED, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @sergiordlr

The bug has been updated to refer to the pull request using the external bug tracker.


@yuqi-zhang (Contributor) left a comment

Based on our discussions so far, I think this is probably the most straightforward fix. Let's try getting this in and see if it helps the metal jobs. This should be generally safe anyway, since most platforms wouldn't need to do this.

/lgtm

@openshift-ci bot added the lgtm label May 6, 2024
openshift-ci bot (Contributor) commented May 6, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: wking, yuqi-zhang

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci bot added the approved label May 6, 2024
@openshift-merge-bot merged commit 2766d7c into openshift:master May 6, 2024
14 of 15 checks passed
@openshift-ci-robot (Contributor)

@wking: Jira Issue OCPBUGS-33018: All pull requests linked via external trackers have merged:

Jira Issue OCPBUGS-33018 has been moved to the MODIFIED state.


@wking deleted the only-rebootstrap-on-x509-errors branch May 6, 2024 21:05
@openshift-bot (Contributor)

[ART PR BUILD NOTIFIER]

This PR has been included in build ose-machine-config-operator-container-v4.16.0-202405070318.p0.g2766d7c.assembly.stream.el9 for distgit ose-machine-config-operator.
All builds following this will include this PR.

@openshift-merge-robot (Contributor)

Fix included in accepted release 4.16.0-0.nightly-2024-05-08-222442
