Bug 1795776: data/bootstrap: delay the removal of bootstrap mcs #3007
Conversation
/test e2e-aws-upi
The bootstrap MCS is responsible for serving Ignition configs to the booting control plane machines while we wait for the full control plane (which will host the in-cluster MCS). If we remove it too early, the control plane machines cannot boot and the control plane never starts. The etcd health check, which gates the removal of the MCS, reports healthy (since the one-node, bootstrap etcd cluster is actually up) before the control plane machines have had a chance to boot. The result is that one or more of the control plane machines get stuck during boot (because Ignition is still trying to fetch a config) and the cluster fails to bootstrap. This moves the bootstrap MCS removal to after bootkube has finished, which ensures that all of the initial manifests have been loaded into the cluster (including the MCO/MCS manifests).
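The reordering described above can be sketched as a simplified bootstrap sequence. This is an illustrative sketch only, not the installer's actual data/bootstrap code; the function names and messages are hypothetical stand-ins for the real steps:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the fix: the bootstrap MCS teardown moves from
# after the etcd health gate to after bootkube completes. Function names
# are illustrative, not the installer's real script contents.
set -euo pipefail

wait_for_etcd_health() { echo "etcd healthy (one-node bootstrap cluster)"; }
run_bootkube()         { echo "bootkube done: initial manifests applied (incl. MCO/MCS)"; }
stop_bootstrap_mcs()   { echo "stopping bootstrap MCS"; }

wait_for_etcd_health
# Before the fix, stop_bootstrap_mcs ran here -- too early, since etcd
# reports healthy before the control plane machines have fetched their
# Ignition configs from the bootstrap MCS.
run_bootkube
stop_bootstrap_mcs  # moved: only runs once bootkube has finished
```

The key point is that etcd health is not a proxy for "control plane machines have booted"; bootkube completion is the later, safer gate.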
All three IPI installs and the UPI install completed. I'm going to force-push to clean up the commit message and get in another round of testing, but I think this is probably ready to go.
/lgtm
/test aws-e2e-upi
No one saw that typo... /test e2e-aws-upi
@crawford: This pull request references Bugzilla bug 1795776, which is valid. The bug has been moved to the POST state. The bug has been updated to refer to the pull request using the external bug tracker. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/approve |
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: abhinavdahiya. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
The two AWS jobs that failed both bootstrapped properly. In the FIPS case, tests failed because of OLM. In the other, the installation failed because two of the workers failed to come up. While we dig into that further (we don't think it's related to this PR), I want to get the CI machine rolling again. /test e2e-aws
@crawford: All pull requests linked via external trackers have merged. Bugzilla bug 1795776 has been moved to the MODIFIED state. In response to this:
@crawford: The following tests failed, say /retest to rerun all failed tests.
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
Thank you, @abhinavdahiya, for identifying the culprit.