
Worker upgrade started after Pods became Ready #546

Merged: 1 commit into k0sproject:main on Aug 30, 2023

Conversation

@irumaru (Contributor) commented Aug 27, 2023

During cluster upgrade, change to start the worker node upgrade after all kube-system pods are up and running, instead of immediately after the control plane upgrade is finished.

The reason for the change is to eliminate service downtime. Prior to this change, some services would be down for tens of seconds during worker node upgrades.

For more information, see the following Issue
Fixes: #537
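
As a rough sketch of the approach (not the PR's actual code; the kubectl invocation via os/exec and the pod-readiness check below are assumptions for illustration), the change amounts to polling kube-system pod readiness before touching the workers:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
        "time"
    )

    // podList captures only the fields needed for the readiness check.
    type podList struct {
        Items []struct {
            Status struct {
                Conditions []struct {
                    Type   string `json:"type"`
                    Status string `json:"status"`
                } `json:"conditions"`
            } `json:"status"`
        } `json:"items"`
    }

    // kubeSystemReady returns true when every kube-system pod reports Ready.
    func kubeSystemReady() (bool, error) {
        out, err := exec.Command("kubectl", "-n", "kube-system", "get", "pods", "-o", "json").Output()
        if err != nil {
            return false, err
        }
        var pods podList
        if err := json.Unmarshal(out, &pods); err != nil {
            return false, err
        }
        for _, p := range pods.Items {
            ready := false
            for _, c := range p.Status.Conditions {
                if c.Type == "Ready" && c.Status == "True" {
                    ready = true
                }
            }
            if !ready {
                return false, nil
            }
        }
        return true, nil
    }

    func main() {
        // Poll until kube-system settles; only then start the worker upgrade.
        for {
            ok, err := kubeSystemReady()
            if err == nil && ok {
                break
            }
            time.Sleep(5 * time.Second)
        }
        fmt.Println("kube-system pods ready, starting worker upgrade")
    }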

type KubeSystemEvent struct {
    Items []struct {
        Reason    string `json:"reason"`
        EventTime string `json:"eventTime"`
    } `json:"items"`
}

Did you try with just time.Time? I'd expect it to work, as these kube JSONs should come straight from Go's json.Marshal.
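
For reference, a minimal, self-contained sketch of that suggestion (the sample JSON below is fabricated for illustration): Kubernetes serializes eventTime in RFC 3339 form, which encoding/json parses straight into a time.Time, so the string field and any manual parsing can be dropped:

    package main

    import (
        "encoding/json"
        "fmt"
        "time"
    )

    // KubeSystemEvent with eventTime decoded directly into time.Time.
    type KubeSystemEvent struct {
        Items []struct {
            Reason    string    `json:"reason"`
            EventTime time.Time `json:"eventTime"`
        } `json:"items"`
    }

    func main() {
        raw := []byte(`{"items":[{"reason":"Scheduled","eventTime":"2023-08-27T10:00:00.000000Z"}]}`)
        var ev KubeSystemEvent
        if err := json.Unmarshal(raw, &ev); err != nil {
            panic(err)
        }
        fmt.Println(ev.Items[0].Reason, ev.Items[0].EventTime)
    }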


// Returns true if a "schedule" reason is found in an event newer than the time given in the argument.
func (h *Host) FindScheduleFromKubeSystemEvents(now time.Time) (bool, error) {
    output, err := h.ExecOutput(h.Configurer.K0sCmdf("kubectl -n kube-system get events -o json"), exec.Sudo(h))
@kke (Contributor) commented Aug 28, 2023

I think something like this should work:

kubectl -n kube-system get events --field-selector reason=Scheduled -o json

This should make it possible to remove some extra logic.
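
Sketched out, the simplified version might look like this (reusing the receiver and helpers from the quoted snippet, and assuming the time.Time event field from the earlier sketch; names and error handling are illustrative, not the merged code):

    // Returns true if a Scheduled event newer than the given time exists.
    func (h *Host) FindScheduleFromKubeSystemEvents(since time.Time) (bool, error) {
        output, err := h.ExecOutput(h.Configurer.K0sCmdf("kubectl -n kube-system get events --field-selector reason=Scheduled -o json"), exec.Sudo(h))
        if err != nil {
            return false, err
        }
        var events KubeSystemEvent
        if err := json.Unmarshal([]byte(output), &events); err != nil {
            return false, err
        }
        // The server already filtered on reason, so only the timestamp check remains.
        for _, item := range events.Items {
            if item.EventTime.After(since) {
                return true, nil
            }
        }
        return false, nil
    }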

@irumaru (Contributor, Author) commented Aug 28, 2023

Thank you. I was able to simplify the code: 63a7c89

For more information, see the following Issue
Fixes: k0sproject#537

Signed-off-by: Kimmo Lehto <klehto@mirantis.com>
@kke force-pushed the upgrade-wait-kubesystem-pods-ready branch from 63a7c89 to 50a4a9b on August 30, 2023 11:27
@kke merged commit 156eb4b into k0sproject:main on Aug 30, 2023. 18 checks passed.
Successfully merging this pull request may close these issues:

Upgrading a cluster with two worker nodes causes downtime (#537)