clarify kubelet upgrade process #12326
Is this about in-place updates of the kubelet? Do the kubeadm upgrade tests cover that scenario? The kube-up GCE upgrade tests just replace machines running the old kubelet with machines running a newer one, bypassing the in-place upgrade questions here. Unless or until we have testing for in-place upgrades, the conservative answer is that the upgrade process for a kubelet is to provision a new machine with the desired kubelet version.
Yes
I don't know. @kubernetes/sig-cluster-lifecycle, @kubernetes/sig-testing?
Yes, but our tests are failing due to problems with the upgrade framework in k/k (e.g. ginkgo skipping), and possibly some other reasons too. The current tests are pretty much unmaintained, and there are plans to replace them with something else next cycle, hopefully. We do recommend draining in our 12 -> 13 upgrade process.
/wg lts |
/language en |
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/kind feature |
cc: @freehan |
/remove-lifecycle stale |
There is evidence of users upgrading kubelets between minor versions without draining pods (kubernetes/kubernetes#84443). If that is required, the kubelet upgrade docs need to be made explicit. |
Is it required? Do we have a formal decision on that? |
Please let us also address whether drain is required when upgrading between patch versions, e.g., 1.17.0 to 1.17.1. These upgrades are, arguably, more frequent than upgrades between minor versions, so users have a greater incentive to skip drain.
WRT workload stability, the process is the same for PATCH releases, so a drain would be required there too. One potential difference from MINOR updates is that the kubelet's CPU checkpoint format is not supposed to change on PATCH releases.
/remove-lifecycle stale |
/assign @derekwaynecarr @dchen1107
Routing to sig-node leads. If we require draining nodes before minor version upgrade or reconfiguration (and that is required, as far as I can tell), that needs to be made explicit.
In my opinion, a cordon -> drain -> upgrade -> uncordon path is the safest thing to document for all situations. We should be able to do patch-level upgrades (z-stream, in x.y.z) without draining but, imho, there is no point in complicating the guidance.
cordon -> drain -> upgrade kubelet -> uncordon is the only path supported by SIG Node today. In the past, there were efforts to do in-place kubelet upgrades, including the containerized kubelet from the CoreOS team, to simplify the upgrade flow, but none of them is officially supported by the community. Let's make the upgrade flow explicit in the doc for now, while staying open to that enhancement.
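For reference, the supported path above can be sketched with standard kubectl commands. This is a hedged outline, not official documentation: the node name and the package-install line are illustrative placeholders, and the exact kubelet upgrade step depends on how the node was provisioned (kubeadm, package manager, image-based, etc.).

```shell
# Illustrative node name; substitute your own.
NODE=node-1

# 1. Cordon: mark the node unschedulable so no new pods land on it.
kubectl cordon "$NODE"

# 2. Drain: evict workloads, skipping DaemonSet-managed pods,
#    which cannot be rescheduled elsewhere anyway.
kubectl drain "$NODE" --ignore-daemonsets --delete-emptydir-data

# 3. Upgrade the kubelet on the node. Example for a deb-based,
#    kubeadm-managed node; the version string is hypothetical:
#    apt-get install -y kubelet=1.17.1-00 && systemctl restart kubelet

# 4. Uncordon: allow the scheduler to place pods on the node again.
kubectl uncordon "$NODE"
```

The drain step is what protects workload stability during the restart; skipping it is exactly the behavior this issue asks the docs to rule out for supported upgrades.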
Now that it's official, I'll work up a docs PR 🙂 |
/triage accepted |
/remove-lifecycle stale |
Opened #26098 to update the doc. The cluster-upgrade doc already included this information.
Follow up from #11060, tracked in #12329
The upgrade process for the kubelet is not sufficiently clear in user-facing documentation.

Page to update: https://kubernetes.io/docs/setup/version-skew-policy/