Enhanced kubernetes version upgrades for workload clusters #3203
This issue requires a certain degree of coordination across several components, so the first question in my mind is where to implement this logic.
/milestone v0.4.0 We should revisit in the v1alpha4 timeframe; this probably needs a more detailed proposal.
cc @rbitia Ria, this might fit into your "cluster group" proposal?
We have a relatively small (but growing) number of clusters, so we're currently doing upgrades sort of manually. Conceptually, we think about our clusters in 3 streams - alpha, beta and stable¹ - and roll out upgrades and configuration changes according to stream. Our plan right now is to have common configuration for a stream in a CR (…). I don't think that it's CAPI's responsibility to implement all of that (or any), but if we can do some of the common stuff (version upgrades) here, that seems like it would be super valuable for the whole community. It also seems like the logic would be broadly applicable - copy template, update …²

¹ Names are illustrative and not definitive. Something, something hard problems in Computer Science.
² Grossly over-simplified here for effect.
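The stream model described above can be sketched as a small scheduling helper. This is purely illustrative: the stream names, ordering, and the `rollout_batches` function are hypothetical, not part of any CAPI API.

```python
# Sketch of the alpha/beta/stable stream idea described above.
# Stream names and their ordering are illustrative only.
STREAM_ORDER = ["alpha", "beta", "stable"]

def rollout_batches(clusters):
    """Group clusters by stream and return them in rollout order.

    `clusters` maps cluster name -> stream. An upgrade would be rolled
    out to each batch in turn, pausing for soak time between batches.
    """
    batches = []
    for stream in STREAM_ORDER:
        batch = sorted(name for name, s in clusters.items() if s == stream)
        if batch:
            batches.append((stream, batch))
    return batches

batches = rollout_batches({
    "dev-1": "alpha", "dev-2": "alpha",
    "staging-1": "beta",
    "prod-1": "stable", "prod-2": "stable",
})
print(batches)
```

A real implementation would of course gate each batch on health checks rather than just ordering, but the core idea is the same: upgrades flow from the least to the most critical stream.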
Thanks for the extra context @JTarasovic. From everything I'm hearing here, it might be worth considering some extra utilities/libraries/commands under …
I find that if I change the "spec.version" field in an existing KubeadmControlPlane object and apply the change, usually the controllers will upgrade my control plane without me introducing a new (AWS)MachineTemplate. It sounds like that's not supposed to work, and yet it does, most of the time. Why is that?
Does it actually change the version of the running cluster? It did not in our experience: it would roll the control plane instances, but they'd still be on the previous version.
This is how upgrading the Kubernetes version on control planes currently works: https://cluster-api.sigs.k8s.io/tasks/kubeadm-control-plane.html?highlight=rolling#how-to-upgrade-the-kubernetes-control-plane-version

Note that you might need to update the image as well if you are specifying the image to use in the machine template.
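Concretely, the documented flow amounts to changing `spec.version` on the KubeadmControlPlane (and, as noted above, usually rotating the referenced machine template so the image matches). A minimal sketch of the equivalent JSON merge patch, i.e. what `kubectl patch --type merge` would apply; the function name is hypothetical and this is not CAPI client code:

```python
import json

def version_patch(target_version):
    """Build the JSON merge patch equivalent of bumping spec.version
    on a KubeadmControlPlane. Illustrative only: real upgrades usually
    also rotate the referenced (AWS)MachineTemplate."""
    return json.dumps({"spec": {"version": target_version}})

print(version_patch("v1.19.1"))
```

Applying such a patch triggers the same rolling update of control plane machines that editing the object by hand does.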
Yes, it shows the new version there.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Any updates or action items here?
I think the … as that should allow folks to build controllers on top of it. I'm cool with closing this issue in favor of that.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
I think the …
/remove-lifecycle rotten
We are in a really similar situation, with a large number of clusters and three different pipelines/streams for development/staging/production clusters. We are starting the development of a new component to handle this in a similar fashion (copy template, update …). If I understand it correctly, this proposal adds …

Should we submit a new CAEP proposal for discussion?
Same use case here: looping over scalable machine compute resources (e.g. MachineDeployments) to upgrade them one by one against the current control plane version. For scenarios where more control is required, it would possibly be good to have …
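The "one by one against the current control plane version" gating can be sketched as a minor-version skew check plus a picker that advances one MachineDeployment at a time. The two-minor-version window mirrors the upstream kubelet/apiserver skew policy; the helper names are hypothetical, and major versions are assumed equal for simplicity:

```python
def minor(version):
    """Extract the minor version from a 'vX.Y.Z' string, e.g. 'v1.18.2' -> 18.
    Assumes the same major version throughout (true for all v1.x releases)."""
    return int(version.lstrip("v").split(".")[1])

def worker_upgrade_allowed(control_plane_version, worker_target_version):
    """Workers must not be newer than the control plane and should stay
    within the supported kubelet skew (assumed two minors here)."""
    cp, w = minor(control_plane_version), minor(worker_target_version)
    return w <= cp and cp - w <= 2

def next_to_upgrade(machine_deployments, target_version):
    """Pick the next MachineDeployment still behind the target, one at a time.
    `machine_deployments` is a list of (name, current_version) pairs."""
    for name, version in machine_deployments:
        if version != target_version:
            return name
    return None

print(worker_upgrade_allowed("v1.19.1", "v1.19.1"))
print(next_to_upgrade([("md-a", "v1.19.1"), ("md-b", "v1.18.2")], "v1.19.1"))
```

A controller built on this shape would upgrade the control plane first, then call `next_to_upgrade` on each reconcile until it returns `None`.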
We have a similar use case. We are using GitOps + CAPI to upgrade our clusters. For now we have to: create a new MachineTemplate, update the KCP, wait for that to finish, delete the old template; then create a new MachineTemplate for the MachineDeployment, wait for the rollout, and delete the old MachineTemplate. An operator or additional feature/resource that could handle this lifecycle as a whole (declaratively) would be ideal for us, so we can update the KCP and MachineDeployment MachineTemplate references at the same time, let the cluster reconcile and upgrade the control plane and workers in the correct order, then purge unwanted MachineTemplates.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
What about closing this given the ClusterClass work?
Agree. This will be 100% covered by what we want to do with ClusterClass.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
/close
@fabriziopandini: Closing this issue. In response to this:

> /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
User Story
As an operator, I would like to be able to easily update the Kubernetes version of my workload clusters to be able to stay on top of security patches and new features.
Detailed Description
The procedure for updating the k8s version currently is to copy the `MachineTemplate` for KCP, then update the KCP with the new version and a reference to the new `MachineTemplate`, which causes a rollout. Rinse and repeat for `MachineDeployments`.

Ideally, I'd be able to declare my intent to upgrade the workload cluster and have that be reconciled and rolled out for me.
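The manual procedure above can be sketched as an ordered plan that a reconciler would work through: rotate the control plane template first, wait for its rollout, then each MachineDeployment in turn, and finally clean up. The step names, template-naming scheme, and `upgrade_plan` function are illustrative assumptions, not a real CAPI API:

```python
def upgrade_plan(version, kcp_template, md_templates):
    """Order the manual upgrade steps described above: control plane
    template rotation first, then each MachineDeployment one at a time,
    and finally purge templates that are no longer referenced."""
    suffix = version.replace(".", "-")  # e.g. v1.19.1 -> v1-19-1 for a copy name
    steps = [
        ("copy-template", f"{kcp_template}-{suffix}"),
        ("update-kcp-version", version),
        ("wait-for-rollout", "control-plane"),
    ]
    for t in md_templates:
        steps += [
            ("copy-template", f"{t}-{suffix}"),
            ("update-md-version", version),
            ("wait-for-rollout", t),
        ]
    steps.append(("delete-unreferenced-templates", ""))
    return steps

for step in upgrade_plan("v1.19.1", "cp-template", ["md-template"]):
    print(step)
```

Declaring only the target version and letting a controller execute such a plan is essentially what the issue asks for, and what ClusterClass managed topologies later provided.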
Anything else you would like to add:
Discussed on 17 June 2020 weekly meeting.
/kind feature