Document rollout support for K8s Worker Nodes #3401
Comments
One approach would be to introduce the `upgradeAfter` field.
I don't necessarily see an issue with introducing an `upgradeAfter` field; however, I'm wondering if it would make sense to have more generic support for something similar to `kubectl rollout restart`.
+1. Adding similar functionality to MachineDeployment as well; I'd wait for v0.4.0 and rename these fields to `rolloutAfter`.
We can wait for v0.4.0.
/milestone v0.4.0
The issue title was changed from `upgradeAfter` for K8s Worker Nodes to `rolloutAfter` for K8s Worker Nodes.
After looking into this a bit further, MachineDeployment rollout is already supported. In fact, it follows the same conventions as a regular Deployment rollout: a Deployment's rollout is triggered if and only if the Deployment's Pod template (that is, `.spec.template`) is changed, so a `kubectl rollout restart` simply sets a `restartedAt` annotation on the Pod template. Taking the same approach, when an (arbitrary) annotation is added to the MD.spec.template, a rollout is triggered:
Changes to the MD.spec.template result in a new MachineSet. We can see the revision-number annotation in the MD updated to the latest MS, and our `restartedAt` annotation in the new MS.spec.template as well. @vincepri Should we document the above approach? The rollout process for KCP is a bit different, but if that's not documented, we should add that as well.
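A minimal sketch of the approach above, mirroring what `kubectl rollout restart` does for Deployments. The MachineDeployment name (`my-md`), namespace (`my-ns`), and annotation key are hypothetical:

```shell
# Build a merge patch that adds a restartedAt annotation to
# MD.spec.template; any change there creates a new MachineSet,
# i.e. triggers an immediate rollout.
TS="$(date -u +%Y-%m-%dT%H:%M:%SZ)"
PATCH="{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"restartedAt\":\"${TS}\"}}}}}"
echo "${PATCH}"
# Apply it (hypothetical names):
#   kubectl patch machinedeployment my-md -n my-ns --type merge -p "${PATCH}"
```

Any annotation key works here; the controller only cares that `MD.spec.template` changed, not what the annotation means.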
Am I wrong, or does the approach described above trigger an immediate upgrade, while in this issue we are seeking to trigger a deferred upgrade?
Yes, it triggers an immediate rollout, which IMO is fine. This approach follows closely the general Deployment/ReplicaSet model. KCP handles things a bit differently: it has a specific field for "forcing" a rollout to occur immediately or at some time in the future.
Thanks. If this is the case, we already have this doc: https://cluster-api.sigs.k8s.io/tasks/change-machine-template.html#changing-infrastructure-machine-templates; I'm +1 to improving it if necessary.
Yes, IMO the doc should be improved. That specific page talks about changing machine templates. However, the use case here is to issue an immediate rollout, irrespective of whether anything has changed in the MD.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
Add a section that documents `KubeadmControlPlane.Spec.UpgradeAfter` and how to achieve a similar effect for machines managed by a `MachineDeployment`. Fixes kubernetes-sigs#3401
/lifecycle active
User Story
As a user/operator, I would like to roll out all K8s worker nodes to new hardware for various reasons.
As a developer/user/operator, I would like to have symmetry between control-plane and worker node rollouts for added simplicity.
Detailed Description
KCP.Spec.UpgradeAfter allows machines to be rolled out after a specific date and time, even if nothing in the Spec has changed. This approach has some benefits: it can be used to move control-plane nodes to new hardware, perform certificate rotation, allow changes in infra machine templates to be reflected in control-plane nodes (without creating a brand-new template), and so on.
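For reference, a minimal sketch of how the field might appear on a KubeadmControlPlane. The name, version, and timestamp are illustrative only:

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
metadata:
  name: my-control-plane        # illustrative name
spec:
  replicas: 3
  version: v1.18.2
  # Machines created before this timestamp are rolled out
  # even if nothing else in the spec has changed:
  upgradeAfter: "2020-07-15T00:00:00Z"
```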
[edited]
~~CAPI should support a similar approach for worker nodes.~~ CAPI already supports immediate rollout of worker Nodes (see comment below). We should improve the docs to reflect this.
/kind feature