Description
kubectl rolling-update is useful for incrementally deploying a new replication controller. But if you have an existing replication controller and want to do a rolling restart of all the pods that it manages, you are forced to do a no-op update to an RC with a new name and the same spec. It would be useful to be able to do a rolling restart without needing to change the RC or to provide the RC spec, so that anyone with access to kubectl could easily initiate a restart without worrying about having the spec locally, making sure it's the same/up to date, etc. This could work in a few different ways:
1. A new command, `kubectl rolling-restart`, that takes an RC name and incrementally deletes all the pods controlled by the RC, allowing the RC to recreate them.
2. Same as 1, but instead of deleting each pod, the command iterates through the pods and issues some kind of "restart" command to each pod incrementally (does this exist? is this a pattern we prefer?). The advantage of this one is that the pods wouldn't get unnecessarily rebalanced to other machines.
3. `kubectl rolling-update` with a flag that lets you specify an old RC only, which then follows the logic of either 1 or 2.
4. `kubectl rolling-update` with a flag that lets you specify an old RC only, which then auto-generates a new RC based on the old one and proceeds with the normal rolling-update logic.
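To make option 1 concrete, here is a minimal sketch of the delete-and-wait loop such a command could run. This is not kubectl or client-go code; the `FakeClient` and its methods are hypothetical stand-ins for API-server calls, used only to show the control flow (delete one pod, wait for its replacement to become ready, move on):

```python
import time

# Hypothetical in-memory stand-in for a Kubernetes client; a real
# implementation would talk to the API server instead.
class FakeClient:
    def __init__(self, pods):
        self.pods = list(pods)          # pod names managed by the RC
        self.deleted = []               # deletion order, for inspection

    def list_pods(self, rc_name):
        return list(self.pods)

    def delete_pod(self, name):
        self.deleted.append(name)
        # In this simulation the RC immediately recreates a replacement.
        self.pods[self.pods.index(name)] = name + "-new"

    def pod_ready(self, name):
        return True                     # assume replacements become ready

def rolling_restart(client, rc_name, poll_interval=0.0):
    """Sketch of option 1: delete the RC's pods one at a time, waiting
    for all pods to report ready before touching the next one."""
    for pod in client.list_pods(rc_name):
        client.delete_pod(pod)
        while not all(client.pod_ready(p) for p in client.list_pods(rc_name)):
            time.sleep(poll_interval)

client = FakeClient(["web-1", "web-2", "web-3"])
rolling_restart(client, "web")
print(client.deleted)   # pods are deleted one at a time, in order
```

The readiness gate inside the loop is what keeps the restart from outrunning the RC: the command never deletes pod N+1 until pod N's replacement is serving.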
All of the above options would need the MaxSurge and MaxUnavailable options recently introduced (see #11942), along with readiness checks along the way, to make sure that the restart doesn't take down all the pods at once.
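As a rough illustration of how MaxSurge/MaxUnavailable would bound each restart step, the sketch below computes a per-step batch size. The rounding convention (round unavailability down, surge up) follows how these fields are commonly interpreted elsewhere in Kubernetes, but the function names and exact semantics here are assumptions, not the actual implementation:

```python
import math

def resolve(value, replicas, round_up):
    """Resolve an absolute int or a percentage string like "25%"
    against the RC's replica count (assumed convention)."""
    if isinstance(value, str) and value.endswith("%"):
        pct = int(value[:-1]) / 100.0
        return math.ceil(replicas * pct) if round_up else math.floor(replicas * pct)
    return int(value)

def restart_batch_size(replicas, max_unavailable, max_surge):
    # Pods we may take down at once, plus extra pods we may create
    # above the desired count; guarantee progress of at least one pod.
    down = resolve(max_unavailable, replicas, round_up=False)  # round down
    up = resolve(max_surge, replicas, round_up=True)           # round up
    return max(1, down + up)

print(restart_batch_size(10, "25%", 0))   # floor(10 * 0.25) = 2
print(restart_batch_size(10, 0, "10%"))   # ceil(10 * 0.10) = 1
```

With both values set to 0 the `max(1, ...)` floor still allows one pod per step, so the restart can always make progress without violating availability more than necessary.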
@nikhiljindal @kubernetes/kubectl