
Rolling restart of pods #13488

Closed
ghodss opened this issue Sep 2, 2015 · 112 comments · Fixed by #77423
Labels
  • area/app-lifecycle
  • kind/feature: Categorizes issue or PR as related to a new feature.
  • lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.
  • priority/backlog: Higher priority than priority/awaiting-more-evidence.
  • sig/apps: Categorizes an issue or PR as relevant to SIG Apps.
Comments

@ghodss (Contributor) commented Sep 2, 2015

kubectl rolling-update is useful for incrementally deploying a new replication controller. But if you have an existing replication controller and want to do a rolling restart of all the pods it manages, you are forced to do a no-op update to an RC with a new name and the same spec. It would be useful to be able to do a rolling restart without changing the RC or supplying its spec, so that anyone with access to kubectl could easily initiate a restart without worrying about having the spec locally, making sure it's the same/up to date, etc. This could work in a few different ways:

  1. A new command, kubectl rolling-restart that takes an RC name and incrementally deletes all the pods controlled by the RC and allows the RC to recreate them.
  2. Same as 1, but instead of deleting each pod, the command iterates through the pods and issues some kind of "restart" command to each pod incrementally (does this exist? is this a pattern we prefer?). The advantage of this one is that the pods wouldn't get unnecessarily rebalanced to other machines.
  3. kubectl rolling-update with a flag that lets you specify an old RC only, and it follows the logic of either 1 or 2.
  4. kubectl rolling-update with a flag that lets you specify an old RC only, and it auto-generates a new RC based on the old one and proceeds with normal rolling update logic.

All of the above options would need the MaxSurge and MaxUnavailable options recently introduced (see #11942), along with readiness checks along the way, to make sure the restart completes without taking down all the pods.

@nikhiljindal @kubernetes/kubectl
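Option 1 is essentially what later shipped as kubectl rollout restart (the fix linked above, #77423). A minimal usage sketch, assuming a deployment named myapp:

```shell
# Incrementally replace every pod in the deployment, honoring the
# rollout strategy's maxSurge/maxUnavailable and readiness checks.
kubectl rollout restart deployment myapp
# Block until the restart has finished (or failed).
kubectl rollout status deployment myapp
```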

@nikhiljindal (Contributor)

cc @ironcladlou @bgrant0607

What's the use case for restarting the pods without any changes to the spec?

Note that there won't be any way to roll back the change if pods start failing when they are restarted.

@ghodss (Contributor, Author) commented Sep 2, 2015

Whenever services get into some wedged or undesirable state (maxed out connections and are now stalled, bad internal state, etc.). It's usually one of the first troubleshooting steps if a service is seriously misbehaving.

If the first pod fails as it is restarted, I would expect it to either stop the rollout or keep retrying to start that pod.

@smarterclayton (Contributor)

Also, a rolling restart with no spec change reallocates pods across the cluster.

However, I would also like the ability to do this without rescheduling the pods. That could be a rolling label change, but may pick up new dynamic config or clear the local file state.


@ghodss (Contributor, Author) commented Sep 2, 2015

@smarterclayton Is that like my option 2 listed above? Though why would labels be changed?

@bgrant0607 bgrant0607 added priority/backlog Higher priority than priority/awaiting-more-evidence. area/kubectl team/ux labels Sep 2, 2015
@bgrant0607 (Member)

Re. wedged: That's what liveness probes are for.

Re. rebalancing: see #12140

If we did support this, I'd lump it with #9043 -- the same mechanism is required.

@ghodss (Contributor, Author) commented Sep 2, 2015

I suppose this would more be for a situation where the pod is alive and responding to checks but still needs to be restarted. One example is a service with an in-memory cache or internal state that gets corrupted and needs to be cleared.

I feel like asking for an application to be restarted is a fairly common use case, but maybe I'm incorrect.

@bgrant0607 (Member)

Corruption would just be one pod, which could just be killed and replaced by the RC.

The other case mentioned offline was to re-read configuration. That's dangerous to do implicitly, because restarts for any reason would cause containers to load the new configuration. It would be better to do a rolling update to push a new versioned config reference (e.g. in an env var) to the pods. This is similar to what motivated #1353.

@gmarek (Contributor) commented Sep 9, 2015

@bgrant0607 have we decided that we don't want to do this?

@bgrant0607 (Member)

@gmarek Nothing, for now. Too many things are underway already.

@bgrant0607 bgrant0607 removed their assignment Sep 9, 2015
@gmarek (Contributor) commented Sep 10, 2015

Can we have a post-v1.1 milestone (or something similar) for the things we deem important but lack the people to fix straight away?

@Glennvd commented Dec 1, 2015

I would be a fan of this feature as well; you don't want to be forced to switch tags for every minor update you want to roll out.

@mbmccoy commented Dec 31, 2015

I'm a fan of this feature. Use case: Easily upgrade all the pods to use a newly-pushed docker image (with imagePullPolicy: Always). I currently use a bit of a hacky solution: Rolling-update with or without the :latest tag on the image name.

@mbmccoy commented Jan 8, 2016

Another use case: Updating secrets.

@ericuldall

I'd really like to see this feature. We run Node apps on Kubernetes and currently have certain use cases where we restart pods to clear in-app pseudo-caching.

Here's what I'm doing for now:

kubectl get pod | grep 'pod-name' | cut -d " " -f1 - | xargs -n1 -P 10 kubectl delete pod

This deletes pods 10 at a time and works well in a replication controller set up. It does not address any concerns like pod allocation or new pods failing to start. It's a quick solution when needed.
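A variation on the same idea that avoids parsing kubectl's table output with grep/cut: if the pods carry a label (app=pod-name here, hypothetical), a selector can pick them directly:

```shell
# -o name prints "pod/<name>" lines; xargs deletes up to 10 in parallel
# and the replication controller recreates each deleted pod.
kubectl get pods -l app=pod-name -o name | xargs -n1 -P10 kubectl delete
```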

@jonaz commented Apr 25, 2016

I would really like to be able to do a rolling restart.
The main reason is that we feed ENV variables into pods using a ConfigMap, and if we change the config we need to restart the consumers of that ConfigMap.

@paunin commented May 10, 2016

Yes, there are a lot of cases where you really want to restart a pod/container without any changes inside: configs, cache, reconnecting to external services, etc. I really hope this feature gets developed.

@paunin commented May 10, 2016

Small workaround (I use deployments and want to change configs without real changes in the image/pod):

  • create the ConfigMap
  • create the deployment with an ENV variable (you will use it as an indicator for your deployment) in any container
  • update the ConfigMap
  • update the deployment (change this ENV variable)

k8s will see that the definition of the deployment has changed and will start the process of replacing the pods.
PS: if someone has a better solution, please share.
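The steps above can be scripted. A minimal sketch, assuming a deployment named myapp and a local config file config.yaml (both names hypothetical), using the config's checksum as the indicator variable so the deployment only rolls when the config actually changed:

```shell
# Recreate the ConfigMap from the local file.
kubectl create configmap myapp-config --from-file=config.yaml \
  --dry-run=client -o yaml | kubectl apply -f -
# Changing this env var edits the pod template, which triggers a
# normal rolling update; an unchanged checksum is a no-op.
kubectl set env deployment/myapp CONFIG_SHA="$(sha256sum config.yaml | cut -d' ' -f1)"
```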

@Lasim commented May 10, 2016

Thank you @paunin

@wombat commented Jul 28, 2016

@paunin That's exactly the case where we need it currently: we have to change ConfigMap values that are very important to the services and need to be rolled out to the containers within minutes to a few hours. If no deployment happens in the meantime, the containers will all fail at the same time and we will have a partial downtime of at least a few seconds.

@dimileeh commented Dec 19, 2019

Our GKE cluster on "rapid" release channel has upgraded itself to Kubernetes 1.16 and now kubectl rollout restart has stopped working:

kubectl rollout restart deployment myapp
error: unknown command "restart deployment myapp"

@nikhiljindal asked a while ago about the use case for updating the deployments without any changes to the specs. Maybe we're doing it in a non-optimal way, but here it is: our pre-trained ML models are loaded into memory from Google Cloud Storage. When model files get updated on GCS, we want to rollout restart our K8S deployment, which pulls the models from GCS.

I appreciate we aren't able to roll back the deployment with previous model files easily, but that's the trade-off we adopted to bring models as close as possible to the app and avoid a network call (as some might suggest).

@apelisse (Member) commented Dec 19, 2019

hey @dimileeh

Do you happen to know what version of kubectl you're using now? and what version you used before? I'd love to know if there was a regression, but at the same time I'd be surprised if the feature had entirely disappeared.

With regard to the GCS thing (knowing very little about your use case, so sorry if this makes no sense): I would suggest that the GCS models get a different name every time they are modified (maybe suffixed with their hash), and that the name be included in the deployment. Updating the deployment to use the new files would automatically trigger a rollout. This gives you the ability to roll back to a previous deployment/model, a better understanding of the changes happening to the models, etc.
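A sketch of that suggestion, with hypothetical bucket, file, and deployment names; the content hash in the object name both versions the model and drives the rollout:

```shell
MODEL_HASH=$(sha256sum model.bin | cut -d' ' -f1)
# Upload under a content-addressed name, then point the deployment at it.
# Updating the env var triggers an ordinary, roll-back-able rolling update.
gsutil cp model.bin "gs://my-bucket/models/model-${MODEL_HASH}.bin"
kubectl set env deployment/myapp MODEL_PATH="models/model-${MODEL_HASH}.bin"
```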

@dimileeh commented Dec 19, 2019

hi @apelisse, thank you for your response!

When I run kubectl version from Google Cloud Terminal, I get the following:

Client Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.11-dispatcher", GitCommit:"2e298c7e992f83f47af60cf4830b11c7370f6668", GitTreeState:"clean", BuildDate:"2019-09-19T22:20:12Z", GoVersion:"go1.11.13", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.0-gke.20", GitCommit:"d324c1db214acfc1ff3d543767f33feab3f4dcaa", GitTreeState:"clean", BuildDate:"2019-11-26T20:51:21Z", GoVersion:"go1.12.11b4", Compiler:"gc", Platform:"linux/amd64"}

When I tried to upgrade kubectl via gcloud components update, it said I'm already using the latest versions of all products. Therefore, I think my kubectl version stayed the same while the K8S cluster upgraded from 1.15 to 1.16.

The Kubernetes documentation for 1.17, 1.16 and 1.15 has nothing about the kubectl rollout restart feature. So I wonder if your valuable contribution could have disappeared from 1.16?


Thank you for your suggestion on model versioning, it makes perfect sense. We thought about that but then, since we retrain our models every day, we thought we'd start accumulating too many models (and they are quite heavy). Of course, we could use some script to clean up old versions after some time, but so far we've decided to keep it simple relying on kubectl rollout restart and not caring about model versioning :)

@apelisse (Member)

I can see the docs here:
https://v1-16.docs.kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-restart-em-

@dimileeh

Ah, thank you, I was looking here:
https://v1-16.docs.kubernetes.io/docs/reference/kubectl/cheatsheet/

@apelisse (Member)

@dimileeh PTAL kubernetes/website#18224 (I'll cherry-pick into the relevant branches once this gets merged).

@apelisse (Member)

@dimileeh I think I figured out what's wrong with your kubectl version, we'll be working on it.

@anuragtr

Yes, we also have the use case of restarting pods without a code change, after updating the ConfigMap. This is to update an ML model without redeploying the service.

@montanaflynn

@anuragtr with latest versions you can run

kubectl rollout restart deploy NAME

@mauri870 (Member)

I was using a custom command for that [1], glad it is now in the standard kubectl! Thanks

[1] https://github.com/mauri870/kubectl-renew

@japzio commented Dec 14, 2020

@anuragtr with latest versions you can run

kubectl rollout restart deploy NAME

@countrogue

@shoce commented May 2, 2021

As I understand it now: if I have a newer Docker image tagged :latest and a deployment using the image tagged :latest, then with kubectl rollout restart and imagePullPolicy: Always specified in the container template, it is possible to restart the pods and they will pull the newer image. But if the image is still the same, rollout restart will still restart the pods of the deployment.

Is there a way to ask Kubernetes to check whether the image has really changed, and restart pods ONLY if the image differs from the one the pods are using?

I am migrating services that are managed by docker-compose; currently I run docker-compose up -d and it restarts a service (using a :latest-tagged image) only if another version of the image (still tagged :latest) is available.

@AndrewFarley

@shoce I believe you may misunderstand how Kubernetes and this latest tag concept work.

Simply put, if you use a dynamic tag (like latest) that can point at multiple different images over time, you can never guarantee you are always using the same version of that tag (i.e. the same image). Kubernetes doesn't do a "lookup", as you seem to assume, to check which SHA sum the current deployment needs to pull from your container registry. This is true whether or not you have the image pull policy set.

An example may help. Let's say you have 3 nodes and a deployment of your "widgets" service with 3 replicas specified, and your image pull policy is Always. Say you trigger an update to your service (though I don't know how, since your image tag didn't change; let's say you do something silly I've seen before, like setting the current date in an annotation). The second this triggers, the first node will try to bring up a new pod with your latest latest, but before that pod gets healthy, your CI system or a dev pushes a new latest. After your first pod gets healthy, a new pod on the second node comes up, and this one is now using the newer latest.

TL;DR: For the most part, do not use the latest tag for anything, ever. It's really bad practice. All registries I know of have a feature you can enable that disallows pushing over an existing tag, exactly for this reason. There are simple use cases for latest (e.g. for internal CI images and tooling, and it can be useful in Dockerfiles), but you should understand when to use it and when not to. Deploying something into Kubernetes with a latest tag is generally viewed as "doing something wrong/funny" in my experience.
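One way to get the "restart only if the image actually changed" behavior asked about above is to resolve the tag to an immutable digest outside the cluster and pin the deployment to it (registry/deployment names hypothetical; crane is one tool that can resolve digests):

```shell
# Resolve :latest to its current digest, then pin the deployment to it.
# The pods only roll when the digest differs from the one already deployed.
DIGEST=$(crane digest registry.example.com/widgets:latest)
kubectl set image deployment/widgets widgets="registry.example.com/widgets@${DIGEST}"
```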

@shoce commented May 2, 2021

@AndrewFarley thanks, your explanation helped me a lot, and the example is something I could not see before. What I actually do is use :develop and :master tags for Docker images built from the corresponding git branches, and deploy them to dev and staging environments as soon as possible. As my dev and staging environments are hosted on a single host, I did not worry much.

After reading many discussions of :latest tags and Kubernetes, I finally agree to drop :latest tagging with Kubernetes, even for dev and staging environments. It was simple to use with docker-compose but is not OK with Kubernetes.

I have two related questions now and would appreciate any leads. They might seem off topic, but I believe these are the issues blocking people from dropping :latest tags.

  1. What would be the best way to trigger deployment updates, instead of the cron job I have now that does docker pull and docker-compose up -d? I have a feeling that making the build system trigger updates to the cluster is too much coupling. I would prefer the cluster to check for available updates, but maybe I am wrong here.
  2. How do I clean old Docker images out of a private registry? Right now I have only :develop and :master tags, so no cleaning is needed. But with proper tagging I will have to keep only the latest :develop-timestamp-or-hash image and delete all older ones. This adds a headache, as I try to minimize the chance of running out of disk space.

@AndrewFarley

@shoce You should google this problem for your registry provider (e.g. "Automatically delete old images on REGISTRY_PROVIDER" or "Delete untagged images on REGISTRY_PROVIDER").

Each registry tends to have its own API and/or tools to handle this. AWS, for example, has a built-in Lifecycle Policy that you can configure to automatically delete images without much effort on your part. In the past, on "simpler" registries, I've written simple scripts to query the old images and delete them.

Good luck!
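As a sketch of the AWS route mentioned above (repository name hypothetical), an ECR lifecycle policy that expires everything beyond the 10 most recent images:

```shell
aws ecr put-lifecycle-policy --repository-name my-repo \
  --lifecycle-policy-text '{
    "rules": [{
      "rulePriority": 1,
      "description": "Keep only the 10 most recent images",
      "selection": {
        "tagStatus": "any",
        "countType": "imageCountMoreThan",
        "countNumber": 10
      },
      "action": {"type": "expire"}
    }]
  }'
```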
