Force deployment rolling-update #27081
Reusing image tags is discouraged, but it does tend to be a valid use case for development. See #23497 (comment) for discussion of the same issue for rc rolling-updates. Last I heard, Deployments were explicitly not supporting same-image updates, but that may have changed.
@jlowdermilk is correct. @rwillard If you want to trigger a rollout, change an innocuous field in the pod template, such as an annotation. You also need to ensure `imagePullPolicy` is `Always`.
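A minimal sketch of that advice, assuming a Deployment named `web` with a container also named `web` (both names are placeholders):

```sh
# Make sure the image is re-pulled whenever a pod starts
# (strategic merge patch; containers are matched by name).
kubectl patch deployment web -p \
  '{"spec":{"template":{"spec":{"containers":[{"name":"web","imagePullPolicy":"Always"}]}}}}'

# Bump an innocuous annotation in the pod template to trigger the rollout.
kubectl patch deployment web -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"redeploy\":\"$(date +%s)\"}}}}}"
```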
In case somebody is stuck, I managed to force-update deployments by applying a simple patch:

```sh
kubectl patch deployment web -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
```

This can easily be added to a Makefile or shell script.
I'm using `kubectl apply -f deployment.yaml` every time there's a new deployment, and adding annotations under the pod template's metadata did the trick of forcing a pod re-create on each new deploy for me.
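For reference, a sketch of where such an annotation lives in the manifest (the names and annotation key are placeholders). Note that it must sit under the pod template's metadata (`spec.template.metadata`), not the Deployment's top-level metadata, or no rollout is triggered:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
      annotations:
        # Bump this value (a timestamp, build number, git SHA, ...) before each apply.
        redeploy-date: "1466000000"
    spec:
      containers:
      - name: web
        image: example/web:latest
        imagePullPolicy: Always
```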
Also, I need to force a rolling update if I change a secret. As far as I understand, that also falls under this use case.
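One common workaround for that case is to store a checksum of the secret in a pod-template annotation, so a rollout triggers exactly when the secret's contents change. A sketch, with `web` and `mysecret` as placeholder names:

```sh
# Hash the secret's current contents and patch it into the pod template
# as an annotation; the rollout only fires when the hash actually changes.
CHECKSUM=$(kubectl get secret mysecret -o yaml | sha256sum | cut -d' ' -f1)
kubectl patch deployment web -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"checksum/secret\":\"$CHECKSUM\"}}}}}"
```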
@pstadler I've tried your solution, but it creates a new ReplicaSet without deleting the old one...
Adding an annotation didn't work either.
@Smana That's how rolling updates work, as far as I know, unless you mess with history limits. It allows for rollbacks.
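Keeping the old ReplicaSet around is what makes rollbacks possible. A quick sketch, with `web` as a placeholder deployment name:

```sh
# List the recorded revisions (one per retained ReplicaSet).
kubectl rollout history deployment/web

# Roll back to the previous revision; the old ReplicaSet is scaled back up.
kubectl rollout undo deployment/web

# Limit how many old ReplicaSets are kept for rollback.
kubectl patch deployment web -p '{"spec":{"revisionHistoryLimit":3}}'
```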
These things, a thousand times. Best practices are like sidewalks: nice and all, but real-life users will surprise you by finding a good reason to just cut across the grass, and they should have that power when needed.
This is a big workflow pain for me. Forcing redeploys when config has changed is a requirement for any environment I work with.
I see this GitHub issue first on search results for this, still. If you come across this and want to use @pstadler's highly user-rated workaround, please be advised that a label is not an "innocuous field" as @bgrant0607 advised. If you set a label to trigger a deploy, you'll run into label/selector errors as discussed in #26202. To quote #26202 (comment):

TL;DR: follow @bgrant0607's advice. You can force-update deployments by applying a simple annotation patch:
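A sketch of that annotation patch, following the same pattern as the label patch above but targeting an annotation instead (the deployment name `web` is a placeholder):

```sh
kubectl patch deployment web -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"$(date +%s)\"}}}}}"
```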
Pretty amazing that such an obviously useful feature is left hanging for what appear to be ideological reasons. We have had this problem during development (which is an equally important use case, by the way) for a long time. I will mention the cleanest of all the hacks to the team: adding an annotation, which of course should say "hack the script to force a rolling update on date xxxx". It's much better than killing pods manually, which is what they could think of (because people who are learning a technology are not ninjas yet). A command to force a rolling restart would be the obvious solution, and they have been wondering ...
The patch hack from @boomshadow does not work on StatefulSets. I don't understand why some sort of rolling update is not supported aside from image changes; I will change the Redis version rarely but tweak settings every month. The same applies to any other service where all I need to do is reload a simple config file. I should not have to bring Redis down completely just to change some minor setting.
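For what it's worth, here is the same annotation-bump pattern sketched against a StatefulSet (the name `redis` is a placeholder). It only rolls pods automatically when the updateStrategy is RollingUpdate; older StatefulSet API versions defaulted to OnDelete, which ignores template changes until pods are deleted manually:

```sh
# Only takes effect when spec.updateStrategy.type is RollingUpdate.
kubectl patch statefulset redis -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"$(date +%s)\"}}}}}"
```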
oh lord, not yet fixed?
@Aaron3 what issues are you having with
Regarding this solution, `kubectl patch deployment web -p ...`: so, why not use `currentTimestamp`? That's what I mean.
Why was this closed? Telling people to add an annotation to an already deployed deployment/statefulset using patch is not that user-friendly, and many have called it a hack. This sounds like a useful command to add, to allow a rolling redeployment even when nothing changed.
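A command along these lines did eventually ship: `kubectl rollout restart` (added in kubectl v1.15) triggers a fresh rollout without any manifest edits. A sketch, with placeholder resource names:

```sh
# Under the hood this bumps the kubectl.kubernetes.io/restartedAt annotation
# on the pod template, which is exactly the hack described above.
kubectl rollout restart deployment/web

# Also works for statefulsets and daemonsets.
kubectl rollout restart statefulset/redis
```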
Dupe of #13488 |
Unless you change something in the deployment's template, a rollout won't be triggered. Shouldn't I be able to trigger a rollout arbitrarily? In this case I want to rebuild my ReplicaSet with a new version of a Docker image that has the same tag. Maybe I'm just missing something.
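A sketch of the no-op being described (image and names are placeholders): re-pointing the deployment at the same tag leaves the pod template unchanged, so nothing rolls out:

```sh
# Same image reference as before: the pod template is unchanged,
# so no new ReplicaSet is created and no pods are restarted.
kubectl set image deployment/web web=example/web:latest
```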