Force deployment rolling-update #27081

Closed
rwillard opened this Issue Jun 8, 2016 · 19 comments

rwillard commented Jun 8, 2016

Unless you change something in the deployment's template, a rollout won't be triggered. Shouldn't I be able to trigger a rollout arbitrarily? In this case I want to rebuild my replica set with a new version of a Docker image that has the same tag. Maybe I'm just missing something.

jlowdermilk (Member) commented Jun 28, 2016

Reusing image labels is discouraged, but does tend to be a valid use case for development. See #23497 (comment) for discussion around the same issue for rc rolling-updates. Last I heard, deployments was explicitly not supporting same-image updates, but that may have changed.

bgrant0607 (Member) commented Jun 29, 2016

@jlowdermilk is correct.

@rwillard If you want to trigger a rollout, change an innocuous field in the pod template, such as an annotation. You also need to ensure imagePullPolicy is Always.
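For example, the current pull policy can be checked with a jsonpath query (the deployment name "web" here is a placeholder):

```shell
# Inspect the pull policy of the first container in the pod template;
# it must be "Always" for a same-tag image to be re-pulled on restart.
kubectl get deployment web \
  -o jsonpath='{.spec.template.spec.containers[0].imagePullPolicy}'
```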

@bgrant0607 bgrant0607 closed this Jun 29, 2016

pstadler commented Aug 7, 2016

In case somebody is stuck, I managed to force update deployments by applying a simple patch:

kubectl patch deployment web -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"

This can easily be added to a Makefile or shell script.

lymichaels commented Sep 19, 2016

I'm using kubectl apply -f deployment.yaml every time there's a new deployment, and changing an annotation under metadata did the trick of forcing a pod re-create on each deploy for me.

timanovsky commented Dec 8, 2016

Also, I need to force a rolling update when I change a secret. As far as I understand, that also falls under this use case.
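One common workaround for the secret case (a sketch only; the secret name "app-secret" and deployment name "web" are placeholders) is to hash the Secret's contents into a pod-template annotation, so that any change to the Secret changes the template and triggers a rollout:

```shell
# Compute a stable hash of the Secret's manifest and patch it into the
# deployment's pod template as an annotation. If the Secret changed,
# the annotation changes, and a rolling update is triggered.
HASH=$(kubectl get secret app-secret -o yaml | sha256sum | cut -d' ' -f1)
kubectl patch deployment web -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"secret-hash\":\"$HASH\"}}}}}"
```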

Smana commented Apr 6, 2017

@pstadler I've tried your solution, but it creates a new replica set without deleting the old one ...

Smana commented Apr 6, 2017

Adding an annotation didn't work either.

neverfox commented Apr 20, 2017

@Smana That's how rolling updates work, as far as I know: old replica sets are kept around (unless you lower the history limits) so that rollbacks are possible.

neverfox commented Apr 20, 2017

"does tend to be a valid use case for development"

"need force rolling update if I change a secret"

These things, a thousand times. Best practices are like sidewalks: nice and all, but real life users will surprise you by finding a good reason to just cut across the grass, and they should have that power when needed.

nergdron commented Apr 21, 2017

This is a big workflow pain for me. Forcing redeploys when config has changed is a requirement for any environment I work with.

boomshadow commented Sep 5, 2017

This GitHub issue still comes up first in search results for this. If you come across it and want to use @pstadler 's highly upvoted workaround, please be advised that a label is not an "innocuous field" as @bgrant0607 advised.

If you set the label to trigger a deploy, then you'll run into label/selector errors as discussed in #26202

To quote #26202 (comment)

labels -- they're like names, and it's dangerous to change them.

TL;DR follow @bgrant0607 's advice. You can force update deployments by applying a simple annotation patch:

kubectl patch deployment web -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}"

bgrant0607 (Member) commented Sep 6, 2017

Related: #9043, #13488

As for rollouts of new image with same tag, the recommended solution is to translate the tag to the digest form: #1697
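A sketch of that tag-to-digest translation (image and deployment names are placeholders, and docker must have pulled the image at least once for RepoDigests to be populated):

```shell
# Resolve the mutable tag to its immutable repo digest, then point the
# deployment at the digest so every rollout is unambiguous.
DIGEST=$(docker inspect --format='{{index .RepoDigests 0}}' myrepo/web:latest)
kubectl set image deployment/web web="$DIGEST"
```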

kletkeman commented Nov 3, 2017

Pretty amazing that such an obviously useful feature is left hanging for what appear to be ideological reasons. We have had this problem during development (which is an equally important use case, by the way) for a long time. I will mention the cleanest of all the hacks to the team -- adding an annotation, which of course should say "hack the script to force a rolling update on date xxxx". It's much better than killing pods manually, which is all they could think of (because people who are learning a technology are not ninjas yet). A command to force a rolling restart would be the obvious solution, and they have been wondering why there isn't one ...

Aaron3 commented Dec 29, 2017

The patch hack from @boomshadow does not work on statefulsets.

I don't understand why some sort of rolling update is not supported aside from image changes. I will rarely change the Redis version, but I tweak settings every month. The same applies to any other service where all I need to do is reload a simple config file.

I should not have to bring redis down completely just to change some minor setting.

1k2k commented Mar 14, 2018

oh lord, not yet fixed?

jethrogb commented Apr 24, 2018

@Aaron3 what issues are you having with kubectl patch statefulset? It appears to work fine for me.
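For reference, the same annotation patch applied to a statefulset looks like this (the name "redis" is a placeholder):

```shell
# Bump a timestamp annotation on the statefulset's pod template to
# trigger a rolling update of its pods.
kubectl patch statefulset redis -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"$(date +%s)\"}}}}}"
```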

commandfailure commented May 24, 2018

This solution

kubectl patch deployment web -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}"

needs to include a date field.

So why not use the current timestamp instead? That is:

kubectl patch deployment web -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"creationTimestamp\":\"`date --utc '+%FT%TZ'`\"}}}}"

mitchellmaler commented Jun 2, 2018

Why was this closed?

Telling people to add an annotation to an already deployed deployment/statefulset using patch is not that user friendly, and many have called it a hack.

This sounds like a useful command to add, allowing a rolling redeployment even when nothing has changed.

bgrant0607 (Member) commented Jun 5, 2018

Dupe of #13488
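For later readers: kubectl eventually gained a built-in command for exactly this. kubectl rollout restart bumps a restart annotation on the pod template under the hood, triggering a fresh rolling update with no spec changes:

```shell
# Trigger a rolling restart of a deployment without editing its spec
# ("web" and "redis" are placeholder resource names).
kubectl rollout restart deployment/web

# The same subcommand works for statefulsets and daemonsets:
kubectl rollout restart statefulset/redis
```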

@kubernetes kubernetes locked as resolved and limited conversation to collaborators Jun 5, 2018
