Deployment with recreate strategy does not remove old replica set #24330
Comments
I am guessing it is because you are using different selectors?
Is there another way to force a deployment when the image tag remains the same? I started out without the selector and just the app label, but that didn't do anything at all.
If you are keeping everything as it is and just change the strategy to be `Recreate`…
No, I hit that before but worked my way around it by deleting the deployment and RS and starting a fresh deployment. This issue, however, has been happening since then.
@JorritSalverda all old RSes will be kept by default unless you specify the deployment's `.spec.revisionHistoryLimit`.
If you find this surprising please complain at #23597. |
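A sketch of where that field lives, using the current apps/v1 API (the name, labels, and image are illustrative):

```bash
# Sketch only: a deployment that keeps at most two old replica sets around.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  revisionHistoryLimit: 2   # old replica sets beyond this count are garbage-collected
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: registry.example.com/my-app:latest
EOF
```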
I think @JorritSalverda specifically means that old RSes are kept running instead of scaled down once a new version is deployed. This seems to be "working as intended", as you change the selector. From what I understand, an RS is "updated" when its "hash" differs, i.e. when anything in the pod template config is changed. So, instead of updating the selector, change a label in the pod template.
Thanks, by removing the version from the selector it now works as expected. I already set the `revisionHistoryLimit` as well.
I too am facing this issue. My old/previous RSes are being kept alive/running. My case in particular is this:
I use Jenkins to build my project and a Docker image and push it to GCR. This image only has the 'latest' tag. After that is done, I instruct Jenkins to change the `JenkinsBuildID` label of the deployment that is already in my cluster:
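A sketch of that kind of label change (the deployment name and `$BUILD_ID` variable are illustrative; the label has to land on the pod template, since that is what triggers a new replica set):

```bash
# Sketch: bump a pod-template label from a Jenkins build. Changing the
# pod template is what makes the deployment roll out a new replica set.
kubectl patch deployment my-app -p \
  '{"spec":{"template":{"metadata":{"labels":{"JenkinsBuildID":"'"${BUILD_ID}"'"}}}}}'
```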
This does update the deployment and create a new replica set. However, the previous one remains running and is "orphaned": it doesn't even show in the "Old Replica Sets" section of the Kubernetes UI. Am I doing something wrong here? Best regards.
Thought this might help anyone looking for a way to remove inactive replica sets (spec.replicas set to zero by the deployment). You may want to add something to filter for replica sets actually managed by a deployment, as this might otherwise end up destroying manually created replica sets:
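A sketch of that kind of cleanup, assuming `jq` is available (the filter is illustrative, not the original snippet):

```bash
# Sketch: list replica sets that have been scaled down to zero and print a
# kubectl delete command for each. Review the output, then pipe it to `sh`
# to actually run the cleanup. Consider also filtering on
# .metadata.ownerReferences to skip manually created replica sets.
kubectl get rs --all-namespaces -o json \
  | jq -r '.items[]
      | select(.spec.replicas == 0 and .status.replicas == 0)
      | "kubectl delete rs -n \(.metadata.namespace) \(.metadata.name)"'
```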
This will not delete anything; it will simply output the list of kubectl commands required to perform the cleanup.
You don't need to run any script in order to clean up old replica sets - just set `.spec.revisionHistoryLimit` in the Deployment to the number of old replica sets you want to retain. http://kubernetes.io/docs/user-guide/deployments/#clean-up-policy
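For example, on an existing deployment (a sketch; the name is illustrative):

```bash
# Sketch: keep only the two most recent old replica sets for this deployment
kubectl patch deployment my-app -p '{"spec":{"revisionHistoryLimit":2}}'
```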
Yep, I'm doing that, but I wrote that script to clean up the ones that had already been created before.
I came across this today, just a short heads-up: in @fcvarela's command the select expression probably should look like this: …
Hi
It helped me, thank you very much!!
This did the trick for me, thanks!!!
|
I think I have a similar problem to yours. After a new deployment, the pods attached to the old replica set are deleted, but the replica set itself is not; I end up with a replica set showing 0/0. Did you also have this issue?
I want to replace an application by running a new deployment. The image tag, however, does not change, although the image itself might have (that's the reason for `imagePullPolicy: Always`). For that reason I've added an extra label that changes on every deployment.
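A sketch of that kind of spec, with a `{{VERSION}}` placeholder standing in for the changing label value (all names are illustrative):

```bash
# Sketch: a manifest template whose version label changes on every deploy
# while the image tag stays "latest". The {{VERSION}} placeholder is
# substituted before applying (see the command below). If the changing
# label is also part of spec.selector, the new replica set stops matching
# the old one, which is what leaves the old one orphaned.
cat > deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
        version: "{{VERSION}}"
    spec:
      containers:
      - name: my-app
        image: registry.example.com/my-app:latest
        imagePullPolicy: Always
EOF
```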
When applying this, the deployment is accepted and a new replica set is created. However, the old replica set stays alive as well.
I apply it with the following command in order to replace the variables. Take note of how the version is set to the current date and time.
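A sketch of that kind of substitution, assuming the manifest template above is in `deployment.yaml` with a `{{VERSION}}` placeholder:

```bash
# Sketch: substitute the placeholder with the current date and time,
# then apply the result.
sed "s/{{VERSION}}/$(date +%Y%m%d-%H%M%S)/g" deployment.yaml | kubectl apply -f -
```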
The deployment description shows the labels and selector to be mismatched.
And the rollout history acts like the second deployment never happened.
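The corresponding checks would look something like this (a sketch; names are illustrative):

```bash
# Sketch: inspect the state described above.
kubectl describe deployment my-app          # compare the Labels: and Selector: lines
kubectl get rs -l app=my-app --show-labels  # the old RS still runs alongside the new one
kubectl rollout history deployment/my-app   # only one revision is listed
```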
Am I understanding the `Recreate` strategy incorrectly? Or is this a bug?