Deleting deployment does not delete pods associated with it #32985
Labels: area/app-lifecycle, area/workload-api/deployment, sig/apps
Kubernetes version (use kubectl version): v1.2.6

Environment:
uname -a: Linux ubuntu-m1 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

What happened:
I created a deployment with 3 pods and tried to delete the deployment using the REST API. The Deployment object and its associated replica sets were deleted, but the output of 'kubectl get pods' showed 3 pods remaining that had been created by that deployment.
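For context, a sketch of the kind of REST call involved. The API path and the insecure localhost port are assumptions for a v1.2.x cluster, not taken from the report; on servers of that era a bare DELETE removed only the Deployment object itself, without the client-side scale-down that `kubectl delete deployment` performed at the time.

```shell
# Sketch, assuming a v1.2.x apiserver reachable on localhost:8080 (insecure port).
# A bare DELETE like this removes only the Deployment object; replica sets and
# pods are cleaned up separately, which is where a race can leave pods behind.
curl -X DELETE \
  "http://localhost:8080/apis/extensions/v1beta1/namespaces/default/deployments/ultraesb-4bb6588f-db05-47b9-af6a-6717baa03dad"
```

If pods are left behind, they can be removed manually with a label selector matching the deployment's pod template, e.g. `kubectl delete pods -l <deployment's selector>` (the selector itself is not given in the report).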
Deployment object name : ultraesb-4bb6588f-db05-47b9-af6a-6717baa03dad
Remaining pod names:
ultraesb-4bb6588f-db05-47b9-af6a-6717baa03dad-1103507196-6zn50
ultraesb-4bb6588f-db05-47b9-af6a-6717baa03dad-1103507196-ms3fz
ultraesb-4bb6588f-db05-47b9-af6a-6717baa03dad-1103507196-nctq2
I checked the kube-scheduler log and noticed that a new replica set (ultraesb-4bb6588f-db05-47b9-af6a-6717baa03dad-1103507196) was being created while the other replica sets for the deployment were being deleted. That replica set was ultimately deleted as well, but the pods it had created were never removed.
What you expected to happen: All pods created by the deployment should be deleted once the deployment is deleted.
How to reproduce it (as minimally and precisely as possible): This is a very rare case; 99% of the time, deleting a deployment works perfectly. The issue occurred when I deleted multiple deployments in quick succession (almost in parallel).
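A sketch of the reproduction sequence described above. The deployment names, namespace, and apiserver address are hypothetical, and the script needs a live cluster exhibiting the race to show anything:

```shell
# Hypothetical sketch: delete several deployments nearly in parallel via the
# REST API, then check whether any of their pods survive. Names and the
# default namespace are assumptions, not from the original report.
for d in test-dep-1 test-dep-2 test-dep-3; do
  curl -s -X DELETE \
    "http://localhost:8080/apis/extensions/v1beta1/namespaces/default/deployments/$d" &
done
wait
sleep 30
kubectl get pods   # pods left over from the deleted deployments indicate the bug
```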
Anything else we need to know: