Possible bug: deleted Deployment but related Pod was not deleted #83672
Anything else we need to know?:
Details of the problem:
The "aipaas-model serving" namespace's Deployments:
I could not find the related ReplicaSet. The "aipaas-model serving" namespace's ReplicaSets:
From the details above you can see some Pods that do not belong to any Deployment.
Secondly, I have checked the controller manager logs related to pod "callmedad-05ziytkh-serving-v1-689cbd7965-bt9rr".
My guess from that information:
@calmkart Okay. Because the related Deployment resource has already been deleted, only the Pod resource is still visible, but I can give you another Deployment YAML that is similar to the related one.
This is the Pod YAML:
This may look confusing: because we use the Istio service mesh, the YAML includes a sidecar container. If you need any other information, I would be glad to provide it. Thanks.
Could you paste the logs from around 06:09:06?
I'm having the same trouble here (Kubernetes 1.15.3), but in a very specific situation.
But we don't operate on our cluster using kubectl; we use a custom solution that is kind of outdated.
This is the apiserver log, in a custom format, showing what it does:
This is the object before its Deployment's deletion:
This is the object after its Deployment's deletion:
What motivated me to post here is a suspicion triggered by this comment of the OP's: I noticed my Deployments and ReplicaSets are being created by our tool using the apiVersion 'extensions/v1beta1':
I'm guessing the problems we are facing are due to the client side operating on extensions/v1beta1, which somehow triggers aberrant behaviour somewhere in the controller code.
Foreground and Background are the two modes of cascading deletion.
Cascading deletion is turned on by default in the kubectl client, i.e. when you use kubectl to delete an owner, such as:
But if you use the API directly to delete an owner, e.g. with curl, the default option is orphan (for ReplicationController, ReplicaSet, StatefulSet, DaemonSet, and Deployment). So it will delete the Deployment only, not the ReplicaSets/Pods (it only removes the ReplicaSet's metadata.ownerReferences).
You can set the delete options to cascading, like:
Maybe your custom delete tool uses the default mode (orphan); changing the mode to Foreground or Background should fix this.
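For illustration, here is a sketch of the API-level delete request with a cascading `DeleteOptions` body. The server address (via `kubectl proxy`), namespace, and Deployment name are placeholders, and the command is only printed rather than executed, so it runs without a cluster:

```shell
# Sketch: the DELETE request a raw API client must send for cascading
# deletion. SERVER/NS/NAME are hypothetical placeholders.
SERVER="http://localhost:8001"   # e.g. started with `kubectl proxy`
NS="default"
NAME="nginx"

# DeleteOptions.propagationPolicy selects the cascade mode:
#   Orphan     - delete the owner only (the raw-API default described above)
#   Background - delete the owner, then garbage-collect dependents
#   Foreground - delete dependents first, then the owner
BODY='{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'

REQUEST="curl -X DELETE -H 'Content-Type: application/json' -d '${BODY}' ${SERVER}/apis/apps/v1/namespaces/${NS}/deployments/${NAME}"
echo "${REQUEST}"
```

By contrast, `kubectl delete deployment nginx` cascades by default, which matches the behaviour described above.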
@calmkart I'm a little confused. If our client used the default (orphan), then when I deleted a Deployment, none of the ReplicaSets/Pods should have been deleted. But that only happened occasionally; most of the time, when a Deployment was deleted, the ReplicaSet and Pods were deleted at the same time. I'd like to know more details, thanks.