Pod is removed from store but the containers are not terminated #88613
Comments
/sig api-machinery
/sig node
/remove-sig api-machinery
Is there any update on this issue?
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
/remove-lifecycle stale
Can we assign a SIG to look into this?
Well, /cc @kubernetes/sig-node-bugs
@ialidzhikov: Reiterating the mentions to trigger a notification. In response to this: "Well, /cc @kubernetes/sig-node-bugs"
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/priority important-longterm
Hi! I'm having a similar problem here. I have set "activeDeadlineSeconds" on a Pod spec. When the container runs longer than that time limit, the Pod's phase goes to Failed with reason DeadlineExceeded, but the container itself keeps running. This happens on both standalone and clustered setups. My pod setup:
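(The commenter's actual manifest was not captured in this scrape. A minimal hypothetical pod that exercises the same behavior might look like the sketch below; the name, image, and command are illustrative, not from the original report.)

```yaml
# Hypothetical repro pod, not the commenter's original manifest.
apiVersion: v1
kind: Pod
metadata:
  name: deadline-demo            # illustrative name
spec:
  activeDeadlineSeconds: 30      # pod should go Failed/DeadlineExceeded after ~30s
  terminationGracePeriodSeconds: 3600
  restartPolicy: Never
  containers:
  - name: sleeper
    image: busybox               # illustrative image
    command: ["sh", "-c", "sleep 86400"]
```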
Status of pod:
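(The status block itself was lost in the scrape; based on the values reported in the issue body, it would contain something like the fragment below. The message line is an assumption.)

```yaml
# Illustrative fragment, reconstructed from values cited in this thread.
status:
  phase: Failed
  reason: DeadlineExceeded
  message: Pod was active on the node longer than the specified deadline  # assumed wording
```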
More info:
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
Reproduced at master. By the way, is it a valid behavior when using activeDeadlineSeconds? /assign
The problem is that a pod, after activeDeadlineSeconds, goes into the Failed phase: kubernetes/pkg/kubelet/kubelet_pods.go, lines 1469 to 1476 at 525b8e5
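(The embedded snippet did not survive the scrape. From that era of the kubelet, the referenced lines sit in generateAPIPodStatus, where internal pod sync handlers, including the active-deadline handler, can force the API phase to Failed. The sketch below is paraphrased, not a verbatim quote of commit 525b8e5.)

```go
// Paraphrased sketch of the logic in kubelet_pods.go (generateAPIPodStatus).
// Each PodSyncHandler (e.g. the activeDeadlineHandler) is asked whether the
// pod should be evicted; on eviction the reported status is forced to Failed.
for _, podSyncHandler := range kl.PodSyncHandlers {
	if result := podSyncHandler.ShouldEvict(pod); result.Evict {
		s.Phase = v1.PodFailed
		s.Reason = result.Reason // "DeadlineExceeded" for the active-deadline handler
		s.Message = result.Message
		break
	}
}
```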
https://kubernetes.io/docs/concepts/workloads/pods/_print/#pod-phase
Maybe we need to redefine the Failed phase.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
/unassign (lack of resources...)
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
/remove-area docker
We don't directly integrate with Docker any more.
/assign
I confirmed that this issue is addressed at master. I guess #108366 fixed this. You can update Kubernetes to v1.24+ to address this issue. Let me know if there is still an issue. /close
@gjkim42: Closing this issue. In response to this: "/close"
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
What happened:
Pod is removed from store but the associated containers can run on the Node for a very long time.
What you expected to happen:
I would expect consistent behaviour: when a Pod is removed from the store, the associated containers should be terminated.
How to reproduce it (as minimally and precisely as possible):
1. Create a Pod with .spec.activeDeadlineSeconds=30 and a long .spec.terminationGracePeriodSeconds.
2. Ensure that after 30s (.spec.activeDeadlineSeconds) the Pod ends up with .status.phase=Failed and .status.reason=DeadlineExceeded, and that the container receives the SIGTERM signal at this point in time.
3. Delete the Pod after it is DeadlineExceeded. Ensure that the deletion completes right away and the pod is removed from the store.
4. Ensure that the associated containers continue to run on the Node until .spec.terminationGracePeriodSeconds has passed (see the command sketch below).
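A hedged sketch of these steps as shell commands; the manifest file and pod name refer to the hypothetical deadline-demo pod sketched earlier, and crictl assumes direct access to the node's container runtime:

```sh
# 1. Create the pod (activeDeadlineSeconds=30, long terminationGracePeriodSeconds).
kubectl apply -f deadline-demo.yaml

# 2. After ~30s, expect "Failed/DeadlineExceeded".
kubectl get pod deadline-demo -o jsonpath='{.status.phase}/{.status.reason}'

# 3. Delete the pod; with this bug the deletion completes right away.
kubectl delete pod deadline-demo

# 4. On the node, the container can still be running until the
#    termination grace period elapses.
crictl ps | grep sleeper
```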
Anything else we need to know?:
Environment:
- Kubernetes version (use kubectl version): v1.15.10
- OS (e.g: cat /etc/os-release):
- Kernel (e.g. uname -a):