
[Graceful Shutdown] Pod is removed from the endpoints list for its Service even when "preStop" is defined and has not finished #67592

Closed
crowfrog opened this issue Aug 20, 2018 · 11 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. sig/network Categorizes an issue or PR as relevant to SIG Network. sig/node Categorizes an issue or PR as relevant to SIG Node. triage/unresolved Indicates an issue that can not or will not be resolved.

Comments


crowfrog commented Aug 20, 2018

Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature

What happened:
The Pod is removed from the endpoints list for its Service as soon as the user sends the command to delete the Pod.
See:
https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods
"Termination of Pods"

  1. (simultaneous with 3) When the Kubelet sees that a Pod has been marked as terminating because the time in 2 has been set, it begins the pod shutdown process.
    1. If the pod has defined a preStop hook, it is invoked inside of the pod. If the preStop hook is still running after the grace period expires, step 2 is then invoked with a small (2 second) extended grace period.
    2. The processes in the Pod are sent the TERM signal.
  2. (simultaneous with 3) Pod is removed from endpoints list for service, and are no longer considered part of the set of running pods for replication controllers. Pods that shutdown slowly cannot continue to serve traffic as load balancers (like the service proxy) remove them from their rotations.

What you expected to happen:
The Pod should be removed from the endpoints list for its Service after the "preStop" hook completes and before the TERM signal is sent to the Pod.
I hope the process changes to:

  1. (simultaneous with 3) When the Kubelet sees that a Pod has been marked as terminating because the time in 2 has been set, it begins the pod shutdown process.
    1. If the pod has defined a preStop hook, it is invoked inside of the pod. If the preStop hook is still running after the grace period expires, step 2 is then invoked with a small (2 second) extended grace period.
    2. Pod is removed from endpoints list for service, and are no longer considered part of the set of running pods for replication controllers. Pods that shutdown slowly cannot continue to serve traffic as load balancers (like the service proxy) remove them from their rotations.
    3. The processes in the Pod are sent the TERM signal.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:
With the current termination process, "preStop" is of limited use for graceful shutdown because the Pod stops receiving traffic immediately.
I think that with this change, the Pod would have more control over its own exit behavior.
If a Pod does not define a "preStop" action, the process after the change is the same as before.
If a Pod defines "preStop", it can use "preStop" and "readinessProbe" to decide when it stops receiving traffic and when it has shut down gracefully, and only then would Kubernetes send the TERM signal to the Pod.
This would be very useful for building robust applications that shut down gracefully inside a Pod.
The current termination process also affects third-party ingress gateways such as Istio.
The Istio ingress gateway cannot check the Service endpoints list for every request, so it will route some requests to a terminating Pod before it receives the endpoints update.
That means traffic is impacted when we scale in a Service or delete a Pod, even if a "preStop" hook is defined for the Pod.
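
To illustrate, under the proposed ordering a container could combine the two hooks roughly like this (all names, the image, and the durations below are only illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: graceful-app              # illustrative name
spec:
  terminationGracePeriodSeconds: 60
  containers:
  - name: app
    image: example.com/app:1.0    # placeholder image
    readinessProbe:
      exec:
        # report "not ready" once the drain marker file exists
        command: ["sh", "-c", "[ ! -f /tmp/draining ]"]
      periodSeconds: 5
    lifecycle:
      preStop:
        exec:
          # mark the pod as draining, then wait for in-flight requests to finish;
          # only after this hook returns would the TERM signal be sent
          command: ["sh", "-c", "touch /tmp/draining && sleep 30"]
```

Today the readiness probe stops mattering as soon as deletion starts, because the endpoint is removed immediately; with the proposed ordering it would be the Pod's way of signalling when it has finished draining.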

Environment:

@k8s-ci-robot k8s-ci-robot added needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. kind/feature Categorizes issue or PR as related to a new feature. labels Aug 20, 2018
@neolit123
Member

/sig node

@k8s-ci-robot k8s-ci-robot added sig/node Categorizes an issue or PR as relevant to SIG Node. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Aug 20, 2018
@crowfrog
Author

Any update?

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 29, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 28, 2019
@thockin thockin added sig/network Categorizes an issue or PR as relevant to SIG Network. triage/unresolved Indicates an issue that can not or will not be resolved. labels Mar 7, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.


MartinVUALTO commented Jun 21, 2019

I really wish this had some responses! This is exactly the problem we have. There seems to be no way to prevent failures in a pod that is being deleted: it will always continue to receive some traffic during its shutdown phase, and because TERM is sent at the same time as the preStop hook is executed, the container just exits. If preStop were called before TERM was sent, then adding a sleep into preStop (as described in this blog) would allow time for the pod to stop processing traffic and therefore not generate errors.
This is currently a serious problem for us: we use HPA, and when the load starts to drop we always get a flurry of failures, because the load is spread evenly across all the pods and when HPA removes the unnecessary pods it always generates some failures.
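
For anyone else hitting this, the sleep workaround referenced above is just a preStop exec hook on the container (added under `spec.containers[].lifecycle`); the 15-second value below is arbitrary and needs to stay well under terminationGracePeriodSeconds:

```yaml
    lifecycle:
      preStop:
        exec:
          # give the endpoints controller and kube-proxy time to stop routing
          # traffic here before the container finally receives TERM
          command: ["sh", "-c", "sleep 15"]
```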

@hatimkapadia2030

/reopen
Hello, is there any update on this? We are having a similar issue.

I am running a Couchbase cluster, and in order to gracefully remove a pod from the cluster I need to run a shutdown script.
Since the script depends on resolving the IP using the endpoints, it always fails and there is no graceful shutdown.

Any info would be appreciated.
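
For context, the shutdown script is wired up as a preStop hook on the Couchbase container, roughly as below; the script path is a placeholder, since the actual script was not shared:

```yaml
    lifecycle:
      preStop:
        exec:
          # placeholder path for the Couchbase node-removal script; as described
          # above, it currently runs after the pod has already been dropped from
          # the Service endpoints, so resolving the IP through the endpoints fails
          command: ["/bin/sh", "-c", "/scripts/couchbase-shutdown.sh"]
```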

@k8s-ci-robot
Contributor

@hatimkapadia2030: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen
Hello, is there any update on this? We are having a similar issue.

I am running a Couchbase cluster, and in order to gracefully remove a pod from the cluster I need to run a shutdown script.
Since the script depends on resolving the IP using the endpoints, it always fails and there is no graceful shutdown.

Any info would be appreciated.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.


ottoyiu commented Feb 12, 2020

We're also running into this issue. I think this ticket is worth re-opening. Since we don't have permissions, perhaps we will just re-create it?

/reopen

@k8s-ci-robot
Contributor

@ottoyiu: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

We're also running into this issue. I think this ticket is worth re-opening. Since we don't have permissions, perhaps we will just re-create it?

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
