
Tight retry loops should not cause cascading failure of the cluster #74405

Open
liurd opened this issue Feb 22, 2019 · 22 comments
Labels
area/reliability kind/feature Categorizes issue or PR as related to a new feature. lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. sig/apps Categorizes an issue or PR as relevant to SIG Apps. sig/architecture Categorizes an issue or PR as relevant to SIG Architecture. sig/scheduling Categorizes an issue or PR as relevant to SIG Scheduling.

Comments


liurd commented Feb 22, 2019

What happened:
Problem found by accident during cluster stability testing. The application consumes nearly 7 MB of memory per pod, but the Helm chart set the deployment's resource limits too low - see the snippet:

```yaml
containers:
  - # ...
    resources:
      limits:
        cpu: 200m
        memory: 4Mi
      requests:
        cpu: 100m
        memory: 4Mi
```

A pod was started; once initialized, it began consuming more memory than the limit allowed (7 MB > 4 MB), so the system killed it.
This happened fast enough to crash the hosting VM, and with many replicas all the worker nodes crashed within a few minutes.

Impact:
A misconfiguration like this can crash the whole cluster.

What needs to be enhanced in Kubernetes:
Here's the problem: we have no control over the chart contents in this area, and a logic error like this can bring down a whole cluster. Some mechanism must be introduced that prevents the system from entering such a tight loop of killing and restarting pods whose memory limits are set incorrectly.
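One partial guardrail that exists today (not a complete fix, since the cluster cannot know an application's real footprint) is a namespace-level LimitRange that rejects containers whose memory limit falls below a floor chosen by the cluster operator. A minimal sketch; the object name and the 32Mi floor are illustrative values, not anything from this report:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: memory-floor   # hypothetical name
  namespace: default
spec:
  limits:
    - type: Container
      min:
        memory: 32Mi   # containers requesting/limiting less than 32Mi are rejected
```

With such a LimitRange in place, the 4Mi limit from the chart above would be rejected at admission time rather than producing an OOM-kill loop at runtime.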

Environment:
Kubernetes version (use kubectl version): v1.12.2.
This is likely a common issue regardless of release.

/sig node
/kind feature
/sig architecture
/sig scheduling
/sig cluster-lifecycle

@liurd liurd added the kind/feature Categorizes issue or PR as related to a new feature. label Feb 22, 2019
@k8s-ci-robot k8s-ci-robot added needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. sig/node Categorizes an issue or PR as relevant to SIG Node. sig/architecture Categorizes an issue or PR as relevant to SIG Architecture. sig/scheduling Categorizes an issue or PR as relevant to SIG Scheduling. sig/cluster-lifecycle Categorizes an issue or PR as relevant to SIG Cluster Lifecycle. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Feb 22, 2019
@bgrant0607 bgrant0607 added sig/apps Categorizes an issue or PR as relevant to SIG Apps. sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. area/reliability sig/architecture Categorizes an issue or PR as relevant to SIG Architecture. and removed sig/architecture Categorizes an issue or PR as relevant to SIG Architecture. sig/cluster-lifecycle Categorizes an issue or PR as relevant to SIG Cluster Lifecycle. sig/scheduling Categorizes an issue or PR as relevant to SIG Scheduling. labels May 3, 2019
@bgrant0607 bgrant0607 changed the title Memory limits incorrectly set in a helm chart lead to cluster crash Tight restart loops should not cause cascading failure of the cluster May 3, 2019
@k8s-ci-robot k8s-ci-robot added sig/scheduling Categorizes an issue or PR as relevant to SIG Scheduling. sig/cluster-lifecycle Categorizes an issue or PR as relevant to SIG Cluster Lifecycle. labels May 3, 2019
@bgrant0607 bgrant0607 removed the sig/cluster-lifecycle Categorizes an issue or PR as relevant to SIG Cluster Lifecycle. label May 3, 2019
@bgrant0607 (Member)

/remove-sig cluster-lifecycle

@k8s-ci-robot (Contributor)

@bgrant0607: Those labels are not set on the issue: sig/cluster-lifecycle

In response to this:

/remove-sig cluster-lifecycle

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@bgrant0607 bgrant0607 changed the title Tight restart loops should not cause cascading failure of the cluster Tight retry loops should not cause cascading failure of the cluster May 4, 2019
@k8s-ci-robot k8s-ci-robot added the sig/cluster-lifecycle Categorizes an issue or PR as relevant to SIG Cluster Lifecycle. label May 4, 2019
@bgrant0607 (Member)

/remove-sig cluster-lifecycle

@k8s-ci-robot k8s-ci-robot removed the sig/cluster-lifecycle Categorizes an issue or PR as relevant to SIG Cluster Lifecycle. label May 4, 2019
@bgrant0607 bgrant0607 added the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label May 4, 2019
@bgrant0607 (Member)

Ref #2529

@bgrant0607 (Member)

@liurd

Thanks for the report.

Was it a Deployment controller in the Helm chart?

After a short time, OOMing containers should have resulted in CrashLoopBackoff. Did you observe that?

Was the node considered unready at some point? Was that because its local disk filled up? Something would have to have caused the pods to be killed so they could be replaced by their controller and rescheduled.

By "crash of a whole cluster", did your etcd fill up with pods and/or events, or was apiserver DoSed with requests, or both?

What is your pod GC threshold set to?

Did you set ResourceQuota for pods and events in your namespaces?
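For context on that last question: ResourceQuota can cap object counts per namespace, which bounds how far a restart loop can flood etcd with pod and Event objects. A minimal sketch; the name and numbers are illustrative, and the `count/<resource>` syntax assumes Kubernetes 1.9+ generic object-count quota:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts   # hypothetical name
  namespace: default
spec:
  hard:
    pods: "50"            # at most 50 pods in this namespace
    count/events: "1000"  # cap Event objects so hot loops can't fill etcd
```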

@bgrant0607 (Member)

Other refs:
#32837

cc @kow3ns

It looks like it was decided at some point (#35342) to leave broken pods/containers on nodes in a "waiting" state to avoid controller hot loops. This makes it hard for clients to figure out that something is wrong and what to do about it:
#74979 (comment)

@bgrant0607 (Member)

Another ref: #76370

@bgrant0607 (Member)

Other possible scenarios may be in https://github.com/hjacobs/kubernetes-failure-stories

@bgrant0607 (Member)

This is also a problem for conformance tests and for other clients trying to determine success or failure of workloads:
#75324

@dims (Member) commented May 8, 2019

long-term-issue (note to self)

@bgrant0607 (Member)

@liurd

The controller in question was a Deployment?

Do you know what specifically caused the VM to crash? The Kubelet should have put the pods into CrashLoopBackoff, as shown here: http://cloudgeekz.com/1605/kubernetes-application-oomkilled.html

When you mentioned "whole cluster crash", did you also mean the control plane (apiserver) died, or just all of the worker nodes? Approximately how many nodes were in the cluster?

Which Kubernetes distribution or service were you using, or how did you create the cluster?

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 7, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Sep 7, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot (Contributor)

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@dims (Member) commented Oct 7, 2019

/reopen
/remove-lifecycle rotten

@k8s-ci-robot (Contributor)

@dims: Reopened this issue.

In response to this:

/reopen
/remove-lifecycle rotten

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot reopened this Oct 7, 2019
@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Oct 7, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 5, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 4, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot (Contributor)

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@bgrant0607 bgrant0607 added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels Mar 5, 2020
@bgrant0607 bgrant0607 reopened this Mar 5, 2020
@ffromani (Contributor)

This issue still seems very relevant, but I can't find actionable items on the node side. Please add sig/node back once we have a clearer picture and items we can work on.
/remove-sig node

@k8s-ci-robot k8s-ci-robot removed the sig/node Categorizes an issue or PR as relevant to SIG Node. label Jun 25, 2021
Projects
Status: Needs Triage
Development

No branches or pull requests

6 participants