This issue was moved to a discussion. You can continue the conversation there.
Cluster creating hundreds of pods because node is down #6303
Comments
You didn't fill out the issue template, so I'm not sure what version of K3s you're working with or what your cluster configuration is. It is not expected that the deployment controller would continue to create pods when there is no node available to schedule them on, or when a node does become available. Are you using an autoscaler that scaled up the deployment replica count in an attempt to create pods? Can you post more information on what specifically you're seeing?
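The expectation stated above, that a deployment controller only creates enough pods to reach the desired replica count rather than piling up hundreds of them, can be illustrated with a toy reconcile loop. This is a simplified sketch, not the actual Kubernetes controller code; all names here are illustrative:

```python
# Toy sketch of a Deployment-style reconcile step. The controller compares
# the desired replica count against the pods that currently exist and only
# creates (or deletes) the difference, so on its own it should never
# accumulate hundreds of pods. Illustrative only; the real Kubernetes
# Deployment/ReplicaSet controllers are far more involved.

def reconcile(desired_replicas, existing_pods):
    """Return (number of pods to create, names of pods to delete)."""
    # Pods already shutting down don't count toward the desired total.
    alive = [p for p in existing_pods if p["phase"] != "Terminating"]
    to_create = max(0, desired_replicas - len(alive))
    to_delete = max(0, len(alive) - desired_replicas)
    return to_create, [p["name"] for p in alive[:to_delete]]

pods = [{"name": "runner-a", "phase": "Running"}]
creates, deletes = reconcile(3, pods)
print(creates, deletes)  # → 2 []
```

Note that each pass converges on the desired count; hundreds of pods would only appear if something (an autoscaler, or repeated eviction-and-replacement cycles) kept churning the pod set.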
k3s v1.24.4+k3s1. The most recent app where I have this issue is a standard GitLab Runner deployment; I had the same situation in another deployment before, where I basically used the same steps. Here's the deployment (I had to redact most info):

```yaml
apiVersion: apps/v1
```
It doesn't sound like this is a problem with k3s or Rancher then, but rather just the behavior of Kubernetes itself when you configure such a Deployment?
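One piece of stock Kubernetes behavior that can produce this churn: every pod gets default `NoExecute` tolerations for the `node.kubernetes.io/not-ready` and `node.kubernetes.io/unreachable` taints with `tolerationSeconds: 300`, so when the node goes down, pods are evicted after about five minutes and the Deployment immediately creates replacements (which, with a node selector, sit Pending or cycle again). A sketch of lengthening that window on the pod template; the value here is illustrative:

```yaml
# Illustrative pod-template fragment: keep pods bound to an unreachable
# node longer before taint-based eviction kicks in and the Deployment
# replaces them.
spec:
  template:
    spec:
      tolerations:
        - key: node.kubernetes.io/unreachable
          operator: Exists
          effect: NoExecute
          tolerationSeconds: 3600   # 1h instead of the 300s default
        - key: node.kubernetes.io/not-ready
          operator: Exists
          effect: NoExecute
          tolerationSeconds: 3600
```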
I'm going to convert this to a discussion as this seems like more of a question than a bug report.
Not sure if this is the right place to ask this, or if it belongs in Rancher.
I have a k3s cluster with some nodes, and a deployment of an app that I configured to schedule only on one particular node. I used Rancher for this (Deployment > Config > Node Scheduling > Run in specific node).
The thing is, that particular node went down, and when it came back up the cluster attempted to create hundreds of pods. The cluster is overflowing with terminating pods, hundreds or thousands of them at one point.
Is there a way to make it so the cluster just doesn't try to create pods while the available machines are down?
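For context, the Rancher "run in specific node" option roughly corresponds to a `nodeSelector` on the Deployment's pod template. With a manifest like the sketch below, pods that cannot be placed simply sit in Pending while the node is gone rather than being scheduled elsewhere. The names, labels, and node hostname here are hypothetical placeholders, not taken from the redacted deployment above:

```yaml
# Minimal sketch of a Deployment pinned to a single node via nodeSelector.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitlab-runner            # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gitlab-runner
  template:
    metadata:
      labels:
        app: gitlab-runner
    spec:
      nodeSelector:
        kubernetes.io/hostname: worker-1   # hypothetical node name
      containers:
        - name: runner
          image: gitlab/gitlab-runner:latest
```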