Deleting a provisioner causes nodes to be cordoned and removed #2466
Comments
This behavior is expected since v0.12.0; see #1934.
Okay. Will try to remember this for next time!
I'm curious to learn more about your experience: why was the provisioner deleted? Wasn't there another provisioner that Karpenter could use to start a new node? Having no provisioners inherently puts the cluster in an undesirable state, because if the node(s) launched from that template die for whatever reason, there are no instructions for what type of node(s) to bring up next.
I'd think there would be reasonable/sane defaults, with best-effort instance types launched to maintain capacity. The node type(s) may or may not match, at which point a metric denoting degradation should be raised.
Especially if those nodes have workloads that have nowhere to go.
This is analogous to just terminating an ASG; you'd see the same behavior there. If you have other Provisioners available, the workloads will have other places to go. If not, I'm wondering why you are removing the Provisioner to begin with.
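A quick pre-deletion sanity check along these lines might look like the following sketch. It assumes `kubectl` access to the cluster and uses a hypothetical Provisioner named `default`; the `karpenter.sh/provisioner-name` node label is the one Karpenter applied in the v0.16 era.

```shell
# List all Provisioners; deleting the last one leaves no fallback
# capacity template for Karpenter to launch replacement nodes from.
kubectl get provisioners

# See which nodes were launched by the Provisioner you intend to delete.
kubectl get nodes -l karpenter.sh/provisioner-name=default

# Only delete once another Provisioner can cover these workloads.
kubectl delete provisioner default
```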
Labeled for closure due to inactivity in 10 days. |
Version
Karpenter: v0.16.1
Kubernetes: v1.0.0
Expected Behavior
Deleting a provisioner does not affect running nodes
Actual Behavior
Deleting a provisioner cordoned and removed nodes
Steps to Reproduce the Problem
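This section was left empty in the report. Based on the title and the discussion, a minimal reproduction might look like the sketch below; the Provisioner name, deployment name, and image are hypothetical, and it assumes a cluster running Karpenter v0.16.x.

```shell
# 1. With a Provisioner (e.g. "default") in place, schedule a workload
#    that forces Karpenter to launch new nodes.
kubectl create deployment inflate \
  --image=public.ecr.aws/eks-distro/kubernetes/pause:3.7 --replicas=5

# 2. Wait for Karpenter-launched nodes to appear.
kubectl get nodes -l karpenter.sh/provisioner-name=default

# 3. Delete the Provisioner.
kubectl delete provisioner default

# 4. Observe: since v0.12.0, deleting a Provisioner cordons and drains
#    the nodes it owns, so they are removed along with it.
kubectl get nodes --watch
```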
Resource Specs and Logs