If Karpenter deletes a node that never became Ready, in some cases it removes any chance of finding out why that node never became Ready. I'm not sure this is the desired behavior for most cases.
Version
Karpenter: v0.13.2
Kubernetes: v1.22.11-eks-18ef993
Expected Behavior
Karpenter deletes empty nodes with no workloads after creating them, even if the node never becomes Ready.
Actual Behavior
Karpenter keeps trying to schedule pods on the NotReady node, and doesn't delete it even after those pods are eventually scheduled elsewhere.
Steps to Reproduce the Problem
This is a pretty rare problem; I think I've only seen it once and didn't have time to investigate.
This node never became ready:
After Karpenter tried to schedule pods on it for a while, they got scheduled elsewhere:
The only Pending pods belong to DaemonSets targeting that node:
kubectl get pods -A | grep "Pending"
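For reference, a more precise way to surface the same information (assuming a working kubectl context against the affected cluster) is to filter server-side on the pod phase and include the node column:

```shell
# List Pending pods across all namespaces.
# --field-selector filters on the server side instead of grep;
# -o wide adds the NODE column, so Pending DaemonSet pods show
# which node they are bound to.
kubectl get pods -A --field-selector=status.phase=Pending -o wide
```

DaemonSet pods are bound to a specific node at creation, so any Pending DaemonSet pods should show the NotReady node in the NODE column.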
The Karpenter logs contain no mention of the node being empty; they only show it launching the node and assigning many pods to it.
After a manual kubectl delete node:
Resource Specs and Logs
I included all logs I could think of here: https://gist.github.com/ace22b/cdc0f7ff8fcf333d54f1688c9e984255
I am not able to access the node over SSM to get the kubelet log.
Thanks.