
Karpenter not replacing a node if you created daemonset and node doesn't have enough capacity #4242

Closed
rr-krupesh-savaliya opened this issue Jul 10, 2023 · 5 comments
Labels
lifecycle/stale · question (Further information is requested)

Comments

@rr-krupesh-savaliya

Description

Observed Behavior:
If you create a DaemonSet and the currently running node does not have enough capacity to run its pod, Karpenter will not replace the node.

Expected Behavior:
If you create a DaemonSet and the currently running node does not have enough capacity to run its pod, Karpenter should create a new node with the required resource capacity.

Reproduction Steps (Please include YAML):
Apply a test DaemonSet with resource-intensive requests, such as a large memory or CPU requirement, to a cluster whose currently running node lacks the capacity to accommodate that pod.
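
A minimal reproduction sketch (the name capacity-test and the request sizes are illustrative, chosen so the pod cannot fit on an already-busy node):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: capacity-test              # illustrative name
spec:
  selector:
    matchLabels:
      app: capacity-test
  template:
    metadata:
      labels:
        app: capacity-test
    spec:
      containers:
      - name: pause
        image: registry.k8s.io/pause:3.9
        resources:
          requests:
            cpu: "8"               # deliberately larger than the free capacity on existing nodes
            memory: 16Gi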

Versions:

  • Chart Version: v0.28
  • Kubernetes Version (kubectl version): 1.24

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
@rr-krupesh-savaliya added the "bug (Something isn't working)" label on Jul 10, 2023
@engedaam
Contributor

This is a common scenario that comes up. We normally recommend using a priority class to enable replacement of the nodes: https://karpenter.sh/docs/faq/#when-deploying-an-additional-daemonset-to-my-cluster-why-does-karpenter-not-scale-up-my-nodes-to-support-the-extra-daemonset
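
A sketch of that recommendation (the PriorityClass name is illustrative): create a PriorityClass and reference it from the DaemonSet's pod template via priorityClassName, so the DaemonSet pods can preempt lower-priority pods when a node is full.

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: daemonset-priority          # illustrative name
value: 1000000                      # higher value = scheduled ahead of / preempts lower-priority pods
globalDefault: false
description: "Lets DaemonSet pods preempt lower-priority pods when a node is full"

Then, in the DaemonSet's pod template:

  template:
    spec:
      priorityClassName: daemonset-priority   # reference the PriorityClass above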

@engedaam added the "question (Further information is requested)" label and removed the "bug (Something isn't working)" label on Jul 10, 2023
@github-actions
Contributor

This issue has been inactive for 14 days. StaleBot will close this stale issue after 14 more days of inactivity.

@underrun

underrun commented Aug 3, 2023

When the DaemonSet that needs placement has a priority equivalent to the other pods (cluster-critical in my case), Karpenter still will not create VMs large enough to place a pending DaemonSet pod. So if your minimum set of required DaemonSet pods can't schedule, Karpenter can end up with pending DaemonSet pods that never schedule.

The workaround is to exclude instance types that are too small for your workload, but this isn't ideal; it would be better if Karpenter could handle this.
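
To make that workaround concrete, a sketch of a provisioner requirement that excludes small instances (assuming the AWS provider's karpenter.k8s.aws/instance-size label; the excluded values are illustrative):

requirements:
- key: karpenter.k8s.aws/instance-size
  operator: NotIn
  values: ["nano", "micro", "small", "medium"]   # sizes assumed too small to fit all DaemonSet pods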

@jonathan-innis
Contributor

the workaround is to exclude instances that are too small for your workload but this isn't ideal

Agreed that this isn't ideal. Karpenter currently supports the Gt and Lt operators, which let you set an integer lower (or upper) bound for instances provisioned by the provisioner. So you could also sum up all of your DaemonSet resources and then set a requirement that establishes that minimum bar, like:

requirements:
- key: karpenter.k8s.aws/instance-cpu
  operator: Gt
  values: ["16"]        # more than 16 vCPUs (Gt is strictly greater than)
- key: karpenter.k8s.aws/instance-memory
  operator: Gt
  values: ["61440"]     # more than 61440 MiB (60 GiB)

@jonathan-innis
Contributor

Looks to me like this is a duplicate of kubernetes-sigs/karpenter#731, so I'm going to close this. Please +1 that issue if you are interested in us prioritizing that feature.
