init draft of desc of resource limits #1117
Conversation
✔️ Deploy Preview for karpenter-docs-prod ready!
🔨 Explore the source changes: 6197665
🔍 Inspect the deploy log: https://app.netlify.com/sites/karpenter-docs-prod/deploys/61e9aaa072011d000881c0b1
😎 Browse the preview: https://deploy-preview-1117--karpenter-docs-prod.netlify.app
Thanks for the feedback, @suket22! I pushed a revised version.
Great feedback, @suket22! I ended up putting the "check current consumption" tip under the main heading.
@@ -47,6 +47,14 @@ spec:
  - key: "karpenter.sh/capacity-type" # If not included, the webhook for the AWS cloud provider will default to on-demand
    operator: In
    values: ["spot", "on-demand"]
# Resource limits constrain the total size of the cluster. |
Suggested change:
- # Resource limits constrain the total size of the cluster.
+ # Resource limits that can be launched by this provisioner.
I'm confused by this suggestion. It seems to be missing something between "limits" and "that can...".
The provisioner spec includes a limits section (`spec.limits.resources`), which constrains the maximum amount of resources that the provisioner will manage.
Presently, Karpenter supports `memory` and `cpu` limits. |
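To make the doc text above concrete, a sketch of what such a limits section might look like in a Provisioner manifest (assuming the `karpenter.sh/v1alpha5` API; the names and values here are illustrative, not defaults):

```yaml
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  limits:
    resources:
      cpu: 1000        # total CPU the provisioner may manage, in cores
      memory: 1000Gi   # total memory, as a BinarySI quantity
```

Once the sum of resources across nodes launched by this provisioner reaches either limit, the provisioner stops launching new capacity.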
Suggested change:
- Presently, Karpenter supports `memory` and `cpu` limits.
+ Karpenter supports `memory` and `cpu` limits.
@@ -0,0 +1,58 @@
---
title: "Set Resource Limits"
Can we include this in the Provisioning.md doc to avoid page sprawl?
I think, right now, we have relatively low page sprawl and (to my understanding) this is an issue customers want to see movement on.
Memory limits are described with a [`BinarySI` value, such as 1000Gi](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-memory).
Karpenter stops allocating resources once at least one resource limit is met or exceeded.
We need a note like:
"Warning: Karpenter provisioning is highly parallel. Because of this, limit checking is eventually consistent, which can result in overrun during rapid scale outs. "
this is an excellent point.
add note re eventually consistent based on feedback from etarn
1817b55 to 9531ba3 (compare)
Co-authored-by: Suket Sharma <suket22@gmail.com>
9531ba3 to 6197665 (compare)
If this is its own task, we might want to consider breaking up the Scheduling task, which contains many tasks of similar sizes all in one doc.
1. Issue, if available:
#1003
2. Description of changes:
Add a new task for setting resource limits.
From standup, @ellistarn indicated there are complexities associated with "eventual consistency" regarding resource limits.
I'm open to moving this under the "run pods" task (as a subpage or new section). What do you think, @chrisnegus?
Requesting review from @suket22.
3. How was this change tested?
Tested on my own cluster (only CPU limits were tested).
4. Does this change impact docs?
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.