Update ResourceFlavor site documentation #4500
Conversation
Hi @ChristianZaccaria. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
✅ Deploy Preview for kubernetes-sigs-kueue ready!
Force-pushed: 084fdf4 to c2e2e0f
/ok-to-test

/release-note-edit

/assgin @gabesaba
> **Requires Kubernetes 1.23 or newer**

> This approach may be best for teams that wish to schedule pods onto the appropriate nodes automatically. However, a limitation arises when multiple types of specialized hardware are present, such as two different nvidia.com/gpu resources in the cluster, e.g., T4 and A100 GPUs. The system may not differentiate between them, meaning the pods could be scheduled on either type of hardware.
TBH I tend to ask people to use new lines for each sentence as this gets hard to read in markdown.

Suggested change:

> This approach may be best for teams that wish to schedule pods onto the appropriate nodes automatically.
> However, a limitation arises when multiple types of specialized hardware are present, such as two different nvidia.com/gpu resources present in the cluster, i.e., T4 and A100 GPUs.
> The system may not differentiate between them, meaning the pods could be scheduled on any of both types of hardware.
Yes, I understand your point with Markdown. These docs will ultimately be read from the website, which makes sense to have them in a single paragraph. I could do a 'break' on each sentence while keeping it in the same paragraph, if that suits? Please see the latest commit as an example.
Yea that is great.
> To associate a ResourceFlavor with a subset of nodes of your cluster, you can configure the `.spec.nodeLabels` field with matching node labels that uniquely identify the nodes. If you are using [cluster autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler) (or equivalent controllers), make sure that the controller is configured to add those labels when adding new nodes.
Suggested change:

> To associate a ResourceFlavor with a subset of nodes of your cluster, you can configure the `.spec.nodeLabels` field with matching node labels that uniquely identify the nodes.
> If you are using [cluster autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler) (or equivalent controllers), make sure that the controller is configured to add those labels when adding new nodes.
> A sample ResourceFlavor of this type looks like the following:
>
> ```yaml
> apiVersion: kueue.x-k8s.io/v1beta1
> ```
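For context, a complete ResourceFlavor of this kind might look like the sketch below; the flavor name `spot` and the `instance-type` node label are illustrative assumptions, not values taken from this PR:

```yaml
# Hypothetical ResourceFlavor that targets nodes carrying the
# label instance-type=spot via .spec.nodeLabels.
apiVersion: kueue.x-k8s.io/v1beta1
kind: ResourceFlavor
metadata:
  name: spot            # referenced by name from a ClusterQueue
spec:
  nodeLabels:
    instance-type: spot # must match labels on the target nodes
```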
Should we move this to a sample and use that here? That follows our code in workloads.
Do you mean like this, for example?

kueue/site/content/en/docs/tasks/manage/monitor_pending_workloads/pending_workloads_in_status.md, lines 38 to 42 in 92e194d:

> To install a simple setup of cluster queue, run the following command:
>
> ```shell
> kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/kueue/main/site/static/examples/admin/single-clusterqueue-setup.yaml
> ```
My only concern with this is that the YAML and its fields are then not displayed in the docs. I could add both the YAML and the file that can be applied. Does that sound good?
Something like: https://github.com/kubernetes-sigs/kueue/blob/main/site/content/en/docs/tasks/run/jobsets.md
Where we define the sample yaml in the static folder and import it in the docs. The website renders the YAML but we also have examples users can apply without reading the website.
Ohh I understand. Hmm, for this to be a "working" ResourceFlavor, we would need to know which nodeLabels to use in the YAML, since nodeLabels are used to associate the ResourceFlavor with nodes in the cluster. A generic one wouldn't work, so the user can't just apply the file and see real results. The only way I see is to instruct the user to add the label to a node before applying the file.
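The workflow described here — labeling a node first so the sample flavor matches something real — might be sketched like this; the node name, label, and file name are placeholders, not part of the PR, and the commands assume a running cluster:

```shell
# Hypothetical setup: label an existing node so the flavor's
# nodeLabels select it, then apply the sample manifest.
kubectl label node <node-name> instance-type=spot
kubectl apply -f resource-flavor.yaml
```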
I think adding what you have for the example is sufficient for the website.
Added that now, including the empty flavor. Thanks :)
Force-pushed: 18f71de to 30778a5
LGTM label has been added. Git tree hash: 9cc9eb2207ce1b3225ea13fe5512351e9560a6fe
/assign @gabesaba |
gabesaba left a comment:
Changes look great! Only a couple of nits
> This occurs if the Workload didn't specify the `ResourceFlavor` labels already as part of its nodeSelector.

> For example, for a [batch/v1.Job](https://kubernetes.io/docs/concepts/workloads/controllers/job/), Kueue adds the labels to the `.spec.template.spec.nodeSelector` field.
> This guarantees that the workload's Pods can only be scheduled on the nodes targeted by the flavor that Kueue assigned to the Workload.
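As an illustration of that injection, a Job admitted through a flavor whose `nodeLabels` are `instance-type: spot` would end up with a pod template like the sketch below; the Job name, image, and label values are assumed for illustration:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: sample-job
spec:
  template:
    spec:
      # Added by Kueue on admission, copied from the assigned
      # flavor's .spec.nodeLabels (assumed instance-type: spot):
      nodeSelector:
        instance-type: spot
      containers:
      - name: main
        image: registry.k8s.io/busybox
        command: ["sleep", "5"]
      restartPolicy: Never
```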
nit: consistently capitalize Workload - here and elsewhere. I have a preference for Upper Case
Updated, thank you!
> {{< include "examples/admin/resource-flavor-tolerations.yaml" "yaml" >}}

> When defining a ResourceFlavor as above, you can set the following values:
> - The `.metadata.name` field to reference a ResourceFlavor from a [ClusterQueue](/docs/concepts/cluster_queue) in the `.spec.resourceGroups[*].flavors[*].name` field.
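For context, that name-based reference works like the sketch below, where a ClusterQueue points at a flavor through `.spec.resourceGroups[*].flavors[*].name`; the queue name, flavor name, and quota values are illustrative assumptions:

```yaml
apiVersion: kueue.x-k8s.io/v1beta1
kind: ClusterQueue
metadata:
  name: cluster-queue
spec:
  namespaceSelector: {}
  resourceGroups:
  - coveredResources: ["cpu", "memory"]
    flavors:
    - name: default-flavor   # must equal the ResourceFlavor's .metadata.name
      resources:
      - name: "cpu"
        nominalQuota: 9
      - name: "memory"
        nominalQuota: 36Gi
```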
metadata.name is required, but it sounds optional here (and below in the resource-flavor-taints section).
Perhaps add something about how name is used to uniquely identify a ResourceFlavor (as you did with ClusterQueue)
I've made this more explicit, thank you!
> {{< include "examples/admin/resource-flavor-taints.yaml" "yaml" >}}

> When defining a ResourceFlavor as above, you can set the following values:
> - The `.metadata.name` field to reference a ResourceFlavor from a [ClusterQueue](/docs/concepts/cluster_queue) in the `.spec.resourceGroups[*].flavors[*].name` field.
see comment above
Force-pushed: 30778a5 to 7865c80
/lgtm
LGTM label has been added. Git tree hash: dd5dcc08356b1f5b6a4ead41e43c8e50de1d0b50
I think docs are a super important area for adoption, but often neglected, unfortunately. Thank you for keeping the lights on!

/approve
@mimowo: once the present PR merges, I will cherry-pick it on top of …
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: ChristianZaccaria, mimowo. The full list of commands accepted by this bot can be found here. The pull request process is described here. Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
@mimowo thank you for the continuous support and encouragement! Happy to contribute in any way I can :)
@mimowo: new pull request created: #4795
What type of PR is this?
/kind documentation
What this PR does / why we need it:
Which issue(s) this PR fixes:
Fixes #4499
Special notes for your reviewer:
Does this PR introduce a user-facing change?