Add "resource_name" to scaled_up_gpu_nodes_total and scaled_down_gpu_nodes_total metrics #5518

Merged: 1 commit into kubernetes:master on Feb 22, 2023

Conversation

@kawych (Contributor) commented on Feb 17, 2023

What type of PR is this?

/kind feature

What this PR does / why we need it:

Adds a resource_name field to the scaled_up/down_gpu_nodes_total metrics to differentiate between GPU types, which are represented by different custom resources. Credit to @hbostan for the implementation.

Does this PR introduce a user-facing change?

Add breakdown by custom resource name to GPU-related metrics

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:


@k8s-ci-robot added the do-not-merge/work-in-progress, kind/feature, cncf-cla: yes, size/M, and area/cluster-autoscaler labels on Feb 17, 2023
@kawych marked this pull request as ready for review on February 20, 2023
@k8s-ci-robot removed the do-not-merge/work-in-progress label on Feb 20, 2023
@@ -246,7 +246,7 @@ var (
  Namespace: caNamespace,
  Name: "scaled_up_gpu_nodes_total",
  Help: "Number of GPU nodes added by CA, by GPU name.",
- }, []string{"gpu_name"},
+ }, []string{"resource_name", "gpu_name"},

Collaborator:

Since this is specifically the resource name of a GPU, and will only be set on a GPU scale-up, I'd prefix "resource_name" with "gpu" as well (here and everywhere in this PR). Otherwise the label name is quite ambiguous: CPU and memory are resources as well, and the trigger for most scale-ups, but this label won't be set for them.

@kawych (author):

done
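
For context, here is a minimal sketch of what the updated counter vector and a call site could look like after the rename discussed above. The variable name, the literal namespace value, the final "gpu_resource_name" label name, and the example label values are assumptions for illustration, not the exact code merged in this PR.

```go
package main

import "github.com/prometheus/client_golang/prometheus"

// scaledUpGPUNodes mirrors the definition in the diff above, with the new
// label prefixed with "gpu" per the review comment. The real code uses the
// caNamespace constant rather than a literal string.
var scaledUpGPUNodes = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Namespace: "cluster_autoscaler",
		Name:      "scaled_up_gpu_nodes_total",
		Help:      "Number of GPU nodes added by CA, by GPU name.",
	}, []string{"gpu_resource_name", "gpu_name"},
)

func main() {
	prometheus.MustRegister(scaledUpGPUNodes)

	// A scale-up of a node exposing GPUs through the nvidia.com/gpu custom
	// resource would now be counted under both labels (values are examples).
	scaledUpGPUNodes.WithLabelValues("nvidia.com/gpu", "nvidia-tesla-t4").Inc()
}
```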

@@ -270,7 +270,7 @@ var (
  Namespace: caNamespace,
  Name: "scaled_down_gpu_nodes_total",
  Help: "Number of GPU nodes removed by CA, by reason and GPU name.",
- }, []string{"reason", "gpu_name"},
+ }, []string{"reason", "resource_name", "gpu_name"},

Collaborator:

Do you know if it's safe to add a new label in the middle of the existing ones? (E.g. I could imagine some metric collector treating earlier "gpu_name" values as "resource_name" once the new three-label metrics are emitted.)

@kawych (author):

It's safe: the metric is exposed in the Prometheus format with explicit label names, so metric collectors have no trouble identifying the right label.
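
To illustrate why ordering is not a problem, a small sketch (label names follow the "gpu" prefix suggested above, and the label values are examples only): in client_golang, series are addressed by label name, and the text exposition format spells the names out, so a collector never relies on positional order.

```go
package main

import (
	"os"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/common/expfmt"
)

func main() {
	scaledDown := prometheus.NewCounterVec(prometheus.CounterOpts{
		Name: "scaled_down_gpu_nodes_total",
		Help: "Example only.",
	}, []string{"reason", "gpu_resource_name", "gpu_name"})

	reg := prometheus.NewRegistry()
	reg.MustRegister(scaledDown)

	// Labels are set by name, not by position.
	scaledDown.With(prometheus.Labels{
		"reason":            "underutilized",
		"gpu_resource_name": "nvidia.com/gpu",
		"gpu_name":          "nvidia-tesla-t4",
	}).Inc()

	// The exposition format carries the label names explicitly, e.g.:
	// scaled_down_gpu_nodes_total{gpu_name="nvidia-tesla-t4",gpu_resource_name="nvidia.com/gpu",reason="underutilized"} 1
	mfs, _ := reg.Gather()
	for _, mf := range mfs {
		expfmt.MetricFamilyToText(os.Stdout, mf)
	}
}
```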


Code context:
    // no signs of GPU
    return MetricsNoGPU
    func GetGpuTypeForMetrics(gpuConfig *cloudprovider.GpuConfig, availableGPUTypes map[string]struct{}, node *apiv1.Node, nodeGroup cloudprovider.NodeGroup) (string, string) {

Collaborator:

The name of the function no longer reflects the returned values, and it's hard to figure out what's returned just from reading the signature. Maybe we could make the name more generic (GetGpuInfoForMetrics?), and name the return values instead?

@kawych (author):

done
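
For reference, a sketch of the shape the suggestion points at, a more generic name plus named return values. The package and import lines, the GpuConfig field names, the return-value order, and the body are assumptions for illustration rather than the merged implementation.

```go
package gpu

import (
	apiv1 "k8s.io/api/core/v1"

	"k8s.io/autoscaler/cluster-autoscaler/cloudprovider"
)

// MetricsNoGPU stands in for the existing constant in this package; the value
// here is a placeholder for the sketch.
const MetricsNoGPU = ""

// GetGpuInfoForMetrics returns the GPU resource name and the GPU type to use
// as metric labels. With named return values, the signature documents what the
// two returned strings are without reading the body.
func GetGpuInfoForMetrics(gpuConfig *cloudprovider.GpuConfig, availableGPUTypes map[string]struct{}, node *apiv1.Node, nodeGroup cloudprovider.NodeGroup) (gpuResourceName string, gpuType string) {
	if gpuConfig == nil {
		// There is no sign of GPU on this node.
		return "", MetricsNoGPU
	}
	// Stand-in for the real classification: the actual code validates the GPU
	// type against availableGPUTypes and falls back to generic/unknown markers.
	return string(gpuConfig.ResourceName), gpuConfig.Type
}
```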

Code context:
    return MetricsNoGPU
    func GetGpuTypeForMetrics(gpuConfig *cloudprovider.GpuConfig, availableGPUTypes map[string]struct{}, node *apiv1.Node, nodeGroup cloudprovider.NodeGroup) (string, string) {
    // There is no sign of GPU
    if gpuConfig == nil {

Collaborator:

nit: It took me a bit to figure out that the PR doesn't change behavior for the existing GPU logic. (There is still one difference: this function only looks at capacity, while GetNodeGpuConfig uses NodeHasGpu, which looks at allocatable. But capacity and allocatable should be in sync for GPUs, and allocatable is arguably more correct, so it looks fine to me.) This function could really use a unit test, if you're up for it.

@kawych (author):

This is a little bit tricky: I started writing a test but found out that the previous PR #5459 introduced a "hidden" import cycle (so far the cycle only happens if you use the cloudprovider test package). If you're OK with it, I'd prefer to follow up in the next PR.

Collaborator:

The cycle will probably have to be solved sooner or later, but a follow-up SGTM.
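
As a rough illustration of the kind of unit test suggested above, here is a minimal sketch limited to the nil-GpuConfig path, which should not need the cloudprovider test package that triggers the import cycle mentioned in this thread. The function name, return-value order, and the expected empty resource name are assumptions consistent with the sketch earlier in this conversation, not the merged code.

```go
package gpu

import "testing"

func TestGetGpuInfoForMetricsNoGpu(t *testing.T) {
	// A nil GpuConfig means the node shows no sign of a GPU, so the type is
	// expected to be MetricsNoGPU and (assumed) the resource name to be empty.
	gpuResourceName, gpuType := GetGpuInfoForMetrics(nil, map[string]struct{}{}, nil, nil)

	if gpuType != MetricsNoGPU {
		t.Errorf("expected GPU type %q for a node without GPU, got %q", MetricsNoGPU, gpuType)
	}
	if gpuResourceName != "" {
		t.Errorf("expected empty GPU resource name for a node without GPU, got %q", gpuResourceName)
	}
}
```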

@towca (Collaborator) left a review:

/lgtm
/approve
/hold

I'd still like to see more "gpu" prefixes in some places since the names seem ambiguous without them. Feel free to unhold if you disagree.

Resolved (outdated) review threads: cluster-autoscaler/core/scale_up.go, cluster-autoscaler/metrics/metrics.go

@k8s-ci-robot added the do-not-merge/hold label on Feb 21, 2023
@k8s-ci-robot added the lgtm and approved labels on Feb 21, 2023
@k8s-ci-robot commented:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: kawych, towca

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

Commit: …nodes_total metrics

* Added the new resource_name field to scaled_up/down_gpu_nodes_total, representing the resource name for the GPU.
* Changed metrics registrations to use GpuConfig
@k8s-ci-robot removed the lgtm label on Feb 22, 2023
@towca (Collaborator) commented on Feb 22, 2023

/unhold
/lgtm

@k8s-ci-robot added the lgtm label and removed the do-not-merge/hold label on Feb 22, 2023
@k8s-ci-robot merged commit c611acd into kubernetes:master on Feb 22, 2023