
[feature] Allow label value interpolation in metric names #137

Closed
tmatias opened this issue Nov 20, 2018 · 13 comments

Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@tmatias
Contributor

tmatias commented Nov 20, 2018

Some exporters, like the cloudwatch one, expose metrics generically: a common metric name plus a meaningful label, where the metric name per se is not meaningful enough to be used effectively. This is not ideal for consumers, but it is somewhat common in exporters that bridge metrics from existing sources.

Example:

aws_sqs_approximate_number_of_messages_visible_average{...,queue_name="one"} 
aws_sqs_approximate_number_of_messages_visible_average{...,queue_name="another"}

In this case specifically, we would like to retain per-queue metrics, since aggregating over all of them would lose meaning. A 1:1 mapping to a Kubernetes resource won't work either, since a single pod/deployment/etc. can be consuming from more than one queue.

Currently, this can be achieved by writing multiple discovery rules, but it would be easier if we could use those labels when constructing the metric names, something like:

name:
  as: '{{.labels.queue_name}}_queue_depth'

(Not sure we would want to allow full templating capabilities, though.)
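
For reference, the multiple-discovery-rules workaround mentioned above could look roughly like this in the adapter config. This is only a sketch: the resource mapping and metricsQuery are assumptions (the cloudwatch series may not carry a namespace label at all), and the renamed metrics mirror the proposal above.

rules:
# one rule per queue, each pinning a queue_name and renaming the series
- seriesQuery: 'aws_sqs_approximate_number_of_messages_visible_average{queue_name="one"}'
  resources:
    overrides:
      namespace: {resource: "namespace"}   # assumed resource mapping
  name:
    as: "one_queue_depth"
  metricsQuery: 'avg(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>)'
- seriesQuery: 'aws_sqs_approximate_number_of_messages_visible_average{queue_name="another"}'
  resources:
    overrides:
      namespace: {resource: "namespace"}
  name:
    as: "another_queue_depth"
  metricsQuery: 'avg(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>)'

Each additional queue needs another near-identical rule, which is the duplication the proposed label interpolation would avoid.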

@tmatias
Contributor Author

tmatias commented Nov 20, 2018

I can send a PR if you believe this is a good fit.

@DirectXMan12
Contributor

The hard part here is that we don't record label values internally, just label names. This seems like a good use case for the new label selector support, but that needs to finish landing in the custom-metrics-apiserver boilerplate repo first (kubernetes-sigs/custom-metrics-apiserver#35).

@Freyert

Freyert commented Feb 20, 2019

What's the status on this? It seems like kubernetes-incubator/custom-metrics-apiserver#35 is stalled?

It also seems like label selectors documented in 1.13 would solve this? https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-more-specific-metrics
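
For context, the selector route from that walkthrough would look roughly like this with an autoscaling/v2beta2 HPA and an external metric served by the adapter. A sketch only: the deployment name, the backlog threshold, and the assumption that the adapter exposes this series as an external metric are all hypothetical.

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: queue-one-consumer        # hypothetical deployment consuming queue "one"
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: queue-one-consumer
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metric:
        name: aws_sqs_approximate_number_of_messages_visible_average
        selector:
          matchLabels:
            queue_name: "one"     # select a single queue by its label
      target:
        type: AverageValue
        averageValue: "30"        # assumed backlog threshold per replica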

@jtmkrueger

Currently, this can be achieved by writing multiple discovery rules

@tmatias do you have any documentation you can point to that references how this works? I'm currently trying to figure out how to make individual Sidekiq queue metrics from sidekiq-prometheus-exporter usable in a horizontal pod autoscaler.

@agolomoodysaada

Trying to achieve the same thing with Kafka

@wanghaokk

@DirectXMan12 Hi, when will this feature be supported by k8s-prometheus-adapter?
Prometheus metric labels are meaningful in many cases, especially for HPA.

@s-urbaniak
Contributor

@tmatias @wanghaokk: @DirectXMan12 is not actively working on this project anymore, so I'll chime in, as I am actively working on it now. Regarding the requested feature, I see two possibilities for now:

  1. If your label values are known in advance and form a fixed set, you could define a recording rule for each label value and reference those as custom metrics (see the sketch at the end of this comment).
  2. Keep using one metric and specify a metricLabelSelector, as defined in https://github.com/kubernetes/metrics/blob/483643a4d1f8103e9d6c66f0735cdd9fd7542b8a/pkg/apis/custom_metrics/types.go#L87

When it comes to native support in prometheus-adapter, I am hesitant to add this as a feature. The reason refers to point 1. above: if your label values are known in advance, you can declare separate recording rules and reference those in the prometheus-adapter config.

If your label values are not known in advance, you effectively imply an unbounded number of unknown metrics, which doesn't sound right and is not idiomatic in the Prometheus world.
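
A minimal sketch of option 1, assuming the queue names from the example at the top of the issue are the fixed set; the group name and the recorded metric names are placeholders:

groups:
- name: sqs-queue-depth
  rules:
  # one recording rule per known queue, giving each a meaningful metric name
  - record: one_queue_depth
    expr: aws_sqs_approximate_number_of_messages_visible_average{queue_name="one"}
  - record: another_queue_depth
    expr: aws_sqs_approximate_number_of_messages_visible_average{queue_name="another"}

Each recorded series can then be referenced by its new name in a prometheus-adapter rule, as in the discovery-rule sketch earlier in this thread.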

@wanghaokk

@s-urbaniak Thanks, it seems like one must be certain of which metrics will be exposed by the adapter. Unknown label values are not usually used in our monitoring system, so we will adopt method 1 as you mentioned.

@hugobcar

hugobcar commented Nov 9, 2020

Instead of using the cloudwatch exporter, take a look at this tool: https://github.com/hugobcar/tiamat. It renames SQS metric names for use with prometheus-adapter.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 7, 2021
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 9, 2021
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
