[feature] Allow label value interpolation in metric names #137
I can send a PR if you believe this is a good fit.
The hard part here is that we don't record label values internally -- just label names. This seems like a good use case for the new label selector support, but it needs to finish landing in the custom-metrics-apiserver boilerplate repo first (kubernetes-sigs/custom-metrics-apiserver#35).
What's the status on this? It seems like kubernetes-incubator/custom-metrics-apiserver#35 is stalled? It also seems like the label selectors documented in 1.13 would solve this? https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-more-specific-metrics
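For reference, a sketch of the 1.13-style selector approach mentioned above. The metric name (`sqs_messages_visible`), label (`queue_name`), and object names here are assumptions for illustration, not taken from any real setup:

```yaml
# Hypothetical HPA using a metric selector (autoscaling/v2beta2) to pick out
# one queue's series from a generically-named metric. All names are assumed.
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: worker-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: worker
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Object
    object:
      metric:
        name: sqs_messages_visible
        selector:
          matchLabels:
            queue_name: orders   # selects a single queue's series
      describedObject:
        apiVersion: v1
        kind: Service
        name: queue-exporter
      target:
        type: Value
        value: 100
```

This sidesteps interpolating label values into metric names: the metric keeps its generic name, and the selector narrows it to one queue per HPA.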
@tmatias do you have any documentation you can point to that references how this works? I'm currently trying to figure out how to make individual Sidekiq queue metrics from sidekiq-prometheus-exporter usable in a horizontal pod autoscaler.
Trying to achieve the same thing with Kafka.
@DirectXMan12 Hi, when will this feature be supported by k8s-prometheus-adapter?
@tmatias @wanghaokk: @DirectXMan12 is not actively working on this project any more, so I'll chime in as I am actively working on it now. Regarding the requested feature I see two possibilities for now:
When it comes to native support in prometheus adapter I am hesitant to add this as a feature. The reason actually refers to point 1. above. If your label values are known beforehand, you can declare separate recording rules and reference those in the prometheus adapter config. If your label values are not known beforehand, you effectively imply an infinite number of unknown metrics, which doesn't sound right and is not idiomatic in the prometheus world.
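The known-label-values approach described above can be sketched with recording rules. The source metric name (`aws_sqs_approximate_number_of_messages_visible_average`), the `queue_name` label, and the queue names are assumptions for illustration:

```yaml
# Hypothetical recording rules: one rule per known queue, so each queue gets
# its own distinct metric name that the adapter config can reference directly.
groups:
- name: sqs-queue-metrics
  rules:
  - record: sqs_orders_messages_visible
    expr: aws_sqs_approximate_number_of_messages_visible_average{queue_name="orders"}
  - record: sqs_payments_messages_visible
    expr: aws_sqs_approximate_number_of_messages_visible_average{queue_name="payments"}
```

Each recorded metric can then be exposed through a plain adapter rule, with no label interpolation needed.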
@s-urbaniak Thanks, it seems like one must be certain of which metrics will be exposed by the adapter. Unknown label values are not usually used in our monitoring system, so we would adopt method 1. as you mentioned.
Instead of using the cloudwatch exporter, take a look at this tool: https://github.com/hugobcar/tiamat. It renames SQS metric names for use with prometheus-adapter.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
Rotten issues close after 30d of inactivity. Send feedback to sig-contributor-experience at kubernetes/community. |
@fejta-bot: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
Some exporters, like the cloudwatch one, expose metrics in a generic way: a common metric name plus a meaningful label, where the metric name per se is not meaningful enough to be used effectively. This is not ideal for consumers, but is fairly common among exporters that bridge metrics from existing sources.
Example:
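(The original example appears to be missing here; a representative cloudwatch-exporter-style exposition, with assumed metric and label names, might look like this:)

```
aws_sqs_approximate_number_of_messages_visible_average{queue_name="orders"} 42
aws_sqs_approximate_number_of_messages_visible_average{queue_name="payments"} 7
```

Note that the metric name alone says nothing about which queue is measured; only the `queue_name` label distinguishes the series.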
In this case specifically, we would like to retain per-queue metrics, since having an aggregate over all of them would lose meaning. Doing a 1:1 map to a Kubernetes resource won't work since a single pod/deployment/etc. can be consuming from more than one queue.
Currently, this can be achieved by writing multiple discovery rules, but it would be easier if we could use those labels in figuring out the metric names, something like:
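A hypothetical adapter rule illustrating the idea. The `<<.Labels.queue_name>>` interpolation is invented for this sketch and is not real adapter syntax (today `name.as` only supports regex capture groups from the metric name); the series and label names are also assumptions:

```yaml
# Sketch only: label-value interpolation in the metric name is the feature
# being requested here, not something the adapter currently supports.
rules:
- seriesQuery: 'aws_sqs_approximate_number_of_messages_visible_average{queue_name!=""}'
  resources:
    overrides:
      namespace: {resource: "namespace"}
  name:
    as: "sqs_<<.Labels.queue_name>>_messages_visible"
  metricsQuery: 'sum(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>)'
```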
(not sure we would want to allow full templating capabilities, though).