
Hard to distinguish which metrics query is in play ... maybe this is a feature request rather than a defect #104

Closed
prageethw opened this issue Feb 4, 2020 · 4 comments · Fixed by #219
Labels: enhancement (New feature or request)

Comments

@prageethw

Expected Behavior

I have multiple external metrics in play, similar to the ones below:

    metric-config.object.istio-requests-error-rate.prometheus/query: |
      sum(rate(istio_requests_total{destination_workload="go-demo-7-primary",
               destination_workload_namespace="go-demo-7", reporter="destination",response_code=~"5.*"}[1m])) 
      / 
      sum(rate(istio_requests_total{destination_workload="go-demo-7-primary", 
               destination_workload_namespace="go-demo-7",reporter="destination"}[1m]) > 0)* 100
      or
      sum(rate(istio_requests_total{destination_workload="go-demo-7-primary", 
               destination_workload_namespace="go-demo-7",reporter="destination"}[1m])) > bool 0 * 100

    metric-config.external.prometheus-query.prometheus/istio-requests-per-replica: |
      sum(rate(istio_requests_total{destination_service_name="go-demo-7",destination_workload_namespace="go-demo-7",
                reporter="destination"}[1m])) 
      /
      count(count(container_memory_usage_bytes{namespace="go-demo-7",pod_name=~"go-demo-7-primary.*"}) by (pod_name))
    metric-config.external.prometheus-query.prometheus/istio-requests-average-resp-time: |
      sum(rate(istio_request_duration_seconds_sum{destination_workload="go-demo-7-primary", reporter="destination"}[1m])) 
      / 
      sum(rate(istio_request_duration_seconds_count{destination_workload="go-demo-7-primary", reporter="destination"}[1m]) > 0)
      or
      sum(rate(istio_request_duration_seconds_count{destination_workload="go-demo-7-primary", reporter="destination"}[1m])) 
      > bool 0

spec:
  maxReplicas: 10
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: go-demo-7-primary
  metrics:
  - type: External
    external:
      metric:
        name: prometheus-query
        selector:
          matchLabels:
            query-name: istio-requests-per-replica
      target:
        type: AverageValue
        value: 5
  - type: External
    external:
      metric:
        name: prometheus-query
        selector:
          matchLabels:
            query-name: istio-requests-average-resp-time
      target:
        type: Value
        value: 100m
  - type: Object
    object:
      metric:
        name: istio-requests-error-rate
      describedObject:
        apiVersion: v1 # make sure you check the API version of the targeted resource using a get command.
        kind: Pod # note: Pod can be used as the resource kind for kube-metrics-adapter.
        name: go-demo-7-primary
      target:
        type: Value
        value: 5

Then, when I describe the HPA, I would expect to be able to distinguish each metric in play, as below:

  "istio-requests-per-replica" (target value):                                    0 / 5
  "istio-requests-average-resp-time" (target value):                                    0 / 100m
  "istio-requests-error-rate" on Pod/go-demo-7-primary (target value):  0 / 5

Actual Behavior

Instead, I see the output below in the HPA, which makes it very hard to tell which external metric is in play:

Metrics:                                                                ( current / target )
  "prometheus-query" (target value):                                    0 / 5
  "prometheus-query" (target value):                                    0 / 100m
  "istio-requests-error-rate" on Pod/go-demo-7-primary (target value):  0 / 5

Steps to Reproduce the Problem

1. Install kube-metrics-adapter v0.1.1
2. Create an HPA with the above metrics
3. Describe the HPA (a command sketch follows this list)
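
For reference, a rough sketch of the commands behind steps 2 and 3 (the manifest filename, HPA name, and namespace are assumptions, not taken from the report):

    # Hypothetical commands; manifest name, HPA name, and namespace are assumptions
    kubectl apply -f go-demo-7-hpa.yaml -n go-demo-7
    kubectl describe hpa go-demo-7 -n go-demo-7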

Specifications

  • Version:
    --set image.repository=registry.opensource.zalan.do/teapot/kube-metrics-adapter \
    --set image.tag=v0.1.0

  • Platform:
    AWS KOPS
  • Subsystem:
@mikkeloscar (Contributor)

This is a problem I never considered, but I can totally see how it's not ideal to have the same query name reused for multiple metrics.

I would consider it a feature to change/extend the metrics definition so that you can give the metrics custom names. In the past we were limited to the metricName as an identifier because we were using older autoscaling API versions, but now that autoscaling/v2beta2 is the default we can take advantage of labels on all the metrics.

I would propose we add something like type: prometheus-query as a label and then let users define a custom metric.name.
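
As a rough sketch of what that proposal could look like in the HPA spec (the label and field names below are illustrative assumptions, not the implemented design):

    # Hypothetical sketch of the proposed custom naming; names here are assumptions
    metrics:
    - type: External
      external:
        metric:
          name: istio-requests-per-replica   # user-chosen name instead of the generic "prometheus-query"
          selector:
            matchLabels:
              type: prometheus-query         # proposed label identifying the collector
        target:
          type: AverageValue
          averageValue: "5"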

@mikkeloscar added the enhancement label on Feb 4, 2020
@prageethw (Author)

Thanks @mikkeloscar

@prageethw (Author)

@mikkeloscar any idea whether this feature will be implemented at all?

@mikkeloscar (Contributor)

@prageethw I have it on my todo list, but unfortunately I haven't gotten around to implementing it yet. It's not forgotten, but it will take some time before I get to it. I'll also happily review PRs adding support :)
