
Prometheus Receiver: scraping metrics from multiple targets that emits metrics of the same name and label keys #4986

Closed
arpitjindal97 opened this issue Sep 14, 2020 · 11 comments · Fixed by #11463
Labels
comp:prometheus Prometheus related issues

Comments

@arpitjindal97
Contributor

Reopen of open-telemetry/opentelemetry-collector#1076

This issue is not resolved and can still be reproduced in the latest version.

@liamawhite

arpitjindal97 changed the title from "Problem when scraping metrics from multiple targets that emits metrics of the same name and labelkeys" to "Prometheus scraping metrics from multiple targets that emits metrics of the same name and label keys" on Sep 14, 2020
@liamawhite
Contributor

Does your repro differ from the example in open-telemetry/opentelemetry-collector#1076?

@arpitjindal97
Contributor Author

arpitjindal97 commented Sep 14, 2020

I'm not sure which repro you're referring to, but here are simple steps to reproduce the same error on your end:

  • Run any application with more than one instance on different ports (e.g. Grafana)
  • Configure the OpenTelemetry Collector to scrape both instances

In the receiver's prometheus config, scrape both targets and apply a label with the same key but different values; a minimal sketch of such a config follows (ports and label values are illustrative):
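
    # Illustrative sketch only, not the full config from this thread
    receivers:
      prometheus:
        config:
          scrape_configs:
          - job_name: "apps"
            scrape_interval: 5s
            static_configs:
            - targets:
              - localhost:3000
              labels:
                group: 'instance_1'
            - targets:
              - localhost:3001
              labels:
                group: 'instance_2'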

The problem is exactly the same as in this gist: https://gist.github.com/liamawhite/bc957be682fc3ae5558ef5ab1636858f

@arpitjindal97
Contributor Author

@liamawhite I'm really stuck because of this error. Please suggest a hotfix or workaround.

@nilebox
Member

nilebox commented Sep 21, 2020

@arpitjindal97 Could you share a configuration example please (if it doesn't contain sensitive information)?

I'm not sure what the root cause of this bug is specifically (probably the internal cache uses a map keyed on labels), but you shouldn't have multiple applications producing the same set of metric labels anyway, as this may cause issues down the pipeline for batching, aggregation, etc. The collector will treat metrics with identical labels as a single metric, which may lead to incorrect aggregations. @bogdandrutu could you confirm this please?

So ideally you'd want to add instance-specific labels to your metrics as described in open-telemetry/opentelemetry-collector#1076 (comment).

This can also be solved on the Prometheus config side; e.g. for Kubernetes Pods you may define additional labels kubernetes_namespace and kubernetes_pod_name that are unique per endpoint:

    receivers:
      prometheus:
        config:
          scrape_configs:
            ...
            relabel_configs:
            - source_labels: [__meta_kubernetes_namespace]
              action: replace
              target_label: kubernetes_namespace
            - source_labels: [__meta_kubernetes_pod_name]
              action: replace
              target_label: kubernetes_pod_name
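
With such relabeling in place, otherwise-identical series from two pods carry distinguishing labels, e.g. (illustrative series, not real output):

    go_goroutines{kubernetes_namespace="default",kubernetes_pod_name="grafana-z1-abc12"} 36
    go_goroutines{kubernetes_namespace="default",kubernetes_pod_name="grafana-z2-def34"} 37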

I suppose we could try to detect this situation and at least make the Prometheus receiver emit an error suggesting that the user resolve the issue by adding unique labels.

@nilebox
Member

nilebox commented Sep 22, 2020

Sorry, I read the issue open-telemetry/opentelemetry-collector#1076 more carefully, and apparently it complains about the behavior of the Prometheus Exporter (i.e. when the collector exposes collected metrics via a Prometheus endpoint), not the Prometheus Receiver (when the collector collects metrics from other applications by scraping their Prometheus endpoints).

@arpitjindal97 Based on your comments above, I think you're having an issue with the Prometheus Receiver, so it's unrelated to open-telemetry/opentelemetry-collector#1076?
If so, please see the comment above (https://github.com/open-telemetry/opentelemetry-collector/issues/1774#issuecomment-696400632) for suggestions on resolving your issue.

The problem is exactly the same as in this gist: https://gist.github.com/liamawhite/bc957be682fc3ae5558ef5ab1636858f

As I described above, your issue seems to be completely different from that example, so please provide a specific example reproducing it if you believe there is a bug in the Prometheus Receiver that needs to be fixed.

nilebox changed the title from "Prometheus scraping metrics from multiple targets that emits metrics of the same name and label keys" to "Prometheus Receiver: scraping metrics from multiple targets that emits metrics of the same name and label keys" on Sep 22, 2020
@arpitjindal97
Contributor Author

arpitjindal97 commented Sep 22, 2020

Local Environment

config.yml

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:55680
  hostmetrics:
    collection_interval: 30s
    scrapers:
      cpu:
      memory:
      disk:
  prometheus:
    config:
      scrape_configs:
      - job_name: "monitoring"
        scrape_interval: 5s
        static_configs:
        - targets: 
          - localhost:8888
          labels:
            group: 'opentelemetry'
        - targets:
          - localhost:3000
          labels:
            group: 'grafana_z1'
        - targets:
          - localhost:3001
          labels:
            group: 'grafana_z2'
exporters:
  prometheus:
    endpoint: "0.0.0.0:8889"
    namespace: "default"
service:
  pipelines:
      metrics:
          receivers: [prometheus,otlp,hostmetrics]
          exporters: [prometheus]

otelcol

Download otelcol 0.10.0 from the releases page and run it with the above configuration:

./otelcol_darwin_amd64 --config config.yml

Grafana instance 1

docker run -d --name=grafana_z1 -p 3000:3000 grafana/grafana

Grafana instance 2

docker run -d --name=grafana_z2 -p 3001:3000 grafana/grafana

Navigate to localhost:8889/metrics

Sometimes you will see the metrics below; note that of the two Grafana instances, only the grafana_z2 series are present:

# HELP default_go_goroutines Number of goroutines that currently exist.
# TYPE default_go_goroutines gauge
default_go_goroutines{group="grafana_z2"} 37
# HELP default_go_info Information about the Go environment.
# TYPE default_go_info gauge
default_go_info{group="grafana_z2",version="go1.14.4"} 1
# HELP default_go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE default_go_memstats_alloc_bytes gauge
default_go_memstats_alloc_bytes{group="grafana_z2"} 1.2500592e+07
# HELP default_go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE default_go_memstats_alloc_bytes_total counter
default_go_memstats_alloc_bytes_total{group="grafana_z2"} 6.390216e+06
# HELP default_go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE default_go_memstats_buck_hash_sys_bytes gauge
default_go_memstats_buck_hash_sys_bytes{group="grafana_z2"} 1.492336e+06
# HELP default_go_memstats_frees_total Total number of frees.
# TYPE default_go_memstats_frees_total counter
default_go_memstats_frees_total{group="grafana_z2"} 102520
# HELP default_go_memstats_gc_cpu_fraction The fraction of this program's available CPU time used by the GC since the program started.
# TYPE default_go_memstats_gc_cpu_fraction gauge
default_go_memstats_gc_cpu_fraction{group="grafana_z2"} 0.004440384738240683
# HELP default_go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# TYPE default_go_memstats_gc_sys_bytes gauge
default_go_memstats_gc_sys_bytes{group="grafana_z2"} 3.590408e+06
# HELP default_go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# TYPE default_go_memstats_heap_alloc_bytes gauge
default_go_memstats_heap_alloc_bytes{group="grafana_z2"} 1.2500592e+07
# HELP default_go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE default_go_memstats_heap_idle_bytes gauge
default_go_memstats_heap_idle_bytes{group="grafana_z2"} 5.1068928e+07
# HELP default_go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE default_go_memstats_heap_inuse_bytes gauge
default_go_memstats_heap_inuse_bytes{group="grafana_z2"} 1.4893056e+07
# HELP default_go_memstats_heap_objects Number of allocated objects.
# TYPE default_go_memstats_heap_objects gauge
default_go_memstats_heap_objects{group="grafana_z2"} 51864
# HELP default_go_memstats_heap_released_bytes Number of heap bytes released to OS.
# TYPE default_go_memstats_heap_released_bytes gauge
default_go_memstats_heap_released_bytes{group="grafana_z2"} 4.9299456e+07
# HELP default_go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE default_go_memstats_heap_sys_bytes gauge
default_go_memstats_heap_sys_bytes{group="grafana_z2"} 6.5961984e+07
# HELP default_go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE default_go_memstats_last_gc_time_seconds gauge
default_go_memstats_last_gc_time_seconds{group="grafana_z2"} 1.600787836507633e+09
# HELP default_go_memstats_lookups_total Total number of pointer lookups.
# TYPE default_go_memstats_lookups_total counter
default_go_memstats_lookups_total{group="grafana_z2"} 0
# HELP default_go_memstats_mallocs_total Total number of mallocs.
# TYPE default_go_memstats_mallocs_total counter
default_go_memstats_mallocs_total{group="grafana_z2"} 23934
# HELP default_go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# TYPE default_go_memstats_mcache_inuse_bytes gauge
default_go_memstats_mcache_inuse_bytes{group="grafana_z2"} 10416
# HELP default_go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# TYPE default_go_memstats_mcache_sys_bytes gauge
default_go_memstats_mcache_sys_bytes{group="grafana_z2"} 16384
# HELP default_go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# TYPE default_go_memstats_mspan_inuse_bytes gauge
default_go_memstats_mspan_inuse_bytes{group="grafana_z2"} 181560
# HELP default_go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# TYPE default_go_memstats_mspan_sys_bytes gauge
default_go_memstats_mspan_sys_bytes{group="grafana_z2"} 229376
# HELP default_go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# TYPE default_go_memstats_next_gc_bytes gauge
default_go_memstats_next_gc_bytes{group="grafana_z2"} 1.4136288e+07
# HELP default_go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# TYPE default_go_memstats_other_sys_bytes gauge
default_go_memstats_other_sys_bytes{group="grafana_z2"} 1.307272e+06
# HELP default_go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# TYPE default_go_memstats_stack_inuse_bytes gauge
default_go_memstats_stack_inuse_bytes{group="grafana_z2"} 1.14688e+06
# HELP default_go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# TYPE default_go_memstats_stack_sys_bytes gauge
default_go_memstats_stack_sys_bytes{group="grafana_z2"} 1.14688e+06
# HELP default_go_memstats_sys_bytes Number of bytes obtained from system.
# TYPE default_go_memstats_sys_bytes gauge
default_go_memstats_sys_bytes{group="grafana_z2"} 7.374464e+07
# HELP default_go_threads Number of OS threads created.
# TYPE default_go_threads gauge
default_go_threads{group="grafana_z2"} 15
# HELP default_grafana_alerting_active_alerts amount of active alerts
# TYPE default_grafana_alerting_active_alerts gauge
default_grafana_alerting_active_alerts{group="grafana_z2"} 0
# HELP default_grafana_api_admin_user_created_total api admin user created counter
# TYPE default_grafana_api_admin_user_created_total counter
default_grafana_api_admin_user_created_total{group="grafana_z2"} 0
# HELP default_grafana_api_dashboard_snapshot_create_total dashboard snapshots created
# TYPE default_grafana_api_dashboard_snapshot_create_total counter
default_grafana_api_dashboard_snapshot_create_total{group="grafana_z2"} 0
# HELP default_grafana_api_dashboard_snapshot_external_total external dashboard snapshots created
# TYPE default_grafana_api_dashboard_snapshot_external_total counter
default_grafana_api_dashboard_snapshot_external_total{group="grafana_z2"} 0
# HELP default_grafana_api_dashboard_snapshot_get_total loaded dashboards
# TYPE default_grafana_api_dashboard_snapshot_get_total counter
default_grafana_api_dashboard_snapshot_get_total{group="grafana_z2"} 0
# HELP default_grafana_api_login_oauth_total api login oauth counter
# TYPE default_grafana_api_login_oauth_total counter
default_grafana_api_login_oauth_total{group="grafana_z2"} 0
# HELP default_grafana_api_login_post_total api login post counter
# TYPE default_grafana_api_login_post_total counter
default_grafana_api_login_post_total{group="grafana_z2"} 0
# HELP default_grafana_api_login_saml_total api login saml counter
# TYPE default_grafana_api_login_saml_total counter
default_grafana_api_login_saml_total{group="grafana_z2"} 0
# HELP default_grafana_api_models_dashboard_insert_total dashboards inserted 
# TYPE default_grafana_api_models_dashboard_insert_total counter
default_grafana_api_models_dashboard_insert_total{group="grafana_z2"} 0
# HELP default_grafana_api_org_create_total api org created counter
# TYPE default_grafana_api_org_create_total counter
default_grafana_api_org_create_total{group="grafana_z2"} 0
# HELP default_grafana_api_response_status_total api http response status
# TYPE default_grafana_api_response_status_total counter
default_grafana_api_response_status_total{code="200",group="grafana_z2"} 0
default_grafana_api_response_status_total{code="404",group="grafana_z2"} 0
default_grafana_api_response_status_total{code="500",group="grafana_z2"} 0
default_grafana_api_response_status_total{code="unknown",group="grafana_z2"} 0
# HELP default_grafana_api_user_signup_completed_total amount of users who completed the signup flow
# TYPE default_grafana_api_user_signup_completed_total counter
default_grafana_api_user_signup_completed_total{group="grafana_z2"} 0
# HELP default_grafana_api_user_signup_invite_total amount of users who have been invited
# TYPE default_grafana_api_user_signup_invite_total counter
default_grafana_api_user_signup_invite_total{group="grafana_z2"} 0
# HELP default_grafana_api_user_signup_started_total amount of users who started the signup flow
# TYPE default_grafana_api_user_signup_started_total counter
default_grafana_api_user_signup_started_total{group="grafana_z2"} 0
# HELP default_grafana_aws_cloudwatch_get_metric_data_total counter for getting metric data time series from aws
# TYPE default_grafana_aws_cloudwatch_get_metric_data_total counter
default_grafana_aws_cloudwatch_get_metric_data_total{group="grafana_z2"} 0
# HELP default_grafana_aws_cloudwatch_get_metric_statistics_total counter for getting metric statistics from aws
# TYPE default_grafana_aws_cloudwatch_get_metric_statistics_total counter
default_grafana_aws_cloudwatch_get_metric_statistics_total{group="grafana_z2"} 0
# HELP default_grafana_aws_cloudwatch_list_metrics_total counter for getting list of metrics from aws
# TYPE default_grafana_aws_cloudwatch_list_metrics_total counter
default_grafana_aws_cloudwatch_list_metrics_total{group="grafana_z2"} 0
# HELP default_grafana_build_info A metric with a constant '1' value labeled by version, revision, branch, and goversion from which Grafana was built
# TYPE default_grafana_build_info gauge
default_grafana_build_info{branch="HEAD",edition="oss",goversion="go1.14.4",group="grafana_z2",revision="9893b8c53d",version="7.1.5"} 1
# HELP default_grafana_db_datasource_query_by_id_total counter for getting datasource by id
# TYPE default_grafana_db_datasource_query_by_id_total counter
default_grafana_db_datasource_query_by_id_total{group="grafana_z2"} 0
# HELP default_grafana_instance_start_total counter for started instances
# TYPE default_grafana_instance_start_total counter
default_grafana_instance_start_total{group="grafana_z2"} 0
# HELP default_grafana_page_response_status_total page http response status
# TYPE default_grafana_page_response_status_total counter
default_grafana_page_response_status_total{code="200",group="grafana_z2"} 1
default_grafana_page_response_status_total{code="404",group="grafana_z2"} 0
default_grafana_page_response_status_total{code="500",group="grafana_z2"} 0
default_grafana_page_response_status_total{code="unknown",group="grafana_z2"} 1
# HELP default_grafana_plugin_build_info A metric with a constant '1' value labeled by pluginId, pluginType and version from which Grafana plugin was built
# TYPE default_grafana_plugin_build_info gauge
default_grafana_plugin_build_info{group="grafana_z2",plugin_id="input",plugin_type="datasource",version="1.0.0"} 1
# HELP default_grafana_proxy_response_status_total proxy http response status
# TYPE default_grafana_proxy_response_status_total counter
default_grafana_proxy_response_status_total{code="200",group="grafana_z2"} 0
default_grafana_proxy_response_status_total{code="404",group="grafana_z2"} 0
default_grafana_proxy_response_status_total{code="500",group="grafana_z2"} 0
default_grafana_proxy_response_status_total{code="unknown",group="grafana_z2"} 0
# HELP default_grafana_rendering_queue_size size of image rendering queue
# TYPE default_grafana_rendering_queue_size gauge
default_grafana_rendering_queue_size{group="grafana_z2"} 0
# HELP default_grafana_stat_active_users number of active users
# TYPE default_grafana_stat_active_users gauge
default_grafana_stat_active_users{group="grafana_z2"} 0
# HELP default_grafana_stat_total_orgs total amount of orgs
# TYPE default_grafana_stat_total_orgs gauge
default_grafana_stat_total_orgs{group="grafana_z2"} 1
# HELP default_grafana_stat_total_playlists total amount of playlists
# TYPE default_grafana_stat_total_playlists gauge
default_grafana_stat_total_playlists{group="grafana_z2"} 0
# HELP default_grafana_stat_total_users total amount of users
# TYPE default_grafana_stat_total_users gauge
default_grafana_stat_total_users{group="grafana_z2"} 1
# HELP default_grafana_stat_totals_active_admins total amount of active admins
# TYPE default_grafana_stat_totals_active_admins gauge
default_grafana_stat_totals_active_admins{group="grafana_z2"} 0
# HELP default_grafana_stat_totals_active_editors total amount of active editors
# TYPE default_grafana_stat_totals_active_editors gauge
default_grafana_stat_totals_active_editors{group="grafana_z2"} 0
# HELP default_grafana_stat_totals_active_viewers total amount of viewers
# TYPE default_grafana_stat_totals_active_viewers gauge
default_grafana_stat_totals_active_viewers{group="grafana_z2"} 0
# HELP default_grafana_stat_totals_admins total amount of admins
# TYPE default_grafana_stat_totals_admins gauge
default_grafana_stat_totals_admins{group="grafana_z2"} 1
# HELP default_grafana_stat_totals_annotations total amount of annotations in the database
# TYPE default_grafana_stat_totals_annotations gauge
default_grafana_stat_totals_annotations{group="grafana_z2"} 0
# HELP default_grafana_stat_totals_dashboard total amount of dashboards
# TYPE default_grafana_stat_totals_dashboard gauge
default_grafana_stat_totals_dashboard{group="grafana_z2"} 0
# HELP default_grafana_stat_totals_dashboard_versions total amount of dashboard versions in the database
# TYPE default_grafana_stat_totals_dashboard_versions gauge
default_grafana_stat_totals_dashboard_versions{group="grafana_z2"} 0
# HELP default_grafana_stat_totals_editors total amount of editors
# TYPE default_grafana_stat_totals_editors gauge
default_grafana_stat_totals_editors{group="grafana_z2"} 0
# HELP default_grafana_stat_totals_viewers total amount of viewers
# TYPE default_grafana_stat_totals_viewers gauge
default_grafana_stat_totals_viewers{group="grafana_z2"} 0
# HELP default_http_request_in_flight A gauge of requests currently being served by Grafana.
# TYPE default_http_request_in_flight gauge
default_http_request_in_flight{group="grafana_z2"} 0
# HELP default_http_request_total http request counter
# TYPE default_http_request_total counter
default_http_request_total{group="grafana_z2",handler="/*",method="get",statuscode="302"} 0
default_http_request_total{group="grafana_z2",handler="/login",method="get",statuscode="200"} 0
# HELP default_otelcol_process_cpu_seconds Total CPU user and system time in seconds
# TYPE default_otelcol_process_cpu_seconds gauge
default_otelcol_process_cpu_seconds{group="opentelemetry",service_instance_id="92b1cced-9d8e-4db3-8c0b-f34ea9b51c8c"} 0
# HELP default_otelcol_process_memory_rss Total physical memory (resident set size)
# TYPE default_otelcol_process_memory_rss gauge
default_otelcol_process_memory_rss{group="opentelemetry",service_instance_id="92b1cced-9d8e-4db3-8c0b-f34ea9b51c8c"} 4.3331584e+07
# HELP default_otelcol_process_runtime_heap_alloc_bytes Bytes of allocated heap objects (see 'go doc runtime.MemStats.HeapAlloc')
# TYPE default_otelcol_process_runtime_heap_alloc_bytes gauge
default_otelcol_process_runtime_heap_alloc_bytes{group="opentelemetry",service_instance_id="92b1cced-9d8e-4db3-8c0b-f34ea9b51c8c"} 1.6649696e+07
# HELP default_otelcol_process_runtime_total_alloc_bytes Cumulative bytes allocated for heap objects (see 'go doc runtime.MemStats.TotalAlloc')
# TYPE default_otelcol_process_runtime_total_alloc_bytes gauge
default_otelcol_process_runtime_total_alloc_bytes{group="opentelemetry",service_instance_id="92b1cced-9d8e-4db3-8c0b-f34ea9b51c8c"} 4.281948e+07
# HELP default_otelcol_process_runtime_total_sys_memory_bytes Total bytes of memory obtained from the OS (see 'go doc runtime.MemStats.Sys')
# TYPE default_otelcol_process_runtime_total_sys_memory_bytes gauge
default_otelcol_process_runtime_total_sys_memory_bytes{group="opentelemetry",service_instance_id="92b1cced-9d8e-4db3-8c0b-f34ea9b51c8c"} 7.505536e+07
# HELP default_otelcol_process_uptime Uptime of the process
# TYPE default_otelcol_process_uptime counter
default_otelcol_process_uptime{group="opentelemetry",service_instance_id="92b1cced-9d8e-4db3-8c0b-f34ea9b51c8c"} 189.99883000000005
# HELP default_otelcol_receiver_accepted_metric_points Number of metric points successfully pushed into the pipeline.
# TYPE default_otelcol_receiver_accepted_metric_points counter
default_otelcol_receiver_accepted_metric_points{group="opentelemetry",receiver="prometheus",service_instance_id="92b1cced-9d8e-4db3-8c0b-f34ea9b51c8c",transport="http"} 3124
# HELP default_otelcol_receiver_refused_metric_points Number of metric points that could not be pushed into the pipeline.
# TYPE default_otelcol_receiver_refused_metric_points counter
default_otelcol_receiver_refused_metric_points{group="opentelemetry",receiver="prometheus",service_instance_id="92b1cced-9d8e-4db3-8c0b-f34ea9b51c8c",transport="http"} 0
# HELP default_process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE default_process_cpu_seconds_total counter
default_process_cpu_seconds_total{group="grafana_z2"} 0.18999999999999995
# HELP default_process_max_fds Maximum number of open file descriptors.
# TYPE default_process_max_fds gauge
default_process_max_fds{group="grafana_z2"} 1.048576e+06
# HELP default_process_open_fds Number of open file descriptors.
# TYPE default_process_open_fds gauge
default_process_open_fds{group="grafana_z2"} 14
# HELP default_process_resident_memory_bytes Resident memory size in bytes.
# TYPE default_process_resident_memory_bytes gauge
default_process_resident_memory_bytes{group="grafana_z2"} 5.1806208e+07
# HELP default_process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE default_process_start_time_seconds gauge
default_process_start_time_seconds{group="grafana_z2"} 1.60078783486e+09
# HELP default_process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE default_process_virtual_memory_bytes gauge
default_process_virtual_memory_bytes{group="grafana_z2"} 7.75049216e+08
# HELP default_process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.
# TYPE default_process_virtual_memory_max_bytes gauge
default_process_virtual_memory_max_bytes{group="grafana_z2"} -1
# HELP default_system_memory_usage Bytes of memory in use.
# TYPE default_system_memory_usage counter
default_system_memory_usage{state="free"} 3.03734784e+08
default_system_memory_usage{state="inactive"} 5.94792448e+09
default_system_memory_usage{state="used"} 1.092820992e+10

And sometimes you instead see these, where only the grafana_z1 series are present:

# HELP default_go_goroutines Number of goroutines that currently exist.
# TYPE default_go_goroutines gauge
default_go_goroutines{group="grafana_z1"} 36
# HELP default_go_info Information about the Go environment.
# TYPE default_go_info gauge
default_go_info{group="grafana_z1",version="go1.14.4"} 1
# HELP default_go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE default_go_memstats_alloc_bytes gauge
default_go_memstats_alloc_bytes{group="grafana_z1"} 1.4101944e+07
# HELP default_go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE default_go_memstats_alloc_bytes_total counter
default_go_memstats_alloc_bytes_total{group="grafana_z1"} 3.625636e+07
# HELP default_go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE default_go_memstats_buck_hash_sys_bytes gauge
default_go_memstats_buck_hash_sys_bytes{group="grafana_z1"} 1.497168e+06
# HELP default_go_memstats_frees_total Total number of frees.
# TYPE default_go_memstats_frees_total counter
default_go_memstats_frees_total{group="grafana_z1"} 184815
# HELP default_go_memstats_gc_cpu_fraction The fraction of this program's available CPU time used by the GC since the program started.
# TYPE default_go_memstats_gc_cpu_fraction gauge
default_go_memstats_gc_cpu_fraction{group="grafana_z1"} 2.2644910793730463e-05
# HELP default_go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# TYPE default_go_memstats_gc_sys_bytes gauge
default_go_memstats_gc_sys_bytes{group="grafana_z1"} 3.590408e+06
# HELP default_go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# TYPE default_go_memstats_heap_alloc_bytes gauge
default_go_memstats_heap_alloc_bytes{group="grafana_z1"} 1.4101944e+07
# HELP default_go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE default_go_memstats_heap_idle_bytes gauge
default_go_memstats_heap_idle_bytes{group="grafana_z1"} 4.9954816e+07
# HELP default_go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE default_go_memstats_heap_inuse_bytes gauge
default_go_memstats_heap_inuse_bytes{group="grafana_z1"} 1.6203776e+07
# HELP default_go_memstats_heap_objects Number of allocated objects.
# TYPE default_go_memstats_heap_objects gauge
default_go_memstats_heap_objects{group="grafana_z1"} 66127
# HELP default_go_memstats_heap_released_bytes Number of heap bytes released to OS.
# TYPE default_go_memstats_heap_released_bytes gauge
default_go_memstats_heap_released_bytes{group="grafana_z1"} 4.9463296e+07
# HELP default_go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE default_go_memstats_heap_sys_bytes gauge
default_go_memstats_heap_sys_bytes{group="grafana_z1"} 6.6158592e+07
# HELP default_go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE default_go_memstats_last_gc_time_seconds gauge
default_go_memstats_last_gc_time_seconds{group="grafana_z1"} 1.6007882927213206e+09
# HELP default_go_memstats_lookups_total Total number of pointer lookups.
# TYPE default_go_memstats_lookups_total counter
default_go_memstats_lookups_total{group="grafana_z1"} 0
# HELP default_go_memstats_mallocs_total Total number of mallocs.
# TYPE default_go_memstats_mallocs_total counter
default_go_memstats_mallocs_total{group="grafana_z1"} 197236
# HELP default_go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# TYPE default_go_memstats_mcache_inuse_bytes gauge
default_go_memstats_mcache_inuse_bytes{group="grafana_z1"} 10416
# HELP default_go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# TYPE default_go_memstats_mcache_sys_bytes gauge
default_go_memstats_mcache_sys_bytes{group="grafana_z1"} 16384
# HELP default_go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# TYPE default_go_memstats_mspan_inuse_bytes gauge
default_go_memstats_mspan_inuse_bytes{group="grafana_z1"} 182784
# HELP default_go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# TYPE default_go_memstats_mspan_sys_bytes gauge
default_go_memstats_mspan_sys_bytes{group="grafana_z1"} 229376
# HELP default_go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# TYPE default_go_memstats_next_gc_bytes gauge
default_go_memstats_next_gc_bytes{group="grafana_z1"} 2.005152e+07
# HELP default_go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# TYPE default_go_memstats_other_sys_bytes gauge
default_go_memstats_other_sys_bytes{group="grafana_z1"} 1.564584e+06
# HELP default_go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# TYPE default_go_memstats_stack_inuse_bytes gauge
default_go_memstats_stack_inuse_bytes{group="grafana_z1"} 950272
# HELP default_go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# TYPE default_go_memstats_stack_sys_bytes gauge
default_go_memstats_stack_sys_bytes{group="grafana_z1"} 950272
# HELP default_go_memstats_sys_bytes Number of bytes obtained from system.
# TYPE default_go_memstats_sys_bytes gauge
default_go_memstats_sys_bytes{group="grafana_z1"} 7.4006784e+07
# HELP default_go_threads Number of OS threads created.
# TYPE default_go_threads gauge
default_go_threads{group="grafana_z1"} 13
# HELP default_grafana_alerting_active_alerts amount of active alerts
# TYPE default_grafana_alerting_active_alerts gauge
default_grafana_alerting_active_alerts{group="grafana_z1"} 0
# HELP default_grafana_api_admin_user_created_total api admin user created counter
# TYPE default_grafana_api_admin_user_created_total counter
default_grafana_api_admin_user_created_total{group="grafana_z1"} 0
# HELP default_grafana_api_dashboard_snapshot_create_total dashboard snapshots created
# TYPE default_grafana_api_dashboard_snapshot_create_total counter
default_grafana_api_dashboard_snapshot_create_total{group="grafana_z1"} 0
# HELP default_grafana_api_dashboard_snapshot_external_total external dashboard snapshots created
# TYPE default_grafana_api_dashboard_snapshot_external_total counter
default_grafana_api_dashboard_snapshot_external_total{group="grafana_z1"} 0
# HELP default_grafana_api_dashboard_snapshot_get_total loaded dashboards
# TYPE default_grafana_api_dashboard_snapshot_get_total counter
default_grafana_api_dashboard_snapshot_get_total{group="grafana_z1"} 0
# HELP default_grafana_api_login_oauth_total api login oauth counter
# TYPE default_grafana_api_login_oauth_total counter
default_grafana_api_login_oauth_total{group="grafana_z1"} 0
# HELP default_grafana_api_login_post_total api login post counter
# TYPE default_grafana_api_login_post_total counter
default_grafana_api_login_post_total{group="grafana_z1"} 0
# HELP default_grafana_api_login_saml_total api login saml counter
# TYPE default_grafana_api_login_saml_total counter
default_grafana_api_login_saml_total{group="grafana_z1"} 0
# HELP default_grafana_api_models_dashboard_insert_total dashboards inserted 
# TYPE default_grafana_api_models_dashboard_insert_total counter
default_grafana_api_models_dashboard_insert_total{group="grafana_z1"} 0
# HELP default_grafana_api_org_create_total api org created counter
# TYPE default_grafana_api_org_create_total counter
default_grafana_api_org_create_total{group="grafana_z1"} 0
# HELP default_grafana_api_response_status_total api http response status
# TYPE default_grafana_api_response_status_total counter
default_grafana_api_response_status_total{code="200",group="grafana_z1"} 0
default_grafana_api_response_status_total{code="404",group="grafana_z1"} 0
default_grafana_api_response_status_total{code="500",group="grafana_z1"} 0
default_grafana_api_response_status_total{code="unknown",group="grafana_z1"} 0
# HELP default_grafana_api_user_signup_completed_total amount of users who completed the signup flow
# TYPE default_grafana_api_user_signup_completed_total counter
default_grafana_api_user_signup_completed_total{group="grafana_z1"} 0
# HELP default_grafana_api_user_signup_invite_total amount of users who have been invited
# TYPE default_grafana_api_user_signup_invite_total counter
default_grafana_api_user_signup_invite_total{group="grafana_z1"} 0
# HELP default_grafana_api_user_signup_started_total amount of users who started the signup flow
# TYPE default_grafana_api_user_signup_started_total counter
default_grafana_api_user_signup_started_total{group="grafana_z1"} 0
# HELP default_grafana_aws_cloudwatch_get_metric_data_total counter for getting metric data time series from aws
# TYPE default_grafana_aws_cloudwatch_get_metric_data_total counter
default_grafana_aws_cloudwatch_get_metric_data_total{group="grafana_z1"} 0
# HELP default_grafana_aws_cloudwatch_get_metric_statistics_total counter for getting metric statistics from aws
# TYPE default_grafana_aws_cloudwatch_get_metric_statistics_total counter
default_grafana_aws_cloudwatch_get_metric_statistics_total{group="grafana_z1"} 0
# HELP default_grafana_aws_cloudwatch_list_metrics_total counter for getting list of metrics from aws
# TYPE default_grafana_aws_cloudwatch_list_metrics_total counter
default_grafana_aws_cloudwatch_list_metrics_total{group="grafana_z1"} 0
# HELP default_grafana_build_info A metric with a constant '1' value labeled by version, revision, branch, and goversion from which Grafana was built
# TYPE default_grafana_build_info gauge
default_grafana_build_info{branch="HEAD",edition="oss",goversion="go1.14.4",group="grafana_z1",revision="9893b8c53d",version="7.1.5"} 1
# HELP default_grafana_db_datasource_query_by_id_total counter for getting datasource by id
# TYPE default_grafana_db_datasource_query_by_id_total counter
default_grafana_db_datasource_query_by_id_total{group="grafana_z1"} 0
# HELP default_grafana_instance_start_total counter for started instances
# TYPE default_grafana_instance_start_total counter
default_grafana_instance_start_total{group="grafana_z1"} 0
# HELP default_grafana_page_response_status_total page http response status
# TYPE default_grafana_page_response_status_total counter
default_grafana_page_response_status_total{code="200",group="grafana_z1"} 2
default_grafana_page_response_status_total{code="404",group="grafana_z1"} 0
default_grafana_page_response_status_total{code="500",group="grafana_z1"} 0
default_grafana_page_response_status_total{code="unknown",group="grafana_z1"} 2
# HELP default_grafana_plugin_build_info A metric with a constant '1' value labeled by pluginId, pluginType and version from which Grafana plugin was built
# TYPE default_grafana_plugin_build_info gauge
default_grafana_plugin_build_info{group="grafana_z1",plugin_id="input",plugin_type="datasource",version="1.0.0"} 1
# HELP default_grafana_proxy_response_status_total proxy http response status
# TYPE default_grafana_proxy_response_status_total counter
default_grafana_proxy_response_status_total{code="200",group="grafana_z1"} 0
default_grafana_proxy_response_status_total{code="404",group="grafana_z1"} 0
default_grafana_proxy_response_status_total{code="500",group="grafana_z1"} 0
default_grafana_proxy_response_status_total{code="unknown",group="grafana_z1"} 0
# HELP default_grafana_rendering_queue_size size of image rendering queue
# TYPE default_grafana_rendering_queue_size gauge
default_grafana_rendering_queue_size{group="grafana_z1"} 0
# HELP default_grafana_stat_active_users number of active users
# TYPE default_grafana_stat_active_users gauge
default_grafana_stat_active_users{group="grafana_z1"} 0
# HELP default_grafana_stat_total_orgs total amount of orgs
# TYPE default_grafana_stat_total_orgs gauge
default_grafana_stat_total_orgs{group="grafana_z1"} 1
# HELP default_grafana_stat_total_playlists total amount of playlists
# TYPE default_grafana_stat_total_playlists gauge
default_grafana_stat_total_playlists{group="grafana_z1"} 0
# HELP default_grafana_stat_total_users total amount of users
# TYPE default_grafana_stat_total_users gauge
default_grafana_stat_total_users{group="grafana_z1"} 1
# HELP default_grafana_stat_totals_active_admins total amount of active admins
# TYPE default_grafana_stat_totals_active_admins gauge
default_grafana_stat_totals_active_admins{group="grafana_z1"} 0
# HELP default_grafana_stat_totals_active_editors total amount of active editors
# TYPE default_grafana_stat_totals_active_editors gauge
default_grafana_stat_totals_active_editors{group="grafana_z1"} 0
# HELP default_grafana_stat_totals_active_viewers total amount of viewers
# TYPE default_grafana_stat_totals_active_viewers gauge
default_grafana_stat_totals_active_viewers{group="grafana_z1"} 0
# HELP default_grafana_stat_totals_admins total amount of admins
# TYPE default_grafana_stat_totals_admins gauge
default_grafana_stat_totals_admins{group="grafana_z1"} 1
# HELP default_grafana_stat_totals_annotations total amount of annotations in the database
# TYPE default_grafana_stat_totals_annotations gauge
default_grafana_stat_totals_annotations{group="grafana_z1"} 0
# HELP default_grafana_stat_totals_dashboard total amount of dashboards
# TYPE default_grafana_stat_totals_dashboard gauge
default_grafana_stat_totals_dashboard{group="grafana_z1"} 0
# HELP default_grafana_stat_totals_dashboard_versions total amount of dashboard versions in the database
# TYPE default_grafana_stat_totals_dashboard_versions gauge
default_grafana_stat_totals_dashboard_versions{group="grafana_z1"} 0
# HELP default_grafana_stat_totals_editors total amount of editors
# TYPE default_grafana_stat_totals_editors gauge
default_grafana_stat_totals_editors{group="grafana_z1"} 0
# HELP default_grafana_stat_totals_viewers total amount of viewers
# TYPE default_grafana_stat_totals_viewers gauge
default_grafana_stat_totals_viewers{group="grafana_z1"} 0
# HELP default_http_request_in_flight A gauge of requests currently being served by Grafana.
# TYPE default_http_request_in_flight gauge
default_http_request_in_flight{group="grafana_z1"} 0
# HELP default_http_request_total http request counter
# TYPE default_http_request_total counter
default_http_request_total{group="grafana_z1",handler="/*",method="get",statuscode="302"} 1
default_http_request_total{group="grafana_z1",handler="/login",method="get",statuscode="200"} 1
# HELP default_otelcol_process_cpu_seconds Total CPU user and system time in seconds
# TYPE default_otelcol_process_cpu_seconds gauge
default_otelcol_process_cpu_seconds{group="opentelemetry",service_instance_id="92b1cced-9d8e-4db3-8c0b-f34ea9b51c8c"} 3
# HELP default_otelcol_process_memory_rss Total physical memory (resident set size)
# TYPE default_otelcol_process_memory_rss gauge
default_otelcol_process_memory_rss{group="opentelemetry",service_instance_id="92b1cced-9d8e-4db3-8c0b-f34ea9b51c8c"} 4.7038464e+07
# HELP default_otelcol_process_runtime_heap_alloc_bytes Bytes of allocated heap objects (see 'go doc runtime.MemStats.HeapAlloc')
# TYPE default_otelcol_process_runtime_heap_alloc_bytes gauge
default_otelcol_process_runtime_heap_alloc_bytes{group="opentelemetry",service_instance_id="92b1cced-9d8e-4db3-8c0b-f34ea9b51c8c"} 8.22352e+06
# HELP default_otelcol_process_runtime_total_alloc_bytes Cumulative bytes allocated for heap objects (see 'go doc runtime.MemStats.TotalAlloc')
# TYPE default_otelcol_process_runtime_total_alloc_bytes gauge
default_otelcol_process_runtime_total_alloc_bytes{group="opentelemetry",service_instance_id="92b1cced-9d8e-4db3-8c0b-f34ea9b51c8c"} 1.91228328e+08
# HELP default_otelcol_process_runtime_total_sys_memory_bytes Total bytes of memory obtained from the OS (see 'go doc runtime.MemStats.Sys')
# TYPE default_otelcol_process_runtime_total_sys_memory_bytes gauge
default_otelcol_process_runtime_total_sys_memory_bytes{group="opentelemetry",service_instance_id="92b1cced-9d8e-4db3-8c0b-f34ea9b51c8c"} 7.5317504e+07
# HELP default_otelcol_process_uptime Uptime of the process
# TYPE default_otelcol_process_uptime counter
default_otelcol_process_uptime{group="opentelemetry",service_instance_id="92b1cced-9d8e-4db3-8c0b-f34ea9b51c8c"} 705.0022319999999
# HELP default_otelcol_receiver_accepted_metric_points Number of metric points successfully pushed into the pipeline.
# TYPE default_otelcol_receiver_accepted_metric_points counter
default_otelcol_receiver_accepted_metric_points{group="opentelemetry",receiver="prometheus",service_instance_id="92b1cced-9d8e-4db3-8c0b-f34ea9b51c8c",transport="http"} 22900
# HELP default_otelcol_receiver_refused_metric_points Number of metric points that could not be pushed into the pipeline.
# TYPE default_otelcol_receiver_refused_metric_points counter
default_otelcol_receiver_refused_metric_points{group="opentelemetry",receiver="prometheus",service_instance_id="92b1cced-9d8e-4db3-8c0b-f34ea9b51c8c",transport="http"} 0
# HELP default_process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE default_process_cpu_seconds_total counter
default_process_cpu_seconds_total{group="grafana_z1"} 2.18
# HELP default_process_max_fds Maximum number of open file descriptors.
# TYPE default_process_max_fds gauge
default_process_max_fds{group="grafana_z1"} 1.048576e+06
# HELP default_process_open_fds Number of open file descriptors.
# TYPE default_process_open_fds gauge
default_process_open_fds{group="grafana_z1"} 13
# HELP default_process_resident_memory_bytes Resident memory size in bytes.
# TYPE default_process_resident_memory_bytes gauge
default_process_resident_memory_bytes{group="grafana_z1"} 5.8413056e+07
# HELP default_process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE default_process_start_time_seconds gauge
default_process_start_time_seconds{group="grafana_z1"} 1.60078778879e+09
# HELP default_process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE default_process_virtual_memory_bytes gauge
default_process_virtual_memory_bytes{group="grafana_z1"} 7.74934528e+08
# HELP default_process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.
# TYPE default_process_virtual_memory_max_bytes gauge
default_process_virtual_memory_max_bytes{group="grafana_z1"} -1
# HELP default_system_memory_usage Bytes of memory in use.
# TYPE default_system_memory_usage counter
default_system_memory_usage{state="free"} 2.60694016e+08
default_system_memory_usage{state="inactive"} 6.101962752e+09
default_system_memory_usage{state="used"} 1.0817212416e+10

I do not know whether the error is coming from the receiver or the exporter.

@jmacd
Contributor

jmacd commented Oct 30, 2020

@nilebox regarding your comment https://github.com/open-telemetry/opentelemetry-collector/issues/1774#issuecomment-696400632, I agree that this issue could be addressed by additional relabeling inside the Prometheus receiver config.

I'm not sure what the root cause of this bug is specifically (probably the internal cache uses a map keyed on labels), but you shouldn't have multiple applications producing the same set of metric labels anyway, as this may cause issues down the pipeline for batching, aggregation, etc. The collector will treat metrics with identical labels as a single metric, which may lead to incorrect aggregations. @bogdandrutu could you confirm this please?

I'm not exactly sure what a Prometheus server would do in this situation. Shouldn't the job and instance labels keep these timeseries distinct without an explicit effort to relabel? Do we not include those labels? Shouldn't these targets have distinct resources, which would prevent them from appearing as duplicates to the exporter? (See open-telemetry/opentelemetry-collector#1892)

This is difficult to answer in practice, because if we represent counters with DELTA temporality, then there is simply not a problem interleaving points and/or having multiple identical timeseries. When representing counters with CUMULATIVE temporality, it's not possible to simply interleave points when timeseries collide: you have to aggregate these timeseries into a single cumulative series, which requires a lot more code. Abstractly speaking, I think it should be fine to have multiple targets with the same name and label keys. Concretely speaking, in a Prometheus receiver, this is difficult to reason about.
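
To make the cumulative case concrete (illustrative numbers): suppose target A reports a counter's running totals as 10, 12, 14 while target B reports 100, 103, 106 for the same series identity. With DELTA temporality the equivalent increments (+2, +2 and +3, +3) can simply be interleaved and summed downstream. With CUMULATIVE temporality, naively interleaving the totals as 10, 100, 12, 103, 14, 106 looks like a counter that resets on every point, so the two series must first be merged into a single cumulative series (110, 115, 120).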

@nilebox
Member

nilebox commented Oct 30, 2020

Shouldn't the job and instance labels keep these timeseries distinct without an explicit effort to relabel these?

@jmacd This is a great point.
Apparently job and instance labels are used for creating node and resource in the OpenCensus model: https://github.com/open-telemetry/opentelemetry-collector/blob/99ea2df3c7ae9f6a24a982d897678cd5d564cffa/receiver/prometheusreceiver/internal/transaction.go#L220

From looking at this code, though, it seems that two pods will likely have equal OC resources but different OC nodes, since the "host" part is only used in the node and not in the OC resource.

The OT model doesn't have a "node", but the node gets converted to OT resource attributes: https://github.com/open-telemetry/opentelemetry-collector/blob/99ea2df3c7ae9f6a24a982d897678cd5d564cffa/translator/internaldata/oc_to_resource.go#L102-L104

So in theory the collector's grouping of metrics by resource should work fine?
In that case I agree that there is probably a bug somewhere that overwrites data coming from different Prometheus targets.
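
Concretely, for the repro above this would mean the two Grafana targets arrive at the exporter with distinct resource attributes, roughly like the following (the notation and attribute keys are only illustrative of the job/instance mapping, not the exact names):

    target localhost:3000 -> resource {job="monitoring", instance="localhost:3000"}
    target localhost:3001 -> resource {job="monitoring", instance="localhost:3001"}

If that holds, the overwriting has to happen somewhere downstream of the receiver.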

nilebox removed their assignment on Dec 14, 2020
@nilebox
Member

nilebox commented Dec 14, 2020

Unassigned myself as I'm not actively working on Prometheus receiver anymore.

bogdandrutu transferred this issue from open-telemetry/opentelemetry-collector on Aug 30, 2021
alolita added the comp:prometheus (Prometheus related issues) label on Sep 2, 2021
@gouthamve
Member

Just want to add a note that this is still happening, even on 0328a79.

hex1848 pushed a commit to hex1848/opentelemetry-collector-contrib that referenced this issue Jun 2, 2022
…pen-telemetry#4817) (open-telemetry#4986)

* Implement unmarshal traces with jsoniter, make 40x faster than jsonpb.

Signed-off-by: Jimmie Han <hanjinming@outlook.com>

* ptrace: json unmarshaller use defer

* Use jsoniter unmarshaller as default trace unmarshaler.
Update unit test style design.

* Add kvlist support
@gouthamve
Member

I am fairly sure this is where it is happening:

// Note: the signature is computed from the scope name, the metric, and
// the data point attributes only; resource attributes are not included.
signature := timeseriesSignature(il.Name(), metric, ip.Attributes())
if ip.Flags().HasFlag(pmetric.MetricDataPointFlagNoRecordedValue) {
    a.registeredMetrics.Delete(signature)
    return 0
}
// If no entry exists for this signature yet, store the point; otherwise
// (not shown here) the existing entry is updated, so two targets that
// produce the same signature overwrite each other.
v, ok := a.registeredMetrics.Load(signature)
if !ok {
    m := createMetric(metric)
    ip.CopyTo(m.Gauge().DataPoints().AppendEmpty())
    a.registeredMetrics.Store(signature, &accumulatedValue{value: m, resourceAttrs: resourceAttrs, scope: il, updated: now})
    n++
    continue
}

So we have an "accumulator", which stores the last value for each metric and then hands the metrics over for conversion to Prometheus format here:

inMetrics, resourceAttrs := c.accumulator.Collect()

Now the problem is that the accumulator creates the signature based only on the metric's data point attributes, not the resource attributes. This signature is used to deduplicate metrics, which means that when two targets expose the same metrics, only one metric is stored and the other is overwritten.

I'll start working on a test for this and then the fix.
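
A minimal sketch of the idea behind such a fix (a standalone illustration; the names and signature format here are mine, not the exporter's actual code): fold the resource attributes into the series signature so that identical metrics from different targets keep distinct accumulator entries.

    // Standalone illustration, not the exporter's actual code.
    package main

    import (
        "fmt"
        "sort"
        "strings"
    )

    // signature builds a deduplication key from the metric name, the data
    // point attributes, and (crucially) the resource attributes. Leaving
    // the resource attributes out is what lets series from different
    // targets collapse into a single entry.
    func signature(metricName string, attrs, resourceAttrs map[string]string) string {
        parts := make([]string, 0, len(attrs)+len(resourceAttrs))
        for k, v := range attrs {
            parts = append(parts, k+"="+v)
        }
        for k, v := range resourceAttrs {
            parts = append(parts, k+"="+v)
        }
        sort.Strings(parts) // deterministic key regardless of map iteration order
        return metricName + "{" + strings.Join(parts, ",") + "}"
    }

    func main() {
        attrs := map[string]string{"code": "200"} // identical on both targets
        resA := map[string]string{"job": "monitoring", "instance": "localhost:3000"}
        resB := map[string]string{"job": "monitoring", "instance": "localhost:3001"}

        // With resource attributes folded in, the two targets no longer collide.
        fmt.Println(signature("grafana_api_response_status_total", attrs, resA))
        fmt.Println(signature("grafana_api_response_status_total", attrs, resB))
    }

This mirrors the direction of the eventual fix in #11463, which made the exporter's deduplication account for the scrape target's identity rather than the data point attributes alone.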

gouthamve added a commit to gouthamve/opentelemetry-collector-contrib that referenced this issue Jul 5, 2022
Fixes: #4986

See: open-telemetry#4986 (comment)

The new test fails on old code.

Signed-off-by: Goutham Veeramachaneni <gouthamve@gmail.com>
mx-psi pushed a commit that referenced this issue Jul 6, 2022
…rrectly (#11463)

* Duplicate metrics from multiple targets

Fixes: #4986

See: #4986 (comment)

The new test fails on old code.

Signed-off-by: Goutham Veeramachaneni <gouthamve@gmail.com>

* Add changelog entry

Signed-off-by: Goutham Veeramachaneni <gouthamve@gmail.com>

* Fix linting issues

Signed-off-by: Goutham Veeramachaneni <gouthamve@gmail.com>

* Redo changelog for new process

Signed-off-by: Goutham Veeramachaneni <gouthamve@gmail.com>