I originally attempted to file this issue via the New Relic issue tracker and was redirected here.

tl;dr -- the values from the OpenMetrics integration 1.2.2 are showing up in New Relic for me, but they look like a single, JSON-valued column, which doesn't appear to be queryable.

I followed the setup guide for the Prometheus OpenMetrics integration running in k8s (https://docs.newrelic.com/docs/integrations/prometheus-integrations/prometheus-kubernetes/new-relic-prometheus-openmetrics-integration-kubernetes) and got the integration stood up and copying metrics from several different Prometheus instances in our cluster, but the values seem to come out in a format that's not queryable in New Relic One or Insights.

As an example of the behavior I'm seeing, I followed some of the query suggestions in the linked setup guide. None of the keys in the response appear to be the values themselves; however, it looks like kube_node_status_allocatable_cpu_cores is a JSON object that contains the values.

Running a query to select the values:

FROM Metric SELECT * WHERE metricName = 'kube_node_status_allocatable_cpu_cores'

the kube_node_status_allocatable_cpu_cores key has the values I want in it, but they don't appear to be queryable. Trying to query with SELECT kube_node_status_allocatable_cpu_cores.count ... results in an error.

I'm really excited about being able to consolidate all this data into New Relic -- I just have to figure out how to query it!
Software:
k8s: 1.11.x
nri-prometheus: 1.2.2
My config is:
---
apiVersion: v1
data:
  config.yaml: |
    # The name of your cluster. It's important to match other New Relic products to relate the data.
    cluster_name: "<removed>"
    # How often the integration should run. Defaults to 30s.
    # scrape_duration: "30s"
    # The HTTP client timeout when fetching data from endpoints. Defaults to 5s.
    # scrape_timeout: "5s"
    # Whether the integration should run in verbose mode or not. Defaults to false.
    verbose: true
    # Whether the integration should skip TLS verification or not. Defaults to false.
    insecure_skip_verify: false
    # The label used to identify scrapable targets. Defaults to "prometheus.io/scrape".
    scrape_enabled_label: "prometheus.io/scrape"
    # Whether k8s nodes need to be labelled to be scraped or not. Defaults to true.
    require_scrape_enabled_label_for_nodes: true
    # targets:
    #   - description: Secure etcd example
    #     urls: ["https://192.168.3.1:2379", "https://192.168.3.2:2379", "https://192.168.3.3:2379"]
    #     tls_config:
    #       ca_file_path: "/etc/etcd/etcd-client-ca.crt"
    #       cert_file_path: "/etc/etcd/etcd-client.crt"
    #       key_file_path: "/etc/etcd/etcd-client.key"
    # Proxy to be used by the emitters when submitting metrics. It should be
    # in the format [scheme]://[domain]:[port].
    # The emitter is the component in charge of sending the scraped metrics.
    # This proxy won't be used when scraping metrics from the targets.
    # By default it's empty, meaning that no proxy will be used.
    # emitter_proxy: "http://localhost:8888"
    # Certificate to add to the root CA that the emitter will use when
    # verifying server certificates.
    # If left empty, TLS uses the host's root CA set.
    # emitter_ca_file: "/path/to/cert/server.pem"
    # Whether the emitter should skip TLS verification when submitting data.
    # Defaults to false.
    # emitter_insecure_skip_verify: false
    # Histogram support is based on New Relic's guidelines for higher
    # level metrics abstractions https://github.com/newrelic/newrelic-exporter-specs/blob/master/Guidelines.md.
    # To better support visualization of this data, percentiles are calculated
    # based on the histogram metrics and sent to New Relic.
    # By default, the following percentiles are calculated: 50, 95 and 99.
    #
    # percentiles:
    #   - 50
    #   - 95
    #   - 99
    transformations:
      - description: "General processing rules"
        rename_attributes:
          - metric_prefix: ""
            attributes:
              container_name: "containerName"
              pod_name: "podName"
              namespace: "namespaceName"
              node: "nodeName"
              container: "containerName"
              pod: "podName"
              deployment: "deploymentName"
        ignore_metrics:
          # Ignore all the metrics except the ones listed below.
          # This list complements the data retrieved by the New Relic
          # Kubernetes Integration; pods and containers are not included
          # because they are already collected by that integration.
          - except:
              - kube_hpa_
              - kube_daemonset_
              - kube_statefulset_
              - kube_endpoint_
              - kube_service_
              - kube_limitrange
              - kube_node_
              - kube_poddisruptionbudget_
              - kube_resourcequota
              - nr_stats
        # copy_attributes:
        #   # Copy all the labels from the timeseries with metric name
        #   # `kube_hpa_labels` into every timeseries with a metric name that
        #   # starts with `kube_hpa_` only if they share the same `namespace`
        #   # and `hpa` labels.
        #   - from_metric: "kube_hpa_labels"
        #     to_metrics: "kube_hpa_"
        #     match_by:
        #       - namespace
        #       - hpa
        #   - from_metric: "kube_daemonset_labels"
        #     to_metrics: "kube_daemonset_"
        #     match_by:
        #       - namespace
        #       - daemonset
        #   - from_metric: "kube_statefulset_labels"
        #     to_metrics: "kube_statefulset_"
        #     match_by:
        #       - namespace
        #       - statefulset
        #   - from_metric: "kube_endpoint_labels"
        #     to_metrics: "kube_endpoint_"
        #     match_by:
        #       - namespace
        #       - endpoint
        #   - from_metric: "kube_service_labels"
        #     to_metrics: "kube_service_"
        #     match_by:
        #       - namespace
        #       - service
        #   - from_metric: "kube_node_labels"
        #     to_metrics: "kube_node_"
        #     match_by:
        #       - namespace
        #       - node
kind: ConfigMap
metadata:
  name: nri-prometheus-cfg
  namespace: newrelic
Thanks in advance.
I just found the docs (by accident) that describe how the Metric data type isn't the same thing as, e.g., Insights events, and the expectation is that it's queried differently. So I'm unblocked, stoked to query some stuff, and only a little sheepish that I filed this ticket 😊.
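For anyone who lands here with the same confusion: dimensional Metric data is queried by applying an aggregator function to the metric name, rather than by selecting the raw column. A sketch of the kind of query that works for me (the nodeName facet is an assumption based on the rename_attributes rules in my config above):

FROM Metric SELECT latest(kube_node_status_allocatable_cpu_cores) FACET nodeName SINCE 30 minutes ago

FROM Metric SELECT average(kube_node_status_allocatable_cpu_cores) TIMESERIES

The JSON-looking blob in SELECT * is just how the aggregated metric (count/sum/min/max, etc.) is stored; the aggregator functions unpack it for you.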