Netdata Prometheus endpoint works with 1.5.2 but not 2.0.0 #2611

Closed
tyrken opened this Issue Apr 11, 2017 · 6 comments


tyrken commented Apr 11, 2017

What did you do?
Configured Prometheus to scrape a local netdata (https://github.com/firehol/netdata) server acting as a Prometheus exporter.

What did you expect to see?
Netdata metrics

What did you see instead? Under which circumstances?
Under Prom 1.5.2, it worked.
Under the Prom 2.0.0 alpha (2017-04-10), I only got the first metric.

Environment
Ubuntu 16.04

  • System information:

Linux 4.4.0-66-generic x86_64

  • Prometheus version:

prometheus, version 1.5.2 (branch: master, revision: bd1182d)
build user: root@a8af9200f95d
build date: 20170210-14:41:22
go version: go1.7.5
--- and ---
prometheus, version 2.0.0-alpha.0 (branch: master, revision: ece483c)
build user: root@cd2fcfcce982
build date: 20170410-11:14:31
go version: go1.8

  • Prometheus configuration file:
# my global config
global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
      monitor: 'codelab-monitor'

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first.rules"
  # - "second.rules"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'netdata-scrape'

    metrics_path: '/api/v1/allmetrics'
    params:
      format: [prometheus]

    static_configs:
      - targets: ['localhost:19999']
  • Logs:
    Nothing in journalctl -u prometheus; how do I enable better debug logging on Prom 2?
    Exporter output:
root@ip-10-8-128-62:/var/log# curl -v http://localhost:19999/api/v1/allmetrics?format=prometheus
*   Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 19999 (#0)
> GET /api/v1/allmetrics?format=prometheus HTTP/1.1
> Host: localhost:19999
> User-Agent: curl/7.47.0
> Accept: */*
> 
< HTTP/1.1 200 OK
< Connection: close
< Server: NetData Embedded HTTP Server
< Access-Control-Allow-Origin: *
< Access-Control-Allow-Credentials: true
< Content-Type: text/plain; version=0.0.4
< Date: Tue, 11 Apr 2017 16:41:27 GMT
< Cache-Control: no-cache
< Expires: Tue, 11 Apr 2017 16:41:28 GMT
< Content-Length: 134863
< 

# TYPE disk_qops_xvda_operations gauge
disk_qops_xvda_operations{instance="nlog2"} 0 1491928886880

# TYPE services_throttle_io_ops_write_system_slice_mdadm_service counter
services_throttle_io_ops_write_system_slice_mdadm_service{instance="nlog2"} 0 1491928886879
# TYPE services_throttle_io_ops_write_system_slice_dbus_service counter
services_throttle_io_ops_write_system_slice_dbus_service{instance="nlog2"} 0 1491928886879
# TYPE services_throttle_io_ops_write_system_slice_uuidd_service counter
services_throttle_io_ops_write_system_slice_uuidd_service{instance="nlog2"} 0 1491928886879
# TYPE services_throttle_io_ops_write_system_slice_cron_service counter
services_throttle_io_ops_write_system_slice_cron_service{instance="nlog2"} 254 1491928886879
...

Only disk_qops_xvda_operations shows up in the Prom 2.0.0 GUI as an available item in Console/Graph, not services_throttle_io_ops_write_system_slice_mdadm_service; both appear in Prom 1.5.2.
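For reference, one way to sanity-check the exporter output on its own is to feed it through the standard Prometheus text parser. A minimal Go sketch, assuming the prometheus/common expfmt package (this is not the parser the 2.0 ingestion path uses; the endpoint is the same local one as above):

// allmetrics_check.go: fetch the netdata exposition output and report how
// many metric families the standard text parser can read from it.
package main

import (
	"fmt"
	"log"
	"net/http"

	"github.com/prometheus/common/expfmt"
)

func main() {
	resp, err := http.Get("http://localhost:19999/api/v1/allmetrics?format=prometheus")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var parser expfmt.TextParser
	families, err := parser.TextToMetricFamilies(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("parsed %d metric families\n", len(families))
}

If this prints a count in the hundreds while the 2.0 dropdown still shows a single metric, the exposition text itself is well-formed and the problem sits on the ingestion side.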

gouthamve (Member) commented Apr 11, 2017

@fabxc Is this because we are assuming that the data from a single scrape has data only for a single timestamp? Ref: prometheus/tsdb#11 (comment)

tyrken (Author) commented Apr 11, 2017

For easy repro, you can use the public netdata instance at london.my-netdata.io in place of my localhost:19999, i.e. http://london.my-netdata.io/api/v1/allmetrics?format=prometheus.

Anecdotally, this often returns identical timestamps for at least the first few metrics, e.g.:

# TYPE ipv4_bcastpkts_InBcastPkts counter
ipv4_bcastpkts_InBcastPkts{instance="london_my_netdata_io"} 6 1491931984903
# TYPE ipv4_bcastpkts_OutBcastPkts counter
ipv4_bcastpkts_OutBcastPkts{instance="london_my_netdata_io"} 0 1491931984903

# TYPE ipv4_bcast_InBcastOctets counter
ipv4_bcast_InBcastOctets{instance="london_my_netdata_io"} 1968 1491931984903
# TYPE ipv4_bcast_OutBcastOctets counter
ipv4_bcast_OutBcastOctets{instance="london_my_netdata_io"} 0 1491931984903

... but I only see the very first metric ipv4_bcastpkts_InBcastPkts as an option in the dropdown on Prometheus's Graph UI.
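A rough way to quantify that is to tally how many samples share each trailing timestamp. A small Go sketch against the same public endpoint (it assumes every sample line ends in a millisecond timestamp, as in the output above):

// timestamp_tally.go: count how many distinct trailing timestamps appear
// across all samples in the netdata exposition output.
package main

import (
	"bufio"
	"fmt"
	"log"
	"net/http"
	"strings"
)

func main() {
	resp, err := http.Get("http://london.my-netdata.io/api/v1/allmetrics?format=prometheus")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	counts := map[string]int{}
	samples := 0
	scanner := bufio.NewScanner(resp.Body)
	scanner.Buffer(make([]byte, 1024*1024), 1024*1024) // allow long lines
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue // skip blanks and # TYPE / # HELP comments
		}
		fields := strings.Fields(line)
		counts[fields[len(fields)-1]]++ // trailing millisecond timestamp
		samples++
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%d samples, %d distinct timestamps\n", samples, len(counts))
}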

fabxc (Member) commented Apr 11, 2017

Yes, that is because Prometheus 2.0 uses a new parser that does not support explicit timestamps yet.
Definitely my bad for forgetting to point this out in the blog post.

We should very much have this available in the next alpha.
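For context, the text exposition format allows an optional integer millisecond timestamp after the sample value, which is exactly the field netdata emits. A purely illustrative Go splitter for such a line (not the actual Prometheus 2.0 parser; it assumes label values contain no embedded spaces, as in the netdata output above):

// sample_line.go: split "metric{labels} value [timestamp_ms]" into its parts.
// Illustrative only; real parsers must also handle spaces inside label values.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// splitSample returns the series text, the value, and the optional
// millisecond timestamp; hasTS reports whether a timestamp was present.
func splitSample(line string) (series string, value float64, ts int64, hasTS bool, err error) {
	fields := strings.Fields(line)
	if len(fields) < 2 {
		return "", 0, 0, false, fmt.Errorf("malformed sample line: %q", line)
	}
	// If the last field parses as an integer, treat it as the timestamp.
	if len(fields) >= 3 {
		if t, terr := strconv.ParseInt(fields[len(fields)-1], 10, 64); terr == nil {
			ts, hasTS = t, true
			fields = fields[:len(fields)-1]
		}
	}
	value, err = strconv.ParseFloat(fields[len(fields)-1], 64)
	series = strings.Join(fields[:len(fields)-1], " ")
	return series, value, ts, hasTS, err
}

func main() {
	line := `ipv4_bcastpkts_InBcastPkts{instance="london_my_netdata_io"} 6 1491931984903`
	series, value, ts, hasTS, err := splitSample(line)
	fmt.Println(series, value, ts, hasTS, err)
}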

tyrken (Author) commented Apr 11, 2017

Thanks - will wait for that to retry.

fabxc (Member) commented Apr 28, 2017

Added in #2661

fabxc closed this Apr 28, 2017

lock bot commented Mar 23, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

lock bot locked and limited conversation to collaborators Mar 23, 2019
