
Supporting multiple metrics_path in configuration #1852

Closed
chinglinwen opened this Issue Jul 28, 2016 · 11 comments

3 participants
chinglinwen commented Jul 28, 2016

What did you do?
I tried to configure multiple metrics endpoints on the same server and the same port.

What did you expect to see?
A way to specify multiple metrics_path values in a single job,
or a way to specify the host with an extra path (to distinguish multiple metrics on the same server and port), like a prefix (ideally a way to specify the endpoint for multiple entries),
or a way to specify a full URL for a metric (not just host and port) in a single job (the same kind of metrics specified in the same job). What if there are 100 endpoints? 100 jobs?

What did you see instead? Under which circumstances?
Only a single metrics_path is allowed per job.
The targets in target_groups don't support anything else, such as URLs; only host and port are allowed.
(The targets in the newer static_config behave the same way.)

Environment

  • System information:

    Linux 2.6.18-164.el5 x86_64

  • Prometheus version:

    prometheus, version 0.19.2 (branch: master, revision: 23ca13c)
    build user: root@134dc6bbc274
    build date: 20160529-18:58:00
    go version: go1.6.2

(I checked the documentation (2016-07-28) on metrics_path and host specification; the issue remains.)

  • Prometheus configuration file:
  - job_name:  'ntp1'

    # Override the global default and scrape targets from this job every minute.
    scrape_interval: 1m
    metrics_path: /hadoop241.ntpq_metric
    target_groups:
      - targets: ['10.100.2.108:8000']

  - job_name:  'ntp2'

    # Override the global default and scrape targets from this job every minute.
    scrape_interval: 1m
    metrics_path: /hadoop242.ntpq_metric
    target_groups:
      - targets: ['10.100.2.108:8000']

  - job_name:  'ntp3'

    # Override the global default and scrape targets from this job every minute.
    scrape_interval: 1m
    metrics_path: /hadoop243.ntpq_metric
    target_groups:
      - targets: ['10.100.2.108:8000']

Tried using the same job name for all three:
Couldn't load configuration (-config.file=prometheus.yml): found multiple scrape configs with job name "ntp" source=main.go:218

Tried a host with a URL path:
Couldn't load configuration (-config.file=prometheus.yml): "10.100.2.108:8000/hadoop241" is not a valid hostname source=main.go:218

brian-brazil (Member) commented Jul 28, 2016

It looks like you're trying to do something similar to the snmp/blackbox exporter. I'd suggest taking a look at how they approach this issue. https://github.com/prometheus/snmp_exporter#prometheus-configuration

In high-level terms, this information should be coming from the node exporter rather than something custom.
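The snmp/blackbox pattern referenced above works by passing the real target to the exporter as a URL parameter and pointing `__address__` at the exporter itself. A minimal sketch adapted from the snmp_exporter README linked above (the device address, module name, and exporter address are placeholders, and the `static_configs`/`relabel_configs` form shown is from later Prometheus versions than the 0.19 `target_groups` syntax used in this issue):

```yaml
- job_name: 'snmp'
  metrics_path: /snmp
  params:
    module: [if_mib]            # which module the exporter should use
  static_configs:
    - targets: ['192.168.1.2']  # the device to probe, not the exporter
  relabel_configs:
    # Pass the original target to the exporter as ?target=...
    - source_labels: [__address__]
      target_label: __param_target
    # Keep the probed device as the instance label
    - source_labels: [__param_target]
      target_label: instance
    # Actually scrape the exporter, which probes the device on our behalf
    - target_label: __address__
      replacement: '127.0.0.1:9116'
```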

chinglinwen (Author) commented Jul 28, 2016

Thank you. I checked the link above; replacing some strings is not what I want.

Although my config above does solve the issue, I'd expect a simpler way (one that doesn't require a job for every endpoint), such as the following:

  - job_name:  'ntp'
    scrape_interval: 1m
    metrics_path: /hadoop241.ntpq_metric
    metrics_path: /hadoop242.ntpq_metric
    metrics_path: /hadoop243.ntpq_metric
    target_groups:
      - targets: ['10.100.2.108:8000']

or

  - job_name:  'ntp'
    scrape_interval: 1m
    metrics_path: ?
    target_groups:
      - targets: ['10.100.2.108:8000/hadoop241.ntpq_metric']
      - targets: ['10.100.2.108:8000/hadoop242.ntpq_metric']
      - targets: ['10.100.2.108:8000/hadoop243.ntpq_metric']

It would help a lot more if I had many such endpoints.
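For later readers: something close to the second proposal is possible without any new directive, because labels attached to a target group are applied before relabeling and can override the special `__metrics_path__` label. A sketch assuming a modern Prometheus with `static_configs` (whether this works with 0.19's `target_groups` is untested):

```yaml
- job_name: 'ntp'
  scrape_interval: 1m
  static_configs:
    # One target group per path; the __metrics_path__ label
    # overrides the scrape path for that group.
    - targets: ['10.100.2.108:8000']
      labels:
        __metrics_path__: /hadoop241.ntpq_metric
    - targets: ['10.100.2.108:8000']
      labels:
        __metrics_path__: /hadoop242.ntpq_metric
    - targets: ['10.100.2.108:8000']
      labels:
        __metrics_path__: /hadoop243.ntpq_metric
```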

this information should be coming from the node exporter rather than something custom.

I'm sorry I used ntp as the example config; it caused a little confusion suggesting the data should come from the node exporter. The node exporter does a good job, but it would need to be deployed first. And I am not sure how I could easily get the metrics from node_exporter to do what I want (say, different ntp metrics, not just time drift; that needs customization).

something custom

I think Prometheus will be used more in customized situations, say an application's metrics and service metrics, etc. (the metrics may depend on business logic that needs customization).

brian-brazil (Member) commented Jul 28, 2016

It would help a lot more if I had many such endpoints.

That's not something we support, as that's not a standard use case. Normally you only hit a given target once, with the exception of the blackbox/snmp exporters.

Have you considered combining all this into one endpoint?

And I am not sure how I could easily get the metrics from node_exporter to do what I want (say, different ntp metrics, not just time drift; that needs customization)

There's an optional ntp module in the node exporter. Generally, all machine-level metrics should come from the node exporter; we often suggest it as the first thing to use with Prometheus, as it's easy to get going and gives you lots of metrics out of the box.

chinglinwen (Author) commented Jul 28, 2016

Thank you for your patience. You said:

as that's not a standard use case

How about thinking of it this way: a new config directive, say url, that just specifies where to get the metrics:

  - job_name:  'services'
    scrape_interval: 1m
    metrics_path: ?
    target_groups:
      - url: ['http://10.100.2.108:8000/serviceA_metric']
      - url: ['http://10.100.2.108:8000/serviceB_metric']
      - url: ['http://10.100.2.108:8000/serviceC_metric']

Here I changed it from ntp to a more general services situation (as we are talking about the pattern, not just the ntp-specific case).

combining all this into one endpoint

This reminds me that it really should be done this way, but I collect these metrics from multiple hosts, and I see no easy way to merge them together.

I like collecting metrics this way (in Prometheus) because it is simple: I collect metrics and send them somewhere (a central fileserver receives these metrics), and Prometheus takes care of the rest.

Since this feature isn't going to be supported, I'm okay staying with my current config method (with multiple jobs).

brian-brazil (Member) commented Jul 28, 2016

Here I changed it from ntp to a more general services situation (as we are talking about the pattern, not just the ntp-specific case)

This is still not a standard pattern. A given job should be hitting a given target once with one metrics path.

If you're trying to do anything else, you're going to be swimming upstream.

(a central fileserver receives these metrics)

That sounds like effectively a pushgateway; you should have Prometheus scrape the metrics from the hosts directly. See https://prometheus.io/docs/practices/pushing/

chinglinwen (Author) commented Jul 28, 2016

should be hitting a given target once with one metrics path

Am I getting it right that a job should have a single metrics path only?
I took metrics_path as a default suffix (the suffix appended to multiple hosts).

About pushgateway

pushgateway does a good job, but I have a different situation:

  • pushgateway usually expects metrics already in the direct metrics format
  • I'm trying to use mtail to turn log content into metrics on AIX systems, and mtail doesn't support AIX
  • so I wrote a fileserver that sends the log contents to another server (say, Linux)
  • mtail then analyses the log content on that server (this was the original purpose for which I wrote the fileserver)

brian-brazil (Member) commented Jul 28, 2016

Am I getting it right that a job should have a single metrics path only?

Usually yes.

I'm trying to use mtail to turn log content into metrics on AIX systems, and mtail doesn't support AIX
so I wrote a fileserver that sends the log contents to another server (say, Linux)
mtail then analyses the log content on that server (this was the original purpose for which I wrote the fileserver)

Ah, you're doing something very unusual. Have a look at file service discovery.

chinglinwen (Author) commented Jul 28, 2016

file service discovery

I guess you mean file_sd_config. As I understand it, that makes the config external and dynamic, but the targets in the config still use host:port, so it's the same as the config above (only host and port, nothing else). I think the issue remains.
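For later readers: while file_sd targets are indeed host:port only, each target group in the SD file can carry labels, and those labels can include `__metrics_path__` to override the scrape path per group. A sketch with a hypothetical file path, assuming a modern Prometheus (YAML SD files are supported from 2.x; earlier versions accepted JSON only):

```yaml
# prometheus.yml (scrape_configs entry)
- job_name: 'ntp'
  scrape_interval: 1m
  file_sd_configs:
    - files: ['/etc/prometheus/targets/ntp.yml']

# /etc/prometheus/targets/ntp.yml (the SD file itself):
# - targets: ['10.100.2.108:8000']
#   labels:
#     __metrics_path__: /hadoop241.ntpq_metric
# - targets: ['10.100.2.108:8000']
#   labels:
#     __metrics_path__: /hadoop242.ntpq_metric
```

The SD file can then be regenerated by an external process; Prometheus picks up changes without a restart.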

chinglinwen (Author) commented Jul 28, 2016

As noted above, I understand this issue (multiple metrics_path values or a url, or appending a string to the host) is an unusual case. Thank you for your time!

manasagovindu commented Mar 8, 2018

Hi @brian-brazil,

I have different Kafka brokers in different environments (staging, prod). If I want to run the kafka exporter as a sidecar, how should I write the scrape config for the Kafka job? I have 8 Kafka brokers with different NodePorts.

Thanks in advance.
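One common layout for this kind of setup, sketched with hypothetical hostnames and ports (adjust the targets to your actual NodePorts; kafka_exporter's default port is 9308): a single job with an env label per target group, so environments can be told apart in queries:

```yaml
- job_name: 'kafka'
  static_configs:
    # Staging brokers, each exposing its sidecar exporter on a NodePort
    - targets: ['broker1.staging:30091', 'broker2.staging:30092']
      labels:
        env: staging
    # Production brokers
    - targets: ['broker1.prod:30091', 'broker2.prod:30092']
      labels:
        env: prod
```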

lock bot commented Mar 22, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

lock bot locked and limited the conversation to collaborators Mar 22, 2019
