Supporting multiple metrics_path in configuration #1852
Comments
It looks like you're trying to do something similar to the snmp/blackbox exporter. I'd suggest taking a look at how they approach this issue: https://github.com/prometheus/snmp_exporter#prometheus-configuration In high-level terms, this information should be coming from the node exporter rather than something custom.
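The pattern from the snmp_exporter README linked above looks roughly like this: one job, one metrics path, with relabeling moving the real target into a URL parameter. The device address, module name, and exporter port below are placeholders:

```yaml
scrape_configs:
  - job_name: 'snmp'
    metrics_path: /snmp
    params:
      module: [if_mib]           # which module the exporter should use
    static_configs:
      - targets:
          - 192.168.1.2          # the SNMP device to probe
    relabel_configs:
      # Move the device address into the ?target= URL parameter...
      - source_labels: [__address__]
        target_label: __param_target
      # ...keep it as the instance label for the resulting series...
      - source_labels: [__param_target]
        target_label: instance
      # ...and point the actual scrape at the exporter itself.
      - target_label: __address__
        replacement: 127.0.0.1:9116
```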
brian-brazil added the kind/question label on Jul 28, 2016
Thank you, I checked the link above. Replacing strings is not what I want. Although my config above does solve the issue, I expected a simpler way (one that does not need a job for every endpoint), as follows:
or
It helps a lot more when I have many such endpoints.
I'm sorry for using ntp as the example config; it creates a little confusion. You said the metrics should come from the node exporter. The node exporter works well, but it would need to be deployed first, and I am not sure how I could easily get the metrics I want from it (say, different ntp metrics, not just time drift; that needs customization).
I think Prometheus will be used even more in situations that require customization, say an application's metrics, service metrics, etc. (the metrics may depend on business logic that needs customization).
That's not something we support, as it's not a standard use case. Normally you only hit a given target once, with the exception of the blackbox/snmp exporters. Have you considered combining all this into one endpoint?
There's an optional
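One way to read the "combine all this into one endpoint" suggestion: run a small shim that concatenates the exposition text of the individual services and serves it at a single /metrics path, so one job with one metrics_path suffices. A minimal sketch; the `ntp_metrics`/`hadoop_metrics` sources, their sample values, and the port are all hypothetical:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def ntp_metrics():
    # Hypothetical per-service source; in practice this would fetch
    # or compute the real exposition text for the ntp service.
    return "ntp_drift_seconds 0.012\n"

def hadoop_metrics():
    # Hypothetical per-service source for the hadoop service.
    return "hadoop_jobs_running 3\n"

def merged_metrics():
    # Concatenate the per-service exposition text into one payload.
    return ntp_metrics() + hadoop_metrics()

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/metrics":
            body = merged_metrics().encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; version=0.0.4")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

# To serve it (blocking), one would run:
# HTTPServer(("", 8000), MetricsHandler).serve_forever()
```

Prometheus would then scrape the one merged endpoint with a single job.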
Thank you for your patience. You said:
How about thinking of it this way: a new config directive, say url, that just specifies where to get the metrics.
Here I change it from ntp to a more general services situation (as we are talking about the pattern, not just the ntp-specific case).
This reminds me that it really should be done that way, but I collect these metrics from multiple hosts, and I see no easy way to merge them together. I like collecting metrics (in Prometheus) because it is simple: I collect metrics and send them somewhere (a central fileserver receives them), and Prometheus takes care of the rest. Since this feature is not going to be supported, I'm okay with staying with my current config method (with multiple jobs).
This is still not a standard pattern. A given job should be hitting a given target once with one metrics path. If you're trying to do anything else, you're going to be swimming upstream.
That sounds like effectively a pushgateway; you should have Prometheus scrape the metrics from the hosts directly. See https://prometheus.io/docs/practices/pushing/
Am I getting this right: do you mean a job should have a single metrics path only? About the pushgateway: it does a good job, but my situation is different: the pushgateway expects the direct metrics format.
Usually yes.
Ah, you're doing something very unusual. Have a look at file service discovery.
I guess you mean file_sd_config. As I understand it, it makes the config external and dynamic, but the targets in the config still only use host and port, so it is the same as the config above (only host and port, nothing else). I think the issue still remains.
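For what it's worth, later Prometheus releases honor a `__metrics_path__` label attached to discovered targets (this is the subject of the #4121 reference further down), which lets file-based discovery carry a per-target scrape path. A sketch, with illustrative file names and paths:

```yaml
# prometheus.yml
scrape_configs:
  - job_name: 'services'
    file_sd_configs:
      - files:
          - 'targets/*.json'
```

targets/services.json, with one entry per endpoint on the same host:port:

```json
[
  {
    "targets": ["10.100.2.108:8000"],
    "labels": { "__metrics_path__": "/hadoop241/metrics" }
  },
  {
    "targets": ["10.100.2.108:8000"],
    "labels": { "__metrics_path__": "/ntp/metrics" }
  }
]
```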
As noted above, I understand this issue (multiple metrics_path or url, or appending a string to the host) is an unusual case. Thank you for your time!
brian-brazil closed this on Aug 1, 2016
manasagovindu commented on Mar 8, 2018
Hi @brian-brazil, I have different Kafka brokers in different environments (staging, prod). If I want to run the kafka exporter as a sidecar, how would I write the scrape config for the kafka job? I have 8 Kafka brokers with different nodeports. Thanks in advance.
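Not speaking for the maintainers, but a common shape for this is a single job with one static_config per environment, distinguished by a label. The broker hostnames and nodeports below are placeholders:

```yaml
scrape_configs:
  - job_name: 'kafka'
    static_configs:
      - targets:
          - 'broker-1.staging.example:30001'
          - 'broker-2.staging.example:30002'
        labels:
          env: staging
      - targets:
          - 'broker-1.prod.example:30101'
          - 'broker-2.prod.example:30102'
        labels:
          env: prod
```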
klausenbusk referenced this issue on Apr 28, 2018
Closed: file_sd_config: Support custom metrics_path #4121
lock bot commented on Mar 22, 2019
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
chinglinwen commented on Jul 28, 2016 (edited)
What did you do?
I tried to configure multiple endpoint metrics on the same server and the same port.
What did you expect to see?
A way to specify multiple metrics_path entries under a single job name,
or a way to specify the host with an extra path (to distinguish multiple metrics on the same server and port), like a prefix (ideally, being able to specify the endpoint for each entry),
or a way to specify a full URL for a metric (not just host and port) in a single job (the same kind of metrics specified in the same job; what if there are 100 endpoints, 100 jobs?)
What did you see instead? Under which circumstances?
Only a single metrics_path is allowed per job.
The targets in target_groups do not support anything beyond host and port (no URL path, etc.).
(The targets in static_config behave the same way.)
Environment
System information:
Linux 2.6.18-164.el5 x86_64
Prometheus version:
prometheus, version 0.19.2 (branch: master, revision: 23ca13c)
build user: root@134dc6bbc274
build date: 20160529-18:58:00
go version: go1.6.2
(I checked the documentation (2016-07-28) about metrics_path and host specification; the issue remains.)
Tried the same job name:
Couldn't load configuration (-config.file=prometheus.yml): found multiple scrape configs with job name "ntp" source=main.go:218
Tried a host with a URL path:
Couldn't load configuration (-config.file=prometheus.yml): "10.100.2.108:8000/hadoop241" is not a valid hostname source=main.go:218
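For reference, the per-job workaround discussed in this thread (one job per endpoint, since each job allows only one metrics_path) would look something like this; the job names and paths are illustrative, based on the /hadoop241 path in the error above:

```yaml
scrape_configs:
  - job_name: 'hadoop241'
    metrics_path: /hadoop241/metrics
    static_configs:
      - targets: ['10.100.2.108:8000']
  - job_name: 'ntp'
    metrics_path: /ntp/metrics
    static_configs:
      - targets: ['10.100.2.108:8000']
```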