Scraping both file_sd_configs and dns_sd_configs scrapes only dns #3365
theonlydoo commented Oct 27, 2017 (edited)

What did you do?
Configured scrape jobs using both file_sd_configs and dns_sd_configs (the full config is in the thread below).

What did you expect to see?
Scraping on both targets.

What did you see instead? Under which circumstances?
Only scraping on dns_sd_configs.

Environment

System information:
Linux 4.9.0-3-amd64 x86_64 / Debian stretch

Prometheus version:

The content of the JSON is:

```json
[{
  "labels": {
    "env": "prod",
    "group": "foo_bar_service",
    "service_type": "webservice"
  },
  "target": [
    "foo01.tld:7646",
    "foo02.tld:7646",
    "foo03.tld:7646",
    "foo04.tld:7646"
  ]
}]
```

If I completely disable dns_sd_configs, it is scraped again.
juliusv added the component/service discovery label on Nov 9, 2017
@theonlydoo I tried this out on …
@theonlydoo Here is a link to download an executable for Linux 64bit: …
@krasi-georgiev Thanks for the link, I've tried it out, and still no file_sd scraping. For information, here is a sanitized config file:

```yaml
# my global config
global:
  scrape_interval: 15s # By default, scrape targets every 15 seconds.
  evaluation_interval: 15s
  scrape_timeout: 15s
  # scrape_timeout is set to the global default (10s).

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'foobar'

# Load and evaluate rules in this file every 'evaluation_interval' seconds.
rule_files:

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['127.0.0.1:9090']

### Infrastructure autoconf ###
scrape_configs:
  - job_name: 'foobarbaz'
    scrape_interval: 30s
    file_sd_configs:
      - files:
          - /etc/prometheus/config/_p_foobarbaz_s__foobarstats/_p_foobarbaz_s__foobarstats.json
  - job_name: 'servdisc'
    scrape_interval: 30s
    dns_sd_configs:
      - names:
          - _supporthttp._tcp.foobarbaz.sd.
```

As you can see, I've got both dns_sd and file_sd, but only dns_sd is handled in both cases, even though promtool fully validates this config.
@theonlydoo Is this a copy/paste error or do you really have 2 x `scrape_configs:` sections in there?
@theonlydoo Can you try some things? Here's my example config: … which works even if I move one of the SD methods into a separate job.
@krasi-georgiev I have 2 scrape configs, but the first one is not read by prom. I've removed it, and still no file_sd_configs enabled. @cstyan I've tried to remove the … File permissions are OK; if I disable the dns_sd_config, I do not see any file_sd with this config. This is weird, since I've tried this simple config:

```yaml
scrape_configs:
  - job_name: 'node_exporter'
    scrape_interval: 30s
    file_sd_configs:
      - files:
          - /etc/prometheus/config/_p_node_exporter_s__cassandra_log/_p_node_exporter_s__cassandra_log.json
  - job_name: 'servdisc'
    scrape_interval: 30s
    dns_sd_configs:
      - names:
          - _supporthttp._tcp.foobar.sd.
```

It starts and validates, but I have no file_sd job. If I move … it doesn't raise an error (so the file is not watched, as I do not see it while I do a …), and if I append a new file_sd target file that doesn't exist at all, promtool refuses to validate the configuration and Prometheus refuses to start.
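On the duplicated `scrape_configs:` key: a plausible explanation for "the first one is not read" is that YAML decoders commonly accept duplicate mapping keys and let the last one win. I'm not certain which decoding path Prometheus's config loader used at the time, but gopkg.in/yaml.v2 in its default mode behaves exactly this way. A minimal sketch (the `cfg` struct is a hypothetical stand-in, not Prometheus's actual config type):

```go
package main

import (
	"fmt"

	yaml "gopkg.in/yaml.v2"
)

// cfg is a hypothetical stand-in for the Prometheus config type,
// reduced to the one field that matters here.
type cfg struct {
	ScrapeConfigs []struct {
		JobName string `yaml:"job_name"`
	} `yaml:"scrape_configs"`
}

const doc = `
scrape_configs:
  - job_name: prometheus
scrape_configs:
  - job_name: foobarbaz
  - job_name: servdisc
`

func main() {
	var c cfg
	// Plain Unmarshal accepts the duplicate mapping key and lets the
	// later one win, so the first scrape_configs block vanishes
	// without any error.
	if err := yaml.Unmarshal([]byte(doc), &c); err != nil {
		panic(err)
	}
	for _, sc := range c.ScrapeConfigs {
		fmt.Println(sc.JobName) // foobarbaz, servdisc - no prometheus job
	}

	// Strict decoding reports the duplicate instead of masking it.
	err := yaml.UnmarshalStrict([]byte(doc), &c)
	fmt.Println(err) // complains that scrape_configs is already set
}
```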
OK I've got it... This was a HUGE pebkac: apparently, there is no error raised when the JSON config file is "not readable" for Prometheus, so it is silently ignored. I've found the error while doing a diff between my previous config management branch and this one. On one hand, you had:

```json
[{
  "labels": {
    "env": "prod",
    "group": "foo_log",
    "hosting": "company"
  },
  "targets": [
    "bar-3.foo:19100",
    "bar-1.foo:19100",
    "bar-2.foo:19100"
  ]
}]
```

and on the other:

```json
[{
  "labels": {
    "env": "prod",
    "group": "foo_log",
    "hosting": "company"
  },
  "target": [
    "bar-3.foo:19100",
    "bar-1.foo:19100",
    "bar-2.foo:19100"
  ]
}]
```

So the typo between `targets` and `target` was the culprit. The problem was not at all in the file_sd or dns_sd, but in the JSON parsing! Thank you all for the debug effort.
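This matches Go's default JSON behavior: encoding/json simply skips fields that the destination struct doesn't declare, so a misspelled key parses cleanly into an empty target list, i.e. a valid file with zero targets and no error. A minimal sketch (`targetGroup` is a simplified stand-in for the type file_sd decodes into, not Prometheus's actual type):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// targetGroup is a simplified stand-in for the structure
// file_sd decodes target files into.
type targetGroup struct {
	Targets []string          `json:"targets"`
	Labels  map[string]string `json:"labels"`
}

func main() {
	// Note "target" (singular): not a known field of targetGroup.
	data := []byte(`[{
		"labels": {"env": "prod", "group": "foo_log"},
		"target": ["bar-1.foo:19100", "bar-2.foo:19100"]
	}]`)

	var groups []targetGroup
	// The default decoder ignores unknown fields, so this succeeds
	// and Targets stays empty: nothing gets scraped, no error raised.
	if err := json.Unmarshal(data, &groups); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", groups) // [{Targets:[] Labels:map[env:prod group:foo_log]}]
}
```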
That sounds like a bug on our side, we should be verifying that the files parse and not doing an update if they don't - same as if EC2 started returning errors half way through a poll.
Strict JSON unmarshaling will be added in Go 1.10; in the meanwhile we can use the function from this PR to return an error if the parsed file has some unknown fields. I will open a PR.
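For reference, the Go 1.10 strict mode mentioned here is `(*json.Decoder).DisallowUnknownFields`. A minimal sketch, reusing the simplified stand-in type from above:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// Simplified stand-in for a file_sd target group.
type targetGroup struct {
	Targets []string          `json:"targets"`
	Labels  map[string]string `json:"labels"`
}

func main() {
	data := []byte(`[{"labels": {"env": "prod"}, "target": ["bar-1.foo:19100"]}]`)

	dec := json.NewDecoder(bytes.NewReader(data))
	// Available since Go 1.10: turn unknown fields into hard errors.
	dec.DisallowUnknownFields()

	var groups []targetGroup
	if err := dec.Decode(&groups); err != nil {
		fmt.Println(err) // json: unknown field "target"
	}
}
```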
brian-brazil added kind/bug and priority/P2 labels on Dec 21, 2017
krasi-georgiev referenced this issue on Dec 22, 2017: Validate json parse for TargetGroup Unmarshal #3614 (merged)
brian-brazil closed this in #3614 on Feb 27, 2018
lock bot commented Mar 22, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.