No refresh of file service discovery targets if only __meta labels changed #3693
Comments
The labels which are the output of relabelling are used. If a meta label is changed which has no impact on target labels, there's no need to update the target.
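For illustration, a minimal relabel_configs sketch (the `__meta_custom_env` input and `env` output label are made up for this example): a `__meta` label only influences the target's identity if relabeling copies it into an output label.

```yaml
relabel_configs:
  # Input __meta_* labels are dropped after relabeling; only output
  # labels such as "env" below become part of the target's label set,
  # and thus of its identity.
  - source_labels: [__meta_custom_env]
    target_label: env
```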
Thanks for your clarification. I was confused by the information displayed in the /targets UI: it lists the available labels "before relabeling", which made me assume that I could see all available labels of the listed targets, even ones unused during relabeling. If I use one of the labels during relabeling, the value is there, but the information displayed doesn't match the real state of the target. Assume I build and deploy a full relabeling configuration and later modify the process that generates the targets file for file_sd to list one more __meta label: its value will be used during relabeling, but I will not see the original __meta label value (in the overlay) until Prometheus is restarted or the relabeling configuration changes. So using the __meta label value works as intended, but the information displayed in the overlay is not the latest state of all __meta labels.
That's a bit of a bug then, especially as the next release makes that information more obvious.
Good to hear that you also see this as kind of a bug. Where can I find more details on the mentioned extension for displaying this information? Please let me know if I can help.
If you compile master you'll see it under the Status menu.
@ajaegle if you can give an example for the …
To reproduce this problem, which is present in 2.0.0 and 2.1.0, use a default Prometheus package with an additional job using file_sd_configs with a file named targets.json.
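A minimal scrape configuration sketch for this setup (the job name is hypothetical; the file path assumes targets.json sits next to prometheus.yml):

```yaml
scrape_configs:
  - job_name: file_sd_repro   # hypothetical name
    file_sd_configs:
      - files:
          - targets.json      # the file edited in the steps below
```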
Use this content as the initial targets configuration when starting Prometheus (as targets.json):
```json
[
{
"Targets": [
"localhost:9091"
],
"Labels": {
"__meta_label_target_will_be_refreshed": "false"
}
},
{
"Targets": [
"localhost:9092"
],
"Labels": {
"__meta_label_target_will_be_refreshed": "true"
}
}
]
```
When everything is up, check the targets or service discovery view (since 2.1.0) to see that there are two endpoints, each having exactly one __meta label. Copy over the following content into targets.json, where both endpoints have one additional label; the 9091 endpoint gets a meta-only label, the 9092 endpoint a non-meta label:
```json
[
{
"Targets": [
"localhost:9091"
],
"Labels": {
"__meta_label_target_will_be_refreshed": "false",
"__meta_label_new": "meta-only"
}
},
{
"Targets": [
"localhost:9092"
],
"Labels": {
"__meta_label_target_will_be_refreshed": "true",
"__non_meta_label_new": "non-meta-label"
}
}
]
```
Refresh the targets / service discovery view to see that only one endpoint has updated labels. Additional info: If the …
@ajaegle thanks, that info is super useful. I will try to find some time to work on this.
@krasi-georgiev Please see the details in my initial post before investigating yourself. The reason why one endpoint is not updated is that the cached version is used, because the hash hasn't changed. But I'm not sure what the side effects would be if one included those meta labels in the hash calculation. Probably there was a good reason not to do this...
@ajaegle thanks, your info really saved some time |
brian-brazil closed this in #3805 on Feb 7, 2018
ajaegle commented Jan 16, 2018
What did you do?
Using file_sd_config as the discovery mechanism for a target group generated by a custom Docker Swarm service discovery (similar to https://github.com/ContainerSolutions/prometheus-swarm-discovery). When updating the targets file, I couldn't see any update in the targets list of the Prometheus web interface. My first assumption was that there were issues with fsnotify inside container volumes, but this was not the case. The issue I faced was that no updated configuration is visible in the targets view when only __meta labels change.
After some investigation, I found the reason for the omitted updates in the code where the list of scrape tasks is synced using a hash function of the target. This hash is calculated from the `url` and the `labels`, but not from the `discoveredLabels`, which the __meta labels are part of.
Is it intended behavior that an update of __meta labels does not trigger an update of the scrape targets, perhaps to prevent too frequent updates? My assumption was that I could use these __meta labels to carry all the information I get when talking to the Docker API, so that I can make use of it during the relabeling phase. In my opinion, it can be a regular use case that service labels are updated without any further changes to the target urls or regular labels. If one should not use these, what would be a better "namespace" for these custom labels?
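To make the described behavior concrete, here is a minimal, self-contained Go sketch (not the actual Prometheus code; the name hashTarget is made up) of a target identity hash computed only from the URL and the post-relabeling labels, so that changes limited to discovered __meta labels leave the hash, and thus the cached target, unchanged:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// hashTarget is a simplified stand-in for the target identity hash
// described above: it hashes the scrape URL and the post-relabeling
// labels. The discovered __meta_* labels are deliberately not part of
// the input, so a change that only touches them yields the same hash
// and the cached target is reused.
func hashTarget(url string, labels map[string]string) uint64 {
	h := fnv.New64a()
	h.Write([]byte(url))
	// Iterate labels in a stable order so equal label sets hash equally.
	keys := make([]string, 0, len(labels))
	for k := range labels {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	for _, k := range keys {
		h.Write([]byte(k))
		h.Write([]byte(labels[k]))
	}
	return h.Sum64()
}

func main() {
	labels := map[string]string{"job": "file_sd_repro"}
	// Adding or changing only __meta_* (discovered) labels would not
	// alter either input here, so the hash stays the same.
	fmt.Println(hashTarget("http://localhost:9091/metrics", labels))
}
```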
What did you expect to see?
Even if the targets and regular labels remain the same, the configuration should reflect the change/addition/removal of __meta labels.
What did you see instead? Under which circumstances?
The configuration update was not applied. The targets view still lists the old combination of __meta labels.
Environment
Prometheus: 2.0.0, deployed using Docker Swarm; also reproduced with a local setup on a pure Ubuntu Server 16.04 LTS VM.
Host: Ubuntu 16.04.3 LTS
Docker: 17.12.0-ce
Kernel: Linux 4.4.0-108-generic x86_64