It takes Prometheus 5m to notice a metric is not available #1810

tzach commented Jul 13, 2016

What did you do?
I have a service reporting metrics via collectd_exporter.
After killing the service, it takes collectd_exporter a few seconds to reflect that and stop exposing the service's metrics. It then takes Prometheus an additional 5 minutes to reflect the fact that no new metrics are coming in.
It looks like Prometheus caches the last value of a metric for 5 minutes (see the sample queries below).

What did you expect to see?
Prometheus immediately reflecting the fact that no metrics are available.

What did you see instead? Under which circumstances?
Prometheus showing the old value for 5 minutes.

Environment
Using the latest Docker image
Linux 4.4.6-201.fc22.x86_64 x86_64
Prometheus 0.18.0
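For context on the 5 minutes: PromQL resolves an instant vector selector to the most recent sample within a lookback window, 5 minutes by default (pre-2.0 versions exposed this as the -query.staleness-delta flag), so a series that stops being scraped keeps answering instant queries with its last value until that window expires. A minimal way to observe it, assuming my_service_metric stands in for whatever metric collectd_exporter was exposing:

```
# Instant query: keeps returning the last sample for up to 5 minutes
# after the exporter stops exposing the metric.
my_service_metric

# Range query (run separately): shows the raw samples, so you can see
# that no new points arrive while the instant query above still
# answers with the stale value.
my_service_metric[10m]
```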
Comments

This is #398, there's not much you can do here until this is resolved.
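Until then, a common way to be alerted when a metric disappears is absent(); note that without staleness handling it only fires once the lookback window has passed, and the underlying issue was only fully addressed by the staleness handling that later shipped in Prometheus 2.0. A sketch, with the metric name as a placeholder:

```
# Returns a single 1-valued sample when no series named
# my_service_metric exists; on pre-2.0 versions this still lags by up
# to the 5-minute staleness window.
absent(my_service_metric)
```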
Thanks for the report. As Brian said, this is well known and covered by another issue. So closing here.
fabxc closed this Jul 14, 2016
Thanks, will follow #398
lock bot commented Mar 24, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
lock bot locked and limited conversation to collaborators Mar 24, 2019