Dashboards flatline and frequent WAL truncation #3489

smd1000 commented Nov 17, 2017 (edited)

What did you do?
Viewing dashboards.
What did you expect to see?
Datapoints.
What did you see instead? Under which circumstances?
Queries flatline. I also see frequent block compaction.
Environment
amzn-linux, Linux 4.9.58-18.51.amzn1.x86_64 x86_64
prometheus, version 2.0.0 (branch: HEAD, revision: 0a74f98)
build user: root@615b82cb36b6
build date: 20171108-07:11:59
go version: go1.9.2
alertmanager, version 0.9.1 (branch: HEAD, revision: 9f5f4b2a516d35cfaf196530b277f1d109254569)
build user: root@3b87d661c3dd
build date: 20170929-12:59:03
go version: go1.9
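Since the report mentions frequent block compaction alongside the flatlining, note that Prometheus 2.x exposes TSDB metrics about its own compactions on its /metrics endpoint, so the compaction frequency can be graphed directly; a small sketch (assuming Prometheus scrapes itself):

```
# Compactions per hour; with default settings this should work out
# to roughly one compaction every 2 hours.
rate(prometheus_tsdb_compactions_total[6h]) * 3600

# Failed compactions over the last day; this should stay at 0.
increase(prometheus_tsdb_compactions_failed_total[1d])
```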
Comments
Hi, I'm not sure what you mean by "dashboards flatline", but the compactions are supposed to run every 2 hours and, according to your logs, they are running fine. Could you check whether the scrapes are succeeding and whether the targets are returning the right data?
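A quick way to verify scrape health from the Prometheus side is the synthetic `up` metric, which is 1 when the last scrape of a target succeeded and 0 when it failed. A minimal sketch (the job name is a placeholder):

```
# Per-target scrape health: 1 = last scrape succeeded, 0 = it failed.
up

# Only the targets whose scrapes are currently failing.
up{job="my-app"} == 0

# Fraction of successful scrapes per target over the last day.
avg_over_time(up{job="my-app"}[1d])
```

The Status > Targets page in the Prometheus UI shows the same information together with the last scrape error for each target.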
Closing this as it doesn't look like an issue with Prometheus and looks more like a configuration error / usage question. Please feel free to re-open if you think this is a Prometheus bug.
gouthamve closed this Nov 17, 2017
Does anything in the configuration I pasted above look incorrect? I also rarely receive scrape errors, and their timestamps do not coincide with the flatlining. I posted this here because we have not previously seen behavior like this in Prometheus over the last 3-4 months.
anthu commented Nov 17, 2017
You can configure how "null" values are handled in Grafana. To me it looks like your application is either not providing this metric for some time or returning the same value during that period (a metric-update issue on the app side, or no requests at all), and Grafana is simply connecting the datapoints according to your "null value" handling configuration. What does the graph look like if you zoom into this flat time range?
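For reference, in Grafana's classic graph panel this setting is stored in the panel JSON as `nullPointMode`, with the values "null", "connected", and "null as zero". A minimal sketch of the relevant fragment (panel title and query are placeholders; the exact layout varies by Grafana version):

```json
{
  "title": "Requests",
  "type": "graph",
  "nullPointMode": "null",
  "targets": [
    { "expr": "rate(http_requests_total[5m])" }
  ]
}
```

With "connected", Grafana draws a straight line across gaps in the data, which can look exactly like a flatline even when Prometheus has no samples for that range.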
It looks like it's returning the same metric value. The null handling option is not set to "connected".
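To distinguish "the target keeps reporting the same value" from "Grafana is interpolating across missing samples", the number of value changes can be compared against the raw sample count over the flat window; a sketch, with the metric name as a placeholder:

```
# 0 here means every sample in the last hour had the same value.
changes(my_metric[1h])

# Number of raw samples ingested in the same window; a 15s scrape
# interval should yield roughly 240 samples per hour.
count_over_time(my_metric[1h])
```

If `count_over_time` is close to its expected value while `changes` is 0, the application really is exporting a constant value, and the problem is upstream of Prometheus.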
ranbochen referenced this issue Dec 28, 2017: prometheus drop many metrics sample after restart #3632 (open)
andrey-kozyrev commented Mar 5, 2018
Same problem for me. The lines stay flat for some time.
My issue ended up being rate limiting from a third party. Having the exporter scrape at a slightly longer interval resolved the issue.

On Mar 5, 2018 5:49 AM, andrey-kozyrev wrote:

> Same problem for me. Flat lines go for some time.
> Logs:
> pms_1 | level=debug ts=2018-03-05T13:53:42.30764359Z caller=scrape.go:676 component="scrape manager" scrape_pool=finagle target=http://akz.local:20001/admin/prometheusMetrics msg="Scrape failed" err="context deadline exceeded"
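The "context deadline exceeded" error in the quoted log means the scrape ran into its timeout. Backing off as described above is a per-job setting in prometheus.yml; a minimal sketch reusing the job name and target from the log (adjust the values to your rate limit):

```yaml
scrape_configs:
  - job_name: finagle
    scrape_interval: 60s   # scrape less often to stay under the third-party rate limit
    scrape_timeout: 30s    # must not exceed scrape_interval
    metrics_path: /admin/prometheusMetrics
    static_configs:
      - targets: ['akz.local:20001']
```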
lock bot commented Mar 22, 2019
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
