Excessive WAL and block overlap #5476
Comments
@mknapphrt It depends on how many segments you have, and I suspect quite a lot: the listing you pasted shows many segments covering just a 1-2 minute period. When the TSDB head drops its oldest 2h of data after a block is cut, at most the first 1/3 of the WAL segment files are removed: a checkpoint is created from the records in those segments for series that are still active, and then the segment files themselves are deleted. https://github.com/prometheus/tsdb/blob/master/head.go#L556-L561
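To make the 1/3 rule concrete, here is a minimal Go sketch of the segment-selection arithmetic at the linked head.go lines. The function name and standalone structure are my own; this paraphrases the logic rather than copying the tsdb source verbatim:

```go
package main

import "fmt"

// truncatableRange sketches the selection logic from the linked head.go
// lines: the newest segment is never touched, and only the lowest third
// of the remaining segment range is checkpointed and then deleted.
// Illustrative paraphrase only, not the verbatim tsdb source.
func truncatableRange(first, last int) (from, to int, ok bool) {
	last-- // never consider the segment currently being written
	if last < 0 || last <= first {
		return 0, 0, false // too few segments to be worth truncating
	}
	to = first + (last-first)/3 // keep the upper two thirds of segments
	if to <= first {
		return 0, 0, false
	}
	return first, to, true
}

func main() {
	// With segments 0..30 on disk, only segments 0..9 are candidates
	// for checkpointing and removal; the rest stay in the WAL.
	from, to, ok := truncatableRange(0, 30)
	fmt.Println(from, to, ok) // prints: 0 9 true
}
```

This is why several hours of WAL can legitimately overlap with persisted blocks: each truncation pass removes only the oldest third of segments, so overlap shrinks gradually rather than all at once.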
Note: I'm assuming you haven't messed with any of the tsdb flags :) If you have, please let us know.
krasi-georgiev added the component/local storage and kind/question labels on Apr 17, 2019
mknapphrt commented on Apr 17, 2019
Bug Report
(I don't know if this is really a bug or just a misunderstanding of WAL truncation)
What did you do?
No changes were made; just running Prometheus.
What did you expect to see?
When head compaction is triggered and blocks are created, the WAL data that now overlaps with those blocks would be deleted.
What did you see instead? Under which circumstances?
There are several hours of WAL data still overlapping with the blocks. Taken at the same time:
Should there be this much overlap? Is there a flag or setting that controls how much overlap there is? I don't really see the point of this redundancy in data storage. Any help or suggestions would be appreciated. Thanks!
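One way to inspect the overlap concretely is to compare each block's time range against the WAL segment files. A minimal sketch, assuming the on-disk layout Prometheus uses (block directories each containing a meta.json, plus a wal/ subdirectory); the data-directory path and the output format are illustrative assumptions:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
	"time"
)

// blockMeta holds the time-range fields of a block's meta.json;
// the JSON field names match the on-disk format.
type blockMeta struct {
	MinTime int64 `json:"minTime"` // milliseconds since epoch
	MaxTime int64 `json:"maxTime"`
}

func main() {
	dataDir := "/prometheus" // assumed data directory; adjust as needed

	// Print the time range covered by each persisted block.
	metas, _ := filepath.Glob(filepath.Join(dataDir, "*", "meta.json"))
	for _, m := range metas {
		raw, err := os.ReadFile(m)
		if err != nil {
			continue
		}
		var meta blockMeta
		if err := json.Unmarshal(raw, &meta); err != nil {
			continue
		}
		fmt.Printf("block %s: %s to %s\n", filepath.Dir(m),
			time.UnixMilli(meta.MinTime).UTC().Format(time.RFC3339),
			time.UnixMilli(meta.MaxTime).UTC().Format(time.RFC3339))
	}

	// List WAL segments with their modification times; a segment whose
	// mtime falls inside a block's range holds overlapping data.
	segs, _ := filepath.Glob(filepath.Join(dataDir, "wal", "*"))
	for _, s := range segs {
		fi, err := os.Stat(s)
		if err != nil || fi.IsDir() {
			continue
		}
		fmt.Printf("wal %s: modified %s\n", filepath.Base(s),
			fi.ModTime().UTC().Format(time.RFC3339))
	}
}
```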
Environment