Wide range of timestamps leads to running out of file descriptors #2725
brian-brazil commented May 16, 2017

I just fired up a Prometheus 2.0 with a fresh data directory. I was manually specifying timestamps and specified a timestamp in 1970 by mistake. This led to 1018 directories being created in the data directory, and did not end well:

This should be handled more gracefully.
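For context on why a single bad timestamp is this damaging: the storage cuts blocks of a fixed time range, so the number of blocks grows linearly with the span between the oldest and newest sample. A minimal sketch of that relationship (the 2h block range and the `blocksForSpan` helper are assumptions for illustration, not the actual tsdb code; the real issue produced 1018 directories, so the block ranges in play were evidently larger, but the growth is linear in the span either way):

```go
package main

import (
	"fmt"
	"time"
)

// blocksForSpan estimates how many fixed-range blocks a given
// timestamp span produces. Hypothetical helper for illustration.
func blocksForSpan(minT, maxT time.Time, blockRange time.Duration) int64 {
	span := maxT.Sub(minT)
	return int64(span/blockRange) + 1
}

func main() {
	blockRange := 2 * time.Hour // assumed block range

	// Normal case: a day's worth of fresh data.
	now := time.Date(2017, 5, 16, 0, 0, 0, 0, time.UTC)
	fmt.Println(blocksForSpan(now.Add(-24*time.Hour), now, blockRange)) // 13

	// One sample mistakenly stamped 1970: the span explodes, and so
	// does the block count, along with the open FDs per block.
	epoch := time.Unix(0, 0).UTC()
	fmt.Println(blocksForSpan(epoch, now, blockRange)) // ~207,000
}
```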
brian-brazil added the dev-2.0 and kind/bug labels on May 16, 2017
Yeah, the initial invariant was that all blocks must be non-overlapping and also have no gaps. I realised pretty early on that this would cause issues, and prometheus/tsdb#80 should kill that idea for good. I'm not sure, though, whether we really want to handle a case like the one described here at all. The data would be deleted right afterwards anyway due to retention, and even if not, our limited append window means it would never really work.
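For illustration, a check for that original invariant might look roughly like the sketch below; the `Block` type and field names are assumptions, not the actual tsdb types, and the no-gaps half is exactly what prometheus/tsdb#80 drops:

```go
package tsdbcheck

import "fmt"

// Block is a hypothetical stand-in for a tsdb block's time range;
// timestamps are in milliseconds and the range is [MinTime, MaxTime).
type Block struct {
	MinTime, MaxTime int64
}

// validateBlocks enforces the original invariant described above:
// blocks sorted by time, non-overlapping, and with no gaps between
// them. Sketch only; the gap check is the part being abandoned.
func validateBlocks(blocks []Block) error {
	for i := 1; i < len(blocks); i++ {
		prev, cur := blocks[i-1], blocks[i]
		switch {
		case cur.MinTime < prev.MaxTime:
			return fmt.Errorf("blocks %d and %d overlap", i-1, i)
		case cur.MinTime > prev.MaxTime:
			return fmt.Errorf("gap between blocks %d and %d", i-1, i)
		}
	}
	return nil
}
```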
The issue here is more that we ran out of FDs.
Yes, because it created 1080 blocks, all of which it kept open. In a practical setup that applies the guards I mentioned above, this can no longer happen. The maximum block size is 10% of the retention time range by default. Assuming the most recent 10% time range holds 20 less-compacted blocks, that puts us at 29 blocks in total, with a handful of open FDs each. That should be fine in practice; within reasonable limits, FDs are not an expensive resource.
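The arithmetic in that comment, spelled out as a sketch. The 9 compacted blocks and 20 recent blocks follow from the assumptions stated above; the 5 FDs per block is my own assumption for "a handful":

```go
package main

import "fmt"

func main() {
	// From the comment above: max block size is 10% of the retention
	// time range, so the older 90% of retention holds at most 9 fully
	// compacted blocks, while the newest 10% window may still hold
	// ~20 less-compacted blocks.
	const (
		compactedBlocks = 9
		recentBlocks    = 20
		fdsPerBlock     = 5 // "a handful" of open FDs per block (assumed)
	)
	total := compactedBlocks + recentBlocks
	fmt.Printf("%d blocks, ~%d open FDs\n", total, total*fdsPerBlock)
	// Output: 29 blocks, ~145 open FDs
}
```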
brian-brazil closed this on May 24, 2017
lock bot commented Mar 23, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.