Carry over pre-allocation sizes across TSDB blocks and targets #2784
Comments
Closing as this no longer happens in 2.0, since we don't cut blocks anymore.

gouthamve closed this Nov 21, 2017
lock bot commented Mar 23, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

lock bot locked and limited conversation to collaborators Mar 23, 2019
fabxc commented May 31, 2017
We see in #2774 that new TSDB blocks being cut as well as pods being scaled cause significant memory spikes. (Seems we regressed a bit in the former since alpha.0 – not exactly sure why.)
One possible reason is that we allocate maps and slices that converge to their maximum size almost immediately. Growing them on the way there makes a lot of intermediate allocations that turn into garbage shortly after.
For slices, this is easy to measure (see below). For maps, which account for most of those structures, I'm not entirely sure to what extent they behave the same or whether they just dynamically add new buckets.
For slices, however, growth only doubles up to 1KB and slows down after that, causing even more garbage to be generated. By the time a slice has grown to 200k elements, about 860k elements' worth of garbage have been created along the way: https://play.golang.org/p/KcvqVdZjR1
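For reference, a rough sketch of that kind of measurement (not the exact playground snippet) just sums up the capacities of the backing arrays that get discarded while appending one element at a time:

```go
package main

import "fmt"

func main() {
	var s []int
	wasted := 0 // elements' worth of backing arrays discarded on reallocation
	for i := 0; i < 200000; i++ {
		oldCap := cap(s)
		s = append(s, i)
		if cap(s) != oldCap {
			// append allocated a new backing array; the old one becomes garbage
			wasted += oldCap
		}
	}
	fmt.Printf("final len=%d cap=%d, garbage ~%d elements\n", len(s), cap(s), wasted)
}
```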
We should try to pre-allocate on instantiation based on size info we have from previous blocks and targets and see whether that reduces the spikes.
@gouthamve
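For illustration, a minimal sketch of what carrying the sizes over could look like; the type and field names here are hypothetical, not the actual tsdb ones:

```go
package main

import "fmt"

// Sketch only: sizeHints, block and seriesEntry are hypothetical names,
// not the real tsdb types.
type seriesEntry struct{}

type sizeHints struct {
	series  int // series count the previous block ended up with
	samples int // sample slice capacity the previous block ended up with
}

type block struct {
	series  map[string]*seriesEntry
	samples []float64
}

// newBlock pre-allocates based on the sizes the previous block converged to,
// so the structures don't have to grow (and shed garbage) all over again.
func newBlock(h sizeHints) *block {
	return &block{
		series:  make(map[string]*seriesEntry, h.series),
		samples: make([]float64, 0, h.samples),
	}
}

// hints records the final sizes when a block is cut, to seed the next one.
func (b *block) hints() sizeHints {
	return sizeHints{series: len(b.series), samples: cap(b.samples)}
}

func main() {
	prev := &block{series: map[string]*seriesEntry{"up": {}}, samples: make([]float64, 0, 1024)}
	next := newBlock(prev.hints())
	fmt.Println(len(next.series), cap(next.samples)) // 0 1024
}
```

Presumably the same hint could be kept per target as well, so a freshly scheduled scrape starts out with roughly the capacity its predecessor ended with.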