
Carry over pre-allocation sizes across TSDB blocks and targets #2784

Closed
fabxc opened this Issue May 31, 2017 · 2 comments


fabxc commented May 31, 2017

We see in #2774 that cutting new TSDB blocks, as well as scaling pods, causes significant memory spikes. (We seem to have regressed a bit on the former since alpha.0 – not exactly sure why.)

One possible reason is that we allocate maps and slices that converge to their maximum size almost immediately. While growing these structures, we may make many allocations that become garbage shortly after.
For slices, this is easy to measure (see below). For maps, which account for most of these structures, I'm not entirely sure whether growth behaves the same way or whether new buckets are simply added dynamically.

For slices, however, capacity only doubles up to 1KB and grows more slowly after that, so more garbage is generated. By the time 200k elements have been appended, about 860k elements' worth of garbage has been created: https://play.golang.org/p/KcvqVdZjR1

We should try pre-allocating on instantiation, based on size information we have from previous blocks and targets, and see whether that reduces the spikes.

gouthamve commented Nov 21, 2017

Closing, as this no longer happens in 2.0 since we don't cut blocks anymore.

@gouthamve gouthamve closed this Nov 21, 2017

lock bot commented Mar 23, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

@lock lock bot locked and limited conversation to collaborators Mar 23, 2019
