
Keep memory pool of scrape caches per target set #3048

Closed
fabxc opened this Issue Aug 10, 2017 · 1 comment

fabxc commented Aug 10, 2017

On SD updates we abandon all disappeared scrape loops. On reload we abandon all scrape loops.
This causes scrape caches to be fully rebuilt, which in turn causes moderate memory spikes.

We should be able to avoid this to some degree by keeping a memory pool of scrape caches that can then be reused. Doing it per target set sounds intuitively correct, as we are likely dealing with uniform sets of series there.
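
A minimal sketch of what such a pool might look like, assuming hypothetical names (`scrapeCache`, `cachePool`, `get`/`put`) rather than the actual Prometheus scrape types. The point is that a released cache keeps its allocated map capacity, so loops in the same target set can reuse it instead of reallocating from scratch:

```go
package main

import "sync"

// scrapeCache stands in for the per-loop cache that is expensive to
// rebuild (e.g. series string -> ref mappings). Hypothetical type.
type scrapeCache struct {
	series map[string]uint64
}

// reset clears entries but retains the map's allocated capacity,
// which is what makes reuse cheaper than rebuilding.
func (c *scrapeCache) reset() {
	for k := range c.series {
		delete(c.series, k)
	}
}

// cachePool keeps one sync.Pool per target set, so only loops within
// the same set (which likely expose uniform series) share caches.
type cachePool struct {
	mtx   sync.Mutex
	pools map[string]*sync.Pool
}

func newCachePool() *cachePool {
	return &cachePool{pools: map[string]*sync.Pool{}}
}

// get returns a cache for the given target set, reusing a released
// one if available.
func (p *cachePool) get(targetSet string) *scrapeCache {
	p.mtx.Lock()
	defer p.mtx.Unlock()
	pool, ok := p.pools[targetSet]
	if !ok {
		pool = &sync.Pool{New: func() interface{} {
			return &scrapeCache{series: map[string]uint64{}}
		}}
		p.pools[targetSet] = pool
	}
	return pool.Get().(*scrapeCache)
}

// put hands a cache back when its scrape loop is abandoned
// (SD update or reload).
func (p *cachePool) put(targetSet string, c *scrapeCache) {
	p.mtx.Lock()
	pool, ok := p.pools[targetSet]
	p.mtx.Unlock()
	if !ok {
		return
	}
	c.reset()
	pool.Put(c)
}
```

Pooling per target set means a reused cache's retained capacity roughly matches the series cardinality of the loops that will pick it up, which is what makes the per-target-set granularity attractive.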

lock bot commented Mar 23, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

lock bot locked and limited conversation to collaborators Mar 23, 2019
