
Prometheus memory problem is very serious #1970

Closed
barnettZQG opened this Issue Sep 9, 2016 · 6 comments


barnettZQG commented Sep 9, 2016

[Screenshot: container resource-monitoring graph showing Prometheus memory usage climbing over ~12 hours]
As the screenshot shows, this is the resource monitoring of a Prometheus container over 12 hours. Memory usage keeps growing, so that after a while the process is killed by the system. I have been seeing this behavior for nearly a month now. Here is my memory-related configuration:
storage.local.max-chunks-to-persist: 20971520
storage.local.memory-chunks: 41943040
storage.local.num-fingerprint-mutexes: 81920
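
These are passed as command-line flags when starting Prometheus 1.x, roughly like this (the -config.file path here is just a placeholder):

prometheus \
  -config.file=prometheus.yml \
  -storage.local.max-chunks-to-persist=20971520 \
  -storage.local.memory-chunks=41943040 \
  -storage.local.num-fingerprint-mutexes=81920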

I raised a related issue about this before, but it was not resolved.

Please help! My Prometheus process keeps getting killed.


barnettZQG commented Sep 9, 2016

[Screenshot: additional monitoring graph]


beorn7 commented Sep 9, 2016

With 41943040 configured memory chunks, I'd expect a steady-state RAM consumption around 120GiB.

Only if you go beyond that do you need to worry that something is wrong.

See https://prometheus.io/docs/operating/storage/ : "As a rule of thumb, you should have at least three times more RAM available than needed by the memory chunks alone." To play it safe, go for 5x, i.e. if you have e.g. 64GiB of RAM, I'd go with -storage.local.memory-chunks=12000000 for a start.
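
Back-of-the-envelope math, assuming the 1 KiB chunk size of the 1.x local storage:

# Each chunk is 1024 bytes, so the configured chunks alone come to:
echo $(( 41943040 * 1024 / (1024 * 1024 * 1024) ))   # 40 (GiB of chunk data)
# With the 3x rule of thumb that is ~120 GiB of steady-state RAM.
# For a 64 GiB host with a 5x safety factor, the chunk budget is:
echo $(( 64 * 1024 * 1024 / 5 ))                     # ~13.4M chunks; 12000000 is a safe round number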


runningman84 commented Sep 14, 2016

What about auto-detecting the available memory and assigning only 1/5 of it as the default value?
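
A rough sketch of what that could look like on Linux, reading MemAvailable (reported in KiB) from /proc/meminfo and budgeting a fifth of it for 1 KiB chunks; the flag name is the 1.x one:

awk '/MemAvailable/ { printf "-storage.local.memory-chunks=%d\n", $2 / 5 }' /proc/meminfo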


juliusv commented Sep 14, 2016

@runningman84 That would be ideal, but it's not easy to do. That is exactly what #455 is about.


beorn7 commented Sep 15, 2016

Closing as this appears to be answered, and the rest is handled in #455.

beorn7 closed this Sep 15, 2016
