Why not to use shared memory for in RAM DB #2225
Comments
I'd expect this to require a lot of effort to get right, and then additional operational effort to run. And it only covers a relatively rare corner case (i.e. where a SIGHUP is not enough, but it's not a complete restart of the machine/container running Prometheus).
brian-brazil closed this Mar 27, 2017
lock bot commented Mar 23, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
lock bot locked and limited conversation to collaborators Mar 23, 2019
onorua commented Nov 28, 2016
The problem
As soon as we increased memory to around 32 GB or more, we noticed a quite long startup time due to journal restoration, even when we sent the TERM signal properly and waited for a normal shutdown. Startup takes a couple of minutes (up to 10, actually), which is noticeable on graphs.
That is because Prometheus has to load all the data that is meant to be in RAM back into memory from the journals on disk.
shared memory proposal
It is possible to attach shared memory from another process, which would speed up startup: instead of loading journals from disk into RAM, Prometheus could attach memory that is already populated.
This would of course require data-consistency checks on startup, with a fallback to the current restoration from journals.
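The idea above (attach a named shared-memory segment if it exists and verifies, otherwise fall back to journal replay) could be sketched roughly as follows. This is only an illustration of the mechanism using Python's `multiprocessing.shared_memory`, not Prometheus code; the segment name, the checksum-plus-length layout, and the `restore_from_journal` callback are all hypothetical assumptions.

```python
# Sketch of the proposal: a holder process keeps the hot data in a named
# shared-memory segment; a restarting reader attaches to it instead of
# replaying journals from disk. All names here are illustrative.
import hashlib
from multiprocessing import shared_memory

SEGMENT_NAME = "prom_head_block"  # hypothetical segment name
DIGEST_SIZE = 32                  # SHA-256 digest stored before the payload
LEN_SIZE = 8                      # payload length, big-endian, after digest


def publish(payload: bytes) -> shared_memory.SharedMemory:
    """Writer side: store [digest][length][payload] in a named segment."""
    shm = shared_memory.SharedMemory(
        name=SEGMENT_NAME, create=True,
        size=DIGEST_SIZE + LEN_SIZE + len(payload))
    shm.buf[:DIGEST_SIZE] = hashlib.sha256(payload).digest()
    shm.buf[DIGEST_SIZE:DIGEST_SIZE + LEN_SIZE] = len(payload).to_bytes(8, "big")
    shm.buf[DIGEST_SIZE + LEN_SIZE:] = payload
    return shm


def attach_or_fallback(restore_from_journal):
    """Reader side: attach and verify; fall back to journal replay on failure."""
    try:
        shm = shared_memory.SharedMemory(name=SEGMENT_NAME)
    except FileNotFoundError:
        return restore_from_journal()      # no segment: cold start from disk
    try:
        n = int.from_bytes(bytes(shm.buf[DIGEST_SIZE:DIGEST_SIZE + LEN_SIZE]), "big")
        payload = bytes(shm.buf[DIGEST_SIZE + LEN_SIZE:DIGEST_SIZE + LEN_SIZE + n])
        if hashlib.sha256(payload).digest() != bytes(shm.buf[:DIGEST_SIZE]):
            return restore_from_journal()  # corrupt segment: fall back
        return payload                     # fast path: no disk reads
    finally:
        shm.close()


# Fast path: the reader gets the shared copy without touching disk.
writer = publish(b"head block samples")
data = attach_or_fallback(lambda: b"replayed from journal")
writer.close()
writer.unlink()
```

The checksum is what makes the fallback safe: if the holder process died mid-write or the segment is stale, verification fails and startup degrades to the ordinary (slow) journal restoration rather than serving corrupt data.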
What do you think about this functionality?