Restoring large databases eats up all the RAM #714

When trying to restore a large backup, Badger starts taking lots of memory, eventually crashing with OOM.

Comments
Heap dump + binary from a run that used around 20-30G of RAM: https://ipfs.io/ipfs/QmWxZ2L4vUQX5vBuq7hHgipNDnf3FQFdNvNnPhrAKMWApy (profile: https://ipfs.io/ipfs/QmWxZ2L4vUQX5vBuq7hHgipNDnf3FQFdNvNnPhrAKMWApy/profile001.svg)
I also tried setting
Based on the heap, it is using around 5GB of RAM, which is not extreme but not ideal either. I think what's happening is that the restore is going full throttle, without any bounds: it keeps creating writes without considering how many are already in flight. We can add a throttle so it doesn't build up too many pending writes. Changing this to a select case over a buffered channel would be the easiest way to throttle it (line 152 at commit 3196cc1).
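For illustration, a minimal sketch of the buffered-channel idea (maxPendingWrites, restore, and write are made-up names here, not badger's actual restore code):

```go
// Sketch only: bounding in-flight writes with a buffered channel used as a
// counting semaphore. maxPendingWrites, restore, and write are illustrative
// names, not badger's actual backup/restore code.
package backup

import "sync"

const maxPendingWrites = 16 // cap on concurrent write batches; tune to available RAM

func restore(batches <-chan []byte, write func([]byte) error) error {
	sem := make(chan struct{}, maxPendingWrites) // buffered channel = throttle
	var wg sync.WaitGroup

	var mu sync.Mutex
	var firstErr error

	for b := range batches {
		sem <- struct{}{} // blocks once maxPendingWrites writes are in flight
		wg.Add(1)
		go func(b []byte) {
			defer wg.Done()
			defer func() { <-sem }() // free the slot when this write finishes
			if err := write(b); err != nil {
				mu.Lock()
				if firstErr == nil {
					firstErr = err
				}
				mu.Unlock()
			}
		}(b)
	}
	wg.Wait()
	return firstErr
}
```

The buffered channel acts as a counting semaphore: the send blocks once maxPendingWrites writes are pending, so memory stays bounded no matter how fast the backup file can be read.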
There's also the Throttle utility I wrote, available in the y package, which can do this in a cleaner way.
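A rough sketch of the same restore loop using y.Throttle instead, assuming the NewThrottle / Do / Done / Finish API from the y package (worth double-checking the exact signatures in y/y.go):

```go
// Sketch only: bounding in-flight writes with badger's y.Throttle.
// restoreWithThrottle and write are illustrative names, not badger code.
package backup

import "github.com/dgraph-io/badger/y"

func restoreWithThrottle(batches <-chan []byte, write func([]byte) error) error {
	t := y.NewThrottle(16) // allow at most 16 writes in flight

	for b := range batches {
		// Do blocks until one of the 16 slots is free (or returns an error
		// if a previous write already failed).
		if err := t.Do(); err != nil {
			return err
		}
		go func(b []byte) {
			t.Done(write(b)) // report the result and release the slot
		}(b)
	}
	// Finish waits for all outstanding writes and returns the first error, if any.
	return t.Finish()
}
```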
Might also be worth noting: this restore ran to/from NFS storage backed by ZFS HDDs, and the backup file might have been cached in RAM, so writes could be much slower than reads.
Fixed in #762