
Limit RocksDB memory requirements #43

Open · martinboehm opened this issue Aug 28, 2018 · 13 comments

@martinboehm commented Aug 28, 2018

Especially during the initial index import, Blockbook's RocksDB memory usage is large and unpredictable. Find a way (options) to limit the memory usage.

@romanz commented Aug 29, 2018

I can suggest disabling auto-compactions and using smaller values for block_size and max_open_files (I had a similar issue in romanz/electrs#30).
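
A minimal sketch of those settings, assuming the tecbot/gorocksdb bindings that Blockbook builds on (openLowMemDB and the concrete values are mine, purely illustrative):

package main

import "github.com/tecbot/gorocksdb"

// openLowMemDB opens a RocksDB database with the memory-oriented
// options suggested above (hypothetical helper).
func openLowMemDB(path string) (*gorocksdb.DB, error) {
	bbto := gorocksdb.NewDefaultBlockBasedTableOptions()
	bbto.SetBlockSize(16 * 1024) // block_size

	opts := gorocksdb.NewDefaultOptions()
	opts.SetCreateIfMissing(true)
	opts.SetBlockBasedTableFactory(bbto)
	opts.SetMaxOpenFiles(256)            // keep max_open_files small
	opts.SetDisableAutoCompactions(true) // compact manually after the import

	return gorocksdb.OpenDb(opts, path)
}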

@martinboehm commented Sep 5, 2018

Thank you very much for your suggestions. I am going to try some of them.

I am afraid that with auto-compactions disabled, the DB size will grow considerably, because we overwrite data in the database.

As I understand the RocksDB documentation, larger values of block_size should give a smaller memory footprint. You are certainly using a very large block size (512 kB) compared to our 16 kB or 32 kB.

After I experiment with the options, I will post my findings here.

@yura-pakhuchiy commented Oct 16, 2018

I was able to sync Blockbook from scratch for GRS mainnet on a 2 GB VPS by restarting the daemon every minute:

while true; do date; systemctl restart blockbook-groestlcoin; sleep 60; done

Without restarts it was killed by the OOM killer after several minutes. So my guess is that GC does not happen often enough, resulting in much higher memory requirements during the initial sync. I suggest forcing GC every 1,000 or 10,000 blocks to reduce memory requirements.

@yura-pakhuchiy commented Oct 16, 2018

Not that easy. I've tried calling runtime.GC() after every 1k blocks. It does not really help. :-(
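
Roughly what I tried (maybeForceGC is a made-up name). Note that runtime.GC() does not return freed pages to the OS, while debug.FreeOSMemory() does; and neither touches RocksDB's memory, which is allocated in C outside the Go heap:

import "runtime/debug"

const gcEvery = 1000 // blocks

// maybeForceGC forces a collection every gcEvery blocks (hypothetical helper).
func maybeForceGC(height uint32) {
	if height%gcEvery == 0 {
		// Runs a GC and returns as much memory to the OS as possible.
		debug.FreeOSMemory()
	}
}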

@martinboehm commented Oct 16, 2018

A good test of the graceful shutdown procedure :)

Actually, there are probably better ways to reduce the memory footprint of the initial sync. Unfortunately, we have not had time to document them yet:

  1. Disable the RocksDB cache with the parameter -dbcache=0; the default size is 500 MB.
  2. Run Blockbook with the parameter -workers=1. This disables bulk import mode, which caches a lot of data in memory (outside of the RocksDB cache). It will run about twice as slowly, but especially for smaller blockchains that is no problem at all (see the example command line after this list).
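
For example (the paths are illustrative; the two flags are the important part):

./blockbook -sync \
  -blockchaincfg=build/blockchaincfg.json \
  -datadir=./db \
  -dbcache=0 \
  -workers=1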

@yura-pakhuchiy commented Oct 16, 2018

Martin, thank you, these options helped to reduce memory usage. Still, Blockbook caches enough that I had to restart it once in the middle of the sync to avoid the OOM killer.

@martinboehm commented Oct 16, 2018

It is probably not Blockbook itself but RocksDB that is taking the memory. This is exactly why this issue exists: we are not able to control RocksDB's memory usage as much as we would like.
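
It is at least possible to see where the memory goes: RocksDB exposes its main memory consumers as properties. A sketch using gorocksdb's GetProperty (here db is assumed to be the open *gorocksdb.DB handle and glog the logger Blockbook already uses):

// Log the main RocksDB memory consumers.
for _, p := range []string{
	"rocksdb.cur-size-all-mem-tables",    // memtables
	"rocksdb.estimate-table-readers-mem", // index and filter blocks
	"rocksdb.block-cache-usage",          // block cache
} {
	glog.Infof("%s = %s", p, db.GetProperty(p))
}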

@wakiyamap commented Oct 27, 2018

If it is only the initial synchronization, is creating a swap file a good solution?
I got through my initial sync using a 4 GB swap file.
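
The usual procedure, in case it helps someone:

sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# after the initial sync is done:
sudo swapoff /swapfile
sudo rm /swapfile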

@pavoltravnik commented Jan 13, 2019

This issue is really fatal. I have 16 GB RAM and 4 cores, and it always fails anyway.

Jan 08 01:48:45 instance-5 blockbook[22720]: E0108 01:48:29.950974   22720 sync.go:293] getBlockWorker 0 connect block error hash 4bf49e2661325894a8287a2dd2a49c22e17a401f5d77cc
Jan 08 01:48:45 instance-5 blockbook[22720]: E0108 01:48:36.717507   22720 sync.go:330] GetBlockHash error height 1521750: Post http://127.0.0.1:8034: net/http: request cancele
Jan 08 01:52:32 instance-5 blockbook[22720]: E0108 01:52:32.117822   22720 sync.go:330] GetBlockHash error height 1527204: Post http://127.0.0.1:8034: net/http: request cancele
Jan 08 02:04:34 instance-5 systemd[1]: blockbook-litecoin.service: Main process exited, code=killed, status=9/KILL

@martinboehm commented Jan 13, 2019

16 GB RAM should be more than enough for Litecoin. Is it really a memory problem? Have you tried to run the initial import with the settings mentioned in this comment? Especially the flag -workers=1 will dramatically reduce the memory footprint of the initial import.

@pavoltravnik commented Jun 4, 2019

I can confirm that the parameters -workers=1 -dbcache=0 in the /lib/systemd/system/blockbook-litecoin.service file helped. Thank you a lot, @martinboehm.
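
For anyone finding this later: the flags go at the end of the ExecStart line of the unit file. A hypothetical excerpt (the binary path and the other flags will differ per install; only the two flags are the point here):

[Service]
ExecStart=/opt/coins/blockbook/litecoin/bin/blockbook ... -workers=1 -dbcache=0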

@kolya182 commented Dec 6, 2019

> I can confirm that the parameters -workers=1 -dbcache=0 in the /lib/systemd/system/blockbook-litecoin.service file helped. Thank you a lot, @martinboehm.

I did:

sudo systemctl stop blockbook-dogecoin
vim /lib/systemd/system/blockbook-dogecoin.service  (changed -workers=1 => -workers=6)
sudo systemctl start blockbook-dogecoin

but ps aux | grep blockbook still shows one worker:

/opt/coins/blockbook/dogecoin/bin/blockbook -blockchaincfg=/opt/coins/blockbook/dogecoin/config/blockchaincfg.json -datadir=/opt/coins/data/dogecoin/blockbook/db -sync -internal=:9038 -public=:9138 -certfile=/opt/coins/blockbook/dogecoin/cert/blockbook -explorer= -log_dir=/opt/coins/blockbook/dogecoin/logs -resyncindexperiod=30011 -resyncmempoolperiod=2011 -workers=1 -dbcache=0

@martinboehm commented Dec 7, 2019

Hi, the workers are goroutines within the same process; you cannot see them using ps.
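
Also, the ps output above still shows -workers=1, so the edited unit file was probably never picked up: systemd re-reads unit files only after a daemon-reload:

sudo systemctl daemon-reload
sudo systemctl restart blockbook-dogecoin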
