
Is there any limitation of WriteBatch? #5938

Closed
derekbit opened this issue Oct 18, 2019 · 5 comments
@derekbit

Hi,
I want to delete multiple key-value pairs (several million to a billion pairs) atomically from RocksDB using the WriteBatch method.
Is there any limitation of the WriteBatch method, e.g. a maximum number of key-value pairs, memory usage, or anything else?
Thanks
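
For context, the pattern being asked about looks roughly like this (a minimal sketch; the database path and the key list are illustrative, not from the original report):

```cpp
#include <cassert>
#include <string>
#include <vector>

#include "rocksdb/db.h"
#include "rocksdb/write_batch.h"

int main() {
  rocksdb::DB* db = nullptr;
  rocksdb::Options options;
  options.create_if_missing = true;
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/writebatch_demo", &db);
  assert(s.ok());

  // Millions of keys in practice; three illustrative keys here.
  std::vector<std::string> keys_to_delete = {"k1", "k2", "k3"};

  // Every Delete in the batch is applied atomically by a single Write() call.
  rocksdb::WriteBatch batch;
  for (const auto& key : keys_to_delete) {
    batch.Delete(key);
  }
  s = db->Write(rocksdb::WriteOptions(), &batch);
  assert(s.ok());

  delete db;
  return 0;
}
```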

@BradBarnich

Would you be able to use DeleteRange?
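
For keys that form a contiguous range, DB::DeleteRange writes a single range tombstone covering [begin, end) instead of one tombstone per key. A minimal sketch, assuming the default column family and caller-supplied bounds:

```cpp
#include "rocksdb/db.h"

// Sketch: drop every key in [begin, end) with one range tombstone.
// The end key is exclusive, and the deletion is applied atomically.
rocksdb::Status DeleteKeyRange(rocksdb::DB* db,
                               const rocksdb::Slice& begin,
                               const rocksdb::Slice& end) {
  return db->DeleteRange(rocksdb::WriteOptions(),
                         db->DefaultColumnFamily(), begin, end);
}
```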

@siying (Contributor) commented Oct 22, 2019

I can't think of a limit before you run out of memory. But it's not good practice to insert a large write batch; it's hard to predict the performance impact.

@adamretter (Collaborator)

@siying any guidance on what constitutes "large"?

@siying (Contributor) commented Oct 22, 2019

@adamretter good question. I don't know. Think about it: you are writing one such entry to the WAL and inserting the keys into the memtable in one operation. In the meantime, we double the memory usage. Of course, the impact will be workload specific. Maybe some users don't care about slowing down other writes and have plenty of DRAM to waste. But I would be really cautious when one write batch is larger than a few MBs.

@derekbit (Author)

Thanks! We split the one big write batch into multiple small write batches to avoid the large memory usage.
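
A sketch of that chunking approach, assuming a cap of roughly 4 MB per batch (the threshold, helper name, and key container are illustrative). Note that each Write() is still atomic on its own, but the overall deletion is no longer a single atomic operation:

```cpp
#include <string>
#include <vector>

#include "rocksdb/db.h"
#include "rocksdb/write_batch.h"

// Sketch: split one huge delete into bounded-size write batches.
rocksdb::Status DeleteInChunks(rocksdb::DB* db,
                               const std::vector<std::string>& keys) {
  constexpr size_t kMaxBatchBytes = 4 << 20;  // assumed ~4 MB cap per batch
  rocksdb::WriteBatch batch;
  for (const auto& key : keys) {
    batch.Delete(key);
    if (batch.GetDataSize() >= kMaxBatchBytes) {
      rocksdb::Status s = db->Write(rocksdb::WriteOptions(), &batch);
      if (!s.ok()) return s;
      batch.Clear();
    }
  }
  // Write the final, partially filled batch (a no-op if it is empty).
  return db->Write(rocksdb::WriteOptions(), &batch);
}
```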
