Drastically decreasing read performance when a range of keys is deleted #610
Comments
Implement a solution to the issue described by google/leveldb#610
Hi, I may be hitting the same issue with frequent queries. Any update?
Why has Google failed to handle these known issues for such a long time?
Yes, the fix only shows that stopping early can greatly improve iteration time.
@pcmind totally! An internal iterator that is bounded in an inclusive key range would enable these early returns. 👏
Implement a possible solution for the issue google/leveldb#610
* format all documents according to contributor guidelines and specifications; use clang-format on/off to stop formatting when it makes excessively poor decisions
* format all tests as well, and mark blocks which change too much
Performance of iteration over a range of keys is drastically affected when multiple keys sharing a common prefix have previously been deleted.
The use case that reproduces this issue is as follows: write a large number of keys that share a common prefix, delete them all, and then iterate over that prefix.
I know that, as mentioned in issue #83, commit 748539c mitigates this issue. But as shown by the following example, the implemented solution does not completely mitigate it. This is especially relevant when using LevelDB with prefix searches (or mutable indexes).
I made a simple unit test to show the issue:
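A minimal sketch of such a test (the key names `deleted/`, `live/0`, the key count, and the database path are illustrative assumptions, not the original test):

```cpp
#include <chrono>
#include <cstdio>
#include <string>

#include "leveldb/db.h"

int main() {
  leveldb::DB* db;
  leveldb::Options options;
  options.create_if_missing = true;
  leveldb::Status s = leveldb::DB::Open(options, "/tmp/leveldb_610", &db);
  if (!s.ok()) {
    std::fprintf(stderr, "%s\n", s.ToString().c_str());
    return 1;
  }

  const int kCount = 100000;
  // Write many keys under a common prefix, plus a single live key after it.
  for (int i = 0; i < kCount; i++) {
    db->Put(leveldb::WriteOptions(), "deleted/" + std::to_string(i), "v");
  }
  db->Put(leveldb::WriteOptions(), "live/0", "v");
  // Delete the whole "deleted/" range, leaving a long run of tombstones.
  for (int i = 0; i < kCount; i++) {
    db->Delete(leveldb::WriteOptions(), "deleted/" + std::to_string(i));
  }

  // Time a prefix scan over "deleted/": the iterator has to step over every
  // tombstone before it discovers there is nothing left to return.
  auto start = std::chrono::steady_clock::now();
  leveldb::Iterator* it = db->NewIterator(leveldb::ReadOptions());
  int found = 0;
  for (it->Seek("deleted/");
       it->Valid() && it->key().starts_with("deleted/"); it->Next()) {
    found++;
  }
  auto us = std::chrono::duration_cast<std::chrono::microseconds>(
                std::chrono::steady_clock::now() - start)
                .count();
  std::printf("found %d keys in %lld us\n", found, (long long)us);

  delete it;
  delete db;
  return 0;
}
```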
Running this test, I get the following result:
The performance issue is due to the fact that db_iter knows nothing about the prefix being searched by the end user.
Adding something like the following (a sketch of the idea, where `prefix_` is a hypothetical member holding the searched prefix):
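```cpp
// Sketch of an early-exit check inside DBIter::FindNextUserEntry
// (db_iter.cc). `prefix_` is a hypothetical Slice member holding the
// prefix the caller is scanning; it is not part of the real DBIter.
if (!prefix_.empty() && !ikey.user_key.starts_with(prefix_)) {
  // The current key has left the searched prefix: stop here instead of
  // stepping over every remaining deletion tombstone.
  break;
}
```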
to db_iter.cc#L179 drastically improves performance:
Would it be nice to add an API that tells the iterator which prefix is being searched, so it can stop looking for more data when no matching keys remain?
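For illustration, one possible shape such an API could take; this is an assumption sketched here, not an existing LevelDB option (RocksDB ships a similar `ReadOptions::iterate_upper_bound`):

```cpp
// Hypothetical extension to leveldb::ReadOptions (sketch only):
struct ReadOptions {
  // ... existing fields (verify_checksums, fill_cache, snapshot) ...

  // If non-null, iterators created with these options report !Valid() as
  // soon as they reach a key >= *iterate_upper_bound, so they never scan
  // tombstones past the caller's range of interest.
  const leveldb::Slice* iterate_upper_bound = nullptr;
};
```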