I'm considering running BadgerDB on HDDs for a use case where I only insert items and occasionally scan the entire database. Given the limited random-access performance of an HDD, these scans would be many orders of magnitude more efficient if they could walk the value log sequentially, in file order.
Am I right that there isn't currently a way to do this, or did I miss something?
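For reference, this is roughly how I do a full scan today. It iterates in key order via the LSM tree, and my understanding is that each value lookup then follows a value pointer into the vlog, i.e. a random read on an HDD. (The import path/version and the DB path are just what I happen to use.)

```go
package main

import (
	"log"

	badger "github.com/dgraph-io/badger/v2"
)

// scanAll iterates the whole database in key order. Values stored in the
// value log are fetched via value pointers, which (as far as I understand)
// means reads jump around the vlog files rather than streaming through them.
func scanAll(db *badger.DB) error {
	return db.View(func(txn *badger.Txn) error {
		opts := badger.DefaultIteratorOptions
		opts.PrefetchValues = true // prefetching helps throughput, but reads still follow key order
		it := txn.NewIterator(opts)
		defer it.Close()

		for it.Rewind(); it.Valid(); it.Next() {
			item := it.Item()
			err := item.Value(func(val []byte) error {
				// process item.Key() / val here
				_ = val
				return nil
			})
			if err != nil {
				return err
			}
		}
		return nil
	})
}

func main() {
	db, err := badger.Open(badger.DefaultOptions("/tmp/badger-example"))
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if err := scanAll(db); err != nil {
		log.Fatal(err)
	}
}
```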
Could such an iterator be implemented easily in BadgerDB? I think so, based on what I know of the architecture, but I'm not familiar with the internals. A rough sketch of what I have in mind is below.
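To make the idea concrete, here is a very rough sketch, written as a standalone program rather than against Badger's internals. Everything in it is hypothetical: `Entry`, `decodeEntry`, and `scanValueLogs` are made-up names, `decodeEntry` is only a stub, and a real implementation inside Badger would reuse its existing vlog encoding and consult the LSM tree to decide whether an entry is still live. The only point is that all reads are large and sequential, file by file.

```go
package main

import (
	"bufio"
	"fmt"
	"io"
	"os"
	"path/filepath"
	"sort"
)

// Entry is a hypothetical decoded value-log entry.
type Entry struct {
	Key   []byte
	Value []byte
}

// decodeEntry is a placeholder for Badger's actual vlog entry codec, which I
// don't know; a real implementation would live inside Badger and reuse it.
func decodeEntry(r *bufio.Reader) (Entry, error) {
	return Entry{}, io.EOF // stub
}

// scanValueLogs walks every *.vlog file in dir in ascending name order
// (assuming the file names sort in file-ID order) and streams each decoded
// entry to fn. All reads are sequential, which is what an HDD is good at.
func scanValueLogs(dir string, fn func(Entry) error) error {
	files, err := filepath.Glob(filepath.Join(dir, "*.vlog"))
	if err != nil {
		return err
	}
	sort.Strings(files)

	for _, f := range files {
		fh, err := os.Open(f)
		if err != nil {
			return err
		}
		r := bufio.NewReaderSize(fh, 4<<20) // large buffered sequential reads
		for {
			e, err := decodeEntry(r)
			if err == io.EOF {
				break
			}
			if err != nil {
				fh.Close()
				return err
			}
			// A real implementation would also need to check the LSM tree
			// here to know whether this entry is still the latest version.
			if err := fn(e); err != nil {
				fh.Close()
				return err
			}
		}
		fh.Close()
	}
	return nil
}

func main() {
	err := scanValueLogs("/tmp/badger-example", func(e Entry) error {
		fmt.Printf("key=%q (%d value bytes)\n", e.Key, len(e.Value))
		return nil
	})
	if err != nil {
		fmt.Println("scan failed:", err)
	}
}
```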
Interestingly, if there were a feature like #1367, one could perhaps abuse that callback together with compaction to not only scan the entire value log in order but also compact it at the same time.