Database trait implementation #11
Awesome, thanks for working on this! And sorry for leaving develop in a half-refactored state; in the future I'll contain half-baked changes in PRs to better communicate what's going on. The Database trait will be nice for tests. I'll be happy to merge these changes after the current refactor.
BTW, this change will look a lot different after this refactor since it makes some fundamental changes, e.g. supporting concurrency for the backing store. Also, something to keep in mind is that in some cases I'd prefer keeping things coupled to RocksDB since there are features that might be hard to implement in other stores (like the checkpointing feature), so abstracting the store should primarily be for testing purposes rather than focusing on supporting other backing stores (although this could change in the future).
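To make the "abstract the store for testing" idea concrete, here is a minimal sketch of what such a trait and an in-memory test backend might look like. The names (`Database`, `MemDb`) and method signatures are hypothetical, not the actual trait from the Merk codebase:

```rust
use std::collections::BTreeMap;

/// Hypothetical storage abstraction over the backing store.
/// A RocksDB-backed implementation would live alongside this one;
/// only the in-memory test backend is sketched here.
pub trait Database {
    fn get(&self, key: &[u8]) -> Option<Vec<u8>>;
    fn put(&mut self, key: Vec<u8>, value: Vec<u8>);
    fn delete(&mut self, key: &[u8]);
}

/// Trivial in-memory backend, useful for unit tests where spinning
/// up RocksDB would be unnecessary overhead.
#[derive(Default)]
pub struct MemDb {
    entries: BTreeMap<Vec<u8>, Vec<u8>>,
}

impl Database for MemDb {
    fn get(&self, key: &[u8]) -> Option<Vec<u8>> {
        self.entries.get(key).cloned()
    }
    fn put(&mut self, key: Vec<u8>, value: Vec<u8>) {
        self.entries.insert(key, value);
    }
    fn delete(&mut self, key: &[u8]) {
        self.entries.remove(key);
    }
}
```

Tests can then be written against `Database` and run against `MemDb`, while production code keeps using RocksDB (and RocksDB-specific features like checkpointing can stay outside the trait).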
Understood. I agree RocksDB is probably the best bet for now. I'm also interested in using columns vs. prefixing keys with a common namespace. Down the road it may be interesting to look at other potential stores such as Sled. A side thought I have, though: how fast is fast enough for a tree in a blockchain environment? You're only as fast as the slowest component - you're still bound by the consensus process and state transition logic regardless of how fast the tree is.
After Merk I'd like to optimize the state machine as well. While it's true you need to process transactions serially in order to ensure determinism, the vast majority of transactions have no intersection of the set of keys they read from or write to (Alice sending to Bob can be processed independently from Carol sending to Dave (hey, that's you)). So to process transactions concurrently, we can simply bin them into N queues, where transactions with intersecting key sets end up in the same bin and so get a canonical ordering. Now we can process transactions in parallel for, e.g., a 32x throughput increase. Also, I think this can be done in a way where the consensus algorithm does not significantly affect the throughput. If we know the set of keys already, this can be done efficiently with Bloom filters, since set intersection is just the AND of the filter bits, and set union is the OR, and I believe we can make a data structure to make these operations even faster. I have more thoughts about this but I'll write them up elsewhere.
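The binning idea above can be sketched in a few lines. This is a toy illustration, not the real scheme: a single `u64` bitmask stands in for a properly sized Bloom filter, the hash is an arbitrary placeholder, and the greedy assignment joins only the first conflicting bin rather than merging bins when a transaction conflicts with several:

```rust
/// Fold a transaction's key set into a toy 64-bit Bloom-style filter.
/// The hash function here is a hypothetical stand-in.
fn key_filter(keys: &[&[u8]]) -> u64 {
    keys.iter().fold(0u64, |acc, k| {
        let h = k
            .iter()
            .fold(0u64, |a, &b| a.wrapping_mul(31).wrapping_add(b as u64));
        acc | (1u64 << (h % 64)) // set one filter bit per key
    })
}

/// Greedily assign each transaction (by its key filter) to a bin:
/// set intersection is just the AND of the filter bits, so a tx joins
/// the first bin whose combined filter ANDs to nonzero; otherwise it
/// opens a new bin. Bins can then be processed in parallel.
fn bin_transactions(filters: &[u64]) -> Vec<usize> {
    let mut bins: Vec<u64> = Vec::new(); // combined filter per bin
    let mut assignment = Vec::with_capacity(filters.len());
    for &f in filters {
        match bins.iter().position(|&b| b & f != 0) {
            Some(i) => {
                bins[i] |= f; // set union is the OR
                assignment.push(i);
            }
            None => {
                bins.push(f);
                assignment.push(bins.len() - 1);
            }
        }
    }
    assignment
}
```

Note the false-positive caveat of Bloom filters applies: two transactions with disjoint key sets can still collide on filter bits, which costs parallelism but never correctness, since colliding transactions are simply serialized in the same bin.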
Interesting, is this for a performance increase, or for logically structuring the data for app developers? |
Ok. I see where you're going with all this now. Interesting and exciting stuff!
I don't know about performance with columns. For now, I was just thinking in terms of "logical structure": account keys, etc., separating them by columns vs. prefixes. Side note: after fiddling with a simple ABCI app using the Database trait and Merk, I'm finding the use of the …
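For the prefix side of the "columns vs. prefixes" question, the namespacing amounts to concatenating a namespace tag onto each key before it hits the store. A minimal sketch (the namespace names and separator byte are illustrative, not anything from the Merk codebase):

```rust
/// Build a namespaced key, e.g. b"accounts" + b"alice" -> b"accounts/alice".
/// This emulates RocksDB column families on a flat key space; keys in one
/// namespace stay contiguous under the store's ordered iteration.
fn prefixed_key(namespace: &[u8], key: &[u8]) -> Vec<u8> {
    let mut out = Vec::with_capacity(namespace.len() + 1 + key.len());
    out.extend_from_slice(namespace);
    out.push(b'/'); // hypothetical separator; a real scheme must ensure it can't appear in namespace names
    out.extend_from_slice(key);
    out
}
```

One trade-off worth noting: prefixes keep everything in a single Merkle tree (so one root commits to all namespaces), whereas RocksDB column families are separate key spaces with their own storage tuning.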
One thing to keep in mind is that Merk is designed to be written to in batches, once per block. So ABCI apps will require a layer on top that maintains a map of keys/values as the transaction mutates the values, then it should all get put into Merk on …
Thanks. Yes, I have maps in the sample app I'm working on now: one for checkTx and one for deliverTx. The deliverTx map is batched to Merk in the abci.commit(). I'm going to test it out a bit more and update my fork.
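The buffer-then-batch layer described in these two comments might be sketched like this. Names (`BlockBuffer`, `take_batch`) are hypothetical, assumed for illustration rather than taken from either codebase:

```rust
use std::collections::BTreeMap;

/// Per-block write buffer: deliverTx writes accumulate here, and the
/// whole map is drained into one batch at ABCI commit, matching Merk's
/// once-per-block batched write model.
#[derive(Default)]
struct BlockBuffer {
    deliver_writes: BTreeMap<Vec<u8>, Vec<u8>>,
}

impl BlockBuffer {
    /// Record a write from deliverTx; later writes to the same key in
    /// the same block simply overwrite the buffered value.
    fn set(&mut self, key: Vec<u8>, value: Vec<u8>) {
        self.deliver_writes.insert(key, value);
    }

    /// Drain the buffered writes into a key-sorted batch at commit time,
    /// ready to hand to a batched apply. Leaves the buffer empty for the
    /// next block.
    fn take_batch(&mut self) -> Vec<(Vec<u8>, Vec<u8>)> {
        std::mem::take(&mut self.deliver_writes).into_iter().collect()
    }
}
```

Using a `BTreeMap` means the drained batch comes out already sorted by key, which is a convenient property when the tree expects ordered batch input. A separate buffer of the same shape could back checkTx, as described above.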
FYI, this is what …
First off this is great work! Thank you. I'm looking forward to trying this out with rust-abci and seeing how it develops.
I took a stab at refactoring the DB-specific logic out of Merk and into a separate trait and associated modules. The work is in my fork HERE. I didn't submit a pull request for any of this since it appears you're focused on refactoring on the `develop` branch. I based my fork on your master branch.
I added:
I also refactored merk.rs to use the new trait and removed the `mut` requirement from `get_node`. All tests and benchmarks are updated and everything is passing.
I'm more than happy to work it into the main codebase if/when you're interested in using it.