Indexed Merkle tree #1666
Conversation
Thanks for your feedback @dfstio. Based on it, I added a new version. Here are the numbers for height 11; they match what you wrote:
It would be a great addition. Proving inclusion and exclusion (for nullifiers) are very important operations that should probably be reflected in the IndexedMerkleMap methods. And given that the data is public, I can implement toJSON() and fromJSON() to/from base64 myself.
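For illustration, here is a minimal sketch of the base64 round trip described above. It is an assumption, not part of the PR: toJSON()/fromJSON() are hypothetical helpers, and the Experimental.IndexedMerkleMap import path may differ from the actual export.

```ts
// Sketch only: `toJSON()` / `fromJSON()` are hypothetical helpers layered on
// top of the map; the import path is an assumption as well.
import { Experimental } from 'o1js';

class Map11 extends Experimental.IndexedMerkleMap(11) {}

// Serialize the (public) map data to base64 so it can be stored or shipped.
function mapToBase64(map: Map11): string {
  const json = JSON.stringify((map as any).toJSON()); // hypothetical
  return Buffer.from(json, 'utf8').toString('base64');
}

// Restore a map from the base64 string produced above.
function mapFromBase64(data: string): Map11 {
  const json = JSON.parse(Buffer.from(data, 'base64').toString('utf8'));
  return (Map11 as any).fromJSON(json); // hypothetical
}
```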
Could the version that only proves exclusion (non-inclusion) take fewer than 405 constraints, similar to get(), which takes half of that?
Great point, done!
How do we serialize the witness to calculate recursive proofs using many workers? If I want to create a proof confirming that I've correctly added ten key-value pairs to the IndexedMerkleMap, and I want to split the calculation between 10 separate workers running in parallel, each calculating the proof for one key-value pair to be merged later, I need to be able to generate a serializable witness to pass to each worker. Otherwise, I would have to serialize the whole map, which would take much longer.
Effectively, for this use case, _computeRoot() should be split into map.getWitness(key) and computeRoot(witness), with the witness being easily serializable. map.getWitness() should be called in the master worker for all ten key-value pairs, and computeRoot() should be called in the provable code of each of the ten workers. Each worker should not have the map, just a witness.
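To make the request concrete, here is a non-authoritative sketch of what a serializable witness and a standalone computeRoot(witness) could look like. The witness fields, the leaf hash layout, and all names are assumptions; real provable code would combine the index bits with Provable.if instead of a native branch.

```ts
// Sketch only: witness shape, leaf hash layout, and names are assumptions.
import { Field, Poseidon } from 'o1js';

// Small enough to serialize (with fields encoded as strings) and send to a worker.
type IndexedWitness = {
  leaf: { key: Field; value: Field; nextKey: Field };
  index: bigint;       // position of the leaf in the bottom layer
  siblings: Field[];   // one sibling hash per level
};

// Recompute the root from the witness alone; no access to the map is needed.
// In provable code, the index bits would be Bools combined via Provable.if.
function computeRoot(w: IndexedWitness): Field {
  let node = Poseidon.hash([w.leaf.key, w.leaf.value, w.leaf.nextKey]);
  let index = w.index;
  for (const sibling of w.siblings) {
    node =
      (index & 1n) === 0n
        ? Poseidon.hash([node, sibling])
        : Poseidon.hash([sibling, node]);
    index >>= 1n;
  }
  return node;
}
```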
Interesting challenge!
I thought about this a bit, and I think it can be done in a way that is compatible with the current design of the IndexedMerkleTree data structure. The idea is that the current implementation should work even if you don't have the full tree, but just the subset that is touched by your updates. This is quite similar to what you propose, since in the end a collection of Merkle witnesses is also just a subset of the tree.
There are two internal data structures, nodes and sortedLeaves, and both should currently allow pruning to the values you actually need. For nodes, you'd need to store arrays of the same length, but mostly filled with empty slots; I'm not sure how much memory that saves. In the case of sortedLeaves, only having a subset should just work. So for parallel proving, we could give each worker a pruned snapshot containing only the part of the tree its updates touch.
The nice thing is that circuits can be written exactly as in a normal, serial implementation. Actually, this is extremely close to what Mina does with transaction proofs, where snapshots of the ledger are updated :D
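As an illustration of the pruning idea, here is a rough sketch under assumed internals: a nodes[level][index] array, a sortedLeaves list, and a clone() helper. None of these names or shapes are guaranteed to match the actual implementation.

```ts
// Sketch only: the internal layout (nodes, sortedLeaves) and clone() are
// assumptions based on the discussion, not the actual implementation.
type Leaf = { key: bigint; value: bigint; nextKey: bigint; index: number };

type PrunableTree = {
  height: number;
  nodes: (bigint | undefined)[][]; // nodes[level][index]; undefined = pruned
  sortedLeaves: Leaf[];
  clone(): PrunableTree;
};

// Keep only the nodes on the Merkle paths of the touched leaf indices
// (callers must include the low-leaf indices in `touched` as well).
function prune(tree: PrunableTree, touched: number[]): PrunableTree {
  const pruned = tree.clone();
  for (let level = 0; level < tree.height - 1; level++) {
    const keep = new Set<number>();
    for (const index of touched) {
      const i = index >> level;
      keep.add(i);
      keep.add(i ^ 1); // sibling is needed to recompute the parent
    }
    pruned.nodes[level] = pruned.nodes[level].map((node, i) =>
      keep.has(i) ? node : undefined
    );
  }
  // sortedLeaves can be filtered to the touched leaves in the same spirit;
  // a subset is enough for the workers' updates.
  return pruned;
}
```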
Note to self / reviewers: the implementation currently doesn't prove updates correctly. The problem is that it doesn't connect the Merkle path for the update with the path previously validated against the old commitment.
I've done some testing with MerkleTree to evaluate serialized map size and serialized MerkleTree witness size, and the results are as follows:
The IndexedMerkleMap should be closer to ordered indexes, so by creating a witness or pruned snapshot we can decrease the serialized witness size by roughly 1000x. By the way, this also shows the serialized map size savings that IndexedMerkleMap will bring: it should be about 170x (16 KB per element in MerkleMap vs 95 bytes per element with IndexedMerkleMap).
We need to take a pruned snapshot several times BEFORE running the circuit (without proofs) for the k updates, and we need to make sure that low leaves are also included.
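A hedged sketch of that ordering follows; every helper in it (leafIndexFor, lowLeafIndexFor, prune, applyUpdate) is hypothetical. The only points it illustrates are that each snapshot is taken before its batch is applied and that low leaves are included in the snapshot.

```ts
// Sketch only: all helpers below are hypothetical; the point is the ordering
// (snapshot BEFORE applying the batch) and the inclusion of low leaves.
type Update = { key: bigint; value: bigint };

declare function leafIndexFor(map: unknown, key: bigint): number;
declare function lowLeafIndexFor(map: unknown, key: bigint): number;
declare function prune(map: unknown, touched: number[]): unknown;
declare function applyUpdate(map: unknown, update: Update): void;

async function proveBatchesInParallel(
  map: unknown,                      // full map, lives only in the master
  batches: Update[][],               // the k updates, split across workers
  prove: (snapshot: string, batch: Update[]) => Promise<unknown>
) {
  const pending = batches.map((batch) => {
    // 1. Collect the leaf indices this batch touches, including low leaves.
    const touched = batch.flatMap((u) => [
      leafIndexFor(map, u.key),
      lowLeafIndexFor(map, u.key),
    ]);
    // 2. Take the pruned snapshot BEFORE applying the batch, so it commits
    //    to the pre-state the worker's circuit starts from (assuming the
    //    snapshot is JSON-serializable, e.g. bigints encoded as strings).
    const snapshot = JSON.stringify(prune(map, touched));
    // 3. Apply the batch to the master copy so later snapshots see it.
    for (const u of batch) applyUpdate(map, u);
    // 4. Ship the small snapshot (not the whole map) to a worker.
    return prove(snapshot, batch);
  });
  return Promise.all(pending);
}
```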
I believe that IndexedMerkleMap is extremely important for rollups on the Mina protocol and will save a lot of money in proving costs.
closes #1655
This PR introduces IndexedMerkleMap, an all-around better version of MerkleMap. See #1655 for a detailed description, including the API which this PR implements.
In short, there are two motivations to introduce a new Merkle storage primitive: IndexedMerkleMap uses about 4-8x fewer constraints than MerkleMap when used with height 31 (which supports 1 billion entries). Here are some constraint counts for different operations:
EDIT: Based on feedback from @dfstio, the API was expanded to be useful for cases where only inclusion of a key (but not the value) is important; see the discussion below.
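For orientation, here is a brief usage sketch following the API proposed in #1655. The method names and the Experimental export path are assumptions and may not match the code in this PR exactly.

```ts
// Usage sketch following the API proposed in #1655; names and the
// Experimental export path are assumptions, not guaranteed to match the PR.
import { Experimental } from 'o1js';

// Height 31 supports about 1 billion entries.
class MerkleMap extends Experimental.IndexedMerkleMap(31) {}

const map = new MerkleMap();

map.insert(2n, 14n);             // add a new key
map.update(2n, 15n);             // overwrite an existing key
const value = map.getOption(2n); // option: key may or may not be present

// Inclusion / exclusion checks (e.g. exclusion proofs for nullifiers).
map.assertIncluded(2n);
map.assertNotIncluded(3n);
```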