Add sparse Merkle tree advice set #269

Merged
merged 7 commits into 0xPolygonMiden:next on Jul 6, 2022

Conversation

@maxgillett (Contributor) commented Jun 28, 2022

This PR adds a sparse Merkle tree (SMT) of variable depth (up to 63), with no key compaction, as an advice set. It follows the same interface as the balanced Merkle tree.
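For orientation, here is a minimal sketch of the shape such a structure might take. The names used here (SparseMerkleTree, Digest, leaves, new) and the BTreeMap-backed leaf storage are illustrative assumptions, not the PR's actual code:

use std::collections::BTreeMap;

/// Toy digest type standing in for the real hash output.
type Digest = [u64; 4];

/// Illustrative sketch only: leaves are keyed directly by their u64 index
/// (no key compaction) and the depth is chosen at construction time, up to 63.
pub struct SparseMerkleTree {
    depth: u32,
    root: Digest,
    leaves: BTreeMap<u64, Digest>,
}

impl SparseMerkleTree {
    /// Creates a blank tree of the requested depth.
    pub fn new(depth: u32) -> Result<Self, String> {
        if !(1..=63).contains(&depth) {
            return Err(format!("depth must be in 1..=63, got {depth}"));
        }
        Ok(Self {
            depth,
            // A real implementation would derive this from empty-subtree digests.
            root: Digest::default(),
            leaves: BTreeMap::new(),
        })
    }

    pub fn depth(&self) -> u32 {
        self.depth
    }

    pub fn root(&self) -> Digest {
        self.root
    }
}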

@maxgillett changed the title from "Sparse Merkle tree advice set" to "Add sparse Merkle tree advice set" on Jun 28, 2022
@bobbinth (Contributor) left a comment

Looks great! Thank you! A few comments:

  1. Would be great to add doc comments about how the overall structure works. Doesn't need to be anything super extensive - but I think several sentences would help.
  2. For tests, would be good to test with trees which are not full. One approach could be to instantiate a regular Merkle tree with all zero nodes (e.g., of depth 8) and also have a blank Sparse Merkle tree. Then check their roots (they should be the same). Then add a leaf to a regular Merkle tree and the same leaf to the Sparse Merkle tree. Again, their roots should be the same, and the paths to this leaf returned from both should be the same. And perform a few other operations on both and make sure again the results we get are consistent. What do you think?
  3. Small nit: I'd prefer a slightly different way to organize the file. I've included a potential sketch below.
...
// SPARSE MERKLE TREE
// =============================

pub struct SparseMerkleTree {
...
}

impl SparseMerkleTree {
...
}

// STORE
// =============================

struct Store {
...
}

struct BranchNode {
...
}

impl Store {
...
}

(Two resolved inline review threads on core/src/inputs/advice/sparse_merkle_tree.rs.)
@maxgillett (Contributor, Author)

> 2. For tests, would be good to test with trees which are not full. One approach could be to instantiate a regular Merkle tree with all zero nodes (e.g., of depth 8) and also have a blank Sparse Merkle tree. Then check their roots (they should be the same). Then add a leaf to a regular Merkle tree and the same leaf to the Sparse Merkle tree. Again, their roots should be the same, and the paths to this leaf returned from both should be the same. And perform a few other operations on both and make sure again the results we get are consistent. What do you think?

I think tests for partially full trees are definitely needed. I'm not sure the approach you suggested would work without changing how the hashing of empty nodes works. Right now a default hash digest value is used for every empty node, regardless of depth. To be equivalent to a regular Merkle tree, that digest value would need to depend on depth, so we would need a lookup of the appropriate digest value for each depth. Unless the specific digest values matter elsewhere in Miden when these advice sets are processed (for example, if the expected root value of an empty tree is hardcoded in the constraints), it might not make sense to take this approach. Let me know what you think.
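To make the lookup idea concrete, here is a sketch under stated assumptions: hash_pair is a toy stand-in for the real two-to-one hash, and empty_subtree_digests is a hypothetical helper, not the PR's code. The digest of an empty subtree rooted at a given depth is the hash of the empty digest one level below with itself, so a tree of depth 63 needs only 63 such values; with this convention the depth-0 entry also equals the root of a regular Merkle tree whose leaves are all empty, which is what would make the comparison test suggested above line up.

/// Toy two-to-one hash standing in for the real hash function.
fn hash_pair(a: u64, b: u64) -> u64 {
    (a ^ 0xA5A5_A5A5_A5A5_A5A5)
        .wrapping_mul(0x9E37_79B9_7F4A_7C15)
        .wrapping_add(b.rotate_left(31))
        .wrapping_mul(0xC2B2_AE3D_27D4_EB4F)
}

/// Digests of empty subtrees at every level of a tree of the given depth:
/// empty[depth] is the empty-leaf digest, empty[0] is the root of a blank tree.
fn empty_subtree_digests(depth: usize, empty_leaf: u64) -> Vec<u64> {
    let mut empty = vec![0u64; depth + 1];
    empty[depth] = empty_leaf;
    for d in (0..depth).rev() {
        empty[d] = hash_pair(empty[d + 1], empty[d + 1]);
    }
    empty
}

fn main() {
    let depth = 8;
    let empty = empty_subtree_digests(depth, 0);

    // Root of a regular Merkle tree with 2^8 all-zero leaves, built level by
    // level with the same hash; it matches the blank sparse tree root empty[0].
    let mut level = vec![0u64; 1 << depth];
    while level.len() > 1 {
        level = level.chunks(2).map(|p| hash_pair(p[0], p[1])).collect();
    }
    assert_eq!(level[0], empty[0]);
}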

@bobbinth (Contributor) commented Jun 29, 2022

Ah - I see! The approach where hash(0, 0) = 0 won't work with our current hash co-processor. To give the full picture, here is how the MPVERIFY operation works (skipping some details):

  1. This operation gets a Merkle path from the advice provider using root, depth, and node index.
  2. It then passes this Merkle path to the hash co-processor to compute the root of the path.
  3. Then it compares the root with the expected root to make sure that the Merkle path is correct.

In the second step, the hash co-processor computes hash(a, b) in the regular way, so if the path was computed under a different convention, the roots won't match.
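As a sketch of step 2 (with a toy hash standing in for the real one; compute_root is a hypothetical helper, not the co-processor's actual interface): the root is recomputed by folding the leaf up the path, choosing the operand order at each level from the node index, and the operation can only succeed if the result equals the expected root.

/// Toy two-to-one hash standing in for the hash used by the co-processor.
fn hash_pair(a: u64, b: u64) -> u64 {
    (a ^ 0xA5A5_A5A5_A5A5_A5A5)
        .wrapping_mul(0x9E37_79B9_7F4A_7C15)
        .wrapping_add(b.rotate_left(31))
        .wrapping_mul(0xC2B2_AE3D_27D4_EB4F)
}

/// Recomputes the root from a leaf and its Merkle path (siblings ordered from
/// the leaf level up), hashing each pair in the regular way.
fn compute_root(mut node: u64, mut index: u64, path: &[u64]) -> u64 {
    for &sibling in path {
        node = if index & 1 == 0 {
            hash_pair(node, sibling)
        } else {
            hash_pair(sibling, node)
        };
        index >>= 1;
    }
    node
}

fn main() {
    // Build a tiny depth-3 tree over 8 leaves, then check that the path for
    // leaf 5 folds back to the same root.
    let leaves: Vec<u64> = (0..8).collect();
    let l1: Vec<u64> = leaves.chunks(2).map(|p| hash_pair(p[0], p[1])).collect();
    let l2: Vec<u64> = l1.chunks(2).map(|p| hash_pair(p[0], p[1])).collect();
    let root = hash_pair(l2[0], l2[1]);

    let path = [leaves[4], l1[3], l2[0]]; // siblings of leaf 5, leaf level up
    assert_eq!(compute_root(leaves[5], 5, &path), root);
}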

For depth-specific hashes, we only need 63 values, right? How difficult would it be to change the structure to use them?

@maxgillett (Contributor, Author) commented Jun 30, 2022

> In the second step, the hash co-processor computes hash(a, b) in the regular way, so if the path was computed under a different convention, the roots won't match.

Right, but I don't think this depth-specific choice affects the path computation. The structure of the sparse Merkle path is the same as that of the regular Merkle tree, and the path leg verification in the hasher doesn't make any assumptions about the exact value of the sibling node hash. I might be missing something here.

> For depth-specific hashes, we only need 63 values, right? How difficult would it be to change the structure to use them?

Yes, it wouldn't be difficult to change this.

@bobbinth (Contributor)

> The structure of the sparse Merkle path is the same as that of the regular Merkle tree, and the path leg verification in the hasher doesn't make any assumptions about the exact value of the sibling node hash.

Yes, but if we ever need to compute hash(0, 0) in the hash co-processor, the result will not be 0. I think this would come into play only when we want to prove a path to a 0 node whose sibling is also a 0 node, and this could happen fairly frequently when writing to a large tree. But maybe there is a different way to handle this case?
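A concrete illustration of the concern, again with a toy hash as a stand-in (the helper and values below are hypothetical): for a path that runs through an all-empty region, plain-zero siblings (the hash(0, 0) = 0 convention) would not fold back to the root the co-processor computes with regular hashing, whereas depth-specific empty digests do.

/// Toy two-to-one hash standing in for the real hash function.
fn hash_pair(a: u64, b: u64) -> u64 {
    (a ^ 0xA5A5_A5A5_A5A5_A5A5)
        .wrapping_mul(0x9E37_79B9_7F4A_7C15)
        .wrapping_add(b.rotate_left(31))
        .wrapping_mul(0xC2B2_AE3D_27D4_EB4F)
}

/// Folds the leftmost leaf of a tree up to the root; at every level the node
/// is the left operand and the supplied sibling is the right operand.
fn fold_leftmost(mut node: u64, siblings: &[u64]) -> u64 {
    for &sibling in siblings {
        node = hash_pair(node, sibling);
    }
    node
}

fn main() {
    // With regular hashing, the empty-node digest is not 0.
    assert_ne!(hash_pair(0, 0), 0);

    // Depth-specific digests of empty subtrees in a blank depth-3 tree.
    let e1 = hash_pair(0, 0);
    let e2 = hash_pair(e1, e1);
    let true_root = hash_pair(e2, e2);

    // A path to leaf 0 through this all-empty tree must supply e1 and e2 as
    // siblings; folding those with regular hashing reproduces the root,
    // whereas plain-zero siblings (the hash(0, 0) = 0 shortcut) would not.
    assert_eq!(fold_leftmost(0, &[0, e1, e2]), true_root);
}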

@bobbinth (Contributor) left a comment

Looks great! Thank you! Could you fix the clippy errors? I'll merge after that.

@bobbinth merged commit 74eaab1 into 0xPolygonMiden:next on Jul 6, 2022
@bobbinth mentioned this pull request on Jul 6, 2022