blockstore: send duplicate proofs for chained merkle root conflicts #35316
Conversation
    ) -> bool {
        let Ok(chained_merkle_root) = next_shred.chained_merkle_root() else {
            // Chained merkle roots have not been enabled yet
            return true;
Currently in tests we always end up in this case, but I ran local-cluster with StandardBroadcastRun::should_chain_merkle_shreds returning true to verify the logic.
Codecov Report
Attention: Patch coverage is
Additional details and impacted files:

    @@            Coverage Diff            @@
    ##           master   #35316     +/-  ##
    ========================================
    - Coverage    81.7%    81.7%    -0.1%
    ========================================
      Files         834      834
      Lines      224197   224324    +127
    ========================================
    + Hits       183287   183372     +85
    - Misses      40910    40952     +42
    ) -> bool {
        let (slot, fec_set_index) = erasure_set.store_key();

        let next_erasure_set = ErasureSetId::new(slot, fec_set_index + 1);
fec_set_index + 1 is wrong. FEC set indices are not sequential: fec_set_index == shred index of the 1st data shred in the erasure batch, so this should be fec_set_index + num_data_shreds.
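For illustration, a minimal, self-contained sketch of the arithmetic the reviewer is describing; ErasureSetId, ErasureMeta, and next_erasure_set here are simplified stand-ins, not the blockstore's actual types:

```rust
// Simplified stand-ins for the blockstore's types; illustrative only.
type Slot = u64;

#[derive(Debug, PartialEq, Eq)]
struct ErasureSetId {
    slot: Slot,
    fec_set_index: u32,
}

impl ErasureSetId {
    fn new(slot: Slot, fec_set_index: u32) -> Self {
        Self { slot, fec_set_index }
    }
}

struct ErasureMeta {
    num_data_shreds: u32, // number of data shreds in this erasure batch
}

// FEC set indices are not consecutive: each batch starts at the index of its
// first data shred, so the next batch starts num_data_shreds later.
fn next_erasure_set(slot: Slot, fec_set_index: u32, meta: &ErasureMeta) -> ErasureSetId {
    ErasureSetId::new(slot, fec_set_index + meta.num_data_shreds)
}

fn main() {
    // A batch starting at data-shred index 32 with 64 data shreds is followed
    // by the batch at fec_set_index 96, not 33.
    let meta = ErasureMeta { num_data_shreds: 64 };
    assert_eq!(next_erasure_set(7, 32, &meta), ErasureSetId::new(7, 96));
}
```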
        // Check the chaining between our current shred, the previous fec_set,
        // and the next fec_set
        if !self.check_chained_merkle_root_consistency(
This check (and other similar instances below) should only be invoked if a new erasure meta is inserted into blockstore. Otherwise we are redundantly repeating this check for every shred in the erasure batch.
Maybe we should also add the chained_merkle_root to MerkleRootMeta to check if the shreds within the same batch have the same merkle_root AND the same chained_merkle_root.
That's a good point, I'll rework this to only check on new erasure metas.

> Maybe we should also add the chained_merkle_root to MerkleRootMeta to check if the shreds within the same batch have the same merkle_root AND the same chained_merkle_root.

The chained_merkle_root is also part of the merkle_root, right? So if the batch has the same merkle_root they also have the same chained_merkle_root?
You are right. Just comparing merkle_roots is sufficient.
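To make the outcome of this thread concrete, here is a rough sketch under simplified, hypothetical types (MerkleRootMeta and both helper functions are stand-ins, not the PR's code): the cross-batch chaining check runs only when an erasure meta is newly inserted, and within a batch a plain merkle_root comparison suffices.

```rust
// Hypothetical, simplified types -- not the PR's actual code.
#[derive(Clone, Copy, PartialEq, Eq)]
struct Hash([u8; 32]);

struct MerkleRootMeta {
    merkle_root: Hash,
}

// Within one erasure batch, equal merkle roots already imply equal chained
// merkle roots, because the chained merkle root is committed to by the
// shred's merkle root; no extra field is needed in MerkleRootMeta.
fn shred_consistent_with_batch(shred_merkle_root: Hash, meta: &MerkleRootMeta) -> bool {
    shred_merkle_root == meta.merkle_root
}

// Pay for the previous/next batch chaining check once per batch (when its
// erasure meta is first written), not once per shred.
fn maybe_check_chaining(
    erasure_meta_just_inserted: bool,
    check_against_neighbor_batches: impl FnOnce() -> bool,
) -> bool {
    if erasure_meta_just_inserted {
        check_against_neighbor_batches()
    } else {
        true
    }
}

fn main() {
    let root = Hash([1u8; 32]);
    let meta = MerkleRootMeta { merkle_root: root };
    assert!(shred_consistent_with_batch(root, &meta));
    // Shreds from an already-seen batch skip the cross-batch check.
    assert!(maybe_check_chaining(false, || unreachable!()));
}
```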
        Self::get_shred_from_just_inserted_or_db(self, just_inserted_shreds, next_shred_id)
            .expect("Shred indicated by merkle root meta must exist")
            .into_owned();
        let next_shred = Shred::new_from_serialized_shred(next_shred)
We probably shouldn't deserialize the entire shred to check its merkle_root and chained_merkle_root; but instead add some APIs to shred::layout:: to pull these from the binary directly.
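A purely illustrative sketch of that suggestion follows; the offsets, and the premise that both roots sit at fixed offsets, are assumptions made for the example and do not reflect the real shred wire format (which is defined by the shred implementation in the ledger crate):

```rust
// Read the fields of interest straight from the shred payload instead of
// deserializing a full Shred. Offsets below are made up for the example.
const HASH_BYTES: usize = 32;

// Assumed, illustrative offsets -- not the actual shred layout.
const EXAMPLE_MERKLE_ROOT_OFFSET: usize = 0x40;
const EXAMPLE_CHAINED_MERKLE_ROOT_OFFSET: usize = 0x60;

fn read_hash(shred: &[u8], offset: usize) -> Option<[u8; HASH_BYTES]> {
    shred.get(offset..offset + HASH_BYTES)?.try_into().ok()
}

// layout-style accessors: no allocation, no full deserialization.
fn get_merkle_root(shred: &[u8]) -> Option<[u8; HASH_BYTES]> {
    read_hash(shred, EXAMPLE_MERKLE_ROOT_OFFSET)
}

fn get_chained_merkle_root(shred: &[u8]) -> Option<[u8; HASH_BYTES]> {
    read_hash(shred, EXAMPLE_CHAINED_MERKLE_ROOT_OFFSET)
}

fn main() {
    let payload = vec![0u8; 1228]; // typical shred-sized buffer
    assert!(get_merkle_root(&payload).is_some());
    assert!(get_chained_merkle_root(&payload).is_some());
}
```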
            // confirmation.
            return true;
        }
        let prev_erasure_set = ErasureSetId::new(slot, fec_set_index - 1);
Same comment regarding fec_set_index - 1. This one is actually more tricky because we need to know num_data_shreds in the previous erasure batch.
Maybe blockstore/rocksdb has some API which gives the immediate next or immediate previous stored key directly; then we can check if those keys are indeed two consecutive erasure batches.
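A small sketch of that idea, using a BTreeMap as a stand-in for the erasure-meta column (the types and the reverse-range lookup are illustrative, not the blockstore's real API): find the key immediately before (slot, fec_set_index) and accept it only if the two batches are actually adjacent.

```rust
use std::collections::BTreeMap;
use std::ops::Bound::{Excluded, Included};

type Slot = u64;

struct ErasureMeta {
    num_data_shreds: u32,
}

fn previous_adjacent_erasure_set(
    metas: &BTreeMap<(Slot, u32), ErasureMeta>, // keyed by (slot, fec_set_index)
    slot: Slot,
    fec_set_index: u32,
) -> Option<(Slot, u32)> {
    // Immediate previous stored key within the same slot.
    let ((prev_slot, prev_fec_set_index), prev_meta) = metas
        .range((Included((slot, 0)), Excluded((slot, fec_set_index))))
        .next_back()?;
    // Only accept it if the previous batch ends exactly where ours begins.
    (prev_fec_set_index + prev_meta.num_data_shreds == fec_set_index)
        .then_some((*prev_slot, *prev_fec_set_index))
}

fn main() {
    let mut metas = BTreeMap::new();
    metas.insert((3, 0), ErasureMeta { num_data_shreds: 32 });
    metas.insert((3, 32), ErasureMeta { num_data_shreds: 64 });
    // The batch at fec_set_index 96 is immediately preceded by the one at 32.
    assert_eq!(previous_adjacent_erasure_set(&metas, 3, 96), Some((3, 32)));
    // No adjacent predecessor for a batch starting at 40.
    assert_eq!(previous_adjacent_erasure_set(&metas, 3, 40), None);
}
```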
This repository is no longer in use. Please re-open this pull request in the agave repo: https://github.com/anza-xyz/agave
Problem
Chained merkle roots are not yet verified on the receiving node.
Summary of Changes
Generate a duplicate proof if a block contains an improperly chained merkle root.
Note: we do not generate a proof for FEC set 0, which chains from the last FEC set of the parent block, as we are unsure which block is the duplicate; this decision is deferred until we see votes.
Contributes to #34897
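For readers skimming the change, a condensed, hypothetical sketch of the rule described above; the names and types are illustrative and not the blockstore's API:

```rust
#[derive(Clone, Copy, PartialEq, Eq)]
struct Hash([u8; 32]);

struct ShredInfo {
    fec_set_index: u32,
    chained_merkle_root: Option<Hash>, // None before the feature is enabled
}

enum ChainCheck {
    Consistent,
    Skipped,        // FEC set 0, or chained merkle roots not enabled
    DuplicateProof, // the conflicting shreds should be packaged into a proof
}

// prev_set_merkle_root: merkle root of the previous FEC set in the same block.
fn check_chaining(prev_set_merkle_root: Hash, shred: &ShredInfo) -> ChainCheck {
    if shred.fec_set_index == 0 {
        // FEC set 0 chains to the parent block; defer until votes are seen.
        return ChainCheck::Skipped;
    }
    match shred.chained_merkle_root {
        None => ChainCheck::Skipped,
        Some(chained) if chained == prev_set_merkle_root => ChainCheck::Consistent,
        Some(_) => ChainCheck::DuplicateProof,
    }
}

fn main() {
    let prev_root = Hash([7u8; 32]);
    let shred = ShredInfo {
        fec_set_index: 64,
        chained_merkle_root: Some(Hash([9u8; 32])), // does not match prev_root
    };
    assert!(matches!(
        check_chaining(prev_root, &shred),
        ChainCheck::DuplicateProof
    ));
}
```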