feat: avoid processing state witness multiple times #11111
```diff
@@ -27,8 +27,8 @@ use near_primitives::merkle::merklize;
 use near_primitives::receipt::Receipt;
 use near_primitives::sharding::{ChunkHash, ReceiptProof, ShardChunkHeader};
 use near_primitives::stateless_validation::{
-    ChunkEndorsement, ChunkStateWitness, ChunkStateWitnessAck, ChunkStateWitnessSize,
-    SignedEncodedChunkStateWitness,
+    ChunkEndorsement, ChunkProductionKey, ChunkStateWitness, ChunkStateWitnessAck,
+    ChunkStateWitnessSize, SignedEncodedChunkStateWitness,
 };
 use near_primitives::transaction::SignedTransaction;
 use near_primitives::types::chunk_extra::ChunkExtra;
@@ -46,6 +46,8 @@ use std::sync::Arc;
 // Keeping a threshold of 5 block producers should be sufficient for most scenarios.
 const NUM_NEXT_BLOCK_PRODUCERS_TO_SEND_CHUNK_ENDORSEMENT: u64 = 5;
 
+const RECEIVED_STATE_WITNESSES_CACHE_SIZE: usize = 100;
+
 /// A module that handles chunk validation logic. Chunk validation refers to a
 /// critical process of stateless validation, where chunk validators (certain
 /// validators selected to validate the chunk) verify that the chunk's state
@@ -60,6 +62,7 @@ pub struct ChunkValidator {
     chunk_endorsement_tracker: Arc<ChunkEndorsementTracker>,
     orphan_witness_pool: OrphanStateWitnessPool,
     validation_spawner: Arc<dyn AsyncComputationSpawner>,
+    received_witnesses: lru::LruCache<ChunkProductionKey, ()>,
 }
@@ -80,6 +83,7 @@ impl ChunkValidator {
             chunk_endorsement_tracker,
             orphan_witness_pool: OrphanStateWitnessPool::new(orphan_witness_pool_size),
             validation_spawner,
+            received_witnesses: lru::LruCache::new(RECEIVED_STATE_WITNESSES_CACHE_SIZE),
         }
     }
@@ -89,7 +93,7 @@ impl ChunkValidator {
     /// The chunk is validated asynchronously, if you want to wait for the processing to finish
     /// you can use the `processing_done_tracker` argument (but it's optional, it's safe to pass None there).
     pub fn start_validating_chunk(
-        &self,
+        &mut self,
         state_witness: ChunkStateWitness,
         chain: &Chain,
         processing_done_tracker: Option<ProcessingDoneTracker>,
@@ -103,6 +107,8 @@ impl ChunkValidator {
             )));
         }
 
+        self.check_duplicate_witness(&state_witness)?;
+
         let pre_validation_result = pre_validate_chunk_state_witness(
             &state_witness,
             chain,
```
```diff
@@ -142,6 +148,24 @@ impl ChunkValidator {
         });
         Ok(())
     }
 
+    /// Verifies that we haven't already processed a state witness for the
+    /// corresponding (shard_id, epoch_id, height_created). This protects against
+    /// malicious chunk producers wasting stateless validator resources by making
+    /// them apply a chunk multiple times.
+    fn check_duplicate_witness(&mut self, state_witness: &ChunkStateWitness) -> Result<(), Error> {
+        let chunk_production_key = ChunkProductionKey::from_witness(&state_witness);
+        if self.received_witnesses.contains(&chunk_production_key) {
+            return Err(Error::DuplicateChunkStateWitness(format!(
+                "Already received witness for height {} shard {} epoch {:?}",
+                chunk_production_key.height_created,
+                chunk_production_key.shard_id,
+                chunk_production_key.epoch_id,
+            )));
+        }
+        self.received_witnesses.push(chunk_production_key, ());
+        Ok(())
+    }
 }
 
 /// Checks that proposed `transactions` are valid for a chunk with `chunk_header`.
```

Review thread on `check_duplicate_witness`:

Reviewer: Could we store the chunk hash as the "value" in the LRU cache and add a check to see whether we have received the same chunk hash? If the chunk hash of the witness is the same, it can probably be deemed non-malicious behavior.

Author: Technically, submitting the witness for the same chunk (hence with the same chunk hash) could also be an attack vector. For example, another stateless validator could send a witness it received to us multiple times, and in that case it is not coming from the chunk producer.
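The duplicate check above boils down to a bounded cache keyed by `(shard_id, epoch_id, height_created)`. Here is a minimal, self-contained sketch of that idea using only the standard library (the real code uses the `lru` crate and nearcore's `ChunkProductionKey`; the tuple key, `ReceivedWitnesses` type, and `check_duplicate` method below are simplified stand-ins):

```rust
use std::collections::{HashSet, VecDeque};

// Simplified stand-in for ChunkProductionKey: (shard_id, epoch_id, height_created).
type Key = (u64, u64, u64);

// Mirrors RECEIVED_STATE_WITNESSES_CACHE_SIZE from the diff.
const CACHE_SIZE: usize = 100;

// Toy bounded cache: an insertion-ordered queue plus a set, evicting the
// oldest entry once capacity is reached. A real lru::LruCache also updates
// recency on lookup; this sketch only models bounded deduplication.
struct ReceivedWitnesses {
    order: VecDeque<Key>,
    seen: HashSet<Key>,
}

impl ReceivedWitnesses {
    fn new() -> Self {
        Self { order: VecDeque::new(), seen: HashSet::new() }
    }

    // Returns Err for a duplicate, mirroring check_duplicate_witness.
    fn check_duplicate(&mut self, key: Key) -> Result<(), String> {
        if self.seen.contains(&key) {
            return Err(format!(
                "Already received witness for height {} shard {} epoch {}",
                key.2, key.0, key.1
            ));
        }
        if self.order.len() == CACHE_SIZE {
            if let Some(old) = self.order.pop_front() {
                self.seen.remove(&old);
            }
        }
        self.order.push_back(key);
        self.seen.insert(key);
        Ok(())
    }
}

fn main() {
    let mut cache = ReceivedWitnesses::new();
    // First witness for (shard 0, epoch 1, height 102) is accepted.
    assert!(cache.check_duplicate((0, 1, 102)).is_ok());
    // A second witness for the same production key is rejected.
    assert!(cache.check_duplicate((0, 1, 102)).is_err());
    // A witness for a different height is accepted.
    assert!(cache.check_duplicate((0, 1, 104)).is_ok());
    println!("duplicate rejected");
}
```

Storing `()` as the cache value, as the PR does, makes the intent explicit: only key membership matters, not any associated data.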
Review thread on the placement of the duplicate check:

Reviewer: Hmm, IMO this should be placed after `pre_validate_chunk_state_witness()`. Otherwise a malicious chunk producer can do something like:

1. Send a valid `ChunkStateWitness`.
2. Send lots of junk `ChunkStateWitnesses` to fill the cache with junk (possibly from other random nodes, to avoid getting banned).
3. Resend the valid `ChunkStateWitness`, so it'll be processed again.

This way a malicious chunk producer could force the validator to process the valid `ChunkStateWitness` an arbitrary number of times.

Author: This should not be possible because of the checks in `partially_validate_state_witness`. In particular, a chunk producer can only submit one witness for every height at which it is supposed to produce a chunk. On top of that, the witness has to build on a not-yet-final block which is known to us (otherwise it goes to the orphan pool). So a chunk producer can only submit junk for a very narrow set of heights, hence the cache size of 100.
Reviewer: Oh, I didn't notice that apart from `pre_validate_chunk_state_witness` we also do `partially_validate_state_witness` at the beginning of `process_chunk_state_witness`. That makes the situation better 👍

Reviewer: But could a malicious chunk producer submit 100 fake witnesses (with an existing `prev_block_hash`) for the next 100 heights for which it's a chunk producer? I think we don't limit the distance from the tip for non-orphan witnesses?

Author: Only if we don't have a final block for those 100 heights. If that happens, the blockchain is in a very degraded state and we have bigger problems to worry about 🙃 Keeping Flat Storage deltas in memory was built on the same assumption, by the way.
Reviewer: The scenario that I'm imagining would look something like this:

- There's a final block `F` at height 100.
- There's a chunk producer `P`, responsible for producing chunks at heights 102, 104, 106, 108, 110, ...
- `P` is malicious and constructs a fake witness for each of those heights:
  - `Witness { height: 102, prev_block_hash: F, receipts: lotsofwork }`
  - `Witness { height: 104, prev_block_hash: F, receipts: lotsofwork }`
  - `Witness { height: 106, prev_block_hash: F, receipts: lotsofwork }`
- Then `P` sends all of those witnesses to validator `V`.

Wouldn't `V` process all of those witnesses? `P` could fill the whole cache with them, or just make `V` do a lot of useless work.

Author: Ah, I see, that is indeed a valid scenario, good catch! Moving this PR to draft until I have a solution that addresses it.
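The eviction problem discussed in this thread can be reproduced with a toy bounded cache (a sketch with simplified, hypothetical types, not the nearcore implementation): once an attacker inserts enough distinct fake keys, the valid witness's key is evicted, and a replayed copy passes the duplicate check again.

```rust
use std::collections::{HashSet, VecDeque};

// Simplified key: (shard_id, height_created).
type Key = (u64, u64);
const CACHE_SIZE: usize = 100;

// Toy bounded dedup cache, evicting the oldest key at capacity.
struct Cache {
    order: VecDeque<Key>,
    seen: HashSet<Key>,
}

impl Cache {
    fn new() -> Self {
        Self { order: VecDeque::new(), seen: HashSet::new() }
    }

    // Returns true if the key was fresh, i.e. the witness would be processed.
    fn admit(&mut self, key: Key) -> bool {
        if self.seen.contains(&key) {
            return false;
        }
        if self.order.len() == CACHE_SIZE {
            let old = self.order.pop_front().unwrap();
            self.seen.remove(&old);
        }
        self.order.push_back(key);
        self.seen.insert(key);
        true
    }
}

fn main() {
    let mut cache = Cache::new();
    let valid = (0u64, 102u64);
    // The valid witness is processed once and recorded.
    assert!(cache.admit(valid));

    // Malicious producer P floods CACHE_SIZE fake witnesses for other heights.
    for h in 0..CACHE_SIZE as u64 {
        assert!(cache.admit((0, 1000 + h)));
    }

    // The valid key has been evicted, so a replayed witness is admitted
    // again: the duplicate check no longer protects against the replay.
    assert!(cache.admit(valid));
    println!("replay admitted after eviction");
}
```

This illustrates why a bounded LRU alone is not a sufficient defense when an attacker can generate many distinct keys that pass the cheap validity checks.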