[Merged by Bors] - Do not reset batch ids & redownload out of range batches #1528
Conversation
Force-pushed f2b15f9 to 0e72a23: "Give each chain a log with its Id"
```rust
let batch_end_slot = std::cmp::min(
    // request either a batch containing the max number of blocks per batch
    batch_start_slot + blocks_per_batch,
    self.to_be_downloaded.start_slot(slots_per_epoch) + blocks_per_batch + 1,
);
```
Not sure if this should be a +2 here. @AgeManning?
This should be a +1. We are shifting the start_slot by 1, so this +1 accounts for that start slot; then we add blocks_per_batch on top.
This looks correct to me.
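To make the off-by-one discussed above concrete, here is a minimal, self-contained sketch of the end-slot arithmetic. Plain `u64`s stand in for Lighthouse's `Slot`/`Epoch` types, the function name is hypothetical, and the example values (32 slots per epoch, one-epoch batches) are assumptions for illustration only:

```rust
/// Sketch (assumed signature) of the `batch_end_slot` computation: the batch
/// start slot is already shifted by 1, while the target epoch's start slot is
/// not, hence the extra +1 on the second arm of the `min`.
fn batch_end_slot(
    batch_start_slot: u64,
    target_epoch_start_slot: u64, // to_be_downloaded.start_slot(slots_per_epoch)
    blocks_per_batch: u64,
) -> u64 {
    std::cmp::min(
        // a full batch counted from the (shifted) start slot
        batch_start_slot + blocks_per_batch,
        // +1 compensates for the shift, then blocks_per_batch on top
        target_epoch_start_slot + blocks_per_batch + 1,
    )
}

fn main() {
    // Epoch starts at slot 32; the batch requests slots starting at 33.
    // Both arms agree: min(33 + 32, 32 + 32 + 1) = 65.
    println!("{}", batch_end_slot(33, 32, 32)); // prints 65
}
```

When both arms agree for a full batch, as in the example, the `min` only bites when the second arm caps a batch that would otherwise overrun the target epoch.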
AgeManning left a comment:
This looks good to me.
I'm unsure about removing the chain_id logs.
I'll get this in and we can have a chat about it after :)
```diff
     beacon_processor_send,
     self.beacon_chain.clone(),
-    self.log.clone(),
+    self.log.new(o!("chain" => chain_id)),
```
Hey @AgeManning, this is why I removed the chain ID from the logs.
Oh. I missed this. Nice!
bors r+

The changes are somewhat simple but should solve two issues:
- When quickly switching between chains once and then a second time back again, batch ids would collide and cause havoc.
- If we got an out-of-range response from a peer, sync would remain in syncing without advancing.

Changes:
- Remove the batch id. Identify each batch (inside a chain) by its starting epoch. Target epochs for downloading and processing now advance by EPOCHS_PER_BATCH.
- For the same reason, make "to_be_downloaded_id" an epoch.
- Remove a sneaky line that dropped an out-of-range batch without downloading it.
- Bonus: put the chain_id in the log given to the chain. This is why explicitly logging the chain_id is removed.
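The batch-identification change described above can be sketched as a toy model: batches are keyed by their starting epoch rather than an incrementing id, so restarting a chain re-derives the same keys instead of colliding. All names and the `EPOCHS_PER_BATCH` value here are illustrative assumptions, not Lighthouse's real types:

```rust
use std::collections::BTreeMap;

// Assumed value for illustration; Lighthouse defines its own constant.
const EPOCHS_PER_BATCH: u64 = 2;

#[derive(Debug, PartialEq)]
struct Batch {
    start_epoch: u64,
}

fn main() {
    // Batches keyed by starting epoch. Because the key is derived from the
    // chain's position rather than a shared counter, switching chains and
    // back cannot produce colliding ids.
    let mut batches: BTreeMap<u64, Batch> = BTreeMap::new();

    // "to_be_downloaded" is now an epoch, not an opaque id.
    let mut to_be_downloaded: u64 = 0;

    for _ in 0..3 {
        batches.insert(to_be_downloaded, Batch { start_epoch: to_be_downloaded });
        // The download target advances by EPOCHS_PER_BATCH each time.
        to_be_downloaded += EPOCHS_PER_BATCH;
    }

    let keys: Vec<u64> = batches.keys().copied().collect();
    println!("{:?}", keys); // prints [0, 2, 4]
}
```

A `BTreeMap` keeps the batches ordered by starting epoch, which mirrors the property that a batch's identity and its position in the sync range coincide.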
Pull request successfully merged into master. Build succeeded.