[Merged by Bors] - Fix merge rpc length limits #3133
Conversation
Force-pushed from b22066f to e1fdd39, then from e1fdd39 to 08ee162.
This looks fine to me.
It is a bit weird, though, that we allow 16 GB as a valid size for a block but only allow a 10 MB RPC chunk.
I wonder what happens if someone makes a valid 11 MB block. It can't be transferred to anyone, so I guess block production is forced to limit itself to 10 MB.
@paulhauner @michaelsproul this is worth raising if it is not already known: a valid 11 MB block cannot be sent to anyone.
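The mismatch described above can be sketched as a simple guard. This is a hedged illustration, not Lighthouse's actual code: `MAX_RPC_CHUNK_SIZE` and `check_block_size` are assumed names, and the 10 MiB constant stands in for the real limit. It shows why block production must cap itself at the RPC chunk limit: an oversized block may be valid but is untransmittable.

```rust
// Assumed 10 MiB RPC chunk limit (illustrative constant, not the spec value).
const MAX_RPC_CHUNK_SIZE: usize = 10 * 1_048_576;

// Reject a freshly produced block whose SSZ encoding could never be sent
// to any peer over RPC. Failing loudly here is the only sensible option.
fn check_block_size(ssz_len: usize) -> Result<(), String> {
    if ssz_len > MAX_RPC_CHUNK_SIZE {
        Err(format!(
            "block of {} bytes exceeds RPC chunk limit of {} bytes",
            ssz_len, MAX_RPC_CHUNK_SIZE
        ))
    } else {
        Ok(())
    }
}
```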
We have a check here: lighthouse/beacon_node/beacon_chain/src/beacon_chain.rs Lines 3186 to 3188 in ea78336
However, the responsibility for not hitting that condition falls on the EE/block builder. The BN won't retry if the EE produces a block that's too large, although the VC may fall back to a different BN. It's definitely something to keep an eye on as we integrate MEV support (where we may not even know the block size while signing, I think). @paulhauner may have more thoughts on what we should do if the EE gives us a payload that's too large.
@AgeManning Some context on 16 GB blocks: https://github.com/ethereum/consensus-specs/pull/2607/files#r717073895
If an EE gives us a too-large payload, I think the only thing we can do is fail loudly (as we are already doing). I think it's going to be their responsibility to build suitably sized payloads. I've raised this question over in Discord to get more eyes on it: https://discord.com/channels/595666850260713488/692062809701482577/960296933883273278
This PR is important to ensure we run smoothly on testnets; I think the complications of EEs returning too-large payloads can be addressed elsewhere. @AgeManning if you're happy with this, could you please bors 🙏
bors r+
## Issue Addressed

N/A

## Proposed Changes

Fix the upper bound for blocks-by-root responses to be equal to the max merge block size instead of Altair's. Further, make the RPC response limits fork aware.
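The bound check this change targets can be sketched as follows. This is a hedged sketch, not Lighthouse's API: `validate_chunk_len` is an assumed name. A blocks-by-root response chunk is rejected unless its length falls inside the fork-aware `[min, max]` window; the fix raises `max` from the Altair bound to the larger merge bound.

```rust
// Reject a response chunk whose length lies outside the RPC limits for
// the relevant fork; mirrors the `InvalidData` error discussed in this PR.
fn validate_chunk_len(len: usize, min: usize, max: usize) -> Result<(), &'static str> {
    if len < min || len > max {
        Err("InvalidData: chunk length outside RPC limits")
    } else {
        Ok(())
    }
}
```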
FYI, from Danny regarding the EE producing ~10MiB blocks:
Build failed (retrying...):
bors r-
Canceled.
bors r+
## Issue Addressed

N/A

## Proposed Changes

#3133 changed the RPC type limits to be fork aware, i.e. if our current fork based on the wall-clock slot is Altair, then we apply only Altair RPC type limits. This is a bug because phase0 blocks can still be sent over RPC, and the phase0 block minimum size is smaller than the Altair block minimum size. So a phase0 block with `size < SIGNED_BEACON_BLOCK_ALTAIR_MIN` will return an `InvalidData` error as it doesn't pass the RPC types bound check. This error can be seen when we try syncing pre-Altair blocks with size smaller than `SIGNED_BEACON_BLOCK_ALTAIR_MIN`.

This PR fixes the issue by also accounting for forks earlier than `current_fork` in the RPC limits calculation in the `rpc_block_limits_by_fork` function. I decided to hardcode the limits in the function because that seemed simpler than calculating previous forks based on the current fork and taking a min across forks. Adding a new fork variant is simple, and the limits can be easily checked in a review.

Adds unit tests and modifies the syncing simulator to check syncing across fork boundaries. The syncing simulator's block 1 would always be of phase0 minimum size (404 bytes), which is smaller than the Altair minimum block size (since block 1 contains no attestations).
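The idea behind the fix can be sketched like this. This is a hedged sketch, not the real `rpc_block_limits_by_fork`: the fork names are real, but the byte values are placeholders (only the phase0 404-byte minimum comes from the text above). The point is that the lower bound must be the minimum over the current fork and all earlier forks, since pre-fork blocks are still valid RPC payloads.

```rust
// Fork variants in activation order; ordering lets us compare forks.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum ForkName {
    Base, // phase0
    Altair,
    Merge,
}

// (min, max) SSZ block sizes per fork; the values here are illustrative
// placeholders, not the real spec constants.
fn block_limits(fork: ForkName) -> (usize, usize) {
    match fork {
        ForkName::Base => (404, 9_961_472),
        ForkName::Altair => (516, 9_961_984),
        ForkName::Merge => (1024, 10_485_760),
    }
}

// Fold the limits over the current fork AND every earlier fork, so that
// old-fork blocks (e.g. a 404-byte phase0 block) still pass the check.
fn rpc_block_limits_by_fork(current: ForkName) -> (usize, usize) {
    [ForkName::Base, ForkName::Altair, ForkName::Merge]
        .iter()
        .filter(|f| **f <= current)
        .map(|f| block_limits(*f))
        .fold((usize::MAX, 0), |(lo, hi), (min, max)| (lo.min(min), hi.max(max)))
}
```

With this shape, adding a new fork variant means adding one enum arm and one `match` arm, which keeps the hardcoded limits easy to audit in review.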