
Update block's blob_kzg_commitments size limit to MAX_BLOB_COMMITMENTS_PER_BLOCK (4096) #3338

Merged (11 commits) on May 24, 2023

Conversation

@g11tech (Contributor) commented Apr 25, 2023

UPDATE:

  1. Following the discussion, this rolls back the increase in the max-blobs-per-block limit and instead introduces MAX_BLOB_COMMITMENTS_PER_BLOCK for setting the list limit of the beacon block's blob_kzg_commitments (thanks @arnetheduck for bringing up the idea).
  2. Clarifications regarding MAX_BLOBS_PER_BLOCK in the p2p layer and elsewhere.

--------------------------------------Previously---------------------------------------

As per the discussion in the execution specs, it would be nice to have a higher MAX_BLOBS_PER_BLOCK limit for the mainnet spec, which would independently allow ELs to hard fork and increase the data gas limit (and hence the number of blobs per block).

Suggesting a limit of 1024, taking sharding into account.

cc @mkalinin

@ppopth (Member) commented Apr 25, 2023

This doesn't work, because every validator would have to download 128MB (MAX_BLOBS_PER_BLOCK * 128KB = 1024 * 128KB) of sidecars within 4 seconds, which is obviously impossible.

Currently the value is just 4 * 128KB = 512KB, which is far more feasible.
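The arithmetic behind these figures can be checked directly; a minimal sketch, assuming the EIP-4844 blob size of FIELD_ELEMENTS_PER_BLOB * 32 bytes = 128 KiB:

```python
# Back-of-envelope check of the bandwidth figures quoted above.
FIELD_ELEMENTS_PER_BLOB = 4096
BYTES_PER_FIELD_ELEMENT = 32
BLOB_SIZE = FIELD_ELEMENTS_PER_BLOB * BYTES_PER_FIELD_ELEMENT  # 131072 bytes = 128 KiB

def sidecar_bytes(max_blobs_per_block: int) -> int:
    """Worst-case blob sidecar data a validator must download per block."""
    return max_blobs_per_block * BLOB_SIZE

assert sidecar_bytes(4) == 512 * 1024        # 512 KiB, the current limit
assert sidecar_bytes(1024) == 128 * 1024**2  # 128 MiB, the proposed limit
```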

@g11tech (Author) commented Apr 25, 2023

Currently the value is just 4*128KB = 512KB which is far more possible.

This is a top limit with sharding in mind, where clients are not downloading all blobs but doing DAS; a super proposer could still feed that many into the network.

The actual/effective limit on blobs in a block comes from MAX_DATA_GAS_PER_BLOCK // DATA_GAS_PER_BLOB.
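As a sketch, that effective EL-side limit works out as follows; the parameter values below are taken from the EIP-4844 draft as it stood around this discussion and should be treated as illustrative:

```python
# Illustrative EIP-4844 draft parameters (assumptions, not normative values).
DATA_GAS_PER_BLOB = 2**17        # 131072
MAX_DATA_GAS_PER_BLOCK = 2**19   # 524288

# The effective per-block blob limit on the EL side:
effective_limit = MAX_DATA_GAS_PER_BLOCK // DATA_GAS_PER_BLOB
assert effective_limit == 4
```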

@g11tech (Author) commented Apr 25, 2023

Also not super hung up on the 1024 number; it could be something lower as well, so suggestions are welcome.

@ppopth (Member) commented Apr 25, 2023

Also not super hung up on the 1024 number; it could be something lower as well, so suggestions are welcome.

It's true that 1024 is not high with sharding in mind, but this is not sharding; it's 4844.

I think 512KB (4 blobs) is the highest limit that we can have. I'm not sure if 1MB (8 blobs) will work in the network, and I think 2MB (16 blobs) is quite dangerous.

Since the concern is in the CL network and independent of the EL, this limit should be controlled not only by the EL but also by the CL.

Quotes from ethereum/execution-apis#404

makes sense to me if we plan to keep raising the data gas limit which we will, fixed high static limit is good for the CLs as well as for making EL checks

If the EL wants to increase the data gas limit, I think it's better to let the EL increase it only to some extent. If the EL wants to increase it more, it should wait for a CL hard fork.

cc'ing the networking people:
@AgeManning @Nashatyrev @arnetheduck

@mkalinin (Collaborator)

My intuition is based on the fact that the EL limiting the number of blobs per block is more natural than the CL handling it. This intuition might be wrong if the MAX_DATA_GAS_PER_BLOCK parameter is supposed to be updated at runtime; in that case we will have to use MAX_BLOBS_PER_BLOCK as the limit.

@AgeManning (Contributor)

Yeah, I agree with @ppopth and @mkalinin. At least in 4844 we should limit the number of blobs to a reasonable size that the CL network can handle. We don't want runtime updates to start breaking things.

@djrtwo (Contributor) commented Apr 26, 2023

I understand the desire to leave the param with room to grow here so the EL can be independently forked to increase the gas limit. That said, changing from the 4844 regime to the DAS regime should certainly be done with substantial coordination between the layers.

If we give room for buffer here, I would argue to just go to a 2MB max (MAX_BLOBS_PER_BLOCK = 16), i.e. the 1MB (target) / 2MB (max) 4844-style range that is under discussion -- thus allowing some room to grow in EL-only forks.

@g11tech (Author) commented Apr 27, 2023

sounds good, will make an update to 16 for 4844 regime

@hwwhww added the "Deneb (was called: eip-4844)" label on Apr 28, 2023
@@ -4,5 +4,5 @@
 # ---------------------------------------------------------------
 # `uint64(4096)`
 FIELD_ELEMENTS_PER_BLOB: 4096
-# `uint64(2**2)` (= 4)
-MAX_BLOBS_PER_BLOCK: 4
+# `uint64(2**4)` (= 16)
+MAX_BLOBS_PER_BLOCK: 16
Collaborator review comment:

Given how sensitive this is, should a note be made somewhere that the capacity is actually limited in the execution layer?

Author reply:

added the note in the p2p interface

@dankrad (Contributor) commented Apr 30, 2023

I don't understand the argument here. Can't we just agree that the real limit is coded in the EL and this is purely the theoretical list limit?

What would be the justification for "to change it up to 16, you only need to modify the EL; but if you want to go above 16, both the EL and CL need to be changed"? This makes no sense to me. Just having one constant that defines this in one place seems significantly less confusing.

@g11tech (Author) commented May 1, 2023

just a point to note: this would not really be theoretical, in the sense that the CL will subscribe to that many blob_sidecar_${index} topics

@arnetheduck (Contributor)

So lining up the problems - MAX_BLOBS_PER_BLOCK is used in / changes require:

  • BeaconBlockBody - requires HF, invalidates merkle proofs
  • gossip topics - requires HF, no external effects
  • P2P spam protection limits - requires HF, no external effects
  • execution API - no HF needed (can be done with API version update where old version keeps producing "limited" blocks)

Its current value (4) is derived from the numbers that we believe fall within network capacity limits (max size of each blob * number of blobs per block); i.e. we don't want the EL to frivolously increase them without involvement from the CL, because the CL has to deal with the fallout of the change. The alternative is to "dimension" the CL for the full value from the start, which we're not comfortable doing, else we'd be doing it already. We cannot re-dimension the CL after the fact without a HF (gossip topics, spam protection limits, etc., but also promises we make to users with regard to bandwidth consumption when they set up their rigs).

Of all these constraints, the execution API is the least significant reason to increase the limit "just in case" -- it can (likely) be done without a HF, and users will have a reasonable incentive to upgrade outside of the HF should this become a reality. I would completely disregard its upgrade path for the purposes of deciding on a value here.

re MAX_DATA_GAS_PER_BLOCK - agreeing with @ppopth, it seems reasonable to ensure that it is capped at a value corresponding to MAX_BLOBS_PER_BLOCK - in particular, we do not have to upgrade both at the same time: the CL leads, the EL follows - the EL could even be prudent in using a lower limit initially.

I'd thus:

  • use a separate constant for the block body limit (ie MAX_BLOB_COMMITMENTS_PER_BLOCK = 1024)
  • use MAX_BLOBS_PER_BLOCK = 4 for execution API, gossip topics etc - increase these at the next CL HF if networking conditions and developments permit.
  • assert MAX_BLOBS_PER_BLOCK <= MAX_BLOB_COMMITMENTS_PER_BLOCK
  • let the EL do what it wants when it wants with its MAX_DATA_GAS_PER_BLOCK within the bound / capacity established by MAX_BLOBS_PER_BLOCK.
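The proposal above can be sketched as a few constants and invariants; the values below are illustrative (MAX_BLOB_COMMITMENTS_PER_BLOCK later landed at 4096 rather than 1024, and the data-gas parameters are taken from the EIP-4844 draft of the time):

```python
# Sketch of the proposed constant split (illustrative values, not normative).
MAX_BLOB_COMMITMENTS_PER_BLOCK = 1024  # SSZ list limit in BeaconBlockBody
MAX_BLOBS_PER_BLOCK = 4                # networking / execution-API limit

# The CL-side invariant from the proposal:
assert MAX_BLOBS_PER_BLOCK <= MAX_BLOB_COMMITMENTS_PER_BLOCK

# The EL picks its own data-gas limit within the CL bound (draft values):
DATA_GAS_PER_BLOB = 2**17
MAX_DATA_GAS_PER_BLOCK = 2**19
assert MAX_DATA_GAS_PER_BLOCK // DATA_GAS_PER_BLOB <= MAX_BLOBS_PER_BLOCK
```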

@g11tech (Author) commented May 1, 2023

* use a separate constant for the block body limit (ie `MAX_BLOB_COMMITMENTS_PER_BLOCK = 1024`)

I think this is a great idea!

@ppopth (Member) commented May 5, 2023

use a separate constant for the block body limit (ie MAX_BLOB_COMMITMENTS_PER_BLOCK = 1024)

I don't understand this one. With MAX_BLOB_COMMITMENTS_PER_BLOCK > MAX_BLOBS_PER_BLOCK, does it mean that there can be more commitments in a block than the blobs sent to the network?

If so, I think is_data_available will be false for those excessive commitments then?

@ppopth (Member) commented May 5, 2023

I decoupled the gossipsub topics from MAX_BLOBS_PER_BLOCK in #3346. I hope it will enable us to change MAX_BLOBS_PER_BLOCK more easily.

@g11tech (Author) commented May 5, 2023

use a separate constant for the block body limit (ie MAX_BLOB_COMMITMENTS_PER_BLOCK = 1024)

I don't understand this one. With MAX_BLOB_COMMITMENTS_PER_BLOCK > MAX_BLOBS_PER_BLOCK, does it mean that there can be more commitments in a block than the blobs sent to the network?

If so, I think is_data_available will be false for those excessive commitments then?

no, what it means is that MAX_BLOB_COMMITMENTS_PER_BLOCK is set to a high number like 1024, which is what is used in the SSZ specification for blob_kzg_commitments in the Deneb beacon block. So the Merkle proofs constructed today would remain valid in the future, without having to think about which SSZ tree structure to use to verify them.

But it's a list, so it will always be filled with no more than MAX_BLOBS_PER_BLOCK entries.
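A simplified model of why the list limit pins down the Merkle structure; this ignores per-element chunking and the details of real SSZ merkleization, and only illustrates that tree depth depends on the list's limit, not on how many elements are present:

```python
import math

# Toy model (NOT the actual SSZ implementation): the depth of an SSZ
# List's Merkle tree is fixed by its declared limit, plus one level for
# the mixed-in length. The actual element count never changes the depth,
# so proofs built today stay structurally valid if only the fill limit
# (MAX_BLOBS_PER_BLOCK) changes in a later fork.
def list_tree_depth(limit: int) -> int:
    return max(1, math.ceil(math.log2(limit))) + 1

assert list_tree_depth(1024) == 11  # same depth whether 0 or 1024 items
assert list_tree_depth(4096) == 13
```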

@g11tech (Author) commented May 5, 2023

I decoupled the gossipsub topics from MAX_BLOBS_PER_BLOCK in #3346. I hope it will enable us to change MAX_BLOBS_PER_BLOCK more easily.

But what will it do independently of MAX_BLOBS_PER_BLOCK?

So now we have three constants around the same thing:

  • the EL's MAX_DATA_GAS_PER_BLOCK // DATA_GAS_PER_BLOB, which is proposed to change independently yet remain <= MAX_BLOBS_PER_BLOCK
  • MAX_BLOBS_PER_BLOCK, which could be specified higher, and which I guess should also be <= BLOB_SIDECAR_SUBNET_COUNT
  • BLOB_SIDECAR_SUBNET_COUNT

I think what you mean is the 2nd only, since the EL already has its independent data-gas mechanism for defining its limit (the actual limit calculation on the EL side would however be:
blobLimit = min(MAX_DATA_GAS_PER_BLOCK // DATA_GAS_PER_BLOB, MAX_BLOBS_PER_BLOCK))

@rolfyone (Contributor) commented May 5, 2023

I'm not sure if we have a 'target' staker, but many home stakers running a handful of validators on residential DSL plans already struggle from time to time on bandwidth. Adding to that 2MB of data to download, and more importantly upload, might exclude that class of validator from our network, as they won't be able to upload fast enough to participate any more. I'm hoping that at least for 4844 we can have a solution that doesn't exclude them. Admittedly this should improve with full sharding, but in the meantime we should continue to consider our sizings very carefully, or be very transparent that DSL is going to be severely limiting for the home staker.

@g11tech (Author) commented May 6, 2023

I'm not sure if we have a 'target' staker, but many home stakers running a handful of validators on residential DSL plans already struggle from time to time on bandwidth [...]

the real limit right now is controlled by MAX_DATA_GAS_PER_BLOCK // DATA_GAS_PER_BLOB; right now this effectively makes no change w.r.t. on-the-ground requirements

-# `uint64(2**2)` (= 4)
-MAX_BLOBS_PER_BLOCK: 4
+# `uint64(2**4)` (= 16)
+MAX_BLOBS_PER_BLOCK: 16
Contributor review comment:

this should remain 4: that already represents a significant increase in traffic on the gossipsub network

Author reply:

rolled back

@arnetheduck (Contributor)

If we give room for buffer here, I would argue to just go to 2MB max

I don't feel we're ready for this just yet, given that the block itself can also reach this size, totaling 4MB for a block. Since the limit no longer has long-term consensus effects, all that's needed to change it is a coordination hard fork, and we can increase gradually. For increasing the limit, we should also consider other bandwidth-saving techniques (lazy sending, potentially episub, better attestation subnet subscription handling, etc.)

@ppopth (Member) commented May 9, 2023

  • MAX_BLOBS_PER_BLOCK by specifying a higher MAX_BLOBS_PER_BLOCK which i guess should also be <= BLOB_SIDECAR_SUBNET_COUNT

No, MAX_BLOBS_PER_BLOCK should be independent from BLOB_SIDECAR_SUBNET_COUNT. W.r.t. that PR, the number of gossipsub topics (aka subnets) will be BLOB_SIDECAR_SUBNET_COUNT.

If MAX_BLOBS_PER_BLOCK is greater than BLOB_SIDECAR_SUBNET_COUNT, there will be multiple blobs from a block sent in some subnet.
If MAX_BLOBS_PER_BLOCK is less than BLOB_SIDECAR_SUBNET_COUNT, there will be some subnet unused.

As mentioned in #3346, the advantage of doing this is that we can change MAX_BLOBS_PER_BLOCK without worrying about the p2p network structure and the number of subnets.

For example, if we want to change MAX_BLOBS_PER_BLOCK to 1024, that PR ensures that we won't have 1024 gossipsub topics.
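A sketch of the decoupling described above, under the assumption (based on the description of #3346) that sidecar indices map onto a fixed set of subnets by modulo; the subnet count used here is hypothetical:

```python
# Hypothetical value for illustration; not the normative constant.
BLOB_SIDECAR_SUBNET_COUNT = 6

def compute_subnet_for_blob_sidecar(blob_index: int) -> int:
    """Map a blob sidecar index onto one of a fixed number of subnets."""
    return blob_index % BLOB_SIDECAR_SUBNET_COUNT

# With MAX_BLOBS_PER_BLOCK above the subnet count, some subnets simply
# carry multiple blobs per block; the topic set never grows.
assert compute_subnet_for_blob_sidecar(0) == compute_subnet_for_blob_sidecar(6) == 0
assert compute_subnet_for_blob_sidecar(5) == 5
```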

@ppopth (Member) commented May 9, 2023

the real limit right now could be controlled by MAX_DATA_GAS_PER_BLOCK // DATA_GAS_PER_BLOB, r.n. this effectively makes no change w.r.t. on ground requirements

What would be the justification for "to change it up to 16, you only need to modify EL; but if you want to go above 16, both EL and CL need to be changed"? This makes no sense to me. Just having one constant that defines this in one place seems significantly less confusing.

The reason why we shouldn't put the limit only in the EL is the philosophy of layered protocol design.

We should design the CL to work fine even with any silly EL (not only an EL with the max-data-gas limiting). For example, say we have a silly EL that wants to publish 1024 blobs and the CL doesn't have a limit: the network will collapse. Why do we need to think about such a silly EL even if it will never really exist? Let me explain.

All the layered protocols on the Internet have contracts that the upper layer and the lower layer must satisfy, and each layer will work fine with any silly upper or lower protocol, as long as the contracts are satisfied. The benefit of this principle is that it allows us to change the protocol at any layer, as long as the new protocol satisfies the contracts between layers.

For example, HTTP says that it will work fine as long as the lower protocol provides an in-order and reliable connection. Even if the lower protocol is silly, as long as it satisfies the contract, HTTP will work fine. That's why we can use HTTP not only over TCP, but also over TLS, QUIC, and Unix domain sockets -- but not over UDP, because UDP doesn't provide a reliable connection.

Another example is TCP: TCP says that it will work fine as long as the upper protocol is connection-oriented, stream-oriented (doesn't care about write boundaries), and resilient to delay. So TCP works fine with HTTP, etc., but not with DNS or NTP, because DNS is not stream-oriented and NTP is sensitive to delay.

In conclusion, due to this principle, we should ensure that the CL will work fine as long as the upper layer doesn't publish more than MAX_BLOBS_PER_BLOCK blobs per block. And, in order to make such a statement true, MAX_BLOBS_PER_BLOCK has to be set to a reasonable number.

The benefit of this principle is that it allows anyone to just use our CL and design their own EL that satisfies the contract (it will not publish more than MAX_BLOBS_PER_BLOCK blobs per block) without thinking about why it has to be that number.

Without this principle, the EL people will have to work very closely with the CL people, i.e. the EL people will have to look at the CL p2p network spec and decide what the right value for MAX_DATA_GAS_PER_BLOCK // DATA_GAS_PER_BLOB is. That is okay for Ethereum, because we are already working together, but it automatically rules out third-party ELs, because they would have to look at the protocol details of the CL.

If we think that a third-party EL is not an issue and decide not to follow this principle, I think that's okay, but then the EL and CL will be really coupled, and they are not really layers according to the normal definition.

@arnetheduck (Contributor)

And, in order to make such statement true, MAX_BLOBS_PER_BLOCK has to be set to a reasonable number.

It is set to a reasonable number (4): the number of blobs that we're comfortable supporting. You're right that the fact that it coincides with the subnet count is an implementation detail on the CL side -- it's merely offered as the reason why we're uncomfortable going higher: doing so has complex side effects on the CL side that we're working to resolve. Once we've solved them, the number can go up.

Re layers, you bring up an excellent example -- take UDP: it is one of the layers, and part of its "interface" to the next layer is the packet size, which tends to hover in the 600-ish bytes range: it's a limit that you, when designing a protocol on top of UDP, have to take into account (just like packet reordering and so on) -- as long as you stay within this limit, you're fine.

Layered protocols rely on artifacts like MAX_BLOBS_PER_BLOCK to define the interface between layers and in this case, the EL needs to provide "smaller or equal" blocks in order to be compatible with the CL - there's no contradiction or argument here really with respect to layering (outside of the naming of the constants, maybe).

@ppopth (Member) commented May 9, 2023

It is set to a reasonable number (4):

Yeah, I meant I'm against setting it to an unreasonable number. (like 1024)

part of its "interface" to the next layer

I didn't know what to call it, so I called it a contract between layers. It probably should be called an interface instead.

there's no contradiction or argument here really with respect to layering (outside of the naming of the constants, maybe).

As there is a question in this PR about why we shouldn't place the limit only in the EL, I brought up the layering point to explain why we should also have a limit in the CL.

@g11tech (Author) commented May 9, 2023

As there is an question in this PR on why we shouldn't do the limit only in the EL, I brought up the layering thing to explain why we should also have a limit in the CL.

I agree, but then MAX_BLOBS_PER_BLOCK is what that independent variable represents. If the CL isn't ready for more than 4 blobs at all, then I agree to leave it at 4 -- otherwise, what's the point of setting MAX_BLOBS_PER_BLOCK to a higher number without having subnets to transmit them, short of a CL hardfork coordinated with the EL?

Also, it's not really an independent CL variable, because the EL has to limit its blob production per block to min(MAX_DATA_GAS_PER_BLOCK // DATA_GAS_PER_BLOB, MAX_BLOBS_PER_BLOCK), so it has to refer to a CL constant in its hardfork params (which isn't a big deal, but why then have MAX_DATA_GAS_PER_BLOCK?)

Maybe we are just arguing over a hypothetical here, since all the constants right now have to move in tandem and will upgrade in tandem.

The two fruitful outcomes of this discussion, as I see it, are:

  • MAX_BLOB_COMMITMENTS_PER_BLOCK as the theoretical limit for the commitment list in a block
  • specifying MAX_BLOBS_PER_BLOCK in the execution APIs as the limit on blobs in the blobs-bundle response, so as to make the execution layer aware of the CL limit -- which actually makes ELs track this variable in their hardfork configs as well

@ppopth (Member) commented May 18, 2023

If we go the two var approach, this needs to be an assertion in process_operations

Strongly agree, but I think it should be in process_blob_kzg_commitments instead?

def process_blob_kzg_commitments(body: BeaconBlockBody) -> None:
    assert len(body.blob_kzg_commitments) <= MAX_BLOBS_PER_BLOCK # Added
    assert verify_kzg_commitments_against_transactions(body.execution_payload.transactions, body.blob_kzg_commitments)

** Please also don't forget to add the pyspec test for this.
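A self-contained sketch (plain Python, not the actual pyspec test harness) of the behavior such a test would pin down: a commitment list longer than MAX_BLOBS_PER_BLOCK must be rejected.

```python
# Illustrative value; the normative constant lives in the Deneb preset.
MAX_BLOBS_PER_BLOCK = 4

def check_commitment_count(blob_kzg_commitments: list) -> None:
    """Stand-in for the length assertion proposed above."""
    assert len(blob_kzg_commitments) <= MAX_BLOBS_PER_BLOCK

# A list at the limit passes:
check_commitment_count([b"commitment"] * MAX_BLOBS_PER_BLOCK)

# An over-full list must be rejected:
try:
    check_commitment_count([b"commitment"] * (MAX_BLOBS_PER_BLOCK + 1))
    rejected = False
except AssertionError:
    rejected = True
assert rejected
```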

@g11tech (Author) commented May 18, 2023

If we go the two var approach, this needs to be an assertion in process_operations

Strongly agree, but I think it should be in process_blob_kzg_commitments instead? [...]

I think this function is going away, since CL-computed versioned hashes (from CL commitments) will now be verified as part of the new_payload call, so effectively the assertion can be moved out, or this single assertion can stay here.

@djrtwo (Contributor) commented May 18, 2023

Right, the commitments aren't "operations". I was thinking about the precedent of checking some length constraints in process_operations() (https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/beacon-chain.md#operations)

It could go in process_block, solely in process_blob_kzg_commitments, or in process_execution_payload.

If we go with this suggestion here #3359 (comment), I'd recommend process_blob_kzg_commitments. Otherwise, maybe put it in process_execution_payload, which now has the commitments passed into the function.

@g11tech (Author) commented May 19, 2023

If we go with this suggestion here #3359 (comment), I'd recommend process_blob_kzg_commitments. Otherwise, maybe put it in process_execution_payload which now has the commitments passed into the function

awesome will update the check in process_execution_payload unless someone feels strongly against it (and will remove the MAX_BLOBS_PER_BLOCK reference from p2p)

@g11tech force-pushed the update-max-blobs-limit-mainnet branch from 97a075e to 8ccc257 on May 20, 2023
@g11tech (Author) commented May 20, 2023

have moved the MAX_BLOBS_PER_BLOCK references from p2p-interface.md to beacon-chain.md, leaving the addition of the check for MAX_BLOBS_PER_BLOCK to #3359, as it has modified process_execution_payload with a modified input for blob_commitments access

@ppopth (Member) commented May 22, 2023

have moved MAX_BLOBS_PER_BLOCK references to the beacon-chain.md from p2p-interface.md leaving the addition of check for MAX_BLOBS_PER_BLOCK to #3359 as it has modified process_execution_payload with modified input for blob_commitments access

As we don't have a conclusion on whether we want to follow #3359 (comment), and we don't know when #3359 will be merged, I think it's better to put the length check in process_blob_kzg_commitments with this PR first, so that this PR can be merged before #3359.

If we go with this suggestion here #3359 (comment), I'd recommend process_blob_kzg_commitments. Otherwise, maybe put it in process_execution_payload which now has the commitments passed into the function

Yes, that's right. I agree with that.

@g11tech (Author) commented May 22, 2023

process_blob_kzg_commitments

updated! 🙏

@djrtwo mentioned this pull request on May 23, 2023
@djrtwo (Contributor) left a review comment:

I would probably remove the reference to the execution layer constants.

Let's wait on #3359 and then move this into process_execution_payload

@@ -68,7 +68,7 @@ This upgrade adds blobs to the beacon chain as part of Deneb. This is an extensi

| Name | Value |
| - | - |
| `MAX_BLOBS_PER_BLOCK` | `uint64(2**2)` (= 4) |
| `MAX_BLOB_COMMITMENTS_PER_BLOCK` | `uint64(2**12)` (= 4096) | hardfork independent fixed theoretical limit same as `LIMIT_BLOBS_PER_TX` (see EIP 4844) |
Contributor suggested change:

| `MAX_BLOB_COMMITMENTS_PER_BLOCK` | `uint64(2**12)` (= 4096) | hardfork independent fixed theoretical limit same as `LIMIT_BLOBS_PER_TX` (see EIP 4844) |

@@ -40,6 +40,8 @@

This upgrade adds blobs to the beacon chain as part of Deneb. This is an extension of the Capella upgrade.

The blob transactions are packed into the execution payload by the EL/builder, with their corresponding blobs being independently transmitted, and are limited by `MAX_DATA_GAS_PER_BLOCK // DATA_GAS_PER_BLOB`. However, the CL limit is independently defined by `MAX_BLOBS_PER_BLOCK`.
Contributor review comment:

Not sure I would put this reference to execution-layer constraints here.

We definitely don't need a reference to "builder", and there is no definition of "EL" in the consensus-layer specs at this point. If we want to reference that layer, use ExecutionEngine.

@hwwhww (Contributor) commented May 24, 2023

FYI I'm working on tests and pushing commits

Comment on lines +120 to +123
New validation:

- _[REJECT]_ The length of KZG commitments is less than or equal to the limitation defined in Consensus Layer --
i.e. validate that `len(body.signed_beacon_block.message.blob_kzg_commitments) <= MAX_BLOBS_PER_BLOCK`
Contributor review comment:

@djrtwo I added the p2p validation here. Feel free to open a post-merge PR to revise it.

Contributor reply:

yep, makes sense! definitely need the p2p limit given the type isn't protecting us from DoS

@hwwhww left a review:

Thanks @g11tech, LGTM. I'm going to merge it and run test generation. We can apply proofreading suggestions later if needed.
