
fix gossip and tx size limits for the merge #2686

Merged
merged 4 commits into from Oct 28, 2021

Conversation

djrtwo
Contributor

@djrtwo djrtwo commented Oct 20, 2021

This PR addresses two issues:

  1. Adjust the upper bound for Transaction byte size to allow for the actual bounds achievable within current gas limits, and much higher.
  2. Adjust the maximum gossip size to account for the theoretical maximum size of an ExecutionPayload, and to be compatible with network limits in geth and the PoW chain today.

These issues were originally pointed out by @dankrad here -- https://github.com/ethereum/consensus-specs/pull/2607/files#r711053462. Please see there for more context and discussion.

Discussion points for PR:

  1. Assuming a single TX filled with calldata (16 gas per byte) fills an ExecutionPayload, a MAX_BYTES_PER_TRANSACTION of 2**24 can support a gas limit of up to ~270M (2**24 * 16). This seems safe at ~10x the current gas limit, but considering that this is a consensus value that would require a hard fork to upgrade, and that we have "safe" limits in the p2p spec, should we instead push this limit to support a higher gas limit (e.g. use 2**26, which would support a 1B gas limit)? The cost here is extra depth in the Transaction object merkle tree, which would result in a few more hashes per merkleization in the normal case.
  2. MAX_TRANSACTIONS_PER_PAYLOAD is currently 2**14. At a minimum gas cost of 21,000 gas per transaction (simple send), this supports up to a ~344M gas limit, so it is on the same order as the 2**24 limit on MAX_BYTES_PER_TRANSACTION. If we adjust the max bound based on discussion point (1), we should similarly raise this value to be on the same order.
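The gas limits implied by the two discussion points above can be checked with a quick back-of-the-envelope calculation (a sketch; it assumes 16 gas per non-zero calldata byte and 21,000 gas for a minimal transfer, the standard EVM costs):

```python
CALLDATA_GAS_PER_BYTE = 16   # cost of a non-zero calldata byte (post-EIP-2028)
MIN_GAS_PER_TX = 21_000      # intrinsic cost of a simple value transfer

def gas_from_tx_bytes(max_bytes_per_transaction: int) -> int:
    """Gas consumed if a single max-size transaction is pure calldata."""
    return max_bytes_per_transaction * CALLDATA_GAS_PER_BYTE

def gas_from_tx_count(max_transactions_per_payload: int) -> int:
    """Gas consumed if a payload is filled with minimal 21k-gas sends."""
    return max_transactions_per_payload * MIN_GAS_PER_TX

print(gas_from_tx_bytes(2**24))   # 268435456  (~270M, point 1)
print(gas_from_tx_bytes(2**26))   # 1073741824 (~1B alternative)
print(gas_from_tx_count(2**14))   # 344064000  (~344M, point 2)
```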

EDIT:

After discussion of the above two points, MAX_BYTES_PER_TRANSACTION is increased to 2**30 and MAX_TRANSACTIONS_PER_PAYLOAD is increased to 2**20. This ensures that both data structures, when used in the extreme, can handle gas limits of ~20B. This is 1000x greater than the gas limit today, so, barring incredible/unforeseen circumstances, this allows for growth without concern.
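As a sanity check on the final numbers (a sketch; it assumes the same 16 gas per calldata byte and 21,000 gas per minimal transaction as the EVM charges today):

```python
# Final constants chosen in this PR.
MAX_BYTES_PER_TRANSACTION = 2**30     # 1,073,741,824 bytes
MAX_TRANSACTIONS_PER_PAYLOAD = 2**20  # 1,048,576 transactions

# Gas limit each structure can accommodate in the extreme case.
gas_via_calldata = MAX_BYTES_PER_TRANSACTION * 16         # ~17.2B gas
gas_via_tx_count = MAX_TRANSACTIONS_PER_PAYLOAD * 21_000  # ~22.0B gas

# Both land on the order of ~20B gas, roughly 1000x current limits.
print(gas_via_calldata, gas_via_tx_count)
```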

Some discussion on the "cost" involved in hashing is here:
#2686 (comment)

@dankrad
Contributor

dankrad commented Oct 20, 2021

As a further input on this, I think that once statelessness becomes a reality, raising the gas limit by a significant factor will be quite realistic.
The 10x factor should still give enough headroom for a few years, but I wouldn't bet on more than 5 for sure.

@@ -2,8 +2,8 @@

# Execution
# ---------------------------------------------------------------
# 2**20 (= 1,048,576)
MAX_BYTES_PER_TRANSACTION: 1048576
# 2**24 (= 16,777,216)
Collaborator


If we go from 2**20 to 2**30 (instead of 2**24), we add 10 hashes of work (64-byte input each) per transaction to mix in zero-hashes. At 1000 transactions per block, that's 10,000 hashes. The participation registry in Altair is mostly re-merkleized (sparse and many changes, condensed leaf nodes), and at 250k validators (mainnet today) (packed uint8, 32 per leaf) that translates to (250000/32)*2-1=15624 hashes. So this compares to adding another one of those in terms of performance cost. Worth the future-compatibility, I think.
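The comparison in this comment can be reproduced numerically (a sketch using the figures assumed in the comment: 1000 transactions per block and 250k validators):

```python
# Extra zero-hash mix-ins per transaction when the SSZ list limit's
# merkle depth grows from log2(2**20) = 20 to log2(2**30) = 30.
extra_depth = 30 - 20                          # 10 extra hashes per tx
txs_per_block = 1000
extra_tx_hashes = extra_depth * txs_per_block  # 10,000 hashes per block

# Re-merkleizing the Altair participation registry at 250k validators,
# packed as uint8 (32 per 32-byte leaf): a full binary tree over leaves.
validators = 250_000
leaves = validators / 32                       # 7812.5 leaves
registry_hashes = int(leaves * 2 - 1)          # 15,624 hashes

print(extra_tx_hashes, registry_hashes)        # comparable magnitudes
```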

Contributor Author


That's for MAX_BYTES_PER_TRANSACTION

You also get some depth via increasing MAX_TRANSACTIONS_PER_PAYLOAD (which you probably want to increase in concert). But, correct me if I'm wrong, this doesn't scale the same way with the number of TXs. You essentially get a single mix-in per depth, so just +X hashes when moving from 2**N to 2**(N+X).
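The scaling difference described here can be sketched as two cost functions (an illustration, not spec code; the function names are mine):

```python
# Raising MAX_BYTES_PER_TRANSACTION: every transaction's own merkle
# tree gains `extra_depth` zero-hash mix-ins, so the total cost scales
# with the number of transactions in the block.
def per_tx_limit_cost(extra_depth: int, num_txs: int) -> int:
    return extra_depth * num_txs

# Raising MAX_TRANSACTIONS_PER_PAYLOAD: only the outer transactions
# list gains one mix-in per added level, independent of tx count.
def per_payload_limit_cost(extra_depth: int) -> int:
    return extra_depth

print(per_tx_limit_cost(10, 1000))  # 10000 extra hashes (2**20 -> 2**30)
print(per_payload_limit_cost(6))    # 6 extra hashes (2**14 -> 2**20)
```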

Contributor Author


2^30 seems crazy? 1GB!

but it does firmly get us out of the range we might realistically bump into, which is a reasonable design goal, imo

Contributor Author


updated constants based on this convo. check it out

@hwwhww hwwhww added the Bellatrix CL+EL Merge label Oct 22, 2021
@djrtwo
Contributor Author

djrtwo commented Oct 22, 2021

bump on review @protolambda @dankrad

Collaborator

@protolambda protolambda left a comment


Another edge case that we should make clients aware of: now that valid application blocks can technically reach the 10 MiB limit, clients should apply that limit consistently. Say one client applies the 10 MiB limit to the size including envelope-related bytes, and another applies it without them; then a block near the limit cannot get processed by some of the clients via gossip, and there could be a temporary chain-split.
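A minimal illustration of the pitfall (a sketch; `ENVELOPE_OVERHEAD` is a hypothetical framing-byte count for illustration, not a spec value):

```python
GOSSIP_MAX_SIZE = 10 * 2**20   # 10 MiB uncompressed gossip limit
ENVELOPE_OVERHEAD = 512        # hypothetical gossipsub envelope bytes

payload_size = GOSSIP_MAX_SIZE  # a valid block exactly at the limit

# Client A checks the raw payload; client B includes envelope bytes.
client_a_accepts = payload_size <= GOSSIP_MAX_SIZE                      # True
client_b_accepts = payload_size + ENVELOPE_OVERHEAD <= GOSSIP_MAX_SIZE  # False

# Divergent accept/reject on the same valid block -> temporary split.
print(client_a_accepts, client_b_accepts)
```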

@@ -77,8 +94,7 @@ Alias `block = signed_beacon_block.message`, `execution_payload = block.body.exe
- _[REJECT]_ The execution payload block hash is not equal to the parent hash --
i.e. `execution_payload.block_hash != execution_payload.parent_hash`.
- _[REJECT]_ The execution payload transaction list data is within expected size limits,
the data MUST NOT be larger than the SSZ list-limit,
and a client MAY be more strict.
the data MUST NOT be larger than the SSZ list-limit.
Collaborator


This diff is confusing. Clients should be more strict than the SSZ list-limit, as that limit is higher than the gossip size limit.

Contributor Author


Yeah, I see that.

It is a MUST, but the other gossip restriction is tighter. I'm loosely in favor of leaving it as is, so that the restriction is essentially min(max_gossip_size, SSZ_limit).
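The effective bound being described is the tighter of the two limits (a sketch using the values from this PR; the SSZ bound shown is the loose worst-case product of the two list limits):

```python
GOSSIP_MAX_SIZE_MERGE = 10 * 2**20  # 10 MiB gossip message limit

# Loose upper bound on transactions data under the SSZ type:
# up to 2**20 transactions of up to 2**30 bytes each.
SSZ_LIST_LIMIT = 2**20 * 2**30      # 2**50 bytes, far above gossip limit

# Both conditions apply; the binding one is whichever is smaller.
effective_limit = min(GOSSIP_MAX_SIZE_MERGE, SSZ_LIST_LIMIT)
print(effective_limit == GOSSIP_MAX_SIZE_MERGE)  # the gossip limit binds
```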

Contributor Author


I removed the condition. It's purely extraneous


| Name | Value | Description |
|---|---|---|
| `GOSSIP_MAX_SIZE_MERGE` | `10 * 2**20` (= 10,485,760, 10 MiB) | The maximum allowed size of uncompressed gossip messages starting at the Merge upgrade |
Collaborator


Why should we have a new configurable setting (i.e. a _MERGE suffix here) instead of just overwriting the already-existing one?

Contributor Author


That is generally how we handle configuration value updates, so that we can handle and reason about fork boundaries more simply.

See discussion in config README -- https://github.com/ethereum/consensus-specs/tree/dev/configs#forking

Collaborator

@protolambda protolambda left a comment


The phase0 spec defines MAX_CHUNK_SIZE: 2**20, which applies to all responses in req-resp. Thus it also applies to the beacon_blocks_by_range and beacon_blocks_by_root topics, limiting historical sync of blocks to 1 MiB per block. This constant is loosely defined as "network configuration" and is not generally part of the chain config. It can be safely overwritten in place to any value, provided clients properly implement the SSZ size limits for the other request/response types. So we should define the change to MAX_CHUNK_SIZE, and remind the Merge implementers to apply type-specific limits (10 MiB on the metadata req/resp could be much worse than on blocks, since it may be rate-limited or scored differently).
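The type-specific clamping being suggested could look roughly like this (a sketch; the per-type SSZ bounds in the table are illustrative placeholders, not spec-exact values):

```python
# Old phase0 chunk limit vs. a merge-era limit large enough for blocks.
PHASE0_MAX_CHUNK_SIZE = 2**20      # 1 MiB
MERGE_MAX_CHUNK_SIZE = 10 * 2**20  # 10 MiB, matching the gossip limit

# Hypothetical per-type maximum SSZ sizes. A client should clamp each
# response type to min(chunk limit, that type's own SSZ bound) rather
# than allowing 10 MiB on every topic (e.g. tiny metadata responses).
TYPE_MAX_SSZ_SIZE = {
    "beacon_blocks_by_range": MERGE_MAX_CHUNK_SIZE,
    "metadata": 2**6,              # small fixed-size object (illustrative)
}

def response_limit(topic: str) -> int:
    """Effective byte limit for one response chunk on a topic."""
    return min(MERGE_MAX_CHUNK_SIZE, TYPE_MAX_SSZ_SIZE[topic])

print(response_limit("metadata"))  # 64, not 10 MiB
```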
