
EIP-7251: Stricter bound for consolidation queue #3661

Closed
dapplion wants to merge 3 commits

Conversation

@dapplion (Collaborator) commented Apr 9, 2024

MaxEB is the first feature to splice SSZ lists. This operation is expensive with respect to hashing and memory de-duplication. I propose adding stricter bounds to the consolidation queue. Keeping the consolidation queue as small as possible will significantly mitigate those drawbacks.

A consolidation must remain in the queue for at least MIN_VALIDATOR_WITHDRAWABILITY_DELAY + 1 + MAX_SEED_LOOKAHEAD epochs. Any additional time beyond this minimum exists only to cover the churn. Consolidations can wait for the churn either inside the state or in the operation mem-pool. This PR forces consolidations to sit in the mem-pool whenever the churn exceeds the minimum delay.

Assuming a max supply of 130M ETH, all staked, the worst case for the consolidation queue with 16 ETH consolidations is 256 * 130000000 / 65536 / 16 = 31738. So if the assumptions of this PR hold, we can reduce the max size of the list from its current value of 262144. With current mainnet parameters, this change will likely keep the queue to a few thousand entries during periods of high consolidation demand.
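A minimal sketch of that back-of-the-envelope arithmetic, assuming the 256 factor corresponds to MIN_VALIDATOR_WITHDRAWABILITY_DELAY and 16 ETH is the smallest consolidation (these mappings are my reading of the numbers above, not normative):

```python
# Back-of-the-envelope bound on the consolidation queue size; the values
# below are the assumptions quoted above, not normative spec constants.
MIN_VALIDATOR_WITHDRAWABILITY_DELAY = 256  # epochs a consolidation must stay queued (assumed horizon)
CHURN_LIMIT_QUOTIENT = 65_536              # mainnet churn quotient
TOTAL_STAKE_ETH = 130_000_000              # worst case: the entire supply is staked
MIN_CONSOLIDATION_ETH = 16                 # smallest balance a consolidation can move (assumption)

churn_eth_per_epoch = TOTAL_STAKE_ETH / CHURN_LIMIT_QUOTIENT  # ~1984 ETH per epoch
queue_bound = MIN_VALIDATOR_WITHDRAWABILITY_DELAY * churn_eth_per_epoch / MIN_CONSOLIDATION_ETH
print(int(queue_bound))  # 31738, versus the current list limit of 262144
```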

@dapplion requested a review from @mkalinin, April 9, 2024 05:15
@dapplion force-pushed the eip-7251-bound-consolidations branch from f1e7402 to 79c78d0, April 9, 2024 05:16
@hwwhww added the Electra label, Apr 9, 2024
@hwwhww changed the title from "Stricter bound for consolidation queue" to "EIP-7251: Stricter bound for consolidation queue", Apr 9, 2024
@hwwhww (Contributor) left a comment

All the state.pending_* = state.pending_*[next_index:] assignments concerned me too. I agree that implementing a method to manage the frequency would be optimal.

Review thread on specs/_features/eip7251/beacon-chain.md (outdated, resolved)
Co-authored-by: Hsiao-Wei Wang <hsiaowei.eth@gmail.com>
@dapplion (Collaborator, Author) commented Apr 12, 2024

> All the state.pending_* = state.pending_*[next_index:] assignments concerned me too. I agree that implementing a method to manage the frequency would be optimal.

Let's review the other two:

  • pending_balance_deposits: unbounded, but there's not much we can do since all deposits must be accounted for. Maybe we could only process deposits every N epochs.
  • pending_partial_withdrawals: should be bounded by EIP-7002's exponential fee, but it's mutated every slot. Worth modeling the worst case.

@fradamt (Contributor) commented Apr 15, 2024

> > All the state.pending_* = state.pending_*[next_index:] assignments concerned me too. I agree that implementing a method to manage the frequency would be optimal.
>
> Let's review the other two:
>
> • pending_balance_deposits: unbounded, but there's not much we can do since all deposits must be accounted for. Maybe we could only process deposits every N epochs.

We could also keep processing them every epoch but only clean up the already-processed entries every N epochs.
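A rough sketch of that idea, with hypothetical names (PENDING_DEPOSITS_CLEANUP_PERIOD and the pending_balance_deposits_cursor state field are illustrative, not spec definitions):

```python
# Hypothetical "process every epoch, splice every N epochs" pattern.
# PENDING_DEPOSITS_CLEANUP_PERIOD and state.pending_balance_deposits_cursor
# are illustrative names, not part of the spec.
PENDING_DEPOSITS_CLEANUP_PERIOD = 16  # assumed N

def process_pending_balance_deposits_sketch(state, available_churn) -> None:
    cursor = state.pending_balance_deposits_cursor
    # Apply deposits every epoch, advancing a cursor instead of splicing the list.
    while cursor < len(state.pending_balance_deposits):
        deposit = state.pending_balance_deposits[cursor]
        if deposit.amount > available_churn:
            break
        increase_balance(state, deposit.index, deposit.amount)
        available_churn -= deposit.amount
        cursor += 1
    state.pending_balance_deposits_cursor = cursor

    # Pay for the list splice (and the re-hash of every shifted element)
    # only once every N epochs.
    if get_current_epoch(state) % PENDING_DEPOSITS_CLEANUP_PERIOD == 0:
        state.pending_balance_deposits = state.pending_balance_deposits[cursor:]
        state.pending_balance_deposits_cursor = 0
```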

@fradamt (Contributor) left a comment

I don't mind this approach, but I see the criterion of "churn exceeding minimum delay" as somewhat arbitrary. I don't see why we couldn't have epoch = get_current_epoch(state) + MAX_EPOCHS_CONSOLIDATION_QUEUE for any reasonable value of the latter.
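For concreteness, the alternative criterion could look roughly like the following sketch; the constant MAX_EPOCHS_CONSOLIDATION_QUEUE and its value are placeholders, not spec definitions:

```python
# Sketch of the fixed-horizon bound suggested above; the constant name and
# value are placeholders, not spec definitions.
MAX_EPOCHS_CONSOLIDATION_QUEUE = 512  # any "reasonable value" per the comment above

def is_within_consolidation_queue_bound(state, consolidation_epoch) -> bool:
    # A consolidation whose churn-assigned epoch falls beyond this horizon
    # would be rejected and left to wait in the operation mem-pool.
    return consolidation_epoch <= get_current_epoch(state) + MAX_EPOCHS_CONSOLIDATION_QUEUE
```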

Review threads on specs/_features/eip7251/beacon-chain.md (outdated, resolved)
Co-authored-by: fradamt <104826920+fradamt@users.noreply.github.com>
@ralexstokes (Member) commented

Have we considered using a (small) ring buffer for these? It would avoid any concerns about a "splicing" operation shuffling data around, and the resulting implications for hashing overhead.

@dapplion (Collaborator, Author) commented

> Have we considered using a (small) ring buffer for these? It would avoid any concerns about a "splicing" operation shuffling data around, and the resulting implications for hashing overhead.

You have to size the ring buffer to a sufficiently large, forward-compatible capacity, which is inefficient in terms of serialized size, and you also need to track the pointer.
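To illustrate the trade-off being discussed: a ring buffer avoids shifting elements, but it carries its full (generously sized) capacity in the serialized state plus extra indices. A plain-Python sketch, not an SSZ definition; the capacity and field names are assumptions:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Capacity must be chosen generously up front to stay forward-compatible,
# and is always carried in the serialized state.
CONSOLIDATION_RING_CAPACITY = 262_144

@dataclass
class PendingConsolidationRing:
    slots: List[Optional[object]] = field(
        default_factory=lambda: [None] * CONSOLIDATION_RING_CAPACITY
    )
    head: int = 0   # index of the oldest pending entry (the "pointer" to track)
    count: int = 0  # number of occupied slots

    def push(self, item) -> None:
        assert self.count < CONSOLIDATION_RING_CAPACITY
        self.slots[(self.head + self.count) % CONSOLIDATION_RING_CAPACITY] = item
        self.count += 1

    def pop_oldest(self):
        # No elements are shifted: only the touched slot and the indices
        # change, but the fixed capacity is still paid for in serialization.
        assert self.count > 0
        item = self.slots[self.head]
        self.slots[self.head] = None
        self.head = (self.head + 1) % CONSOLIDATION_RING_CAPACITY
        self.count -= 1
        return item
```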

@philknows mentioned this pull request May 2, 2024 (12 tasks)
@hwwhww mentioned this pull request Jun 4, 2024 (10 tasks)
@dapplion (Collaborator, Author) commented Jun 7, 2024

Now that consolidations are execution-triggered, I don't think this PR makes as much sense. It would be pretty bad UX for users to pay the fee on the consolidation contract only for their message to be dropped later. The exponential fee should incentivize users not to fill the queue.

Thoughts? @mkalinin @fradamt

@mkalinin (Collaborator) commented

> Now that consolidations are execution-triggered, I don't think this PR makes as much sense. It would be pretty bad UX for users to pay the fee on the consolidation contract only for their message to be dropped later. The exponential fee should incentivize users not to fill the queue.
>
> Thoughts? @mkalinin @fradamt

Sounds reasonable to me.

@dapplion closed this Jun 11, 2024