EIP-7251: Stricter bound for consolidation queue #3661
Conversation
f1e7402 to 79c78d0
All the `state.pending_* = state.pending_*[next_index:]` assignments concerned me too. I agree that implementing a method to manage the frequency would be optimal.
Co-authored-by: Hsiao-Wei Wang <hsiaowei.eth@gmail.com>
Let's review the other two:

> We can also keep processing them every epoch but only clean up the already processed ones every N epochs
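A minimal sketch of that suggestion: keep consuming pending entries every epoch via a cursor, and only splice the already-processed prefix out of the list every N epochs. The function name and the cleanup interval are illustrative, not from the spec.

```python
# Assumed interval, not a spec constant.
CLEANUP_INTERVAL_EPOCHS = 8

def maybe_cleanup(pending, next_index, current_epoch):
    """Splice out already-processed entries only on cleanup epochs.

    `pending` is the queue list, `next_index` points at the first
    unprocessed entry. Returns the (possibly spliced) list and cursor.
    """
    if current_epoch % CLEANUP_INTERVAL_EPOCHS == 0 and next_index > 0:
        # The expensive splice happens here, once every N epochs.
        return pending[next_index:], 0
    # Otherwise leave the list untouched; processing just advances the cursor.
    return pending, next_index
```

This amortizes the splice (and the re-hashing it triggers) over N epochs at the cost of keeping processed entries in the state a little longer.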
I don't mind this approach, but I see the criterion of "churn exceeding minimum delay" as somewhat arbitrary. I don't see why we couldn't have `epoch = get_current_epoch(state) + MAX_EPOCHS_CONSOLIDATION_QUEUE` for any reasonable value of the latter.
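A hedged sketch of that alternative bound: a consolidation is only admitted to the in-state queue if its churn-assigned epoch falls within a fixed horizon from the current epoch; otherwise it waits in the mempool. `MAX_EPOCHS_CONSOLIDATION_QUEUE` and the other values below are illustrative placeholders, not mainnet constants.

```python
# Illustrative values only; the real constants live in the consensus specs.
MIN_VALIDATOR_WITHDRAWABILITY_DELAY = 256
MAX_SEED_LOOKAHEAD = 4
MAX_EPOCHS_CONSOLIDATION_QUEUE = 512  # hypothetical horizon

def consolidation_fits_in_queue(churn_epoch, current_epoch):
    """Accept a consolidation only if its churn epoch is within the horizon.

    The horizon can never be shorter than the mandatory minimum delay a
    consolidation must sit in the queue anyway.
    """
    min_delay = MIN_VALIDATOR_WITHDRAWABILITY_DELAY + 1 + MAX_SEED_LOOKAHEAD
    cap = current_epoch + max(min_delay, MAX_EPOCHS_CONSOLIDATION_QUEUE)
    return churn_epoch <= cap
```

With this shape, the "churn exceeding minimum delay" criterion becomes just the special case `MAX_EPOCHS_CONSOLIDATION_QUEUE == min_delay`.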
Co-authored-by: fradamt <104826920+fradamt@users.noreply.github.com>
Have we considered using a (small) ring buffer for these? It would avoid any concerns around a "splicing" operation shuffling data around, and the resulting implications for hashing overhead.
You would have to size the ring buffer to a sufficiently large, forward-compatible capacity, which is inefficient in serialized size, and you also need to track the pointer.
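To make the trade-off concrete, here is a minimal ring-buffer sketch over a fixed-capacity vector: pops move a head pointer instead of splicing, but the full capacity is always serialized and the pointer must be tracked in state. All names and the capacity are illustrative.

```python
RING_CAPACITY = 1024  # must be sized generously for forward compatibility

class PendingRing:
    """Fixed-capacity FIFO backed by a vector plus head/count pointers."""

    def __init__(self):
        self.slots = [None] * RING_CAPACITY  # serialized size is always full
        self.head = 0    # index of the oldest live entry
        self.count = 0   # number of live entries

    def push(self, item):
        if self.count == RING_CAPACITY:
            raise OverflowError("ring buffer full")
        self.slots[(self.head + self.count) % RING_CAPACITY] = item
        self.count += 1

    def pop(self):
        if self.count == 0:
            raise IndexError("ring buffer empty")
        item = self.slots[self.head]
        self.slots[self.head] = None  # clear slot; no data shuffling
        self.head = (self.head + 1) % RING_CAPACITY
        self.count -= 1
        return item
```

Only the touched slot and the two pointers change per operation, so hashing churn is small; the cost is carrying `RING_CAPACITY` slots in the serialized state forever.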
Now that consolidations are execution-triggered, I don't think this PR makes as much sense. It would be pretty bad UX for users to pay the fee on the consolidation contract only for their message to be later dropped. The exponential fee should incentivize users not to fill the queue.
Sounds reasonable to me
MaxEB is the first feature to splice SSZ lists. This operation is expensive with respect to hashing and memory de-duplication. I propose adding stricter bounds to the consolidation queue: keeping the queue as small as possible will significantly mitigate those drawbacks.

A consolidation must remain in the queue at least `MIN_VALIDATOR_WITHDRAWABILITY_DELAY + 1 + MAX_SEED_LOOKAHEAD` epochs. Any additional time beyond this minimum is to cover the churn. Consolidations can wait for the churn either inside the state or in the operation mempool. This PR forces consolidations to sit in the mempool if the churn exceeds the minimum delay.

Assuming a max supply of 130M ETH, all staked, the worst case for the consolidation queue with 16 ETH consolidations is `256 * 130000000 / 65536 / 16 = 31738`, so if the assumptions of this PR are okay, we can reduce the max size of the list from the current value of 262144. With current mainnet params this change will likely result in a queue size of a few thousand during high demand for consolidations.
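The worst-case estimate above can be reproduced directly: the queue absorbs roughly 256 epochs (the minimum delay) of churn, per-epoch churn is the total stake divided by the churn quotient of 65536, and each consolidation moves at least 16 ETH. The variable names below are illustrative.

```python
# Reproducing the worst-case queue-size estimate from the PR description.
TOTAL_STAKE_ETH = 130_000_000   # assumed max supply, all staked
CHURN_QUOTIENT = 65536          # per-epoch churn = stake / quotient
QUEUE_EPOCHS = 256              # ~ the minimum queue delay in epochs
MIN_CONSOLIDATION_ETH = 16      # smallest consolidation considered

# Epochs of churn held in the queue, divided by the smallest unit per entry.
worst_case_entries = (
    QUEUE_EPOCHS * TOTAL_STAKE_ETH // CHURN_QUOTIENT // MIN_CONSOLIDATION_ETH
)
print(worst_case_entries)  # 31738
```

This is roughly an 8x reduction from the current list bound of 262144.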