fix(op-node): Correctly pop unsafe payload queue when payload is too … #6324
Conversation
cc @ajsutton
Codecov Report

```
@@            Coverage Diff             @@
##           develop    #6324       +/-   ##
============================================
+ Coverage    30.84%   44.50%   +13.65%
============================================
  Files          429      449       +20
  Lines        27559    29246     +1687
  Branches       748      748
============================================
+ Hits          8500    13015     +4515
+ Misses       18472    15170     -3302
- Partials       587     1061      +474
```

Flags with carried forward coverage won't be shown.
Ooo, thank you. I'll get @protolambda to take a look since he knows this code really well and it is quite an important area. One additional bit of context is that p2p sync is now enabled by default, so while we previously went to almost any length to avoid dropping gossip we might need, we can now be a bit more relaxed knowing that p2p sync can fetch the blocks if needed.

I think there may be a case where an L1 reorg causes the safe head to drop back, so there's potentially a period where we haven't seen that reorg yet but the sequencer has published new unsafe payloads that are prior to our current unsafe head (but after the post-reorg unsafe head). This change would then drop those payloads and depend on p2p sync filling them in. However, that's likely less common and resolves faster than the case this fixes, where p2p sync winds up adding a payload just as the safe head advances and unsafe processing stalls until the next safe head update.
Note that it would only be able to fill them in after first reorging the chain using the safe-head reorg mechanism. Since we do not really have a forkchoice other than following the latest canonical L1 chain, we do not allow unsafe-head reorgs until seeing the new L1 chain.
This PR has been added to the merge queue, and will be merged soon.
Hey @Kelvyne, this pull request failed to merge and has been dequeued from the merge train. If you believe your PR failed in the merge train because of a flaky test, requeue it by commenting with
Yep, agreed. That is similar to the current setup, where importing unsafe payloads stops because the next payload is prior to the current unsafe head, until the L1 reorg is processed; then that first pending unsafe payload should build on the new safe head and be able to be imported.
…t head When activating the `unsafeL2Payload` queue, kroma-node stops processing the queue if a payload is older than the rollup's `unsafeL2Head`. This happens more frequently when batches are pushed more slowly. This change pops the unsafe payload queue when a payload is older than the current unsafe head. See ethereum-optimism/optimism#6324
…older
Fixes #6092
Description
See #6092
Tests
The test is minimal: it only ensures that the unsafe payload queue is popped.
Additional context