[op-batcher] Add support for multiple batcher transactions per L1 block #5398
Conversation
Force-pushed from fc0c21c to 9c47f84
I feel like the better way to implement this is in the
@tynes totally agree 👍 will refactor
I think @trianglesphere has been thinking about this
I've been thinking about this as well & was going to start work on it this week. Some of the refactors we've made to the batcher make it slightly harder to parallelize, but it should still be relatively easy to do async tx sending & async tx confirmation. There might be some slight difficulty with shutdown, but only driver.go should have to be modified in the batcher.
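The async send + async confirmation pattern described above can be sketched roughly as below. This is a minimal illustration, not the batcher's actual code: `txCandidate`, `receipt`, and `sendAsync` are hypothetical stand-ins for the real op-batcher/txmgr types.

```go
package main

import (
	"fmt"
	"sync"
)

// txCandidate stands in for a frame-carrying batcher tx; the real types
// live in the op-batcher and txmgr packages.
type txCandidate struct{ id int }

type receipt struct {
	id  int
	err error
}

// sendAsync submits each candidate in its own goroutine and funnels
// confirmations onto a single channel: async send + async confirmation.
func sendAsync(candidates []txCandidate) []receipt {
	results := make(chan receipt, len(candidates))
	var wg sync.WaitGroup
	for _, c := range candidates {
		wg.Add(1)
		go func(c txCandidate) {
			defer wg.Done()
			// In the real batcher this would be a blocking txmgr send
			// that returns once the tx is confirmed or errors out.
			results <- receipt{id: c.id}
		}(c)
	}
	wg.Wait()
	close(results)
	var out []receipt
	for r := range results {
		out = append(out, r)
	}
	return out
}

func main() {
	rs := sendAsync([]txCandidate{{1}, {2}, {3}})
	fmt.Println(len(rs)) // 3
}
```

Shutdown is the tricky part the comment alludes to: with this shape, draining is just waiting on the WaitGroup before the driver exits.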
Force-pushed from e5df09e to bfbafde
Hey @mdehoog! This PR has merge conflicts. Please fix them before continuing review.
Force-pushed from 1ecfb6a to 5dd98ed
I like the approach. The new functions in driver.go are really nice.
More of an idle thought: I wonder what it'd look like to integrate the concurrency (+ limits) into the txmgr, or into a layer on top of the txmgr that's not in the batch submitter.
Force-pushed from 5dd98ed to aa73508
Force-pushed from 6cd6b3f to 499417b
Nice. I really like the Queue interface & how it's used. Inside the queue, I think the code is correct, but I don't think we need the full complexity of sync.Cond there.
- Move pending metrics to txmgr package (and switch to int64)
- Timeout context in tests
- require.Duration
- Fix comment docs
Force-pushed from 5a9b391 to 9dc9933
LGTM now! Just a nit about a redundant check in tests.
Merged af19fae into ethereum-optimism:develop
Description
Adds support for sending multiple transactions from the batcher at once. A new flag has been added: --max-pending-tx, which controls the number of pending transactions sent to the txmgr.
The txmgr was refactored to allow concurrent calls to Send. Some internal nonce tracking code was added that manually increments the nonce for each tx (after initially pulling it from an L1 node). If any of the transaction sends experiences an error, the entire set of pending txs is canceled, and the cached nonce is cleared.
Note: currently the batcher only supports a single pending channel, so multiple txs will only be submitted at once when a channel has multiple frames. Multiple pending channels will come in a later PR.
Tests
Adds tests for txmgr nonce management + reset, as well as tests for the new TxQueue.
Additional context
At times we are experiencing more L2 state than the batcher can keep up with, and our safe chain is falling behind our unsafe chain. This PR will allow a greater throughput of batcher transactions by allowing multiple txs per L1 block.
Here's an example block that has 10 transactions from our batcher (0x53394266fc80e5d4e6d25b3d0b7ca243859b7b09).
TODOs