Avoid freq switch of non-4844 to 4844 batch post #2158

Merged
merged 5 commits into master from batch-4844-hysteresis on Feb 28, 2024

Conversation

Tristan-Wilson
Member

@Tristan-Wilson Tristan-Wilson commented Feb 25, 2024

Logic to prevent switching from non-4844 batches to 4844 batches too often, so that blocks can be filled efficiently. The geth txpool rejects txs from accounts that already have txs of the other type in the pool, with "address already reserved". This logic makes sure that, if there is a backlog, enough non-4844 batches have been posted to fill a block before switching to 4844.
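Conceptually, the rule can be sketched with a simple counter. This is a rough sketch only: blobSwitchGate, recordBatch, and mayPost4844 are illustrative names, not the batch poster's actual API, and the threshold of 16 is the value used in the tests below.

```go
// Sketch of the hysteresis rule: after switching away from 4844, only switch
// back while a backlog exists once enough consecutive non-4844 batches have
// been posted to fill an L1 block. All names here are hypothetical.
package main

import "fmt"

const nonBlobBatchesToFillBlock = 16 // assumed threshold, matching the tests below

type blobSwitchGate struct {
	consecutiveNonBlobBatches int
}

// recordBatch updates the counter after each batch is posted.
func (g *blobSwitchGate) recordBatch(usedBlobs bool) {
	if usedBlobs {
		g.consecutiveNonBlobBatches = 0
	} else {
		g.consecutiveNonBlobBatches++
	}
}

// mayPost4844 reports whether the next batch may be a 4844 (blob) batch.
// With no backlog, switching is always allowed; with a backlog, enough
// consecutive non-4844 batches must have been posted first.
func (g *blobSwitchGate) mayPost4844(backlogExists bool) bool {
	if !backlogExists {
		return true
	}
	return g.consecutiveNonBlobBatches >= nonBlobBatchesToFillBlock
}

func main() {
	var gate blobSwitchGate
	for i := 0; i < 16; i++ {
		gate.recordBatch(false) // non-4844 batches posted while blobs were expensive
	}
	fmt.Println(gate.mayPost4844(true))  // true: enough non-4844 batches posted, may switch
	fmt.Println(gate.mayPost4844(false)) // true: no backlog, switching is always allowed
}
```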

Testing Done

Nitro needs to be started with a smaller batch size limit and a longer max-delay so we can create a backlog easily, and with 4844 posting enabled. The sequencer max-tx-data-size also needs to be set low enough that transactions fit within the batch size limit.

    "batch-poster": {
      "enable": true,
...
      "max-delay": "30s",
        "max-size": 5600,
        "post-4844-blobs": true,
        "ignore-blob-price": false,
....
    },
...
  "execution": {
    "sequencer": {
      "enable": true,
      "max-tx-data-size": 500
    }
  },

Non-4844 cheaper than 4844

Tested manually on my local devnet that Nitro switches from 4844 to non-4844 batch posting when the blob gas price gets too expensive.

How?

Spam blob transactions from multiple accounts on L1 with a high gas price:

while true; do ./blob-utils tx --blob-file=/home/ubuntu/devnet/data/1708458198/blob-1 --private-key xxx --to 0x0 --max-fee-per-blob-gas 4000000000 --priority-gas-price 3000000000 --gas-price 4000000000 --chain-id 32382 --rpc-url http://localhost:8545; sleep 0.19; done

Spam txs from multiple accounts on L2

while true; do cast send xxx -r http://localhost:8547 --private-key xxx --value 0.0000001ether; done
INFO [02-27|01:52:38.519] BatchPoster: batch sent                  sequenceNumber=182 from=9134 to=9255 prevDelayed=183 currentDelayed=184 totalSegments=155 numBlobs=1
INFO [02-27|01:53:08.664] BatchPoster: batch sent                  sequenceNumber=183 from=9255 to=9377 prevDelayed=184 currentDelayed=185 totalSegments=156 numBlobs=1
INFO [02-27|01:53:38.786] BatchPoster: batch sent                  sequenceNumber=184 from=9377 to=9498 prevDelayed=185 currentDelayed=186 totalSegments=155 numBlobs=1
INFO [02-27|01:54:08.943] BatchPoster: batch sent                  sequenceNumber=185 from=9498 to=9618 prevDelayed=186 currentDelayed=186 totalSegments=154 numBlobs=1
INFO [02-27|01:54:39.113] BatchPoster: batch sent                  sequenceNumber=186 from=9618 to=9740 prevDelayed=186 currentDelayed=188 totalSegments=156 numBlobs=1
INFO [02-27|01:55:09.447] BatchPoster: batch sent                  sequenceNumber=187 from=9740 to=9861 prevDelayed=188 currentDelayed=188 totalSegments=156 numBlobs=1
INFO [02-27|01:55:29.548] BatchPoster: batch sent                  sequenceNumber=188 from=9861 to=9924 prevDelayed=188 currentDelayed=189 totalSegments=81  numBlobs=0
INFO [02-27|01:55:49.629] BatchPoster: batch sent                  sequenceNumber=189 from=9924 to=9988 prevDelayed=189 currentDelayed=190 totalSegments=83  numBlobs=0
INFO [02-27|01:55:59.719] BatchPoster: batch sent                  sequenceNumber=190 from=9988 to=10050 prevDelayed=190 currentDelayed=190 totalSegments=80  numBlobs=0
INFO [02-27|01:56:19.783] BatchPoster: batch sent                  sequenceNumber=191 from=10050 to=10114 prevDelayed=190 currentDelayed=191 totalSegments=82  numBlobs=0
INFO [02-27|01:56:29.877] BatchPoster: batch sent                  sequenceNumber=192 from=10114 to=10179 prevDelayed=191 currentDelayed=193 totalSegments=84  numBlobs=0

4844 goes back to being cheaper than non-4844, with backlog

Tested manually on my local devnet that when there is a backlog, Nitro switches from non-4844 to 4844 batch posting after at least 16 non-4844 batches have been posted.

How?

Continuing from the previous test, blob spam was disabled around 01:57:20. Sixteen more batches are posted after that as non-4844 (numBlobs=0), and then the next batch has numBlobs=1.

INFO [02-27|01:52:38.519] BatchPoster: batch sent                  sequenceNumber=182 from=9134 to=9255 prevDelayed=183 currentDelayed=184 totalSegments=155 numBlobs=1
INFO [02-27|01:53:08.664] BatchPoster: batch sent                  sequenceNumber=183 from=9255 to=9377 prevDelayed=184 currentDelayed=185 totalSegments=156 numBlobs=1
INFO [02-27|01:53:38.786] BatchPoster: batch sent                  sequenceNumber=184 from=9377 to=9498 prevDelayed=185 currentDelayed=186 totalSegments=155 numBlobs=1
INFO [02-27|01:54:08.943] BatchPoster: batch sent                  sequenceNumber=185 from=9498 to=9618 prevDelayed=186 currentDelayed=186 totalSegments=154 numBlobs=1
INFO [02-27|01:54:39.113] BatchPoster: batch sent                  sequenceNumber=186 from=9618 to=9740 prevDelayed=186 currentDelayed=188 totalSegments=156 numBlobs=1
INFO [02-27|01:55:09.447] BatchPoster: batch sent                  sequenceNumber=187 from=9740 to=9861 prevDelayed=188 currentDelayed=188 totalSegments=156 numBlobs=1
INFO [02-27|01:55:29.548] BatchPoster: batch sent                  sequenceNumber=188 from=9861 to=9924 prevDelayed=188 currentDelayed=189 totalSegments=81  numBlobs=0
INFO [02-27|01:55:49.629] BatchPoster: batch sent                  sequenceNumber=189 from=9924 to=9988 prevDelayed=189 currentDelayed=190 totalSegments=83  numBlobs=0
INFO [02-27|01:55:59.719] BatchPoster: batch sent                  sequenceNumber=190 from=9988 to=10050 prevDelayed=190 currentDelayed=190 totalSegments=80  numBlobs=0
INFO [02-27|01:56:19.783] BatchPoster: batch sent                  sequenceNumber=191 from=10050 to=10114 prevDelayed=190 currentDelayed=191 totalSegments=82  numBlobs=0
INFO [02-27|01:56:29.877] BatchPoster: batch sent                  sequenceNumber=192 from=10114 to=10179 prevDelayed=191 currentDelayed=193 totalSegments=84  numBlobs=0
INFO [02-27|01:56:49.993] BatchPoster: batch sent                  sequenceNumber=193 from=10179 to=10243 prevDelayed=193 currentDelayed=194 totalSegments=83  numBlobs=0
INFO [02-27|01:57:00.096] BatchPoster: batch sent                  sequenceNumber=194 from=10243 to=10307 prevDelayed=194 currentDelayed=195 totalSegments=83  numBlobs=0
INFO [02-27|01:57:20.185] BatchPoster: batch sent                  sequenceNumber=195 from=10307 to=10371 prevDelayed=195 currentDelayed=196 totalSegments=82  numBlobs=0
INFO [02-27|01:57:40.257] BatchPoster: batch sent                  sequenceNumber=196 from=10371 to=10435 prevDelayed=196 currentDelayed=197 totalSegments=82  numBlobs=0
INFO [02-27|01:57:50.362] BatchPoster: batch sent                  sequenceNumber=197 from=10435 to=10499 prevDelayed=197 currentDelayed=198 totalSegments=83  numBlobs=0
INFO [02-27|01:58:10.427] BatchPoster: batch sent                  sequenceNumber=198 from=10499 to=10562 prevDelayed=198 currentDelayed=198 totalSegments=82  numBlobs=0
INFO [02-27|01:58:20.510] BatchPoster: batch sent                  sequenceNumber=199 from=10562 to=10626 prevDelayed=198 currentDelayed=200 totalSegments=82  numBlobs=0
INFO [02-27|01:58:40.585] BatchPoster: batch sent                  sequenceNumber=200 from=10626 to=10689 prevDelayed=200 currentDelayed=200 totalSegments=81  numBlobs=0
INFO [02-27|01:58:50.671] BatchPoster: batch sent                  sequenceNumber=201 from=10689 to=10753 prevDelayed=200 currentDelayed=202 totalSegments=83  numBlobs=0
INFO [02-27|01:59:10.736] BatchPoster: batch sent                  sequenceNumber=202 from=10753 to=10816 prevDelayed=202 currentDelayed=202 totalSegments=81  numBlobs=0
INFO [02-27|01:59:20.831] BatchPoster: batch sent                  sequenceNumber=203 from=10816 to=10880 prevDelayed=202 currentDelayed=204 totalSegments=82  numBlobs=0
INFO [02-27|01:59:40.916] BatchPoster: batch sent                  sequenceNumber=204 from=10880 to=10943 prevDelayed=204 currentDelayed=204 totalSegments=82  numBlobs=0
INFO [02-27|02:00:00.994] BatchPoster: batch sent                  sequenceNumber=205 from=10943 to=11007 prevDelayed=204 currentDelayed=206 totalSegments=82  numBlobs=0
INFO [02-27|02:00:11.084] BatchPoster: batch sent                  sequenceNumber=206 from=11007 to=11070 prevDelayed=206 currentDelayed=206 totalSegments=81  numBlobs=0
INFO [02-27|02:00:31.201] BatchPoster: batch sent                  sequenceNumber=207 from=11070 to=11133 prevDelayed=206 currentDelayed=207 totalSegments=82  numBlobs=0
INFO [02-27|02:00:41.295] BatchPoster: batch sent                  sequenceNumber=208 from=11133 to=11197 prevDelayed=207 currentDelayed=208 totalSegments=82  numBlobs=0
INFO [02-27|02:01:01.365] BatchPoster: batch sent                  sequenceNumber=209 from=11197 to=11260 prevDelayed=208 currentDelayed=209 totalSegments=81  numBlobs=0
INFO [02-27|02:01:11.469] BatchPoster: batch sent                  sequenceNumber=210 from=11260 to=11324 prevDelayed=209 currentDelayed=210 totalSegments=83  numBlobs=0
INFO [02-27|02:01:31.531] BatchPoster: batch sent                  sequenceNumber=211 from=11324 to=11387 prevDelayed=210 currentDelayed=211 totalSegments=81  numBlobs=0
INFO [02-27|02:02:01.643] BatchPoster: batch sent                  sequenceNumber=212 from=11387 to=11532 prevDelayed=211 currentDelayed=214 totalSegments=185 numBlobs=1
INFO [02-27|02:02:31.736] BatchPoster: batch sent                  sequenceNumber=213 from=11532 to=11653 prevDelayed=214 currentDelayed=215 totalSegments=155 numBlobs=1
INFO [02-27|02:03:01.828] BatchPoster: batch sent                  sequenceNumber=214 from=11653 to=11774 prevDelayed=215 currentDelayed=216 totalSegments=155 numBlobs=1

4844 goes back to being cheaper than non-4844, no backlog

How?

Spam blobs on L1 and txs on L2 as in the first test to switch Nitro from 4844 to non-4844 batch posting initially.
Then disable the blob spam to allow the blob price to return to normal, and the L2 tx spam to allow the backlog to dissipate. Then occasionally send a few L2 transactions. Because there is no backlog, a 4844 batch is posted before 16 non-4844 batches have been posted.

INFO [02-27|02:08:42.700] BatchPoster: batch sent                  sequenceNumber=221 from=12298 to=12421 prevDelayed=221 currentDelayed=223 totalSegments=158 numBlobs=1
INFO [02-27|02:09:12.813] BatchPoster: batch sent                  sequenceNumber=222 from=12421 to=12542 prevDelayed=223 currentDelayed=224 totalSegments=155 numBlobs=1
INFO [02-27|02:09:42.902] BatchPoster: batch sent                  sequenceNumber=223 from=12542 to=12663 prevDelayed=224 currentDelayed=225 totalSegments=155 numBlobs=1
INFO [02-27|02:10:12.994] BatchPoster: batch sent                  sequenceNumber=224 from=12663 to=12783 prevDelayed=225 currentDelayed=225 totalSegments=154 numBlobs=1
INFO [02-27|02:10:43.124] BatchPoster: batch sent                  sequenceNumber=225 from=12783 to=12905 prevDelayed=225 currentDelayed=227 totalSegments=155 numBlobs=1
INFO [02-27|02:11:03.304] BatchPoster: batch sent                  sequenceNumber=226 from=12905 to=12968 prevDelayed=227 currentDelayed=227 totalSegments=81  numBlobs=0
INFO [02-27|02:11:23.430] BatchPoster: batch sent                  sequenceNumber=227 from=12968 to=13032 prevDelayed=227 currentDelayed=228 totalSegments=83  numBlobs=0
INFO [02-27|02:11:33.524] BatchPoster: batch sent                  sequenceNumber=228 from=13032 to=13096 prevDelayed=228 currentDelayed=229 totalSegments=83  numBlobs=0
INFO [02-27|02:11:53.628] BatchPoster: batch sent                  sequenceNumber=229 from=13096 to=13159 prevDelayed=229 currentDelayed=229 totalSegments=82  numBlobs=0
INFO [02-27|02:12:23.704] BatchPoster: batch sent                  sequenceNumber=230 from=13159 to=13219 prevDelayed=229 currentDelayed=231 totalSegments=78  numBlobs=0
INFO [02-27|02:13:23.741] BatchPoster: batch sent                  sequenceNumber=231 from=13219 to=13242 prevDelayed=231 currentDelayed=233 totalSegments=29  numBlobs=0
INFO [02-27|02:14:33.774] BatchPoster: batch sent                  sequenceNumber=232 from=13242 to=13247 prevDelayed=233 currentDelayed=234 totalSegments=7   numBlobs=0
INFO [02-27|02:15:53.812] BatchPoster: batch sent                  sequenceNumber=233 from=13247 to=13249 prevDelayed=234 currentDelayed=235 totalSegments=4   numBlobs=0
INFO [02-27|02:17:03.874] BatchPoster: batch sent                  sequenceNumber=234 from=13249 to=13251 prevDelayed=235 currentDelayed=236 totalSegments=4   numBlobs=1
INFO [02-27|02:26:24.102] BatchPoster: batch sent                  sequenceNumber=235 from=13251 to=13253 prevDelayed=236 currentDelayed=237 totalSegments=4   numBlobs=1

@cla-bot cla-bot bot added the s label (automatically added by the CLA bot if the creator of a PR is registered as having signed the CLA) on Feb 25, 2024
@@ -1350,3 +1365,56 @@ func (b *BatchPoster) StopAndWait() {
	b.dataPoster.StopAndWait()
	b.redisLock.StopAndWait()
}

type BoolRing struct {
	buffer []bool
Collaborator

Let's move this to arbutil and make it generic over T so it can be used in the future
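
For illustration, a generic ring buffer over T could look roughly like this (a sketch only, assuming Go generics; Ring, NewRing, Push, and All are hypothetical names, not existing arbutil code):

```go
// Hypothetical generic ring buffer along the lines of the suggestion above;
// not the actual arbutil implementation.
package main

import "fmt"

// Ring keeps the last `size` values pushed into it.
type Ring[T any] struct {
	buffer []T
	size   int
	next   int // index where the next value will be written
	filled bool
}

func NewRing[T any](size int) *Ring[T] {
	return &Ring[T]{buffer: make([]T, size), size: size}
}

// Push overwrites the oldest value once the buffer is full.
func (r *Ring[T]) Push(v T) {
	r.buffer[r.next] = v
	r.next = (r.next + 1) % r.size
	if r.next == 0 {
		r.filled = true
	}
}

// All returns the currently stored values, oldest first.
func (r *Ring[T]) All() []T {
	if !r.filled {
		return append([]T(nil), r.buffer[:r.next]...)
	}
	return append(append([]T(nil), r.buffer[r.next:]...), r.buffer[:r.next]...)
}

func main() {
	r := NewRing[bool](3)
	r.Push(true)
	r.Push(false)
	r.Push(true)
	r.Push(false)        // overwrites the oldest value (true)
	fmt.Println(r.All()) // [false true false]
}
```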

@PlasmaPower
Collaborator

LGTM otherwise

@ganeshvanahalli
Contributor

LGTM

@ganeshvanahalli
Contributor

LGTM

Suggestion: If maintaining the history of each past batch isn't required, then I think we could use a counter of consecutive previous non-4844 batches and a threshold (in this case 16) to implement the rule when there is a backlog.

@Tristan-Wilson
Member Author

> LGTM
>
> Suggestion: If maintaining the history of each past batch isn't required, then I think we could use a counter of consecutive previous non-4844 batches and a threshold (in this case 16) to implement the rule when there is a backlog.

Good idea, I think I started with the ring buffer because I thought the rule would be more complex than it turned out to be. I'll try switching to a counter.

The ring buffer was not necessary, a simple counter will suffice to
implement the same logic.
@ganeshvanahalli ganeshvanahalli left a comment
Contributor

LGTM

@Tristan-Wilson
Member Author

I've been able to reproduce my manual tests with the simplified code. Merging.

@Tristan-Wilson Tristan-Wilson merged commit 8a52c76 into master Feb 28, 2024
8 checks passed
@Tristan-Wilson Tristan-Wilson deleted the batch-4844-hysteresis branch February 28, 2024 23:37