
Span Batch Hard Fork Activation Rule Update #7813

Conversation

@pcw109550 (Contributor) commented Oct 24, 2023

This diff fixes a subset of #7289 (comment):

> In the batch-checking function we need to review the hardfork activation conditions, and update the specs as well, so the span-batches activate at the right moment (w.r.t. L1 inclusion and L2 block time). This should be added to the span-batch derivation specs, and then referenced as a feature of Canyon, in the superchain-upgrades spec document.

  1. Added the span batch hardfork spec.
  2. Hardened the span batch hardfork check using the L1 inclusion block time.
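
The hardened check can be sketched roughly as follows. This is a minimal illustration, not the actual op-node code: `RollupConfig`, `SpanBatchTime`, and `IsSpanBatchActivated` are assumed names, and the real change spans the rollup config and batch-validation code. The idea is that a batch may only use the span batch format if the L1 block that included it is at or past the fork activation time.

```go
package main

import "fmt"

// RollupConfig is a hypothetical, trimmed-down stand-in for the rollup
// configuration. SpanBatchTime is the fork activation timestamp; nil means
// the fork is never activated.
type RollupConfig struct {
	SpanBatchTime *uint64
}

// IsSpanBatchActivated reports whether span batches are enabled at the given
// L1 inclusion block timestamp: batches included in L1 blocks older than the
// fork time must not use the span batch format.
func (c *RollupConfig) IsSpanBatchActivated(l1InclusionTime uint64) bool {
	return c.SpanBatchTime != nil && l1InclusionTime >= *c.SpanBatchTime
}

func main() {
	fork := uint64(1700000000)
	cfg := &RollupConfig{SpanBatchTime: &fork}
	fmt.Println(cfg.IsSpanBatchActivated(fork - 1)) // before activation: false
	fmt.Println(cfg.IsSpanBatchActivated(fork))     // at activation: true
}
```

Keying the check to the L1 inclusion block (rather than only the L2 timestamp) is what makes the rule robust: the batcher cannot submit span batches before the fork is active on L1.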

Additional context

We may update the superchain-upgrades spec in a separate PR, resolving the sentence below:

> This should be added to the span-batch derivation specs, and then referenced as a feature of Canyon, in the superchain-upgrades spec document.

The base branch will be changed to develop once #7621 is merged into develop.

@pcw109550 pcw109550 requested review from tynes and removed request for a team October 24, 2023 15:55
@pcw109550 pcw109550 mentioned this pull request Oct 24, 2023
2 tasks
@pcw109550 pcw109550 changed the title Span Batch Hard Fork Activation Update Span Batch Hard Fork Activation Rule Update Oct 24, 2023
@tynes (Contributor) commented Oct 24, 2023

We want to move away from IsSpanBatch towards IsCanyon right?

@pcw109550 (Contributor, Author) commented Oct 25, 2023

> We want to move away from IsSpanBatch towards IsCanyon right?

@tynes Yes. Span batch is part of the Canyon network upgrade, so IsSpanBatch will be replaced with IsCanyon.

Will open a separate PR after every span batch PR (tracked here) has reached the develop branch.

Span batch will not be part of Canyon, so this will be left as is.

@protolambda protolambda force-pushed the tip/span-batch-hard-fork-condition-spec-refactor branch from d90e76d to d5a3580 Compare October 30, 2023 22:57
@protolambda protolambda requested review from a team as code owners October 30, 2023 22:57
@protolambda protolambda requested review from roninjin10 and removed request for a team October 30, 2023 22:57
@protolambda protolambda changed the base branch from mirror-tip/span-batch-derivation to develop October 30, 2023 22:57
@protolambda protolambda removed request for a team and roninjin10 October 30, 2023 22:58
@protolambda (Contributor) commented:
Rebased on develop 👍

Review threads (outdated, resolved) on: specs/span-batches.md, op-node/rollup/derive/batches.go
@ImTei ImTei force-pushed the tip/span-batch-hard-fork-condition-spec-refactor branch from d5a3580 to 6629996 Compare November 1, 2023 07:23
@ImTei (Collaborator) commented Nov 1, 2023

Applied the new proposal.

  • Updated the specs. We can use only the L1 origin block time, without the L2 block time, because of the invariant: L2 block time > L1 origin block time.
  • Implemented the new rule in CheckBatch().
  • Left the existing L1 inclusion block check in ChannelInReader for early dropping of invalid batches.
    • We could use channel.open_l1_block here, but it would create a larger diff in the derivation pipeline. I think using the L1 inclusion block is enough, because it's just for optimization and safety, not a strict check. I'd like to hear your thoughts on this part. @protolambda @ajsutton
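
The rule described above can be sketched as follows. This is an illustrative sketch only: `BatchValidity`, `checkSpanBatchActivation`, and the timestamp parameters are assumed names, not the actual `CheckBatch()` code. It shows why the L1 origin block time alone suffices, given the invariant that every L2 block's timestamp is at least its L1 origin's timestamp.

```go
package main

import "fmt"

// BatchValidity mimics the accept/drop outcome of batch validation; these
// names are assumptions for illustration, not the op-node types.
type BatchValidity int

const (
	BatchAccept BatchValidity = iota
	BatchDrop
)

// checkSpanBatchActivation decides whether a span batch is allowed, using
// only the L1 origin block time. Because an L2 block's timestamp is never
// less than its L1 origin's timestamp, an origin at or past the fork time
// implies the covered L2 blocks are past it too, so no separate L2 block
// time check is needed.
func checkSpanBatchActivation(forkTime, l1OriginTime uint64) BatchValidity {
	if l1OriginTime < forkTime {
		return BatchDrop // fork not yet active at this origin: drop the span batch
	}
	return BatchAccept
}

func main() {
	fmt.Println(checkSpanBatchActivation(100, 99) == BatchDrop)    // true
	fmt.Println(checkSpanBatchActivation(100, 100) == BatchAccept) // true
}
```

The earlier check in ChannelInReader then remains a cheap pre-filter, while this rule in batch validation is the authoritative one.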

@protolambda protolambda added this pull request to the merge queue Nov 2, 2023
Merged via the queue into ethereum-optimism:develop with commit 2006da2 Nov 2, 2023
57 checks passed

5 participants