refactor(l1): refactor chainconfig #5233
base: main
Conversation
Lines of code report: total lines added (detailed view not included here).
Benchmark Results Comparison: no significant difference was registered for any benchmark run. Detailed results covered: BubbleSort, ERC20Approval, ERC20Mint, ERC20Transfer, Factorial, FactorialRecursive, Fibonacci, FibonacciRecursive, ManyHashes, MstoreBench, Push, SstoreBench_no_opt.
crates/common/types/genesis.rs
Outdated
    GrayGlacier = 14,
    Paris = 15,
    Shanghai = 16,
    Homestead = 0,
Why this change of names?
I changed the enum so the forks would be the ones mentioned in the labels for the activation block numbers and timestamps. This was necessary for the match statements in fork_activation_time_or_block and is_fork_activated to work properly. While I could have made it a separate enum, I think it makes more sense for the fork enum to reflect the forks as laid out in the chain config files (and in practice only the chain config struct methods and the tests use the fork enum, so it shouldn't create broader problems).
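(For readers outside the codebase, here is a minimal, self-contained sketch of the pattern being described; it is not the PR's actual code, and the variant set and config field names such as `homestead_block` and `shanghai_time` are illustrative.)

```rust
// Sketch only: variant names mirror the chain-config activation fields,
// so a single match covers every fork. Variant and field sets abbreviated.
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord)]
enum Fork {
    Homestead,
    Paris,
    Shanghai,
    Cancun,
}

#[derive(Default)]
struct ChainConfig {
    homestead_block: Option<u64>,
    merge_netsplit_block: Option<u64>,
    shanghai_time: Option<u64>,
    cancun_time: Option<u64>,
}

impl ChainConfig {
    // One place that maps a fork to its activation block number or timestamp.
    fn fork_activation_time_or_block(&self, fork: Fork) -> Option<u64> {
        match fork {
            Fork::Homestead => self.homestead_block,
            Fork::Paris => self.merge_netsplit_block,
            Fork::Shanghai => self.shanghai_time,
            Fork::Cancun => self.cancun_time,
        }
    }

    // A single activation check, parameterized by the fork.
    fn is_fork_activated(&self, fork: Fork, timestamp_or_block: u64) -> bool {
        self.fork_activation_time_or_block(fork)
            .is_some_and(|activation| activation <= timestamp_or_block)
    }
}
```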
    GrayGlacier = 13,
    Paris = 14,
    Shanghai = 15,
    #[default]
we probably shouldn't have defaults
I tried to do this but it breaks other structs that have fork as a field and implement default, like parts of the LEVM runner or EVMConfig. Unless we want to remove those defaults too it might be better to leave this as is (so fork defaults aren't spread out throughout the code)
it kinda sucks, but if you have to keep it, at least choose the (soon to be) current one, Osaka
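(For illustration only, a sketch of what that suggestion would look like using Rust's derived `Default` with the `#[default]` attribute; the variant set is abbreviated.)

```rust
// Sketch: keep the derived Default, but point it at the upcoming fork.
#[derive(Clone, Copy, Debug, Default)]
enum Fork {
    Shanghai,
    Cancun,
    Prague,
    #[default]
    Osaka,
}
```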
crates/vm/levm/src/environment.rs
Outdated
let blob_schedule = chain_config
-    .get_fork_blob_schedule(block_header.timestamp)
+    .get_current_blob_schedule(block_header.timestamp)
"current" is deceiving if it's calculated based on a timestamp
Sure. Addressed here (alongside some fixes for other things that were failing)
// TODO: maybe fetch hash too when filtering mempool so we don't have to compute it here (we can do this in the same refactor as adding timestamp)
let tx_hash = head_tx.tx.hash();

// Check whether the tx is replay-protected
state tests don't use this?
They aren't failing on the CI and they aren't failing locally either, so presumably not.
pub fn get_blob_schedule_for_time(&self, block_timestamp: u64) -> Option<ForkBlobSchedule> {
    if let Some(fork_with_current_blob_schedule) = FORKS.into_iter().rfind(|fork| {
        self.get_blob_schedule_for_fork(*fork).is_some()
            && self.is_fork_activated(*fork, block_timestamp)
    }) {
        self.get_blob_schedule_for_fork(fork_with_current_blob_schedule)
Do we need to iterate all forks? Can't we just fetch the blob schedule for the active fork?
Ah, it's part of tackling #4849. Recently it was decided that if a fork doesn't change the blob schedule, it won't have a blob schedule field in the genesis file, and the blob schedule from the most recent active fork should be used instead. This implements that logic; rfind short-circuits as soon as it finds an element of the array that fulfills the closure conditions, so this shouldn't iterate through too much of the array in practice.
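(A toy illustration of that fallback, using plain strings instead of the real types, just to show how `rfind` walks the array from the newest fork backwards; the fork names and helper functions here are made up.)

```rust
fn main() {
    // Oldest to newest, mirroring a FORKS-style constant.
    let forks = ["Cancun", "Prague", "Osaka"];

    // Pretend only Cancun and Prague define a blob schedule, while Osaka is
    // already activated but reuses the previous schedule (the #4849 behaviour).
    fn defines_blob_schedule(fork: &str) -> bool {
        matches!(fork, "Cancun" | "Prague")
    }
    fn is_activated(_fork: &str) -> bool {
        true
    }

    // rfind starts from the newest fork and stops at the first match, so it
    // returns the most recent activated fork that carries a schedule.
    let fallback = forks
        .into_iter()
        .rfind(|fork| defines_blob_schedule(fork) && is_activated(fork));

    assert_eq!(fallback, Some("Prague"));
    println!("fallback schedule comes from {fallback:?}");
}
```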
    BPO5 = 24,
}

pub const FORKS: [Fork; 25] = [
Why do you need this? Fork already has a number so you should be able to use that for ordering?
We do, but it helps with iterating through forks (when, for example, we want the latest fork that fulfills a certain condition; we use this to get the latest scheduled fork, the current one, etc.)
you should be able to implement iterator on the other structure? I don't think we need this
Not really. You can't iterate through enum variants by default. Iterating through numbers and converting them to forks wouldn't work either without implementing a from function that matches numbers to forks, but I could do that if you think it's better
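(Sketching the alternative mentioned above, under the assumption that a numeric conversion is the chosen route; a derive such as strum's `EnumIter` would be another option if adding a dependency is acceptable. Variant set and discriminants are abbreviated for illustration.)

```rust
// Sketch: iterate numeric discriminants and convert them back to forks,
// instead of keeping a FORKS constant array.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum Fork {
    Paris = 0,
    Shanghai = 1,
    Cancun = 2,
}

impl TryFrom<u8> for Fork {
    type Error = ();

    fn try_from(value: u8) -> Result<Self, Self::Error> {
        match value {
            0 => Ok(Fork::Paris),
            1 => Ok(Fork::Shanghai),
            2 => Ok(Fork::Cancun),
            _ => Err(()),
        }
    }
}

fn main() {
    // Equivalent of iterating a FORKS constant, without storing the array.
    let all_forks: Vec<Fork> = (0u8..=2).filter_map(|n| Fork::try_from(n).ok()).collect();
    println!("{all_forks:?}");
}
```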
- pub fn is_eip155_activated(&self, block_number: BlockNumber) -> bool {
-     self.eip155_block.is_some_and(|num| num <= block_number)
+ pub fn is_fork_activated(&self, fork: Fork, timestamp_or_block: u64) -> bool {
why do we need to support blocks here? Is this ever called with a block?
I don't think it ever should be, but it's more of a syntax thing, since we want to cover the block-number-based forks in the match arms too (and it doesn't change functionality). If you'd prefer, I could just make it return true for all pre-Merge forks.
Actually I realized there is an edge case: if a call to the eth_config RPC endpoint happens before the node has synced to head, we need to check which fork is currently active to know whether to respond or not (since the response format isn't defined for forks pre-Cancun), and that requires iterating through the forks and checking which is the most recent one that's active.
The current implementation is error-prone. You should either only accept timestamps, or create a type like BlockNumberOrTimestamp and handle it accordingly.
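(A possible shape for the suggested type, as a sketch rather than the PR's code; the variant and method names are made up for illustration.)

```rust
// Sketch of a BlockNumberOrTimestamp wrapper: callers must say explicitly
// which kind of value they are passing, removing the u64 ambiguity.
#[derive(Clone, Copy, Debug)]
enum BlockNumberOrTimestamp {
    BlockNumber(u64),
    Timestamp(u64),
}

impl BlockNumberOrTimestamp {
    // Unwrap to the raw value once the caller's intent has been recorded.
    fn value(self) -> u64 {
        match self {
            BlockNumberOrTimestamp::BlockNumber(n) => n,
            BlockNumberOrTimestamp::Timestamp(t) => t,
        }
    }
}

fn main() {
    // Hypothetical call site: the intent is visible at a glance.
    let at = BlockNumberOrTimestamp::Timestamp(1_710_000_000);
    println!("activation check against {}", at.value());
}
```

An `is_fork_activated(fork, at)` taking this type could then match on the variant and compare against either the configured block number or the configured timestamp.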
Pull Request Overview
This PR refactors the chain configuration code to improve maintainability for future forks by centralizing fork-related logic. The refactoring consolidates scattered match statements into unified functions and removes pre-Merge validation logic.
- Renamed fork variants (`Tangerine` → `EIP150`, `SpuriousDragon` → `EIP155`/`EIP158`, removed `FrontierThawing`)
- Centralized fork activation checks into a single `is_fork_activated` method and introduced `fork_activation_time_or_block` to get activation timestamps/blocks
- Replaced method names for consistency (`fork` → `get_fork`, `get_fork_blob_schedule` → `get_blob_schedule_for_time`, `get_activation_timestamp_for_fork` → `fork_activation_time_or_block`)
- Removed pre-Merge logic including EIP-155 replay protection checks and pre-Istanbul gas cost handling
Reviewed Changes
Copilot reviewed 20 out of 20 changed files in this pull request and generated 1 comment.
Summary per file:
| File | Description |
|---|---|
| crates/common/types/genesis.rs | Core refactoring: renamed Fork enum variants, added FORKS constant array, centralized fork activation logic into is_fork_activated and fork_activation_time_or_block, refactored get_fork, get_blob_schedule_for_time, and next_fork methods |
| crates/common/types/block.rs | Updated method calls from fork to get_fork and get_fork_blob_schedule to get_blob_schedule_for_time |
| crates/blockchain/blockchain.rs | Updated to use new is_fork_activated method and renamed method calls |
| crates/blockchain/payload.rs | Updated to use new is_fork_activated method with Fork::* wildcard import pattern, removed replay protection checks |
| crates/blockchain/mempool.rs | Removed pre-Istanbul gas cost handling, unconditionally uses EIP-2028 gas cost (16 instead of 68), removed test for pre-Istanbul behavior |
| crates/blockchain/constants.rs | Consolidated TX_DATA_NON_ZERO_GAS constant from separate pre/post-Istanbul values to single EIP-2028 value |
| crates/vm/backends/mod.rs | Updated method call from fork to get_fork |
| crates/vm/backends/levm/mod.rs | Updated method calls from fork to get_fork |
| crates/vm/levm/src/environment.rs | Updated method calls to use get_fork and get_blob_schedule_for_time |
| crates/networking/rpc/eth/*.rs | Updated to use new method names and is_fork_activated with Fork::* imports |
| crates/networking/rpc/engine/*.rs | Updated to use is_fork_activated method with Fork::* wildcard imports |
| crates/l2/sequencer/block_producer/payload_builder.rs | Removed EIP-155 replay protection check and unused chain_config variable |
| tooling/ef_tests/state/deserialize.rs | Added Fork::EIP155 variant to deserialization, updated Fork::EIP158 and Fork::EIP150 mappings |
| tooling/ef_tests/state/runner/revm_runner.rs | Updated fork_to_spec_id to map Fork::EIP150, Fork::EIP155, and Fork::EIP158 to appropriate SpecIds |
| tooling/ef_tests/state_v2/src/modules/deserialize.rs | Updated fork name mappings to use Fork::EIP158 and Fork::EIP150 instead of Fork::SpuriousDragon and Fork::Tangerine |
if (current_fork as usize) < FORKS.len() {
    FORKS.into_iter().find(|fork| {
        *fork > current_fork && self.fork_activation_time_or_block(*fork).is_some()
    })
} else {
    None
};
match next {
    Some(fork) if fork > self.fork(block_timestamp) => next,
    _ => None,
}
Copilot AI (Nov 17, 2025)
The condition (current_fork as usize) < FORKS.len() will always be true because current_fork is an enum variant within FORKS, so its numeric value will always be less than the array length (25). This means the else branch at line 476 is unreachable.
The intent appears to be checking if there's a possible next fork, but since current_fork can never exceed the array bounds, this check doesn't serve its purpose. Consider simplifying this to just:
FORKS.into_iter().find(|fork| {
*fork > current_fork && self.fork_activation_time_or_block(*fork).is_some()
})
Motivation
Improve the maintainability of our code with regards to future forks.
Description
Currently we have a lot of match statements spread throughout various functions to get the current fork, blob schedule, etc., plus separate functions for checking the activation of each fork. This creates serious maintainability issues down the line. This PR centralizes the match statements in two functions: one to get the activation timestamp/block for a fork, and one to get the blob schedule for a fork, and rewrites the rest of the functions accordingly. All the functions that check fork activation were also aggregated into a single `is_fork_activated` function that takes the fork as a parameter and checks its activation. The implementation of `get_blob_schedule_for_time` also falls back to the most recent active blob schedule if the current fork specifies none, which is part of tackling #4849. Additionally, some code related to pre-Merge logic was removed (checks for EIP-155 and EIP-2028).
Closes #4720