chore: Merge main to kl factory 3 #1748

Merged · 5 commits · Apr 20, 2024
1 change: 1 addition & 0 deletions Cargo.lock

Some generated files are not rendered by default.

24 changes: 14 additions & 10 deletions core/lib/config/src/configs/chain.rs
@@ -94,14 +94,18 @@ pub struct StateKeeperConfig {

/// Number of ms after which an L1 batch is going to be unconditionally sealed.
pub block_commit_deadline_ms: u64,
-/// Number of ms after which a miniblock should be sealed by the timeout sealer.
-pub miniblock_commit_deadline_ms: u64,
-/// Capacity of the queue for asynchronous miniblock sealing. Once this many miniblocks are queued,
-/// sealing will block until some of the miniblocks from the queue are processed.
+/// Number of ms after which an L2 block should be sealed by the timeout sealer.
+#[serde(alias = "miniblock_commit_deadline_ms")]
+// legacy naming; since we don't serialize this struct, we use "alias" rather than "rename"
+pub l2_block_commit_deadline_ms: u64,
+/// Capacity of the queue for asynchronous L2 block sealing. Once this many L2 blocks are queued,
+/// sealing will block until some of the L2 blocks from the queue are processed.
/// 0 means that sealing is synchronous; this is mostly useful for performance comparison, testing etc.
-pub miniblock_seal_queue_capacity: usize,
-/// The max payload size threshold (in bytes) that triggers sealing of a miniblock.
-pub miniblock_max_payload_size: usize,
+#[serde(alias = "miniblock_seal_queue_capacity")]
+pub l2_block_seal_queue_capacity: usize,
+/// The max payload size threshold (in bytes) that triggers sealing of an L2 block.
+#[serde(alias = "miniblock_max_payload_size")]
+pub l2_block_max_payload_size: usize,

/// The max number of gas to spend on an L1 tx before its batch should be sealed by the gas sealer.
pub max_single_tx_gas: u32,
@@ -175,9 +179,9 @@ impl StateKeeperConfig {
Self {
transaction_slots: 250,
block_commit_deadline_ms: 2500,
-miniblock_commit_deadline_ms: 1000,
-miniblock_seal_queue_capacity: 10,
-miniblock_max_payload_size: 1_000_000,
+l2_block_commit_deadline_ms: 1000,
+l2_block_seal_queue_capacity: 10,
+l2_block_max_payload_size: 1_000_000,
max_single_tx_gas: 6000000,
max_allowed_l2_tx_gas_limit: 4000000000,
reject_tx_at_geometry_percentage: 0.95,
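Note on the `#[serde(alias = ...)]` attributes introduced above: they keep configs that still use the legacy `miniblock_*` keys deserializable after the rename, while `l2_block_*` becomes the canonical spelling. A minimal self-contained sketch of the mechanism (the `StateKeeperConfigSketch` struct is hypothetical and mirrors one field; `serde_json` is used purely for illustration — the real config is read from env/file sources, but alias resolution works the same way):

```rust
use serde::Deserialize;

// Hypothetical stand-in for one field of `StateKeeperConfig`.
#[derive(Debug, Deserialize)]
struct StateKeeperConfigSketch {
    // `alias` (unlike `rename`) accepts both the old and the new key on
    // input, which is sufficient here because the struct is never serialized.
    #[serde(alias = "miniblock_commit_deadline_ms")]
    l2_block_commit_deadline_ms: u64,
}

fn main() {
    // The legacy key still deserializes into the renamed field...
    let legacy: StateKeeperConfigSketch =
        serde_json::from_str(r#"{"miniblock_commit_deadline_ms": 1000}"#).unwrap();
    // ...and so does the new canonical key.
    let renamed: StateKeeperConfigSketch =
        serde_json::from_str(r#"{"l2_block_commit_deadline_ms": 1000}"#).unwrap();
    assert_eq!(
        legacy.l2_block_commit_deadline_ms,
        renamed.l2_block_commit_deadline_ms
    );
}
```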
6 changes: 3 additions & 3 deletions core/lib/config/src/testonly.rs
@@ -149,9 +149,9 @@ impl Distribution<configs::chain::StateKeeperConfig> for EncodeDist {
configs::chain::StateKeeperConfig {
transaction_slots: self.sample(rng),
block_commit_deadline_ms: self.sample(rng),
-miniblock_commit_deadline_ms: self.sample(rng),
-miniblock_seal_queue_capacity: self.sample(rng),
-miniblock_max_payload_size: self.sample(rng),
+l2_block_commit_deadline_ms: self.sample(rng),
+l2_block_seal_queue_capacity: self.sample(rng),
+l2_block_max_payload_size: self.sample(rng),
max_single_tx_gas: self.sample(rng),
max_allowed_l2_tx_gas_limit: self.sample(rng),
reject_tx_at_geometry_percentage: self.sample(rng),

This file was deleted.

This file was deleted.

This file was deleted.

18 changes: 9 additions & 9 deletions core/lib/dal/README.md
@@ -9,12 +9,12 @@ Current schema is managed by `sqlx`. Schema changes are stored in the [`migratio

_This overview skips prover-related and Ethereum sender-related tables, which are specific to the main node._

-### Miniblocks and L1 batches
+### L2 blocks and L1 batches

-- `miniblocks`. Stores miniblock headers.
+- `miniblocks`. Stores L2 block headers. The name is kept for historical reasons.

-- `miniblocks_consensus`. Stores miniblock data related to the consensus algorithm used by the decentralized sequencer.
-  Tied one-to-one to miniblocks (the consensus side of the relation is optional).
+- `miniblocks_consensus`. Stores L2 block data related to the consensus algorithm used by the decentralized sequencer.
+  Tied one-to-one to L2 blocks (the consensus side of the relation is optional).

- `l1_batches`. Stores L1 batch headers.

@@ -24,7 +24,7 @@ _This overview skips prover-related and Ethereum sender-related tables, which ar
### Transactions

- `transactions`. Stores all transactions received by the node, both L2 and L1 ones. Transactions in this table are not
-necessarily included into a miniblock; i.e., the table is used as a persistent mempool as well.
+necessarily included in an L2 block; i.e., the table is used as a persistent mempool as well.

### VM storage

@@ -75,14 +75,14 @@ In addition to foreign key constraints and other constraints manifested directly
invariants are expected to be upheld:

- If a header is present in the `miniblocks` table, it is expected that the DB contains all artifacts associated with
-the miniblock execution, such as `events`, `l2_to_l1_logs`, `call_traces`, `tokens` etc. (See State keeper I/O logic
+the L2 block execution, such as `events`, `l2_to_l1_logs`, `call_traces`, `tokens` etc. (See State keeper I/O logic
for the exact definition of these artifacts.)
- Likewise, if a header is present in the `l1_batches` table, all artifacts associated with the L1 batch execution are
also expected in the DB, e.g. `initial_writes` and `protective_reads`. (See State keeper I/O logic for the exact
definition of these artifacts.)
-- Miniblocks and L1 batches present in the DB form a continuous range of numbers. If a DB is recovered from a node
-  snapshot, the first miniblock / L1 batch is **the next one** after the snapshot miniblock / L1 batch mentioned in the
-  `snapshot_recovery` table. Otherwise, miniblocks / L1 batches must start from number 0 (aka genesis).
+- L2 blocks and L1 batches present in the DB form a continuous range of numbers. If a DB is recovered from a node
+  snapshot, the first L2 block / L1 batch is **the next one** after the snapshot L2 block / L1 batch mentioned in the
+  `snapshot_recovery` table. Otherwise, L2 blocks / L1 batches must start from number 0 (aka genesis).
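The last invariant above can be checked mechanically; a standalone sketch (a hypothetical helper, not part of DAL, with block numbers simplified to `u32`):

```rust
/// Returns `true` iff `block_numbers` form the continuous range required
/// above: starting at 0 (genesis), or immediately after the snapshot block
/// recorded in `snapshot_recovery` when the DB was recovered from a snapshot.
fn is_contiguous(block_numbers: &[u32], snapshot_block: Option<u32>) -> bool {
    let start = snapshot_block.map_or(0, |n| n + 1);
    block_numbers
        .iter()
        .copied()
        .eq(start..start + block_numbers.len() as u32)
}

fn main() {
    assert!(is_contiguous(&[0, 1, 2], None)); // genesis case
    assert!(is_contiguous(&[42, 43], Some(41))); // recovered from snapshot block 41
    assert!(!is_contiguous(&[41, 42], Some(41))); // the snapshot block itself must be absent
}
```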

## Contributing to DAL

2 changes: 1 addition & 1 deletion core/lib/dal/src/blocks_dal.rs
@@ -116,7 +116,7 @@ impl BlocksDal<'_, '_> {
miniblocks
"#
)
.instrument("get_sealed_miniblock_number")
.instrument("get_sealed_l2_block_number")
.report_latency()
.fetch_one(self.storage)
.await?;
21 changes: 11 additions & 10 deletions core/lib/dal/src/consensus_dal.rs
@@ -100,17 +100,17 @@ impl ConsensusDal<'_, '_> {
Ok(())
}

-/// Fetches the range of miniblocks present in storage.
+/// Fetches the range of L2 blocks present in storage.
/// If storage was recovered from snapshot, the range doesn't need to start at 0.
pub async fn block_range(&mut self) -> DalResult<ops::Range<validator::BlockNumber>> {
let mut txn = self.storage.start_transaction().await?;
let snapshot = txn
.snapshot_recovery_dal()
.get_applied_snapshot_status()
.await?;
-// `snapshot.miniblock_number` indicates the last block processed.
+// `snapshot.l2_block_number` indicates the last block processed.
// This block is NOT present in storage. Therefore, the first block
-// that will appear in storage is `snapshot.miniblock_number+1`.
+// that will appear in storage is `snapshot.l2_block_number + 1`.
let start = validator::BlockNumber(snapshot.map_or(0, |s| s.l2_block_number.0 + 1).into());
let end = txn
.blocks_dal()
@@ -295,11 +295,12 @@ impl ConsensusDal<'_, '_> {
Ok(Some(block.into_payload(transactions)))
}

-/// Inserts a certificate for the miniblock `cert.header().number`.
-/// It verifies that
-/// * the certified payload matches the miniblock in storage
-/// * the `cert.header().parent` matches the parent miniblock.
-/// * the parent block already has a certificate.
+/// Inserts a certificate for the L2 block `cert.header().number`. It verifies that
+///
+/// - the certified payload matches the L2 block in storage
+/// - the `cert.header().parent` matches the parent L2 block.
+/// - the parent block already has a certificate.
+///
/// NOTE: This is an extra secure way of storing a certificate,
/// which will help us to detect bugs in the consensus implementation
/// while it is "fresh". If it turns out to take too long,
@@ -317,10 +318,10 @@
.consensus_dal()
.block_payload(cert.message.proposal.number)
.await?
.context("corresponding miniblock is missing")?;
.context("corresponding L2 block is missing")?;
anyhow::ensure!(
header.payload == want_payload.encode().hash(),
"consensus block payload doesn't match the miniblock"
"consensus block payload doesn't match the L2 block"
);
sqlx::query!(
r#"
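The three verification steps in the updated doc comment reduce to `ensure!`-style checks; a condensed sketch over hypothetical in-memory inputs (the real implementation resolves the stored payload and the parent certificate via DAL queries):

```rust
use anyhow::ensure;

/// Hypothetical inputs standing in for DAL lookups.
struct CertCheckInputs {
    /// Hash of the payload stored for the certified L2 block.
    stored_payload_hash: [u8; 32],
    /// Payload hash the certificate commits to.
    cert_payload_hash: [u8; 32],
    /// Parent hash recorded in the certificate header.
    cert_parent_hash: [u8; 32],
    /// Hash of the parent L2 block in storage.
    stored_parent_hash: [u8; 32],
    /// Whether the parent L2 block already has a certificate.
    parent_has_cert: bool,
}

fn validate_cert(inputs: &CertCheckInputs) -> anyhow::Result<()> {
    // 1. The certified payload must match the L2 block in storage.
    ensure!(
        inputs.cert_payload_hash == inputs.stored_payload_hash,
        "consensus block payload doesn't match the L2 block"
    );
    // 2. The certificate's parent must match the parent L2 block.
    ensure!(
        inputs.cert_parent_hash == inputs.stored_parent_hash,
        "certificate parent doesn't match the parent L2 block"
    );
    // 3. The parent block must already be certified.
    ensure!(inputs.parent_has_cert, "parent L2 block has no certificate");
    Ok(())
}
```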
22 changes: 11 additions & 11 deletions core/lib/dal/src/events_dal.rs
@@ -70,14 +70,14 @@ impl EventsDal<'_, '_> {
for (tx_location, events) in all_block_events {
let IncludedTxLocation {
tx_hash,
-tx_index_in_miniblock,
+tx_index_in_l2_block,
tx_initiator_address,
} = tx_location;

for (event_index_in_tx, event) in events.iter().enumerate() {
write_str!(
&mut buffer,
r"{block_number}|\\x{tx_hash:x}|{tx_index_in_miniblock}|\\x{address:x}|",
r"{block_number}|\\x{tx_hash:x}|{tx_index_in_l2_block}|\\x{address:x}|",
address = event.address
);
write_str!(&mut buffer, "{event_index_in_block}|{event_index_in_tx}|");
@@ -143,11 +143,11 @@

let mut buffer = String::new();
let now = Utc::now().naive_utc().to_string();
-let mut log_index_in_miniblock = 0u32;
+let mut log_index_in_l2_block = 0u32;
for (tx_location, logs) in all_block_l2_to_l1_logs {
let IncludedTxLocation {
tx_hash,
-tx_index_in_miniblock,
+tx_index_in_l2_block,
..
} = tx_location;

@@ -163,18 +163,18 @@

write_str!(
&mut buffer,
r"{block_number}|{log_index_in_miniblock}|{log_index_in_tx}|\\x{tx_hash:x}|"
r"{block_number}|{log_index_in_l2_block}|{log_index_in_tx}|\\x{tx_hash:x}|"
);
write_str!(
&mut buffer,
r"{tx_index_in_miniblock}|{tx_number_in_block}|{shard_id}|{is_service}|"
r"{tx_index_in_l2_block}|{tx_number_in_block}|{shard_id}|{is_service}|"
);
writeln_str!(
&mut buffer,
r"\\x{sender:x}|\\x{key:x}|\\x{value:x}|{now}|{now}"
);

-log_index_in_miniblock += 1;
+log_index_in_l2_block += 1;
}
}

@@ -451,13 +451,13 @@ mod tests {

let first_location = IncludedTxLocation {
tx_hash: H256([1; 32]),
-tx_index_in_miniblock: 0,
+tx_index_in_l2_block: 0,
tx_initiator_address: Address::default(),
};
let first_events = vec![create_vm_event(0, 0), create_vm_event(1, 4)];
let second_location = IncludedTxLocation {
tx_hash: H256([2; 32]),
-tx_index_in_miniblock: 1,
+tx_index_in_l2_block: 1,
tx_initiator_address: Address::default(),
};
let second_events = vec![
Expand Down Expand Up @@ -532,13 +532,13 @@ mod tests {

let first_location = IncludedTxLocation {
tx_hash: H256([1; 32]),
-tx_index_in_miniblock: 0,
+tx_index_in_l2_block: 0,
tx_initiator_address: Address::default(),
};
let first_logs = vec![create_l2_to_l1_log(0, 0), create_l2_to_l1_log(0, 1)];
let second_location = IncludedTxLocation {
tx_hash: H256([2; 32]),
-tx_index_in_miniblock: 1,
+tx_index_in_l2_block: 1,
tx_initiator_address: Address::default(),
};
let second_logs = vec![
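For reference, the buffers assembled above feed a Postgres `COPY ... FROM STDIN` statement, with `|` as the column delimiter and `\\x`-prefixed hex for byte columns. A rough standalone sketch of one abbreviated event row, using plain `format!` in place of the crate-internal `write_str!` macro (column list trimmed for illustration):

```rust
// Hypothetical, trimmed-down COPY row for the `events` table: block number,
// tx hash, tx index within the L2 block, and the emitting contract address.
fn event_copy_row(
    block_number: u32,
    tx_hash: &[u8; 32],
    tx_index_in_l2_block: u32,
    address: &[u8; 20],
) -> String {
    let hex = |bytes: &[u8]| {
        bytes.iter().map(|b| format!("{b:02x}")).collect::<String>()
    };
    // The raw string keeps the literal `\\x` escape, which COPY's text format
    // unescapes to the `\x` prefix expected for `bytea` input.
    format!(
        r"{block_number}|\\x{hash}|{tx_index_in_l2_block}|\\x{addr}|",
        hash = hex(tx_hash),
        addr = hex(address),
    )
}

fn main() {
    let row = event_copy_row(1, &[0x11; 32], 0, &[0x22; 20]);
    assert!(row.starts_with(r"1|\\x1111"));
}
```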
9 changes: 0 additions & 9 deletions core/lib/dal/src/lib.rs
@@ -45,7 +45,6 @@ pub mod pruning_dal;
pub mod snapshot_recovery_dal;
pub mod snapshots_creator_dal;
pub mod snapshots_dal;
-mod storage_dal;
pub mod storage_logs_dal;
pub mod storage_logs_dedup_dal;
pub mod storage_web3_dal;
@@ -94,10 +93,6 @@

fn storage_logs_dal(&mut self) -> StorageLogsDal<'_, 'a>;

-#[deprecated(note = "Soft-removed in favor of `storage_logs`; don't use")]
-#[allow(deprecated)]
-fn storage_dal(&mut self) -> storage_dal::StorageDal<'_, 'a>;

fn storage_logs_dedup_dal(&mut self) -> StorageLogsDedupDal<'_, 'a>;

fn tokens_dal(&mut self) -> TokensDal<'_, 'a>;
@@ -182,10 +177,6 @@ impl<'a> CoreDal<'a> for Connection<'a, Core> {
StorageLogsDal { storage: self }
}

-fn storage_dal(&mut self) -> storage_dal::StorageDal<'_, 'a> {
-    storage_dal::StorageDal { storage: self }
-}

fn storage_logs_dedup_dal(&mut self) -> StorageLogsDedupDal<'_, 'a> {
StorageLogsDedupDal { storage: self }
}
8 changes: 4 additions & 4 deletions core/lib/dal/src/pruning_dal/tests.rs
@@ -41,13 +41,13 @@ async fn insert_l2_block(
async fn insert_l2_to_l1_logs(conn: &mut Connection<'_, Core>, l2_block_number: L2BlockNumber) {
let first_location = IncludedTxLocation {
tx_hash: H256([1; 32]),
-tx_index_in_miniblock: 0,
+tx_index_in_l2_block: 0,
tx_initiator_address: Address::default(),
};
let first_logs = vec![mock_l2_to_l1_log(), mock_l2_to_l1_log()];
let second_location = IncludedTxLocation {
tx_hash: H256([2; 32]),
-tx_index_in_miniblock: 1,
+tx_index_in_l2_block: 1,
tx_initiator_address: Address::default(),
};
let second_logs = vec![
@@ -68,13 +68,13 @@ async fn insert_l2_to_l1_logs(conn: &mut Connection<'_, Core>, l2_block_number:
async fn insert_events(conn: &mut Connection<'_, Core>, l2_block_number: L2BlockNumber) {
let first_location = IncludedTxLocation {
tx_hash: H256([1; 32]),
-tx_index_in_miniblock: 0,
+tx_index_in_l2_block: 0,
tx_initiator_address: Address::default(),
};
let first_events = vec![mock_vm_event(0), mock_vm_event(1)];
let second_location = IncludedTxLocation {
tx_hash: H256([2; 32]),
-tx_index_in_miniblock: 1,
+tx_index_in_l2_block: 1,
tx_initiator_address: Address::default(),
};
let second_events = vec![mock_vm_event(2), mock_vm_event(3), mock_vm_event(4)];