Merged
101 changes: 94 additions & 7 deletions crates/rpc/src/config/ctx.rs
@@ -1,4 +1,60 @@
//! RPC context wrapping [`UnifiedStorage`].
//!
//! # Consistency Model
//!
//! The RPC layer reads from two storage tiers with different consistency
//! guarantees:
//!
//! - **Hot storage** (MDBX): synchronous writes, MVCC snapshot isolation,
//! authoritative source for state and headers. Reads open an MDBX read
//! transaction that provides a consistent point-in-time snapshot.
//!
//! - **Cold storage** (async task): eventually consistent. Writes are
//! dispatched asynchronously after hot storage commits via
//! `append_blocks()`, so cold may lag by milliseconds to seconds under
//! normal operation.
//!
//! # Query Routing
//!
//! | Query type | Resolution | Data source | Staleness risk |
//! |---|---|---|---|
//! | State (`getBalance`, `getStorageAt`, `getCode`, `getTransactionCount`) | Hot tag → height | Hot | Low — single tier, single transaction |
//! | EVM execution (`eth_call`, `estimateGas`) | Hot tag → height | Hot | Low — single tier, single transaction |
//! | Block/header queries | Hot tag → height | Cold | Medium — cold lag |
//! | Transaction queries | Hot tag or hash → height | Cold | Medium — cold lag |
//! | Receipt queries | Hot tag or hash → height | Cold | Medium — cold lag |
//! | Log queries (`getLogs`) | Hot tag → height range | Cold | Medium — cold lag |
//! | Filter changes | Hot tag → latest | Cold | Medium — mitigated by ring-buffer reorg detection |
//!
//! # Resolve-then-Query Pattern
//!
//! Most endpoints follow a two-step pattern:
//!
//! 1. **Resolve** a block tag or hash to a concrete block number (reads
//! atomic tag values or queries hot storage's `HeaderNumbers` table).
//! 2. **Query** the resolved block number against hot or cold storage.
//!
//! For hot-only queries (state, EVM), both steps share a single MDBX
//! read transaction, eliminating races between resolution and query.
//!
//! For cold queries, the resolved number is passed to cold storage.
//! Between resolution and cold query, tags can advance or a reorg can
//! replace the block at that height. The caller gets a consistent view
//! of the **resolved** height but may miss a newer block. This is
//! acceptable per JSON-RPC spec — clients retry on stale data.
//!
//! # Future Work: Hash-based Consistency Verification
//!
//! For queries where correctness is critical (e.g., `eth_call`), a
//! stronger guarantee is possible: after resolving the block number,
//! read the block hash from hot storage and pass both (number, hash) to
//! cold. Cold verifies the hash matches before returning data, catching
//! reorgs that replaced the block between resolution and query.
//!
//! This was deferred because: (a) the reorg window is small
//! (milliseconds), (b) it adds one extra hot storage read per query,
//! and (c) for read-only queries the impact of returning stale data is
//! low. See ENG-1901 for the full trade-off analysis.

use crate::{
config::{
@@ -51,8 +107,16 @@ pub(crate) struct EvmBlockContext<Db> {

/// RPC context backed by [`UnifiedStorage`].
///
-/// Provides access to hot storage (state), cold storage (blocks/txs/receipts),
-/// block tag resolution, and optional transaction forwarding.
/// Provides access to hot storage (state, headers), cold storage
/// (blocks, transactions, receipts, logs), block tag resolution,
/// filter/subscription management, and optional transaction forwarding.
///
/// Hot-only queries (state reads, EVM execution) open a single MDBX
/// read transaction for both block resolution and data access. Cold
/// queries resolve a block number from hot storage, then pass it to
/// the cold read handle.
///
/// See the module-level documentation for the full consistency model.
///
/// # Construction
///
@@ -283,8 +347,12 @@ impl<H: HotKv> StorageRpcCtx<H> {

/// Resolve a [`BlockId`] to a header and revm database in one pass.
///
-/// Fetches the header from hot storage and creates a revm-compatible
-/// database snapshot at the resolved block height.
/// Opens a single MDBX read transaction and uses it for both the
/// header lookup and the revm-compatible database snapshot. This
/// ensures the header and EVM state come from the same database
/// snapshot, eliminating races where a reorg between two separate
/// transactions could yield a header from one fork and state from
/// another.
///
/// For `Pending` block IDs, remaps to `Latest` and synthesizes a
/// next-block header (incremented number, timestamp +12s, projected
@@ -294,14 +362,29 @@
id: BlockId,
) -> Result<EvmBlockContext<RevmRead<H::RoTx>>, EthError>
where
-<H::RoTx as HotKvRead>::Error: DBErrorMarker,
<H::RoTx as HotKvRead>::Error: DBErrorMarker + std::error::Error + Send + Sync + 'static,
{
let pending = id.is_pending();
let id = if pending { BlockId::latest() } else { id };

-let sealed = self.resolve_header(id)?.ok_or(EthError::BlockNotFound(id))?;
-let db = self.revm_state_at_height(sealed.number)?;
// Single MDBX read transaction for both header lookup and
// RevmRead construction. `hot_reader()` returns
// `Result<H::RoTx, StorageError>` — the `?` converts through
// `StorageError -> EthError::Hot`.
let reader = self.hot_reader()?;

let sealed = match id {
BlockId::Hash(h) => {
reader.header_by_hash(&h.block_hash).map_err(|e| ResolveError::Db(Box::new(e)))?
}
BlockId::Number(tag) => {
let height = self.resolve_block_tag(tag);
reader.get_header(height).map_err(|e| ResolveError::Db(Box::new(e)))?
}
}
.ok_or(EthError::BlockNotFound(id))?;

let height = sealed.number;
let parent_hash = sealed.hash();
let mut header = sealed.into_inner();

@@ -315,6 +398,10 @@ impl<H: HotKv> StorageRpcCtx<H> {

let spec_id = self.spec_id_for_header(&header);

// Wrap the same reader in RevmRead — no range validation needed
// because we already proved the height exists by fetching the header.
let db = StateBuilder::new_with_database(RevmRead::at_height(reader, height)).build();

Ok(EvmBlockContext { header, db, spec_id })
}
}
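The `Pending` remap described in the doc comment above (resolve `Latest`, then synthesize a next-block header with an incremented number, timestamp +12s, and the resolved block's hash as parent) can be sketched with toy types. `Header` here is an illustrative stand-in, not the alloy type the crate actually uses:

```rust
/// Illustrative stand-in for the consensus header type.
#[derive(Clone, Debug, PartialEq)]
struct Header {
    number: u64,
    timestamp: u64,
    parent_hash: [u8; 32],
}

/// Build a synthetic "pending" header from the resolved `Latest`
/// header and its hash: number + 1, timestamp one 12-second slot
/// ahead, parent hash set to the latest block's hash.
fn synthesize_pending(latest: &Header, latest_hash: [u8; 32]) -> Header {
    Header {
        number: latest.number + 1,
        timestamp: latest.timestamp + 12,
        parent_hash: latest_hash,
    }
}
```

The remaining synthesized fields mentioned in the (truncated) doc comment are omitted from this sketch.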
52 changes: 49 additions & 3 deletions crates/rpc/src/config/resolve.rs
@@ -1,8 +1,42 @@
-//! Block tag tracking and BlockId resolution.
//! Block tag tracking and `BlockId` resolution.
//!
//! [`BlockTags`] holds externally-updated atomic values for Latest, Safe,
//! and Finalized block numbers. The RPC context owner is responsible for
//! updating these as the chain progresses.
//!
//! # Atomic Ordering Guarantees
//!
//! All tag loads use [`Acquire`] ordering and all stores use [`Release`]
//! ordering. This establishes a happens-before relationship: any data
//! written before a tag update is visible to readers who observe the new
//! tag value.
//!
//! Multi-tag updates ([`BlockTags::update_all`]) store in order:
//! finalized → safe → latest. This ensures readers never observe a
//! state where `latest` is behind `finalized` — they may see a
//! slightly stale view (old latest with new finalized), but never an
//! inverted one.
//!
//! During reorgs, [`BlockTags::rewind_to`] uses [`fetch_min`] with the
//! same finalized → safe → latest order, atomically capping each tag
//! without risk of a reader seeing `latest < finalized` while values
//! are being decreased.
//!
//! # Race Window
//!
//! Between [`StorageRpcCtx::resolve_block_tag`] and the subsequent
//! storage query, tags can advance. The caller gets a point-in-time
//! snapshot of the tag values — not a guarantee that those values are
//! still current by the time the query executes. For hot-only queries,
//! the MDBX read transaction provides snapshot isolation. For cold
//! queries, the resolved number may reference a block that cold storage
//! has not yet received (returning `None`) or that a reorg has since
//! replaced (returning stale data).
//!
//! [`Acquire`]: std::sync::atomic::Ordering::Acquire
//! [`Release`]: std::sync::atomic::Ordering::Release
//! [`fetch_min`]: std::sync::atomic::AtomicU64::fetch_min
//! [`StorageRpcCtx::resolve_block_tag`]: crate::StorageRpcCtx::resolve_block_tag

use alloy::primitives::B256;
use signet_storage::StorageError;
@@ -29,8 +63,20 @@ pub struct SyncStatus {

/// Externally-updated block tag tracker.
///
-/// Each tag is an `Arc<AtomicU64>` that the caller updates as the chain
-/// progresses. The RPC layer reads these atomically for tag resolution.
/// Each tag is an [`Arc<AtomicU64>`] that the caller updates as the
/// chain progresses. The RPC layer reads these atomically for tag
/// resolution.
///
/// Tags are updated in a specific order to maintain consistency:
/// - **Writes** ([`update_all`](Self::update_all)): finalized → safe →
/// latest, so readers never see `latest` behind `finalized`.
/// - **Rewinds** ([`rewind_to`](Self::rewind_to)): same order, using
/// [`fetch_min`](AtomicU64::fetch_min) for atomic capping.
/// - **Reads**: each tag is loaded independently with [`Acquire`]
/// ordering. A reader may see a slightly stale combination (e.g.,
/// new `finalized` but old `latest`) but never an inverted one.
///
/// [`Acquire`]: std::sync::atomic::Ordering::Acquire
///
/// # Example
///
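The ordering rules documented above condense into a small self-contained model. This is a hypothetical sketch (not the crate's actual `BlockTags` API) showing Release stores in finalized → safe → latest order, `fetch_min`-based rewinds in the same order, and independent Acquire loads:

```rust
use std::sync::{
    atomic::{
        AtomicU64,
        Ordering::{Acquire, Release},
    },
    Arc,
};

/// Sketch of an externally-updated tag tracker. Names are illustrative.
struct Tags {
    latest: Arc<AtomicU64>,
    safe: Arc<AtomicU64>,
    finalized: Arc<AtomicU64>,
}

impl Tags {
    fn new() -> Self {
        Self {
            latest: Arc::new(AtomicU64::new(0)),
            safe: Arc::new(AtomicU64::new(0)),
            finalized: Arc::new(AtomicU64::new(0)),
        }
    }

    /// Forward update: store finalized → safe → latest with Release,
    /// so a reader who sees a new value also sees the data written
    /// before the corresponding store.
    fn update_all(&self, latest: u64, safe: u64, finalized: u64) {
        self.finalized.store(finalized, Release);
        self.safe.store(safe, Release);
        self.latest.store(latest, Release);
    }

    /// Reorg rewind: atomically cap each tag at `height`, same order.
    /// `fetch_min` never raises a value, so concurrent readers see
    /// each tag move down at most once.
    fn rewind_to(&self, height: u64) {
        self.finalized.fetch_min(height, Release);
        self.safe.fetch_min(height, Release);
        self.latest.fetch_min(height, Release);
    }

    /// Point-in-time view: (latest, safe, finalized), Acquire loads.
    fn snapshot(&self) -> (u64, u64, u64) {
        (
            self.latest.load(Acquire),
            self.safe.load(Acquire),
            self.finalized.load(Acquire),
        )
    }
}
```

Note that `rewind_to` leaves `finalized` untouched when the rewind point is above it — capping only lowers tags, never raises them.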
23 changes: 19 additions & 4 deletions crates/rpc/src/eth/helpers.rs
@@ -15,6 +15,7 @@ use alloy::{
};
use serde::Deserialize;
use signet_cold::ColdReceipt;
use signet_hot::db::HotDbRead;
use signet_storage_types::ConfirmationMeta;
use trevm::MIN_TRANSACTION_GAS;

@@ -129,9 +130,17 @@ pub(crate) use response_tri;

/// Resolve a block ID and open a hot storage reader at that height.
///
-/// Shared by account-state endpoints (`balance`, `storage_at`,
-/// `addr_tx_count`, `code_at`) which all follow the same
-/// resolve → open reader → query pattern.
/// Opens a single MDBX read transaction and resolves the block ID
/// within it, ensuring the resolution and subsequent data query share
/// the same database snapshot.
///
/// For hash-based [`BlockId`]s, the hash→number lookup happens inside
/// the returned transaction. For tag-based IDs, the atomic tag read
/// is outside MDBX (unavoidable) and happens after the snapshot is
/// opened, so a tag that advances in between may briefly point past
/// the snapshot; the query at that height then simply finds no data.
///
/// Used by account-state endpoints (`balance`, `storage_at`,
/// `addr_tx_count`, `code_at`).
pub(crate) fn hot_reader_at_block<H>(
ctx: &crate::config::StorageRpcCtx<H>,
id: BlockId,
@@ -140,8 +149,14 @@ where
H: signet_hot::HotKv,
<H::RoTx as signet_hot::model::HotKvRead>::Error: std::error::Error + Send + Sync + 'static,
{
-let height = ctx.resolve_block_id(id).map_err(|e| e.to_string())?;
let reader = ctx.hot_reader().map_err(|e| e.to_string())?;
let height = match id {
BlockId::Number(tag) => ctx.resolve_block_tag(tag),
BlockId::Hash(h) => reader
.get_header_number(&h.block_hash)
.map_err(|e| e.to_string())?
.ok_or_else(|| format!("block hash not found: {}", h.block_hash))?,
};
Ok((reader, height))
}

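The deferred (number, hash) verification described under Future Work in ctx.rs amounts to one extra equality check on the cold side: the caller passes the hash it resolved from hot storage, and cold refuses to answer if a reorg has since replaced the block at that height. A hypothetical sketch, with all names invented for illustration:

```rust
use std::collections::HashMap;

/// Illustrative cold-storage record.
struct ColdBlock {
    number: u64,
    hash: [u8; 32],
    payload: Vec<u8>,
}

#[derive(Debug, PartialEq)]
enum ColdError {
    /// Cold lags hot; the caller may retry.
    NotYetSynced,
    /// Hash mismatch: the block was replaced after resolution.
    ReorgedAway,
}

/// Return the payload at `number` only if its hash still matches the
/// hash the hot tier resolved, catching reorgs that landed between
/// resolution and this query.
fn get_verified(
    store: &HashMap<u64, ColdBlock>,
    number: u64,
    expected_hash: [u8; 32],
) -> Result<&Vec<u8>, ColdError> {
    let block = store.get(&number).ok_or(ColdError::NotYetSynced)?;
    debug_assert_eq!(block.number, number);
    if block.hash != expected_hash {
        return Err(ColdError::ReorgedAway);
    }
    Ok(&block.payload)
}
```

The trade-off noted in the PR docs applies: this costs one extra hot read per query (to fetch the hash at resolution time) in exchange for closing the small reorg window.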