## Summary - Fixes A-791: LMDB cursors in `ReadTransaction` could leak if an error occurred between cursor creation and the `try` block entry. Moved `START_CURSOR` inside the `try` block in both `#iterate` and `#countEntries` so the `finally` cleanup always runs. ## Test plan - Existing kv-store tests cover iteration and cursor lifecycle - The fix is structural (moving code inside try) with no behavior change on the happy path 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
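The cursor fix can be sketched as follows (the cursor shape is illustrative, not the actual LMDB binding): creating the cursor inside the `try` block guarantees the `finally` cleanup runs even if a step between creation and iteration throws.

```typescript
// Illustrative cursor interface; the real kv-store types differ.
type Cursor = { next(): string | undefined; close(): void };

function* iterateSafely(openCursor: () => Cursor): Generator<string> {
  let cursor: Cursor | undefined;
  try {
    cursor = openCursor(); // moved inside try: a throw after this point still reaches finally
    let value: string | undefined;
    while ((value = cursor.next()) !== undefined) {
      yield value;
    }
  } finally {
    cursor?.close(); // runs on normal exit, thrown error, or early consumer break
  }
}
```

With the creation outside the `try`, any error thrown between `openCursor()` and entering the `try` would skip the `finally` and leak the cursor.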
…rices (#22512) ## Summary - Fixes A-755: L1 gas price, gas consumption, and fee histograms were using OTel default buckets which don't match real-world distributions. Added View-based bucket configs for `gwei` (0.1–1000), `gas` (10k–30M), and `eth` (0.0001–10) unit histograms. - Also fixed a unit mismatch in `l1_tx_metrics.ts` where gas prices were recorded in wei but the metric definitions declared gwei as the unit. Now converts wei to gwei before recording. ## Test plan - Verify metrics are emitted with correct bucket boundaries in Grafana after deployment - Existing telemetry and publisher tests should pass unchanged 🤖 Generated with [Claude Code](https://claude.com/claude-code) --------- Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
#22511) ## Summary - Fixes A-747: The OTel `BatchSpanProcessor` silently drops spans when its internal queue is full. Introduced `MonitoredBatchSpanProcessor` that extends it, tracks approximate queue depth, and emits rate-limited warnings (every 30s) when drops are detected. Also logs total drops on shutdown. ## Test plan - Verify telemetry client starts correctly with the new processor - Under high span volume, confirm warning logs appear instead of silent drops - Existing telemetry tests should pass unchanged 🤖 Generated with [Claude Code](https://claude.com/claude-code) --------- Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Fixes A-746: wraps DB connection string in a SecretValue 🤖 Generated with [Claude Code](https://claude.com/claude-code) --------- Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
… epoch N+1 starts (#22508) ## Summary - Fixes A-720: `cleanupStaleJobs()` was removing all stale jobs including those still being actively proved. Added `!this.inProgress.has(id)` guard so in-progress jobs are left alone and handled correctly by `reEnqueueExpiredJobs()` when they time out. ## Test plan - Existing proving broker tests cover the stale cleanup and timeout paths - Verify that in-progress jobs from epoch N survive into epoch N+1 until they either complete or time out 🤖 Generated with [Claude Code](https://claude.com/claude-code) --------- Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
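A minimal sketch of the guard (container names are assumptions): stale jobs still present in the in-progress set are skipped, leaving them to the timeout/re-enqueue path instead of deleting them mid-proof.

```typescript
// Illustrative signature; the real broker operates on its own job store.
function cleanupStaleJobs(staleIds: string[], inProgress: Set<string>, remove: (id: string) => void) {
  for (const id of staleIds) {
    if (!inProgress.has(id)) {
      remove(id); // only jobs nobody is actively proving get cleaned up
    }
  }
}
```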
… path (#22112) Slightly verbose refactor, but essentially just wrapping everything in a try catch and unwinding on failure path. Co-authored-by: danielntmd <danielntmd@nethermind.io>
…rd retry limit (#21842) ## Summary - **Fixes A-711**: `cleanUpProvingJobState` was calling `deferred.promise.catch(() => {})` before `deferred.reject()` to suppress unhandled rejections, but this doesn't work — `.catch()` creates a new branched promise; any code already awaiting the original promise still receives an unhandled rejection. Fixed by resolving with `{ status: 'rejected', reason: '...' }` instead, consistent with how the rest of the class settles promises, and making unhandled rejections impossible. Fixes [A-711](https://linear.app/aztec-labs/issue/A-711/audit-31-prover-cleanupprovingjobstate-deletes-unsettled-promises)
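The fix can be sketched like this (names are illustrative, not the actual prover-client code). `.catch()` returns a new promise; the original one, already handed out to awaiters, still rejects unhandled. Resolving with a rejected status object makes an unhandled rejection impossible by construction.

```typescript
type JobResult = { status: 'fulfilled'; value: unknown } | { status: 'rejected'; reason: string };

function makeDeferred() {
  let resolve!: (r: JobResult) => void;
  const promise = new Promise<JobResult>(res => (resolve = res));
  return { promise, resolve };
}

function cleanUpProvingJobState(deferred: ReturnType<typeof makeDeferred>) {
  // Instead of `deferred.promise.catch(() => {})` followed by `deferred.reject(...)`,
  // settle the original promise with a rejected *status*:
  deferred.resolve({ status: 'rejected', reason: 'proving job state deleted' });
}
```

Awaiters then branch on `result.status` rather than wrapping the await in try/catch.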
…22644) ## Motivation Some Ethereum RPC nodes prune historical logs. If an Aztec node points at such an RPC, L1 sync silently misses events the archiver (and slasher, sequencer) depend on, producing corrupt state or wrong behavior rather than a clear failure. We want startup to abort with a clear, actionable error instead. See also A-927 ## Approach On archiver start, after the existing debug/trace probe, query the `OwnershipTransferred` event that every Rollup emits via `Ownable` in its constructor (on `l1StartBlock`). This is a cheap, guaranteed log at a known old block. The L1 `PublicClient` is typically a viem fallback over several user-configured RPC URLs, so the validator introspects its transport, builds a single-URL client per URL, and probes each in sequence — the first URL that fails throws. The error message names the failing URL, its `web3_clientVersion`, the addresses whose logs must be retained (Rollup, Inbox, Registry, GovernanceProposer), and reth-specific guidance when applicable. `ARCHIVER_SKIP_HISTORICAL_LOGS_CHECK=true` bypasses the check. ## Changes - **ethereum (contracts/rollup)**: New `RollupContract.getOwnershipTransferredEventsAtDeploy()` that queries the event at `l1StartBlock`. - **ethereum (client)**: New `getRpcUrlsFromClient()` helper that extracts URLs from a viem fallback transport. - **archiver (l1/validate_historical_logs)**: New validator that iterates over each configured RPC URL, probes it with a dedicated single-URL client, and builds a rich operator-facing error on failure (per-URL client version, contract addresses required to have log retention, reth-specific guidance). - **archiver (archiver)**: Wire the validator into `start()` after `validateAndLogTraceAvailability`; widen `l1Addresses` to include `rollupAddress` / `inboxAddress`. - **archiver (config)**: New `ARCHIVER_SKIP_HISTORICAL_LOGS_CHECK` env var (default `false`), plumbed through `mapArchiverConfig`, `factory`, and `ArchiverSpecificConfig`. 
- **foundation (config/env_var)**: Register the new env var name. - **ethereum/archiver (tests)**: Anvil-backed test covering the positive path (`rollup.test.ts`); unit tests for the validator covering multi-URL iteration, per-URL failure reporting, reth vs generic guidance, and skip behavior; existing archiver tests updated for the new address shape.
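The per-URL iteration can be sketched as follows (the probe body is an assumption; conceptually it queries the Ownable `OwnershipTransferred` log at `l1StartBlock` through a dedicated single-URL client):

```typescript
// Hedged sketch: the real validator builds a single-URL viem client per URL
// and enriches the error with web3_clientVersion, the contract addresses
// requiring log retention, and reth-specific guidance.
async function validateHistoricalLogs(
  urls: string[],
  probe: (url: string) => Promise<boolean>,
): Promise<void> {
  for (const url of urls) {
    if (!(await probe(url))) {
      throw new Error(`RPC at ${url} appears to have pruned historical logs`);
    }
  }
}
```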
When rolling back local L1 to L2 messages, we query each message against L1 by fetching its log. Since #22154 (v5) this was done via blockHash. However, querying logs by blockhash throws an RPC error if the block no longer exists, which is likely if the local message was removed due to an L1 reorg. We never hit this during testing because of foundry-rs/foundry#14371. To fix this, we now query by an L1 block number range. This also allows us to still find the local message on L1 even if it was moved by a few blocks due to a small reorg. Additionally, this PR adds a fast-path in `rollbackL1ToL2Messages` that matches the local rolling hash against the current remote state, so we don't query by event when we don't need to.
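The range computation can be sketched like this (names and slack size are illustrative; the actual window in the PR may differ). The resulting pair would feed a number-range log query, e.g. viem's `getLogs({ fromBlock, toBlock, ... })`, which succeeds even if the original block was reorged away:

```typescript
// Hypothetical helper: search a window around where the message was first seen,
// so a small reorg that moved it by a few blocks still finds it.
function logSearchRange(seenAtBlock: bigint, slack = 32n): { fromBlock: bigint; toBlock: bigint } {
  const fromBlock = seenAtBlock > slack ? seenAtBlock - slack : 0n;
  return { fromBlock, toBlock: seenAtBlock + slack };
}
```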
## Motivation E2e tests that spin up their own validator nodes (P2P and epochs suites) were unnecessarily starting a sequencer on the initial setup node and waiting for it to mine block 1. With the genesis timestamp support from #22359, this initial block is no longer needed. The initial sequencer also interfered with P2P topology (sharing a validator key with test nodes) and wasted time in setup. ## Approach Added a `dontStartSequencer` option to the e2e `setup()` function that skips sequencer startup and block-1 waiting on the initial node. For P2P tests, the initial node becomes a lightweight archiver (no sequencer, no validator, no P2P), and the wallet is pointed at a test node for transaction propagation. For epochs tests, account deployment is deferred until validator nodes are running, then sequencers are stopped so the test body can restart them with specific configurations. ## Changes - **end-to-end (fixtures)**: Added `dontStartSequencer` option to `SetupOptions`. When set, passes it through to `AztecNodeService.createAndSync()` and skips block progression logic - **end-to-end (P2P base class)**: Initial node starts with no sequencer/validator/P2P. Config cleaned up after setup so validator nodes don't inherit initial-node-only settings. Added `setupWalletOnNode()` method - **end-to-end (epochs base class)**: Added `deployTestAccounts()` helper for deferred account deployment. Cleans `dontStartSequencer` from config so validator nodes don't inherit it - **end-to-end (P2P tests)**: 9 test files updated to call `setupWalletOnNode(nodes[0])` before `setupAccount()`. Removed now-unnecessary sequencer stop in `reex.test.ts` - **end-to-end (epochs tests)**: 7 test files updated with `dontStartSequencer: true`, deferred account deployment via `deployTestAccounts()`, and stop-then-restart pattern for sequencers. 
`epochs_ha_sync` kept original approach (HA pairs with `skipPublishingCheckpointsPercent` need the initial sequencer for reliable account deployment) --------- Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
## Motivation
On freshly-started L1 devnets (anvil, local geth/reth/nethermind) there
is a startup window during which the `finalized` block tag is not yet
available — calls like `getBlock({ blockTag: 'finalized' })` fail with a
JSON-RPC error instead of returning a block. The archiver currently logs
noisy warnings during this window and epoch-cache would crash
dereferencing the missing block's timestamp.
## Approach
Added a shared `getFinalizedL1Block` helper in `@aztec/ethereum/queries`
that returns `undefined` when the chain has no finalized block yet,
distinguishing that specific failure from other RPC errors by walking
the viem error chain for the `"finalized|safe block not found"` message
(the code `-32000` surfaced by geth/reth/nethermind). All three
production call sites now handle `undefined` gracefully: the archiver
skips the finalized checkpoint update, and epoch-cache treats entries as
"not finalized yet" so they continue to refresh on TTL until the L1
chain catches up.
## Changes
- **ethereum**: New `getFinalizedL1Block` helper and
`isFinalizedBlockTagNotFoundError` predicate in `queries.ts`.
- **archiver**: `updateFinalizedCheckpoint` uses the helper and returns
early with a trace log instead of warning; removed the old `"returned no
data"` substring workaround.
- **epoch-cache**: `refreshStaleEntry` and `fetchAndCache` use the
helper and guard the `samplingTs <= finalizedBlock.timestamp` comparison
so entries stay unfinalized when L1 has none yet.
- **tests**: Unit tests for the helper/predicate, an archiver sync test
using `FakeL1State` configured with no finalized block, and an
epoch-cache test asserting entries stay unfinalized and keep refreshing.
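The helper pair can be sketched as follows (the error shapes are assumptions about what viem surfaces; the real code walks the viem error chain). Only the specific "no finalized block yet" failure maps to `undefined`; everything else rethrows.

```typescript
// Walk the error's `cause` chain for the -32000 "finalized|safe block not found"
// message surfaced by geth/reth/nethermind.
function isFinalizedBlockTagNotFoundError(err: unknown): boolean {
  for (let e = err as any; e != null; e = e.cause) {
    const msg = typeof e.message === 'string' ? e.message : '';
    if (e.code === -32000 && /(finalized|safe) block not found/i.test(msg)) {
      return true;
    }
  }
  return false;
}

async function getFinalizedL1Block(
  fetchBlock: () => Promise<{ number: bigint; timestamp: bigint }>,
): Promise<{ number: bigint; timestamp: bigint } | undefined> {
  try {
    return await fetchBlock();
  } catch (err) {
    if (isFinalizedBlockTagNotFoundError(err)) {
      return undefined; // the chain simply has no finalized block yet
    }
    throw err; // genuine RPC failure: propagate
  }
}
```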
… latest (#22679) ## Summary - The C++ world state overloaded `WorldStateRevision.blockNumber == 0` as "use latest committed state" via `if (revision.blockNumber)` checks, rather than pinning to block 0. This silently returned the current tip instead of the genesis tree for any genesis-anchored query, once the node advanced past genesis. - `AztecNodeService.getWorldState` had a short-circuit that mapped initial-header queries directly to `getSnapshot(BlockNumber.ZERO)` and bypassed the archive-root double-check that would otherwise catch the mismatch. - Any PXE holding its anchor at the initial header (e.g., `syncChainTip: 'checkpointed'` before the first checkpoint commits) produced private-kernel proofs that failed with `Proving public value inclusion failed`: the public-data-tree witness came from the node's advanced tip while the circuit validated it against the initial header's root. ## How PXE ends up querying block zero 1. **PXE seeds its anchor from the initial header.** On first run, `BlockSynchronizer.doSync` (`pxe/src/block_synchronizer/block_synchronizer.ts:178-181`) sees no stored anchor and calls `node.getBlockHeader(BlockNumber.ZERO)`. It stores the resulting header — whose hash is the initial-header hash and whose tree roots are the genesis roots. 2. **`syncChainTip: 'checkpointed'` keeps it pinned.** `handleBlockStreamEvent` (`block_synchronizer.ts:60-74`) only advances the anchor on `chain-checkpointed` events; `blocks-added` is ignored. Until a checkpoint commits on L1 — which takes seconds or longer after sequencers start — the PXE's anchor stays at the initial header. 3. **`proveTx` hands that anchor to the kernel oracle.** After the sync at the top of `proveTx`, the PXE reads the anchor and passes its hash into `new PrivateKernelOracle(..., anchorBlockHash)`. Every oracle call (`getPublicDataWitness`, `getPublicStorageAt`, etc.) uses that hash. 4. 
**The kernel oracle hits the node with the initial-header hash.** `PrivateKernelOracle.getUpdatedClassIdHints` (`pxe/src/private_kernel/private_kernel_oracle.ts:121-150`) issues `node.getPublicDataWitness(initialHeaderHash, hashLeafSlot)` plus a matching `getPublicStorageAt` read, both pinned to the same hash. 5. **The node short-circuits the initial header.** `AztecNodeService.getWorldState` (`aztec-node/src/aztec-node/server.ts:1714-1719`, pre-fix) recognised the initial-header hash and returned `worldStateSynchronizer.getSnapshot(BlockNumber.ZERO)` directly, skipping the archive-tree reorg check below. 6. **`getSnapshot(0)` builds a revision with `blockNumber = 0`.** `NativeWorldState.getSnapshot` (`world-state/src/native/native_world_state.ts:157-163`) constructed `new WorldStateRevision(forkId=0, blockNumber=0, includeUncommitted=false)` and handed it to the `MerkleTreesFacade`, which forwards it unchanged on every native call. 7. **Native C++ treated `blockNumber == 0` as "latest".** In `barretenberg/cpp/src/barretenberg/world_state/world_state.cpp` every tree op (`get_meta_data`, `get_sibling_path`, `find_low_leaf`, etc.) checked `if (revision.blockNumber)` / `if (revision.blockNumber != 0U)` — zero is falsy, so the code fell into the "no block pin, use latest committed" branch. The returned sibling path, low-leaf preimage, next-index and next-slot were all taken against the current tip. 8. **The circuit mismatched.** Back in Noir (`noir-protocol-circuits/crates/types/src/data/storage_read.nr:41` via `delayed_public_mutable/with_hash.nr`), the membership hash is compared against `historical_header.state.partial.public_data_tree.root` — the genesis root from the PXE's anchor. The oracle-supplied witness was computed against the advanced tip's root. Pre-sequencer the tip happens to equal genesis so it "works"; once block 1 lands they diverge and `assert(is_leaf_in_tree, ...)` fires. 
## Fix - Add an explicit `WorldStateRevision::LATEST` sentinel (`std::numeric_limits<uint32_t>::max()` in C++; mirrored in TS) and an `is_historical()` helper. Replace every `if (revision.blockNumber)` call site in `world_state.cpp` with `if (revision.is_historical())`. Zero now correctly means "pin to block 0". - Update TS `WorldStateRevision.empty()` and `NativeWorldState.fork()` to pass `LATEST` where the old "0 means latest" semantics were relied on. `getSnapshot(blockNumber)` passes the number through unchanged, so `getSnapshot(0)` now genuinely pins to block 0. - In `AztecNodeService.getWorldState`, replace the initial-header early return with a block-number resolution to `BlockNumber.ZERO` that falls through to the standard snapshot + archive-root double-check. The archive tree at index 0 stores the initial-header hash (per the assertion in `native_world_state.ts:143`), so the check works uniformly for block 0 too. - Defensively capture the anchor header once per `proveTx` / `simulateTx` / `profileTx` in `pxe/src/pxe.ts` and thread it through to both `#executePrivate` and `#prove`, rather than re-reading `anchorBlockStore` independently in each call. This cannot drift today (both live inside the same job-queue slot), but makes the invariant explicit and type-checked going forward. ## Test plan - [x] `yarn build` from `yarn-project`. - [ ] CI runs full suite, including world-state and e2e tests that exercise `syncChainTip: 'checkpointed'`. - [ ] Confirm `e2e_epochs/epochs_mbps_redistribution` second test stops emitting `Proving public value inclusion failed` errors once the full stack (including rebuilt `bb`) is deployed. --------- Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
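A TS-side sketch of the sentinel (mirroring the C++ change; field names are simplified). Block 0 is a valid historical pin, so "latest" gets an explicit out-of-band value instead of overloading zero:

```typescript
// std::numeric_limits<uint32_t>::max() on the C++ side.
const LATEST = 0xffffffff;

class Revision {
  constructor(public readonly blockNumber: number) {}
  isHistorical(): boolean {
    return this.blockNumber !== LATEST; // zero now genuinely means "pin to block 0"
  }
}
```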
…L1 submission (#22586) ## Motivation With pipelining enabled, the sequencer optimistically builds a checkpoint on top of a proposed parent. If that parent checkpoint lands on L1 with invalid attestations, the pipelined checkpoint was never invalidating it — the invalidation was cleared at build time under the assumption the parent would handle it. ## Approach At submission time (after the pipelining sleep), the sequencer now waits for the parent checkpoint to land on L1, then verifies it matches expectations: correct hash, valid attestations, and no unexpected checkpoints appeared. If the parent is invalid, the pipelined work is discarded and an invalidation is enqueued instead. The `skipInvalidateBlockAsProposer` config is respected so the committee member fallback path still works. ## Changes - **sequencer-client (checkpoint_proposal_job)**: Restructured `waitForAttestationsAndEnqueueSubmissionAsync` to defer `enqueueCheckpointForSubmission` until after parent validation when pipelining. Added `waitForParentCheckpointOnL1` which polls the archiver and checks 5 failure conditions: archiver sync timeout, parent not on L1, parent hash mismatch, parent invalid attestations, unexpected parent appeared. Added `enqueueInvalidationForParent` helper that respects `skipInvalidateBlockAsProposer`. - **sequencer-client (events)**: New `checkpoint-parent-mismatch` event with slot, checkpoint number, and reason. - **sequencer-client (metrics)**: New `recordPipelineParentCheckpointMismatch` counter with reason attribute. - **telemetry-client**: New `SEQUENCER_PIPELINE_PARENT_CHECKPOINT_MISMATCH_COUNT` metric definition. - **sequencer-client (tests)**: 8 new unit tests covering all failure reasons and success paths via `executeAndAwait`, asserting publisher actions and emitted events. - **end-to-end**: Enabled pipelining (`enableProposerPipelining: true, inboxLag: 2`) in `epochs_invalidate_block` tests. 
Updated `proposer invalidates multiple checkpoints` test to verify the invalidation happens promptly (proposer path, not committee fallback). Fixes A-909 Fixes A-921 --------- Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
## Motivation PXE proves a tx anchored to the genesis block (block 0). When the node has already advanced past block 0, the node's `getWorldState(genesisHeaderHash)` short-circuited to `getCommitted()`, which returns the latest committed tip instead of genesis state. The public-data-tree witnesses computed against the tip diverge from the genesis root the kernel circuit checks against, firing `assert(is_leaf_in_tree, "Proving public value inclusion failed")`. The deeper cause is that block 0 was never a first-class historical block in world state: trees persisted a `BlockPayload` only from block 1 onwards, and low-level tree ops explicitly threw on `blockNumber == 0`. ## Approach Persist a `BlockPayload` for block 0 at tree genesis (captured from `meta.initialRoot` / `meta.initialSize` in `commit_genesis_state`). Remove the read-path `blockNumber == 0` throw guards now that `get_block_data(0)` succeeds and returns the correct initial state. On the aztec-node, resolve the genesis anchor hash to `BlockNumber.ZERO` and fall through to the standard snapshot + archive-root double-check path instead of short-circuiting to committed. ## Changes - **barretenberg (cached_content_addressed_tree_store)**: write a block-0 payload in `commit_genesis_state`, plus a block-index entry for trees with non-empty initial state (NULLIFIER, PUBLIC_DATA, ARCHIVE). - **barretenberg (append_only_tree, indexed_tree)**: drop the `blockNumber == 0` throw guards on read paths (`get_sibling_path`, `get_leaf`, `find_leaf_indices_from`, `find_leaf_sibling_paths`, `find_low_leaf`). Destructive guards (unwind/remove/finalize at block 0) are preserved. - **aztec-node (server.getWorldState)**: resolve the initial header hash to `BlockNumber.ZERO` and reuse the standard snapshot path so genesis-anchored queries see genesis state even after the tip advances. 
- **aztec-node (server.test)**: the `initial header hash` test now asserts `getSnapshot(BlockNumber.ZERO)` is called and the archive double-check passes. - **end-to-end (e2e_genesis_timestamp)**: unskip the regression test that proves a second genesis-anchored deploy after a prior deploy has modified the public data tree. --------- Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
## Overview Use optimized verifier as default rollup verifier
…owser tests (#22693) ## What `yarn-project/kv-store/src/sqlite-opfs/worker.ts::handleDeleteDb` installed an OPFS SAH Pool on every teardown, even for `:memory:` ephemeral DBs (which never back a file). The OPFS SAH Pool acquires an exclusive directory lock, and under heavy test churn + CPU contention (CI runs this container with `--cpus=2`) the next test's pool install can block indefinitely waiting for the previous terminated worker's OPFS handles to release. The `deleteDb` RPC has no timeout, so `afterEach` hangs and the whole `yarn test` run times out. All kv-store browser tests use `AztecSQLiteOPFSStore.open(mockLogger, undefined, true)` (ephemeral), so `handleInit` never sets up a pool for them — there is literally nothing to unlink. Skip the pool install in that case. Non-ephemeral stores are unchanged: `handleInit` still calls `ensurePool`, so `pool` is set by the time `handleDeleteDb` runs and `pool.unlink(path)` still removes the file. ## Failing CI - Run: https://github.com/AztecProtocol/aztec-packages/actions/runs/24719874446 - Log: http://ci.aztec-labs.com/1776770924267106 → kv-store step http://ci.aztec-labs.com/e2b0c5a0a7db1b0f - Symptom: `yarn test` killed by `timeout 600s`; last printed test was `sqlite-opfs/multi_map.test.ts > multiple keys are independent`, then 10 minutes of silence before TERM. Full analysis: https://gist.github.com/AztecBot/f977b8a3c3debb9b9b00087bc951a04d ## Verification - `yarn test:browser` — 131 passed / 2 skipped (~9s, same as before). - `yarn test:node` — 264 passed. The full `./bootstrap.sh ci` was not run — it takes hours and this fix is scoped to one function. Targeted browser-test runs exercise the exact code path. ClaudeBox log: https://claudebox.work/s/89231bcae70f2f78?run=1
…test (#22719) ## Summary - `e2e_p2p/fee_asset_price_oracle_gossip.test.ts` was failing with `TypeError: Cannot read properties of undefined (reading 'checkpoint')`. - Root cause: `setupAccount()` no longer sends an L2 tx (it just registers a hardcoded account in PXE), so `targetBlock = aztecNode.getBlockNumber()` is `0` and the existing `retryUntil` is trivially satisfied. The test then queries `dataStore.getCheckpoints(...)` before any checkpoint has been published/indexed, gets `[]` back, and crashes on destructure. - Fix: wrap the `getCheckpoints` call in a `retryUntil` (120s timeout, ample for a ~48s checkpoint cadence at `slotDuration=24` × 2 blocks) that waits until the archiver has indexed the first published checkpoint. Query `CheckpointNumber(0)` directly — `fromBlockNumber` is deprecated and any published checkpoint is sufficient to validate the attestation signers. Failure: https://ci.aztec-labs.com/logs/acab993c3a6c25c5 ## Test plan - [ ] CI run of `e2e_p2p/fee_asset_price_oracle_gossip.test.ts` passes 🤖 Generated with [Claude Code](https://claude.com/claude-code)
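A generic sketch of the retry pattern (the real `retryUntil` helper's signature may differ): poll until the query returns a result or the deadline passes.

```typescript
// Illustrative helper: in the test this would wrap the getCheckpoints call,
// returning only once the archiver has indexed the first published checkpoint.
async function pollUntilDefined<T>(
  fn: () => Promise<T | undefined>,
  timeoutMs: number,
  intervalMs = 100,
): Promise<T> {
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    const result = await fn();
    if (result !== undefined) return result;
    if (Date.now() >= deadline) throw new Error('timed out waiting for condition');
    await new Promise(res => setTimeout(res, intervalMs));
  }
}
```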
…isabled (#22691) When `PROVER_NODE_DISABLE_PROOF_PUBLISH=true` (as set on mainnet), the prover node previously finalized the proof and then silently dropped it. This PR changes that path to: - Run the same on-chain validation as a real submission (`validateEpochProofSubmission`) — catches public-input mismatches early - Encode the proof submission calldata and call `l1TxUtils.estimateGas` to get an accurate gas limit - Fetch the current gas price via `l1TxUtils.getGasPrice()` and the latest block's `baseFeePerGas` - Compute `estimatedTotalFee = gasLimit * (baseFee + priorityFee)` and record it Three new metrics are emitted under `aztec.prover_node.estimated_submission.*`: - `gas` — estimated gas limit for the proof submission tx - `gas_price` — estimated effective gas price (gwei) - `total_fee` — estimated total fee (ETH) This gives us visibility into what a mainnet prover would have paid per epoch, enabling profitability analysis against the rewards tracked by `aztec.prover_node.rewards_per_epoch`. Fixes [A-929](https://linear.app/aztec-labs/issue/A-929/track-proof-submission-fee-estimate-in-fisherman-prover-node)
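The recorded estimate reduces to simple bigint wei arithmetic (names illustrative):

```typescript
// estimatedTotalFee = gasLimit * (baseFee + priorityFee), all in wei.
function estimateTotalFeeWei(gasLimit: bigint, baseFeePerGas: bigint, priorityFeePerGas: bigint): bigint {
  return gasLimit * (baseFeePerGas + priorityFeePerGas);
}
```

For example, 500k gas at a 20 gwei base fee plus 2 gwei priority fee works out to 0.011 ETH.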
## Motivation When a PR fails the Squashed PR Check against `next` and the author then changes the base branch to a merge-train (where the check does not apply), the stale failure remained on the PR. This blocked the PR from merging cleanly without manual intervention. ## Approach Add `edited` to the `pull_request` trigger types. GitHub fires `edited` whenever the base branch (or title/body) changes, so the workflow reruns, the job's `if` guard evaluates false, and the skipped run supersedes the earlier failure status. ## Changes - **.github/workflows/squashed-pr-check.yml**: include `edited` in the `pull_request` event types so base branch changes retrigger the workflow Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
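The trigger change amounts to a one-line addition (the other event types listed here are assumptions about the workflow's existing configuration):

```yaml
on:
  pull_request:
    types: [opened, synchronize, reopened, edited] # `edited` fires on base-branch (and title/body) changes
```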
…er (#22716) Reopening #22472 which was accidentally merged ## Motivation When the node is a validator that already built a checkpoint locally (via `addProposedBlock` + `setProposedCheckpoint`), the blocks are already in the archiver store. Fetching blobs from the beacon chain is redundant and expensive, especially during sync. This decouples calldata and blob retrieval so we can skip blob fetching when the proposed checkpoint matches. ## Approach Split `retrieveCheckpointsFromRollup` into two phases: (1) fetch calldata only (header, attestations, archive root), (2) check the archiver store for a proposed checkpoint with matching header. If found, promote it to confirmed via a fast path. Otherwise, fetch blobs in parallel (same `asyncPool(10, ...)` concurrency as before) and store as normal. ## Changes - **archiver (l1/data_retrieval.ts)**: New `CalldataOnlyCheckpoint` type, `retrieveCheckpointCalldataFromRollup` (calldata-only fetch), and `fetchBlobsAndBuildPublishedCheckpoint` (deferred blob fetch). Existing functions left intact. - **archiver (store/block_store.ts)**: New `promoteProposedToCheckpointed` method that reads existing blocks from store, writes a confirmed checkpoint entry with L1 metadata + attestations, and clears the proposed singleton. - **archiver (store/kv_archiver_store.ts)**: Pass-through for `promoteProposedToCheckpointed`. - **archiver (modules/data_store_updater.ts)**: New `promoteProposedCheckpoint` wrapper that handles validation status and tips cache refresh. - **archiver (modules/l1_synchronizer.ts)**: `handleCheckpoints` now partitions calldata checkpoints into promote-vs-fetch-blobs, fetches blobs in parallel for non-matching ones, promotes the matched one, then merges results for validation. Fixes A-877 --------- Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Enables `enableProposerPipelining: true` in several e2e_epochs test suites - Adds `inboxLag: 2` for validator-based tests (required with pipelining) - Increases timeouts for solo-sequencer tests (~2x slots per checkpoint with pipelining) --------- Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…2118) ## Motivation The tx pool currently accepts transactions with `maxFeesPerGas` below the current block fees, marking them as `skipped` to wait for lower fees. This fills the pool with low-fee txs unlikely to be mined. We need to reject them outright and evict existing txs that fall below the fee threshold after a new block. Original PR: #21281 Fixes A-878 ## Approach Changes `GasTxValidator` to return `invalid` instead of `skipped` for txs below current fees. Extracts `MaxFeePerGasValidator` as a standalone validator. Adds `InsufficientFeePerGasEvictionRule` that evicts pending txs after each block if their fees no longer meet the minimum. ## Changes - **p2p**: `MaxFeePerGasValidator` extracted from `GasTxValidator`, `InsufficientFeePerGasEvictionRule` added to tx pool eviction - **p2p (tests)**: Tests for max fee per gas validation, eviction rule, and pool behavior - **stdlib**: `BlockMinFeesProvider` interface for providing min fees to the tx pool - **aztec-node**: Passes `globalVariableBuilder` as `BlockMinFeesProvider` to `createP2PClient` - **p2p**: Uses projected next-block fees (via `BlockMinFeesProvider`) in gossip validator instead of stale block header fees - **end-to-end**: Unified fee padding constants (`DEFAULT_MIN_FEE_PADDING=5`, `LARGE_MIN_FEE_PADDING=15` for long-lived txs), switched from `getCurrentMinFees` to `getPredictedMinFees` - **cli-wallet**: Reverted `MIN_FEE_PADDING` to original `0.5` --------- Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com> Co-authored-by: Phil Windle <philip.windle@gmail.com>
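The eviction rule can be sketched as a partition over the pending set (pool and tx shapes are assumptions): after each new block, pending txs whose `maxFeesPerGas` no longer covers the current minimum are evicted instead of lingering as `skipped`.

```typescript
type PendingTx = { id: string; maxFeePerGas: bigint };

function evictBelowMinFee(pool: PendingTx[], minFeePerGas: bigint): { kept: PendingTx[]; evicted: PendingTx[] } {
  const kept: PendingTx[] = [];
  const evicted: PendingTx[] = [];
  for (const tx of pool) {
    (tx.maxFeePerGas >= minFeePerGas ? kept : evicted).push(tx);
  }
  return { kept, evicted };
}
```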
…e size (#22724) ## Motivation Follow-up to #22711 addressing a review flag from PhilWindle: `commit_genesis_state` only wrote a block-index entry for block 0 when `initialSize > 0`. This created a minor asymmetry with the regular commit path, which writes `write_block_index_data` unconditionally for every block (including zero-size blocks). ## Approach Drop the `if (meta.initialSize > 0)` guard so genesis state is indexed the same way as any other block. The read API (`find_block_for_index`) uses `get_value_or_greater(index + 1)`, so a `sizeAtBlock=0` entry is unreachable and the extra write is harmless — it just restores the invariant that every committed block appears in the index-to-block database. ## Changes - **barretenberg (cached_content_addressed_tree_store)**: remove the `initialSize > 0` guard around the block-0 `write_block_index_data` call so the genesis commit matches the regular commit path.
The test asserted that checkpoints built with the normal multiplier all had 1 tx per block. But if the first block is empty, which can happen when the mempool-filler loop takes too long, the second block gets double the allocation, breaking the expectation. One fix would be to allow a higher allocation on the last block when the first block is empty. The easier fix, taken in this PR, follows from the observation that per-block allocation is not really what this test is about: we only care that a checkpoint with a very large initial block is actually accepted by validators.
BEGIN_COMMIT_OVERRIDE
fix(kv-store): ensure LMDB cursor is closed on iteration abort (#22509)
fix(telemetry-client): use appropriate histogram buckets for L1 gas prices (#22512)
fix(telemetry-client): log warning when BatchSpanProcessor drops spans (#22511)
fix(stdlib): wrap HA signer databaseUrl in SecretValue (#22510)
fix(prover-client): don't mark in-progress epoch N jobs as stale when epoch N+1 starts (#22508)
chore: (A-730) graceful shutdown for services in node startup failure path (#22112)
fix(prover-client): reject stale job promises and count timeouts toward retry limit (#21842)
feat(archiver): validate historical L1 log availability at startup (#22644)
fix(archiver): do not query MessageSent events by blockhash (#22641)
refactor(e2e): skip initial sequencer in p2p and epochs tests (#22535)
fix: handle missing L1 finalized block on devnets (#22663)
fix(world-state): treat historical block 0 queries as historical, not latest (#22679)
fix(sequencer): re-check parent checkpoint validity before pipelined L1 submission (#22586)
fix(world-state): make block 0 a first-class historical block (#22711)
chore: show all running versions (#22376)
chore: fix prettier inside worktrees (#22557)
feat: use optimized verifier for rollup (#21840)
fix(kv-store): skip pool creation on ephemeral deleteDb to unstick browser tests (#22693)
chore: rm claude lockfile (#22718)
fix(e2e): wait for first checkpoint in fee_asset_price_oracle_gossip test (#22719)
chore(prover-node): track estimated L1 fee when proof publishing is disabled (#22691)
fix(ci): rerun squashed PR check on base branch change (#22713)
feat(archiver): decouple calldata from blob fetching in L1 synchronizer (#22716)
refactor(e2e): enable pipelining in e2e_epochs tests (#22544)
feat(p2p): reject and evict txs with insufficient max fee per gas (#22118)
refactor(world-state): always index block 0 regardless of initial tree size (#22724)
fix(e2e): fix redistribution test (#22729)
END_COMMIT_OVERRIDE