Conversation
… setups Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Wire `peerFailedBanTimeMs` as a new env var and reduce the tx collector test ban time from 5 minutes to 5 seconds. The test would flake on timeout: aggregating peers took a full minute per subtest without ever obtaining all peers. Peer dialing is serialized and limited to 5 for this test, so peers may dial repeatedly without success, get banned for 5 minutes, and never manage to reconnect within the 1-minute wait. With the shorter ban all peers can connect in time and the 1-minute timeout can be lowered, reducing timeouts overall for the test.
#21605)

## Motivation

When `VALIDATOR_MAX_TX_PER_BLOCK` is not set but `VALIDATOR_MAX_TX_PER_CHECKPOINT` is, the gossip-level proposal validator enforces no per-block transaction limit at all. A single block can't have more transactions than the entire checkpoint allows, so the checkpoint limit is a valid upper bound for per-block validation.

## Approach

Use `validateMaxTxsPerCheckpoint` as a fallback when `validateMaxTxsPerBlock` is not set in the proposal validator construction. This applies at both construction sites: the P2P libp2p service (gossip validation) and the validator-client factory (block proposal handler).

## Changes

- **p2p**: Added `validateMaxTxsPerCheckpoint` to `P2PConfig` interface and config mappings (reads from `VALIDATOR_MAX_TX_PER_CHECKPOINT` env var)
- **p2p (libp2p_service)**: Use `validateMaxTxsPerBlock ?? validateMaxTxsPerCheckpoint` when constructing proposal validators
- **validator-client (factory)**: Same fallback when constructing the `BlockProposalValidator`

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
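The fallback can be sketched with nullish coalescing. The config field names below come from the PR description, but the surrounding function shape is an assumption for illustration:

```typescript
// Sketch of the fallback (field names from the PR; function shape assumed).
interface ProposalValidatorConfig {
  validateMaxTxsPerBlock?: number;
  validateMaxTxsPerCheckpoint?: number;
}

function effectiveMaxTxsPerBlock(cfg: ProposalValidatorConfig): number | undefined {
  // The per-block limit wins when set; otherwise the checkpoint limit is a
  // valid upper bound, since one block can't exceed the whole checkpoint.
  return cfg.validateMaxTxsPerBlock ?? cfg.validateMaxTxsPerCheckpoint;
}
```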
# Fix: ARM64 Mac (M3) Devcontainer Build Failures

## Problem

Building inside a devcontainer on a Mac with an Apple M3 chip fails in multiple ways:

1. **SIGILL crashes** — The `bb-sol` build step crashes when running `honk_solidity_key_gen`, and E2E tests fail with `Illegal instruction` errors.
2. **Rust compilation failures** — The `noir` build fails with `can't find crate for serde` and similar errors when noir and avm-transpiler build in parallel, racing on the shared `CARGO_HOME`.

## Root Cause

### SVE instructions from zig `-target native`

1. CI runs on **AWS Graviton** (ARM64 with SVE vector extensions)
2. The zig compiler wrapper uses `-target native-linux-gnu.2.35`, which on Graviton enables **SVE instructions**
3. Mac M3 devcontainer (ARM64 **without SVE**) downloads the same cached binaries
4. Binaries contain SVE opcodes (e.g. `0x04be4000`) that Apple Silicon can't execute → **SIGILL**

Cache keys already include architecture via `cache_content_hash` (which appends `$OSTYPE-$(uname -m)`), so amd64 vs arm64 caches never collide. The problem is specifically that two ARM64 machines (Graviton with SVE vs Apple Silicon without SVE) share the same architecture tag but have different CPU feature sets. The fix is to stop emitting CPU-specific instructions in the first place.

### Parallel Rust build race condition

The top-level bootstrap runs `noir` and `avm-transpiler` builds in parallel. Both invoke `cargo build`, and both share the same `CARGO_HOME` (`~/.cargo`), which contains the crate registry and download cache. When both cargo processes run concurrently, they race on shared registry state, causing downstream crates (e.g. `serde-big-array`, `ecdsa`) to fail with `can't find crate` errors during compilation. This does not happen on CI where builds are cached, only on local fresh builds (e.g. `NO_CACHE=1`).

## Fixes

### 1. Zig compiler wrappers: explicit ARM64 target

**Files:** `barretenberg/cpp/scripts/zig-cc.sh`, `barretenberg/cpp/scripts/zig-c++.sh`

Changed `-target native-linux-gnu.2.35` to use explicit `aarch64-linux-gnu.2.35` on ARM64 Linux. This produces generic ARM64 code without CPU-specific extensions (SVE, etc.), ensuring binaries work on all ARM64 machines — Graviton, Apple Silicon, Ampere, etc. x86_64 behavior is unchanged (still uses `native`).

### 2. Extract native_cache_key variable in barretenberg bootstrap

**File:** `barretenberg/cpp/bootstrap.sh`

Extracted the repeated cache key pattern `barretenberg-$native_preset-$hash` into a single `native_cache_key` variable, used by `build_native_objects`, `build_native`, and related functions. Pure refactor, no change in cache key values.

### 3. Better error handling in init_honk.sh

**File:** `barretenberg/sol/scripts/init_honk.sh`

Added `set -eu` so the script fails immediately on error instead of silently continuing after SIGILL. Added an existence check for the `honk_solidity_key_gen` binary with a clear error message.

### 4. Serialize parallel cargo builds with flock

**Files:** `noir/bootstrap.sh`, `avm-transpiler/bootstrap.sh`

Both scripts wrap their `cargo build` invocations with `flock -x 200` on a shared lock file (`/tmp/rustup.lock`):

```bash
(
  flock -x 200
  cd noir-repo && cargo build --locked --release --target-dir target
) 200>/tmp/rustup.lock
```

This acquires an exclusive file lock before running cargo, so if both `noir` and `avm-transpiler` builds run in parallel, one waits for the other to finish. The lock is automatically released when the subshell exits. This eliminates the `CARGO_HOME` race condition without requiring changes to the top-level parallelism.

## Notes

### E2E Tests

The E2E test failures (SIGKILL from invalid instructions) have the same root cause as the SIGILL crashes — the `bb` binary used by tests was from the SVE-contaminated cache. After rebuilding with these fixes, E2E tests work.
---------

Co-authored-by: Aztec Bot <49558828+AztecBot@users.noreply.github.com>
Co-authored-by: ludamad <adam.domurad@gmail.com>
PR #21597 increased the finalized block lookback from `epochDuration*2` to `epochDuration*2*4`. This caused the finalized block number to jump backwards past blocks that had already been pruned from world-state, causing `advance_finalized_block` to fail with 'Failed to read block data'.

Two fixes:
1. TypeScript: clamp `blockNumber` to `oldestHistoricalBlock` before calling `setFinalized`, so we never request a pruned block.
2. C++: reorder checks in `advance_finalized_block` to check the no-op condition (already finalized past this block) before attempting to read block data. This makes the native layer resilient to receiving a stale finalized block number.
…uned blocks

Tests that `handleBlockStreamEvent` with `chain-finalized` for a block older than the oldest available block does not throw, validating the clamping fix in `handleChainFinalized`.
Calling `Array.from({length})` eagerly allocates an array of `length`
elements. We were calling this method during deserialization of
untrusted input.

This PR changes it to use `new Array(size)` for untrusted input. A
bit less efficient, but more secure.
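A minimal sketch of the safer pattern, with an assumed reader callback (the actual deserializer API is not shown in this PR): `Array.from({ length: n })` materializes `n` slots up front even when `n` is attacker-controlled, while `new Array(n)` starts sparse, so a bogus huge length fails on the first short read instead of allocating first.

```typescript
// Hypothetical deserializer helper (reader API assumed for illustration).
function readVector<T>(size: number, readItem: () => T): T[] {
  const out = new Array<T>(size); // sparse: no per-element storage yet
  for (let i = 0; i < size; i++) {
    out[i] = readItem(); // a truncated input throws here before memory blows up
  }
  return out;
}
```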
## Summary

PR #21597 increased the finalized block lookback from `epochDuration*2` to `epochDuration*2*4`, which caused the finalized block number to jump backwards past blocks already pruned from world-state. The native `advance_finalized_block` then failed trying to read pruned block data, crashing the block stream with:

```
Error: Unable to advance finalized block: 15370. Failed to read block data. Tree name: NullifierTree
```

Two fixes:

- **TypeScript** (`server_world_state_synchronizer.ts`): Clamp the finalized block number to `oldestHistoricalBlock` before calling `setFinalized`, so we never request a pruned block.
- **C++** (`cached_content_addressed_tree_store.hpp`): Reorder checks in `advance_finalized_block` to check the no-op condition (`finalizedBlockHeight >= blockNumber`) before attempting `read_block_data`. This makes the native layer resilient to stale finalized block numbers.

Full analysis: https://gist.github.com/AztecBot/6221fb074ed7bbd8a753ec3602133b42
ClaudeBox log: https://claudebox.work/s/8e97449f22ba9343?run=1
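The TypeScript-side clamp reduces to one line; the helper name below is hypothetical, only the field names come from the PR:

```typescript
// Sketch: never ask world-state to finalize a block older than the oldest
// block it still holds (helper name is illustrative).
function clampFinalized(blockNumber: number, oldestHistoricalBlock: number): number {
  return Math.max(blockNumber, oldestHistoricalBlock);
}
```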
Correlate script by trace ID.
When the finalized block jumps backwards past pruned state, return early instead of clamping and continuing into the pruning logic. The previous clamping fix avoided the setFinalized error but then removeHistoricalBlocks would fail trying to prune to a block that is already the oldest. Also guard removeHistoricalBlocks against being called with a block number that is not newer than the current oldest available block.
… setups (#21603)

## Motivation

In an HA setup, two nodes (A and B) share the same validator keys. When node A proposes a block, node B receives it via gossipsub but ignores it because `validateBlockProposal` detects the proposer address matches its own validator keys and returns early. This means node B never re-executes the block, never pushes it to its archiver, and falls behind the proposed chain.

Additionally, both HA peers independently try to build and propose blocks for the same slot. If the losing peer commits its block to the archiver before signing fails, it ends up with a stale block that prevents it from accepting the winning peer's proposal.

## Approach

Three changes work together to fix HA proposed chain sync:

1. **Remove self-filtering**: Remove the early return in `validateBlockProposal` for self-proposals, letting them flow through the normal re-execution path so the HA peer pushes the winning block to its archiver.
2. **Sign before syncing to archiver**: Reorder the checkpoint proposal job so that non-last blocks are signed via `createBlockProposal` *before* being synced to the archiver. If the shared slashing protection DB rejects signing (because the HA peer already signed), the block is never added to the archiver, keeping it clean to accept the winning peer's block via gossipsub.
3. **Shared slashing protection for testing**: Add `createSharedSlashingProtectionDb` (backed by a shared LMDB store) and `createSignerFromSharedDb` factories, and thread an optional `slashingProtectionDb` through the validator creation chain. This allows e2e tests to simulate HA signing coordination without PostgreSQL.

## Changes

- **validator-client**: Remove self-proposal filtering in `validateBlockProposal`. Add optional `slashingProtectionDb` parameter to `ValidatorClient.new` and the `createValidatorClient` factory for injecting a shared slashing protection DB.
- **validator-client (tests)**: Add unit test verifying block proposals signed with the validator's own key are processed and forwarded to `handleBlockProposal`.
- **sequencer-client**: Reorder `checkpoint_proposal_job` so non-last blocks call `createBlockProposal` before `syncProposedBlockToArchiver`. If signing fails (HA signer rejects), the block is never added to the archiver.
- **validator-ha-signer**: Add `createSharedSlashingProtectionDb` and `createSignerFromSharedDb` factory functions for testing HA setups with a shared in-memory LMDB store.
- **aztec-node**: Thread `slashingProtectionDb` through `AztecNodeService.createAndSync` deps.
- **end-to-end**: Add `epochs_ha_sync` e2e test with 4 nodes in 2 HA pairs (each pair sharing validator keys and a slashing protection DB), different coinbase addresses per node, MBPS enabled, checkpoint publishing disabled. Asserts all 4 nodes converge on the same proposed block hash before any checkpoint is published.

Fixes A-675
…21656)

## Summary

Follow-up to #21643. The clamping fix avoided the `setFinalized` error, but the method continued into the pruning logic where `removeHistoricalBlocks` failed with:

```
Unable to remove historical blocks to block number 15812, blocks not found. Current oldest block: 15812
```

Two changes:

- When the finalized block is older than `oldestHistoricalBlock`, return early instead of clamping and continuing. There's nothing useful to do — world-state is already finalized past this point.
- Guard `removeHistoricalBlocks` against being called with a block `<= oldestHistoricalBlock`, which the C++ layer rejects.

The C++ reorder fix from #21643 is preserved.

ClaudeBox log: https://claudebox.work/s/8e97449f22ba9343?run=4
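The two guards can be sketched as follows; the function shapes and parameter names are assumptions, only the conditions come from the PR:

```typescript
// Sketch of the early return: if the finalized block is behind the oldest
// available block, skip entirely rather than clamp and continue into pruning.
function shouldHandleChainFinalized(finalized: number, oldestHistoricalBlock: number): boolean {
  return finalized >= oldestHistoricalBlock; // false => world-state already finalized past this point
}

// Sketch of the prune guard: the C++ layer rejects targets <= the current oldest.
function canRemoveHistoricalBlocks(target: number, oldestHistoricalBlock: number): boolean {
  return target > oldestHistoricalBlock;
}
```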
## Summary

Demotes the "Finalized block X is older than oldest available block Y. Skipping." log from `warn` to `trace`. This message fires on every block stream tick while the finalized block is behind the oldest available, filling up operator logs on deployed networks.

ClaudeBox log: https://claudebox.work/s/8e97449f22ba9343?run=6
## Summary

Fixes CI failure on merge-train/spartan caused by `-march=skylake` being injected into aarch64 cross-compilation builds (arm64-android, arm64-ios, arm64-macos).

**Root cause:** The `arch.cmake` auto-detection added in #21611 defaults `TARGET_ARCH` to `skylake` when `ARM` is not detected. Cross-compile presets (ios, android) don't set `CMAKE_SYSTEM_PROCESSOR`, so ARM detection fails and `-march=skylake` gets passed to aarch64 Zig builds — which errors with `unknown CPU: 'skylake'`. For arm64-macos, `-march=generic` overrides Zig's `-mcpu=apple_a14`, breaking libdeflate.

**Fix:** Gate auto-detection on `NOT CMAKE_CROSSCOMPILING`. Cross-compile toolchains handle architecture targeting via their own flags (e.g. Zig `-mcpu`). Presets that explicitly set `TARGET_ARCH` (amd64-linux, arm64-linux) are unaffected. Also restores the `native_build_dir` variable dropped in the build infrastructure refactor.

## Test plan

- Verified all cross-compile presets (arm64-android, arm64-ios, arm64-ios-sim, arm64-macos, x86_64-android) configure with zero `-march` flags
- Verified native presets (default, amd64-linux, arm64-linux) still get correct `-march` values
The pool should never reject a tx that passed validation. However, in case it does, we now add a warning and penalize the peer that sent us the invalid tx.
Attestation validation is handled in `validateAndStoreCheckpointAttestation`.
Brings down test time for this suite from minutes to 2s. Seems to be caused by [this issue](marchaos/jest-mock-extended#128). Unfortunately, `jest-mock-extended` looks unmaintained.
…slots (#21692)

## Motivation

Three bugs in how per-block gas/tx limits are computed and enforced during checkpoint building made the redistribution logic ineffective in multi-block-per-slot mode:

1. Config `maxBlocksPerCheckpoint` was not propagated to the checkpoint builder, so `remainingBlocks` always defaulted to 1 — making redistribution a no-op.
2. The static per-block limit computed in the sequencer-client at startup always equaled the first-block fair share, so redistribution could only tighten, never relax — later blocks couldn't use surplus budget from light early blocks.
3. Redistribution ran during validator re-execution with the proposer's multiplier logic, causing potential false rejections.

## Approach

Delete the sequencer's `computeBlockLimits` — the checkpoint builder now derives per-block limits dynamically from checkpoint-level budgets. Move `maxBlocksPerCheckpoint` and `perBlockAllocationMultiplier` out of config into `BlockBuilderOptions` (passed from the sequencer's timetable at build time). Split behavior on `isBuildingProposal`: proposers get redistribution with the multiplier; validators only cap by per-block limit + remaining checkpoint budget (no fair-share).

Introduce `BlockBuilderOptions` as a discriminated union type: when `isBuildingProposal: true`, redistribution params (`maxBlocksPerCheckpoint`, `perBlockAllocationMultiplier`) are required; when `false`, they're absent. This makes it a compile-time error to forget redistribution params during proposal building or to accidentally include them during validation.

## Changes

- **stdlib**: Split `PublicProcessorLimits` (processor-only fields) from `BlockBuilderOptions` (discriminated union with proposer/validator branches). Remove `maxBlocksPerCheckpoint` from `SequencerConfig`. Make `perBlockAllocationMultiplier` required on `ResolvedSequencerConfig`.
- **sequencer-client**: Delete `computeBlockLimits`. Simplify `SequencerClient.new` to cap operator overrides at checkpoint limits. Pass `maxBlocksPerCheckpoint` and `perBlockAllocationMultiplier` via opts in `CheckpointProposalJob`.
- **validator-client**: Rewrite `capLimitsByCheckpointBudgets` — first cap by remaining budget (always), then further cap by fair share only when proposing. Validator re-execution no longer applies redistribution.
- **slasher**: Update `epoch_prune_watcher` buildBlock call to use the new opts shape.
- **validator-client (tests)**: Update tests to pass redistribution params via opts. Remove redundant tests. Add `validatorOpts`/`proposerOpts` helpers.
- **end-to-end**: Add e2e test verifying redistribution allows late txs to fit in the last block, and a second test verifying validators accept blocks built with a larger proposer multiplier.
- **validator-client (README)**: Update block building limits documentation.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
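A minimal sketch of the discriminated union described above. The discriminant and redistribution field names come from the PR; the exact field types and the helper function are assumptions:

```typescript
// Sketch: when isBuildingProposal is true, redistribution params are required;
// when false, the type forbids them. Omitting them on the proposer branch (or
// adding them on the validator branch) is a compile-time error.
type BlockBuilderOptions =
  | {
      isBuildingProposal: true;
      maxBlocksPerCheckpoint: number;
      perBlockAllocationMultiplier: number;
    }
  | { isBuildingProposal: false };

// Illustrative consumer: only proposers see a remaining-blocks count > 1,
// since only they redistribute surplus budget.
function remainingBlocks(opts: BlockBuilderOptions, builtSoFar: number): number {
  return opts.isBuildingProposal
    ? Math.max(1, opts.maxBlocksPerCheckpoint - builtSoFar)
    : 1;
}
```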
## Summary

- Replace the unbounded `Promise.all` in `handleEpochPrune` with a sequential for-loop
- Prevents memory pressure when the gap between local pending and proven checkpoint numbers is large

Fixes https://linear.app/aztec-labs/issue/A-690

🤖 Generated with [Claude Code](https://claude.com/claude-code)
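The shape of the change, sketched with assumed names (the real unwind callback and checkpoint range come from the node's state): unwind one checkpoint at a time so only one operation is in flight, instead of launching the whole range at once with `Promise.all`.

```typescript
// Sketch: sequential unwinding bounds memory when the pending/proven gap is large.
async function unwindSequentially(
  from: number,
  to: number,
  unwindOne: (n: number) => Promise<void>,
): Promise<void> {
  for (let n = from; n > to; n--) {
    await unwindOne(n); // each unwind completes before the next starts
  }
}
```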
)

## Summary

PR #21692 added a required 4th `opts: BlockBuilderOptions` parameter to `CheckpointBuilder.buildBlock()`, but three call sites in the test file were not updated, causing `TS2554: Expected 4 arguments, but got 3`. Adds `validatorOpts()` as the 4th argument to the three affected calls (lines 174, 177, 187).

## Test plan

- All 30 tests in `checkpoint_builder.test.ts` pass
- `yarn tsgo -b --emitDeclarationOnly` passes with no errors

ClaudeBox log: https://claudebox.work/s/2cad3714097b4ca5?run=1
Ref: A-513

- Replaces `MissingTxsTracker` with `RequestTracker`, which unifies missing tx tracking, deadline management, and cancellation signaling into a single object
- Ensures cancellation propagates from the deepest stack level upward: inner workers and node loops settle before `collectFast` returns (no orphaned promises)
- Makes the node loop's inter-retry sleep interruptible by racing against `cancellationToken`
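The interruptible sleep can be sketched as a race between a timer and a cancellation promise. The function name and the promise-based cancellation signal are assumptions for illustration; the PR's actual `cancellationToken` API is not shown here.

```typescript
// Sketch: resolves after `ms`, or earlier if `cancelled` settles first, so a
// node loop doesn't sit out its full inter-retry backoff after cancellation.
function interruptibleSleep(ms: number, cancelled: Promise<void>): Promise<void> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const sleep = new Promise<void>(resolve => {
    timer = setTimeout(resolve, ms);
  });
  return Promise.race([sleep, cancelled]).finally(() => {
    if (timer !== undefined) clearTimeout(timer); // don't leak the timer on early exit
  });
}
```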
## Summary

The HA slashing protection Postgres DB on staging-public ran out of disk space (1Gi default), causing sequencers to stop producing blocks. This increases the default PVC size to 10Gi in both the Helm chart defaults and the Terraform module variable.

## Changes

- `spartan/aztec-postgres/values.yaml`: persistence size 1Gi → 10Gi
- `spartan/terraform/modules/validator-ha-postgres/variables.tf`: STORAGE_SIZE default 1Gi → 10Gi

**Note:** Existing PVCs will need to be manually resized or recreated — this only affects new deployments.

ClaudeBox log: https://claudebox.work/s/4e6dbeb8dfd49038?run=2
## Summary

- Three tests in the "Smart peer demotion" describe block used the removed `MissingTxsTracker` class and an old `BatchTxRequester` constructor signature that included `deadline` as a separate argument.
- Updated them to use `RequestTracker.create()` (which wraps the deadline into a `Date`) and the current constructor signature, matching all other tests in the file.

## Context

PR #21496 refactored `BatchTxRequester` to take an `IRequestTracker` (which owns the deadline) instead of a separate deadline parameter, and removed `MissingTxsTracker`. Three tests in the "Smart peer demotion" section were not updated, causing `tsgo` type-check failures on `merge-train/spartan`.

ClaudeBox log: https://claudebox.work/s/5d20c8f4f47c8f3a?run=1
…21744)

## Motivation

`BOOTSTRAP_TO=yarn-project ./bootstrap.sh` was used in several places to build up to yarn-project, but the env var is no longer read by any code — it became dead after the Makefile introduction. Running it just runs a full `./bootstrap.sh`, ignoring the variable entirely.

## Approach

Replace all occurrences with `./bootstrap.sh build yarn-project`, which calls `prep` (submodule update + toolchain checks) then `make yarn-project`.

## Changes

- **bootstrap.sh**: Replace in `ci-docs` case
- **container-builds/avm-fuzzing-container/src/Dockerfile**: Replace in build step
- **yarn-project/CLAUDE.md**: Update developer instructions
- **.claude/skills/{backport,fix-pr,rebase-pr}**: Update skill instructions
… (A-677) (#21747)

## Motivation

The `sendRequests` method in the sequencer publisher correctly filters L1 publish requests by `lastValidL2Slot` to discard expired ones. However, gas and blob configs were extracted from the unfiltered request list, meaning expired requests' gas configurations leaked into the aggregated gas limit calculation. This could over-estimate gas and overpay for L1 transactions.

Fixes A-677

## Approach

Changed the gas and blob config extraction to use the filtered `validRequests` list instead of the unfiltered `requestsToProcess` list, so only non-expired requests contribute to the aggregated gas limit.

## Changes

- **sequencer-client**: Use `validRequests` instead of `requestsToProcess` when extracting `gasConfigs` and `blobConfigs` in `sendRequests`
- **sequencer-client (tests)**: Added test verifying that expired requests' gas configs are excluded from the aggregated gas limit

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
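The bug pattern reduces to deriving configs from the wrong list. A sketch with an assumed request shape (only `lastValidL2Slot` and the valid/unfiltered distinction come from the PR):

```typescript
// Hypothetical request shape for illustration.
interface PublishRequest {
  lastValidL2Slot: bigint;
  gasLimit: bigint;
}

// Sketch: aggregate gas only over the filtered list, so expired requests
// can't inflate the limit.
function aggregatedGasLimit(requestsToProcess: PublishRequest[], currentSlot: bigint): bigint {
  const validRequests = requestsToProcess.filter(r => r.lastValidL2Slot >= currentSlot);
  return validRequests.reduce((sum, r) => sum + r.gasLimit, 0n);
}
```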
## Summary

- Use deterministic BN254 secret keys instead of `Fr.random()` to eliminate randomness in committee ordering / proposer selection
- Guard `teardown?.()` in afterAll to prevent `TypeError: teardown is not a function` when beforeAll times out
- Increase jest timeout from 300s to 540s as a safety margin

## Root Cause

The test stakes 4 validators on L1 but only loads 3 into the keystore. When the RANDAO seed (derived from random BN254 keys) causes the missing 4th validator to be selected as proposer for consecutive slots, the sequencer cannot produce blocks. With 72s per L2 slot and a 300s timeout (~4 slot opportunities), there's a ~1/256 chance all slots have the wrong proposer, causing a timeout.

The secondary `TypeError: teardown is not a function` error occurs because `teardown` is never assigned when `beforeAll` times out before `setup()` returns.

## Test plan

- CI should pass — the deterministic keys produce a predictable committee ordering, and the increased timeout provides additional margin.

ClaudeBox log: https://claudebox.work/s/91bb3fd09c0c7f41?run=1
## Summary

- Added `skipPushProposedBlocksToArchiver: true` to both malicious node configs in `duplicate_attestation_slash.test.ts`
- Without this flag, the second malicious node receives the first's block proposal via mock gossipsub and yields instead of building its own block, preventing the equivocation scenario needed for offense detection
- This matches the pattern already used in `duplicate_proposal_slash.test.ts`, which has the same flag with the comment: "Prevent HA peer proposals from being added to the archiver, so both malicious nodes build their own blocks instead of one yielding to the other"

## Test plan

- [x] `./bootstrap.sh build yarn-project` passes
- [ ] CI runs `duplicate_attestation_slash.test.ts` successfully

Detailed analysis: https://gist.github.com/AztecBot/000ee6113d23edae3fae601304654698
ClaudeBox log: https://claudebox.work/s/4d7ac2d20eb5f07b?run=1
## Overview

Epoch cache operations now return two views: the current slot and the pipelined slot/epoch.

## Testing

The main test showcasing this functionality is `epoch_mbps.pipeline`, which runs the sequencers with pipeline mode enabled. At the moment it only expects 1 block per slot, as it still waits until the proposal slot to send the checkpoint to L1. For this PR it uses a blocking sleep here, which stops all sequencers for this test. This is addressed in a PR stacked on top.
BEGIN_COMMIT_OVERRIDE
fix(p2p): fall back to maxTxsPerCheckpoint for per-block tx validation (#21605)
chore: fixing M3 devcontainer builds (#21611)
fix: clamp finalized block to oldest available in world-state (#21643)
chore: fix proving logs script (#21335)
fix: (A-649) tx collector bench test (#21619)
fix(validator): process block proposals from own validator keys in HA setups (#21603)
fix: add bounds when allocating arrays in deserialization (#21622)
fix: skip handleChainFinalized when block is behind oldest available (#21656)
chore: demote finalized block skip log to trace (#21661)
fix: skip -march auto-detection for cross-compilation presets (#21356)
chore: revert "add bounds when allocating arrays in deserialization" (#21622) (#21666)
fix: capture txs not available error reason in proposal handler (#21670)
fix: estimate gas in bot and make BatchCall.simulate() return SimulationResult (#21676)
fix: prevent HA peer proposals from blocking equivocation in duplicate proposal test (#21673)
fix(p2p): penalize peers for errors during response reading (#21680)
feat(sequencer): add build-ahead config and metrics (#20779)
chore: fixing build on mac (#21685)
fix: HA deadlock for last block edge case (#21690)
fix: process all contract classes in storeBroadcastedIndividualFunctions (A-683) (#21686)
chore: add slack success post on nightly scenario (#21701)
fix(builder): persist contractsDB across blocks within a checkpoint (#21520)
fix: only delete logs from rolled-back blocks, not entire tag (A-686) (#21687)
chore(p2p): lower attestation pool per-slot caps to 2 (#21709)
chore(p2p): remove unused method (#21678)
fix(p2p): penalize peer on tx rejected by pool (#21677)
fix(test): workaround slow mock creation (#21708)
fix(sequencer): fix checkpoint budget redistribution for multi-block slots (#21692)
fix: batch checkpoint unwinding in handleEpochPrune (A-690) (#21668)
fix(sequencer): add missing opts arg to checkpoint_builder tests (#21733)
fix: race condition in fast tx collection (#21496)
fix: increase default postgres disk size from 1Gi to 10Gi (#21741)
fix: update batch_tx_requester tests to use RequestTracker (#21734)
chore: replace dead BOOTSTRAP_TO env var with bootstrap.sh build arg (#21744)
fix(sequencer): extract gas and blob configs from valid requests only (A-677) (#21747)
fix: deflake attempt for l1_tx_utils (#21743)
fix(test): fix flaky keystore reload test (#21749)
fix(test): fix flaky duplicate_attestation_slash test (#21753)
feat(pipeline): introduce pipeline views for building (#21026)
END_COMMIT_OVERRIDE