
fix: handle empty or malformed logs during message processing #21192

Merged
nventuro merged 7 commits into merge-train/fairies from fix/decrypt-returns-none-instead-of-panicking
Mar 13, 2026
Conversation

@nchamo
Contributor

@nchamo nchamo commented Mar 5, 2026

Problem

AES128::decrypt in aes128.nr currently panics at 5 points when given malformed ciphertext or wrong-key data. During message discovery (do_sync_state), a panic in decrypt crashes the entire sync process rather than gracefully skipping the unprocessable log.

Additionally, do_sync_state itself panics when it encounters empty logs, since it unconditionally indexes into the log without checking its length first.

Panic points in decrypt

  1. Empty ciphertext -- ciphertext.get(0) panics when the BoundedVec is empty.
  2. Short header plaintext -- header_plaintext.get(0) / .get(1) panic when the AES decrypt oracle returns fewer than 2 bytes (e.g. wrong-key PKCS#7 stripping produces an empty result).
  3. Invalid ciphertext length -- BoundedVec::from_parts(ciphertext_with_padding, ciphertext_length) panics when the 2-byte header decodes to a length exceeding MESSAGE_PLAINTEXT_SIZE_IN_BYTES (e.g. 65535 from corrupted data).
  4. Invalid plaintext length -- fields_from_bytes asserts bytes.len() % 32 == 0, panicking when the decrypted body has a non-aligned length (e.g. 33 bytes).
  5. Field overflow -- fields_from_bytes asserts each 32-byte chunk fits within the BN254 field modulus, panicking when decrypted bytes exceed it (e.g. 0xFF repeated 32 times).
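Taken together, the fix turns each of these panic points into an early `Option::none()`. A minimal Rust sketch of the same control flow follows; the constant, the stubbed AES step, and the function name are illustrative assumptions, not the actual aztec-nr API:

```rust
// Rust analogue of the hardened decrypt flow: every former panic point
// becomes an early `None`. MESSAGE_PLAINTEXT_SIZE_IN_BYTES and the stubbed
// "decrypt" step are illustrative, not the real implementation.
const MESSAGE_PLAINTEXT_SIZE_IN_BYTES: usize = 1024;

fn try_decrypt(ciphertext: &[u8]) -> Option<Vec<u8>> {
    // 1. Empty ciphertext: indexing byte 0 used to panic.
    let _eph_pk_first_byte = *ciphertext.first()?;

    // Stand-in for the AES decrypt oracle; here we just take the first bytes.
    let header_plaintext: Vec<u8> = ciphertext.iter().take(2).copied().collect();

    // 2. Short header plaintext: need at least 2 bytes for the length header.
    if header_plaintext.len() < 2 {
        return None;
    }

    // 3. Invalid ciphertext length: the 2-byte big-endian header must decode
    //    to a length within bounds (e.g. 0xFFFF = 65535 is rejected).
    let len = u16::from_be_bytes([header_plaintext[0], header_plaintext[1]]) as usize;
    if len > MESSAGE_PLAINTEXT_SIZE_IN_BYTES {
        return None;
    }

    // 4./5. Alignment and field-range checks on the body follow the same
    //    pattern via a fallible conversion from bytes to fields.
    Some(header_plaintext)
}
```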

Panic in do_sync_state

do_sync_state processes every pending tagged log without validating its size. It would panic when pending_tagged_log.log.get(0) was called on an empty log.
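The fix here can be sketched as a simple length guard that skips (and warns about) empty logs. A Rust analogue, with the `Log` struct and warning text as hypothetical stand-ins:

```rust
// Rust analogue of the empty-log guard in do_sync_state: instead of indexing
// into the log unconditionally (which panicked on empty logs), skip the log
// and emit a warning. The `Log` struct here is a hypothetical stand-in.
struct Log {
    fields: Vec<u64>,
}

fn process_pending_logs(pending: &[Log]) -> usize {
    let mut processed = 0;
    for log in pending {
        if log.fields.is_empty() {
            // Previously `log.fields[0]` here would have panicked.
            eprintln!("Skipping empty log");
            continue;
        }
        let _tag = log.fields[0]; // safe: checked non-empty above
        processed += 1;
    }
    processed
}
```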

Important for reviewers

I recommend enabling the "hide whitespace" option when reviewing the file changes, since it makes the diff easier to understand.

Fixes F-356
Fixes F-191

/// order to perform validation, store results, etc. For example, messages containing notes require knowledge of note
/// hashes and the first nullifier in order to find the note's nonce.
#[derive(Deserialize, Eq)]
#[derive(Deserialize, Eq, Serialize)]
Contributor Author
@nchamo nchamo Mar 5, 2026

Needed for the do_sync_state test

#[derive(Deserialize, Eq)]
#[derive(Deserialize, Eq, Serialize)]
pub(crate) struct PendingTaggedLog {
pub log: BoundedVec<Field, PRIVATE_LOG_SIZE_IN_FIELDS>,
Contributor Author
@nchamo nchamo Mar 5, 2026

Needed for the do_sync_state test

@nchamo nchamo self-assigned this Mar 6, 2026
@nchamo nchamo force-pushed the fix/decrypt-returns-none-instead-of-panicking branch from 8d1b7b8 to ba3b667 Compare March 6, 2026 01:13
recipient: AztecAddress,
) -> Option<BoundedVec<Field, MESSAGE_PLAINTEXT_LEN>> {
let eph_pk_x = ciphertext.get(0);
// Extract the ephemeral public key x-coordinate and masked fields, returning None for empty ciphertext.
Contributor Author

Since Noir lacks early return, I thought the best way to make this as clear as possible was to fully embrace Option. Open to other suggestions, though.

Contributor

embrace monads

Contributor

It's a bit annoying that we end up in nested-hell, but it is the cleanest way of handling this I think.

}

#[test]
unconstrained fn decrypt_returns_none_on_empty_ciphertext() {
Contributor Author

All of these scenarios used to cause a panic.

for j in 0..32 {
let next_byte = bytes.get(i * 32 + j);
field = field * 256 + next_byte as Field;
try_fields_from_bytes(bytes).expect(f"Value does not fit in field")
Contributor Author
@nchamo nchamo Mar 6, 2026

I'm now using try_fields_from_bytes to avoid the repeated logic, but let me know if you think the two should coexist (maybe as a form of optimization?)

Contributor

But expect can fail, no? So if the bytes cannot be put into fields we'll panic. Is this not an issue?

Contributor Author

Sorry, I wasn't clear. Both functions can coexist. I didn't delete fields_from_bytes because it is public and some users may be relying on it.

My comment tried (and failed) to imply that having fields_from_bytes use try_fields_from_bytes might not be the most optimized approach, since we do things a little differently to handle the Option instead of panicking. The difference should be negligible, but I wanted to double-check.
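For reference, the coexistence pattern under discussion — the public panicking function kept as a thin wrapper over the fallible one — looks roughly like this. A Rust sketch, with the 32-byte chunking simplified and the field-modulus check omitted:

```rust
// Rust analogue of keeping both APIs: `try_fields_from_bytes` returns None
// on malformed input, and the pre-existing public `fields_from_bytes`
// delegates to it, preserving its old panicking behavior. Only the
// length-alignment check is shown; the modulus check is omitted.
fn try_fields_from_bytes(bytes: &[u8]) -> Option<Vec<[u8; 32]>> {
    if bytes.len() % 32 != 0 {
        return None;
    }
    Some(
        bytes
            .chunks_exact(32)
            .map(|chunk| chunk.try_into().expect("chunks_exact yields 32 bytes"))
            .collect(),
    )
}

fn fields_from_bytes(bytes: &[u8]) -> Vec<[u8; 32]> {
    // Old public API: panics on malformed input, as before.
    try_fields_from_bytes(bytes).expect("Value does not fit in field")
}
```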

@nchamo nchamo changed the title from "fix: make decrypt return Option::none() instead of panicking on malformed input" to "fix: handle empty or malformed logs during message processing" Mar 6, 2026
@nchamo nchamo marked this pull request as ready for review March 6, 2026 13:53
@nchamo nchamo requested a review from nventuro as a code owner March 6, 2026 13:53
@nchamo nchamo requested a review from benesjan March 6, 2026 13:53
Comment on lines +172 to +180
capsules::store(contract_address, base_slot, 1 as Field);
capsules::store(
contract_address,
base_slot + 1,
PendingTaggedLog { log: BoundedVec::new(), context: std::mem::zeroed() },
);

let logs: CapsuleArray<PendingTaggedLog> = CapsuleArray::at(contract_address, base_slot);
assert_eq(logs.len(), 1);
Contributor

Instead of doing things this way, just create the capsule array and push into it.

use crate::protocol::address::AztecAddress;

#[test]
unconstrained fn do_sync_state_can_handle_empty_logs() {
Contributor

Suggested change
unconstrained fn do_sync_state_can_handle_empty_logs() {
unconstrained fn do_sync_state_does_not_panic_on_empty_logs() {

) -> Option<BoundedVec<Field, MESSAGE_PLAINTEXT_LEN>> {
let eph_pk_x = ciphertext.get(0);
// Extract the ephemeral public key x-coordinate and masked fields, returning None for empty ciphertext.
split_ciphertext(ciphertext).and_then(|(eph_pk_x, masked_fields)| {
Contributor

What if this were a lambda?

|ciphertext| {
    if ciphertext.len() > 0 {
        Option::some(
            (ciphertext.get(0), array::subbvec(ciphertext, EPH_PK_X_SIZE_IN_FIELDS)),
        )
    } else {
        Option::none()
    }
}.and_then(|(eph_pk_x, masked_fields)| {

Comment on lines +720 to +721
header_bytes[0] = 0xFF;
header_bytes[1] = 0xFF;
Contributor

Better to have the len be MESSAGE_PLAINTEXT_SIZE_IN_BYTES+1, so that we make sure we're handling the edge case correctly.

ok = true;
/// Non-panicking version of `fields_from_bytes`. Returns `Option::none()` if the input
/// length is not a multiple of 32 or if any 32-byte chunk exceeds the field modulus.
pub fn try_fields_from_bytes<let N: u32>(bytes: BoundedVec<u8, N>) -> Option<BoundedVec<Field, N / 32>> {
Contributor

Suggested change
pub fn try_fields_from_bytes<let N: u32>(bytes: BoundedVec<u8, N>) -> Option<BoundedVec<Field, N / 32>> {
fn try_fields_from_bytes<let N: u32>(bytes: BoundedVec<u8, N>) -> Option<BoundedVec<Field, N / 32>> {

Let's keep API surface small for now.

if bytes.len() % 32 == 0 {
let mut fields = BoundedVec::new();
let p = std::field::modulus_be_bytes();
let mut valid = true;
Contributor

Might read more easily if this body were extracted into a fn; that typically helps with nested Options etc.

Comment on lines +172 to +173
// 0xFF repeated 32 times is larger than the BN254 field modulus.
let input = BoundedVec::<_, 32>::from_array([0xFF as u8; 32]);
Contributor

Better to test with p directly to check the edge case.
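Testing at the modulus boundary, as suggested, exercises the strict `< p` comparison directly. A Rust sketch of that check; the modulus bytes are the BN254 scalar field modulus used by Noir, and the comparison relies on lexicographic order matching numeric order for equal-length big-endian byte strings:

```rust
// Big-endian bytes of the BN254 scalar field modulus p (the Noir Field
// modulus). A 32-byte chunk encodes a valid field element iff chunk < p.
const P_BE: [u8; 32] = [
    0x30, 0x64, 0x4e, 0x72, 0xe1, 0x31, 0xa0, 0x29, 0xb8, 0x50, 0x45, 0xb6,
    0x81, 0x81, 0x58, 0x5d, 0x28, 0x33, 0xe8, 0x48, 0x79, 0xb9, 0x70, 0x91,
    0x43, 0xe1, 0xf5, 0x93, 0xf0, 0x00, 0x00, 0x01,
];

fn chunk_fits_in_field(chunk: &[u8; 32]) -> bool {
    // Lexicographic order on equal-length big-endian bytes equals numeric order.
    chunk < &P_BE
}
```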

@benesjan
Contributor

benesjan commented Mar 8, 2026

Since Nico went through this and the feedback has not yet been addressed, I don't think my review is necessary at this point.

@nchamo please request review again from me if you would like me to chime in.

@benesjan benesjan removed their request for review March 8, 2026 09:55
Comment on lines +130 to +132
} else {
aztecnr_warn_log_format!("Skipping empty log from tx {0}")([pending_tagged_log.context.tx_hash]);
}
Contributor

Extremely minor (and probably late at this stage), but I personally prefer inverting the blocks here, sort of like an early return. Otherwise the dangling block is a bit hard to read.

Comment on lines +386 to +387
Option::some(ciphertext)
.and_then(|ciphertext| {
Contributor

If we already have nesting anyway I don't think this gains us much?

Comment on lines +386 to +396
Option::some(ciphertext)
.and_then(|ciphertext| {
if ciphertext.len() > 0 {
let masked_fields: BoundedVec<Field, MESSAGE_CIPHERTEXT_LEN - EPH_PK_X_SIZE_IN_FIELDS> =
array::subbvec(ciphertext, EPH_PK_X_SIZE_IN_FIELDS);
Option::some((ciphertext.get(0), masked_fields))
} else {
Option::none()
}
})
.and_then(|(eph_pk_x, masked_fields)| {
Contributor

Suggested change
Option::some(ciphertext)
.and_then(|ciphertext| {
if ciphertext.len() > 0 {
let masked_fields: BoundedVec<Field, MESSAGE_CIPHERTEXT_LEN - EPH_PK_X_SIZE_IN_FIELDS> =
array::subbvec(ciphertext, EPH_PK_X_SIZE_IN_FIELDS);
Option::some((ciphertext.get(0), masked_fields))
} else {
Option::none()
}
})
.and_then(|(eph_pk_x, masked_fields)| {
if ciphertext.len() > 0 {
let masked_fields: BoundedVec<Field, MESSAGE_CIPHERTEXT_LEN - EPH_PK_X_SIZE_IN_FIELDS> =
array::subbvec(ciphertext, EPH_PK_X_SIZE_IN_FIELDS);
Option::some((ciphertext.get(0), masked_fields))
} else {
Option::none()
}
.and_then(|(eph_pk_x, masked_fields)| {

Comment on lines +65 to +92
it('should return garbage when decrypting with wrong key', async () => {
const data = randomBytes(32);
const key = randomBytes(16);
const wrongKey = randomBytes(16);
const iv = randomBytes(16);

const ciphertext = await aes128.encryptBufferCBC(data, iv, key);
const result = await aes128.decryptBufferCBC(ciphertext, iv, wrongKey);

// Barretenberg decrypts to garbage, then blindly strips "padding" based on the last
// garbage byte. The result is truncated garbage (often empty if that byte is large).
expect(result).not.toEqual(data);
});

it('should return empty buffer for ciphertext not a multiple of 16', async () => {
const key = randomBytes(16);
const iv = randomBytes(16);
const badCiphertext = randomBytes(17);
const result = await aes128.decryptBufferCBC(badCiphertext, iv, key);
expect(result.length).toBe(0);
});

it('should return empty buffer for empty ciphertext', async () => {
const key = randomBytes(16);
const iv = randomBytes(16);
const result = await aes128.decryptBufferCBC(Buffer.alloc(0), iv, key);
expect(result.length).toBe(0);
});
Contributor

This is insufficient; we need to test that the oracle does not fail. There may be failures later on (e.g. the len of 0 looks scary).

Contributor Author

Since we need to do some oracle work beforehand, I created F-452 to tackle this separately. The idea would be to catch all potential errors during decryption, and have the oracle return an Option

Will leave the tests here though, since we didn't have many 😅

nchamo added 2 commits March 13, 2026 12:55
…crypt-returns-none-instead-of-panicking

# Conflicts:
#	noir-projects/aztec-nr/aztec/src/messages/processing/message_context.nr
@nchamo nchamo requested a review from nventuro March 13, 2026 16:23
Contributor

@nventuro nventuro left a comment


Good work overall


// Extract ciphertext length from header (2 bytes, big-endian)
extract_ciphertext_length(header_plaintext)
.filter(|ciphertext_length| ciphertext_length <= MESSAGE_PLAINTEXT_SIZE_IN_BYTES)
Contributor

This would be a good place to have a Result, so that we can know why the process failed. Oh well.
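The Result variant the reviewer is alluding to would preserve the failure reason instead of collapsing everything into none. A hedged Rust sketch, with the error variants and the `max` parameter as illustrative assumptions:

```rust
// Rust sketch of a Result-based header check: each rejection carries a
// reason. Variant names and the `max` bound are illustrative, not part
// of the PR's actual code.
#[derive(Debug, PartialEq)]
enum DecryptError {
    HeaderTooShort,
    LengthOutOfRange(usize),
}

fn extract_ciphertext_length(header: &[u8], max: usize) -> Result<usize, DecryptError> {
    if header.len() < 2 {
        return Err(DecryptError::HeaderTooShort);
    }
    // 2-byte big-endian length header, as in the decrypt flow above.
    let len = u16::from_be_bytes([header[0], header[1]]) as usize;
    if len > max {
        return Err(DecryptError::LengthOutOfRange(len));
    }
    Ok(len)
}
```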

@nventuro nventuro enabled auto-merge (squash) March 13, 2026 17:47
@nventuro nventuro merged commit 9cc2833 into merge-train/fairies Mar 13, 2026
23 checks passed
@nventuro nventuro deleted the fix/decrypt-returns-none-instead-of-panicking branch March 13, 2026 17:49
github-merge-queue bot pushed a commit that referenced this pull request Mar 13, 2026
BEGIN_COMMIT_OVERRIDE
feat: support emitting messages from utilities (#21422)
fix: handle empty or malformed logs during message processing (#21192)
END_COMMIT_OVERRIDE
@AztecBot
Collaborator

❌ Failed to cherry-pick to v4-next due to conflicts. (🤖) View backport run.

AztecBot pushed a commit that referenced this pull request Mar 16, 2026
AztecBot pushed a commit that referenced this pull request Mar 16, 2026
github-merge-queue bot pushed a commit that referenced this pull request Mar 16, 2026
## Summary
- Reverts the three aes128 decrypt edge-case tests added in #21192
(`wrong key`, `bad ciphertext length`, `empty ciphertext`)

## Test plan
- CI passes without the removed tests

🤖 Generated with [Claude Code](https://claude.com/claude-code)
AztecBot pushed a commit that referenced this pull request Mar 16, 2026
jfecher added a commit that referenced this pull request Mar 19, 2026
Use new replacements from new nargo

revert(foundation): remove aes128 decrypt edge-case tests from #21192

feat: add ETHEREUM_HTTP_TIMEOUT_MS env var for viem HTTP transport (#20919)

- Adds `ETHEREUM_HTTP_TIMEOUT_MS` env var to configure the HTTP timeout
on viem's `http()` transport (default is viem's 10s)
- Introduces `makeL1HttpTransport` helper in `ethereum/src/client.ts` to
centralize the repeated `fallback(urls.map(url => http(url, { batch:
false })))` pattern
- Updates all non-test `createPublicClient` call sites (archiver,
aztec-node, sequencer, prover-node, epoch-cache, blob-client) to use the
helper with the configurable timeout

Users hitting `TimeoutError: The request took too long to respond` on
archiver `eth_getLogs` calls when querying slow or public L1 RPCs.
Viem's default 10s timeout is too short for large log queries and there
was no way to configure it.

- `yarn build` passes
- `yarn format` and `yarn lint` pass
- Set `ETHEREUM_HTTP_TIMEOUT_MS=60000` and confirm the archiver no
longer times out on large `eth_getLogs` ranges

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

fix(archiver): filter tagged log queries by block number (#21388)

Resolves the referenceBlock hash to a block number in the AztecNode and
passes it down as upToBlockNumber so the LogStore stops returning logs
from blocks beyond the client's sync point. Also adds an ordering check
on log insertion to guard against out-of-order appends.

Fixes F-417

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

fix(node): handle slot zero in getL2ToL1Messages (#21386)

If there was a block in L2 slot zero, then `getL2ToL1Messages` returned
an incorrect response, since the `slotNumber !== previousSlotNumber`
would fail in the first iteration of the loop.

feat(sequencer): redistribute checkpoint budget evenly across remaining blocks (#21378)

Update the per-block budgets so that, on every block, the limits are
further adjusted to `remainingCheckpointBudget / remainingBlocks *
multiplier`. This prevents the last blocks from starvation. Also adjusts
the multiplier from 2x to 1.2x.

Using the
https://github.com/AztecProtocol/explorations/tree/main/block-distribution-simulator

No redistribution, 2x multiplier

<img width="1544" height="737" alt="Screenshot From 2026-03-11 15-50-38"
src="https://github.com/user-attachments/assets/fda36d04-5d9e-456a-9ced-4649fa58d724"
/>

Redistribution enabled, 1.2x multiplier

<img width="1544" height="737" alt="Screenshot From 2026-03-11 15-50-49"
src="https://github.com/user-attachments/assets/2bc196f3-77fa-47bf-9294-4eb4199f8f93"
/>

For comparison purposes only, note the lower gas utilization

<img width="1544" height="737" alt="Screenshot From 2026-03-11 15-50-59"
src="https://github.com/user-attachments/assets/0facbc36-65e3-446e-abaf-eb7f637b87c9"
/>

- Adds `SEQ_REDISTRIBUTE_CHECKPOINT_BUDGET` (default: true) to
distribute remaining checkpoint budget evenly across remaining blocks
instead of letting one block consume it all. Fair share per block is
`ceil(remainingBudget / remainingBlocks * multiplier)`, applied to all
four dimensions (L2 gas, DA gas, blob fields, tx count).
- Changes default `perBlockAllocationMultiplier` from 2 to 1.2 for
smoother distribution.
- Wires `maxBlocksPerCheckpoint` from the timetable through to the
checkpoint builder config.

- Existing `capLimitsByCheckpointBudgets` tests pass with
`redistributeCheckpointBudget: false` (old behavior)
- New tests cover: even split with multiplier=1, fair share with
multiplier=1.2, last block gets all remaining, disabled flag falls back
to old behavior, DA gas and tx count redistribution
- `computeBlockLimits` tests updated for new default multiplier and
`maxBlocksPerCheckpoint` return value

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

fix: fall back to package.json for CLI version detection

Removed multiplier config

Removed default snapshot url config

Read tx filestores from network config

fix(node): check world state against requested block hash (#21385)

When requesting world state at a given block hash, double check that the
returned world state is actually at that same block hash. Also check
that world state is synced to the requested block if using block hash.

feat(p2p): use l2 priority fee only for tx priority (#21420)

Simplifies the issue of summing two different rates over two different
magnitudes.

Fixes A-655

feat(p2p): reject and evict txs with insufficient max fee per gas (#21281)

- Previously, `GasTxValidator` returned `skipped` when a tx's
`maxFeesPerGas` was below current block fees, allowing it to wait for
lower fees. This changes it to `invalid`, rejecting the tx outright.
- Extracts `MaxFeePerGasValidator` as a standalone generic validator
(like `GasLimitsValidator`) so it can be used in pool migration
alongside full `Tx` validation.
- Adds `InsufficientFeePerGasEvictionRule` that evicts pending txs after
a new block is mined if their `maxFeesPerGas` no longer meets the
block's gas fees.
- Adds `maxFeesPerGas` to `TxMetaValidationData` so the eviction rule
and pool migration validator can access it from metadata.

**Caveat**: This may evict transactions that would become valid if block
fees later drop. A more nuanced approach would define a threshold (e.g.
50% of current fees) and only reject/evict below that. The current
approach is simpler and ensures the pool doesn't accumulate low-fee txs
unlikely to be mined soon.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

revert "feat(p2p): reject and evict txs with insufficient max fee per gas (#21281)" (#21432)

This reverts commit 9e2d79c7299fffe2d2f165d10e668fa175033af3.

Reduce log spam

Don't update state if we failed to execute sufficient transactions

Use an additional world state fork checkpoint when building blocks

Comment

fix(tx): reject txs with invalid setup when unprotecting (#21224)

When unprotecting txs, we were not running the check for allowed public
setup functions, which was skipped in the reqresp entrypoint.

This commit now tracks whether public setup is allowed or not for a tx
in the tx metadata, and uses it to drop the tx when it becomes
unprotected.

An alternative approach would have been to store the entire public setup
calls in the tx metadata, but this means a smaller memory footprint.

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

fix: orchestrator enqueue yield

chore(builder): check archive tree next leaf index during block building (#21457)

Helps catch errors in world-state sync when building a block.

fix: scenario deployment
- use reasonable timeouts for each subtest in smoke test
- fix tf parse error in network deployment

chore: add claude skill to read network-logs

chore: update claude network-logs skill (#21523)

.

feat(rpc): add package version to RPC response headers (#21526)

The JSON-RPC server already returns `x-aztec-*` headers for protocol
component versions (chain id, rollup address, etc.), but does not
include the node's package version. This makes it harder to identify
which software version a node is running when debugging or monitoring.

Extended `getVersioningMiddleware` to accept an optional
`packageVersion` parameter and emit it as an `x-aztec-packageVersion`
response header alongside existing versioning headers.

```
$ curl -s -D- http://localhost:8090 -H 'content-type: application/json' -d '[{"jsonrpc":"2.0","id":1,"method":"node_getNodeVersion","params":[]}]'
HTTP/1.1 200 OK
Vary: Accept-Encoding, Origin
Access-Control-Allow-Origin: *
Content-Type: application/json; charset=utf-8
x-aztec-l2CircuitsVkTreeRoot: 0x06624799e2080c43aba671cdece34d5f4cdff2122bc6be41d80bd903ca0975cc
x-aztec-l2ProtocolContractsHash: 0x23c2022f0b69b29e354b4f7612d5db3ba0bc4259d71ac9186ed272945a644b9c
x-aztec-packageVersion: 5.0.0
Content-Length: 43
Date: Fri, 13 Mar 2026 16:04:31 GMT
Connection: keep-alive
Keep-Alive: timeout=5

[{"jsonrpc":"2.0","id":1,"result":"5.0.0"}]
```

- **stdlib**: Updated `getVersioningMiddleware` to accept an optional
`opts` bag with `packageVersion`, setting `x-aztec-packageVersion`
header when provided
- **aztec**: Wired `getPackageVersion()` into both the main and admin
RPC server middleware calls

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

chore(prover): silence "epoch to prove" debug logs (#21527)

Removed as not needed.

chore(sequencer): do not log blob data (#21530)

Blob data was too long for log entries to handle. This changes it so we
only log the size of the blobs published.

Before:

```
[13:06:08.794] VERBOSE: sequencer:publisher Published bundled transactions (propose) {"result":{"receipt":{"type":"eip4844","status":"success","cumulativeGasUsed":"385181","logs":[{"address":"0x322813fd9a801c5507c9de605d63cea4f2ce6c44","topics":["0x6ff492bf2b4ca1b93a175167d14b3e46085b935cab3f39ca94013000799b93a0","0x0000000000000000000000000000000000000000000000000000000000000003","0x2cf9ee43d934809911a0629cd290209348e0508ffdd3f30402351a5429f74ad4"],"data":"0x0000000000000000000000000000000000000000000000000000000000000060e49dcf9662d7eb83cf30af60c4d6cc8d564308edbc26fef51385c0c94fe65819bfab9e5d42c7aba8bf7d9f4048849f571a93b95e188cdec14fb8f555c1746550000000000000000000000000000000000000000000000000000000000000000101591ba24e6196c249ff33ec1d35987d4762fae761a978bb425463b12c90cc32","blockHash":"0x99a84eb88d3b66c87019755d87e92e65bf212d428aab9c7b94b7669568a4c85e","blockNumber":"16","blockTimestamp":"0x69b437b9","transactionHash":"0x8efedb655395bd83c2b9255b20906bc08e331081a9112a6ba0eeae52db67b2ba","transactionIndex":0,"logIndex":0,"removed":false}],"logsBloom":"0x00000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000020000000000000000000000000000000000800000000000000000000000000000000000001000000000100000000000000000100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000800000800000000000400000000000000000000000000000000002000000000000000000000000000000000000000010000000000000002000000000000000000000000000000000000000","transactionHash":"0x8efedb655395bd83c2b9255b20906bc08e331081a9112a6ba0eeae52db67b2ba","transactionIndex":0,"blockHash":"0x99a84eb88d3b66c87019755d87e92e65bf212d428aab9c7b94b7669568a4c85e","blockNumber":"16","gasUsed":"385181","effectiveGasPrice":"86710037164","blobGasUsed":"131072","blobGasPrice":"1","from":"0xf39fd6e51aad88f6f4ce6ab8827279cfffb92266","to":"0xca11bde05977b3631167028862be2a173976ca11","contractAd
dress":null},"stats":{"sender":"0xf39fd6e51aad88f6f4ce6ab8827279cfffb92266","transactionHash":"0x8efedb655395bd83c2b9255b20906bc08e331081a9112a6ba0eeae52db67b2ba","calldataSize":1156,"calldataGas":9160}},"requests":[{"action":"propose","request":{"to":"0x322813fd9a801c5507c9de605d63cea4f2ce6c44","data":"0x85b98fd82cf9ee43d934809911a0629cd290209348e0508ffdd3f30402351a5429f74ad400000000000000000000000000000000000000000000000000000000000000000d93c6531226171223911de09fea77fdafb779b9efdcb4b5b16b9f88b83d313c2f56b7eda997ca30d14f92163f11fabb64251cc3214f1ae2e7892c73394a405200c6b5236b5b2050261c85d5fb75ce253f61e8e78bd6ed2a1f2e510015a5173600de7b349d2306334734e4f58b1302a6ed5a6c796a706f6597a5641b6d46822300c95e0ceb41951039e1592745ec2faea9866f6eaf01bf189a4463b4143af09300000000000000000000000000000000000000000000000000000000000000060000000000000000000000000000000000000000000000000000000069b437ad000000000000000000000000f39fd6e51aad88f6f4ce6ab8827279cfffb92266000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000017e40840160000000000000000000000000000000000000000000000000000000000007617400000000000000000000000000000000000000000000000000000000000002800000000000000000000000000000000000000000000000000000000000000300000000000000000000000000000000000000000000000000000000000000001ca6ae226e0c0c881df1fd8a62aa8da91e24fff46aa5b2e332b8bf7c3d794ddbd72f61a710d6dd642a4d72709ea5b8a2ffcee8dccc3c4738985997ba1f21c7ad82000000000000000000000000000000000000000000000000000000000000032000000000000000000000000000000000000000000000000000000000000000400000000000000000000000000000000000000000000000000000000000000060000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000003101965388a7c83f39d8a62
700990eafe1c47aea33099620ef49e4692388140a89059bb706ae172b1be0f95ef5eae5d9d95e000000000000000000000000000000"},"lastValidL2Slot":6,"gasConfig":{"txTimeoutAt":"2026-03-13T16:14:33.000Z","gasLimit":"544173"},"blobConfig":{"blobs":[{"type":"Buffer","data":[0,0,0,0,0,156,112,117,24,0,1,0,2,0,0,0,1,0,1,0,16,0,0,0,0,0,0,0,0,0,0,25,26,238,19,223,98,108,51,39,185,48,7,135,133,178,147,239,57,158,83,100,234,40,148,244,48,140,10,85,81,56,181,92,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,11,5,71,67,73,245,255,128,39,74,107,34,150,128,61,115,97,225,233,233,194,46,196,104,26,177,246,81,82,26,141,101,32,153,156,119,213,215,189,21,30,3,237,5,172,202,237,51,111,235,148,198,17,74,149,165,234,251,231,67,150,146,214,87,205,222,247,251,121,227,248,88,21,206,223,50,10,178,185,48,113,224,144,254,51,188,206,182,14,193,245,184,231,229,108,104,167,114,117,215,121,226,123,6,32,187,127,152,6,8,218,238,170,45,66,79,166,221,2,131,116,183,84,218,103,159,44,203,146,105,85,194,139,13,44,77,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,30,14,219,130,119,104,74,0,128,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,16,23,194,113,146,86,119,231,234,201,34,17,252,249,170,155,250,188,59,104,53,30,67,90,239,40,221,185,206,67,192,153,240,20,160,162,216,211,135,12,159,150,144,79,240,31,54,236,203,158,230,56,163,234,2,214,186,161,242,128,79,82,165,106,215,9,64,39,67,157,216,98,204,153,91,75,49,87,19,88,219,166,57,43,7,70,229,209,69,244,235,137,187,136,49,34,112,27,19,168,33,114,234,201,59,123,189,172,166,141,13,228,3,62,240,81,88,228,75,227,34,202,106,180,111,74,3,68,230,46,152,197,95,180,208,252,1,246,222,70,178,196,206,138,239,48,0,68,63,159,193,213,185,99,38,125,134,57,82,77,228,9,69,163,75,46,188,77,39,79,233,139,166,198,56,50,98,102,204,84,100,59,85,252,5,84,144,67,102,32,186,105,76,39,198,140,238,221,146,244,149,158,173,47,21,14,106,37,211,54,64,110,33,184,20,9,178,154,3,165,90,205,101,33,225,4,254,15,42,134,15,220,194,5,7,123,208,144,249,255,50,38,231,53,246,255,235,7,218,100,171,66,
52,77,100,143,19,33,137,120,171,80,249,227,240,121,120,211,197,216,180,176,52,195,37,19,80,164,154,169,135,113,194,61,109,124,201,158,183,4,90,38,197,107,154,59,30,11,14,212,231,16,76,92,83,20,27,137,81,64,50,170,185,235,146,35,16,58,136,191,244,24,102,76,98,216,202,148,222,94,4,24,226,59,127,81,220,85,129,141,31,82,175,118,42,150,111,247,99,150,76,73,31,46,217,239,39,171,32,46,121,209,216,199,37,219,118,203,229,36,187,102,148,241,105,248,162,201,37,224,126,152,46,165,55,13,65,88,163,142,168,250,39,108,205,59,217,132,238,71,186,247,204,119,144,3,83,117,197,202,150,19,225,18,100,221,132,45,173,121,0,215,81,124,239,129,2,198,78,198,128,127,49,164,143,132,106,231,205,189,154,61,248,35,150,11,235,58,133,3,167,204,128,123,88,235,113,102,117,104,78,144,225,115,113,4,170,212,164,25,242,11,202,104,244,42,96,167,170,16,203,18,1,108,240,36,194,238,226,87,204,44,152,233,133,157,132,223,212,58,124,145,142,151,77,131,127,167,188,198,171,85,64,0,0,0,0,0,0,0,0,0,0,0,0,0,0,235,141,205,191,0,0,0,0,105,180,55,173,0,0,0,3,0,1,0,0,0,0,0,0,0,0,0,192,0,0,0,0,3,0,0,0,0,1,64,0,0,0,0,128,0,0,0,7,97,116,13,147,198,83,18,38,23,18,35,145,29,224,159,234,119,253,175,183,121,185,239,220,180,181,177,107,159,136,184,61,49,60,48,31,126,100,208,109,52,94,81,74,80,188,159,121,245,132,173,242,87,54,30,117,147,96,244,71,241,112,146,196,28,175,25,45,25,220,110,32,158,202,38,189,248,17,35,184,173,35,55,206,221,137,201,109,206,24,17,182,213,21,66,27,84,119,5,41,141,189,74,163,10,246,45,56,13,4,165,122,107,249,165,228,134,75,248,5,48,87,76,163,124,122,177,60,83,222,13,88,44,16,255,129,21,65,58,165,183,5,100,253,210,243,206,254,31,51,161,228,58,71,188,73,80,129,233,30,115,229,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,140,99,116,67,0,0,0,33,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
```

After:

```
[13:30:41.119] VERBOSE: sequencer:publisher Published bundled transactions (propose) {"result":{"receipt":{"type":"eip4844","status":"success","cumulativeGasUsed":"385205","logs":[{"address":"0x322813fd9a801c5507c9de605d63cea4f2ce6c44","topics":["0x6ff492bf2b4ca1b93a175167d14b3e46085b935cab3f39ca94013000799b93a0","0x0000000000000000000000000000000000000000000000000000000000000003","0x26e3973a3203d93aaaf76d59c43c7f3cd4080a67684179ee3f1092cf6a4c9978"],"data":"0x0000000000000000000000000000000000000000000000000000000000000060f96dc77f2d110d9050959e38e4c9c84038ce49729dd16aa7eb255741148e79d8bfab9e5d42c7aba8bf7d9f4048849f571a93b95e188cdec14fb8f555c17465500000000000000000000000000000000000000000000000000000000000000001016b358729e956cccab674961e67294fe0bf963088adebef8f6e019219025606","blockHash":"0x1089308e44405731f84b1f67ad8448e87da12edffb26d3b0b017ac9627b31a9a","blockNumber":"16","blockTimestamp":"0x69b43d77","transactionHash":"0x4f498af5a8bf55d4826b695c41050402340ab5f0696092835ecd5e53d032dfbb","transactionIndex":0,"logIndex":0,"removed":false}],"logsBloom":"0x00000000000000000000000000000000000000000000000000000000000000000020000000000000000000000000000000000000020000000000000400000000000000000000800000000000000000000000000000000000001000400000000000000000000000100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000800000800000000000400000000000000000000000000000000002000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000","transactionHash":"0x4f498af5a8bf55d4826b695c41050402340ab5f0696092835ecd5e53d032dfbb","transactionIndex":0,"blockHash":"0x1089308e44405731f84b1f67ad8448e87da12edffb26d3b0b017ac9627b31a9a","blockNumber":"16","gasUsed":"385205","effectiveGasPrice":"86710037256","blobGasUsed":"131072","blobGasPrice":"1","from":"0xf39fd6e51aad88f6f4ce6ab8827279cfffb92266","to":"0xca11bde05977b3631167028862be2a173976ca11","contractAd
dress":null},"stats":{"sender":"0xf39fd6e51aad88f6f4ce6ab8827279cfffb92266","transactionHash":"0x4f498af5a8bf55d4826b695c41050402340ab5f0696092835ecd5e53d032dfbb","calldataSize":1156,"calldataGas":9184}},"requests":[{"action":"propose","request":{"to":"0x322813fd9a801c5507c9de605d63cea4f2ce6c44","data":"0x85b98fd826e3973a3203d93aaaf76d59c43c7f3cd4080a67684179ee3f1092cf6a4c997800000000000000000000000000000000000000000000000000000000000000002b43a3e89866a1db7350d1039543aef72e806b835bd8e79536d7bfd74c3d1dbe15115c9aa93b867f4d9ecd238642afaa48d24d61d7106bc8bca3abc3ec189c2800d00d7594bcbade97739ca33290faf152804f2185915c2ad9a68958292a6c8a00de7b349d2306334734e4f58b1302a6ed5a6c796a706f6597a5641b6d46822300c95e0ceb41951039e1592745ec2faea9866f6eaf01bf189a4463b4143af09300000000000000000000000000000000000000000000000000000000000000060000000000000000000000000000000000000000000000000000000069b43d6b000000000000000000000000f39fd6e51aad88f6f4ce6ab8827279cfffb92266000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000017e40840160000000000000000000000000000000000000000000000000000000000007617400000000000000000000000000000000000000000000000000000000000002800000000000000000000000000000000000000000000000000000000000000300000000000000000000000000000000000000000000000000000000000000001ca6ae226e0c0c881df1fd8a62aa8da91e24fff46aa5b2e332b8bf7c3d794ddbd72f61a710d6dd642a4d72709ea5b8a2ffcee8dccc3c4738985997ba1f21c7ad8200000000000000000000000000000000000000000000000000000000000003200000000000000000000000000000000000000000000000000000000000000040000000000000000000000000000000000000000000000000000000000000006000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000310190114a7271b77c746c0
ef4775090eea8ed71717b487db61e8e833a8519db1a52f2355b2248d2ee92f827b0bcf81b4552000000000000000000000000000000"},"lastValidL2Slot":6,"gasConfig":{"txTimeoutAt":"2026-03-13T16:39:03.000Z","gasLimit":"544203"},"blobConfig":{"blobs":[{"size":1056}],"kzg":{}}}]}
```

update tar

update minimatch

update glob

update glob

docs(p2p): nicer READMEs (#21456)

Nicer READMEs for the p2p module

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

fix(archiver): guard getL1ToL2Messages against incomplete message sync (#21494)

`getL1ToL2Messages(checkpointNumber)` returns the L1-to-L2 messages for
a given checkpoint by reading from the local message store. However, if
the message tree for that checkpoint hasn't been fully sealed on L1 yet
(or the archiver hasn't synced it), the method silently returns
incomplete data — or an empty array for a checkpoint that will
eventually have messages. This is indistinguishable from a legitimately
empty checkpoint (one where no L1-to-L2 messages were sent).

Any caller that uses this result to compute `inHash` — the sequencer,
validator, or slasher — would derive an incorrect hash, leading to
mismatches and potential block validation failures.

The L1 Inbox contract exposes a `treeInProgress` value: the checkpoint
number whose message tree is currently being filled. Trees for
checkpoints strictly below this value are sealed and complete. We
persist this value in the archiver's message store during each L1 sync
cycle and use it as a guard: `getL1ToL2Messages` now throws
`L1ToL2MessagesNotReadyError` if the requested checkpoint number is >=
`treeInProgress`. On first startup (before any sync), the guard is
permissive (skipped) since the value hasn't been set yet.

This approach was chosen over a simpler "last message checkpoint" bound
because it correctly handles empty checkpoints — a checkpoint with zero
messages is still sealed once `treeInProgress` moves past it.
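The guard described above can be sketched as follows. The error and value names (`L1ToL2MessagesNotReadyError`, `treeInProgress`) come from the PR text, but the store shape here is a hypothetical simplification, not the real archiver API:

```typescript
// Sketch of the sealed-checkpoint guard, assuming a simplified store shape.
class L1ToL2MessagesNotReadyError extends Error {
  constructor(checkpoint: number, treeInProgress: number) {
    super(
      `L1-to-L2 messages for checkpoint ${checkpoint} are not ready: ` +
        `tree in progress is ${treeInProgress}`,
    );
  }
}

interface MessageStoreSketch {
  treeInProgress?: number; // unset before the first L1 sync
  messages: Map<number, string[]>;
}

function getL1ToL2Messages(
  checkpointNumber: number,
  store: MessageStoreSketch,
): string[] {
  const { treeInProgress } = store;
  // Before the first sync the value is unset, so the guard is skipped.
  if (treeInProgress !== undefined && checkpointNumber >= treeInProgress) {
    throw new L1ToL2MessagesNotReadyError(checkpointNumber, treeInProgress);
  }
  // Checkpoints strictly below treeInProgress are sealed, so an empty
  // result here really means an empty checkpoint, not incomplete sync.
  return store.messages.get(checkpointNumber) ?? [];
}
```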

- **archiver**: Added `L1ToL2MessagesNotReadyError`. The message store
now persists `treeInProgress` as an LMDB singleton and guards
`getL1ToL2Messages` against unsealed checkpoints. The L1 synchronizer
writes the value on every sync cycle immediately after fetching inbox
state.
- **archiver (tests)**: Updated the fake L1 state mock to compute
`treeInProgress` dynamically from both messages and checkpoints. Added
unit tests for the guard (sealed, unsealed, unset). Updated the L1 reorg
test to expect the new error for unsealed checkpoints.
- **stdlib**: Documented the new throw condition on the
`L1ToL2MessageSource` interface.

Fixes A-659

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

fix(sequencer): await syncing proposed block to archiver (#21554)

While this means we waste a bit more time during block building, it also
means we catch early any consistency errors, including an l1 reorg that
removed the previous checkpoint while we were building ours.

Fixes A-665

feat(ethereum): check VK tree root and protocol contracts hash in rollup compatibility (#21537)

- Adds `getVkTreeRoot()` and `getProtocolContractsHash()` methods to
`RollupContract` that read immutable config from L1 storage slots
(stfStorageSlot + 3 and + 4)
- Extends `waitForCompatibleRollup` to verify all three values (genesis
archive root, VK tree root, protocol contracts hash) before allowing the
node to proceed
- Prevents nodes from starting against a rollup deployed with
incompatible circuits or protocol contracts

- Added unit tests for `getVkTreeRoot` and `getProtocolContractsHash` in
`rollup.test.ts` that verify storage slot arithmetic against values set
during deployment
- Existing `waitForCompatibleRollup` behavior preserved: enters standby
mode on mismatch, polls until compatible rollup found

fix: marking peer as dumb on failed responses

penalizing the peer on archive root mismatch

refactor after code review

fix(kv-store): make LMDB clear and drop operations atomic across sub-databases (#21539)

part of https://github.com/AztecProtocol/aztec-packages/issues/21514

- **Problem**: `AztecLmdbStore.clear()` and `drop()` called their
respective operations on each sub-database (`#data`, `#multiMapData`,
`#rootDb`) sequentially without a wrapping transaction. A crash between
operations could leave the store in an inconsistent state (some sub-DBs
cleared, others not).
- **Fix**: Wrap all sub-database operations within a single
`this.#rootDb.transaction()` call using synchronous variants
(`clearSync()` / `dropSync()`) so they execute atomically.
- **Tests**: Added comprehensive test suite covering clear (maps,
multimaps, singletons, counters, sets), drop, and delete operations.

- `yarn-project/kv-store/src/lmdb/store.ts`: `clear()` now uses
`clearSync()` inside a transaction; `drop()` now uses `dropSync()`
inside a transaction.
- `yarn-project/kv-store/src/lmdb/store.test.ts`: New test file with 7
test cases.
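A minimal sketch of the atomicity change, with a stand-in for the lmdb API (the real store wraps lmdb-js databases; here `transaction` just runs its callback, which is enough to show all sub-DB clears executing as one unit):

```typescript
// Hypothetical simplification of AztecLmdbStore.clear(); interfaces are
// stand-ins, not the real lmdb-js types.
interface SubDbSketch {
  clearSync(): void;
}

class StoreSketch {
  constructor(
    private data: SubDbSketch,
    private multiMapData: SubDbSketch,
    private rootDb: SubDbSketch & { transaction<T>(fn: () => T): T },
  ) {}

  clear(): void {
    // All three clears run inside a single root transaction, so a crash
    // can no longer leave some sub-DBs cleared and others not.
    this.rootDb.transaction(() => {
      this.data.clearSync();
      this.multiMapData.clearSync();
      this.rootDb.clearSync();
    });
  }
}
```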

- [x] New unit tests for `clear()`, `drop()`, and `delete()` operations
- [ ] Existing kv-store tests pass: `yarn workspace @aztec/kv-store
test`

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

feat(world-state): add blockHash verification to syncImmediate

Adds an optional `BlockHash` parameter to `syncImmediate` for reorg detection.
When provided, verifies the block at the target number matches the expected hash.
On mismatch, triggers a resync; if still mismatched after sync, throws.
Also removes dead `skipThrowIfTargetNotReached` parameter (no caller passed `true`).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

chore(monitor): print out l2 fees components

chore: rm faucet (#21538)

.

chore: remove old merkle trees (#21577)

Fix A-672

Implement commit all and revert all for world state checkpoints

Test fix

Function rename

Change semantic of commitTo/revertTo so that it applies up to the depth below

Comment fix

Co-authored-by: Santiago Palladino <santiago@aztecprotocol.com>

chore: skip flaky browser acir tests in CI (#21596)

Temporarily skips the `acir_tests/browser-test-app` browser prove tests
(`verify_honk_proof` and `a_1_mul`) which are failing with "Failed to
fetch" errors in CI, blocking the v4 merge train.

This unblocks #21595 and transitively #21592 and #21443.

ClaudeBox log: https://claudebox.work/s/8663550bd346778b?run=1

---------

Co-authored-by: Santiago Palladino <santiago@aztec-labs.com>

Better detection for epoch prune

Use 2-checkpoint threshold

chore: logging

refactor: clean up bb build infrastructure

- **Rename MOBILE → BB_LITE** across all CMake files and presets for clarity
- **Unify build functions** via a single `build_preset` entry point that handles cache download/upload, cmake build, and version injection automatically
- **Auto-derive cache paths** from preset targets via `preset_cache_paths` helper (jq queries CMakePresets.json) — targeted presets cache only their outputs, not intermediate artifacts
- **Remove zig- prefix** from cross-compile preset names (`zig-arm64-linux` → `arm64-linux` etc.), enabling Makefile to call `build_preset` directly and eliminating wrapper functions
- **Merge wasm-threads-bench** targets into wasm-threads preset (negligible build impact), removing the shared-dir edge case
- **Disable wasm-opt** via `--no-wasm-opt` LDFLAGS in wasm preset (was silently picked up from PATH)
- **Move format check** to its own Makefile target (`bb-cpp-format-check`)
- **Move yarn install** to Makefile dependency (`bb-cpp-yarn`) for serialization
- **Add build preset targets** to CMakePresets.json for cross-compiles, asan-fast (single source of truth)
- **Rename `scripts/native-preset-build-dir`** → `scripts/preset-build-dir` to accept any preset name

fix(p2p): fall back to maxTxsPerCheckpoint for per-block tx validation (#21605)

When `VALIDATOR_MAX_TX_PER_BLOCK` is not set but
`VALIDATOR_MAX_TX_PER_CHECKPOINT` is, the gossip-level proposal
validator enforces no per-block transaction limit at all. A single block
can't have more transactions than the entire checkpoint allows, so the
checkpoint limit is a valid upper bound for per-block validation.

Use `validateMaxTxsPerCheckpoint` as a fallback when
`validateMaxTxsPerBlock` is not set in the proposal validator
construction. This applies at both construction sites: the P2P libp2p
service (gossip validation) and the validator-client factory (block
proposal handler).

- **p2p**: Added `validateMaxTxsPerCheckpoint` to `P2PConfig` interface
and config mappings (reads from `VALIDATOR_MAX_TX_PER_CHECKPOINT` env
var)
- **p2p (libp2p_service)**: Use `validateMaxTxsPerBlock ??
validateMaxTxsPerCheckpoint` when constructing proposal validators
- **validator-client (factory)**: Same fallback when constructing the
`BlockProposalValidator`
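The fallback itself is a one-line nullish coalescing, sketched here against a hypothetical config shape mirroring the two env-derived settings:

```typescript
// Sketch of the per-block limit fallback; the config interface is an
// illustrative stand-in for the real P2PConfig fields.
interface ProposalValidationConfigSketch {
  validateMaxTxsPerBlock?: number;
  validateMaxTxsPerCheckpoint?: number;
}

function effectiveMaxTxsPerBlock(
  cfg: ProposalValidationConfigSketch,
): number | undefined {
  // A block can never contain more txs than its whole checkpoint allows,
  // so the checkpoint limit is a sound per-block upper bound.
  return cfg.validateMaxTxsPerBlock ?? cfg.validateMaxTxsPerCheckpoint;
}
```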

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

fix: show "CI booting..." in log URL immediately after creation

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

fix: pipe "CI booting..." to redis_setexz via stdin

redis_setexz reads its value from stdin, not a positional arg.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

fix: remove redundant native copy from copy_cross.sh (#21528)

- Removes the redundant native amd64-linux binary copy from
`copy_cross.sh` that was already handled by `copy_native.sh` (via the
`bb-ts` Makefile target)
- This eliminates the `Text file busy` (ETXTBSY) error in nightly
release builds when the docs tests Docker container is executing the
`bb` binary while `copy_cross.sh` tries to overwrite it
- Root cause: `copy_cross.sh` did `cp ../cpp/build/bin/bb
./build/amd64-linux/bb` over a binary being executed by a bind-mounted
Docker container running `bb msgpack run`

- [ ] CI release build passes without `Text file busy` error
- [ ] Cross-compiled binaries (arm64-linux, amd64-macos, arm64-macos)
are still correctly copied

fix(docs): include "Additional commands" in CLI reference generation

- The CLI reference doc (`aztec_cli_reference.md`) was missing `test`, `new`, and `init` commands because the scanner's parser for the "Additional commands" section in `aztec --help` used a regex expecting colon-separated format, but the actual output uses space-padded format
- Fixed `scan_cli.py` to fall back to space-padded parsing (same as the main Commands section) so all additional commands are now discovered and documented with their full options
- Regenerated `aztec_cli_reference.md` for both current and v4.1.0-rc.2 versioned docs

- Run `python3 docs/scripts/cli_reference_generation/scan_cli.py --command aztec --output /tmp/test.json` and verify `init`, `new`, `test` appear in the commands list
- Run `./docs/scripts/cli_reference_generation/generate_cli_docs.sh --force aztec current` and verify the generated doc includes sections for `aztec init`, `aztec new`, and `aztec test`
- Verify `aztec --help` "Additional commands" section matches what appears in the generated reference

🤖 Generated with [Claude Code](https://claude.com/claude-code)

chore: fixing M3 devcontainer builds (#21611)

Building inside a devcontainer on Mac with Apple M3 chip fails in
multiple ways:

1. **SIGILL crashes** — The `bb-sol` build step crashes when running
`honk_solidity_key_gen`, and E2E tests fail with `Illegal instruction`
errors.
2. **Rust compilation failures** — The `noir` build fails with `can't
find crate for serde` and similar errors when noir and avm-transpiler
build in parallel, racing on the shared `CARGO_HOME`.

1. CI runs on **AWS Graviton** (ARM64 with SVE vector extensions)
2. The zig compiler wrapper uses `-target native-linux-gnu.2.35`, which
on Graviton enables **SVE instructions**
3. Mac M3 devcontainer (ARM64 **without SVE**) downloads the same cached
binaries
4. Binaries contain SVE opcodes (e.g. `0x04be4000`) that Apple Silicon
can't execute → **SIGILL**

Cache keys already include architecture via `cache_content_hash` (which
appends `$OSTYPE-$(uname -m)`), so amd64 vs arm64 caches never collide.
The problem is specifically that two ARM64 machines (Graviton with SVE
vs Apple Silicon without SVE) share the same architecture tag but have
different CPU feature sets. The fix is to stop emitting CPU-specific
instructions in the first place.

The top-level bootstrap runs `noir` and `avm-transpiler` builds in
parallel. Both invoke `cargo build`, and both share the same
`CARGO_HOME` (`~/.cargo`) which contains the crate registry and download
cache. When both cargo processes run concurrently, they race on shared
registry state, causing downstream crates (e.g. `serde-big-array`,
`ecdsa`) to fail with `can't find crate` errors during compilation. This
does not happen on CI where builds are cached, only on local fresh
builds (e.g. `NO_CACHE=1`).

**Files:** `barretenberg/cpp/scripts/zig-cc.sh`,
`barretenberg/cpp/scripts/zig-c++.sh`

Changed `-target native-linux-gnu.2.35` to use explicit
`aarch64-linux-gnu.2.35` on ARM64 Linux. This produces generic ARM64
code without CPU-specific extensions (SVE, etc.), ensuring binaries work
on all ARM64 machines — Graviton, Apple Silicon, Ampere, etc.

x86_64 behavior is unchanged (still uses `native`).

**File:** `barretenberg/cpp/bootstrap.sh`

Extracted the repeated cache key pattern
`barretenberg-$native_preset-$hash` into a single `native_cache_key`
variable, used by `build_native_objects`, `build_native`, and related
functions. Pure refactor, no change in cache key values.

**File:** `barretenberg/sol/scripts/init_honk.sh`

Added `set -eu` so the script fails immediately on error instead of
silently continuing after SIGILL. Added an existence check for the
`honk_solidity_key_gen` binary with a clear error message.

**Files:** `noir/bootstrap.sh`, `avm-transpiler/bootstrap.sh`

Both scripts wrap their `cargo build` invocations with `flock -x 200` on
a shared lock file (`/tmp/rustup.lock`):

```bash
(
  flock -x 200
  cd noir-repo && cargo build --locked --release --target-dir target
) 200>/tmp/rustup.lock
```

This acquires an exclusive file lock before running cargo, so if both
`noir` and `avm-transpiler` builds run in parallel, one waits for the
other to finish. The lock is automatically released when the subshell
exits. This eliminates the `CARGO_HOME` race condition without requiring
changes to the top-level parallelism.

The E2E test failures (SIGILL from invalid instructions) have the same
root cause as the other SIGILL crashes — the `bb` binary used by tests
was from the SVE-contaminated cache. After rebuilding with these fixes,
E2E tests work.

---------

Co-authored-by: Aztec Bot <49558828+AztecBot@users.noreply.github.com>
Co-authored-by: ludamad <adam.domurad@gmail.com>

chore: merkle tree audit  (#21251)

Addresses the following in the merkle tree module:
- thread failures in `perform_updates_without_witness` were not
propagated, making the update path report success even when threads
failed
- `NullifierMemoryTree` did not enforce its leaf capacity bound and
allowed inserts beyond the maximum tree size
- Add tests to validate the above two
- `execute_and_report` does not report exceptions thrown by completion
callbacks, which could hide failures and potentially lead to hangs
  - update `execute_and_report` to log the error and abort the process

chore: Ecc/curves audit - remove unused members (#21476)

Final PR in the ecc/curves audit. Removes the `small_elements` member
which is now unused. Add `bb::g2` to element.test.cpp so that we don't
duplicate tests.

- Remove `small_elements` member from curves (unused)
- Add `bb::g2` to `element.test.cpp` to avoid duplicating tests

- [ ] Audited all methods of the relevant module/class
- [ ] Audited the interface of the module/class with other (relevant)
components
- [ ] Documented existing functionality and any changes made (as per
Doxygen requirements)
- [ ] Resolved and/or closed all issues/TODOs pertaining to the audited
files
- [ ] Confirmed and documented any security or other issues found (if
applicable)
- [ ] Verified that tests cover all critical paths (and added tests if
necessary)
- [ ] Updated audit tracking for the files audited (check the start of
each file you audited)

feat: add debug-only asserts for 2p coarse modular form in field arithmetic (#21178)

* Adds debug-only assertions (`BB_ASSERT_DEBUG`) to verify that field
elements remain within the coarse representation range `[0, 2p)` after
arithmetic operations. This serves as partially checked documentation of
the coarse-form invariant used by 254-bit fields (base and scalar fields
of BN-254).
* killed a bit of dead code
* micro-optimization in unary negation, which was easiest way of fixing
the debug-assert being triggered in an edge case.
** This was _not_ a real issue, but the debug assert nonetheless was
triggered: formerly, the computation `-(x)` would first compute `2p - x`
and then `reduce_once()`, and the debug-assert would fail when `x==0`
for the first computation. Instead, we just compute `p-x` and let the
asm handle the underflow.
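The edge case can be reproduced numerically. This is a bigint sketch with a small toy modulus standing in for the 254-bit BN-254 moduli; "coarse form" means values live in `[0, 2p)`, and `negNew` mirrors `subtract(p, x)` with the `+2p` underflow fix-up done by the asm:

```typescript
// Toy modulus; the real code uses the BN-254 base/scalar field moduli.
const P = 101n;
const TWO_P = 2n * P;

// Old approach: 2p - x. At x = 0 this yields exactly 2p, which violates
// the strict coarse bound [0, 2p) until reduce_once() runs.
function negOld(x: bigint): bigint {
  return TWO_P - x;
}

// New approach: p - x, adding 2p on underflow (x > p). The result is
// always strictly below 2p, even for x = 0, so no reduce_once() is needed.
function negNew(x: bigint): bigint {
  const r = P - x;
  return r < 0n ? r + TWO_P : r;
}

function inCoarseForm(x: bigint): boolean {
  return 0n <= x && x < TWO_P;
}
```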

- **`field_declarations.hpp`**: Added `assert_coarse_form()` helper that
checks `val < twice_modulus` for small moduli in debug builds. Compiles
to nothing in release (`NDEBUG`).

- **`field_declarations.hpp`**: Removed dead code:
`conditionally_subtract_from_double_modulus` (never called, had the same
2p bug pattern), `sqr_512` (declared but never defined), `__swap` (never
called), `wnaf_table` (never instantiated). Removed `#include
<execinfo.h>` and backtrace instrumentation (non-portable, redundant
with ASAN).

- **`field_impl.hpp`**: Added `assert_coarse_form()` calls after every
asm coarse-reduction path (`operator+`, `+=`, `-`, `-=`, `*`, `*=`,
`sqr`, `self_sqr`).

- **`field_impl.hpp`**: Micro-optimization in `operator-()` and
`self_neg()`: use `p` instead of `2p` as minuend. `subtract(p, x)`
already handles `x > p` via underflow correction (+2p), so the result is
always in `[0, 2p)` strict without needing `reduce_once()`. This removes
one conditional subtraction per negation (~5% faster on `fr_bench`) and
avoids the false-positive `assert_coarse_form()` trigger when `x = 0`.

- **`field_impl_generic.hpp`**: Added output assertions to the generic
`add()`, `subtract()`, `montgomery_mul()`, and `montgomery_square()`
implementations for the small-modulus branch, guarded by `if
(!std::is_constant_evaluated())`.

- `ecc_tests`: all 809 tests pass (both ASAN and Release builds)
- `fr_bench`: no performance regression; unary negation ~5% faster

Resolves AztecProtocol/barretenberg#1429

ClaudeBox log: http://ci.aztec-labs.com/d1479c5106ce0b7a-1

---------

Co-authored-by: notnotraju <raju@aztec-labs.com>

chore: clean up native field audit scope (#21490)

Cleaned up audit scope docs and added audit status headers to field
source files.

---------

Co-authored-by: notnotraju <raju@aztec-labs.com>

feat!: integrate batched honk-translator proving into chonk (#21263)

Reduce chonk proof size using
https://github.com/AztecProtocol/aztec-packages/pull/21246 and
https://github.com/AztecProtocol/aztec-packages/pull/21376

Closes https://github.com/AztecProtocol/barretenberg/issues/1639

fix: Fix tests in debug mode (#21542)

Fix some tests that would fail in debug build

chore: bench compressed chonk proof size  (#21616)

We use the ProofCompression module to reduce the Chonk proof size,
excluding public inputs. The public inputs are padded
(`PRIVATE_TO_ROLLUP_KERNEL_CIRCUIT_PUBLIC_INPUTS_LENGTH = 1281`) and
mostly zero, so gz compresses them efficiently. We can now track this
size in the main flows' benches.

feat: add public log filtering by tag (#21561)

Reimplementation of
https://github.com/AztecProtocol/aztec-packages/pull/21471. I did the
filtering in-memory as explained there due to lack of indices.

Additionally I fixed a bug in which certain index tuples were ignored -
e.g. if `afterLog` and `txHash` were both specified, `txHash` was
ignored.

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

feat: default to kernelless simulations (#21575)

We're confident enough in them after ensuring expiration_timestamp is
set

---------

Co-authored-by: Jan Beneš <janbenes1234@gmail.com>

fix(aztec-node): throw error in getLowNullifierMembershipWitness when nullifier exists

Previously, getLowNullifierMembershipWitness would log a warning and return the
nullifier's own witness when it already existed in the tree. This is wrong for a
non-inclusion proof and led to cryptic circuit assertion failures downstream.
Now it throws a descriptive error early.
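The fail-fast behavior can be sketched as below; the tree interface and names are illustrative stand-ins for the aztec-node internals, not the real API:

```typescript
// If the nullifier already exists, a low-leaf (non-membership) witness is
// meaningless, so fail fast instead of returning the wrong witness.
interface NullifierIndexSketch {
  findExact(nullifier: bigint): number | undefined; // leaf index if present
  findLowLeaf(nullifier: bigint): number; // predecessor leaf index
}

function getLowNullifierMembershipWitness(
  tree: NullifierIndexSketch,
  nullifier: bigint,
): number {
  if (tree.findExact(nullifier) !== undefined) {
    // Previously: warn and return the nullifier's own witness, which broke
    // non-inclusion proofs downstream. Now: throw a descriptive error early.
    throw new Error(
      `Cannot produce low-leaf witness: nullifier ${nullifier} already exists in the tree`,
    );
  }
  return tree.findLowLeaf(nullifier);
}
```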

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

dropping redundant docs

fix: update nullifier non-inclusion test expectations after early oracle throw (#21600)

refactor(pxe): type and audit legacy oracle mappings (#21569)

fix: clamp finalized block to oldest available in world-state

PR #21597 increased the finalized block lookback from epochDuration*2
to epochDuration*2*4. This caused the finalized block number to jump
backwards past blocks that had already been pruned from world-state,
causing advance_finalized_block to fail with 'Failed to read block data'.

Two fixes:
1. TypeScript: clamp blockNumber to oldestHistoricalBlock before calling
   setFinalized, so we never request a pruned block.
2. C++: reorder checks in advance_finalized_block to check the no-op
   condition (already finalized past this block) before attempting to
   read block data. This makes the native layer resilient to receiving
   a stale finalized block number.

test: add integration test for finalized block backwards jump past pruned blocks

Tests that handleBlockStreamEvent with chain-finalized for a block
older than the oldest available block does not throw, validating
the clamping fix in handleChainFinalized.

chore: fix proving logs script

fix: tx collector bench test

Wire peerFailedBanTimeMs as a new env var and lower the tx collector
test ban time from 5 minutes to 5 seconds.

The test would flake on timeout: peer dialing is serialized and limited
to 5 for this test, so peers could dial repeatedly without success, get
banned for 5 minutes, and never reconnect within the 1-minute wait;
aggregating peers then took a full minute per subtest without ever
obtaining all of them. With the shorter ban, all peers can connect in
time, which also lets us lower the 1-minute timeout and reduces flakes
overall.

fix(validator): process block proposals from own validator keys in HA setups

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

fix: avoid `Array.from` with untrusted sizes

Calling `Array.from({length})` eagerly allocates an array of `length`
elements. We were calling this method during deserialization with
untrusted input.

This PR changes it so we use `new Array(size)` for untrusted input. A
bit less efficient, but more secure.
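The contrast between the two allocation patterns can be illustrated directly. `new Array(n)` creates a sparse array: its length is set but no elements exist until written, whereas `Array.from({ length: n })` materializes an element at every index:

```typescript
// Eager: Array.from walks 0..length-1 and stores an element at each index,
// so an untrusted huge length commits work and memory up front.
function eagerAlloc(length: number): unknown[] {
  return Array.from({ length });
}

// Lazy: same observable length, but the backing storage stays sparse until
// indices are actually assigned.
function lazyAlloc(length: number): unknown[] {
  return new Array(length);
}
```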

fix: same change but for field reader

fix: skip handleChainFinalized when block is behind oldest available

When the finalized block jumps backwards past pruned state, return early
instead of clamping and continuing into the pruning logic. The previous
clamping fix avoided the setFinalized error but then removeHistoricalBlocks
would fail trying to prune to a block that is already the oldest.

Also guard removeHistoricalBlocks against being called with a block number
that is not newer than the current oldest available block.

chore: demote finalized block skip log to trace (#21661)

Demotes the "Finalized block X is older than oldest available block Y.
Skipping." log from `warn` to `trace`. This message fires on every block
stream tick while the finalized block is behind the oldest available,
filling up operator logs on deployed networks.

ClaudeBox log: https://claudebox.work/s/8e97449f22ba9343?run=6

fix: skip -march auto-detection for cross-compilation presets (#21356)

Fixes CI failure on merge-train/spartan caused by `-march=skylake` being
injected into aarch64 cross-compilation builds (arm64-android,
arm64-ios, arm64-macos).

**Root cause:** The `arch.cmake` auto-detection added in #21611 defaults
`TARGET_ARCH` to `skylake` when `ARM` is not detected. Cross-compile
presets (ios, android) don't set `CMAKE_SYSTEM_PROCESSOR`, so ARM
detection fails and `-march=skylake` gets passed to aarch64 Zig builds —
which errors with `unknown CPU: 'skylake'`. For arm64-macos,
`-march=generic` overrides Zig's `-mcpu=apple_a14`, breaking libdeflate.

**Fix:** Gate auto-detection on `NOT CMAKE_CROSSCOMPILING`.
Cross-compile toolchains handle architecture targeting via their own
flags (e.g. Zig `-mcpu`). Presets that explicitly set `TARGET_ARCH`
(amd64-linux, arm64-linux) are unaffected.

Also restores `native_build_dir` variable dropped in the build
infrastructure refactor.

- Verified all cross-compile presets (arm64-android, arm64-ios,
arm64-ios-sim, arm64-macos, x86_64-android) configure with zero `-march`
flags
- Verified native presets (default, amd64-linux, arm64-linux) still get
correct `-march` values

chore: revert "add bounds when allocating arrays in deserialization" (#21622) (#21666)

It was a red herring.

We were not using `Array.from({ length })` but `Array.from({ length },
() => deserializer)`, and the deserializer would throw when reaching the
end of the buffer, preventing the full allocation of the array.

fix: capture txs not available error reason in proposal handler (#21670)

We were reporting txs not available as an unknown error.

chore(docs): cut new aztec and bb docs version for tag v5.0.0-nightly.20260317

chore: add L1 inclusion time to stg public

Update comments.

feat(fuzz): protocol fuzzer with bridge server and parallel batching

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

fix: estimate gas in bot and make BatchCall.simulate() return SimulationResult (#21676)

- Modifies the bot factory to estimate gas for all transactions during
setup (deploy, mint, add liquidity, etc.) instead of using default gas
settings.
- Makes `BatchCall.simulate()` always return `SimulationResult`
(consistent with `ContractFunctionInteraction` and `DeployMethod`),
instead of returning different shapes depending on whether gas
estimation was requested.

- [x] `yarn build` passes (no new type errors)
- [x] `yarn workspace @aztec/aztec.js test
src/contract/batch_call.test.ts` — all 7 tests pass
- [ ] Spartan network deployment with bot enabled

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

fix: prevent HA peer proposals from blocking equivocation in duplicate proposal test (#21673)

When PR #21603 changed the validator to process (not ignore) block
proposals from HA peers (same validator key), the
`duplicate_proposal_slash` test broke. The second malicious node now
processes the first node's proposal, adds the block to its archiver via
`blockSource.addBlock()`, and the sequencer sees "slot was taken" —
preventing it from ever building its own conflicting proposal.

**Root cause**: `validateBlockProposal` no longer returns `false` for
self-proposals (changed to process them for HA support). The
block_proposal_handler re-executes the proposal and pushes it to the
archiver. The sequencer then skips the slot.

**Fix**: Set `skipPushProposedBlocksToArchiver=true` on the malicious
nodes. This allows:
1. Node 1 builds and broadcasts its proposal
2. Node 2 receives it, re-executes (as HA peer), but does NOT add to
archiver
3. Node 2's sequencer doesn't see "slot taken" → builds its own block
with different coinbase
4. Node 2 broadcasts (allowed by `broadcastEquivocatedProposals=true`)
5. Honest nodes see both proposals → detect duplicate → offense recorded

- The `duplicate_proposal_slash` e2e test should now pass consistently
- Other slashing tests should be unaffected (only malicious nodes in
this test are changed)

ClaudeBox log: https://claudebox.work/s/ced449aa0eabbcb4?run=1

feat: entrypoint replay protection (#21649)

Threads `ChainInfo` through some methods that external callers needed in order to protect against replay attacks. While at it, also protects our own entrypoints.

Closes: https://github.com/AztecProtocol/aztec-packages/issues/21572

---------

Co-authored-by: Jan Beneš <janbenes1234@gmail.com>

feat: guard BoundedVec oracle returns against dirty trailing storage (#21589)

feat: implement manual Packable for structs with sub-Field members (#21576)

fix: off-by-1 in getBlockHashMembershipWitness archive snapshot (#21648)

fix(p2p): penalize peers for errors during response reading

Errors in readMessage (invalid status bytes, oversized snappy responses,
corrupt data) were caught and silently converted to UNKNOWN status returns.
Since sendRequestToPeer only calls handleResponseError in its own catch
block, none of these errors resulted in peer penalties. The request was
simply retried with another peer, allowing a malicious peer to waste
bandwidth indefinitely.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

feat(sequencer): add build-ahead config and metrics (#20779)

Baseline work for build-ahead:

- adds an enable flag
- adds some metrics that will be required

chore: fixing build on mac (#21685)

zig aarch64 target

Fixes SIGILL (Illegal Instruction) crashes and build failures on ARM64
Mac (M3/Apple Silicon) devcontainers caused by incorrect `-march`
handling introduced in #21611.

PR #21611 originally fixed ARM64 devcontainer builds by using explicit
`aarch64-linux-gnu.2.35` zig targets. During the merge, that approach
was replaced with cmake-based auto-detection that sets
`TARGET_ARCH=generic` on ARM and passes `-march=generic` to the
compiler. This caused two distinct failures:

The zig compiler wrappers still used `-target native-linux-gnu.2.35`,
which auto-detects the host CPU. On CI (AWS Graviton with SVE
extensions), this produces binaries containing SVE instructions. These
cached binaries are then downloaded on Apple Silicon devcontainers
(ARM64 without SVE), causing SIGILL when executed — e.g.
`honk_solidity_key_gen` crashing during `barretenberg/sol` bootstrap.

The `-march=generic` flag was supposed to override this, but
`-march=generic` is **not a valid value on aarch64**. It's an x86
concept. LLVM/zig silently ignored it, so the native CPU detection still
produced SVE instructions.

Even attempting `-march=armv8-a` (a valid GCC/Clang aarch64 value) fails
because zig uses its own CPU naming scheme (e.g. `generic`,
`cortex_a72`, `apple_m3`), not GCC-style architecture strings. Zig
interprets `-march=armv8-a` as CPU name `armv8`, which doesn't exist →
`error: unknown CPU: 'armv8'`.

**Bottom line:** The `-march` cmake approach fundamentally doesn't work
with zig on ARM. Zig has its own architecture targeting via `-target`,
which is the correct mechanism.

Removed the ARM branch from the auto-detection. On x86_64, we still
auto-detect `TARGET_ARCH=skylake`. On ARM, we don't set `TARGET_ARCH` at
all, so no `-march` flag is passed — the zig wrappers handle
architecture targeting instead.

Restored the original fix from #21611 that was dropped during merge. On
ARM64 Linux, the wrappers now use `-target aarch64-linux-gnu.2.35`
instead of `-target native-linux-gnu.2.35`. This produces generic ARM64
code without CPU-specific extensions (SVE, etc.), ensuring cached
binaries work on all ARM64 machines — Graviton, Apple Silicon, Ampere,
etc.

x86_64 behavior is unchanged (still uses `-target native`).

After #21611 merged with the cmake auto-detection approach, it triggered
a cascade of follow-up PRs trying to fix the fallout:

| PR | Status | Issue |
|----|--------|-------|
| #21621 | Merged | Introduced the auto-detect approach (replaced zig wrapper fix with cmake `-march`) |
| #21356 | Merged | Added `NOT CMAKE_CROSSCOMPILING` guard for cross-compile failures |
| #21637 | Open | Attempting to fix cross-compiles + restore `native_build_dir` |
| #21660 | Open | Attempting to fix cross-compile targets |
| #21632 | Open | Attempting to fix cross-compile targets |
| #21662 | Open | Adding `CMAKE_SYSTEM_PROCESSOR` to ARM64 cross-compile presets |
| #21653 | Open | Attempting to skip auto-detection when cross-compiling |
| #21655 | Open | Attempting to skip auto-detection for cross-compilation targets |

This PR supersedes the still-open PRs above by addressing the root
cause: `-march` via cmake doesn't work with zig on ARM. The zig
`-target` mechanism is the correct approach.

fix: HA deadlock for last block edge case (#21690)

Change the ordering for the `lastBlock` case when creating a checkpoint proposal, so that we first sign the last block and then the checkpoint.

fix: process all contract classes in storeBroadcastedIndividualFunctions (A-683) (#21686)

Remove early return in for...of loop that caused only the first contract
class's functions to be stored when multiple classes had broadcasts in
the same block.

Fixes https://linear.app/aztec-labs/issue/A-683

chore: add slack success post on nightly scenario

fix: rkapp stability tweak

fix(builder): persist contractsDB across blocks within a checkpoint (#21520)

When building multiple blocks within a single checkpoint, the
`CheckpointBuilder` was creating a new `PublicContractsDB` instance for
each block. This meant that contracts deployed in an earlier block
within the same checkpoint were not visible to subsequent blocks,
causing calls to newly deployed contracts to fail.

Move the `PublicContractsDB` instance to be a persistent field on
`CheckpointBuilder`, initialized once in the constructor and shared
across all blocks in the checkpoint. Wrap block building in
checkpoint/commit/revert semantics on the contracts DB so that failed
blocks don't leak state.

- **validator-client**: Promote `contractsDB` from a local variable in
`makeBlockBuilderDeps` to a class field on `CheckpointBuilder`. Wrap
`buildBlock` in `createCheckpoint`/`commitCheckpoint`/`revertCheckpoint`
calls on the contracts DB.
- **validator-client (tests)**: Add tests verifying that the contracts
DB checkpoint lifecycle is correctly managed across successful and
failed block builds.
- **end-to-end (tests)**: Add e2e test that deploys a contract and calls
it in separate blocks within the same slot, validating cross-block
contract visibility within a checkpoint.

Fixes A-658

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

hotfix: stop trying to update v4-next every 15 mins

As title.

fix: only delete logs from rolled-back blocks, not entire tag (A-686) (#21687)

During reorg, deleteLogs was deleting the entire log entry for a tag
instead of only removing logs from the rolled-back blocks. This caused
logs from earlier blocks to be lost.

Fixes https://linear.app/aztec-labs/issue/A-686

---------

Co-authored-by: Santiago Palladino <santiago@aztec-labs.com>

fix(stdlib): accept null return_type for void Noir functions (#21647)

Fixing an issue reported by @just-mitch on
[slack](https://aztecprotocol.slack.com/archives/C04PUD9AA4W/p1773715408859609).

Fixes a TypeScript compilation error when running `aztec-builder
codegen` on contracts where every function is void (most notably, a
blank `#[aztec] contract Main {}`).

The `#[aztec]` macro injects lifecycle functions like `process_message`
and `sync_state` into every contract. These are void, so the Noir
compiler outputs `"return_type": null` for them. Our `NoirFunctionAbi`
type only accepted a non-null object for `return_type`, which caused a
type error on the `as NoirCompiledContract` cast in the generated TS.

For contracts with at least one non-void function, TypeScript infers the
JSON array element type as a union (`null | { abi_type, visibility }`),
which has enough overlap with the expected type for the `as` cast to
succeed. But when *every* function is void, the inferred type is just
`null` — zero overlap — so the cast fails.

The runtime code in `contract_artifact.ts` already handled the `null`
case correctly. Only the type definition was out of sync with the
compiler's actual output.

Repro: https://github.com/just-mitch/mytoken

- Verified `yarn build` passes with no new type errors
- Cloned the repro, confirmed the TS error, patched
`node_modules/@aztec/stdlib` with the fix, confirmed clean compilation

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

feat!: make AES128 decrypt oracle return Option (#21696)

- Rename AES128 decrypt oracle from `aztec_utl_aes128Decrypt` to
`aztec_utl_tryAes128Decrypt`, returning `Option<BoundedVec<u8, N>>`
instead of `BoundedVec<u8, N>`
- Wrap TS decrypt calls (PXE + TXE) in try/catch so Barretenberg
exceptions on malformed input return `Option::none()` instead of
crashing the process
- Update legacy `utilityAes128Decrypt` mapping to strip the Option
wrapper and re-throw on failure (preserving old error semantics for
pinned contracts)

Fixes F-452

fix(aztec-nr): fix OOB index with nonzero offset (#21613)

I simply asked Claude to go through our code and find bugs, and it found
this one.

- Fixes an out-of-bounds array access in
`extract_property_value_from_selector` when `PropertySelector.offset >
0`. The formula `31 + offset - i` produces index >= 32 at `i = 0`;
corrected to `31 - offset - i`.
- Adds a regression test exercising a nonzero offset.

The bug was dormant -- every `PropertySelector` in the codebase uses
`offset: 0` (the macro hardcodes it). But anyone trying to use sub-field
byte selection would hit a runtime panic.

feat!: include init_hash in private initialization nullifier to prevent privacy leak (#21427)

The private initialization nullifier was computed as just
`address.to_field()`. Anyone who knows a contract's address can compute
this nullifier and check for its existence in the nullifier tree,
revealing whether the contract has been initialized. This is a privacy
leak for fully private contracts.

The private initialization nullifier is now computed as
`poseidon2_hash(address, init_hash)` with a dedicated domain separator
(`DOM_SEP__PRIVATE_INITIALIZATION_NULLIFIER`). Since `init_hash` is not
publicly available for fully private contracts, address knowledge alone
is no longer sufficient to determine initialization status.

Fixes F-194
Fixes #17128

chore(docs): cut new aztec and bb docs version for tag v5.0.0-nightly.20260318

feat!: split compute note hash and nullifier to reduce hashing (#21639)

This should fix the performance regression from
https://github.com/AztecProtocol/aztec-packages/pull/21438. Marked as a
breaking change since some contracts might call `attempt_note_discovery`
manually.

Fixes F-344.
Fixes https://github.com/AztecProtocol/aztec-packages/issues/11157

---------

Co-authored-by: AztecBot <tech@aztec-labs.com>

feat: gas estimations on send (#21646)

EmbeddedWallet now simulates before sending in order to estimate gas and
capture authwitness data.

This PR also adds validation for authwitnesses captured from offchain
effects (ensuring the inner hash matches the emitted preimage), and adds
fee payer handling to kernelless simulations.

Closes:
https://linear.app/aztec-labs/issue/F-402/estimate-gas-limits-for-a-tx-if-not-provided-by-the-caller
Closes:
https://linear.app/aztec-labs/issue/F-403/compute-gas-limits-for-private-only-txs

---------

Co-authored-by: Nicolás Venturo <nicolas.venturo@gmail.com>

chore(p2p): lower attestation pool per-slot caps to 2

Equivocation detection fires at count > 1 (i.e., the 2nd distinct entry).
Nothing in the codebase uses counts beyond 2, so entries 3+ are dead storage.
A cap of 2 is sufficient to store the honest entry plus one conflicting entry
for detection.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

chore(p2p): remove unused method

Attestation validation is handled in
`validateAndStoreCheckpointAttestation`.

fix(p2p): penalize peer on tx rejected by pool

The pool should never reject a tx that passed validation. However, in
case it does, we now add a warning and penalize the peer that sent us
the invalid tx.

fix(test): workaround slow mock creation

Brings test time down from over 1000s to 2s.

chore: fix image name

fix(sequencer): fix checkpoint budget redistribution for multi-block slots (#21692)

Three bugs in how per-block gas/tx limits are computed and enforced
during checkpoint building made the redistribution logic ineffective in
multi-block-per-slot mode:

1. Config `maxBlocksPerCheckpoint` was not propagated to the checkpoint
builder, so `remainingBlocks` always defaulted to 1 — making
redistribution a no-op.
2. The static per-block limit computed in the sequencer-client at
startup always equaled the first-block fair share, so redistribution
could only tighten, never relax — later blocks couldn't use surplus
budget from light early blocks.
3. Redistribution ran during validator re-execution with the proposer's
multiplier logic, causing potential false rejections.

Delete the sequencer's `computeBlockLimits` — the checkpoint builder now
derives per-block limits dynamically from checkpoint-level budgets. Move
`maxBlocksPerCheckpoint` and `perBlockAllocationMultiplier` out of
config into `BlockBuilderOptions` (passed from the sequencer's timetable
at build time). Split behavior on `isBuildingProposal`: proposers get
redistribution with multiplier; validators only cap by per-block limit +
remaining checkpoint budget (no fair-share).

Introduce `BlockBuilderOptions` as a discriminated union type: when
`isBuildingProposal: true`, redistribution params
(`maxBlocksPerCheckpoint`, `perBlockAllocationMultiplier`) are required;
when `false`, they're absent. This makes it a compile-time error to
forget redistribution params during proposal building or to accidentally
include them during validation.

- **stdlib**: Split `PublicProcessorLimits` (processor-only fields) from
`BlockBuilderOptions` (discriminated union with proposer/validator
branches). Remove `maxBlocksPerCheckpoint` from `SequencerConfig`. Make
`perBlockAllocationMultiplier` required on `ResolvedSequencerConfig`.
- **sequencer-client**: Delete `computeBlockLimits`. Simplify
`SequencerClient.new` to cap operator overrides at checkpoint limits.
Pass `maxBlocksPerCheckpoint` and `perBlockAllocationMultiplier` via
opts in `CheckpointProposalJob`.
- **validator-client**: Rewrite `capLimitsByCheckpointBudgets` — first
cap by remaining budget (always), then further cap by fair share only
when proposing. Validator re-execution no longer applies redistribution.
- **slasher**: Update `epoch_prune_watcher` buildBlock call to use new
opts shape.
- **validator-client (tests)**: Update tests to pass redistribution
params via opts. Remove r…