feat(stdlib): deterministic execution-result job IDs and BLOCK_EXECUTION job type #22916
Open
PhilWindle wants to merge 11 commits into `phil/proving-orchestrator-split`
Conversation
…ECUTION job type

Foundational pieces for the execution offload:

- `makeExecutionResultJobId(epoch, blockNumber, slotNumber, txIndex, type)` — produces IDs that the orchestrator and an execution agent can compute independently from the same coordinates without exchanging data. The format keeps `getEpochFromProvingJobId` working.
- New `BLOCK_EXECUTION` `ProvingRequestType` with `BlockExecutionInputs` (epoch, checkpoint index, block header, tx hashes) and a `BlockExecutionResult` marker. Wired through `ProvingJobInputs` / `ProvingJobResult` (and their maps), the broker's per-type queues, and the priority order — placed at the top, since execution gates the rest of the proving DAG.
- The `ProvingJobController` exhaustive switch gets a placeholder case; the agent-side handler lands in a follow-up.
- Adds `executeBlock` to `ServerCircuitProver`. `BBNativeRollupProver`, `TestCircuitProver`, `MockProver`, and `BrokerCircuitProverFacade` get implementations: the first three reject (general-purpose proving agents never receive `BLOCK_EXECUTION`); the facade enqueues the job to the broker for the orchestrator side.
- New `BlockExecutionHandler` (also `ServerCircuitProver`-shaped, with all non-execution methods rejecting). It fetches the txs, forks world state at the parent block, runs `PublicProcessor.process`, persists each public tx's AVM circuit inputs to the proof store, and enqueues the per-tx AVM jobs under deterministic IDs computed from `(epoch, blockNumber, slotNumber, txIndex)`. The fork is closed in a `finally` on both success and error.
- New `BrokerCircuitProverFacade.expectJob(id, type, signal)` reserves a Promise for a job ID that the facade did not enqueue itself. Used in the next commit by the orchestrator to await the agent-enqueued AVM proofs.
- The `ProvingJobController` `BLOCK_EXECUTION` case now dispatches to `circuitProver.executeBlock` instead of throwing.
- Adds an `@aztec/stdlib/block_execution` subpath export.
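The coordinate-derived ID scheme can be sketched as follows. The encoding here (a colon-separated, epoch-first string) and the helper name `getEpochFromJobId` are illustrative assumptions, not the PR's actual format; the property that matters is that both sides compute the same ID independently and the epoch is recoverable from the prefix.

```typescript
// Sketch only: deterministic job IDs derived from block coordinates.
// The real format may differ; the epoch-first layout is what keeps a
// getEpochFromProvingJobId-style parser working.
type ExecResultJobType = 'PUBLIC_VM' | 'PRIVATE_TX_BASE_ROLLUP';

function makeExecutionResultJobId(
  epoch: number,
  blockNumber: number,
  slotNumber: number,
  txIndex: number,
  type: ExecResultJobType,
): string {
  // Epoch leads the string so the epoch can be extracted without a lookup.
  return `${epoch}:exec:${blockNumber}:${slotNumber}:${txIndex}:${type}`;
}

function getEpochFromJobId(id: string): number {
  return Number(id.split(':')[0]);
}

// Orchestrator and agent derive the same ID from the same coordinates,
// with no data exchanged between them.
const id = makeExecutionResultJobId(7, 123, 456, 2, 'PUBLIC_VM');
```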
…hed AVM proofs

Adds `ProvingOrchestrator.addBlockForExecution(txs, expectAvmProofForTx)` as a parallel to `addTxs`. The new path runs the same per-tx setup (validate, prepareBaseRollupInputs, chonk verifier kickoff, base rollup) but obtains the AVM proof from a caller-supplied callback instead of enqueueing a fresh AVM proving job locally. Callers wire the callback to the broker facade's `expectJob` (added in the previous commit) against the deterministic IDs the execution agent uses when enqueueing per-tx AVM jobs.

The shared per-tx loop is factored into a private `addProcessedTxsToBlock` helper that takes a callback for how to obtain the AVM proof. `addTxs` keeps its current behaviour (enqueue) and the new method passes a "watch deterministic-ID job" callback.

An integration test (`orchestrator_block_execution.test.ts`) drives a four-tx block alternating private/public through the new path and asserts that the callback fires exactly once per public tx index, the block header matches, and proving completes end-to-end via the same TestContext fixtures used by the existing `addTxs` tests.
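The `expectJob` mechanism described above can be sketched as a promise registry keyed by job ID. The class and method names here are illustrative, not the facade's real API; the point is that one side can reserve a promise for a job it did not enqueue itself, and the broker side settles it on completion.

```typescript
// Sketch only: reserve a Promise for a job this side did not enqueue.
class JobExpectations<T> {
  private readonly pending = new Map<string, (result: T) => void>();

  // Orchestrator side: register interest in a deterministic job ID
  // before (or after) the agent enqueues the job under that ID.
  expectJob(id: string): Promise<T> {
    return new Promise(resolve => this.pending.set(id, resolve));
  }

  // Broker side: settle the reserved Promise when the job completes.
  settle(id: string, result: T): void {
    const resolve = this.pending.get(id);
    if (resolve) {
      this.pending.delete(id);
      resolve(result);
    }
  }
}

const expectations = new JobExpectations<string>();
const proof = expectations.expectJob('7:exec:1:1:0:PUBLIC_VM');
expectations.settle('7:exec:1:1:0:PUBLIC_VM', 'avm-proof');
```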
Adds an opt-in set of in-process execution agents controlled by two new env vars:

- `PROVER_NODE_EXECUTION_AGENT_COUNT` (default `0` — feature off)
- `PROVER_NODE_EXECUTION_AGENT_POLL_INTERVAL_MS` (default `100` — tighter than regular proving agents because execution gates the rest of the DAG)

`InternalExecutionAgents` (in `prover-client/src/block_execution/`) wires N `ProvingAgent`s with allowList `[BLOCK_EXECUTION]`, each backed by a `BlockExecutionHandler` constructed against the prover node's broker, world state, public processor factory, and tx provider. The class exposes start/stop and is owned by the `ProverNode` so its lifecycle matches the rest of the node.

The factory wires the tx provider through to the handler via a small adapter that calls `getAvailableTxs` and validates ordering, and constructs its own `PublicProcessorFactory` against the existing archiver — no RPC archiver yet; that comes in Phase 3.

Tests cover the happy path (agents spin up, ask the broker only for `BLOCK_EXECUTION` jobs, stop cleanly) and the disabled-by-default case.
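The allowlist behaviour — an agent that only ever takes jobs of a permitted type — can be sketched like this. The types and the `nextJobFor` helper are illustrative; the real `ProvingAgent` takes an allowList and polls the broker.

```typescript
// Sketch only: an agent polls a queue but only picks jobs whose type
// appears on its allowlist; everything else is left for other agents.
type JobType = 'BLOCK_EXECUTION' | 'PUBLIC_VM' | 'ROOT_ROLLUP';

interface Job {
  id: string;
  type: JobType;
}

function nextJobFor(queue: Job[], allowList: JobType[]): Job | undefined {
  return queue.find(job => allowList.includes(job.type));
}

const queue: Job[] = [
  { id: 'a', type: 'PUBLIC_VM' },
  { id: 'b', type: 'BLOCK_EXECUTION' },
];

// An execution agent with allowList [BLOCK_EXECUTION] skips the AVM job.
const picked = nextJobFor(queue, ['BLOCK_EXECUTION']);
```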
…passenger data
Introduces three new types in `@aztec/stdlib/block_execution`:
- `BlockExecutionTxData` — per-tx execution data the agent will compute and the
orchestrator will use to enqueue the public base rollup without touching its
own world-state fork: `{ baseRollupHints, avmCircuitPublicInputs }`.
- `AvmProvingInputs` — wraps `AvmCircuitInputs` with optional `executionTxData`.
Becomes `ProvingJobInputsMap[PUBLIC_VM]`.
- `AvmProvingResult` — wraps the AVM `RecursiveProof` with the same passenger
field, passed through unchanged by the proving agent. Becomes
`ProvingJobResultsMap[PUBLIC_VM]`.
The proving agent (BBNative/Test/Mock) reads `inputs.avmCircuitInputs`,
generates the proof from those inputs only, and packages the result with
`inputs.executionTxData` passed through. The legacy `addTxs` path constructs
inputs via `AvmProvingInputs.fromAvmCircuitInputs(...)` (passenger undefined)
and unwraps the result with `.proof`, so existing call sites continue to work.
`ServerCircuitProver.getAvmProof` signature updates accordingly. All four
existing implementations (`BBNativeRollupProver`, `TestCircuitProver`,
`MockProver`, `BrokerCircuitProverFacade`) and the orchestrator's `enqueueVM`
are updated. Issue 5's `BlockExecutionHandler` keeps compiling by wrapping
`AvmCircuitInputs` with empty passenger data — Phase 3 rewrites it to
populate the passenger.
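The passenger pattern above can be sketched in a few lines. The field shapes here are stand-ins (strings instead of real circuit types), but the contract matches the description: the prover reads only the circuit inputs and copies the passenger onto the result untouched.

```typescript
// Sketch only: passenger data rides along with proving inputs and is
// passed through verbatim to the result; the prover never inspects it.
interface BlockExecutionTxData {
  baseRollupHints: string;
  avmCircuitPublicInputs: string;
}

interface AvmProvingInputs {
  avmCircuitInputs: string;
  executionTxData?: BlockExecutionTxData; // undefined on the legacy path
}

interface AvmProvingResult {
  proof: string;
  executionTxData?: BlockExecutionTxData;
}

// The proving agent generates the proof from avmCircuitInputs only.
function getAvmProof(inputs: AvmProvingInputs): AvmProvingResult {
  const proof = `proof(${inputs.avmCircuitInputs})`; // stand-in for real proving
  return { proof, executionTxData: inputs.executionTxData };
}
```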
…ollup hints
Rewrites `BlockExecutionHandler` to do all the per-tx work the orchestrator
used to do for the public-tx path, so the orchestrator's base rollup is
input-independent of any local fork.
Per tx, in tx-order against the agent's fork:
- `publicProcessor.process([tx])` (single-tx call, accumulates fork state).
- `insertSideEffectsAndBuildBaseRollupHints` to compute the per-tx hints.
- For private-only txs: build `PrivateTxBaseRollupPrivateInputs` from the Tx
+ the hints, save inputs to the proof store, and enqueue
`PRIVATE_TX_BASE_ROLLUP` directly with a deterministic ID. The orchestrator
watches by ID and never has to construct or enqueue the job itself.
- For public txs: bundle `BlockExecutionTxData = { baseRollupHints,
avmCircuitPublicInputs }` into `AvmProvingInputs`, enqueue `PUBLIC_VM` with
a deterministic ID. The proving agent passes the passenger data through to
`AvmProvingResult` so the orchestrator gets it alongside the AVM proof.
Block-level state shipped with the job (via `BlockExecutionInputs`):
`isFirstBlockInCheckpoint` + `l1ToL2Messages` (so the agent inserts them on
its fork only for the first block of the checkpoint) and `startSpongeBlob`
(carried through across blocks in the checkpoint). The agent returns the
`endSpongeBlob` in `BlockExecutionResult` so the orchestrator can carry it
forward.
The handler reports `BLOCK_EXECUTION` complete only after the per-tx jobs
are enqueued. Orchestrator-side per-tx pipelining is preserved: each tx's
proving job becomes visible to the broker as soon as the agent finishes
that tx's execution.
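The pipelining property — each tx's job becomes visible as soon as that tx executes, with `BLOCK_EXECUTION` reported complete only at the end — can be sketched as follows. The function names and the string results are illustrative stand-ins for the real handler.

```typescript
// Sketch only: enqueue each tx's proving job as soon as that tx executes;
// report block execution complete only after every per-tx job is enqueued.
const events: string[] = [];

async function executeTx(txHash: string): Promise<string> {
  return `result(${txHash})`; // stand-in for the real single-tx execution
}

async function executeBlock(txHashes: string[]): Promise<void> {
  for (const txHash of txHashes) {
    // Single-tx call: fork state accumulates across iterations, in tx order.
    const result = await executeTx(txHash);
    // The per-tx job is visible to the broker immediately, preserving
    // orchestrator-side pipelining.
    events.push(`enqueued:${result}`);
  }
  // Only reported once all per-tx jobs are enqueued.
  events.push('BLOCK_EXECUTION:complete');
}

const done = executeBlock(['tx0', 'tx1']);
```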
… and watches per-tx jobs

Reshapes `ProvingOrchestrator.addBlockForExecution` to drive the offloaded-execution proving DAG without holding a per-block fork or constructing ProcessedTxs:

- The signature is now `(blockNumber, txs: Tx[], watchers)`. The orchestrator classifies each tx by `tx.data.forPublic` and sets up a watcher.
- Private-only tx: the agent has already enqueued `PRIVATE_TX_BASE_ROLLUP` with a deterministic ID (this relaxes the plan's "agent never enqueues base rollup" rule, but only for the private case, where the agent has every input). `watchers.expectPrivateBaseRollupProofForTx` resolves with the proof; the orchestrator pipes it into `setBaseRollupProof` and on into the merge tree.
- Public tx: the orchestrator enqueues the chonk verifier itself from the raw `Tx`. `watchers.expectAvmProofForTx` resolves with the AVM proof + the `BlockExecutionTxData` passenger. Once both are ready, the orchestrator builds `PublicTxBaseRollupPrivateInputs` from the passenger hints + the two proofs and enqueues `PUBLIC_TX_BASE_ROLLUP` itself.

Block-level summary state (end sponge, end state, total fees, total mana used) is intentionally not handled here — Phase 6 will wire that up alongside the EpochProvingJob cutover, since `setBlockCompleted` and `buildBlockHeader` still want per-tx data the orchestrator no longer carries.

The Issue 6 integration test (`orchestrator_block_execution.test.ts`) exercised the old `ProcessedTx[]` signature end-to-end; it is removed here, and Phase 6 will add coverage that runs through the new shape with a real `BlockExecutionDispatcher`. The legacy `addTxs` / `addProcessedTxsToBlock` path is untouched.
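The per-tx classification can be sketched as a small dispatch over the watcher interface. The interface shape below is illustrative (string proofs instead of real proof types); the branch structure mirrors the private/public split described above.

```typescript
// Sketch only: the orchestrator classifies each tx and watches the
// matching deterministic-ID job rather than enqueueing it.
interface Tx {
  hash: string;
  forPublic: boolean; // stand-in for tx.data.forPublic
}

interface Watchers {
  expectPrivateBaseRollupProofForTx(index: number): Promise<string>;
  expectAvmProofForTx(index: number): Promise<string>;
}

function watchTx(tx: Tx, index: number, watchers: Watchers): Promise<string> {
  // Private-only: the agent already enqueued the base rollup; just watch it.
  // Public: watch the AVM proof (+ passenger); the chonk verifier and the
  // public base rollup remain the orchestrator's job.
  return tx.forPublic
    ? watchers.expectAvmProofForTx(index)
    : watchers.expectPrivateBaseRollupProofForTx(index);
}
```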
…a a composite prover

Drops the separate `InternalExecutionAgents` agent pool added in Issue 7 and replaces it with a single composite `ServerCircuitProver` shared by the existing prover-client agents. One agent count, one polling loop, one allowlist — the broker hands jobs to whichever agent is free and the composite dispatches internally:

- Proving methods (`getAvmProof`, `getPrivateTxBaseRollupProof`, parity, merge, block-root, etc.) delegate to the regular prover (`BBNativeRollupProver` or `TestCircuitProver`).
- `executeBlock` delegates to a `BlockExecutionHandler` instance constructed against the supplied world state, public processor factory, and tx fetcher.

Wiring:

- `ProverClient.new(...)` and `createProverClient(...)` take an optional `ProverClientBlockExecutionDeps` (`publicProcessorFactory` + `txFetcher`). When supplied, every agent gets the composite. When absent, agents stay proving-only and `executeBlock` rejects.
- The prover node factory now always supplies these deps (it has every ingredient already — archiver, world state synchronizer, p2p client). No new config switch — execution capability is automatic when the prover node has the prerequisites.
- `proverAgentCount` continues to control the agent count. There is no second pool.

Removes:

- The `InternalExecutionAgents` class + its test.
- `proverNodeExecutionAgentCount` / `proverNodeExecutionAgentPollIntervalMs` config and the matching env vars.
- The optional `internalExecutionAgents` constructor arg on `ProverNode` and the corresponding start/stop calls.

Adds a small unit test exercising the composite's dispatch behaviour.
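The composite's dispatch behaviour can be sketched as below. The interfaces are reduced to one method each for illustration; the real `ServerCircuitProver` surface is much larger, and the names `Prover` / `ExecutionHandler` are assumptions of this sketch.

```typescript
// Sketch only: one composite prover shared by all agents. Proving calls
// delegate to the regular prover; executeBlock delegates to an execution
// handler when deps were supplied, and rejects otherwise.
interface Prover {
  getAvmProof(inputs: string): Promise<string>;
}

interface ExecutionHandler {
  executeBlock(block: string): Promise<string>;
}

class CompositeProver {
  constructor(
    private readonly prover: Prover,
    private readonly execution?: ExecutionHandler, // absent => proving-only
  ) {}

  getAvmProof(inputs: string): Promise<string> {
    return this.prover.getAvmProof(inputs);
  }

  executeBlock(block: string): Promise<string> {
    return this.execution
      ? this.execution.executeBlock(block)
      : Promise.reject(new Error('execution not supported'));
  }
}

const stubProver: Prover = { getAvmProof: i => Promise.resolve(`proof(${i})`) };
const withExec = new CompositeProver(stubProver, {
  executeBlock: b => Promise.resolve(`executed(${b})`),
});
const provingOnly = new CompositeProver(stubProver);
```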
…helper

Wires the orchestrator-side and prover-node-side scaffolding for the EpochProvingJob cutover (without flipping the production call site yet — the legacy `publicProcessor.process` + `addTxs` path still runs by default because the existing job tests assert against it; flipping the switch is a follow-up that goes hand-in-hand with rewriting those tests).

- `BlockExecutionResult` now carries everything the orchestrator needs to finish a block without touching `ProcessedTx`: `endSpongeBlob`, `endState`, `totalFees`, `totalManaUsed`, and per-tx `txEffects`. The agent populates these during execution and the prover node hands them to the orchestrator.
- `BlockExecutionHandler` collects the per-tx data and the aggregates as it walks the block, then returns the full summary in `BlockExecutionResult`.
- `BlockProvingState` gains a `setBlockSummary` setter and overrides `getTxEffects`, `getTotalFees`, `getTotalManaUsed`, and `isAcceptingTxs` to read from the summary when supplied. The legacy per-tx `TxProvingState` path is otherwise unchanged.
- `ProvingOrchestrator` gains `applyBlockExecutionResult` (set summary + end state + end sponge with block-end blob fields absorbed) and `getBlockStartSpongeBlob` (so the caller can build `BlockExecutionInputs` for the next block).
- `ProverClient`/`EpochProverFactory` exposes `getBrokerCircuitProverFacade()` so EpochProvingJob can dispatch `BLOCK_EXECUTION` and watch deterministic-ID per-tx jobs through the same facade the orchestrators use.
- `EpochProvingJob.dispatchOffloadedBlock(...)` is the new code path: it registers per-tx watchers (private base rollup + AVM with passenger), builds `BlockExecutionInputs`, awaits the agent, and applies the summary. It is not yet invoked from the per-block loop — that flip will follow alongside the test updates.

The job test gains a stub for `getBrokerCircuitProverFacade()` so the new helper is callable from tests when needed.
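The summary-override idea — reads prefer the agent-supplied summary and fall back to the legacy per-tx path when none was set — can be sketched like this. Field shapes are stand-ins, and `BlockProvingStateSketch` is not the real class.

```typescript
// Sketch only: an agent-supplied block summary overrides per-tx reads
// when present; the legacy path is untouched when it is absent.
interface BlockSummary {
  endSpongeBlob: string;
  totalFees: bigint;
  totalManaUsed: bigint;
  txEffects: string[];
}

class BlockProvingStateSketch {
  private summary?: BlockSummary;
  private readonly legacyTxEffects: string[] = []; // filled by the addTxs path

  setBlockSummary(summary: BlockSummary): void {
    this.summary = summary;
  }

  getTxEffects(): string[] {
    return this.summary?.txEffects ?? this.legacyTxEffects;
  }

  getTotalFees(): bigint {
    return this.summary?.totalFees ?? 0n;
  }
}

const state = new BlockProvingStateSketch();
state.setBlockSummary({
  endSpongeBlob: 'sponge',
  totalFees: 42n,
  totalManaUsed: 7n,
  txEffects: ['effect0', 'effect1'],
});
```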
… into phil/execution-offload

Conflicts:
- `yarn-project/prover-client/src/prover-client/prover-client.ts`
- `yarn-project/prover-node/src/job/epoch-proving-job.ts`
Summary
Foundational pieces (Issues 3 + 4 of the prover-stack redesign Phase 2) for offloading block execution to a stateful prover-agent. No behaviour change yet — these are the types and broker wiring that the new execution path will plug into.
- `makeExecutionResultJobId(epoch, blockNumber, slotNumber, txIndex, type)` — orchestrator and execution agent independently compute the same job ID from block coordinates, no coordination needed. The format keeps `getEpochFromProvingJobId` working.
- `BLOCK_EXECUTION` `ProvingRequestType` (appended to keep existing numeric values stable) with:
  - `BlockExecutionInputs` (epoch, checkpoint index, block header, tx hashes)
  - `BlockExecutionResult` marker (just `blockNumber`)
  - wired through `ProvingJobInputs` / `ProvingJobResult` and their maps
  - added to `PROOF_TYPES_IN_PRIORITY_ORDER` — execution gates the rest of the DAG
- The `ProvingJobController` exhaustive switch gets a placeholder throw; the agent-side handler lands in the follow-up issue.

Stack
This PR sits on top of #22915 (`phil/proving-orchestrator-split`), which is on top of #22783 (`phil/a-955-...`). All three need to merge in order.

Test plan

- `yarn workspace @aztec/stdlib test src/block_execution src/interfaces/proving-job.test.ts`
- `yarn workspace @aztec/prover-client test src/proving_broker/`
- `yarn build` and `yarn lint` for stdlib / prover-client / bb-prover