4 changes: 0 additions & 4 deletions docs/external/src/img/node_architecture.svg

This file was deleted.

4 changes: 2 additions & 2 deletions docs/external/src/img/operator_architecture.svg
Contributor


I think this is missing a couple of links:

  1. RPC sends incoming transactions to the validator for validation.
  2. I would maybe add an arrow going from the block prover back to the store and make it clear that we are sending a block proof back.

4 changes: 0 additions & 4 deletions docs/external/src/img/workspace_tree.svg

This file was deleted.

36 changes: 29 additions & 7 deletions docs/external/src/operator/architecture.md
@@ -3,18 +3,20 @@ title: "Architecture"
sidebar_position: 2
---

# Node architecture
# Network architecture

The node itself consists of four distributed components: store, block-producer, network transaction builder, and RPC.
The network itself consists of five distributed components: store, block-producer, network transaction builder, validator, and RPC.

The components can be run on separate instances when optimised for performance, but can also be run as a single process
for convenience. The exception to this is the network transaction builder which can currently only be run as part of
the single process. At the moment both of Miden's public networks (testnet and devnet) are operating in single process
for convenience. At the moment both of Miden's public networks (testnet and devnet) are operating in single process
mode.

The inter-component communication is done using a gRPC API which is assumed trusted. In other words this _must not_ be
Inter-component communication is done using a gRPC API which is assumed trusted. In other words this _must not_ be
public. External communication is handled by the RPC component with a separate external-only gRPC API.

The image below shows a rough example of what a network architecture may look like. Only the more important data
flows are pictured to improve clarity.

[![network architecture](../img/operator_architecture.svg)](../img/operator_architecture.svg)

## RPC
@@ -33,14 +35,18 @@ It can be trivially scaled horizontally e.g. with a load-balancer in front as sh
The store is responsible for persisting the chain state. It is effectively a database which holds the current state of
the chain, wrapped in a gRPC interface which allows querying this state and submitting new blocks.

It receives new blocks from the block-producer, which it submits to the validator for signing before they are committed
on chain. It then submits each block to the prover, after which the block is marked as proven. Blocks therefore undergo
two levels of finalization: `committed` and then `proven`.
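The two finalization levels can be pictured as a simple state transition. This sketch is purely illustrative; the type and function names are hypothetical and do not reflect the store's actual representation.

```rust
// Illustrative sketch of the two finalization levels described above.
// Names are hypothetical; the store's real schema differs.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Finality {
    Committed, // signed by the validator and persisted
    Proven,    // block proof has been received and verified
}

fn on_proof_received(current: Finality) -> Finality {
    match current {
        Finality::Committed => Finality::Proven,
        Finality::Proven => Finality::Proven, // already fully final
    }
}

fn main() {
    println!("{:?}", on_proof_received(Finality::Committed));
}
```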

It expects that this gRPC interface is _only_ accessible internally i.e. there is an implicit assumption of trust.

## Block-producer

The block-producer is responsible for aggregating received transactions into blocks and submitting them to the store.

Transactions are placed in a mempool and are periodically sampled to form batches of transactions. These batches are
proved, and then periodically aggregated into a block. This block is then proved and committed to the store.
proved, and then periodically aggregated into a block. This proposed block is then submitted to the store.
Comment on lines 48 to +49
Contributor

Is "proposed block" the right phrasing here? Maybe it should be "constructed block" or "composed block".

Also, I think this is missing the part that the block producer first sends the block for a signature to the validator - and then, only once the validator signs the block, it is sent to the store.


Proof generation in production is typically outsourced to a remote machine with appropriate resources. For convenience,
it is also possible to perform proving in-process. This is useful when running a local node for test purposes.
@@ -49,7 +55,7 @@ it is also possible to perform proving in-process. This is useful when running a

The network transaction builder monitors the mempool for network notes, and creates transactions consuming these.
We call these network transactions and at present this is the only entity that is allowed to create such transactions.
This restriction is will be lifted in the future, but for now this component _must_ be enabled to have support for
This restriction may be lifted in the future, but for now this component _must_ be enabled to have support for
network transactions.

The mempool is monitored via a gRPC event stream served by the block-producer.
@@ -66,3 +72,19 @@ number of failures, preventing resource exhaustion. The threshold can be set wit
The builder also exposes an internal gRPC server that the RPC component uses to proxy debugging endpoints such as
`GetNoteError`. In bundled mode this is wired automatically; in distributed mode operators must set
`--ntx-builder.url` (or `MIDEN_NODE_NTX_BUILDER_URL`) on the RPC component.

## Validator

The validator is responsible for verifying the integrity of the blockchain by signing new blocks before they can be committed.

At the moment this is implemented by re-executing all transactions sent here, double-checking their integrity. This
also guards against bugs in the proving or execution systems by backing up the transactions and their private inputs. This
forms part of our training wheels while Miden matures.

The validator signs a new block if:

- all transactions were previously verified
- block proof is valid
- block delta matches the aggregated transaction deltas
- block header is valid and matches the data
- block builds on the current chain tip
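The signing decision above is a conjunction of independent checks, which can be sketched as follows. The struct and function names here are hypothetical, not the node's actual API; the real validator performs these checks against full block data.

```rust
// Hypothetical sketch of the validator's signing decision.
#[derive(Clone, Copy)]
struct BlockChecks {
    transactions_verified: bool, // all transactions were previously verified
    proof_valid: bool,           // block proof is valid
    delta_matches: bool,         // block delta matches aggregated tx deltas
    header_valid: bool,          // header is valid and matches the data
    extends_chain_tip: bool,     // block builds on the current chain tip
}

fn should_sign(c: BlockChecks) -> bool {
    c.transactions_verified
        && c.proof_valid
        && c.delta_matches
        && c.header_valid
        && c.extends_chain_tip
}

fn main() {
    let checks = BlockChecks {
        transactions_verified: true,
        proof_valid: true,
        delta_matches: true,
        header_valid: true,
        extends_chain_tip: true,
    };
    println!("sign block: {}", should_sign(checks));
}
```

A failure of any single check is enough to withhold the signature.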
2 changes: 1 addition & 1 deletion docs/external/src/operator/installation.md
@@ -29,7 +29,7 @@ can be used so long as the checksum file and the package file are in the same fo

## Install using `cargo`

Install Rust version **1.89** or greater using the official Rust installation
Install Rust using the official Rust installation
[instructions](https://www.rust-lang.org/tools/install).

Depending on the platform, you may need to install additional libraries. For example, on Ubuntu 22.04 the following
1 change: 0 additions & 1 deletion docs/external/src/operator/versioning.md
@@ -10,7 +10,6 @@ The following is considered the node's public API, and will therefore be conside

- RPC gRPC specification (note that this _excludes_ internal inter-component gRPC schemas).
- Node configuration options.
- Database schema changes which cannot be reverted.
- Large protocol and behavioral changes.

We intend to include our OpenTelemetry trace specification in this once it stabilizes.
3 changes: 2 additions & 1 deletion docs/internal/src/SUMMARY.md
@@ -1,4 +1,4 @@
<!-- This file is used to represent aggregate documentation for the Miden book -->
<!-- This file is used to represent aggregate documentation for the Miden book -->

# Summary

@@ -11,4 +11,5 @@
- [Store](./store.md)
- [Block producer](./block-producer.md)
- [Network transaction builder](./ntx-builder.md)
- [Validator](./validator.md)
- [Common issues and other oddities](./oddities.md)
4 changes: 2 additions & 2 deletions docs/internal/src/assets/node_architecture.svg
4 changes: 0 additions & 4 deletions docs/internal/src/assets/operator_architecture.svg

This file was deleted.

4 changes: 0 additions & 4 deletions docs/internal/src/assets/workspace_tree.svg

This file was deleted.

19 changes: 13 additions & 6 deletions docs/internal/src/block-producer.md
@@ -1,11 +1,9 @@
# Block Producer Component

The block-producer is responsible for ordering transactions into batches, and batches into blocks, and creating the
proofs for these. Proving is usually outsourced to a remote prover but can be done locally if throughput isn't
proofs for batches. Proving is usually outsourced to a remote prover but can be done locally if throughput isn't
essential, e.g. for test purposes on a local node.

It hosts a single gRPC endpoint to which the RPC component can forward new transactions.

The core of the block-producer revolves around the mempool, which forms a DAG of all in-flight transactions and batches.
It also ensures all invariants of the transactions are upheld, e.g. that the account's current state matches the transaction's
initial state, that all input notes are valid and unconsumed, and that the transaction hasn't expired.
@@ -17,8 +15,17 @@ the mempool where it can be included in a block.

## Block production

Proven batches are selected from the mempool periodically to form the next block. The block is then proven and committed
to the store. At this point all transactions and batches in the block are removed from the mempool as committed.
Proven batches are selected from the mempool periodically to form the next block. The block is then built and submitted
to the store, which ensures it gets signed by the validator before it is committed. At this point all transactions and
batches in the block are marked in the mempool as committed.
Comment on lines +18 to +20
Contributor


I believe the block producer -> store flow is a bit inverted here. Specifically:

  • The block producer sends the block to the validator and gets a signature back.
  • Then, once the block is signed, the block producer sends it to the store.


## Mempool data pruning

The mempool keeps the `N` most recent blocks locally to give incoming transactions a grace period during which we can verify their
state against the store and the local state deltas in the mempool. Without this overlap, we would constantly be racing
transaction checks against the store with newly committed blocks.

After each new block, the `N+1`th oldest block and its batches and transactions are pruned from the mempool state.
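The retention policy can be sketched as a bounded queue of recent blocks. This is a hypothetical illustration; the real mempool prunes batches and transactions along with each evicted block.

```rust
use std::collections::VecDeque;

// Hypothetical sketch of the N-block retention policy described above.
struct RecentBlocks {
    n: usize,
    blocks: VecDeque<u64>,
}

impl RecentBlocks {
    fn new(n: usize) -> Self {
        Self { n, blocks: VecDeque::new() }
    }

    /// Record a newly committed block; returns the pruned block, if any.
    fn commit(&mut self, block_num: u64) -> Option<u64> {
        self.blocks.push_back(block_num);
        if self.blocks.len() > self.n {
            // The N+1th oldest block falls out of the retention window.
            self.blocks.pop_front()
        } else {
            None
        }
    }
}

fn main() {
    let mut recent = RecentBlocks::new(2);
    recent.commit(1);
    recent.commit(2);
    println!("pruned: {:?}", recent.commit(3)); // pruned: Some(1)
}
```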

## Transaction lifecycle

@@ -42,5 +49,5 @@ above lifecycle (which effectively shows the happy path). This can occur if:

- The transaction expires before being included in a block.
- Any parent transaction is dropped (which will revert the state, invalidating child transactions).
- It causes proving or any part of block/batch creation to fail. This is a fail-safe against unforeseen bugs, removing
- It causes proving or any part of block/batch creation to fail repeatedly. This is a fail-safe against unforeseen bugs, removing
problematic (but potentially valid) transactions from the mempool to prevent outages.
23 changes: 5 additions & 18 deletions docs/internal/src/codebase.md
@@ -3,26 +3,13 @@
The code is organised using a Rust workspace with separate crates for the node and remote prover binaries, a crate for each node
component, a couple of gRPC-related codegen crates, and a catch-all utilities crate.

The primary artifacts are the node and remote prover binaries. The library crates are not intended for external usage, but
The primary execution artifacts are the node and remote prover binaries. The library crates are not intended for external usage, but
instead simply serve to enforce code organisation and decoupling.

| Crate | Description |
| ---------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `node` | The node executable. Configure and run the node and its components. |
| `remote-prover` | Remote prover executables. Includes workers and proxies. |
| `remote-prover-client` | Remote prover client implementation. |
| `block-producer` | Block-producer component implementation. |
| `store` | Store component implementation. |
| `ntx-builder` | Network transaction builder component implementation. |
| `rpc` | RPC component implementation. |
| `proto` | Contains and exports all protobuf definitions. |
| `rpc-proto` | Contains the RPC protobuf definitions. Currently this is an awkward clone of `proto` because we re-use the definitions from the internal protobuf types. |
| `utils` | Variety of utility functionality. |
| `test-macro` | Provides a procedural macro to enable tracing in tests. |

---
We have a top-level `proto` crate, which contains the external and internal gRPC and protobuf schemas. It also exposes the
`tonic`/`prost` file descriptors for each gRPC service for convenience. We then have an internal `proto` crate in `./crates`,
which uses the above file descriptors to generate the actual service traits, and also defines some domain objects and other gRPC
shared utilities and definitions.

> [!NOTE] > [`miden-protocol`](https://github.com/0xMiden/miden-protocol) is an important dependency which
> contains the core Miden protocol definitions e.g. accounts, notes, transactions etc.

[![workspace dependency tree](assets/workspace_tree.svg)](assets/workspace_tree.svg)
2 changes: 1 addition & 1 deletion docs/internal/src/components.md
@@ -1,6 +1,6 @@
# Node components

The node is split into three distinct components that communicate via gRPC. See the
The node is split into five distinct components that communicate via gRPC. See the
[Operator guide#architecture](https://0xmiden.github.io/miden-docs/miden-node/operator/architecture) chapter for an overview of each component.

The following sections will describe the inner architecture of each component.
11 changes: 5 additions & 6 deletions docs/internal/src/ntx-builder.md
@@ -8,13 +8,13 @@ Network accounts are a special type of fully public account which contains no au
whose state can therefore be updated by anyone (in theory). Such accounts are required when publicly
mutable state is needed.

The issue with publicly mutable state is that transactions against an account must be sequential
An issue with publicly mutable state is that transactions against an account must be sequential
and require the previous account commitment in order to create the transaction proof. This conflicts
with Miden's client side proving and concurrency model since users would race each other to submit
transactions against such an account.

Instead the solution is to have the network be responsible for driving the account state forward,
and users can interact with the account using notes. Notes don't require a specific ordering and
Instead our solution is to have the network be responsible for driving the account state forward,
and users can interact with the account only indirectly using notes. Notes don't require a specific ordering and
can be created concurrently without worrying about conflicts. We call these network notes and they
always target a specific network account.

@@ -51,9 +51,8 @@ argument (default: 5 minutes).
Deactivated actors are re-spawned when new notes targeting their account are detected by the
coordinator (via the `send_targeted` path).

If an actor repeatedly crashes (shuts down due to a database error), its crash count is tracked by
the coordinator. Once the count reaches the configurable threshold, the account is **deactivated**
and no new actor will be spawned for it. This prevents resource exhaustion from a persistently
Each actor's crash count is tracked, and once the count reaches a configurable threshold, the account is
**deactivated** and no new actor will be spawned for it. This prevents resource exhaustion from a persistently
failing account. The threshold is configurable via the `--ntx-builder.max-account-crashes` CLI
argument (default: 10).
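The deactivation logic can be sketched as a crash counter per account. All names here are hypothetical and the real coordinator tracks considerably more state; this only illustrates the threshold behaviour.

```rust
use std::collections::{HashMap, HashSet};

// Sketch of the crash-count deactivation described above.
struct Coordinator {
    max_crashes: u32,
    crashes: HashMap<u64, u32>, // account id -> crash count
    deactivated: HashSet<u64>,
}

impl Coordinator {
    fn new(max_crashes: u32) -> Self {
        Self { max_crashes, crashes: HashMap::new(), deactivated: HashSet::new() }
    }

    fn record_crash(&mut self, account: u64) {
        let count = self.crashes.entry(account).or_insert(0);
        *count += 1;
        if *count >= self.max_crashes {
            // Threshold reached: no new actor will be spawned for this account.
            self.deactivated.insert(account);
        }
    }

    fn is_deactivated(&self, account: u64) -> bool {
        self.deactivated.contains(&account)
    }
}

fn main() {
    let mut coordinator = Coordinator::new(2);
    coordinator.record_crash(7);
    coordinator.record_crash(7);
    println!("deactivated: {}", coordinator.is_deactivated(7));
}
```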

7 changes: 7 additions & 0 deletions docs/internal/src/oddities.md
@@ -13,3 +13,10 @@ between the chain MMR and the block hash:
To work around this, the inclusion of a block hash in the chain MMR is delayed by one block. Or put differently, block
`N` is responsible for inserting block `N-1` into the chain MMR. This does _not_ break blockchain linkage because
the block header (and therefore hash) still includes the previous block's hash.

## Crate: `rocksdb-cxx-linkage-fix`

This crate is used to ensure that statically linking the `rocksdb` library works as intended.

More information can be found in the crate's doc comments. For now, this crate must be included as part of the
`build.rs` of the large SMT crate, which depends on `rocksdb`.
35 changes: 9 additions & 26 deletions docs/internal/src/rpc.md
@@ -8,16 +8,19 @@ get rejected _before_ reaching the store and block-producer, reducing their load
the proofs of submitting transactions. This allows the block-producer to skip proof verification (it trusts the RPC
component), reducing the load in this critical component.

## RPC Versioning
## RPC Versioning and the HTTP `ACCEPT` header

The RPC server enforces version requirements against connecting clients that provide the HTTP ACCEPT header. When this header is provided, its corresponding value must follow this format: `application/vnd.miden.0.9.0+grpc`.
The RPC component allows clients to negotiate their desired Miden RPC version via the well-known HTTP `ACCEPT` header, using the following format:

If there is a mismatch in version, clients will encounter an error while executing gRPC requests against the RPC server with the following details:
```sh
application/vnd.miden; version=<version-req>; genesis=<genesis-commitment>
```

- gRPC status code: 3 (Invalid Argument)
- gRPC message: Missing required ACCEPT header
The `version` parameter lets the client specify their supported version, and the server will attempt to comply if it can. At this early stage, only client versions which are semver compatible with the
server version are likely to be accepted, i.e. the server in all likelihood only supports a _single version_.

The server will reject any version that does not have the same major and minor version to it. This behaviour will change after v1.0.0., at which point only the major version will be taken into account.
The `genesis` property is intended to let the client confirm they are on the correct network, by specifying the network's genesis commitment. This guards against operating on the wrong network,
as well as against network resets.
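A minimal sketch of parsing the `ACCEPT` value shown above. Only the header shape comes from this documentation; the parsing code itself is illustrative and is not the RPC component's implementation.

```rust
// Hypothetical parser for the `application/vnd.miden; version=..; genesis=..`
// header format described above.
fn parse_accept(header: &str) -> Option<(String, String)> {
    let mut parts = header.split(';').map(str::trim);
    // The media type must match exactly.
    if parts.next()? != "application/vnd.miden" {
        return None;
    }
    let (mut version, mut genesis) = (None, None);
    for part in parts {
        let (key, value) = part.split_once('=')?;
        match key.trim() {
            "version" => version = Some(value.trim().to_owned()),
            "genesis" => genesis = Some(value.trim().to_owned()),
            _ => {} // ignore unknown parameters
        }
    }
    Some((version?, genesis?))
}

fn main() {
    let header = "application/vnd.miden; version=0.9.0; genesis=0xabc123";
    println!("{:?}", parse_accept(header));
}
```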

## Query limits (`GetLimits`)

@@ -38,23 +41,3 @@ Error handling follows this pattern:
1. **Domain Errors**: Business logic errors are defined in domain-specific enums
2. **gRPC Conversion**: Domain errors are converted to gRPC `Status` objects with structured details
3. **Error Details**: Specific error codes are embedded in `Status.details` as single bytes

### SubmitProvenTransaction Errors

Transaction submission errors are:

```rust
enum SubmitProvenTransactionGrpcError {
Internal = 0,
DeserializationFailed = 1,
InvalidTransactionProof = 2,
IncorrectAccountInitialCommitment = 3,
InputNotesAlreadyConsumed = 4,
UnauthenticatedNotesNotFound = 5,
OutputNotesAlreadyExist = 6,
TransactionExpired = 7,
}
```

Error codes are embedded as single bytes in `Status.details`

5 changes: 0 additions & 5 deletions docs/internal/src/store.md
@@ -10,10 +10,6 @@ information is always read from disk. We will need to revisit this in the future
We have database migration support in place but don't actively use it yet. There is only the latest schema, and we reset
chain state (aka nuke the existing database) on each release.

Note that the migration logic includes both a schema number _and_ a hash based on the sql schema. These are both checked
on node startup to ensure that any existing database matches the expected schema. If you're seeing database failures on
startup its likely that you created the database _before_ making schema changes resulting in different schema hashes.

## RocksDB tree storage

The account and nullifier trees are persisted in separate RocksDB instances under
@@ -27,7 +23,6 @@ bits vary by depth (8.0–12.0) and memtables are 128 MiB per column family. See
full fixed configuration. Runtime-tuneable parameters are documented in the
[operator usage guide](https://github.com/0xMiden/node/blob/next/docs/external/src/operator/usage.md#rocksdb-tuning).


## Architecture

The store consists mainly of a gRPC server which answers requests from the RPC and block-producer components, as well as