diff --git a/docs/index.md b/docs/index.md index 81d34074f8..a44e7c380d 100644 --- a/docs/index.md +++ b/docs/index.md @@ -140,6 +140,9 @@ - [QA Guide & Instructions for Functional Requirement Tests](./topics/functionalRequirementTests.md) - [Double Spends](./topics/architecture/understandingDoubleSpends.md) - [Two Phase Commit](./topics/features/two_phase_commit.md) +- [Peer Registry and Reputation System](./topics/features/peer_registry_reputation.md) +- [UTXO Lock Records](./topics/features/utxo_lock_records.md) +- [Dashboard](./topics/dashboard.md) ----- diff --git a/docs/references/settings/services/p2p_settings.md b/docs/references/settings/services/p2p_settings.md index 6480b329b0..2bcefd0ee6 100644 --- a/docs/references/settings/services/p2p_settings.md +++ b/docs/references/settings/services/p2p_settings.md @@ -29,19 +29,31 @@ | ForceSyncPeer | string | "" | p2p_force_sync_peer | **CRITICAL** - Forced sync peer override | | SharePrivateAddresses | bool | true | p2p_share_private_addresses | Private address advertisement | | AllowPrunedNodeFallback | bool | true | p2p_allow_pruned_node_fallback | **CRITICAL** - Pruned node fallback behavior | +| DHTMode | string | "server" | p2p_dht_mode | DHT operation mode ("server" or "client") | +| DHTCleanupInterval | time.Duration | 24h | p2p_dht_cleanup_interval | DHT provider record cleanup interval | +| PeerMapMaxSize | int | 100000 | p2p_peer_map_max_size | Maximum entries in peer maps | +| PeerMapTTL | time.Duration | 30m | p2p_peer_map_ttl | Time-to-live for peer map entries | +| PeerMapCleanupInterval | time.Duration | 5m | p2p_peer_map_cleanup_interval | Peer map cleanup interval | +| EnableNAT | bool | false | p2p_enable_nat | Enable UPnP/NAT-PMP port mapping | +| EnableMDNS | bool | false | p2p_enable_mdns | Enable mDNS peer discovery | +| AllowPrivateIPs | bool | false | p2p_allow_private_ips | Allow connections to private IP addresses | +| SyncCoordinatorPeriodicEvaluationInterval | time.Duration | - | p2p_sync_coordinator_periodic_evaluation_interval | Sync coordinator evaluation interval | ## Configuration Dependencies ### Forced Sync Peer Selection + - `ForceSyncPeer` overrides automatic peer selection - `AllowPrunedNodeFallback` affects fallback behavior when forced peer unavailable ### Network Address Management + - `ListenAddresses` and `AdvertiseAddresses` control network presence - `Port` used as fallback when addresses don't specify port - `SharePrivateAddresses` controls address advertisement behavior ### Peer Connection Management + - `StaticPeers` ensures persistent connections - `RelayPeers` for NAT traversal - `PeerCacheDir` for peer persistence @@ -89,3 +101,57 @@ p2p_health_remove_after_failures = 3 p2p_force_sync_peer = "peer-id-12345" p2p_allow_pruned_node_fallback = true ``` + +### DHT Configuration + +The DHT (Distributed Hash Table) can operate in two modes: + +```text +# Server mode (default) - advertises on DHT and stores provider records +p2p_dht_mode = "server" +p2p_dht_cleanup_interval = 24h + +# Client mode - query-only, no provider storage (reduces network overhead) +p2p_dht_mode = "client" +``` + +**When to use client mode:** + +- Nodes that don't need to be discoverable by others +- Reduced network overhead and storage requirements +- Behind restrictive NAT/firewall + +### Peer Registry Configuration + +The peer registry persists peer reputation data across restarts: + +```text +# Directory for peer cache file (default: binary directory) +p2p_peer_cache_dir = "/var/lib/teranode/p2p" + +# Peer map 
memory management +p2p_peer_map_max_size = 100000 +p2p_peer_map_ttl = 30m +p2p_peer_map_cleanup_interval = 5m +``` + +### Network Security Configuration + +**IMPORTANT**: These settings can trigger network scanning alerts on shared hosting. + +```text +# Enable only on private/local networks +p2p_enable_nat = false # UPnP/NAT-PMP port mapping +p2p_enable_mdns = false # mDNS peer discovery +p2p_allow_private_ips = false # RFC1918 private networks +``` + +### Peer Selection and Reputation + +For details on how peer selection and reputation scoring work, see [Peer Registry and Reputation System](../../../topics/features/peer_registry_reputation.md). + +Key settings affecting peer selection: + +- `p2p_force_sync_peer` - Override automatic selection with specific peer +- `p2p_allow_pruned_node_fallback` - Whether to fall back to pruned nodes +- `p2p_peer_cache_dir` - Where peer reputation data is persisted diff --git a/docs/topics/dashboard.md b/docs/topics/dashboard.md new file mode 100644 index 0000000000..58ec4c4d5c --- /dev/null +++ b/docs/topics/dashboard.md @@ -0,0 +1,471 @@ +# Teranode Dashboard + +## Index + +1. [Overview](#1-overview) +2. [Features](#2-features) + - [2.1. Home](#21-home) + - [2.2. Blockchain Viewer](#22-blockchain-viewer) + - [2.3. P2P Message Monitor](#23-p2p-message-monitor) + - [2.4. Peers](#24-peers) + - [2.5. Network Status](#25-network-status) + - [2.6. Forks](#26-forks) + - [2.7. Admin](#27-admin) + - [2.8. WebSocket Test](#28-websocket-test) +3. [Technology](#3-technology) +4. [Running the Dashboard](#4-running-the-dashboard) +5. [Configuration](#5-configuration) +6. [Related Documentation](#6-related-documentation) + +## 1. Overview + +The Teranode Dashboard is a web-based user interface for monitoring and managing Teranode nodes. Built with SvelteKit and TypeScript, it provides real-time visibility into blockchain state, network peers, P2P connections, and administrative operations. + +The dashboard connects to the Teranode Asset Server via HTTP and WebSocket APIs to retrieve data and receive live updates about blockchain events, peer status, and system health. + +**Key Capabilities:** + +- Real-time blockchain state monitoring +- FSM (Finite State Machine) management and state transitions +- Block invalidation and revalidation +- Peer connection and reputation management +- Block, transaction, UTXO, and subtree viewing +- Merkle proof visualization (BRC-74 BUMP format) +- P2P message monitoring and filtering +- Fork visualization and chain analysis +- WebSocket-based live updates + +## 2. Features + +### 2.1. Home + +![Dashboard_Main.png](img/Dashboard_Main.png) + +The home page (`/home`) provides an overview of the node's current state: + +**Block Statistics Card:** + +- Block count and transaction count +- Maximum chain height +- Average block size +- Average transactions per block +- Transactions per second +- Chain work (proof-of-work) +- Manual refresh (Ctrl+R keyboard shortcut) + +**Block Graph:** + +- Interactive visualization of block data +- Configurable time periods (24h, 7d, etc.) + +### 2.2. 
Blockchain Viewer + +The viewer page (`/viewer`) allows inspection of blockchain data with multiple specialized views: + +**Block Viewer** (`/viewer/block?q={hash}`): + +- Block details: hash, height, timestamp, miner info +- Transaction count and block size +- Merkle root and previous block hash +- Coinbase transaction details +- Subtree structure information +- Link to block ancestors + +**Transaction Viewer** (`/viewer/tx?q={hash}`): + +- Transaction details: hash, block height, timestamp +- Input and output counts +- UTXO details for inputs and outputs +- **Merkle Proof Visualizer**: Interactive BRC-74 BUMP (BSV Unified Merkle Path) format visualization + +**UTXO Viewer** (`/viewer/utxo?q={hash}:{index}`): + +- Outpoint information (txid:vout) +- Value and script pubkey +- Spending status +- UTXO metadata + +**Subtree Viewer** (`/viewer/subtree?q={hash}`): + +- Subtree details and metadata +- Merkle tree visualization +- List of transactions in subtree + +**Blocks Table:** + +- Paginated list of recent blocks +- Sortable columns +- Clickable links to block details + +![Dashboard_Blocks.png](img/Dashboard_Blocks.png) + +**Search Functionality:** + +- Search by block height, block hash, transaction ID, or UTXO outpoint + +### 2.3. P2P Message Monitor + +![Dashboard_P2P.png](img/Dashboard_P2P.png) + +The P2P page (`/p2p`) provides real-time monitoring of P2P network messages: + +**Message Capture:** + +- Real-time WebSocket message streaming +- Live/Paused toggle for snapshot viewing + +**Filtering Options:** + +- Filter by message type (dropdown with discovered types) +- Reverse filter (exclude specific types) +- Free-text search across: type, hash, URL, miner, client_name, peer_id, fsm_state, version +- Toggle to show/hide local node messages + +**View Modes:** + +- **By Peer**: Messages grouped by peer ID with collapsible sections +- **By Time**: Chronological linear view of all messages + +**Display Options:** + +- Raw JSON mode toggle +- Expandable message content +- Connection status indicator +- Message count tracking + +### 2.4. 
Peers + +![Dashboard_Peers.png](img/Dashboard_Peers.png) + +The peers page (`/peers`) provides comprehensive peer management with the reputation system: + +**Peer Table:** + +| Column | Description | +|--------|-------------| +| Peer ID / Client Name | Identifier with tooltip showing full peer ID | +| Height | Peer's reported blockchain height | +| Reputation Score | Color-coded score (excellent/good/fair/poor) | +| Metrics | Link to view detailed catchup metrics | +| Bytes Received | Data received from peer | +| DataHub URL | Peer's data hub endpoint | + +**Peer Statistics:** + +- Total peers count +- Connected peers count +- Good reputation peers count + +**Features:** + +- Sortable columns (click to sort, reverse toggle) +- Configurable pagination (5-100 items per page) +- Live status indicator + +**Catchup Details Modal** (click "View" in Metrics column): + +Performance Metrics: + +- Reputation score (0-100) +- Success rate percentage +- Total attempts, successes, failures +- Malicious count +- Average response time + +Last Activity: + +- Last attempt timestamp +- Last success timestamp +- Last failure timestamp +- Last catchup error (if any) + +Peer Information: + +- Full peer ID +- Client name +- Current height +- DataHub URL + +**Catchup Status Bar** (shown during blockchain synchronization): + +- Animated progress indicator +- Syncing peer ID and URL +- Target block hash and height +- Starting height and progress (blocks validated / total) +- Progress percentage with visual bar +- Fork depth (if syncing a fork) +- Common ancestor information +- Previous attempt failure details (error type, duration, blocks validated) + +### 2.5. Network Status + +![Dashboard_Network.png](img/Dashboard_Network.png) + +The network page (`/network`) shows connected nodes with real-time WebSocket updates: + +**Connected Nodes Table:** + +| Column | Description | +|--------|-------------| +| Client Name | Node identifier (current node highlighted) | +| Best Height | Node's chain tip height | +| Best Block Hash | Current best block with miner info | +| Chain Rank | Chainwork score for ranking | +| FSM State | Current state machine state | +| Connected Peers | Number of connected peers | +| Uptime | Relative uptime display | + +**Features:** + +- Chainwork score calculation for visual ranking +- Multiple sort options +- Configurable pagination +- Live status indicator +- Real-time updates via WebSocket + +### 2.6. Forks + +The forks page (`/forks`) displays blockchain fork visualization for a specific block. + +**Accessing the Forks Page:** + +This page is accessed from the Block Viewer. When viewing a block's details, click the "forks" link in the Block Details Card to see the fork tree for that block. + +**Features:** + +- **Tree Visualization**: Interactive fork tree with configurable orientation (left-to-right, top-to-bottom, etc.) +- **Fork Explorer**: View alternative chain paths branching from the selected block +- **Responsive Layout**: Orientation changes based on device +- **Interactive Nodes**: Click on blocks in the tree to explore + +### 2.7. Admin + +![Dashboard_Admin.png](img/Dashboard_Admin.png) + +The admin page (`/admin`) provides administrative operations. 
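Each panel here is a thin wrapper over the Asset Server HTTP API listed in Section 5. As an illustration, a minimal Go sketch that reads the current FSM state — the host and port are placeholders for this sketch, and a real call would also need the authenticated session noted below:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// GET /fsm/state is one of the endpoints listed in Section 5;
	// localhost:8090 is a placeholder for your Asset Server's HTTP address.
	resp, err := http.Get("http://localhost:8090/fsm/state")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Printf("FSM state: %s\n", body)
}
```
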
**This page requires authentication.** + +#### FSM (Finite State Machine) Management + +**Current State Display:** + +- Shows blockchain state: IDLE, RUNNING, CATCHING BLOCKS, LEGACY SYNCING, DISCONNECTED + +**State Transitions:** + +| Event | Description | +|-------|-------------| +| RUN | Start blockchain processing | +| STOP | Stop blockchain (transition to IDLE) | +| CATCHUPBLOCKS | Enter block catchup mode | +| LEGACYSYNC | Enter legacy sync mode | + +- Dynamic button UI showing available events for current state +- Custom event submission + +#### Block Invalidation and Revalidation + +**Invalidate Block:** + +![Dashboard_Invalidate_Revalidate.png](img/Dashboard_Invalidate_Revalidate.png) + +- Input field for block hash (64 hex characters) +- Real-time validation feedback (valid/invalid format) +- Mark any block as invalid + +**Revalidate Block:** + +- Input field for previously invalidated block hash +- Re-validate and restore block to valid state + +**Invalid Blocks List:** + +- Table showing last 5 invalidated blocks +- Columns: Height, Hash (link to viewer), Size +- Quick re-validate button for each block + +#### Peer Reputation Management + +**Reset Peer Reputations:** + +- Reset all peer reputation scores to neutral (50.0) +- Clears all interaction metrics +- Useful for fresh start after network issues + +### 2.8. WebSocket Test + +The WebSocket test page (`/wstest`) provides connection testing tools: + +- Configurable WebSocket URL +- Connect/Disconnect controls +- Real-time message capture +- Connection log with timestamps +- Raw message display +- Message count tracking +- First node_status logging + +## 3. Technology + +The dashboard is built with modern web technologies: + +### Framework and Language + +- **SvelteKit**: Full-stack framework for building web applications +- **TypeScript**: Type-safe JavaScript for improved developer experience +- **Vite**: Fast build tool and development server + +### UI Components + +- **D3.js**: Data visualization library for charts and graphs +- **ECharts**: Rich interactive charting library +- **Custom Svelte Components**: Reusable UI components in `/src/lib/` + +### State Management + +- **Svelte Stores**: Reactive state management + - `authStore`: Authentication state + - `listenerStore`: WebSocket event listeners + - Node and P2P data stores + +### API Integration + +- **REST API**: HTTP endpoints for data retrieval +- **WebSocket**: Real-time updates for blockchain events +- **i18n**: Internationalization support via `svelte-i18next` + +### Build Configuration + +- **Static Adapter**: Builds as single-page application with `index.html` fallback +- **Source Maps**: Enabled for debugging +- **Path Aliases**: `$internal` for project-specific components + +## 4. Running the Dashboard + +### Development Mode + +```bash +# Install dependencies +npm install --prefix ./ui/dashboard + +# Run development server +npm run dev --prefix ./ui/dashboard +``` + +The dashboard will be available at `http://localhost:5173` by default. + +### Production Build + +```bash +# Build for production +npm run build --prefix ./ui/dashboard + +# The built files will be in ui/dashboard/build/ +``` + +### With Teranode + +The dashboard is typically run alongside Teranode using make targets: + +```bash +# Run both Teranode and dashboard in development mode +make dev + +# Run only the dashboard +make dev-dashboard +``` + +## 5. 
Configuration + +### Environment Variables + +The dashboard uses environment variables for configuration: + +- **Asset Server URL**: URL of the Teranode Asset Server for API calls +- **WebSocket URL**: WebSocket endpoint for live updates + +### API Endpoints + +The dashboard connects to these Asset Server endpoints: + +**Authentication:** + +- `POST /api/auth/login`: Login +- `GET /api/auth/check`: Session validation +- `POST /api/auth/logout`: Logout + +**FSM Management:** + +- `GET /fsm/state`: Get current state +- `GET /fsm/events`: Get available events +- `POST /fsm/state`: Send custom event +- `POST /fsm/run`: Start blockchain +- `POST /fsm/idle`: Stop blockchain +- `POST /fsm/catchup`: Enter catchup mode +- `POST /fsm/legacysync`: Enter legacy sync mode + +**Block Operations:** + +- `GET /blockstats`: Block statistics +- `GET /blockgraphdata/{period}`: Graph data +- `GET /lastblocks`: Recent blocks +- `GET /blocks`: Paginated blocks +- `GET /block/{hash}/json`: Block details +- `GET /block/{hash}/subtrees/json`: Block subtrees +- `GET /block/{hash}/forks`: Fork tree +- `POST /block/invalidate`: Invalidate block +- `POST /block/revalidate`: Revalidate block +- `GET /blocks/invalid`: Get invalid blocks + +**Transaction and UTXO:** + +- `GET /tx/{hash}/json`: Transaction details +- `GET /utxo/{hash}:{index}/json`: UTXO details +- `GET /merkle_proof/{hash}/json`: Merkle proof (BUMP format) +- `GET /search`: Search functionality + +**Subtrees:** + +- `GET /subtree/{hash}/json`: Subtree details +- `GET /subtree/{hash}/txs/json`: Subtree transactions + +**Peer Management:** + +- `GET /api/p2p/peers`: Peer registry +- `POST /api/p2p/reset-reputation`: Reset peer reputations +- `GET /api/catchup/status`: Catchup progress + +**Blockchain:** + +- `GET /api/blockchain/locator`: Block locator for consensus + +## 6. 
Related Documentation + +- [Asset Server Documentation](services/assetServer.md) - Backend API server +- [P2P Service Documentation](services/p2p.md) - P2P networking +- [Peer Registry and Reputation System](features/peer_registry_reputation.md) - Peer management details +- [Block Validation Service](services/blockValidation.md) - Catchup and sync information +- [Blockchain Service](services/blockchain.md) - FSM and state management + +--- + +## Appendix: Project Structure + +```text +ui/dashboard/ +├── src/ +│ ├── lib/ # Shared library components +│ ├── internal/ # Project-specific components +│ └── routes/ # SvelteKit file-based routing +│ ├── admin/ # Admin operations page +│ ├── ancestors/ # Common ancestors page +│ ├── api/ # API route handlers +│ ├── forks/ # Fork visualization page +│ ├── home/ # Home/overview page +│ ├── login/ # Authentication page +│ ├── network/ # Network status page +│ ├── p2p/ # P2P message monitor page +│ ├── peers/ # Peer management page +│ ├── viewer/ # Block/transaction/UTXO viewer +│ └── wstest/ # WebSocket testing +├── static/ # Static assets +└── package.json # Dependencies and scripts +``` diff --git a/docs/topics/datamodel/utxo_data_model.md b/docs/topics/datamodel/utxo_data_model.md index 7606144969..4d11d2bc91 100644 --- a/docs/topics/datamodel/utxo_data_model.md +++ b/docs/topics/datamodel/utxo_data_model.md @@ -750,3 +750,39 @@ key = aerospike.NewKey(namespace, setName, keySource) - Pagination automatically triggered at 20K output threshold - `RECORD_TOO_BIG` error triggers retry with external storage - No application-level size restrictions on individual transactions + +**Multi-Record Transaction Consistency**: + +When transactions require multiple Aerospike records (>20K outputs), the system uses a lock record pattern to ensure atomic creation: + +1. **Lock Record**: A temporary record prevents concurrent creation attempts for the same transaction +2. **Creating Flag**: Each record has a `creating` flag that prevents UTXO spending until all records exist +3. **Two-Phase Commit**: Records are created with `creating=true`, then flags are cleared after all records succeed +4. **Auto-Recovery**: If creation fails partially, the system automatically recovers on next encounter + +**Record Layout for Large Transactions**: + +```text +Transaction with N batches (>20K outputs): + +Master Record (index 0): + + - Transaction metadata (TxID, version, fees, etc.) + - First 20,000 UTXOs + - TotalExtraRecs field indicating additional records + - Creating flag (cleared when complete) + +Child Records (indices 1 to N-1): + + - Additional UTXOs in batches of 20,000 + - Common metadata fields + - Creating flag (cleared when complete) + +Lock Record (index 0xFFFFFFFF): + + - Temporary, TTL-based (30-300 seconds) + - Prevents concurrent creation + - Released after all records created +``` + +For detailed documentation on the lock record pattern, see [UTXO Lock Record Pattern](../features/utxo_lock_records.md). 
diff --git a/docs/topics/features/img/peer_selection_sequence.puml b/docs/topics/features/img/peer_selection_sequence.puml new file mode 100644 index 0000000000..0f474a8453 --- /dev/null +++ b/docs/topics/features/img/peer_selection_sequence.puml @@ -0,0 +1,58 @@ +@startuml peer_selection_sequence +!theme plain +skinparam backgroundColor white +skinparam sequenceMessageAlign center + +title Two-Phase Peer Selection Process + +participant "Block Validation\nService" as BV +participant "Peer Selector" as PS +participant "Peer Registry" as PR + +BV -> PS: SelectSyncPeer(criteria) +activate PS + +PS -> PR: GetAllPeers() +activate PR +PR --> PS: []PeerInfo +deactivate PR + +== Phase 1: Full Node Selection == + +PS -> PS: Filter eligible peers +note right + - Not banned + - Has DataHub URL + - Height > 0 + - Reputation >= 20.0 + - Not in cooldown +end note + +PS -> PS: Filter for "full" storage mode + +alt Full nodes available + PS -> PS: Sort by:\n1. Reputation (desc)\n2. Ban score (asc)\n3. Height (desc)\n4. Peer ID + PS -> PS: Select top candidate + PS --> BV: Selected full node peer +else No full nodes available + + == Phase 2: Pruned Node Fallback == + + alt Fallback enabled + PS -> PS: Filter non-full peers + PS -> PS: Sort by:\n1. Reputation (desc)\n2. Ban score (asc)\n3. Height (asc)\n4. Peer ID + note right + Prefer youngest pruned + nodes to minimize + UTXO pruning risk + end note + PS -> PS: Select top candidate + PS --> BV: Selected pruned node peer + else Fallback disabled + PS --> BV: No peer available + end +end + +deactivate PS + +@enduml diff --git a/docs/topics/features/img/peer_selection_sequence.svg b/docs/topics/features/img/peer_selection_sequence.svg new file mode 100644 index 0000000000..f45f7489bb --- /dev/null +++ b/docs/topics/features/img/peer_selection_sequence.svg @@ -0,0 +1 @@ +Two-Phase Peer Selection ProcessBlock ValidationServiceBlock ValidationServicePeer SelectorPeer SelectorPeer RegistryPeer RegistrySelectSyncPeer(criteria)GetAllPeers()[]PeerInfoPhase 1: Full Node SelectionFilter eligible peers- Not banned- Has DataHub URL- Height > 0- Reputation >= 20.0- Not in cooldownFilter for "full" storage modealt[Full nodes available]Sort by:1. Reputation (desc)2. Ban score (asc)3. Height (desc)4. Peer IDSelect top candidateSelected full node peer[No full nodes available]Phase 2: Pruned Node Fallbackalt[Fallback enabled]Filter non-full peersSort by:1. Reputation (desc)2. Ban score (asc)3. Height (asc)4. Peer IDPrefer youngest prunednodes to minimizeUTXO pruning riskSelect top candidateSelected pruned node peer[Fallback disabled]No peer available \ No newline at end of file diff --git a/docs/topics/features/img/reputation_score_calculation.puml b/docs/topics/features/img/reputation_score_calculation.puml new file mode 100644 index 0000000000..91414c542f --- /dev/null +++ b/docs/topics/features/img/reputation_score_calculation.puml @@ -0,0 +1,59 @@ +@startuml reputation_score_calculation +!theme plain +skinparam backgroundColor white +skinparam activityBackgroundColor #f5f5f5 +skinparam activityBorderColor #333333 + +title Reputation Score Calculation Algorithm + +start + +:Receive peer data; + +if (MaliciousCount > 0?) 
then (yes) + :Set score = 5.0; + stop +else (no) +endif + +:Calculate success rate; +note right + successRate = + (successes / attempts) * 100 +end note + +:Apply weighted success rate; +note right + weightedSuccess = + successRate * 0.6 +end note + +:Add weighted base score; +note right + score = weightedSuccess + + (50.0 * 0.4) +end note + +if (Failure within last hour?) then (yes) + :Apply failure penalty; + note right + score = score - 15.0 + end note +else (no) +endif + +if (Success within last hour?) then (yes) + :Add recency bonus; + note right + score = score + 10.0 + end note +else (no) +endif + +:Clamp score to 0-100; + +:Return final score; + +stop + +@enduml diff --git a/docs/topics/features/img/reputation_score_calculation.svg b/docs/topics/features/img/reputation_score_calculation.svg new file mode 100644 index 0000000000..931b020e38 --- /dev/null +++ b/docs/topics/features/img/reputation_score_calculation.svg @@ -0,0 +1 @@ +Reputation Score Calculation AlgorithmReceive peer dataSet score = 5.0yesMaliciousCount > 0?nosuccessRate =(successes / attempts) * 100Calculate success rateweightedSuccess =successRate * 0.6Apply weighted success ratescore = weightedSuccess +(50.0 * 0.4)Add weighted base scorescore = score - 15.0Apply failure penaltyyesFailure within last hour?noscore = score + 10.0Add recency bonusyesSuccess within last hour?noClamp score to 0-100Return final score \ No newline at end of file diff --git a/docs/topics/features/img/sync_coordination_sequence.puml b/docs/topics/features/img/sync_coordination_sequence.puml new file mode 100644 index 0000000000..981a3b6277 --- /dev/null +++ b/docs/topics/features/img/sync_coordination_sequence.puml @@ -0,0 +1,77 @@ +@startuml sync_coordination_sequence +!theme plain +skinparam backgroundColor white +skinparam sequenceMessageAlign center + +title Sync Coordination: Block Validation and Peer Registry Integration + +participant "Block Validation\nService" as BV +participant "Peer Selector" as PS +participant "Peer Registry" as PR +participant "Selected Peer" as SP + +== Catchup Initialization == + +BV -> PS: SelectSyncPeer(localHeight) +PS -> PR: GetAllPeers() +PR --> PS: Peer list +PS -> PS: Apply selection algorithm +PS --> BV: Best peer for sync + +== Block Retrieval == + +BV -> PR: RecordSyncAttempt(peerID) +activate PR +PR -> PR: Update attempt timestamp +deactivate PR + +BV -> SP: RequestBlock(height) +activate SP + +alt Success + SP --> BV: Block data + deactivate SP + + BV -> BV: Validate block + + alt Block valid + BV -> PR: ReportSuccess(peerID, responseTime) + activate PR + PR -> PR: Increment successes + PR -> PR: Update avg response time + PR -> PR: Increment blocks received + PR -> PR: Recalculate reputation + deactivate PR + else Block invalid + BV -> PR: ReportMalicious(peerID) + activate PR + PR -> PR: Increment malicious count + PR -> PR: Set reputation = 5.0 + deactivate PR + BV -> PS: SelectSyncPeer(criteria)\nwith previous peer rotation + end + +else Failure/Timeout + SP --> BV: Error + deactivate SP + + BV -> PR: ReportFailure(peerID, error) + activate PR + PR -> PR: Increment failures + PR -> PR: Record error details + PR -> PR: Apply failure penalty + deactivate PR + + BV -> PS: SelectSyncPeer(criteria)\nwith previous peer rotation +end + +== Periodic Recovery == + +BV -> PR: ReconsiderBadPeers() +activate PR +PR -> PR: Find peers with score < 20.0 +PR -> PR: Check cooldown periods +PR -> PR: Reset eligible peers to 30.0 +deactivate PR + +@enduml diff --git 
a/docs/topics/features/img/sync_coordination_sequence.svg b/docs/topics/features/img/sync_coordination_sequence.svg new file mode 100644 index 0000000000..d865939872 --- /dev/null +++ b/docs/topics/features/img/sync_coordination_sequence.svg @@ -0,0 +1 @@ +Sync Coordination: Block Validation and Peer Registry IntegrationBlock ValidationServiceBlock ValidationServicePeer SelectorPeer SelectorPeer RegistryPeer RegistrySelected PeerSelected PeerCatchup InitializationSelectSyncPeer(localHeight)GetAllPeers()Peer listApply selection algorithmBest peer for syncBlock RetrievalRecordSyncAttempt(peerID)Update attempt timestampRequestBlock(height)alt[Success]Block dataValidate blockalt[Block valid]ReportSuccess(peerID, responseTime)Increment successesUpdate avg response timeIncrement blocks receivedRecalculate reputation[Block invalid]ReportMalicious(peerID)Increment malicious countSet reputation = 5.0SelectSyncPeer(criteria)with previous peer rotation[Failure/Timeout]ErrorReportFailure(peerID, error)Increment failuresRecord error detailsApply failure penaltySelectSyncPeer(criteria)with previous peer rotationPeriodic RecoveryReconsiderBadPeers()Find peers with score < 20.0Check cooldown periodsReset eligible peers to 30.0 \ No newline at end of file diff --git a/docs/topics/features/peer_registry_reputation.md b/docs/topics/features/peer_registry_reputation.md new file mode 100644 index 0000000000..a7bdbe4dbe --- /dev/null +++ b/docs/topics/features/peer_registry_reputation.md @@ -0,0 +1,384 @@ +# Peer Registry and Reputation System + +## Index + +1. [Overview](#1-overview) +2. [Purpose and Benefits](#2-purpose-and-benefits) +3. [Core Components](#3-core-components) + - [3.1. Peer Registry](#31-peer-registry) + - [3.2. Peer Selector](#32-peer-selector) + - [3.3. Reputation Scoring](#33-reputation-scoring) +4. [Reputation Algorithm](#4-reputation-algorithm) + - [4.1. Score Calculation](#41-score-calculation) + - [4.2. Scoring Events](#42-scoring-events) + - [4.3. Malicious Behavior Detection](#43-malicious-behavior-detection) +5. [Peer Selection Strategy](#5-peer-selection-strategy) + - [5.1. Selection Criteria](#51-selection-criteria) + - [5.2. Two-Phase Selection](#52-two-phase-selection) + - [5.3. Fallback to Pruned Nodes](#53-fallback-to-pruned-nodes) +6. [Integration with Other Services](#6-integration-with-other-services) + - [6.1. Block Validation Service](#61-block-validation-service) + - [6.2. Subtree Validation Service](#62-subtree-validation-service) +7. [Persistence and Recovery](#7-persistence-and-recovery) + - [7.1. Cache File Format](#71-cache-file-format) + - [7.2. Reputation Recovery](#72-reputation-recovery) +8. [Configuration Options](#8-configuration-options) +9. [Dashboard Monitoring](#9-dashboard-monitoring) +10. [Related Documentation](#10-related-documentation) + +## 1. Overview + +The Peer Registry and Reputation System is a comprehensive peer management framework introduced in Teranode to track, evaluate, and select the most reliable peers for network operations. This system replaces the previous ad-hoc peer health checking with a centralized registry that maintains detailed metrics about each peer's behavior, performance, and reliability. 
+ +The system consists of three main components: + +- **Peer Registry**: A thread-safe data store that tracks all peer information and interaction history +- **Peer Selector**: A stateless component that selects optimal peers based on reputation and other criteria +- **Reputation Scoring**: An algorithm that calculates peer reliability scores (0-100) based on success rates, response times, and behavior patterns + +This architecture enables intelligent peer selection for critical operations like blockchain synchronization (catchup), ensuring that Teranode preferentially interacts with reliable peers while avoiding problematic ones. + +## 2. Purpose and Benefits + +The Peer Registry and Reputation System addresses several critical needs in Teranode's P2P networking: + +### Network Reliability + +- **Intelligent Peer Selection**: Rather than randomly selecting peers, the system chooses the most reliable peers based on historical performance data +- **Malicious Peer Isolation**: Peers that provide invalid data are quickly identified and deprioritized +- **Graceful Degradation**: The system can recover from temporary peer issues through reputation recovery mechanisms + +### Performance Optimization + +- **Response Time Tracking**: The system maintains weighted averages of peer response times to prefer faster peers +- **Reduced Wasted Effort**: By avoiding unreliable peers, the system minimizes failed network operations +- **Efficient Catchup**: Block synchronization preferentially uses peers with proven track records + +### Operational Visibility + +- **Comprehensive Metrics**: Detailed tracking of interactions, successes, failures, and malicious behavior +- **Persistent History**: Peer metrics survive node restarts through cache persistence +- **Dashboard Integration**: Real-time peer monitoring through the UI dashboard + +### Security Benefits + +- **Sybil Attack Mitigation**: Reputation requirements make it costly for attackers to establish trusted peer identities +- **Invalid Block Protection**: Peers providing invalid blocks see immediate and severe reputation penalties +- **Automatic Recovery**: The system can reconsider previously bad peers after cooldown periods + +## 3. Core Components + +### 3.1. Peer Registry + +The `PeerRegistry` is a thread-safe data store that maintains comprehensive information about all known peers. It acts as a pure data store with no business logic, providing atomic operations for peer data management. 
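A minimal Go sketch of the registry's shape follows; the field names mirror the tables below, while the `sync.RWMutex`-based locking and the constructor are assumed implementation details, not verbatim source:

```go
package registry

import (
	"sync"
	"time"

	"github.com/libp2p/go-libp2p/core/peer"
)

// PeerInfo mirrors a subset of the fields documented in the tables below.
type PeerInfo struct {
	ID              peer.ID
	ClientName      string
	Height          int32
	DataHubURL      string
	Storage         string // "full", "pruned", or empty
	ReputationScore float64
	AvgResponseTime time.Duration
}

// PeerRegistry is a pure data store: business logic lives in callers, which
// read and mutate entries through lock-guarded atomic operations.
type PeerRegistry struct {
	mu    sync.RWMutex
	peers map[peer.ID]*PeerInfo
}

func NewPeerRegistry() *PeerRegistry {
	return &PeerRegistry{peers: make(map[peer.ID]*PeerInfo)}
}

// Update atomically applies a mutation to one peer's record, creating the
// entry with the neutral default score if the peer is not yet known.
func (r *PeerRegistry) Update(id peer.ID, fn func(*PeerInfo)) {
	r.mu.Lock()
	defer r.mu.Unlock()
	info, ok := r.peers[id]
	if !ok {
		info = &PeerInfo{ID: id, ReputationScore: 50.0}
		r.peers[id] = info
	}
	fn(info)
}
```
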
**Key Data Tracked:**

| Field | Type | Description |
|-------|------|-------------|
| `ID` | `peer.ID` | Unique peer identifier |
| `ClientName` | `string` | Human-readable client software name |
| `Height` | `int32` | Peer's reported blockchain height |
| `BlockHash` | `string` | Hash of peer's best block |
| `DataHubURL` | `string` | URL for fetching blocks/subtrees from peer |
| `Storage` | `string` | Storage mode: "full", "pruned", or empty |
| `ReputationScore` | `float64` | Overall reliability score (0-100) |
| `IsConnected` | `bool` | Whether peer is directly connected |
| `IsBanned` | `bool` | Whether peer is currently banned |

**Interaction Metrics:**

| Field | Type | Description |
|-------|------|-------------|
| `InteractionAttempts` | `int64` | Total interactions attempted |
| `InteractionSuccesses` | `int64` | Successful interactions |
| `InteractionFailures` | `int64` | Failed interactions |
| `MaliciousCount` | `int64` | Detected malicious behaviors |
| `AvgResponseTime` | `time.Duration` | Weighted average response time |
| `BlocksReceived` | `int64` | Blocks successfully received |
| `SubtreesReceived` | `int64` | Subtrees successfully received |
| `TransactionsReceived` | `int64` | Transactions received |

### 3.2. Peer Selector

The `PeerSelector` is a stateless, pure-function component that implements the peer selection algorithm. It takes a list of peers and selection criteria, returning the optimal peer for a given operation.

**Selection Criteria:**

```go
type SelectionCriteria struct {
    LocalHeight         int32         // Current local blockchain height
    ForcedPeerID        peer.ID       // Force selection of specific peer
    PreviousPeer        peer.ID       // Previously selected peer (for rotation)
    SyncAttemptCooldown time.Duration // Cooldown before retrying a peer
}
```

### 3.3. Reputation Scoring

The reputation scoring system assigns each peer a score between 0 and 100, where:

- **100**: Perfect reliability
- **50**: Neutral (default for new peers)
- **20**: Minimum threshold for eligibility
- **5**: Malicious peer score
- **0**: Completely unreliable

## 4. Reputation Algorithm

### 4.1. Score Calculation

The reputation algorithm calculates scores based on multiple factors:

```text
Score = (SuccessRate * 0.6) + (BaseScore * 0.4) - RecentFailurePenalty + RecencyBonus
```

![Reputation Score Calculation Algorithm](img/reputation_score_calculation.svg)

**Algorithm Constants:**

| Constant | Value | Description |
|----------|-------|-------------|
| `baseScore` | 50.0 | Starting neutral score |
| `successWeight` | 0.6 | Weight for success rate component |
| `maliciousPenalty` | 20.0 | Penalty per malicious detection |
| `recentFailurePenalty` | 15.0 | Penalty for a failure within the recency window (see step 5) |
| `recencyBonus` | 10.0 | Bonus for recent success |
| `recencyWindow` | 1 hour | Time window for recency calculations |

**Calculation Steps:**

1. If the peer has a malicious count > 0, the score is immediately set to 5.0
2. Calculate success rate: `(successes / total_attempts) * 100`
3. Apply weighted success rate: `successRate * 0.6`
4. Add weighted base score: `50.0 * 0.4`
5. Apply recent failure penalty (-15.0) if a failure occurred within the last hour
6. Add recency bonus (+10.0) if a success occurred within the last hour
7. Clamp the final score to the 0-100 range

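To see how the constants and steps combine, here is a compact Go sketch of the algorithm as documented above. It is an illustration rather than the verbatim Teranode implementation; in particular, treating a peer with no recorded attempts as neutral is an assumption:

```go
package main

import (
	"fmt"
	"math"
	"time"
)

// peerMetrics holds just the fields the score calculation needs.
type peerMetrics struct {
	MaliciousCount       int64
	InteractionAttempts  int64
	InteractionSuccesses int64
	LastFailure          time.Time
	LastSuccess          time.Time
}

func calculateReputation(p peerMetrics, now time.Time) float64 {
	// Step 1: malicious peers are pinned to the floor score.
	if p.MaliciousCount > 0 {
		return 5.0
	}

	// Steps 2-4: weighted success rate plus weighted base score.
	successRate := 50.0 // assumption: no history yet is treated as neutral
	if p.InteractionAttempts > 0 {
		successRate = float64(p.InteractionSuccesses) / float64(p.InteractionAttempts) * 100
	}
	score := successRate*0.6 + 50.0*0.4

	// Steps 5-6: recency adjustments within the one-hour window.
	if !p.LastFailure.IsZero() && now.Sub(p.LastFailure) < time.Hour {
		score -= 15.0
	}
	if !p.LastSuccess.IsZero() && now.Sub(p.LastSuccess) < time.Hour {
		score += 10.0
	}

	// Step 7: clamp to the 0-100 range.
	return math.Max(0, math.Min(100, score))
}

func main() {
	p := peerMetrics{
		InteractionAttempts:  150,
		InteractionSuccesses: 145,
		LastSuccess:          time.Now().Add(-10 * time.Minute),
	}
	fmt.Printf("score: %.1f\n", calculateReputation(p, time.Now()))
	// 96.7% success rate -> 58.0 + 20.0 + 10.0 recency bonus = 88.0
}
```

### 4.2. 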
Scoring Events + +Different events affect reputation scores differently: + +**Positive Events:** + +| Event | Effect | +|-------|--------| +| Successful block received | Increases success count, updates avg response time | +| Successful subtree received | Increases success count, updates avg response time | +| Transaction received | Increases success count | +| Successful catchup | Increases success count, updates avg response time | + +**Negative Events:** + +| Event | Effect | +|-------|--------| +| Interaction failure | Increases failure count | +| Multiple recent failures | Score drops to 15.0 (harsh penalty) | +| Catchup error | Tracks error message and time | + +### 4.3. Malicious Behavior Detection + +When a peer provides invalid data (e.g., invalid blocks), they are marked as malicious: + +1. `MaliciousCount` is incremented +2. `InteractionFailures` is incremented +3. Reputation score is immediately set to 5.0 +4. Peer becomes ineligible for selection (below 20.0 threshold) + +**Recovery from Malicious Status:** + +Malicious peers can be reconsidered after a cooldown period through the `ReconsiderBadPeers` function, which: + +- Only affects peers with reputation < 20.0 +- Requires sufficient time since last failure +- Applies exponential cooldown based on reset count (triples each time) +- Resets reputation to 30.0 (below neutral but above threshold) +- Clears malicious count for fresh start + +## 5. Peer Selection Strategy + +### 5.1. Selection Criteria + +Peers must meet all eligibility criteria to be selected: + +1. **Not Banned**: Peer must not be in the ban list +2. **Has DataHub URL**: Required for fetching blocks/subtrees (excludes listen-only nodes) +3. **URL Responsive**: DataHub URL must be accessible +4. **Valid Height**: Must report a positive blockchain height +5. **Minimum Reputation**: Score must be >= 20.0 +6. **Cooldown Period**: Must not have been attempted recently (if cooldown is set) + +### 5.2. Two-Phase Selection + +The peer selector uses a two-phase approach for optimal selection: + +![Two-Phase Peer Selection Process](img/peer_selection_sequence.svg) + +#### Phase 1: Full Node Selection + +1. Filter for peers that explicitly announce as "full" storage mode +2. Sort candidates by: + + - Reputation score (highest first) - **primary** + - Ban score (lowest first) - **secondary** + - Block height (highest first) - **tertiary** + - Peer ID (for deterministic ordering) - **quaternary** +3. Select the top candidate (or second if top was previous peer) + +#### Phase 2: Pruned Node Fallback + +If no full nodes are available and fallback is enabled: + +1. Filter for peers not in "full" mode but meeting other criteria +2. Sort by: + + - Reputation score (highest first) + - Ban score (lowest first) + - Block height (lowest first) - **prefer youngest pruned nodes** + - Peer ID + +The preference for younger pruned nodes minimizes UTXO pruning risk during catchup. + +### 5.3. Fallback to Pruned Nodes + +Pruned node fallback is controlled by the `p2p_allow_pruned_node_fallback` setting: + +- **Enabled (default)**: Falls back to pruned nodes when no full nodes available +- **Disabled**: Only uses full nodes, fails if none available + +When using pruned nodes: + +- Warning is logged about potential UTXO pruning risk +- Youngest (lowest height) pruned node is preferred +- Reputation still prioritized over height + +## 6. Integration with Other Services + +![Sync Coordination Sequence](img/sync_coordination_sequence.svg) + +### 6.1. 
Block Validation Service

The Block Validation service uses the peer registry extensively during catchup operations:

**Sync Coordination:**

- Records sync attempts via `RecordSyncAttempt`
- Uses `SelectSyncPeer` to choose optimal peers for block fetching
- Reports successes/failures to update reputation
- Handles malicious peer detection for invalid blocks

**Catchup Status:**

The service tracks catchup status including:

- Currently syncing peer
- Peer selection metrics
- Available peer counts by storage mode

### 6.2. Subtree Validation Service

The Subtree Validation service reports the receipt of valid subtrees to improve peer reputation:

- Calls `ReportValidSubtree` after successful subtree validation
- Increases the peer's reputation for providing valid data
- Tracks subtrees received per peer

## 7. Persistence and Recovery

### 7.1. Cache File Format

The peer registry persists its data to `teranode_peer_registry.json`:

```json
{
  "version": "1.0",
  "last_updated": "2025-11-19T10:30:00Z",
  "peers": {
    "QmPeerID1...": {
      "interaction_attempts": 150,
      "interaction_successes": 145,
      "interaction_failures": 5,
      "reputation_score": 87.5,
      "avg_response_ms": 250,
      "blocks_received": 100,
      "height": 850000,
      "data_hub_url": "https://peer1.example.com",
      "storage": "full"
    }
  }
}
```

**Persistence Triggers:**

- Periodic saves during operation
- Graceful shutdown
- Significant state changes

**Load on Startup:**

- Reads cache file if present
- Validates version compatibility
- Restores peer metrics to registry
- Handles legacy field formats for backward compatibility

### 7.2. Reputation Recovery

The system includes mechanisms for recovering peers from low reputation:

**Automatic Recovery (`ReconsiderBadPeers`):**

- Called periodically by sync coordinator
- Resets reputation to 30.0 after cooldown period
- Uses exponential backoff: cooldown * 3^(reset_count)
- Clears malicious count for a fresh start

**Manual Reset (`ResetReputation`):**

- Available via gRPC API and dashboard UI
- Can reset a specific peer or all peers
- Clears all interaction metrics
- Resets score to neutral 50.0

## 8. Configuration Options

| Setting | Default | Description |
|---------|---------|-------------|
| `p2p_allow_pruned_node_fallback` | `true` | Allow fallback to pruned nodes during sync |
| `p2p_peer_cache_dir` | `.` | Directory for the peer registry cache file |

**Related Settings:**

Additional settings that interact with the peer system are documented in the [P2P Settings Reference](../../references/settings/services/p2p_settings.md).

## 9. Dashboard Monitoring

The Teranode dashboard provides real-time visibility into peer status:

**Peer List View:**

- All connected peers with their metrics
- Reputation scores and storage modes
- Interaction history (successes, failures)
- Response time statistics

**Admin Operations:**

- **Reset Reputation**: Clear metrics for a specific peer or all peers
- **Ban/Unban**: Manage peer bans
- **Force Disconnect**: Remove problematic peers

**Catchup Status:**

- Current sync peer and progress
- Available peer counts by storage mode
- Error tracking for sync failures

## 10. 
Related Documentation + +- [P2P Service Documentation](../services/p2p.md) +- [P2P Service Reference](../../references/services/p2p_reference.md) +- [P2P Settings Reference](../../references/settings/services/p2p_settings.md) +- [Block Validation Service Documentation](../services/blockValidation.md) +- [Subtree Validation Service Documentation](../services/subtreeValidation.md) diff --git a/docs/topics/features/utxo_lock_records.md b/docs/topics/features/utxo_lock_records.md new file mode 100644 index 0000000000..68e34d9f83 --- /dev/null +++ b/docs/topics/features/utxo_lock_records.md @@ -0,0 +1,372 @@ +# UTXO Lock Record Pattern for Multi-Record Transactions + +## Index + +1. [Overview](#1-overview) +2. [Purpose and Benefits](#2-purpose-and-benefits) +3. [Architecture](#3-architecture) + - [3.1. Lock Record Structure](#31-lock-record-structure) + - [3.2. Creating Flag](#32-creating-flag) + - [3.3. Record Layout](#33-record-layout) +4. [Two-Phase Commit Protocol](#4-two-phase-commit-protocol) + - [4.1. Phase 1: Record Creation](#41-phase-1-record-creation) + - [4.2. Phase 2: Flag Clearing](#42-phase-2-flag-clearing) + - [4.3. Atomicity Guarantees](#43-atomicity-guarantees) +5. [Error Handling and Recovery](#5-error-handling-and-recovery) + - [5.1. Partial Failure Scenarios](#51-partial-failure-scenarios) + - [5.2. Auto-Recovery Mechanisms](#52-auto-recovery-mechanisms) + - [5.3. StorageError Usage](#53-storageerror-usage) +6. [TTL and Resource Management](#6-ttl-and-resource-management) +7. [Integration with Block Processing](#7-integration-with-block-processing) +8. [Configuration Options](#8-configuration-options) +9. [Monitoring and Debugging](#9-monitoring-and-debugging) +10. [Related Documentation](#10-related-documentation) + +## 1. Overview + +The Lock Record Pattern is a distributed consistency mechanism used by Teranode's UTXO store to safely handle transactions with more than 20,000 outputs. When a transaction exceeds the Aerospike record size limit, it must be split across multiple records. The lock record pattern ensures these multi-record operations complete atomically, preventing data corruption from partial writes or concurrent access. + +The pattern uses two key mechanisms: + +1. **Lock Records**: Temporary Aerospike records that prevent concurrent creation attempts for the same transaction +2. **Creating Flag**: A per-record flag that prevents UTXO spending until all records are fully committed + +This architecture ensures that even in failure scenarios, UTXOs cannot be spent prematurely, and the system self-heals through automatic recovery. + +## 2. 
Purpose and Benefits + +The Lock Record Pattern addresses several critical challenges in handling large transactions: + +### Atomic Multi-Record Operations + +- **Record Size Limits**: Aerospike limits individual records to ~1MB; large transactions must span multiple records +- **Consistency Guarantee**: All records for a transaction either exist completely or not at all (from a spendability perspective) +- **No Partial Spending**: UTXOs cannot be spent until the entire transaction is committed + +### Concurrent Access Protection + +- **Duplicate Prevention**: Lock record prevents multiple processes from creating the same transaction simultaneously +- **Race Condition Safety**: Lock acquisition is atomic via CREATE_ONLY policy +- **Clear Ownership**: Lock records include process ID and hostname for debugging + +### Failure Recovery + +- **Self-Healing**: System automatically recovers from partial failures without manual intervention +- **No Data Loss**: Worst case is temporary inability to spend (not lost funds) +- **Multiple Recovery Paths**: Recovery can occur through retry, re-encounter, or mining operations + +### Performance Optimization + +- **External Storage**: Large transaction data stored in blob storage, reducing Aerospike load +- **Batch Operations**: Multiple records created in single batch for efficiency +- **TTL-Based Cleanup**: Lock records automatically expire, preventing resource leaks + +## 3. Architecture + +### 3.1. Lock Record Structure + +Lock records are special Aerospike records identified by a unique index (`0xFFFFFFFF`) that cannot conflict with actual sub-records: + +```go +const LockRecordIndex = uint32(0xFFFFFFFF) +``` + +**Lock Record Bins:** + +| Bin Name | Type | Description | +|----------|------|-------------| +| `created_at` | `int64` | Unix timestamp of lock creation | +| `lock_type` | `string` | Always "tx_creation" | +| `process_id` | `int` | OS process ID that holds the lock | +| `hostname` | `string` | Host where lock was acquired | +| `expected_recs` | `int` | Number of records to be created | + +### 3.2. Creating Flag + +The `creating` flag is a boolean bin present on each transaction record during the two-phase commit: + +- **True**: Record exists but is part of an incomplete multi-record transaction +- **False/Absent**: Record is fully committed and UTXOs are spendable + +The Lua UDF script checks this flag before allowing UTXO spending: + +```lua +-- From teranode.lua (spend operation) +if record[creating] then + return error("UTXO_LOCKED") +end +``` + +### 3.3. Record Layout + +For a transaction with >20,000 outputs, records are organized as: + +```text +Transaction with N batches: + +┌─────────────────────┐ +│ Lock Record │ Index: 0xFFFFFFFF (temporary) +│ TTL: 30-300s │ +└─────────────────────┘ + +┌─────────────────────┐ +│ Master Record │ Index: 0 +│ - Metadata │ - TxID, version, fees, etc. +│ - UTXOs 0-19999 │ - First batch of outputs +│ - TotalExtraRecs │ - Count of additional records +│ - Creating flag │ +└─────────────────────┘ + +┌─────────────────────┐ +│ Child Record 1 │ Index: 1 +│ - UTXOs 20000+ │ - Second batch of outputs +│ - Creating flag │ +└─────────────────────┘ + +┌─────────────────────┐ +│ Child Record N-1 │ Index: N-1 +│ - Final UTXOs │ - Last batch of outputs +│ - Creating flag │ +└─────────────────────┘ +``` + +## 4. Two-Phase Commit Protocol + +### 4.1. Phase 1: Record Creation + +The first phase creates all transaction records with the `creating` flag set to `true`: + +1. 
**Acquire Lock**
   - Create lock record with CREATE_ONLY policy
   - If the lock exists, return `TxExistsError` (another process is already creating it)
   - Calculate dynamic TTL based on the number of records

2. **Store External Data**
   - Write transaction bytes to blob storage (S3/filesystem)
   - Use atomic write with existence check

3. **Create Aerospike Records**
   - Prepare all record keys upfront (fail fast on key errors)
   - Add `creating=true` to all bins
   - Execute batch write with CREATE_ONLY policy
   - Handle KEY_EXISTS_ERROR as a recovery case

4. **Release Lock**
   - Delete lock record (always, even on partial failure)
   - Partial records remain for the next attempt to complete

### 4.2. Phase 2: Flag Clearing

The second phase removes the `creating` flag in a specific order:

1. **Clear Child Records First** (indices 1, 2, ..., N-1)
   - Batch operation with expression filter
   - Only updates records where the `creating` bin exists
   - Use UPDATE_ONLY policy

2. **Clear Master Record Last** (index 0)
   - Single record operation
   - The absence of the master's flag is the atomic completion indicator

This ordering ensures:

- If Phase 2 fails midway, the master still carries the flag (incomplete)
- Checking only the master is sufficient to determine completion
- Recovery can identify incomplete transactions by the master's flag

### 4.3. Atomicity Guarantees

The protocol provides the following guarantees:

| Scenario | State | UTXOs Spendable | Recovery |
|----------|-------|-----------------|----------|
| Phase 1 incomplete | Lock held, partial records | No | Next attempt completes |
| Phase 1 complete, Phase 2 not started | All records with `creating=true` | No | Auto-recovery on retry |
| Phase 2 incomplete | Children cleared, master has flag | No | Master flag checked |
| Phase 2 complete | No `creating` flags | Yes | N/A |

## 5. Error Handling and Recovery

### 5.1. Partial Failure Scenarios

**Lock Acquisition Failure:**

- Another process holds the lock
- Return `TxExistsError` immediately
- No cleanup needed

**Blob Storage Failure:**

- Release lock
- No Aerospike records created
- Clean retry on next attempt

**Partial Record Creation:**

- Some records created, some failed
- Release lock
- Return error but do NOT delete partial records
- Next attempt will find existing records and complete them

**Phase 2 Failure:**

- All records exist with `creating=true`
- Return success (transaction is persisted)
- Log error for monitoring
- Auto-recovery will clear flags

### 5.2. Auto-Recovery Mechanisms

The system self-heals through multiple paths, combined with the two-phase protocol in the sketch after this list:

1. **Retry Path**
   - When the transaction is re-submitted
   - Finds all records exist (KEY_EXISTS_ERROR)
   - Attempts Phase 2 to clear creating flags

2. **Re-Encounter Path**
   - When the transaction appears in a block or subtree
   - `processTxMetaUsingStore.go` checks for the `creating` flag
   - Triggers re-processing to complete the commit

3. **Mining Path**
   - When a block containing the transaction is mined
   - `SetMined` operation clears creating flags
   - Normal mining flow completes the commit

4. **TTL-Based Lock Release**
   - Lock records automatically expire (30-300 seconds)
   - Prevents a permanent lock after a process crash
   - Allows other processes to retry

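Putting the two phases and the recovery behavior together, the overall flow can be sketched as follows. All types and helpers here are hypothetical stand-ins for store internals; the real implementation lives in the Aerospike UTXO store and differs in detail:

```go
package main

import "fmt"

type tx struct {
	hash    string
	outputs int
}

// Hypothetical stand-ins for store internals.
func acquireLockRecord(hash string) error { fmt.Println("lock acquired:", hash); return nil }
func releaseLockRecord(hash string)       { fmt.Println("lock released:", hash) }
func writeTransactionBlob(t *tx) error    { fmt.Println("blob written"); return nil }
func createRecords(t *tx, creating bool) error {
	fmt.Printf("records created (creating=%v)\n", creating)
	return nil
}
func clearCreatingFlags(t *tx) error {
	fmt.Println("flags cleared: children first, master last")
	return nil
}

// storeLargeTx sketches the documented two-phase commit flow.
func storeLargeTx(t *tx) error {
	// Take the lock record (CREATE_ONLY, TTL-based).
	if err := acquireLockRecord(t.hash); err != nil {
		return err // e.g. TxExistsError: another process is creating this tx
	}
	defer releaseLockRecord(t.hash) // always released, even on partial failure

	// External blob first: no Aerospike records exist yet if this fails.
	if err := writeTransactionBlob(t); err != nil {
		return err // clean retry on the next attempt
	}

	// Phase 1: batch-create master + child records with creating=true.
	if err := createRecords(t, true); err != nil {
		return err // partial records remain for the next attempt to complete
	}

	// Phase 2: clear flags on children first, master last.
	if err := clearCreatingFlags(t); err != nil {
		// Transaction is persisted; auto-recovery (retry, re-encounter,
		// or SetMined) will clear the remaining flags later.
		fmt.Println("warning: creating flag not cleared:", err)
	}
	return nil
}

func main() { _ = storeLargeTx(&tx{hash: "abcd...", outputs: 45_000}) }
```

### 5.3. 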
StorageError Usage

The `StorageError` type is used specifically for external storage failures:

```go
errors.NewStorageError("[sendStoreBatch] error writing transaction to external store [%s]", txHash.String())
```

This error type:

- Indicates recoverable storage failures
- Distinguishes storage failures from processing errors
- Is used by callers to decide on a retry strategy

## 6. TTL and Resource Management

Lock record TTL is calculated dynamically based on transaction complexity:

```text
TTL = BaseTTL + (PerRecordTTL * NumRecords)
```

**Constants:**

| Constant | Value | Description |
|----------|-------|-------------|
| `LockRecordBaseTTL` | 30 seconds | Minimum lock duration |
| `LockRecordPerRecordTTL` | 2 seconds | Additional time per record |
| `LockRecordMaxTTL` | 300 seconds | Maximum lock duration (5 minutes) |

**Example Calculations:**

- 1 record: 30 + (2 × 1) = 32 seconds
- 10 records: 30 + (2 × 10) = 50 seconds
- 100 records: 30 + (2 × 100) = 230 seconds
- 135 or more records: capped at 300 seconds (30 + 2 × 135 = 300)

The TTL ensures:

- Sufficient time for batch operations to complete
- Automatic cleanup on process crash
- No indefinite locks from abandoned operations

## 7. Integration with Block Processing

### Block Validation Flow

When a block containing a large transaction is validated:

1. Check if the transaction exists in the UTXO store
2. If the `creating` flag is set, the transaction is incomplete
3. Re-process the transaction to complete Phase 2
4. Continue with block validation

### SetMined Operation

The `SetMined` operation (called when a block is accepted) includes:

1. Update block IDs and heights on all records
2. Clear the `creating` flag if present
3. Set `UnminedSince` to 0

This provides a final recovery path for any transactions that failed Phase 2.

### Subtree Validation

Subtree validation checks the `creating` flag:

```go
// From processTxMetaUsingStore.go
if txMeta.Creating {
    // Re-process to complete the two-phase commit
    return processTxMetaWithRetry(...)
}
```

## 8. Configuration Options

The lock record pattern uses these configuration settings:

| Setting | Default | Description |
|---------|---------|-------------|
| `utxo_store_batch_size` | 20000 | UTXOs per record (triggers multi-record) |
| `utxo_store_externalize_all_transactions` | false | Force external storage for all transactions |
| `utxo_store_max_tx_size_in_store` | 1MB | Size threshold for external storage |

**Batch Size Impact:**

- Smaller batch = More records = Longer TTL
- Larger batch = Fewer records = Risk of hitting size limits

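The batch-size trade-off can be made concrete by combining the record-count arithmetic from Section 3 with the TTL formula from Section 6. A small Go sketch using the documented constants (constant and function names here are illustrative):

```go
package main

import (
	"fmt"
	"time"
)

const (
	lockRecordBaseTTL      = 30 * time.Second  // LockRecordBaseTTL
	lockRecordPerRecordTTL = 2 * time.Second   // LockRecordPerRecordTTL
	lockRecordMaxTTL       = 300 * time.Second // LockRecordMaxTTL
)

// lockTTL applies the documented formula: Base + PerRecord*NumRecords, capped.
func lockTTL(numRecords int) time.Duration {
	ttl := lockRecordBaseTTL + time.Duration(numRecords)*lockRecordPerRecordTTL
	if ttl > lockRecordMaxTTL {
		ttl = lockRecordMaxTTL
	}
	return ttl
}

func main() {
	outputs := 1_000_000
	for _, batchSize := range []int{10_000, 20_000} {
		records := (outputs + batchSize - 1) / batchSize // ceiling division
		fmt.Printf("batch=%d -> %d records, lock TTL %v\n", batchSize, records, lockTTL(records))
	}
	// batch=10000 -> 100 records, lock TTL 3m50s (230s)
	// batch=20000 -> 50 records, lock TTL 2m10s (130s)
}
```

## 9. 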
Monitoring and Debugging + +### Prometheus Metrics + +- `utxo_create_batch_size`: Distribution of batch sizes +- `utxo_create_external`: Duration of external storage writes +- `utxo_store_errors`: Error counts by type + +### Log Messages + +Key log patterns for debugging: + +```text +[StoreTransactionExternally] Record N already exists for tx HASH (completing previous attempt) +[StoreTransactionExternally] Transaction HASH created but creating flag not cleared +[clearCreatingFlag] Failed to clear creating flag for child record N +``` + +### Lock Record Inspection + +Lock records can be queried directly in Aerospike: + +```sql +SELECT * FROM teranode.utxos WHERE PK = calculateLockKey(txHash) +``` + +The lock record contains debugging information: + +- `process_id`: Which process holds/held the lock +- `hostname`: Which host created the lock +- `expected_recs`: How many records should exist +- `created_at`: When the lock was acquired + +## 10. Related Documentation + +- [Two-Phase Transaction Commit Process](two_phase_commit.md) - Related two-phase commit for transaction processing +- [UTXO Store Documentation](../stores/utxo.md) - Main UTXO store documentation +- [UTXO Data Model](../datamodel/utxo_data_model.md) - Data structures and fields +- [UTXO Store Reference](../../references/stores/utxo_reference.md) - API reference +- [Error Handling Reference](../../references/errorHandling.md) - StorageError and other error types diff --git a/docs/topics/img.png b/docs/topics/img.png new file mode 100644 index 0000000000..f46ae73d64 Binary files /dev/null and b/docs/topics/img.png differ diff --git a/docs/topics/img/Dashboard_Admin.png b/docs/topics/img/Dashboard_Admin.png new file mode 100644 index 0000000000..547e36ffa0 Binary files /dev/null and b/docs/topics/img/Dashboard_Admin.png differ diff --git a/docs/topics/img/Dashboard_Blocks.png b/docs/topics/img/Dashboard_Blocks.png new file mode 100644 index 0000000000..a5bf714ab1 Binary files /dev/null and b/docs/topics/img/Dashboard_Blocks.png differ diff --git a/docs/topics/img/Dashboard_Invalidate_Revalidate.png b/docs/topics/img/Dashboard_Invalidate_Revalidate.png new file mode 100644 index 0000000000..c1246a1029 Binary files /dev/null and b/docs/topics/img/Dashboard_Invalidate_Revalidate.png differ diff --git a/docs/topics/img/Dashboard_Main.png b/docs/topics/img/Dashboard_Main.png new file mode 100644 index 0000000000..6370b06e9b Binary files /dev/null and b/docs/topics/img/Dashboard_Main.png differ diff --git a/docs/topics/img/Dashboard_Network.png b/docs/topics/img/Dashboard_Network.png new file mode 100644 index 0000000000..06c726f1a1 Binary files /dev/null and b/docs/topics/img/Dashboard_Network.png differ diff --git a/docs/topics/img/Dashboard_P2P.png b/docs/topics/img/Dashboard_P2P.png new file mode 100644 index 0000000000..f7bf9ee781 Binary files /dev/null and b/docs/topics/img/Dashboard_P2P.png differ diff --git a/docs/topics/img/Dashboard_Peers.png b/docs/topics/img/Dashboard_Peers.png new file mode 100644 index 0000000000..b4a240f856 Binary files /dev/null and b/docs/topics/img/Dashboard_Peers.png differ diff --git a/docs/topics/services/blockAssembly.md b/docs/topics/services/blockAssembly.md index 45cae5af40..e29614cb14 100644 --- a/docs/topics/services/blockAssembly.md +++ b/docs/topics/services/blockAssembly.md @@ -145,6 +145,21 @@ This recovery mechanism ensures that: - The server then checks if the subtree already exists in the Subtree Store. 
- Finally, the server sends a notification to the BlockchainClient to announce the new subtree. This will be propagated to other nodes via the P2P service.

**Periodic Subtree Announcements:**

To ensure mining candidates remain up-to-date, the Subtree Processor implements a timer-based announcement mechanism:

- The current subtree is announced at least every 10 seconds (configurable)
- This ensures miners receive updates even during low-transaction periods
- The timer triggers announcements of the current subtree state, regardless of completion status
- This prevents stale mining candidates when transaction volume is low

This periodic announcement complements the size-based announcements, ensuring:

- Consistent mining candidate freshness
- Reduced latency for mining operations
- Better network synchronization during varying load conditions

### 2.3.1 Dynamic Subtree Size Adjustment

The Block Assembly service can dynamically adjust the subtree size based on real-time performance metrics when enabled via configuration:
@@ -172,6 +187,21 @@ This self-tuning mechanism helps maintain consistent processing rates and optima

- The Block Assembly Server makes status announcements, using the Status Client, about the mining candidate's height and previous hash.
- Finally, the Server tracks the current candidate in the JobStore within a new "job" and its TTL. This information will be retrieved at a later stage, if and when the miner submits a solution to the mining challenge for this specific mining candidate.

**Mining Candidate Caching:**

To optimize performance for frequent GetMiningCandidate requests, the service implements a caching mechanism (see the sketch after this list):

- Mining candidates are cached for a configurable timeout period (default: a few seconds)
- Subsequent requests within the timeout period return the cached candidate
- The cache is invalidated when:

  - New subtrees are completed
  - A new block is received from the network
  - The timeout expires
- This reduces computation overhead for high-frequency mining requests

The caching strategy balances freshness against performance, ensuring miners receive recent candidates without overloading the system during rapid polling.
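A minimal sketch of such a time-bounded cache follows. It is illustrative only: the type and field names (`candidateCache`, `maxAge`) are assumptions rather than the service's actual implementation; the sketch only shows the documented pattern of serving a cached candidate within the timeout and dropping it when a new subtree or block invalidates it.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// MiningCandidate stands in for the real candidate type.
type MiningCandidate struct {
	PreviousHash string
	Height       uint32
}

// candidateCache holds the last mining candidate for a short, configurable
// window so rapid GetMiningCandidate polling does not recompute it each time.
type candidateCache struct {
	mu        sync.Mutex
	candidate *MiningCandidate
	cachedAt  time.Time
	maxAge    time.Duration // the configurable cache timeout
}

// get returns the cached candidate while it is fresh; otherwise it builds
// a new one via build and caches the result.
func (c *candidateCache) get(build func() *MiningCandidate) *MiningCandidate {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.candidate != nil && time.Since(c.cachedAt) < c.maxAge {
		return c.candidate // within the timeout: serve the cached candidate
	}
	c.candidate = build()
	c.cachedAt = time.Now()
	return c.candidate
}

// invalidate drops the cached candidate, e.g. when a new subtree completes
// or a new block arrives from the network.
func (c *candidateCache) invalidate() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.candidate = nil
}

func main() {
	cache := &candidateCache{maxAge: 5 * time.Second}
	first := cache.get(func() *MiningCandidate { return &MiningCandidate{Height: 100} })
	second := cache.get(func() *MiningCandidate { return &MiningCandidate{Height: 101} })
	fmt.Println(first.Height == second.Height) // true: second call hit the cache
	cache.invalidate()                         // e.g. a new block arrived
}
```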
### 2.5. Submit Mining Solution

Once a miner solves the mining challenge, it submits a solution to the Block Assembly Service. The solution includes the nonce required to solve the mining challenge.
diff --git a/docs/topics/services/blockValidation.md b/docs/topics/services/blockValidation.md
index e1bcef4adb..d5ea407c7f 100644
--- a/docs/topics/services/blockValidation.md
+++ b/docs/topics/services/blockValidation.md
@@ -105,6 +105,40 @@ Notice that, when catching up, the Block Validator will set the machine state of

During the catchup process, the system tracks invalid blocks. If a block fails validation during catchup, it is marked as invalid in the blockchain store. This prevents invalid blocks from corrupting the chain state and allows the system to avoid reprocessing known invalid blocks. The system also maintains metrics on peer quality to identify and avoid peers that provide invalid blocks.

**Sync Coordination and Peer Selection:**

The catchup process integrates with the P2P service's peer registry and reputation system to select optimal peers for block retrieval:

1. **Peer Selection**: The sync coordinator uses `SelectSyncPeer` to choose the best peer based on:
   - Reputation score (minimum 20.0 threshold)
   - Storage mode (full nodes preferred over pruned)
   - Blockchain height (must be ahead of the local node)
   - Response time history
   - Recent interaction success rate

2. **Reputation Updates**: Peer reputation is updated based on catchup results:
   - Successful block retrieval increases reputation
   - Invalid blocks result in a severe reputation penalty (the peer is marked malicious)
   - Timeouts and failures decrease reputation

3. **Peer Rotation**: If a peer consistently fails during catchup (see the sketch after this list):
   - The system automatically rotates to the next best peer
   - The failed peer enters a cooldown period before it is retried
   - Repeated failures incur exponentially longer backoff
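The following is a minimal sketch of the rotate-with-cooldown behavior. The names (`rotation`, `ReportFailure`) are illustrative, and the doubling factor is an assumption for the sketch; the documentation only states that repeated failures back off exponentially.

```go
package main

import (
	"fmt"
	"time"
)

// rotation tracks consecutive failures per peer and applies an exponentially
// growing cooldown before a failed peer becomes eligible again.
type rotation struct {
	failures     map[string]int       // consecutive failures per peer ID
	eligibleAt   map[string]time.Time // when each peer leaves cooldown
	baseCooldown time.Duration
}

func newRotation(base time.Duration) *rotation {
	return &rotation{
		failures:     make(map[string]int),
		eligibleAt:   make(map[string]time.Time),
		baseCooldown: base,
	}
}

// ReportFailure records a failure and doubles the cooldown each time
// (illustrative factor; the docs only state "exponential backoff").
func (r *rotation) ReportFailure(peerID string) {
	r.failures[peerID]++
	cooldown := r.baseCooldown * time.Duration(1<<(r.failures[peerID]-1))
	r.eligibleAt[peerID] = time.Now().Add(cooldown)
}

// Eligible reports whether a peer is out of cooldown and may be retried.
func (r *rotation) Eligible(peerID string) bool {
	return time.Now().After(r.eligibleAt[peerID])
}

func main() {
	r := newRotation(30 * time.Second)
	r.ReportFailure("peer-a")         // 30s cooldown
	r.ReportFailure("peer-a")         // 60s cooldown
	r.ReportFailure("peer-a")         // 120s cooldown
	fmt.Println(r.Eligible("peer-a")) // false until the cooldown elapses
}
```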
**Performance Optimizations:**

The catchup process includes several performance optimizations:

- **Concurrent Header Fetching**: Block headers are fetched in parallel before full block retrieval
- **Batch Block Processing**: Multiple blocks are processed in configurable batch sizes
- **Adaptive Concurrency**: Processing parallelism adjusts based on system load
- **Smart Peer Selection**: Preferentially uses peers with the lowest latency and highest success rates

For configuration of catchup performance settings, see the [Block Validation Settings Reference](../../references/settings/services/blockvalidation_settings.md).

For details on the peer reputation system, see [Peer Registry and Reputation System](../features/peer_registry_reputation.md).

#### 2.2.3. Quick Validation for Checkpointed Blocks

For blocks that are below known checkpoints in the blockchain, the Block Validation service employs an optimized quick validation path that significantly improves synchronization performance. This mechanism is particularly effective during initial blockchain synchronization.
diff --git a/docs/topics/services/p2p.md b/docs/topics/services/p2p.md
index a008a54e72..3f1b5af057 100644
--- a/docs/topics/services/p2p.md
+++ b/docs/topics/services/p2p.md
@@ -18,6 +18,13 @@
- [2.7.2. Ban Operations](#272-ban-operations)
- [2.7.3. Ban Event Handling](#273-ban-event-handling)
- [2.7.4. Configuration](#274-configuration)
- [2.8. Peer Registry and Reputation System](#28-peer-registry-and-reputation-system)
- [2.8.1. Overview](#281-overview)
- [2.8.2. Peer Information Tracking](#282-peer-information-tracking)
- [2.8.3. Reputation Algorithm](#283-reputation-algorithm)
- [2.8.4. Peer Selection](#284-peer-selection)
- [2.8.5. Persistence](#285-persistence)
- [2.8.6. Recovery Mechanisms](#286-recovery-mechanisms)
- [3. Technology](#3-technology)
- [4. Data Model](#4-data-model)
- [5. Directory Structure and Main Files](#5-directory-structure-and-main-files)
@@ -366,6 +373,78 @@ Ban-related settings in the configuration:

- `ban_default_duration`: Default duration for bans (24 hours if not specified)
- `ban_max_entries`: Maximum number of banned entries to maintain

### 2.8. Peer Registry and Reputation System

The P2P service includes a comprehensive peer management system that tracks peer behavior, calculates reputation scores, and selects optimal peers for network operations.

#### 2.8.1. Overview

The system consists of three main components:

- **Peer Registry**: A thread-safe data store maintaining all peer information and interaction history
- **Peer Selector**: A stateless component that selects optimal peers based on reputation and selection criteria
- **Reputation Scoring**: An algorithm that calculates peer reliability scores (0-100)

#### 2.8.2. Peer Information Tracking

The peer registry tracks comprehensive information for each peer:

- **Identity**: Peer ID, client name, connection status
- **Blockchain State**: Height, block hash, storage mode (full/pruned)
- **Network Info**: DataHub URL, URL responsiveness, bytes received
- **Reputation Metrics**: Interaction successes/failures, malicious behavior count, average response time
- **Interaction History**: Blocks received, subtrees received, transactions received

#### 2.8.3. Reputation Algorithm

Peers are assigned reputation scores from 0 to 100:

- **50**: Default neutral score for new peers
- **20**: Minimum threshold for peer selection eligibility
- **5**: Score assigned to peers exhibiting malicious behavior

The reputation calculation considers (a sketch follows this list):

- Success rate of interactions (60% weight)
- Base score component (40% weight)
- Recent failure penalties (-15 for failures within 1 hour)
- Recent success bonuses (+10 for successes within 1 hour)
- Malicious behavior (immediate drop to 5.0)
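The documentation gives the weights but not the exact formula, so the sketch below is one plausible reading, not the actual implementation: a success-rate component weighted 60% against a 40% base component, with the recency adjustments and the malicious override applied on top. All function and variable names are illustrative.

```go
package main

import (
	"fmt"
	"time"
)

const (
	defaultScore   = 50.0 // neutral score for new peers
	maliciousScore = 5.0  // immediate score for malicious peers
	minScore       = 0.0
	maxScore       = 100.0
)

// reputation is one plausible reading of the documented weights: 60% success
// rate, 40% base score, recency adjustments, and a hard malicious override.
func reputation(successes, failures int, lastFailure, lastSuccess time.Time, malicious bool) float64 {
	if malicious {
		return maliciousScore // immediate drop to 5.0
	}
	score := defaultScore
	if total := successes + failures; total > 0 {
		successRate := float64(successes) / float64(total) * 100
		score = 0.6*successRate + 0.4*defaultScore
	}
	if time.Since(lastFailure) < time.Hour {
		score -= 15 // recent failure penalty
	}
	if time.Since(lastSuccess) < time.Hour {
		score += 10 // recent success bonus
	}
	// Clamp to the documented 0-100 range.
	if score < minScore {
		score = minScore
	} else if score > maxScore {
		score = maxScore
	}
	return score
}

func main() {
	// 9 successes, 1 failure, most recent interaction succeeded within the hour.
	score := reputation(9, 1, time.Now().Add(-2*time.Hour), time.Now(), false)
	fmt.Printf("score: %.1f\n", score) // 0.6*90 + 0.4*50 + 10 = 84.0
}
```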
#### 2.8.4. Peer Selection

The peer selector uses a two-phase approach:

1. **Phase 1 - Full Nodes**: Filter for peers announcing "full" storage mode, sorted by reputation
2. **Phase 2 - Pruned Fallback**: If no full nodes are available, select the youngest pruned node

Selection criteria include:

- Not banned
- Has a DataHub URL (excludes listen-only nodes)
- URL is responsive
- Valid blockchain height
- Reputation score >= 20.0
- Passes the cooldown period

#### 2.8.5. Persistence

The peer registry persists to `teranode_peer_registry.json`:

- Saves on shutdown and periodically during operation
- Restores peer metrics on startup
- Maintains version compatibility

#### 2.8.6. Recovery Mechanisms

Peers can recover from low reputation through:

- **Automatic Recovery**: `ReconsiderBadPeers` resets reputation after a cooldown period
- **Manual Reset**: Via the gRPC API or dashboard UI
- **Exponential Cooldown**: The reset cooldown triples for each subsequent reset

For detailed documentation on the peer registry and reputation system, see [Peer Registry and Reputation System](../features/peer_registry_reputation.md).

## 3. Technology

1. **Go Programming Language**:
diff --git a/docs/topics/services/subtreeValidation.md b/docs/topics/services/subtreeValidation.md
index 5d7d17e52b..9be9648d20 100644
--- a/docs/topics/services/subtreeValidation.md
+++ b/docs/topics/services/subtreeValidation.md
@@ -9,6 +9,8 @@
- [2.3. Validating the Subtrees](#23-validating-the-subtrees)
- [2.4. Subtree Locking Mechanism](#24-subtree-locking-mechanism)
- [2.5. Distributed Pause Mechanism](#25-distributed-pause-mechanism)
- [2.6. Orphanage Management](#26-orphanage-management)
- [2.7. Level Calculation for Merkle Trees](#27-level-calculation-for-merkle-trees)
3. [gRPC Protobuf Definitions](#3-grpc-protobuf-definitions)
4. [Data Model](#4-data-model)
5. [Technology](#5-technology)
@@ -185,6 +187,71 @@ The distributed pause mechanism uses existing subtree validation settings:

- `subtree_quorum_path`: Path to shared storage for lock files
- `subtree_quorum_absolute_timeout`: Timeout for lock staleness (default: 30 seconds)

### 2.6. Orphanage Management

The Subtree Validation service implements an orphanage mechanism to handle transactions that arrive before their parent transactions are available. This is essential for maintaining processing continuity when transactions arrive out of order.

**What are Orphaned Transactions?**

A transaction becomes "orphaned" when:

- It references inputs from parent transactions that are not yet in the UTXO store
- The parent transactions are expected to arrive shortly (e.g., in the same subtree or recent subtrees)

**Orphanage Workflow:**

1. **Detection**: During subtree validation, if a transaction's inputs cannot be found, it is placed in the orphanage
2. **Tracking**: The orphanage tracks which parent transaction IDs are needed
3. **Resolution**: When the parent transactions are processed, orphaned children are automatically resolved
4. **Timeout**: Orphaned transactions that remain unresolved after the timeout period are cleaned up

**Configuration:**

- `subtreevalidation_orphanageTimeout`: Duration before orphaned transactions are cleaned up (default: 30 seconds)

**Benefits:**

- Handles out-of-order transaction arrival gracefully
- Prevents transaction validation failures due to timing issues
- Maintains high throughput during burst traffic
- Automatically recovers when parent transactions arrive

### 2.7. Level Calculation for Merkle Trees

The Subtree Validation service performs level calculation to optimize merkle tree processing. This feature improves performance by pre-calculating the hierarchical structure of subtrees.

**What is Level Calculation?**

Each subtree contains transactions organized in a merkle tree structure. Level calculation determines:

- The number of levels in the merkle tree
- The number of transactions at each level
- Memory allocation requirements for processing

**How It Works:**

1. **Tree Analysis**: The service analyzes the subtree structure to determine the total number of transactions
2. **Level Computation**: Calculates the number of levels needed based on the transaction count
3. **Pre-allocation**: Allocates memory slices based on the calculated level sizes
4. **Optimized Processing**: Processes the subtree level by level for optimal memory usage

**Benefits:**

- **Memory Efficiency**: Pre-allocates exactly the memory needed, reducing allocations
- **Performance**: Avoids repeated slice growth during processing
- **Predictability**: Memory requirements are known before processing begins

**Example:**

For a subtree with 1,000 transactions:

- Level 0 (leaves): 1,000 transaction hashes
- Level 1: 500 intermediate nodes
- Level 2: 250 nodes
- ... and so on until the root

The level calculation pre-determines this structure, allowing optimal memory allocation.
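As a worked illustration of the calculation above, the following sketch computes the per-level node counts for a given transaction count. It is illustrative, not the service's actual code, and assumes the usual merkle convention of pairing nodes and rounding odd counts up.

```go
package main

import "fmt"

// levelSizes returns the node count at each merkle tree level, from the
// leaves (level 0) up to the root, halving (rounded up) at every level.
func levelSizes(numTxs int) []int {
	sizes := []int{numTxs}
	for n := numTxs; n > 1; {
		n = (n + 1) / 2 // each level halves the node count, rounding up
		sizes = append(sizes, n)
	}
	return sizes
}

func main() {
	sizes := levelSizes(1000)
	fmt.Println("levels:", len(sizes)) // 11 (1000, 500, 250, ..., 1)
	for level, count := range sizes {
		fmt.Printf("level %d: %d nodes\n", level, count)
	}
	// Knowing these sizes up front lets a validator pre-allocate each
	// level's slice exactly once, e.g. make([]Hash, sizes[level]).
}
```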
## 3. gRPC Protobuf Definitions

The Subtree Validation Service uses gRPC for communication between nodes. The protobuf definitions used for defining the service methods and message formats can be seen in the [Subtree Validation Protobuf Reference](../../references/protobuf_docs/subtreevalidationProto.md).
diff --git a/docs/topics/stores/utxo.md b/docs/topics/stores/utxo.md
index fbc41bf59d..c6e9c80025 100644
--- a/docs/topics/stores/utxo.md
+++ b/docs/topics/stores/utxo.md
@@ -355,6 +355,33 @@ To optimize performance when reading externally stored transactions, the UTXO st

The cache handles concurrent reads efficiently, preventing multiple simultaneous fetches of the same external transaction data.

#### Lock Record Pattern for Multi-Record Transactions

When a transaction has more than 20,000 outputs (configurable via `utxo_store_batch_size`), it must be split across multiple Aerospike records. The lock record pattern ensures these multi-record operations complete atomically, preventing data corruption from partial writes or concurrent access.

**Key Components:**

1. **Lock Records**: Temporary Aerospike records that prevent concurrent creation attempts for the same transaction. They use a special index (`0xFFFFFFFF`) that cannot conflict with actual sub-records.

2. **Creating Flag**: A per-record boolean flag that prevents UTXO spending until all records are fully committed. While `creating=true`, the transaction's outputs cannot be spent.

**Two-Phase Commit Protocol:**

- **Phase 1**: Acquire the lock, store the external data, and create all Aerospike records with `creating=true`
- **Phase 2**: Clear the `creating` flag from the child records first, then from the master record (the absence of the master's flag indicates completion)

**Error Handling and Recovery:**

The system automatically recovers from partial failures through multiple paths:

- Retry attempts complete Phase 2 by detecting the existing records
- Re-encountering the transaction during block or subtree processing triggers completion
- Mining operations clear the flags as part of `SetMined`

Lock records have a dynamic TTL (30-300 seconds, based on record count) to prevent permanent locks if a process crashes.

For detailed documentation, see [UTXO Lock Record Pattern for Multi-Record Transactions](../features/utxo_lock_records.md).

### 4.8. Alert System and UTXO Management

The UTXO Store supports advanced UTXO management features, which can be utilized by an alert system.
diff --git a/docs/tutorials/miners/minersGettingStarted.md b/docs/tutorials/miners/minersGettingStarted.md
index 95427f15da..fcfcf5bb1f 100644
--- a/docs/tutorials/miners/minersGettingStarted.md
+++ b/docs/tutorials/miners/minersGettingStarted.md
@@ -109,14 +109,14 @@ docker compose up -d

Force the node to transition to Run mode:

-**Option 1: Using Admin Dashboard (Easiest)**
+#### Option 1: Using Admin Dashboard (Easiest)

```bash
-# Access the dashboard at http://localhost:8090/admin
+# Access the dashboard at http://localhost:8090/admin (default credentials: bitcoin/bitcoin)
# Navigate to FSM State section and select RUNNING or LEGACYSYNCING
```

-**Option 2: Using teranode-cli**
+#### Option 2: Using teranode-cli

```bash
# Transition to Run mode
@@ -158,7 +158,7 @@ curl http://localhost:8090/health

- Access monitoring dashboard:

-    - Open Grafana: http://localhost:3005
+    - Open Grafana: <http://localhost:3005>
    - Login with the default credentials: admin/admin
    - Navigate to the "Teranode - Service Overview" dashboard for key metrics
    - Explore other dashboards for detailed service metrics. For example, you can check the Legacy sync metrics in the "Teranode - Legacy Service" dashboard.
@@ -183,37 +183,37 @@ curl http://localhost:8090/health

1. View all services status:

-```bash
-docker compose ps
-```
+    ```bash
+    docker compose ps
+    ```
2. Check blockchain sync:

-```bash
-curl http://localhost:8090/api/v1/blockstats
-```
+    ```bash
+    curl http://localhost:8090/api/v1/blockstats
+    ```

3. Monitor specific service logs:

-```bash
-docker compose logs -f legacy
-docker compose logs -f blockchain
-docker compose logs -f asset
-```
+    ```bash
+    docker compose logs -f legacy
+    docker compose logs -f blockchain
+    docker compose logs -f asset
+    ```

### Working with Transactions

1. Get transaction details:

-```bash
-curl http://localhost:8090/api/v1/tx/<txid>
-```
+    ```bash
+    curl http://localhost:8090/api/v1/tx/<txid>
+    ```

### Monitoring Your Node

1. Access Grafana dashboards:

-    - Open http://localhost:3005
+    - Open <http://localhost:3005>
    - Navigate to "TERANODE Service Overview"

2. Key metrics to watch:
@@ -227,52 +227,52 @@

1. View logs:

-```bash
-# All services
-docker compose logs
-# Specific service
-docker compose logs blockchain
-```
+    ```bash
+    # All services
+    docker compose logs

+    # Specific service
+    docker compose logs blockchain
+    ```

2. Check disk usage:

-```bash
-df -h
-```
+    ```bash
+    df -h
+    ```

3. Restart a specific service:

-```bash
-docker compose restart blockchain
-```
+    ```bash
+    docker compose restart blockchain
+    ```

4. Restart all services:

-```bash
-docker compose down
-docker compose up -d
-```
+    ```bash
+    docker compose down
+    docker compose up -d
+    ```

### Common Operations

1. Check current block height:

-```bash
-curl http://localhost:8090/api/v1/bestblockheader/json
-```
+    ```bash
+    curl http://localhost:8090/api/v1/bestblockheader/json
+    ```

2. Get block information:

-```bash
-curl http://localhost:8090/api/v1/block/<block-hash>
-```
+    ```bash
+    curl http://localhost:8090/api/v1/block/<block-hash>
+    ```

3. Check UTXO status:

-```bash
-curl http://localhost:8090/api/v1/utxo/<utxo-hash>
-```
+    ```bash
+    curl http://localhost:8090/api/v1/utxo/<utxo-hash>
+    ```

### Next Steps
diff --git a/mkdocs.yml b/mkdocs.yml
index f23d5e109d..1887a99ccc 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -80,6 +80,9 @@ nav:
- Functional Requirement Tests: topics/functionalRequirementTests.md
- Features:
- Two Phase Commit: topics/features/two_phase_commit.md
+- Peer Registry and Reputation: topics/features/peer_registry_reputation.md
+- UTXO Lock Records: topics/features/utxo_lock_records.md
+- Dashboard: topics/dashboard.md
- Core Services:
- Alert Service: topics/services/alert.md
- Asset Server: topics/services/assetServer.md