Aggregator Guide

PoCX Aggregator User Guide

Version: 1.0.0 | Last Updated: 2025-10-28

Wiki Navigation: Plotter Guide | Miner Guide | Aggregator Guide (you are here) | Plot Format | Technical Details


Table of Contents

  • Introduction
  • Requirements
  • Installation
  • Configuration
  • Running the Aggregator
  • Advanced Configuration
  • Dashboard
  • Troubleshooting
  • FAQ

Introduction

What is PoCX Aggregator?

The PoCX Aggregator is a high-performance proxy server that sits between multiple miners and a single upstream pool or wallet. It aggregates submissions from multiple miners, intelligently filters duplicate/inferior submissions, and forwards only the best qualities to reduce upstream load.

How It Works

The aggregator:

  1. Listens for miner connections via JSON-RPC 2.0
  2. Caches mining information from upstream to reduce requests
  3. Filters submissions using per-account or global best tracking
  4. Forwards only the best submissions to upstream pool/wallet
  5. Tracks statistics and submission history in SQLite database
  6. Displays real-time statistics via optional web dashboard
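
For orientation, a miner's request to the aggregator is an HTTP POST carrying a JSON-RPC 2.0 envelope. The example below sketches only the envelope shape; the method name is hypothetical, and the actual methods are defined by the PoCX protocol (see the Technical Details page):

# Sketch of a JSON-RPC 2.0 request to the aggregator (method name is
# hypothetical; consult the Technical Details page for the real protocol).
curl -X POST http://localhost:8080/ \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc": "2.0", "method": "get_mining_info", "params": {}, "id": 1}'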

Key Features

  • High Concurrency: Async/await architecture handling thousands of simultaneous miner connections
  • Intelligent Filtering: Per-account (Pool mode) or global best (Wallet mode) submission tracking
  • Submission Queue: Exponential backoff retry for reliable pool submissions
  • Statistics Tracking: Real-time mining statistics with SQLite persistence
  • Web Dashboard: Optional HTTP dashboard for monitoring performance
  • Database Retention: Configurable automatic cleanup of old submissions
  • PoC Time Calculation: Accurate time-bending formula for deadline estimation

Use Cases

Pool Mining Aggregator:

  • Aggregate submissions from multiple miners to a single pool
  • Reduce pool load by filtering duplicate submissions per account
  • Track best submissions per account across last 3 blocks
  • Automatic retry with exponential backoff for failed submissions

Solo Mining Aggregator:

  • Aggregate submissions from multiple miners to local wallet
  • Track global best quality across all accounts
  • No retry logic (fail fast for local wallet)
  • Optimize for minimal latency

Requirements

Operating Systems

  • Linux: Fully supported (kernel 2.6+, glibc 2.17+)
  • Windows: Fully supported (Windows 10 1809+ or Windows 11)
  • macOS: Fully supported (macOS 10.15 Catalina or later)

Hardware Requirements

Minimum

  • CPU: x86_64 processor
  • RAM: 512 MB
  • Storage: 100 MB for database and logs

Recommended

  • CPU: Multi-core processor for handling many concurrent miners
  • RAM: 1-2 GB for large miner fleets
  • Storage: SSD for database performance
  • Network: Low-latency connection to upstream pool/wallet

Software Requirements

  • Rust: 1.91.0-nightly or later (for building from source)
  • Network: Internet connection to upstream pool or local wallet
  • Upstream: Pool URL or wallet RPC endpoint

Installation

Option 1: Building from Source

Step 1: Install Rust Nightly Toolchain

If you don't have Rust installed, visit rustup.rs and follow the installation instructions.

# Install nightly toolchain
rustup toolchain install nightly --component rustfmt clippy

Step 2: Clone the Repository

git clone https://github.com/PoC-Consortium/pocx.git
cd pocx

# Set nightly as the default toolchain for the PoCX project
rustup override set nightly

Step 3: Build the Aggregator

# Build release version
cargo build --release -p pocx_aggregator

# Binary will be in target/release/pocx_aggregator

Option 2: Pre-built Binaries

Download the latest release from GitHub Releases.

Linux/macOS:

tar -xzf pocx-v*.tar.gz
chmod +x pocx_aggregator
./pocx_aggregator --version

Windows:

# Extract pocx-v*.zip
.\pocx_aggregator.exe --version

Configuration

Configuration File

The aggregator uses a YAML configuration file. By default, it looks for aggregator_config.yaml in the current directory.

Basic Configuration File

Create aggregator_config.yaml:

# PoCX Aggregator Configuration

# Listen address for miner connections
listen_address: "0.0.0.0:8080"

# Expected block time in seconds
block_time_secs: 120

# Upstream pool or wallet configuration
upstream:
  name: "primary_pool"
  url: "http://pool.example.com:8080/pocx"
  # Optional authentication token
  # auth_token: "your_secret_token"
  submission_mode: pool  # or "wallet" for solo mining

# Cache settings
cache:
  mining_info_ttl_secs: 5
  pool_timeout_secs: 30

# Database settings
database:
  path: "aggregator.db"
  retention_days: 7

# Dashboard settings (optional)
dashboard:
  enabled: true
  listen_address: "0.0.0.0:8081"

# Logging configuration
logging:
  level: "info"
  file: "aggregator.log"

Configuration File Structure

Network Settings

listen_address: "0.0.0.0:8080"

  • listen_address: Address to listen on for miner connections
    • "0.0.0.0:8080": Listen on all interfaces, port 8080
    • "127.0.0.1:8080": Listen only on localhost
    • Default: "0.0.0.0:8080"

block_time_secs: 120

  • block_time_secs: Expected block time in seconds
    • Used for network capacity calculation and database retention
    • PoCX: 120 seconds, Burst: 240 seconds
    • Default: 120

Upstream Configuration

upstream:
  name: "primary_pool"
  url: "http://pool.example.com:8080/pocx"
  auth_token: "your_secret_token"  # Optional
  submission_mode: pool  # or "wallet"

  • name: Friendly name for the upstream (used in logs)
  • url: Full URL to upstream pool or wallet JSON-RPC endpoint
  • auth_token (optional): Bearer token for authentication
  • submission_mode: Submission filtering mode
    • pool: Per-account best tracking (recommended for pools)
    • wallet: Global best tracking (recommended for solo mining)
    • Default: pool

Cache Settings

cache:
  mining_info_ttl_secs: 5
  pool_timeout_secs: 30

  • mining_info_ttl_secs: How long to cache mining info (seconds)

    • Lower values: More upstream requests, fresher data
    • Higher values: Less upstream load, potential staleness
    • Default: 5
  • pool_timeout_secs: Upstream request timeout (seconds)

    • Default: 30

Database Settings

database:
  path: "aggregator.db"
  retention_days: 7

  • path: SQLite database file path

    • Stores submission history for statistics
    • Default: "aggregator.db"
  • retention_days: Automatic cleanup period (days)

    • Submissions older than this are deleted
    • 0 = keep forever
    • Default: 7
    • Example: 7 days at 120s blocks = 5,040 blocks retained

Dashboard Settings

dashboard:
  enabled: true
  listen_address: "0.0.0.0:8081"

  • enabled: Enable/disable web dashboard

    • Default: true
  • listen_address: Dashboard listen address

    • Default: "0.0.0.0:8081"

Logging Settings

logging:
  level: "info"
  file: "aggregator.log"

  • level: Log verbosity level

    • Options: "trace", "debug", "info", "warn", "error"
    • Default: "info"
  • file: Log file path

    • Default: "aggregator.log"

Running the Aggregator

Basic Usage

# Start aggregator (looks for aggregator_config.yaml in current directory)
./pocx_aggregator

# Start with custom config file
./pocx_aggregator -c /path/to/my_config.yaml

Command-Line Options

pocx_aggregator [OPTIONS]

OPTIONS:
    -c, --config <FILE>    Path to configuration file
                           [default: aggregator_config.yaml]
    -h, --help             Print help information
    -V, --version          Print version information

Startup Messages

[INFO] Starting PoCX Aggregator v1.0.0
[INFO] Loaded configuration from aggregator_config.yaml
[INFO] Listening on 0.0.0.0:8080
[INFO] Upstream: primary_pool (http://pool.example.com:8080/pocx)
[INFO] Using Pool submission mode (per-account best tracking)
[INFO] Starting dashboard on 0.0.0.0:8081
[INFO] Dashboard listening on 0.0.0.0:8081

Connecting Miners

Point your miners to the aggregator by updating their miner_config.yaml:

chains:
  - name: "aggregator"
    base_url: "http://localhost:8080"
    api_path: "/"
    accounts:
      - account: "your_account_id"

Advanced Configuration

Submission Modes

Pool Mode (Per-Account Best)

Best for aggregating to mining pools:

upstream:
  submission_mode: pool

Behavior:

  • Tracks best quality per account for last 3 blocks
  • Drops a submission if its quality is ≥ the known best for that account (lower quality is better)
  • Forwards submissions with retry (exponential backoff: 1s, 2s, 4s, 8s, 16s)
  • Max 5 retries, 4-minute staleness check
  • Ideal for reducing pool load while ensuring all accounts submit

Example:

Block 1000, Account A: Quality 50 → Forward ✓
Block 1000, Account A: Quality 60 → Drop (worse than 50)
Block 1000, Account B: Quality 55 → Forward ✓ (different account)
Block 1001: Reset tracking for new block
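
The filtering rule can be sketched as follows (illustrative shell logic only, not the aggregator's actual implementation; lower quality values are better):

#!/usr/bin/env bash
# Illustrative sketch of per-account best tracking, keyed by block height + account.
declare -A best

should_forward() {
  local height=$1 account=$2 quality=$3
  local key="${height}:${account}"
  # Forward if nothing has been seen for this account/block yet,
  # or if the new quality is strictly better (lower).
  if [[ -z "${best[$key]}" || "$quality" -lt "${best[$key]}" ]]; then
    best[$key]=$quality
    echo "forward"
  else
    echo "drop"
  fi
}

should_forward 1000 A 50   # forward
should_forward 1000 A 60   # drop (not better than 50)
should_forward 1000 B 55   # forward (different account)

# The real aggregator additionally keeps state only for the last 3 blocks
# and retries upstream submissions with exponential backoff.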

Wallet Mode (Global Best)

Best for solo mining to local wallet:

upstream:
  submission_mode: wallet

Behavior:

  • Tracks global best quality across ALL accounts for last 3 blocks
  • Drops a submission if its quality is ≥ the known best across all accounts (lower quality is better)
  • No retry logic (fail fast for local wallet)
  • Ideal for minimizing wallet load and latency

Example:

Block 1000, Account A: Quality 50 → Forward ✓
Block 1000, Account B: Quality 60 → Drop (worse than global best 50)
Block 1000, Account C: Quality 40 → Forward ✓ (better than 50)
Block 1001: Reset tracking for new block

Database Retention

Control automatic cleanup:

database:
  retention_days: 0  # Keep forever
  # retention_days: 7  # Keep 7 days (default)
  # retention_days: 30  # Keep 30 days

Retention calculation:

Blocks retained = (retention_days × 86400) / block_time_secs
Example: (7 × 86400) / 120 = 5,040 blocks
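
You can evaluate this from the shell to check your own settings:

# Blocks retained = (retention_days * 86400) / block_time_secs
retention_days=7
block_time_secs=120
echo $(( retention_days * 86400 / block_time_secs ))   # prints 5040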

Authentication

Protect upstream with authentication:

upstream:
  auth_token: "your_secret_bearer_token"

Token is sent as Authorization: Bearer your_secret_bearer_token header.
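
To check the token by hand, you can send the same header to the upstream yourself; the exact endpoint and request body depend on your upstream's API:

# Manual request carrying the same Authorization header the aggregator adds.
curl -H "Authorization: Bearer your_secret_bearer_token" \
     http://pool.example.com:8080/pocx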


Dashboard

Accessing the Dashboard

Open your web browser to: http://localhost:8081

Dashboard Features

Real-time Statistics:

  • Current Block: Height, base target, generation signature
  • Best Submission: Current block's best quality and estimated PoC time
  • Network Capacity: Calculated from current best quality
  • 24h Statistics: Submission count, best quality, average quality
  • Account Statistics: Per-account submission tracking

Historical Data:

  • Submission history with timestamps
  • Quality distribution graphs
  • Account participation tracking

Dashboard API

The dashboard provides a JSON API:

# Get current statistics
curl http://localhost:8081/stats

# Response format:
{
  "current_block": {
    "height": 1000,
    "base_target": 4398046511104,
    "generation_signature": "abcd...",
    "best_quality": 50000000,
    "best_poc_time": 240,
    "network_capacity": 1099511627776
  },
  "statistics_24h": {
    "total_submissions": 1440,
    "best_quality": 10000000,
    "average_quality": 50000000
  },
  "accounts": [
    {
      "account_id": "...",
      "submissions": 720,
      "best_quality": 15000000
    }
  ]
}
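
For scripting or quick checks, the response can be piped through jq (assuming jq is installed):

# Current block height and best quality
curl -s http://localhost:8081/stats | jq '.current_block.height, .current_block.best_quality'

# Total submissions over the last 24 hours
curl -s http://localhost:8081/stats | jq '.statistics_24h.total_submissions'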

Troubleshooting

Common Issues

Miners Can't Connect

Symptom:

Miner: Failed to connect to http://localhost:8080

Solutions:

  1. Check aggregator is running: ps aux | grep pocx_aggregator
  2. Verify listen address: Check listen_address in config
  3. Check firewall: Ensure port 8080 is open
  4. Test connection: curl http://localhost:8080 should respond

Upstream Connection Failures

Symptom:

[ERROR] Failed to get mining info from 'primary_pool': Connection refused

Solutions:

  1. Verify upstream URL is correct
  2. Check upstream is running and accessible
  3. Test upstream: curl http://pool.example.com:8080/pocx
  4. Check authentication token if required

No Submissions Forwarded

Symptom: Aggregator receives submissions but doesn't forward to upstream.

Solutions:

  1. Check logs for filter messages:
    [DEBUG] Filtered submission: quality 60 not better than best 50
    
  2. Verify submission mode is appropriate (pool vs wallet)
  3. Check upstream timeout settings
  4. Enable debug logging: level: "debug" in config

Database Errors

Symptom:

[ERROR] Database error: unable to open database file

Solutions:

  1. Check file permissions: ls -l aggregator.db
  2. Ensure parent directory exists and is writable
  3. Check disk space: df -h
  4. Try deleting database to recreate: rm aggregator.db

Debug Logging

Enable detailed logging for troubleshooting:

logging:
  level: "debug"  # Very verbose
  # level: "trace"  # Extremely verbose (includes all protocol messages)

View logs:

# Follow log file
tail -f aggregator.log

# Search for errors
grep ERROR aggregator.log

# Filter specific account
grep "account_id_here" aggregator.log

Performance Issues

High CPU Usage:

  • Check miner count: Too many concurrent miners
  • Increase mining info TTL: Less frequent upstream refreshes and cache updates
  • Review log level: Debug/trace logging is CPU intensive

High Memory Usage:

  • Check database size: Large retention period
  • Reduce retention days: Smaller database
  • Check miner count: Each connection uses memory

Slow Response Times:

  • Check upstream latency: Network issues
  • Increase pool timeout: Slow upstream
  • Check database performance: Use SSD for database

FAQ

How many miners can one aggregator handle?

The aggregator is designed to handle thousands of concurrent miner connections. Actual capacity depends on:

  • Server hardware (CPU, RAM, network)
  • Submission rate per miner
  • Database performance
  • Network latency to upstream

Typical deployment: 100-1000 miners per aggregator instance.

Should I use Pool mode or Wallet mode?

Use Pool mode when:

  • Aggregating to a mining pool
  • Multiple accounts mining
  • Want retry logic for failed submissions
  • Pool may have downtime

Use Wallet mode when:

  • Solo mining to local wallet
  • Want minimal latency
  • Wallet is reliable (local/same machine)
  • Want to minimize wallet load

How does the 3-block tracking work?

The aggregator tracks best submissions for the last 3 blocks to handle:

  • Network latency: New block announcements may be delayed
  • Reorgs: Chain reorganizations may reference older blocks
  • Late submissions: Miners may submit for previous blocks

After 3 blocks, tracking for older blocks is discarded to prevent memory growth.

What happens if upstream goes offline?

Pool mode:

  • Submissions are queued with retry (up to 5 attempts)
  • Exponential backoff: 1s, 2s, 4s, 8s, 16s (~31s total)
  • After 4 minutes, stale submissions are dropped
  • Miners receive success response immediately (queued)
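
As a rough sketch, the retry schedule above behaves like this loop (try_submit is a hypothetical placeholder for one upstream submission attempt):

# Illustration only: pool-mode retry schedule of 1s, 2s, 4s, 8s, 16s (~31s total).
for delay in 1 2 4 8 16; do
  if try_submit; then
    break
  fi
  sleep "$delay"
done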

Wallet mode:

  • Submissions fail immediately (no retry)
  • Miners receive error response
  • No queuing or retry logic

Can I run multiple aggregators?

Yes! Common scenarios:

Load balancing:

  • Run multiple aggregators behind a load balancer
  • Each aggregator connects to same upstream
  • Configure different listen ports per aggregator
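
For example, two instances on the same host can be started from separate config files that differ at least in listen_address (and ideally in database path); the file names below are illustrative:

# Instance A on port 8080, instance B on port 8090 (set in each config file)
./pocx_aggregator -c aggregator_a.yaml &
./pocx_aggregator -c aggregator_b.yaml &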

Redundancy:

  • Run multiple aggregators with different upstreams
  • Point different miner groups to different aggregators
  • Provides failover capability

How is PoC time calculated?

The aggregator uses the time-bending formula:

poc_time = block_time × (quality / base_target)^(1/3)

This provides accurate deadline estimation based on:

  • Current block's base target
  • Submitted quality (adjusted by miner's base target)
  • Network block time
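
To evaluate the formula by hand, awk works as a quick calculator; QUALITY and BASE_TARGET below are placeholders to substitute with your own values:

# poc_time = block_time * (quality / base_target)^(1/3)
# QUALITY and BASE_TARGET are placeholders -- substitute real values.
awk -v bt=120 -v q="$QUALITY" -v b="$BASE_TARGET" \
    'BEGIN { printf "%.2f seconds\n", bt * (q / b)^(1/3) }'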

Does the aggregator validate plot files?

No. The aggregator:

  • Trusts miners to provide valid qualities
  • Does not verify nonce calculations
  • Does not check compression levels
  • Forwards submissions to upstream for validation

Upstream pool/wallet performs validation.

What data is stored in the database?

The SQLite database stores:

  • Submissions: Account, nonce, quality, height, timestamp
  • Blocks: Height, base target, generation signature, timestamp
  • Statistics: Best qualities, submission counts, account participation

Database is NOT used for:

  • Mining info caching (in-memory only)
  • Submission queue (in-memory only)
  • Plot file data
  • Account keys/secrets

Wiki Navigation: Plotter Guide | Miner Guide | Aggregator Guide (you are here) | Plot Format | Technical Details
