Proof of Optimization: Block generation triggered by verified ML improvements, not wasted hash computations.
DOIN is a decentralized system where nodes collaboratively optimize machine learning models using blockchain consensus. Instead of proof-of-work, blocks are generated when the weighted sum of verified optimization improvements exceeds a dynamic threshold, so every unit of compute counts toward real progress.
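The trigger condition above can be sketched in a few lines. This is an illustrative sketch, not the doin-core API: the function name, the `(improvement, weight)` pair shape, and calling the threshold `beta` are all assumptions.

```python
# Illustrative sketch of the Proof-of-Optimization trigger; names and data
# shapes are assumptions, not the real doin-core API.

def should_generate_block(verified_improvements, beta):
    """verified_improvements: (performance_delta, evaluator_weight) pairs.

    A block is generated once the weighted sum of verified improvements
    since the last block reaches the dynamic threshold beta.
    """
    weighted_sum = sum(delta * weight for delta, weight in verified_improvements)
    return weighted_sum >= beta

# Three verified optimae; weights reflect evaluator reputation.
pending = [(0.04, 1.0), (0.01, 0.5), (0.08, 0.8)]
print(should_generate_block(pending, beta=0.1))  # 0.109 >= 0.1 -> True
```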
```
┌────────────────┐       ┌────────────────┐       ┌────────────────┐
│   doin-core    │       │   doin-node    │       │  doin-plugins  │
│                │       │                │       │                │
│ • Consensus    │◄─────►│ • Transport    │──────►│ • Quadratic    │
│ • Models       │       │ • Chain        │       │ • Predictor    │
│ • Crypto       │       │ • Sync         │       │ • (custom)     │
│ • Protocol     │       │ • Flooding     │       │                │
│ • Plugins      │       │ • Unified      │       │                │
│ • Coin         │       │   Node         │       │                │
│ • Difficulty   │       │                │       │                │
└────────────────┘       └────────────────┘       └────────────────┘
```
5 packages:
| Package | Description | Tests |
|---|---|---|
| doin-core | Consensus, models, crypto, protocol, coin, difficulty | 278 |
| doin-node | Unified node: transport, chain, sync, flooding, OLAP | 290 |
| doin-optimizer | Standalone optimizer runner | 5 |
| doin-evaluator | Standalone evaluator service | 7 |
| doin-plugins | Domain plugins (quadratic reference + predictor ML) | 43 |
Total: 623 tests passing
```bash
curl -sSL https://raw.githubusercontent.com/harveybc/doin-core/master/scripts/install.sh | bash
```

```bash
# Prerequisites: Python 3.10+, pip, git
sudo apt update && sudo apt install -y python3 python3-pip python3-venv git

# Create virtual environment (recommended)
python3 -m venv ~/doin-venv && source ~/doin-venv/bin/activate

# Install packages (order matters: core first)
pip install git+https://github.com/harveybc/doin-core.git
pip install git+https://github.com/harveybc/doin-node.git
pip install git+https://github.com/harveybc/doin-optimizer.git
pip install git+https://github.com/harveybc/doin-evaluator.git
pip install git+https://github.com/harveybc/doin-plugins.git
```

```bash
mkdir ~/doin && cd ~/doin

# Clone all repos
for pkg in doin-core doin-node doin-optimizer doin-evaluator doin-plugins; do
  git clone https://github.com/harveybc/$pkg.git
done

# Install in editable mode
pip install -e doin-core
pip install -e doin-node
pip install -e doin-optimizer
pip install -e doin-evaluator
pip install -e doin-plugins

# Install dev dependencies
pip install pytest pytest-asyncio pytest-cov

# Run all tests
for pkg in doin-core doin-node doin-optimizer doin-evaluator doin-plugins; do
  echo "=== $pkg ===" && cd $pkg && python -m pytest tests/ -q && cd ..
done
```

```bash
# Create a minimal node configuration
cat > config.json << 'EOF'
{
  "host": "0.0.0.0",
  "port": 8470,
  "data_dir": "./doin-data",
  "bootstrap_peers": [],
  "domains": [{
    "domain_id": "quadratic",
    "optimize": true,
    "evaluate": true,
    "has_synthetic_data": true
  }]
}
EOF

# Start the node
doin-node --config config.json
```

```bash
# Launch 3 nodes automatically
./scripts/deploy-testnet.sh 3

# Or 5 nodes with clean data
./scripts/deploy-testnet.sh 5 --clean
```

```bash
# Deploy evaluator node to a remote server
./scripts/deploy-remote.sh user@server --port 8470 --peers seed1.doin.net:8470,seed2.doin.net:8470

# Deploy optimizer node
./scripts/deploy-remote.sh user@gpu-server --optimize --port 8470 --peers seed1.doin.net:8470
```

```bash
# Inspect a running node
curl http://localhost:8470/status | python3 -m json.tool
```
```bash
curl http://localhost:8470/chain/status
```

```
Optimizer                      Network                        Evaluators
    │                             │                              │
    │ 1. Optimize ML model        │                              │
    │ 2. Commit hash(params) ────►│ Flood to all nodes           │
    │                             │                              │
    │ 3. Reveal params + nonce ──►│ Verify hash matches          │
    │                             │                              │
    │                             │ 4. Select random quorum ────►│
    │                             │                              │
    │                             │ 5. Generate synthetic ──────►│ (different per evaluator)
    │                             │ 6. Evaluate model ──────────►│
    │                             │ 7. Vote on performance ─────►│
    │                             │                              │
    │                             │ 8. Quorum decides            │
    │                             │ 9. Distribute coin reward    │
    │                             │ 10. Update reputation        │
    │◄── Coins + reputation ──────│                              │
    │                             │◄──── Coins + reputation ─────│
```
- Block reward: 50 DOIN (halves every 210,000 blocks)
- Max supply: 21,000,000 DOIN
- Distribution: 65% optimizers, 30% evaluators, 5% block generator
- Proportional to work: each optimizer's share is scaled by `effective_increment × reward_fraction`
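As a worked example of the distribution rules above, here is a sketch. The helper names are invented, and the exact per-optimizer scaling is an assumption based on the `effective_increment × reward_fraction` rule; only the 50 DOIN reward, the 210,000-block halving, and the 65/30/5 split come from the text.

```python
# Sketch of the coin distribution rules above; helper names are invented.

HALVING_INTERVAL = 210_000

def block_reward(height):
    """50 DOIN initially, halving every 210,000 blocks."""
    return 50.0 / (2 ** (height // HALVING_INTERVAL))

def split_reward(reward, optimizer_increments):
    """optimizer_increments: {optimizer_id: effective_increment}."""
    optimizer_pool = reward * 0.65            # 65% to optimizers
    evaluator_pool = reward * 0.30            # 30% to evaluators
    generator_cut = reward * 0.05             # 5% to the block generator
    total = sum(optimizer_increments.values())
    shares = {oid: optimizer_pool * inc / total
              for oid, inc in optimizer_increments.items()}
    return shares, evaluator_pool, generator_cut

# Two optimizers: A contributed 3x the effective improvement of B.
shares, evaluators, generator = split_reward(block_reward(0), {"A": 0.06, "B": 0.02})
print(shares, evaluators, generator)
```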
- Epoch-based (every 100 blocks): major correction, clamped to 4× max change
- Per-block EMA (α = 0.1): smooth inter-epoch corrections, ±2% max per block
- Target: 10-minute block time (configurable)
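The two-level adjustment can be sketched as follows. The constants (100-block epochs, 4× clamp, α = 0.1, ±2% cap, 10-minute target) come from the list above; the update rules themselves, including the assumption that the threshold moves inversely with block time, are illustrative.

```python
# Illustrative two-level difficulty adjustment for the block threshold.
# Constants mirror the documented values; the update rules are a sketch.

TARGET_BLOCK_TIME = 600.0   # 10 minutes, configurable
ALPHA = 0.1                 # per-block EMA smoothing

def epoch_adjust(threshold, avg_block_time):
    """Every 100 blocks: major correction, clamped to a 4x change."""
    factor = TARGET_BLOCK_TIME / avg_block_time  # slow blocks -> lower threshold
    factor = max(0.25, min(4.0, factor))
    return threshold * factor

def per_block_adjust(threshold, ema_block_time, last_block_time):
    """Each block: EMA-smoothed correction, capped at +/-2%."""
    ema = ALPHA * last_block_time + (1 - ALPHA) * ema_block_time
    factor = max(0.98, min(1.02, TARGET_BLOCK_TIME / ema))
    return threshold * factor, ema

print(epoch_adjust(1.0, 1200.0))  # blocks twice as slow -> threshold halved: 0.5
```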
- Commit-reveal (anti-front-running)
- Random quorum selection (anti-collusion)
- Asymmetric reputation penalties (3× penalty vs 1× reward)
- Resource limits + bounds validation (anti-DoS)
- Finality checkpoints (anti-rewrite)
- Reputation decay + EMA (anti-farming)
- Min reputation threshold (anti-sybil)
- External checkpoint anchoring (51% defense)
- Fork choice: heaviest chain (anti-selfish-mining)
- Per-evaluator deterministic seeds (anti-overfitting)
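The first defense, commit-reveal, can be shown in a few lines of standard-library Python. This is a minimal sketch of the mechanism, not DOIN's actual wire format or hashing scheme.

```python
# Minimal commit-reveal sketch using only the standard library. The optimizer
# first publishes hash(params || nonce); evaluators cannot front-run the
# parameters because only the hash is visible until the reveal.

import hashlib
import json
import secrets

def commit(params: dict, nonce: bytes) -> str:
    """Commitment: hash over canonical JSON plus a secret nonce."""
    payload = json.dumps(params, sort_keys=True).encode() + nonce
    return hashlib.sha256(payload).hexdigest()

def verify_reveal(commitment: str, params: dict, nonce: bytes) -> bool:
    """On reveal, anyone can recompute the hash and check it matches."""
    return commit(params, nonce) == commitment

params = {"weights": [0.1, -0.4, 2.0]}
nonce = secrets.token_bytes(16)
c = commit(params, nonce)

print(verify_reveal(c, params, nonce))              # honest reveal: True
print(verify_reveal(c, {"weights": [9.9]}, nonce))  # tampered params: False
```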
OPTIMAE_ACCEPTED transactions carry experiment tracking metadata:
- `experiment_id`, `round_number`, `time_to_this_result_seconds`
- `optimization_config_hash`, `data_hash` (hashes only; no raw data on-chain)
The blockchain itself becomes a distributed OLAP cube. Every node syncing the chain gets the full experiment history of all participants, enabling L3 meta-optimizer training across the entire network.
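As a sketch of the kind of rollup this enables, the snippet below aggregates records shaped like the OPTIMAE_ACCEPTED metadata per experiment. The record field names mirror the list above; the aggregation code itself is illustrative, not a doin-node API.

```python
# OLAP-style rollup over on-chain experiment metadata. Field names mirror
# the OPTIMAE_ACCEPTED metadata; the aggregation is illustrative only.

from collections import defaultdict

records = [
    {"experiment_id": "exp-1", "round_number": 1, "time_to_this_result_seconds": 40.0},
    {"experiment_id": "exp-1", "round_number": 2, "time_to_this_result_seconds": 95.0},
    {"experiment_id": "exp-2", "round_number": 1, "time_to_this_result_seconds": 30.0},
]

def summarize(records):
    """Group per experiment: round count, last round, total wall time."""
    by_exp = defaultdict(list)
    for r in records:
        by_exp[r["experiment_id"]].append(r)
    return {
        eid: {
            "rounds": len(rs),
            "last_round": max(r["round_number"] for r in rs),
            "total_seconds": sum(r["time_to_this_result_seconds"] for r in rs),
        }
        for eid, rs in by_exp.items()
    }

print(summarize(records))
```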
| Endpoint | Method | Description |
|---|---|---|
| `/health` | GET | Health check |
| `/status` | GET | Full node status (chain, peers, tasks, security, coin, difficulty) |
| `/chain/status` | GET | Chain height, tip hash, finalized height |
| `/chain/blocks?from=X&to=Y` | GET | Fetch blocks by range (max 50) |
| `/chain/block/{index}` | GET | Fetch single block |
| `/tasks/pending` | GET | List pending tasks |
| `/tasks/claim` | POST | Claim a task |
| `/tasks/complete` | POST | Complete a task |
| `/inference` | POST | Submit inference request |
| `/stats` | GET | Experiment tracker stats + OLAP data |
| `/stats/experiments` | GET | List all experiments with summaries |
| `/stats/rounds?experiment_id=X&limit=N` | GET | Round history for an experiment |
| `/stats/chain-metrics?domain_id=X` | GET | On-chain experiment metrics |
| `/stats/export` | GET | Download OLAP database |
| `/fees` | GET | Fee market stats |
| `/peers` | GET | Peer list |
| Document | Description |
|---|---|
| NETWORK.md | Network architecture & protocol |
| SECURITY.md | 25 attack vectors & defenses |
| SCALABILITY.md | Scalability analysis & roadmap |
| INSTALL.md | Detailed installation guide |
| doin-paper.pdf | IEEE-style academic paper |
Create custom domains by implementing three plugin interfaces:
```python
from doin_core.plugins.base import OptimizationPlugin, InferencePlugin, SyntheticDataPlugin

class MyOptimizer(OptimizationPlugin):
    def optimize(self, current_best_params, current_best_performance):
        # Your ML training logic here
        return new_params, new_performance

class MyInferencer(InferencePlugin):
    def evaluate(self, parameters, data=None):
        # Evaluate the model
        return performance_score

class MySyntheticData(SyntheticDataPlugin):
    def generate(self, seed=None):
        # Generate synthetic test data
        return {"data": [...], "labels": [...]}
```

Register via `pyproject.toml` entry points:
```toml
[project.entry-points."doin.optimization"]
my_domain = "my_package:MyOptimizer"

[project.entry-points."doin.inference"]
my_domain = "my_package:MyInferencer"

[project.entry-points."doin.synthetic_data"]
my_domain = "my_package:MySyntheticData"
```

Real multi-node results on consumer hardware (LAN, no cloud):
| Setup | Rounds | Speedup |
|---|---|---|
| Single node | 39 | 1× |
| Dragon (RTX 4090) + Omega (RTX 4070) | 5–6 | ~7× |
| Setup | Rounds | Time |
|---|---|---|
| Omega solo | 95 | 1592s |
| Dragon solo | 100 | 1681s |
| Combined | 78 (Omega) | 1292s (19% faster) |
Speedup comes from champion migration: when one node finds a better solution, it shares the parameters on-chain and other nodes adopt them (island model). A simple random-step optimizer was used; a real GA/NEAT with crossover would benefit significantly more.
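The mechanism can be reproduced with a toy simulation. Everything below is illustrative (a hand-rolled random-step optimizer on a 1-D quadratic), not DOIN code; it just shows why sharing the champion helps.

```python
# Toy island-model simulation: several random-step "nodes" minimize a 1-D
# quadratic; after each round every node adopts the current champion, the
# way DOIN nodes adopt better parameters shared on-chain.

import random

def loss(x):
    return (x - 3.0) ** 2  # toy objective, minimum at x = 3

def run(n_nodes=2, rounds=200, migrate=True, seed=0):
    rng = random.Random(seed)
    best = [rng.uniform(-10.0, 10.0) for _ in range(n_nodes)]
    for _ in range(rounds):
        for i in range(n_nodes):
            candidate = best[i] + rng.gauss(0.0, 0.5)  # random step
            if loss(candidate) < loss(best[i]):        # greedy accept
                best[i] = candidate
        if migrate:  # champion migration: everyone adopts the best solution
            champion = min(best, key=loss)
            best = [champion] * n_nodes
    return min(loss(b) for b in best)

print(run(migrate=True), run(migrate=False))
```

With greedy acceptance the champion's loss is non-increasing, so migration lets slower islands catch up to the luckiest node each round; a crossover-based GA/NEAT would gain even more, since the migrated champion also enriches each island's gene pool.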
- Fork the relevant repo
- Create a feature branch
- Make changes + add tests
- Run full test suite
- Submit a pull request
MIT License; see each package for details.
Harvey Bastidas (harveybc)