A local testing framework for Bittensor subnets. Simulate miners and validators without spending TAO.
Every subnet developer faces the same workflow:
- Write miner/validator logic
- Spin up a local subtensor node
- Register wallets (registration costs TAO, even on testnet)
- Run miner + validator scripts manually
- Read logs to figure out what happened
- Repeat for every change
There's no automated way to test whether your incentive mechanism actually works before you deploy it.
bt-test fixes this.
Install from PyPI:

```bash
pip install bt-test
```

Or from source:

```bash
git clone https://github.com/YOUR_USERNAME/bt-test
cd bt-test
pip install -e .
```

Run a simulation from the CLI:

```bash
# Run a quick simulation: 3 miners, 5 rounds
bt-test run

# More miners, more rounds, 20% failure rate
bt-test run --miners 5 --rounds 10 --failure-rate 0.2

# Simulate bad network conditions
bt-test run --miners 4 --timeout-rate 0.3 --latency 500
```

Example output:
```
bt-test — Bittensor Subnet Simulator
Miners: 3 | Rounds: 5 | Latency: 100ms | Failure: 0%

╭──────────────── bt-test results ─────────────────╮
│ ✅ PASSED | 5 rounds | 3 miners                  │
╰──────────────────────────────────────────────────╯

 UID  Name     Success Rate  Timeout Rate  Avg Latency  Final Weight
 0    miner-0  100%          0%            101ms        0.3412
 1    miner-1  100%          0%            111ms        0.3312
 2    miner-2  100%          0%            122ms        0.3276

🏆 Top miners by weight:
  1. miner-0 (uid=0) → weight=0.3412
  2. miner-1 (uid=1) → weight=0.3312
  3. miner-2 (uid=2) → weight=0.3276
```
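A miner's Final Weight is, conceptually, its accumulated score normalized across all miners so the weights sum to 1. A minimal, dependency-free sketch of that idea (`normalize_weights` is a hypothetical helper; bt-test's actual scheme may differ, especially once Yuma Consensus normalization lands):

```python
def normalize_weights(scores: dict[int, float]) -> dict[int, float]:
    """Scale per-miner scores so they sum to 1.0.
    A simple proportional sketch, not necessarily bt-test's exact scheme."""
    total = sum(scores.values())
    if total == 0:
        # no miner scored anything: everyone gets zero weight
        return {uid: 0.0 for uid in scores}
    return {uid: s / total for uid, s in scores.items()}

print(normalize_weights({0: 5.0, 1: 3.0, 2: 2.0}))  # {0: 0.5, 1: 0.3, 2: 0.2}
```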
```python
import asyncio

from bt_test.neuron import MockMiner, MockValidator, MockSynapse
from bt_test.runner import SubnetRunner

# 1. Define your miner logic (mirrors your real neurons/miner.py)
def my_miner_logic(synapse: MockSynapse) -> MockSynapse:
    synapse.response = synapse.data * 2  # multiply-by-2 subnet
    return synapse

# 2. Define your scoring logic (mirrors your real validator scoring)
def my_score_fn(uid: int, synapse: MockSynapse) -> float:
    if not synapse.success or synapse.timeout:
        return 0.0
    return 1.0 if synapse.response == synapse.data * 2 else 0.0

# 3. Wire it together and run
async def main():
    miners = [
        MockMiner(uid=0, forward_fn=my_miner_logic, latency_ms=80),
        MockMiner(uid=1, forward_fn=my_miner_logic, latency_ms=200, failure_rate=0.3),
        MockMiner(uid=2, forward_fn=lambda s: s, latency_ms=50),  # broken miner: echoes input
    ]
    validator = MockValidator(score_fn=my_score_fn)
    runner = SubnetRunner(miners=miners, validator=validator, rounds=10)
    summary = await runner.run()

    print(f"Top miner: uid={summary.top_miners(1)[0][0]}")
    print(f"Weights: {summary.final_weights}")

    # Use in pytest / CI assertions
    assert summary.success_rate[0] >= 0.8
    assert summary.final_weights[0] > summary.final_weights[2]

asyncio.run(main())
```

`MockMiner` parameters:

| Parameter | Type | Default | Description |
|---|---|---|---|
| `uid` | int | required | Unique miner ID (0-255) |
| `forward_fn` | callable | echo input | Your miner logic |
| `latency_ms` | float | 100.0 | Simulated response time (ms) |
| `failure_rate` | float | 0.0 | Probability of a bad response (0.0-1.0) |
| `timeout_rate` | float | 0.0 | Probability of timing out (0.0-1.0) |
| `name` | str | `miner-{uid}` | Display name |
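Conceptually, each simulated round is query → forward → score. A dependency-free sketch of that loop (hypothetical and deliberately simplified; the real `SubnetRunner` also injects latency, failures, and timeouts):

```python
def run_round(miners, score_fn, query):
    """One validator round: send the query to every miner and score each reply.
    `miners` maps uid -> forward function (a stand-in for MockMiner's forward_fn)."""
    return {uid: score_fn(uid, query, forward(query))
            for uid, forward in miners.items()}

miners = {0: lambda x: x * 2, 1: lambda x: x}  # uid 1 echoes input ("broken")
score = lambda uid, query, resp: 1.0 if resp == query * 2 else 0.0
print(run_round(miners, score, 21))  # {0: 1.0, 1: 0.0}
```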
`MockValidator` parameters:

| Parameter | Type | Default | Description |
|---|---|---|---|
| `score_fn` | callable | success=1.0 | Your scoring logic: `(uid, synapse) -> float` |
| `query_fn` | callable | random int | Query generator: `() -> Any` |
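Because `score_fn` is a plain function of `(uid, synapse)`, you can unit-test it without spinning up the runner at all. A sketch using a stand-in dataclass (`FakeSynapse` is hypothetical; it just mirrors the fields the scoring function above reads):

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class FakeSynapse:
    # stand-in with the same fields my_score_fn reads from MockSynapse
    data: Any = 0
    response: Any = None
    success: bool = True
    timeout: bool = False

def my_score_fn(uid: int, synapse) -> float:
    if not synapse.success or synapse.timeout:
        return 0.0
    return 1.0 if synapse.response == synapse.data * 2 else 0.0

# a correct response scores 1.0; the same response with a timeout scores 0.0
assert my_score_fn(0, FakeSynapse(data=21, response=42)) == 1.0
assert my_score_fn(0, FakeSynapse(data=21, response=42, timeout=True)) == 0.0
```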
Run the same checks in CI:

```yaml
# .github/workflows/subnet-test.yml
name: Subnet Tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: pip install bt-test
      - run: python examples/example_multiply.py
```

Planned:

- `--watch` mode: re-run on file changes
- JSON output for CI pipelines
- Configurable Yuma Consensus weight normalization
- Multi-validator support
- Historical round export (CSV)
- Plugin system for subnet-specific protocols
PRs welcome. See CONTRIBUTING.md.
MIT