The bridge between PyTorch research and the Bittensor decentralized intelligence network.
Turn any `torch.nn.Module` into a revenue-generating Bittensor miner — zero boilerplate.
- Inspect — Analyzes your model's `forward()` signature via reflection
- Synthesize — Generates `protocol.py`, `miner.py`, and `Dockerfile` using Python 3.14 t-strings
- Deploy — Drop the output into any GPU host and start mining
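The Inspect step amounts to standard Python reflection. A minimal sketch using the stdlib `inspect` module — here `DummyNet` and `inspect_forward` are hypothetical stand-ins (torch itself is not needed to illustrate the idea):

```python
import inspect

class DummyNet:
    """Stand-in for a torch.nn.Module with a typed forward()."""
    def forward(self, prompt: str, max_tokens: int = 256) -> str:
        return prompt[:max_tokens]

def inspect_forward(model):
    """Extract parameter names, annotations, and defaults from forward()."""
    sig = inspect.signature(model.forward)
    return {
        name: {
            "annotation": p.annotation,
            "default": None if p.default is inspect.Parameter.empty else p.default,
        }
        for name, p in sig.parameters.items()
    }

spec = inspect_forward(DummyNet())
# spec["prompt"]["annotation"] is str; spec["max_tokens"]["default"] == 256
```

The extracted names and annotations are what a generator needs to map `forward()` parameters onto synapse fields.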
```bash
uv add torch2bt
```

```python
import torch2bt as t2b
from my_models import SuperNeuralNet

t2b.package(
    model=SuperNeuralNet(),
    target_subnet=18,
    optimization="fp16",
    wallet_name="mining_key",
)
```

Output: `torch2bt_output/` containing `protocol.py`, `miner.py`, `Dockerfile`, `pyproject.toml`
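To give a feel for what the codegen step emits, here is a hypothetical rendering of a `protocol.py` file. It uses plain f-strings so it runs on any recent Python (the real generator uses Python 3.14 t-strings); `render_protocol` and the field layout are illustrative assumptions, not torch2bt's actual API:

```python
def render_protocol(class_name: str, fields: dict) -> str:
    """Render minimal protocol.py source for a Bittensor Synapse subclass.
    Hypothetical sketch of the codegen step, not torch2bt's real template."""
    field_lines = "\n".join(f"    {name}: {tp}" for name, tp in fields.items())
    return (
        "import bittensor as bt\n"
        "\n"
        f"class {class_name}(bt.Synapse):\n"
        f"{field_lines}\n"
    )

src = render_protocol("TextSynapse", {"prompt": "str", "completion": "str | None"})
print(src)
```

Templating the synapse class from the inspected `forward()` signature is what lets the generated `miner.py` stay model-agnostic.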
| NetUID | Name | Optimizations |
|---|---|---|
| 1 | Text Prompting | FP32/FP16/BF16/INT8/INT4 |
| 18 | Cortex | FP16/BF16 |
```python
from torch2bt.testing import MockValidator

validator = MockValidator(MySynapse, subnet_id=18, forward_fn=my_forward)
result = validator.query({"prompt": "a red cat"})
```

See `examples/` for full runnable scripts:
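Offline testing like this needs no network at all. A minimal mock along these lines — a sketch only; `MiniMockValidator` and `MockSynapse` here are illustrative, not torch2bt's actual implementation:

```python
class MockSynapse:
    """Stand-in synapse: stores the request payload as attributes."""
    def __init__(self, **fields):
        for name, value in fields.items():
            setattr(self, name, value)

class MiniMockValidator:
    """Hypothetical offline validator: builds a synapse, calls forward_fn."""
    def __init__(self, synapse_cls, subnet_id, forward_fn):
        self.synapse_cls = synapse_cls
        self.subnet_id = subnet_id
        self.forward_fn = forward_fn

    def query(self, payload: dict):
        synapse = self.synapse_cls(**payload)
        return self.forward_fn(synapse)

validator = MiniMockValidator(MockSynapse, subnet_id=18,
                              forward_fn=lambda s: s.prompt.upper())
result = validator.query({"prompt": "a red cat"})
# result == "A RED CAT"
```

Because the mock drives `forward_fn` directly, miner logic can be unit-tested without a wallet, hotkey, or chain connection.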
| Script | Subnet | Description |
|---|---|---|
| `sn1_text_prompting.py` | SN1 | Transformer LM → Text Prompting miner |
| `sn18_image_generation.py` | SN18 | Diffusion model → Cortex image miner |
- `inspector.py` — extract model `forward()` signature via reflection
- `codegen.py` — generate `protocol.py`, `miner.py`, `Dockerfile`, `pyproject.toml` using Python 3.14 t-strings
- `subnets/` — protocol registry for SN1 (Text Prompting) and SN18 (Cortex)
- `testing/` — `MockValidator` + `MockSynapse` for offline miner testing
- `t2b.package()` — end-to-end packaging API
- CI — ruff lint/format, zuban type check, pytest
- PyPI metadata — version `0.1.0a1`, classifiers, license, URLs
- Publish `0.1.0a1` to PyPI
- `t2b.deploy(platform="runpod")` — provision GPU instance via RunPod API
- `t2b.deploy(platform="lambda")` — Lambda Labs GPU support
- Auto-register hotkey with `btcli` post-deploy
- Dynamic TAO (dTAO / BIT001) profitability dashboard integration
- Auto-quantization — convert FP32 models to INT4/INT8 on the fly with bitsandbytes
- `uv.lock` generation for deterministic miner environments
- Multi-subnet mining — host multiple models on a single Axon
- Self-healing miners — auto-restart on OOM or network failure
- Expand subnet registry beyond SN1 + SN18
- `t2b.benchmark()` — measure model latency vs subnet timeout requirements
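The planned benchmark can be approximated today with the stdlib alone. A sketch — the `benchmark` helper, percentile choice, and 12-second timeout are assumptions for illustration, not a torch2bt API:

```python
import statistics
import time

def benchmark(forward_fn, payload, runs=20, timeout_s=12.0):
    """Hypothetical latency check: does forward_fn fit the subnet timeout?"""
    latencies = []
    for _ in range(runs):
        t0 = time.perf_counter()
        forward_fn(payload)
        latencies.append(time.perf_counter() - t0)
    p95 = statistics.quantiles(latencies, n=20)[-1]  # ~95th percentile
    return {"p95_s": p95, "within_timeout": p95 <= timeout_s}

report = benchmark(lambda p: sum(range(1000)), {"prompt": "hi"})
```

Checking a tail percentile rather than the mean matters here: validators score on per-request deadlines, so one slow outlier costs more than a fast average earns.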