Your PC has unused power. We turn it into AI.
Cellule.ai is an open-source distributed LLM inference network. Anyone can contribute computing power (CPU, GPU) to run AI models and earn $IAMINE tokens. Two modes: plug & play or bring your own LLMs.
pip install iamine-ai -i https://cellule.ai/pypi --extra-index-url https://pypi.org/simple
python -m iamine worker --auto

That's it. Your machine auto-detects hardware, discovers the best pool, downloads the right model, and starts earning.
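The auto-detection step can be pictured roughly like this. This is an illustrative sketch, not the shipped worker code: the function names and the quantization choice are assumptions, and the real detector inside `python -m iamine worker --auto` is more thorough.

```python
# Hypothetical sketch of auto-mode hardware detection.
import os
import platform
import shutil

def detect_hardware() -> dict:
    """Probe the local machine for CPU count and an available GPU backend."""
    gpu = None
    if shutil.which("nvidia-smi"):      # NVIDIA driver tools present
        gpu = "cuda"
    elif shutil.which("rocm-smi"):      # AMD ROCm tools present
        gpu = "rocm"
    elif platform.system() == "Darwin": # Apple Silicon -> Metal
        gpu = "metal"
    return {"cpus": os.cpu_count() or 1, "gpu": gpu}

def pick_quantization(hw: dict) -> str:
    """Illustrative model choice: heavier GGUF quantization when a GPU is available."""
    return "Q4_K_M" if hw["gpu"] else "Q4_0"
```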
Bring your own LLM backends (llama-server, vLLM, Ollama). Full control over models and hardware.
pip install iamine-ai -i https://cellule.ai/pypi --extra-index-url https://pypi.org/simple
python -m iamine proxy -c proxy.json

Example proxy.json:
{
"pool_url": "wss://cellule.ai/ws",
"backends": [
{
"name": "Reasoning",
"url": "http://127.0.0.1:8080",
"model": "Qwen3-30B-A3B",
"model_path": "models/Qwen3-30B-A3B-Instruct-Q4_K_M.gguf",
"worker_id": "MyWorker-reasoning",
"bench_tps": 60.0
}
]
}

Full guide: cellule.ai/docs/proxy-mode.html
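A loader for the proxy.json above might look like the following. The field names come from the example config; the validation rules themselves are an assumption, not the proxy's actual behavior.

```python
# Hypothetical proxy.json loader with basic field validation.
import json

REQUIRED_BACKEND_FIELDS = {"name", "url", "model"}

def load_proxy_config(text: str) -> dict:
    """Parse proxy.json and check the fields the example relies on."""
    cfg = json.loads(text)
    if "pool_url" not in cfg:
        raise ValueError("proxy.json must set pool_url")
    for backend in cfg.get("backends", []):
        missing = REQUIRED_BACKEND_FIELDS - backend.keys()
        if missing:
            raise ValueError(f"backend missing fields: {sorted(missing)}")
    return cfg

cfg = load_proxy_config("""{
  "pool_url": "wss://cellule.ai/ws",
  "backends": [{"name": "Reasoning",
                "url": "http://127.0.0.1:8080",
                "model": "Qwen3-30B-A3B"}]
}""")
```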
Workers (your PC) Pool (cellule.ai) Users
+--------------+ +-------------------+ +--------+
| Auto worker |<------------>| Smart Router |<-------->| API |
| or Proxy | WebSocket | - gap detection | HTTP | REST |
| + GGUF model | | - load balancing | +--------+
+--------------+ | - RAG memory |
+-------------------+
^ ^
+------+ +------+
| |
+--------+------+ +--------+------+
| Federated | | Federated |
| Pool (Docker) | | Pool (Docker) |
+---------------+ +---------------+
- You share your PC's power — CPU or GPU runs AI models (GGUF format)
- M12 intelligent placement — the network detects where you're most useful
- Pools federate — multiple pools form a molecule (RAID-like resilience)
- Workers auto-migrate — if a pool goes down, workers move to the best available
- You earn $IAMINE tokens — every token generated earns credits (60% to worker)
# 1. Create .env
echo "DB_PASS=your-strong-password
POOL_NAME=my-pool
POOL_URL=http://my-public-ip:8080
ADMIN_PASSWORD=my-admin-pass" > .env
# 2. Launch
docker compose up -d
# 3. Register with the federation
# Contact the Cellule.ai community for trust promotion

Docker image: celluleai/pool:0.2.55
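After `docker compose up -d`, you can poll the pool's status endpoint (the `/v1/status` path listed under Links) until it answers. The polling parameters and the injectable `fetch` hook are assumptions made for the sketch.

```python
# Hypothetical readiness check: poll <pool>/v1/status until it returns JSON.
import json
import time
import urllib.request

def pool_ready(base_url: str, timeout: float = 60.0,
               fetch=lambda url: urllib.request.urlopen(url, timeout=5).read()) -> bool:
    """Return True once base_url + /v1/status answers with JSON, else False on timeout."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            json.loads(fetch(base_url + "/v1/status"))
            return True
        except Exception:
            time.sleep(2)  # pool still starting; retry shortly
    return False
```

The `fetch` parameter exists only so the loop can be exercised without a live pool.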
- Two modes — Auto (plug & play) or Proxy (bring your own LLMs)
- Multi-platform — Linux, macOS, Windows (CPU, NVIDIA CUDA, AMD ROCm, Apple Metal)
- M12 recruitment — pools detect capability gaps and attract the right workers
- Intelligent placement — workers discover pools and join where they're most useful
- Federation — pools communicate via Ed25519-signed protocol
- Auto-migration — workers failover to the best pool in ~35 seconds
- Auto-update — pool pushes updates to workers via WebSocket
- Infinite memory — 3-level compaction (RAM -> LLM summary -> encrypted archive)
- RAG memory — persistent vectorized facts across conversations (pgvector)
- Zero-knowledge — conversations encrypted with user token
- OpenAI-compatible API — drop-in replacement for /v1/chat/completions
- $IAMINE economy — 60% worker / 20% exec pool / 10% origin pool / 10% treasury
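The $IAMINE split from the bullet above is plain arithmetic. A minimal sketch (the `distribute` helper is illustrative, not the pool's accounting code):

```python
# 60/20/10/10 credit split per the $IAMINE economy bullet.
SPLIT = {"worker": 0.60, "exec_pool": 0.20, "origin_pool": 0.10, "treasury": 0.10}

def distribute(credits: float) -> dict:
    """Split earned credits between the parties."""
    return {party: round(credits * share, 6) for party, share in SPLIT.items()}

print(distribute(100))
# -> {'worker': 60.0, 'exec_pool': 20.0, 'origin_pool': 10.0, 'treasury': 10.0}
```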
| Component | Role | Stack |
|---|---|---|
| worker | Loads GGUF model, runs inference | Python, llama-cpp-python |
| proxy | Connects existing llama-servers to pool | Python, aiohttp, websockets |
| pool | Routes requests, manages workers, federates | Python, FastAPI, PostgreSQL |
| Docker pool | Self-contained pool + postgres | Docker Compose, pgvector |
curl -X POST https://cellule.ai/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_TOKEN" \
  -d '{"messages":[{"role":"user","content":"Hello!"}],"max_tokens":200}'

- 3 federated pools (VPS + master + gladiator)
- 8 workers across heterogeneous hardware
- Models: Qwen3.5 2B/4B/9B/35B, Qwen3 Coder 30B, Qwen3 30B Instruct
- Throughput: 8–105 tokens/sec per worker
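The curl call shown earlier can also be made from Python with only the standard library. `YOUR_TOKEN` is a placeholder for your own API token; the helper names are this sketch's, not part of the project.

```python
# Stdlib-only client for the OpenAI-compatible /v1/chat/completions endpoint.
import json
import urllib.request

def build_chat_request(messages, token, base_url="https://cellule.ai"):
    """Build the POST request matching the curl example."""
    return urllib.request.Request(
        base_url + "/v1/chat/completions",
        data=json.dumps({"messages": messages, "max_tokens": 200}).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
    )

def chat(messages, token):
    """Send the request and decode the JSON response."""
    with urllib.request.urlopen(build_chat_request(messages, token)) as resp:
        return json.loads(resp.read())

# reply = chat([{"role": "user", "content": "Hello!"}], token="YOUR_TOKEN")
# print(reply["choices"][0]["message"]["content"])
```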
- Python 3.10+
- 4 GB RAM minimum (8 GB recommended)
- No GPU required (but CUDA/ROCm/Metal supported)
MIT
- Website: cellule.ai
- Pool status: cellule.ai/v1/status
- Proxy guide: cellule.ai/docs/proxy-mode.html
- Docker pool: cellule.ai/docs/pool-docker.html
- Docker Hub: celluleai/pool