AI belongs to those who run it.
Cellule.ai is a community-powered distributed LLM inference network. Anyone can contribute computing power (CPU, GPU) to run AI models. No subscription, no payment — the network belongs to its contributors.
```bash
pip install iamine-ai -i https://cellule.ai/pypi --extra-index-url https://pypi.org/simple
python -m iamine worker --auto
```

Your machine auto-detects hardware, discovers the best pool, downloads the right model, and starts contributing.
```
Workers (your PC)             Pool (cellule.ai)             Users / Agents
+--------------+              +-------------------+          +-------------+
| Auto worker  |<------------>| Smart Router      |<-------->| API / MCP   |
| or Proxy     |  WebSocket   | - load balancing  |   HTTP   | OpenCode    |
| + GGUF model |              | - gap detection   |          | ClawCode    |
+--------------+              | - auto-migration  |          | Cursor      |
                              +-------------------+          +-------------+
                                  ^           ^
                           +------+           +------+
                           |                         |
                  +--------+------+         +--------+------+
                  |   Federated   |         |   Federated   |
                  | Pool (Docker) |         | Pool (Docker) |
                  +---------------+         +---------------+
```
- You share your PC's power — CPU or GPU runs AI models (GGUF format)
- Intelligent placement — the network detects where you're most useful
- Pools federate — multiple pools form a molecule (RAID-like resilience)
- Workers auto-migrate — if a pool goes down, workers move to the best available
- Agents remember — 4-tier memory persists across sessions and pools
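The auto-migration rule above can be sketched in a few lines of Python. This is illustrative only: the `healthy` and `load` fields are assumptions, not the actual worker protocol, which uses its own pool-scoring logic.

```python
def pick_pool(pools):
    """Failover sketch: pick the healthy pool with the lowest load.

    `pools` is a list of dicts with hypothetical `healthy` and `load`
    fields -- illustrative names, not Cellule.ai's real schema.
    """
    healthy = [p for p in pools if p.get("healthy")]
    if not healthy:
        raise RuntimeError("no healthy pool available")
    return min(healthy, key=lambda p: p["load"])
```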
Interactive SVG diagrams (EN/FR) covering the 4 layers — atom, pool, federation, economy:
- Live embedded: cellule.ai/#how
- Standalone: docs/architecture/en/index.html · docs/architecture/fr/index.html
- Source: docs/architecture/ — plain HTML + inline SVG, no JS framework
- Multi-platform — Linux, macOS, Windows (CPU, NVIDIA CUDA, AMD ROCm, Apple Metal)
- Two modes — Auto (plug & play) or Proxy (bring your own LLMs)
- Sub-agents — auto-review, security audit, test generation, documentation (parallel pipeline)
- Ed25519-signed protocol — every cross-pool message is cryptographically signed, nonce-protected, and trust-gated
- No master pool — each pool is sovereign. Cross-pool admin actions require explicit manual approval from the target pool's admin
- Read-by-default / write-opt-in — Phase 2.1 split: `query_events` (read-only) enabled by default, `circuit_reset` (write) OFF by default to protect worker economic state
- Auto-migration — workers failover to the best pool in ~35 seconds
- RAID-like resilience — lose a pool, lose no data
- Instant kill switch — `touch /etc/iamine/fed_disable` stops all federation traffic immediately
→ Full explanation: cellule.ai/docs/federation.html
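A minimal sketch of the signed, nonce-protected message check, assuming a shared key and an in-memory nonce store. HMAC-SHA256 stands in for Ed25519 here so the example stays standard-library only; the real protocol signs with Ed25519 keypairs and persists nonces, and all names below are illustrative.

```python
import hashlib
import hmac
import secrets

SEEN_NONCES = set()  # in production this would be persistent and time-bounded

def sign_message(key: bytes, body: bytes) -> dict:
    """Attach a fresh nonce and a signature over (nonce + body).

    HMAC-SHA256 is a stand-in for Ed25519 (no Ed25519 in the stdlib).
    """
    nonce = secrets.token_hex(16)
    sig = hmac.new(key, nonce.encode() + body, hashlib.sha256).hexdigest()
    return {"nonce": nonce, "body": body, "sig": sig}

def verify_message(key: bytes, msg: dict) -> bool:
    """Reject bad signatures, then reject replayed nonces."""
    expected = hmac.new(key, msg["nonce"].encode() + msg["body"],
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, msg["sig"]):
        return False
    if msg["nonce"] in SEEN_NONCES:
        return False  # replay: this nonce was already accepted once
    SEEN_NONCES.add(msg["nonce"])
    return True
```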
As a pool operator, two toggles in /admin → Federation tab control what peer pools can ask yours to do:
| Type | Example | Default | Why |
|---|---|---|---|
| READ | `query_events` (read filtered molecule event log) | ON | No economic impact. Helps the community see network health. |
| WRITE | `circuit_reset` (clear blacklist + reset worker scores) | OFF | Can indirectly affect wallets / slashing. Opt-in only. |
Every request is:
- Ed25519-verified (signature + nonce anti-replay)
- Trust-gated (peer must be bonded at trust≥3)
- Rate-limited (10 pending per peer, 6h cooldown between approved writes)
- Manually approved by the target pool admin — no auto-execution, ever
- Logged append-only on both sides
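The per-peer limits above (10 pending requests, 6-hour cooldown between approved writes) could look roughly like this. `PeerWriteGate` and its fields are illustrative names, not Cellule.ai's actual code, and manual admin approval is assumed to happen before `approve_write` is called.

```python
import time

class PeerWriteGate:
    """Sketch of the README's rate-limit rules for federated peers."""

    MAX_PENDING = 10          # at most 10 pending requests per peer
    COOLDOWN_S = 6 * 3600     # 6h between approved writes

    def __init__(self):
        self.pending = {}     # peer_id -> count of pending requests
        self.last_write = {}  # peer_id -> timestamp of last approved write

    def submit(self, peer_id, now=None):
        """Queue a request from a peer; refuse if its queue is full."""
        now = time.time() if now is None else now
        if self.pending.get(peer_id, 0) >= self.MAX_PENDING:
            return False
        self.pending[peer_id] = self.pending.get(peer_id, 0) + 1
        return True

    def approve_write(self, peer_id, now=None):
        """Admin-approved write; refuse if the cooldown has not elapsed."""
        now = time.time() if now is None else now
        last = self.last_write.get(peer_id)
        if last is not None and now - last < self.COOLDOWN_S:
            return False
        if self.pending.get(peer_id, 0) > 0:
            self.pending[peer_id] -= 1
        self.last_write[peer_id] = now
        return True
```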
Every design choice is made with the future on-chain economy in mind: we protect wallets today so the system holds when real value flows tomorrow. Validated by the project's economic guardian (token-guardian) against 13 load-bearing invariants.
- 4-tier memory system — observations, episodes, semantic facts, procedural patterns
- Hybrid retrieval — vector similarity + relationship graph + procedures
- Ebbinghaus decay — stale memories fade, frequently accessed ones strengthen
- MCP server — any MCP-compatible agent (Claude Code, OpenCode, Cursor) can read/write collective pool memory
- Zero-knowledge — all content encrypted with user token (PBKDF2 + Fernet), pools cannot read your data
- Federation sync — semantic facts and procedures replicate across bonded pools
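The Ebbinghaus-style decay can be sketched as an exponential retention curve boosted by repeated access. The half-life and the boost formula below are assumptions for illustration, not the network's actual tuning.

```python
import math
import time

def memory_strength(last_access_ts, access_count, half_life_days=7.0, now=None):
    """Ebbinghaus-style retention sketch.

    Strength halves every `half_life_days` since the last access, and
    each access strengthens it with diminishing returns. All parameter
    names here are illustrative, not Cellule.ai's real API.
    """
    now = time.time() if now is None else now
    age_days = max(0.0, (now - last_access_ts) / 86400.0)
    retention = math.exp(-math.log(2) * age_days / half_life_days)
    return retention * (1.0 + math.log1p(access_count))
```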
```bash
# Connect any MCP-compatible agent to pool memory
iamine mcp-server --pool-url https://cellule.ai --token acc_xxxxx
```

- OpenAI-compatible — drop-in replacement for `/v1/chat/completions`
- Persistent memory — 3-level compaction + encrypted RAG (pgvector)
- SSE streaming — real-time token streaming with sub-agent review metadata
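As a rough sketch, the OpenAI-compatible endpoint can be called with only Python's standard library. `build_chat_request` and `chat` are illustrative helpers (not part of the project), and the payload shape mirrors the curl example in this README.

```python
import json
import urllib.request

POOL_URL = "https://cellule.ai"  # public pool endpoint from this README

def build_chat_request(messages, max_tokens=200):
    """Build an OpenAI-style /v1/chat/completions payload."""
    return {"messages": messages, "max_tokens": max_tokens}

def chat(messages, max_tokens=200):
    """POST a chat request to the pool and return the parsed response."""
    req = urllib.request.Request(
        f"{POOL_URL}/v1/chat/completions",
        data=json.dumps(build_chat_request(messages, max_tokens)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example (requires network access to the pool):
# reply = chat([{"role": "user", "content": "Hello!"}])
```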
```bash
docker compose up -d
```

Docker image: `celluleai/pool` — see Docker Hub
```bash
curl -X POST https://cellule.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"Hello!"}],"max_tokens":200}'
```

```bash
# Check memory status
curl "https://cellule.ai/v1/memory/status?api_token=acc_xxxxx"

# Search across all memory tiers
curl "https://cellule.ai/v1/memory/search?q=routing&api_token=acc_xxxxx"

# Record an observation
curl -X POST "https://cellule.ai/v1/memory/observe?api_token=acc_xxxxx" \
  -H "Content-Type: application/json" \
  -d '{"content":"Found a bug in auth flow","source_type":"tool_call"}'
```

Any MCP-compatible coding agent can connect to the pool's collective memory:
```json
{
  "mcpServers": {
    "cellule-memory": {
      "command": "python",
      "args": ["-m", "iamine", "mcp-server", "--pool-url", "https://cellule.ai", "--token", "YOUR_TOKEN"]
    }
  }
}
```

8 MCP tools: `memory_status`, `memory_search`, `memory_observe`, `memory_episodes`, `memory_procedures`, `memory_graph`, `memory_consolidate`, `memory_forget_all`
- Python 3.10+
- 4 GB RAM minimum (8 GB recommended)
- No GPU required (but CUDA/ROCm/Metal supported)
The project is in alpha. The network is live with federated pools and active workers.
- $IAMINE token — participation token for contributors (ALPHA, not yet deployed). Not a financial instrument.
- Website: cellule.ai
- Try the AI: cellule.ai (6 messages, no account needed)
- Pool status: cellule.ai/v1/status
- Docker Hub: celluleai/pool
MIT