BitLlama is a cutting-edge decentralized protocol that enables distributed AI inference through a network of miners running Large Language Models (LLMs) on consumer hardware, gaming consoles, and mobile devices. Users earn BLMA tokens by contributing compute power and bandwidth to the network.
- Decentralized AI Infrastructure: Run and access LLMs on a distributed network
- Multi-Platform Mining: Support for browsers, desktops, mobile devices, and gaming consoles
- Token Incentives: BLMA token rewards for compute providers
- Developer-Friendly SDKs: React and Python SDKs for easy integration
- WebLLM Integration: Browser-based light mining capabilities
- Ollama Support: Heavy mining with desktop GPU acceleration
- Oracle Network: Decentralized validation and consensus
graph TB
subgraph "Client Layer"
WA[Web App]
MA[Mobile App]
EXT[Browser Extension]
TOWER[Tower Desktop]
end
subgraph "SDK Layer"
RS[React SDK]
PS[Python SDK]
TS[TypeScript SDK]
end
subgraph "Protocol Layer"
SC[Smart Contracts]
COORD[Coordinator Service]
OR[Oracle Network]
VM[Validation Module]
end
subgraph "Mining Layer"
LM[Light Miners - WebLLM]
HM[Heavy Miners - Ollama]
VM2[Validator Miners]
end
subgraph "Storage Layer"
IPFS[IPFS Network]
BASE[Base Network]
end
WA --> RS
MA --> RS
EXT --> TS
TOWER --> TS
RS --> SC
PS --> COORD
TS --> SC
SC --> OR
COORD --> VM
OR --> LM
OR --> HM
VM --> VM2
LM --> IPFS
HM --> BASE
VM2 --> BASE
style WA fill:#e1f5fe
style MA fill:#e1f5fe
style EXT fill:#fff3e0
style TOWER fill:#fff3e0
style RS fill:#e8f5e9
style PS fill:#e8f5e9
style TS fill:#e8f5e9
style SC fill:#f3e5f5
style COORD fill:#f3e5f5
style OR fill:#f3e5f5
style LM fill:#fce4ec
style HM fill:#fce4ec
style VM2 fill:#fce4ec
sequenceDiagram
participant User
participant Extension/Tower
participant Coordinator
participant SmartContract
participant Miner
participant Oracle
User->>Extension/Tower: Request Inference
Extension/Tower->>Coordinator: Submit Job
Coordinator->>SmartContract: Register Job
SmartContract->>Miner: Assign Task
Miner->>Miner: Process LLM
Miner->>Oracle: Submit Result
Oracle->>Oracle: Validate Consensus
Oracle->>SmartContract: Confirm Result
SmartContract->>Miner: Release Rewards
SmartContract->>Coordinator: Update Status
Coordinator->>Extension/Tower: Return Result
Extension/Tower->>User: Display Response
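The same flow can be pictured from the miner's side as a poll-process-submit loop. The sketch below is purely illustrative: the coordinator endpoints (`/jobs/next`, `/results`) and payload fields are hypothetical placeholders, not part of the published coordinator API.

```python
"""Illustrative miner-side loop for the inference flow above.
Endpoint paths and payload fields are hypothetical."""
import hashlib
import time

import requests

COORDINATOR_URL = "https://api.bitllama.ai"  # same base URL as the Python SDK example below
MINER_ID = "0xYourMinerAddress"


def run_llm(prompt: str) -> str:
    """Placeholder for the actual WebLLM/Ollama inference call."""
    return f"(generated answer for: {prompt})"


def mine_once() -> None:
    # 1. Ask the coordinator for an assigned task (hypothetical endpoint).
    job = requests.get(f"{COORDINATOR_URL}/jobs/next", params={"miner": MINER_ID}).json()
    if not job:
        return

    # 2. Process the prompt locally with the configured model.
    output = run_llm(job["prompt"])

    # 3. Submit the result plus a hash the oracle network can reach consensus on.
    result_hash = hashlib.sha256(output.encode()).hexdigest()
    requests.post(
        f"{COORDINATOR_URL}/results",
        json={"job_id": job["id"], "output": output, "hash": result_hash, "miner": MINER_ID},
    )


if __name__ == "__main__":
    while True:
        mine_once()
        time.sleep(5)  # simple polling interval
```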
graph LR
subgraph "BLMA Token Distribution"
TS[Total Supply: 1B]
TS --> MR[Mining: 40% - 400M]
TS --> DEV[Development: 20% - 200M]
TS --> ECO[Ecosystem: 15% - 150M]
TS --> TEAM[Team: 15% - 150M]
TS --> RES[Reserve: 10% - 100M]
end
subgraph "Mining Rewards"
MR --> CM[Compute Mining]
MR --> SM[Storage Mining]
MR --> BM[Bandwidth Mining]
MR --> VM3[Validation Mining]
end
subgraph "Token Utility"
BLMA[BLMA Token]
BLMA --> INF[Inference Credits]
BLMA --> STAKE[Staking Rewards]
BLMA --> GOV[Governance]
BLMA --> FEE[Fee Discounts]
end
style TS fill:#fff9c4
style MR fill:#e8f5e9
style BLMA fill:#e3f2fd
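As a quick sanity check on the distribution above, the five allocation buckets sum to the full 1B supply:

```python
# Allocation figures from the distribution diagram above (in millions of BLMA).
TOTAL_SUPPLY = 1_000
allocations = {"Mining": 400, "Development": 200, "Ecosystem": 150, "Team": 150, "Reserve": 100}

assert sum(allocations.values()) == TOTAL_SUPPLY  # 40% + 20% + 15% + 15% + 10% = 100%
for name, amount in allocations.items():
    print(f"{name}: {amount}M BLMA ({amount / TOTAL_SUPPLY:.0%})")
```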
| Package | Path | Description | Technologies | Links |
|---|---|---|---|---|
| `@bitllama/contracts` | `/packages/contracts` | Smart contracts for rewards, staking, markets | Solidity, Foundry, Base Network | - |
| `@bitllama/coordinator` | `/packages/coordinator` | Job matching and orchestration service | Python, FastAPI, Redis | - |
| `bitllama-react-sdk` | `/packages/react-sdk` | React hooks and components | React 18, TypeScript, Vite | npm |
| `bitllama` | `/packages/python-sdk` | Python SDK for backend integration | Python 3.9+, AsyncIO, Web3.py | PyPI |
| `bitllama-core` | `/packages/core` | Shared utilities and types | TypeScript, Ethers.js | npm |
| Application | Path | Description | Stack |
|---|---|---|---|
| `extension` | `/apps/extension` | Chrome MV3 extension for light mining | WebLLM, TypeScript, React |
| `tower` | `/apps/tower` | Desktop app for heavy mining | Electron, Ollama, Node.js |
| `web` | `/apps/web` | Main web application | Next.js 14, TailwindCSS |
| `api` | `/apps/api` | REST API gateway | Express, GraphQL, Prisma |
| `admin` | `/apps/admin` | Admin dashboard | React, Material-UI |
# Required
- Node.js >= 18.0.0
- Python >= 3.9
- Git
- Chrome Browser (for extension)
# Optional
- Docker & Docker Compose
- Ollama (for heavy mining)
- CUDA toolkit (for GPU acceleration)

# Clone the monorepo
git clone https://github.com/yourusername/bitllama.git
cd bitllama
# Install all dependencies
npm install
# Build all packages
npm run build
# Set up environment variables
cp .env.example .env
# Edit .env with your configuration

# Start all services in development mode
npm run dev
# Or start specific services
npm run dev:contracts # Deploy local contracts
npm run dev:coordinator # Start coordinator service
npm run dev:extension # Build and watch extension
npm run dev:tower # Start tower desktop app
npm run dev:web # Start web application

import { BitLlamaProvider, useBitLlama, useMining } from 'bitllama-react-sdk';
function App() {
return (
<BitLlamaProvider
config={{
network: 'base-mainnet',
coordinatorUrl: process.env.NEXT_PUBLIC_COORDINATOR_URL,
contractAddress: process.env.NEXT_PUBLIC_CONTRACT_ADDRESS
}}
>
<MiningDashboard />
</BitLlamaProvider>
);
}
function MiningDashboard() {
const { account, balance, connect } = useBitLlama();
const { startMining, stopMining, miningStatus, earnings } = useMining();
return (
<div>
<h2>BitLlama Mining Dashboard</h2>
{!account ? (
<button onClick={connect}>Connect Wallet</button>
) : (
<>
<p>Account: {account}</p>
<p>BLMA Balance: {balance}</p>
<p>Mining Status: {miningStatus}</p>
<p>Total Earnings: {earnings} BLMA</p>
{miningStatus === 'idle' ? (
<button onClick={startMining}>Start Mining</button>
) : (
<button onClick={stopMining}>Stop Mining</button>
)}
</>
)}
</div>
);
}

from bitllama import BitLlama, MiningClient
import asyncio
# Initialize client
client = BitLlama(
coordinator_url="https://api.bitllama.ai",
private_key="your_private_key",
network="base-mainnet"
)
# Mining client for heavy mining
miner = MiningClient(
client=client,
model_provider="ollama",
model_name="llama3:70b"
)
async def main():
# Start mining
await miner.start()
# Check mining status
status = await miner.get_status()
print(f"Mining Status: {status}")
# Process inference request
response = await client.inference.create(
model="llama3:70b",
prompt="Explain quantum computing",
max_tokens=500
)
print(response.text)
# Check earnings
earnings = await client.rewards.get_earnings()
print(f"Total Earnings: {earnings.total_blma} BLMA")
# Claim rewards
tx_hash = await client.rewards.claim()
print(f"Rewards claimed: {tx_hash}")
# Run
asyncio.run(main())

graph TD
subgraph "Light Mining - Browser Extension"
WEB[WebLLM Engine]
WEB --> M1[Llama 2 7B]
WEB --> M2[Phi-2]
WEB --> M3[Gemma 2B]
end
subgraph "Heavy Mining - Tower Desktop"
OLL[Ollama Engine]
OLL --> L1[Llama 3 70B]
OLL --> L2[Mixtral 8x7B]
OLL --> L3[CodeLlama 34B]
end
subgraph "Requirements"
LIGHT[Light: 8GB RAM, Chrome]
HEAVY[Heavy: 32GB+ RAM, GPU]
end
style WEB fill:#e3f2fd
style OLL fill:#fff3e0
style LIGHT fill:#e8f5e9
style HEAVY fill:#fce4ec
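Before starting a heavy miner, it is worth confirming that Ollama is running and that one of the models in the diagram above has been pulled. The sketch below uses Ollama's standard local REST API (`GET /api/tags` on port 11434); everything else in it is just an illustrative helper.

```python
"""Check a local Ollama install for the heavy-mining models listed above."""
import requests

HEAVY_MODELS = {"llama3:70b", "mixtral:8x7b", "codellama:34b"}


def installed_models(base_url: str = "http://localhost:11434") -> set[str]:
    # Ollama's /api/tags endpoint lists locally available models.
    resp = requests.get(f"{base_url}/api/tags", timeout=5)
    resp.raise_for_status()
    return {m["name"] for m in resp.json().get("models", [])}


if __name__ == "__main__":
    try:
        available = installed_models()
    except requests.ConnectionError:
        raise SystemExit("Ollama is not running - start it before heavy mining.")

    ready = HEAVY_MODELS & available
    missing = HEAVY_MODELS - available
    print(f"Ready for heavy mining with: {sorted(ready) or 'none'}")
    if missing:
        print(f"Pull a missing model with e.g. `ollama pull {sorted(missing)[0]}`")
```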
# config/networks.yaml
networks:
base-mainnet:
rpc: https://mainnet.base.org
chainId: 8453
contracts:
token: "0x..."
mining: "0x..."
rewards: "0x..."
base-sepolia:
rpc: https://sepolia.base.org
chainId: 84532
contracts:
token: "0x..."
mining: "0x..."
rewards: "0x..."
mining:
light:
minMemory: 8GB
models:
- llama2-7b-webllm
- phi-2-webllm
rewardMultiplier: 1.0
heavy:
minMemory: 32GB
requireGPU: true
models:
- llama3:70b
- mixtral:8x7b
rewardMultiplier: 5.0

graph LR
subgraph "Light Mining Performance"
L1[Llama 2 7B: 10 tok/s]
L2[Phi-2: 25 tok/s]
L3[Gemma 2B: 30 tok/s]
end
subgraph "Heavy Mining Performance"
H1[Llama 3 70B: 50 tok/s]
H2[Mixtral 8x7B: 80 tok/s]
H3[CodeLlama 34B: 60 tok/s]
end
subgraph "Earnings (BLMA/day)"
E1[Light: 10-50]
E2[Heavy: 100-500]
E3[Validator: 50-200]
end
style L1 fill:#e3f2fd
style H1 fill:#fff3e0
style E1 fill:#e8f5e9
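A service consuming the `config/networks.yaml` file shown earlier might load and select a network like this. This is a minimal sketch assuming PyYAML is installed; the helper name and defaults are illustrative, not part of the codebase.

```python
"""Minimal sketch for loading config/networks.yaml shown above (assumes PyYAML)."""
import yaml


def load_network(path: str = "config/networks.yaml", network: str = "base-sepolia") -> dict:
    with open(path) as fh:
        config = yaml.safe_load(fh)
    try:
        return config["networks"][network]
    except KeyError as exc:
        raise ValueError(f"Unknown network '{network}' in {path}") from exc


if __name__ == "__main__":
    net = load_network(network="base-sepolia")
    print(f"RPC: {net['rpc']}, chainId: {net['chainId']}")
    print(f"Token contract: {net['contracts']['token']}")
```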
graph TB
subgraph "Security Layers"
AL[Application Security]
NL[Network Security]
CL[Contract Security]
ML[Mining Security]
end
AL --> A1[Input Validation]
AL --> A2[Rate Limiting]
AL --> A3[API Authentication]
NL --> N1[TLS 1.3]
NL --> N2[DDoS Protection]
NL --> N3[IP Whitelisting]
CL --> C1[Audited Contracts]
CL --> C2[Multi-sig Admin]
CL --> C3[Pausable]
ML --> M1[Proof of Work]
ML --> M2[Stake Slashing]
ML --> M3[Result Validation]
style AL fill:#ffebee
style NL fill:#fce4ec
style CL fill:#f8bbd0
style ML fill:#f48fb1
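The "Result Validation" layer can be pictured as a majority vote over the result hashes submitted for a job: miners whose hashes diverge from the quorum can then be slashed. The snippet below is only an illustrative sketch of that idea, not the oracle network's actual consensus rule.

```python
"""Illustrative majority-vote check over miner result hashes (not the protocol's actual rule)."""
from collections import Counter
from typing import Optional


def reach_consensus(submissions: dict[str, str], quorum: float = 2 / 3) -> Optional[str]:
    """Return the winning result hash if a quorum of miners agree, else None."""
    if not submissions:
        return None
    counts = Counter(submissions.values())
    best_hash, votes = counts.most_common(1)[0]
    return best_hash if votes / len(submissions) >= quorum else None


# Example: three miners agree, one diverges -> consensus reached on 0xabc.
example = {"miner_a": "0xabc", "miner_b": "0xabc", "miner_c": "0xabc", "miner_d": "0xdef"}
print(reach_consensus(example))  # 0xabc
```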
We welcome contributions! Please see our Contributing Guide for details.
graph LR
F[Fork] --> B[Branch]
B --> D[Develop]
D --> T[Test]
T --> C[Commit]
C --> PR[Pull Request]
PR --> R[Review]
R --> M[Merge]
style F fill:#e3f2fd
style B fill:#fff3e0
style D fill:#f3e5f5
style T fill:#e8f5e9
style C fill:#fce4ec
style PR fill:#e0f2f1
style R fill:#fff9c4
style M fill:#c8e6c9
- Core smart contracts
- Basic coordinator service
- Chrome extension MVP
- Tower desktop app
- Mobile mining apps
- Enhanced SDK features
- Multi-chain support
- Advanced oracle network
- Console mining (Xbox, PlayStation)
- Enterprise API
- Decentralized governance
- Cross-chain bridges
- Custom model training
- Federated learning
- Privacy-preserving inference
- AI marketplace
This project is licensed under the MIT License - see the LICENSE file for details.
Special thanks to:
- WebLLM team for browser-based inference
- Ollama team for local LLM infrastructure
- Base Network for scalable blockchain
- Our amazing community of miners and developers
Join the future of distributed AI computing with BitLlama! 🦙⚡
Built with ❤️ by the BitLlama Team
