# LLMule Client

A peer-to-peer client for sharing Large Language Models (LLMs) across the LLMule network. Run your local LLMs and share them with the community.
🌐 Official Website: https://llmule.xyz

💬 Join our Community: Discord Channel

## Features
- Automatic detection of local LLM models (Ollama & LM Studio)
- Real-time connection to the LLMule network
- Model tier categorization (Tiny, Small, Medium)
- Health monitoring and automatic reconnection (sketched just after this list)
- Secure API key authentication
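Health monitoring and reconnection can be pictured as a retry loop with exponential backoff around the network session. Below is a minimal sketch, assuming the `ws` WebSocket package and the `SERVER_URL`/`MAX_RETRIES` settings from the configuration section further down; it is illustrative only, not the client's actual internals.

```javascript
// Minimal reconnect loop. Assumes the 'ws' package (npm install ws);
// SERVER_URL and MAX_RETRIES mirror the .env settings described below.
const WebSocket = require('ws');

const SERVER_URL = process.env.SERVER_URL || 'wss://api.llmule.xyz/llm-network';
const MAX_RETRIES = Number(process.env.MAX_RETRIES || 5);

function connect(attempt = 0) {
  const ws = new WebSocket(SERVER_URL);

  ws.on('open', () => {
    console.log('Connected to the LLMule network');
    attempt = 0; // reset the backoff once a session is established
  });

  ws.on('close', () => {
    if (attempt >= MAX_RETRIES) {
      console.error(`Giving up after ${MAX_RETRIES} retries`);
      return;
    }
    const delayMs = Math.min(30_000, 1000 * 2 ** attempt); // capped exponential backoff
    console.log(`Reconnecting in ${delayMs} ms...`);
    setTimeout(() => connect(attempt + 1), delayMs);
  });

  ws.on('error', (err) => console.error('Socket error:', err.message));
}

connect();
```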
## Installation

- Clone the repository:

```bash
git clone https://github.com/cm64-studio/LLMule-client.git
cd LLMule-client
```

- Install dependencies:

```bash
npm install
```

- Create a configuration file:

```bash
cp .env.example .env
```

Edit the `.env` file with your settings. Don't worry about the API key: you'll receive one automatically during the first-run registration process at llmule.xyz.
```env
# Server Configuration
API_URL=https://api.llmule.xyz
SERVER_URL=wss://api.llmule.xyz/llm-network

# LLM Provider URLs (defaults)
OLLAMA_URL=http://localhost:11434
LMSTUDIO_URL=http://localhost:1234/v1

# Advanced
LOG_LEVEL=info
MAX_RETRIES=5
```
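For reference, here is a minimal sketch of how these settings can be loaded in a Node process using the standard `dotenv` package. The variable names mirror the block above; the loader shape itself is illustrative, not the client's actual code.

```javascript
// config.js - illustrative loader for the .env settings above
require('dotenv').config(); // copies .env entries into process.env

const config = {
  apiUrl: process.env.API_URL || 'https://api.llmule.xyz',
  serverUrl: process.env.SERVER_URL || 'wss://api.llmule.xyz/llm-network',
  ollamaUrl: process.env.OLLAMA_URL || 'http://localhost:11434',
  lmStudioUrl: process.env.LMSTUDIO_URL || 'http://localhost:1234/v1',
  logLevel: process.env.LOG_LEVEL || 'info',
  maxRetries: Number(process.env.MAX_RETRIES || 5),
};

module.exports = config;
```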
## Model Tiers

| Tier   | Example Model   | Minimum RAM |
|--------|-----------------|-------------|
| Tiny   | TinyLlama       | 4GB         |
| Small  | Mistral 7B      | 8GB         |
| Medium | Microsoft Phi-4 | 16GB        |
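Tier placement follows model size. As an illustration, a classifier might bucket models by parameter count; the thresholds below are assumptions chosen to match the table above, not the network's official cutoffs.

```javascript
// Illustrative tier classifier. The parameter-count thresholds are
// assumptions for this sketch, not official LLMule values.
function classifyTier(paramsBillion) {
  if (paramsBillion <= 3) return 'tiny';   // e.g., TinyLlama (1.1B)
  if (paramsBillion <= 8) return 'small';  // e.g., Mistral 7B
  return 'medium';                         // e.g., Phi-4 (14B)
}

console.log(classifyTier(1.1)); // "tiny"
console.log(classifyTier(7));   // "small"
console.log(classifyTier(14));  // "medium"
```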
## Supported Providers

LLMule supports the following LLM providers:
- Ollama: Run models like Llama, Mistral, and more locally
- LM Studio: Run various open-source models with a nice UI
- EXO: Run distributed models across multiple devices
Set up your providers in your .env file:
```env
OLLAMA_URL=http://localhost:11434
LMSTUDIO_URL=http://localhost:1234/v1
EXO_URL=http://localhost:52415
```
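With these URLs set, model detection amounts to asking each backend what it has loaded. Here is a minimal sketch using Node's built-in `fetch` (Node 18+) against Ollama's `/api/tags` and LM Studio's OpenAI-compatible `/v1/models` listing endpoints; the actual client logic may differ, and EXO is omitted because its endpoint shape is not shown in this README.

```javascript
// Probe each provider's model-listing endpoint and collect what responds.
async function detectModels() {
  const found = [];

  try {
    // Ollama lists installed models at /api/tags
    const res = await fetch(`${process.env.OLLAMA_URL || 'http://localhost:11434'}/api/tags`);
    const { models } = await res.json();
    found.push(...models.map((m) => ({ provider: 'ollama', name: m.name })));
  } catch {
    console.log('Ollama not reachable, skipping');
  }

  try {
    // LM Studio exposes an OpenAI-style model list at /v1/models
    const res = await fetch(`${process.env.LMSTUDIO_URL || 'http://localhost:1234/v1'}/models`);
    const { data } = await res.json();
    found.push(...data.map((m) => ({ provider: 'lmstudio', name: m.id })));
  } catch {
    console.log('LM Studio not reachable, skipping');
  }

  return found;
}

detectModels().then((models) => console.table(models));
```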
## Usage

- Start your LLM backend (Ollama or LM Studio)
- Run the client:

  ```bash
  npm start
  ```

- First-time setup:
  - On first run, you'll be guided through the registration process at llmule.xyz
  - Your API key will be configured automatically after registration (a hypothetical sketch of this step follows the list)
  - Select the models you want to share
  - The client will then connect to the LLMule network automatically
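The key handoff can be pictured as the client writing the issued key back into `.env`. This is a purely hypothetical sketch: the `API_KEY` variable name and the write-back approach are assumptions, since the README does not show how the real client stores the key.

```javascript
// Hypothetical helper: persist a newly issued API key back into .env.
// API_KEY is an assumed variable name; the real client may store it differently.
const fs = require('fs');

function saveApiKey(key, envPath = '.env') {
  const env = fs.existsSync(envPath) ? fs.readFileSync(envPath, 'utf8') : '';
  const updated = /^API_KEY=.*$/m.test(env)
    ? env.replace(/^API_KEY=.*$/m, `API_KEY=${key}`)   // update an existing entry
    : `${env.trimEnd()}\nAPI_KEY=${key}\n`;            // or append a new one
  fs.writeFileSync(envPath, updated);
}

module.exports = saveApiKey;
```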
## Running as a Service (Linux)

- Create the service file:

```bash
sudo nano /etc/systemd/system/llmule-client.service
```

- Add the configuration:

```ini
[Unit]
Description=LLMule Client
After=network.target ollama.service
[Service]
Type=simple
User=ubuntu
WorkingDirectory=/path/to/llmule-client
ExecStart=/usr/bin/npm start
Restart=always
Environment=NODE_ENV=production
[Install]
WantedBy=multi-user.target
```

- Enable and start the service:

```bash
sudo systemctl enable llmule-client
sudo systemctl start llmule-client
```

## Monitoring

Check client status:

```bash
npm run status
```

View logs:

```bash
# Live logs
npm run logs
# Error logs
npm run logs:error
```

## Troubleshooting

Common issues and solutions:
- Connection Issues

```bash
# Check network connectivity
curl -v $SERVER_URL
# Verify Ollama is running
curl http://localhost:11434/api/tags
```

- Model Detection Issues

```bash
# List Ollama models
ollama list
# Check LM Studio API
curl http://localhost:1234/v1/models
```

## Contributing

- Fork the repository
- Create a feature branch
- Commit your changes
- Push to the branch
- Open a Pull Request
## Security

- API keys are stored securely
- All network traffic is encrypted
- Models are sandboxed
- Resource limits are enforced
## Support

- GitHub Issues: Report Bug
- Email: andres@cm64.studio
## License

MIT License - see LICENSE for details