Expose LLM-powered assistants through messaging platforms such as Telegram and Discord
🤖 ← 🐈 ← 🌐 ← 📱 ← 🧙‍♂️
llm-expose gives you a channel-first CLI workflow: configure providers, attach channels, control pairings, and optionally integrate MCP servers for tool-aware completions.
- Multi-channel support (Telegram and Discord).
- LiteLLM provider support for broad model compatibility.
- Local OpenAI-compatible endpoint support.
- MCP server integration for tool-aware responses.
- Pairing-based access control per channel.
- CLI-first setup and operations.
Linux & macOS:
```bash
curl -fsSL https://raw.githubusercontent.com/edo0xff/llm-expose/main/scripts/install.sh | bash
```

Windows (PowerShell as Administrator):

```powershell
powershell -ExecutionPolicy Bypass -Command "iex (New-Object Net.WebClient).DownloadString('https://raw.githubusercontent.com/edo0xff/llm-expose/main/scripts/install-windows.ps1')"
```

From PyPI:

```bash
pip install llm-expose
```

From source:

```bash
git clone https://github.com/edo0xff/llm-expose.git
cd llm-expose
pip install -e .
```

With development dependencies:

```bash
pip install -e '.[dev]'
```

See scripts/README.md for detailed installation instructions and troubleshooting.
llm-expose is interactive by default, which is usually the fastest path for humans.
Use `--no-input` for headless automation, and add `-y` whenever a command may prompt for confirmation.
- Configure a model:

  ```bash
  llm-expose add model
  ```

- Configure a channel (interactive):

  ```bash
  llm-expose add channel
  ```

- Pair an allowed user/chat ID:

  ```bash
  llm-expose add pair 123456789 --channel my-telegram
  ```

- Start the channel runtime:

  ```bash
  llm-expose start
  ```

Headless equivalent (CI/scripts):

```bash
llm-expose add model --name gpt4o-mini --provider openai --model-id gpt-4o-mini -y --no-input
llm-expose add channel --name my-telegram --client-type telegram --bot-token "123456789:AAExampleTelegramToken" --model-name gpt4o-mini -y --no-input
llm-expose add pair 123456789 --channel my-telegram --no-input
llm-expose start --channel my-telegram -y --no-input
```

If you are unsure about available options, run:

```bash
llm-expose --help
llm-expose add --help
llm-expose start --help
```

Incoming chat/channel IDs must be explicitly paired before the service replies.
When an unpaired ID sends a message, the service returns:
```
This instance is not paired. Run llm-expose add pair <channel-id>
```
Pairings are stored per channel configuration.
Common pairing commands:
```bash
llm-expose add pair <id> --channel <channel-name>
llm-expose list pairs
llm-expose list pairs --channel <channel-name>
llm-expose delete pair <id> --channel <channel-name>
```
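To make the gate concrete, here is a minimal, hypothetical sketch of pairing-based access control: each channel keeps an allowlist of IDs, and unpaired senders get the pairing notice instead of a model reply. The `Channel` class and its methods are illustrative only, not llm-expose's actual internals.

```python
# Hypothetical sketch of per-channel pairing (not llm-expose's real internals).
NOT_PAIRED = "This instance is not paired. Run llm-expose add pair {chat_id}"

class Channel:
    def __init__(self, name: str):
        self.name = name
        self.pairs = set()  # allowed chat/user IDs for this channel

    def add_pair(self, chat_id) -> None:
        self.pairs.add(str(chat_id))

    def delete_pair(self, chat_id) -> None:
        self.pairs.discard(str(chat_id))

    def handle(self, chat_id, text: str) -> str:
        # Reply only for explicitly paired IDs; everyone else gets the notice.
        if str(chat_id) not in self.pairs:
            return NOT_PAIRED.format(chat_id=chat_id)
        return f"[{self.name}] model reply to {chat_id}: ..."  # LLM call would go here

channel = Channel("my-telegram")
channel.add_pair(123456789)
print(channel.handle(123456789, "hello"))  # paired -> normal reply path
print(channel.handle(555, "hello"))        # unpaired -> pairing notice
```

The key design point mirrored here is that the allowlist lives on the channel object, matching the per-channel storage described above.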
llm-expose currently persists all configuration (models, channels, and MCP settings) through CLI commands.
Recommended setup order:
- Add one or more models (`llm-expose add model ...`).
- Add one or more channels (`llm-expose add channel ...`).
- Add optional MCP servers (`llm-expose add mcp ...`).
- Pair allowed IDs (`llm-expose add pair ...`).
- Run the exposure service (`llm-expose start ...`).
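The order above can be scripted. The sketch below assembles the headless command sequence from the quick-start flags (the optional MCP step is omitted because its flags are not documented here) and only executes it when explicitly asked; model/channel names, the bot token, and the chat ID are the same placeholders used earlier.

```python
import shlex
import subprocess
import sys

# Headless setup plan in the recommended order. Flags come from the headless
# quick start; names, token, and chat ID are placeholder values.
PLAN = [
    "llm-expose add model --name gpt4o-mini --provider openai --model-id gpt-4o-mini -y --no-input",
    "llm-expose add channel --name my-telegram --client-type telegram"
    " --bot-token 123456789:AAExampleTelegramToken --model-name gpt4o-mini -y --no-input",
    "llm-expose add pair 123456789 --channel my-telegram --no-input",
    "llm-expose start --channel my-telegram -y --no-input",
]

def run(apply: bool = False):
    """Print the plan; execute each command only when apply=True."""
    executed = []
    for cmd in PLAN:
        print(cmd)                                    # always show the plan
        if apply:                                     # dry-run by default
            subprocess.run(shlex.split(cmd), check=True)
        executed.append(cmd)
    return executed

if __name__ == "__main__":
    run(apply="--apply" in sys.argv)
```

Running the script without `--apply` is a dry run that just prints the commands, which is a convenient way to review a provisioning plan before letting it touch a live configuration.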
Run quality checks:
```bash
ruff check .
black --check .
mypy llm_expose
pytest
```

- PyPI release automation.
- Hosted docs site with architecture and API references.
- More channel adapters and provider presets.
See CONTRIBUTING.md.
MIT. See LICENSE.
