Modern IRC bot with AI capabilities powered by LiteLLM.
- Multi-provider AI: OpenAI, Anthropic, Google Gemini, and more via LiteLLM
- Conversation context: Follow-up questions remember previous messages
- Vision support: Automatically detects image URLs in prompts
- Code generation: Smart HTTP link generation for long code
- Image generation: Text-to-image via Vertex AI Imagen
- Abuse protection: Uses Limnoria's built-in flood protection
- Modern Python: Python 3.12+ with full type hints
- Quality tools: Ruff for linting/formatting, ty for type checking
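The vision feature above hinges on spotting image URLs inside the prompt text. A minimal sketch of how such detection might work (the regex and helper name are assumptions, not the plugin's actual code):

```python
import re

# Hypothetical helper: match http(s) URLs ending in a common image extension.
IMAGE_URL_RE = re.compile(r"https?://\S+\.(?:png|jpe?g|gif|webp)\b", re.IGNORECASE)

def extract_image_urls(prompt: str) -> list[str]:
    """Return any image URLs found in the prompt text."""
    return IMAGE_URL_RE.findall(prompt)
```

When a prompt contains such a URL, it can be forwarded to a vision-capable model alongside the text.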
```shell
# Install uv (if not already installed)
curl -LsSf https://astral.sh/uv/install.sh | sh

make install
make run
```

Configure API keys via bot commands:

```
%config plugins.LLM.askApiKey YOUR_KEY
```
Build and run locally:

```shell
make docker-build
make docker-run
```

Or pull from GHCR:

```shell
docker pull ghcr.io/rdrake/vibebot-v8:latest
```

Install as a systemd user service:

```shell
make install-service
```

Then follow the printed instructions to copy your bot.conf and enable the service.

Install the auto-update timer to automatically pull new images from GHCR:

```shell
make install-timer
```

This checks for updates every 15 minutes and restarts the bot if a new version is found.
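The 15-minute check maps naturally onto a user-level systemd timer. A sketch of what the timer unit might look like (the file contents are an assumption; `make install-timer` writes the actual units):

```ini
# Illustrative only: ~/.config/systemd/user/vibebot-updater.timer
[Unit]
Description=Check GHCR for new vibebot images

[Timer]
OnBootSec=2min
OnUnitActiveSec=15min

[Install]
WantedBy=timers.target
```

The companion `vibebot-updater.service` would pull the image and restart the bot when a new version is found.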
```shell
# Check timer status
systemctl --user status vibebot-updater.timer

# View update logs
journalctl --user -u vibebot-updater.service -f

# Disable auto-updates
make uninstall-timer
```

When serving code/images via Nginx or Apache, set the public URL:
```
%config supybot.servers.http.publicUrl https://example.com
```

The bot will then generate URLs like `https://example.com/llm/filename.py`.
| Command | Description |
|---|---|
| `%ask <question>` | Ask AI a question (supports vision with image URLs; remembers context) |
| `%code <request>` | Generate code (remembers context for iterating on code) |
| `%draw <prompt>` | Generate an image (no context) |
| `%forget [channel]` | Clear your conversation context |
| Command | Description |
|---|---|
| `%llmkeys` | Check API key status (shows only the first 3 characters; sent privately) |
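Masking a key down to its first 3 characters can be sketched as follows (the helper name is hypothetical; the output format matches what `%llmkeys` reports):

```python
def mask_key(key: str) -> str:
    """Show only the first 3 characters; hide the rest with a count."""
    if len(key) <= 3:
        return "(too short)"
    return f"{key[:3]}...({len(key) - 3} chars hidden)"
```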
Configure models in `bot.conf`:

```
# Free tier (Gemini Flash)
supybot.plugins.LLM.askModel: gemini/gemini-1.5-flash
supybot.plugins.LLM.codeModel: gemini/gemini-1.5-flash

# Paid tier (Vertex Imagen)
supybot.plugins.LLM.drawModel: vertex_ai/imagen-4.0-generate-001
```

See the LiteLLM docs for supported models.
```
supybot.plugins.LLM.contextEnabled: True
supybot.plugins.LLM.contextMaxMessages: 20
supybot.plugins.LLM.contextTimeoutMinutes: 30
```

Context is per-user, per-channel. It is cleared after 30 minutes of inactivity or when the maximum message count is exceeded.
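The settings above can be pictured as a per-(user, channel) message buffer with a size cap and an idle timeout. A minimal sketch (class and method names are assumptions, not the plugin's API):

```python
import time
from collections import defaultdict, deque

MAX_MESSAGES = 20          # contextMaxMessages
TIMEOUT_SECONDS = 30 * 60  # contextTimeoutMinutes

class ContextStore:
    """Illustrative per-user, per-channel conversation history."""

    def __init__(self) -> None:
        # deque(maxlen=...) silently drops the oldest message once full.
        self._history: dict[tuple[str, str], deque] = defaultdict(
            lambda: deque(maxlen=MAX_MESSAGES)
        )
        self._last_seen: dict[tuple[str, str], float] = {}

    def add(self, user: str, channel: str, message: str) -> None:
        key = (user, channel)
        now = time.monotonic()
        # Expire stale context after the idle timeout.
        if now - self._last_seen.get(key, now) > TIMEOUT_SECONDS:
            self._history[key].clear()
        self._history[key].append(message)
        self._last_seen[key] = now

    def get(self, user: str, channel: str) -> list[str]:
        return list(self._history[(user, channel)])

    def forget(self, user: str, channel: str) -> None:
        """What %forget would do for one user/channel pair."""
        self._history.pop((user, channel), None)
        self._last_seen.pop((user, channel), None)
```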
```
supybot.plugins.LLM.httpRoot: /var/www/llm
supybot.plugins.LLM.httpUrlBase: https://example.com/llm
```

If `httpRoot` is empty (the default), the plugin uses Limnoria's built-in HTTP server at `data/web/llm/`.
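When both settings are set, generated files presumably land in `httpRoot` and are announced with the matching `httpUrlBase` URL. A minimal sketch of that pairing (the function and parameter names are assumptions, not the plugin's real implementation):

```python
from pathlib import Path

def publish_code(http_root: str, http_url_base: str, filename: str, code: str) -> str:
    """Write generated code under http_root and return its public URL."""
    root = Path(http_root)
    root.mkdir(parents=True, exist_ok=True)
    (root / filename).write_text(code)
    # Join base URL and filename, tolerating a trailing slash in the base.
    return f"{http_url_base.rstrip('/')}/{filename}"
```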
```shell
make test       # Run tests
make lint       # Check code
make format     # Format code
make typecheck  # Check types
make check      # Run all checks
```

This project uses:
- uv: Fast Python package manager
- prek: Fast Rust-based pre-commit hooks
- Ruff: Fast Python linter and formatter
- deptry: Dependency issue detection
- ty: Astral's static type checker
- pytest: Testing framework with 80% coverage threshold
- Dependabot: Automated dependency updates (weekly)
All code must pass linting, formatting, type checking, and tests with ≥80% coverage.
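One common way to enforce the ≥80% gate is via pytest-cov options in `pyproject.toml`. A sketch of such a configuration (the exact section contents are an assumption, not necessarily this repo's config):

```toml
[tool.pytest.ini_options]
# Fail the test run if total coverage drops below 80%.
addopts = "--cov=llm --cov-fail-under=80"
```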
```
vibebot-v8/
├── plugins/llm/
│   ├── src/llm/
│   │   ├── plugin.py    # IRC command handlers
│   │   ├── service.py   # LiteLLM business logic
│   │   ├── config.py    # Configuration definitions
│   │   └── context.py   # Conversation history
│   └── tests/           # Unit tests
├── bot.conf             # Bot configuration
└── pyproject.toml       # Dependencies and tools
```
- **Security First**
  - API keys are never logged (sanitized in all error paths)
  - Malicious URLs are blocked (`javascript:`, `data:`, `file:`, path traversal)
  - Thread-safe API key handling (keys passed directly, never via env vars)
- **Separation of Concerns**
  - `plugin.py`: IRC protocol and command routing
  - `service.py`: AI API calls and business logic
  - `context.py`: Conversation history management
- **Modern Python**
  - Python 3.12+ type hints throughout
  - Type checking with ty
  - Modern patterns (dataclasses, context managers)
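The malicious-URL blocking described above can be sketched as a scheme allow-list plus a path-traversal check (the function name and exact rules are assumptions, not the plugin's actual code):

```python
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https"}

def is_safe_url(url: str) -> bool:
    """Reject javascript:, data:, file: schemes and path traversal."""
    parsed = urlparse(url.strip())
    if parsed.scheme.lower() not in ALLOWED_SCHEMES:
        return False
    if ".." in parsed.path:  # crude path-traversal guard
        return False
    return True
```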
Check the configuration:

```
%llmkeys
```

It should show `AIz...(36 chars hidden)` or similar.

Clear the context and retry:

```
%forget
%ask Your new question here
```
- Check that the directory exists and is writable:

  ```shell
  ls -la /var/www/llm
  ```

- Check that the web server is serving the directory.
- Check the logs:

  ```shell
  tail -f logs/messages.log
  ```
See LICENSE file for details.