VibeBot v8

Modern IRC bot with AI capabilities powered by LiteLLM.

Features

  • Multi-provider AI: OpenAI, Anthropic, Google Gemini, and more via LiteLLM
  • Conversation context: Follow-up questions remember previous messages
  • Vision support: Automatically detects image URLs in prompts
  • Code generation: Long code responses are served as HTTP links instead of being pasted into the channel
  • Image generation: Text-to-image via Vertex AI Imagen
  • Abuse protection: Uses Limnoria's built-in flood protection
  • Modern Python: Python 3.12+ with full type hints
  • Quality tools: Ruff for linting/formatting, ty for type checking

Quick Start

# Install uv (if not already installed)
curl -LsSf https://astral.sh/uv/install.sh | sh

make install
make run

Configure API keys via bot commands:

%config plugins.LLM.askApiKey YOUR_KEY

Docker

Build and run locally:

make docker-build
make docker-run

Or pull from GHCR:

docker pull ghcr.io/rdrake/vibebot-v8:latest

Production Deployment

Install as a systemd user service:

make install-service

Then follow the printed instructions to copy your bot.conf and enable the service.

Auto-Updates

Install the auto-update timer to automatically pull new images from GHCR:

make install-timer

This checks for updates every 15 minutes and restarts the bot if a new version is found.

# Check timer status
systemctl --user status vibebot-updater.timer

# View update logs
journalctl --user -u vibebot-updater.service -f

# Disable auto-updates
make uninstall-timer
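The 15-minute cadence could be implemented with a systemd user timer along these lines. The unit names match the commands above, but the actual files installed by make install-timer may differ:

```ini
# ~/.config/systemd/user/vibebot-updater.timer (illustrative sketch)
[Unit]
Description=Check GHCR for new VibeBot images

[Timer]
OnBootSec=5min
OnUnitActiveSec=15min

[Install]
WantedBy=timers.target
```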

Static Assets (Reverse Proxy)

When serving code/images via Nginx or Apache, set the public URL:

%config supybot.servers.http.publicUrl https://example.com

The bot will generate URLs like https://example.com/llm/filename.py.
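As one example, a minimal Nginx location block for this setup might look like the following (paths assumed from the httpRoot example later in this README, not taken from the repo):

```nginx
# Illustrative Nginx config; adjust alias to match supybot.plugins.LLM.httpRoot.
server {
    listen 443 ssl;
    server_name example.com;

    location /llm/ {
        alias /var/www/llm/;   # directory the bot writes generated files into
        autoindex off;
    }
}
```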

Commands

User Commands

Command            Description
%ask <question>    Ask AI a question (supports vision with image URLs, remembers context)
%code <request>    Generate code (remembers context for iterating on code)
%draw <prompt>     Generate an image (no context)
%forget [channel]  Clear your conversation context

Admin Commands

Command    Description
%llmkeys   Check API key status (shows first 3 chars only, sent privately)
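The "first 3 chars only" masking could be sketched as follows. The helper name is invented for illustration; the plugin's actual implementation may differ:

```python
# Hypothetical sketch of %llmkeys-style masking: show the first 3 characters
# and report how many are hidden.
def mask_key(key: str) -> str:
    """Return a masked form like 'AIz...(36 chars hidden)'."""
    if not key:
        return "(not set)"
    visible, hidden = key[:3], len(key) - 3
    return f"{visible}...({hidden} chars hidden)"

print(mask_key("AIzaSyExampleExampleExampleExampleKey12"))
# prints AIz...(36 chars hidden) for this 39-character example key
```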

Configuration

Models

Configure models in bot.conf:

# Free tier (Gemini Flash)
supybot.plugins.LLM.askModel: gemini/gemini-1.5-flash
supybot.plugins.LLM.codeModel: gemini/gemini-1.5-flash

# Paid tier (Vertex Imagen)
supybot.plugins.LLM.drawModel: vertex_ai/imagen-4.0-generate-001

See LiteLLM docs for supported models.

Conversation Context

supybot.plugins.LLM.contextEnabled: True
supybot.plugins.LLM.contextMaxMessages: 20
supybot.plugins.LLM.contextTimeoutMinutes: 30

Context is per-user, per-channel. It is cleared after 30 minutes of inactivity or when the message limit is exceeded.

HTTP Output

supybot.plugins.LLM.httpRoot: /var/www/llm
supybot.plugins.LLM.httpUrlBase: https://example.com/llm

If httpRoot is empty (the default), the bot uses Limnoria's built-in HTTP server and serves files from data/web/llm/.
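The fallback behavior can be sketched as a small resolver. The function name and exact URL-joining rules are assumptions, not taken from the plugin source:

```python
from pathlib import Path

# Illustrative sketch: pick the output directory and URL base, falling back
# to Limnoria's built-in HTTP server when httpRoot is unset.
def resolve_output(http_root: str, http_url_base: str, public_url: str) -> tuple[Path, str]:
    if http_root:
        # External web server: write into httpRoot, link via httpUrlBase.
        return Path(http_root), http_url_base
    # Built-in server: files under data/web/llm/, URLs under publicUrl + /llm.
    return Path("data/web/llm"), public_url.rstrip("/") + "/llm"
```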

Development

Run Tests

make test

Lint and Format

make lint        # Check code
make format      # Format code
make typecheck   # Check types
make check       # Run all checks

Code Quality

This project uses:

  • uv: Fast Python package manager
  • prek: Fast Rust-based pre-commit hooks
  • Ruff: Fast Python linter and formatter
  • deptry: Dependency issue detection
  • ty: Astral's static type checker
  • pytest: Testing framework with 80% coverage threshold
  • Dependabot: Automated dependency updates (weekly)

All code must pass linting, formatting, type checking, and tests with ≥80% coverage.

Architecture

vibebot-v8/
├── plugins/llm/
│   ├── src/llm/
│   │   ├── plugin.py       # IRC command handlers
│   │   ├── service.py      # LiteLLM business logic
│   │   ├── config.py       # Configuration definitions
│   │   └── context.py      # Conversation history
│   └── tests/              # Unit tests
├── bot.conf                # Bot configuration
└── pyproject.toml          # Dependencies and tools

Design Principles

  1. Security First

    • API keys never logged (sanitized in all error paths)
    • Malicious URLs blocked (javascript:, data:, file:, path traversal)
    • Thread-safe API key handling (passed directly, never env vars)
  2. Separation of Concerns

    • plugin.py: IRC protocol and command routing
    • service.py: AI API calls and business logic
    • context.py: Conversation history management
  3. Modern Python

    • Python 3.12+ type hints throughout
    • Type checking with ty
    • Modern patterns (dataclasses, context managers)
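The URL checks listed under "Security First" might look roughly like this. The blocked-scheme list comes from the bullet above; the function name and exact rules are assumptions:

```python
from urllib.parse import urlparse

# Illustrative URL validation: reject dangerous schemes and path traversal.
_BLOCKED_SCHEMES = {"javascript", "data", "file"}

def is_safe_url(url: str) -> bool:
    parsed = urlparse(url.strip())
    if parsed.scheme.lower() in _BLOCKED_SCHEMES:
        return False
    if parsed.scheme not in ("http", "https"):   # allow-list web schemes only
        return False
    if ".." in parsed.path:                      # crude path-traversal check
        return False
    return True
```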

Troubleshooting

API Key Not Working

Check configuration:

%llmkeys

It should show something like AIz...(36 chars hidden).

Context Not Working

Clear and retry:

%forget
%ask Your new question here

Code Not Saving to HTTP

  1. Check that the directory exists and is writable:

    ls -la /var/www/llm
  2. Check that the web server is serving the directory.

  3. Check the logs:

    tail -f logs/messages.log

License

See LICENSE file for details.

Credits

  • Built with Limnoria
  • Powered by LiteLLM
  • Developed for AfterNET IRC (irc.afternet.org)
