Open-source AI companion for Chatwoot. Provides AI-powered customer support without requiring Chatwoot Enterprise.
- Multi-provider LLM support: Claude, OpenAI, OpenRouter, Ollama
- RAG-based knowledge channels: Files, URLs, Vector stores (pgvector, Qdrant)
- Smart handoff: Automatic escalation to human agents
- Conversation memory: Context-aware responses
- Servel Hub integration: Deploy alongside Chatwoot with one command
```bash
# Deploy Chatwoot first
servel add chatwoot --name mychatwoot --var Domain=chat.example.com

# Deploy Hooty
servel add hooty --name myhooty \
  --var ChatwootUrl=http://mychatwoot-rails:3000 \
  --var ChatwootApiKey=YOUR_API_KEY \
  --var ChatwootAccountId=1 \
  --var LlmProvider=claude \
  --var LlmApiKey=YOUR_CLAUDE_KEY
```

```bash
# Build
go build -o hooty ./cmd/hooty

# Run with config file
./hooty serve -c config.yaml
```
```bash
# Or run with environment variables
export CHATWOOT_URL=https://chat.example.com
export CHATWOOT_API_KEY=your-api-key
export CHATWOOT_ACCOUNT_ID=1
export LLM_PROVIDER=claude
export LLM_API_KEY=your-claude-key
./hooty serve
```

```bash
docker run -d \
  -e CHATWOOT_URL=https://chat.example.com \
  -e CHATWOOT_API_KEY=your-api-key \
  -e CHATWOOT_ACCOUNT_ID=1 \
  -e LLM_PROVIDER=claude \
  -e LLM_API_KEY=your-claude-key \
  -p 8080:8080 \
  ghcr.io/servel-dev/hooty:latest
```

| Variable | Description | Default |
|---|---|---|
| `PORT` | HTTP server port | `8080` |
| `CHATWOOT_URL` | Chatwoot instance URL | required |
| `CHATWOOT_API_KEY` | Chatwoot API token | required |
| `CHATWOOT_ACCOUNT_ID` | Chatwoot account ID | `1` |
| `LLM_PROVIDER` | LLM provider (`claude`/`openai`/`openrouter`/`ollama`) | `claude` |
| `LLM_MODEL` | Model name | provider default |
| `LLM_API_KEY` | LLM API key | required |
| `LLM_BASE_URL` | Custom LLM endpoint (for Ollama) | - |
| `LLM_SYSTEM_PROMPT` | System prompt for the AI | default support prompt |
| `HANDOFF_ENABLED` | Enable human handoff | `true` |
| `HANDOFF_TEAM_ID` | Team ID for handoff | `1` |
| `HANDOFF_MESSAGE` | Message sent on handoff | default message |
| `MAX_AUTO_RESPONSES` | Max AI responses before handoff | `10` |
| `KNOWLEDGE_PATH` | Path to knowledge files | - |
| `KNOWLEDGE_URLS` | Comma-separated URLs for knowledge | - |
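To make the handoff settings concrete, here is a minimal sketch of the escalation rule implied by `HANDOFF_ENABLED` and `MAX_AUTO_RESPONSES` (the `conversation` type and `shouldHandoff` helper are hypothetical, not Hooty's actual API):

```go
package main

import "fmt"

// conversation tracks how many times the AI has replied without a human.
type conversation struct {
	autoResponses  int  // AI replies sent so far
	handoffEnabled bool // mirrors HANDOFF_ENABLED
}

// shouldHandoff reports whether the next message should be escalated
// to a human team instead of answered by the LLM.
func shouldHandoff(c conversation, maxAutoResponses int) bool {
	return c.handoffEnabled && c.autoResponses >= maxAutoResponses
}

func main() {
	c := conversation{autoResponses: 10, handoffEnabled: true}
	fmt.Println(shouldHandoff(c, 10)) // true: the cap is reached, escalate
}
```

With `HANDOFF_ENABLED=false` the check always returns false, so the AI keeps answering regardless of the counter.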
See config.example.yaml for full configuration options.
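For orientation only, a minimal config file might look like the sketch below; the field names are assumed to mirror the environment variables above, so treat config.example.yaml as the authoritative schema:

```yaml
chatwoot:
  url: https://chat.example.com
  api_key: your-api-key
  account_id: 1

llm:
  provider: claude
  model: claude-sonnet-4-20250514
  api_key: your-claude-key

handoff:
  enabled: true
  team_id: 1
  max_auto_responses: 10
```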
Hooty supports multiple knowledge sources for RAG-based responses:
Load markdown/text files from a directory:

```yaml
knowledge:
  channels:
    - type: file
      name: docs
      path: ./knowledge/
      extensions: [".md", ".txt"]
      recursive: true
```

Fetch and index web pages:
```yaml
knowledge:
  channels:
    - type: url
      name: website
      urls:
        - https://docs.example.com/
      refresh_interval: "24h"
```

Semantic search with vector stores:
```yaml
knowledge:
  channels:
    - type: vector
      name: embeddings
      provider: pgvector
      connection: "postgres://user:pass@localhost/db"
      collection: documents
```

To connect Hooty to Chatwoot, add a webhook:

- Go to Chatwoot Settings → Integrations → Webhooks
- Add the webhook URL: `http://hooty:8080/webhook`
- Select the `message_created` event
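A `message_created` delivery is just a JSON POST to that URL. The sketch below shows the shape of a receiver; the payload field names follow Chatwoot's webhook format, but the handler and `shouldReply` helper are illustrative, not Hooty's internals:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// webhookEvent holds the few payload fields the decision needs.
type webhookEvent struct {
	Event       string `json:"event"`        // e.g. "message_created"
	MessageType string `json:"message_type"` // "incoming" or "outgoing"
	Content     string `json:"content"`
}

// shouldReply reports whether an event warrants an AI response:
// only incoming user messages, never the bot's own outgoing ones.
func shouldReply(ev webhookEvent) bool {
	return ev.Event == "message_created" && ev.MessageType == "incoming"
}

func handleWebhook(w http.ResponseWriter, r *http.Request) {
	var ev webhookEvent
	if err := json.NewDecoder(r.Body).Decode(&ev); err != nil {
		http.Error(w, "bad payload", http.StatusBadRequest)
		return
	}
	if shouldReply(ev) {
		log.Printf("would generate AI reply for: %q", ev.Content)
	}
	w.WriteHeader(http.StatusOK)
}

func main() {
	http.HandleFunc("/webhook", handleWebhook)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Filtering on `message_type` matters: the bot's own replies also fire `message_created`, and answering them would loop forever.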
Alternatively, register Hooty as an agent bot:

- Go to Settings → Bots → Add Bot
- Set the webhook URL to the Hooty endpoint
- Assign the bot to an inbox
Claude:

```bash
export LLM_PROVIDER=claude
export LLM_MODEL=claude-sonnet-4-20250514
export LLM_API_KEY=sk-ant-...
```

OpenAI:

```bash
export LLM_PROVIDER=openai
export LLM_MODEL=gpt-4o
export LLM_API_KEY=sk-...
```

OpenRouter:

```bash
export LLM_PROVIDER=openrouter
export LLM_MODEL=anthropic/claude-sonnet-4
export LLM_API_KEY=sk-or-...
```

Ollama (no API key; point at your local server):

```bash
export LLM_PROVIDER=ollama
export LLM_MODEL=llama3.2
export LLM_BASE_URL=http://localhost:11434
```

```
┌─────────────────┐     webhook      ┌──────────────────────┐
│    Chatwoot     │ ───────────────► │        Hooty         │
│                 │                  │                      │
│                 │ ◄─────────────── │  ┌────────────────┐  │
│                 │   Chatwoot API   │  │ Knowledge Mgr  │  │
└─────────────────┘                  │  └───────┬────────┘  │
                                     │          │           │
                                     │  ┌───────▼────────┐  │
                                     │  │  LLM Provider  │  │
                                     │  └────────────────┘  │
                                     └──────────────────────┘
```
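The multi-provider design above hinges on a pluggable provider boundary. One way to sketch it in Go, with an echo backend standing in for a real API client (the interface and names are illustrative, not Hooty's actual types):

```go
package main

import (
	"errors"
	"fmt"
)

// Provider abstracts an LLM backend behind a single call.
type Provider interface {
	Complete(systemPrompt, userMessage string) (string, error)
}

// echoProvider is a stand-in backend used here instead of a real API client.
type echoProvider struct{}

func (echoProvider) Complete(system, user string) (string, error) {
	return "echo: " + user, nil
}

// newProvider picks a backend by name, as LLM_PROVIDER does.
func newProvider(name string) (Provider, error) {
	switch name {
	case "claude", "openai", "openrouter", "ollama":
		// A real build would return the matching API client here.
		return echoProvider{}, nil
	default:
		return nil, errors.New("unknown provider: " + name)
	}
}

func main() {
	p, err := newProvider("claude")
	if err != nil {
		panic(err)
	}
	reply, _ := p.Complete("You are a support agent.", "hello")
	fmt.Println(reply) // echo: hello
}
```

Because callers only see the interface, swapping Claude for a local Ollama model is a configuration change, not a code change.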
```bash
# Build
go build -o hooty ./cmd/hooty

# Run
./hooty serve -c config.yaml

# Test
go test ./...
```

MIT License - see LICENSE file.
Built with ❤️ for the Chatwoot community by Servel.