A Telegram bot for monitoring Docker containers and Unraid servers. Get real-time alerts, check container status, view logs, and control containers - all from Telegram.
- Interactive Setup Wizard - Guided first-run setup via Telegram with auto-classification of containers
- Container Monitoring - Status, health checks, crash detection, and recovery notifications
- Resource Alerts - CPU/memory usage with configurable thresholds
- Log Watching - Automatic alerts when errors appear in container logs
- AI Diagnostics - LLM-powered log analysis and troubleshooting (Anthropic, OpenAI, or Ollama)
- Smart Ignore Patterns - AI-generated patterns to filter known errors, with interactive toggle selection
- Multi-Provider LLM - Switch between Anthropic Claude, OpenAI GPT, or local Ollama models at runtime
- Container Control - Start, stop, restart, and pull containers with inline confirmation buttons
- Unraid Server Monitoring - CPU/memory, temperatures, UPS status, and array health
- Memory Pressure Management - Automatic container priority handling during high memory
- Mute System - Temporarily silence alerts per container, server, or array
- Natural Language Chat - Ask questions naturally instead of using commands
- Interactive Dashboard - `/manage` hub for status, resources, ignores, and mutes
- Sectioned Help - `/help` with navigable category buttons instead of a text wall
This release overhauls the bot's UX with inline keyboard buttons throughout:
- Button confirmations - `/restart`, `/stop`, `/start`, and `/pull` now show ✅ Confirm / ❌ Cancel buttons
- Diagnose details button - After AI diagnosis, tap 🔍 More Details instead of typing
- Toggle ignore selection - `/ignore` shows ✅/❌ checkboxes per error with Select All
- Manage with delete buttons - Remove ignores and mutes with per-item 🗑 buttons
- Recovery alerts - Get notified when a crashed container comes back online
- Sectioned /help - Browse commands by category (Containers, Server, Alerts, Setup)
- Back navigation - All sub-views include ⬅️ Back buttons
- Smarter mute display - Shows "until tomorrow 14:30" instead of just a time
See the changelog for full details.
The easiest way to install on Unraid.
- **Install from Community Apps**
  - Open the Unraid web UI
  - Go to the Apps tab
  - Search for "Unraid Monitor Bot"
  - Click Install
- **Configure the template**
  - `TELEGRAM_BOT_TOKEN` - Your bot token (how to get one)
  - `TELEGRAM_ALLOWED_USERS` - Your Telegram user ID (how to find it)
  - `ANTHROPIC_API_KEY` (optional) - Enables AI features via Claude
  - `OPENAI_API_KEY` (optional) - Enables AI features via OpenAI
  - `OLLAMA_HOST` (optional) - Enables AI features via local Ollama (e.g., `http://192.168.1.100:11434`)
  - `UNRAID_API_KEY` (optional) - Enables server monitoring
- **Start the container**
- **Message your bot on Telegram** - send `/start` to begin the setup wizard
  - The wizard will guide you through connecting to your Unraid server
  - It auto-classifies your containers into categories (priority, protected, watched, killable, ignored)
  - When an Anthropic API key is configured, AI assists with classifying unknown containers
  - Review and adjust the categories, then confirm to save
  - The bot restarts automatically and begins monitoring
- **Re-configure anytime (optional)**
  - Send `/setup` to re-run the wizard (merges non-destructively with existing config)
  - Or edit `/mnt/user/appdata/unraid-monitor/config/config.yaml` directly and restart
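The wizard's categories correspond to sections of `config.yaml` described later in this README. As a rough sketch of what a saved classification might look like (container names are examples and the exact schema may differ from what the wizard writes):

```yaml
# Assumed mapping of wizard categories onto config sections
priority_containers:       # never killed under memory pressure
  - plex
protected_containers:      # cannot be controlled via Telegram
  - mariadb
log_watching:
  containers:              # "watched" - log errors trigger alerts
    - radarr
killable_containers:       # sacrificed first under memory pressure
  - handbrake
ignored_containers:        # hidden from status reports
  - some-temp-container
```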
If not using Community Apps, you can set it up manually.
Create the appdata directories:

```shell
mkdir -p /mnt/user/appdata/unraid-monitor/{config,data}
```

Create `/mnt/user/appdata/unraid-monitor/config/.env`:

```shell
# Required
TELEGRAM_BOT_TOKEN=your_bot_token_here
TELEGRAM_ALLOWED_USERS=123456789

# Optional - AI features (configure at least one for /diagnose, NL chat, smart ignore)
ANTHROPIC_API_KEY=your_anthropic_api_key_here
OPENAI_API_KEY=your_openai_api_key_here
OLLAMA_HOST=http://localhost:11434

# Optional - enables Unraid server monitoring
UNRAID_API_KEY=your_unraid_api_key_here
```

Go to Docker → Add Container and configure:
| Field | Value |
|---|---|
| Name | unraid-monitor-bot |
| Repository | ghcr.io/dervish666/unraidmonitor:latest |
| Network Type | bridge or your preferred network |
Add these paths:
| Container Path | Host Path | Access |
|---|---|---|
| `/app/config` | `/mnt/user/appdata/unraid-monitor/config` | Read/Write |
| `/app/data` | `/mnt/user/appdata/unraid-monitor/data` | Read/Write |
| `/var/run/docker.sock` | `/var/run/docker.sock` | Read Only |
Add these variables:
| Name | Value |
|---|---|
| `TELEGRAM_BOT_TOKEN` | Your bot token |
| `TELEGRAM_ALLOWED_USERS` | Your user ID |
| `ANTHROPIC_API_KEY` | (optional) Claude AI features |
| `OPENAI_API_KEY` | (optional) OpenAI AI features |
| `OLLAMA_HOST` | (optional) Ollama URL, e.g., `http://192.168.1.100:11434` |
| `UNRAID_API_KEY` | (optional) Unraid server monitoring |
| `TZ` | Your timezone (e.g., `Europe/London`) |
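If you prefer compose over the Unraid Add Container form, the same template can be sketched as a docker-compose service. This is a sketch built from the table above — values are placeholders, and the repo's own `docker-compose.yml` (if any) may differ:

```yaml
services:
  unraid-monitor-bot:
    image: ghcr.io/dervish666/unraidmonitor:latest
    network_mode: bridge
    restart: unless-stopped
    environment:
      TELEGRAM_BOT_TOKEN: your_bot_token_here
      TELEGRAM_ALLOWED_USERS: "123456789"
      TZ: Europe/London
    volumes:
      - /mnt/user/appdata/unraid-monitor/config:/app/config
      - /mnt/user/appdata/unraid-monitor/data:/app/data
      - /var/run/docker.sock:/var/run/docker.sock:ro
```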
Start the container and check the logs for any errors. Message your bot on Telegram with /start to begin the interactive setup wizard.
- Open Telegram and message @BotFather
- Send `/newbot`
- Follow the prompts to name your bot
- Copy the bot token (looks like `123456789:ABCdefGHIjklMNOpqrsTUVwxyz`)
- Message @userinfobot on Telegram
- It will reply with your numeric user ID (e.g., `123456789`)
This ID is used to restrict who can control your bot. You can add multiple IDs separated by commas: 123456789,987654321
At least one provider is needed for AI-powered features (/diagnose, smart ignore patterns, natural language chat). You can configure multiple providers and switch between them at runtime with /model.
Option A: Anthropic Claude (recommended)
- Sign up at console.anthropic.com
- Go to API Keys and create a new key
- Add it as `ANTHROPIC_API_KEY`
Option B: OpenAI
- Sign up at platform.openai.com
- Go to API Keys and create a new key
- Add it as `OPENAI_API_KEY`
Option C: Ollama (free, runs locally)
- Install Ollama from ollama.com
- Pull a model: `ollama pull llama3.1:8b`
- Set `OLLAMA_HOST` to your Ollama URL (e.g., `http://192.168.1.100:11434`)
Models are auto-discovered from Ollama at startup. Note: some local models don't support tool calling, so NL chat actions (restart, etc.) may be limited.
Required for Unraid server monitoring (CPU, memory, temps, array status).
- In the Unraid web UI, go to Settings → Management Access
- Generate an API key
- Add it as `UNRAID_API_KEY`
Configuration is stored in config/config.yaml. On first run, the interactive setup wizard creates this file. You can also run /setup anytime to reconfigure.
Location:
- Unraid: `/mnt/user/appdata/unraid-monitor/config/config.yaml`
- Docker: `./config/config.yaml` (relative to project root)
```yaml
# Containers to watch for log errors
log_watching:
  containers:
    - plex
    - radarr
    - sonarr
    - lidarr
  error_patterns:
    - "error"
    - "exception"
    - "fatal"
    - "failed"
    - "critical"
  ignore_patterns:
    - "DeprecationWarning"
    - "DEBUG"
  cooldown_seconds: 900  # 15 min between alerts for same container

# Containers to hide from status reports
ignored_containers:
  - some-temp-container

# Containers that cannot be controlled via Telegram (safety)
protected_containers:
  - unraid-monitor-bot
  - mariadb
  - postgresql14
```

```yaml
resource_monitoring:
  enabled: true
  poll_interval_seconds: 60
  sustained_threshold_seconds: 120  # Alert after 2 min exceeded
  defaults:
    cpu_percent: 80
    memory_percent: 85
  # Per-container overrides
  containers:
    plex:
      cpu_percent: 95   # Plex often uses high CPU
      memory_percent: 90
    handbrake:
      cpu_percent: 100  # Expected to max out
```

Automatically kills low-priority containers when system memory is critical.

```yaml
memory_management:
  enabled: false           # Disabled by default - enable with caution
  warning_threshold: 90    # Notify at this %
  critical_threshold: 95   # Start killing at this %
  safe_threshold: 80       # Offer restart when below this
  kill_delay_seconds: 60   # Warning before killing
  stabilization_wait: 180  # Wait between kills
  # Never kill these (highest priority)
  priority_containers:
    - plex
    - mariadb
  # Kill these in order during memory pressure (lowest priority first)
  killable_containers:
    - handbrake
    - tdarr
```

```yaml
unraid:
  enabled: true
  host: "192.168.1.100"  # Your Unraid IP
  port: 443
  use_ssl: true
  verify_ssl: false      # Set true if using a valid SSL cert
  polling:
    system: 30   # CPU/memory poll interval
    array: 300   # Array status poll interval
    ups: 60      # UPS status poll interval
  thresholds:
    cpu_temp: 80      # Alert above this temp (C)
    cpu_usage: 95     # Alert above this %
    memory_usage: 90  # Alert above this %
    disk_temp: 50     # Alert above this temp (C)
    array_usage: 85   # Alert above this %
    ups_battery: 30   # Alert below this %
```

| Command | Description |
|---|---|
| `/status` | Overview of all containers |
| `/status <name>` | Details for a specific container |
| `/resources` | CPU/memory usage for all containers |
| `/resources <name>` | Detailed stats with thresholds |
| `/logs <name> [n]` | Last n log lines (default 20) |
| `/diagnose <name>` | AI log analysis with 🔍 More Details button |
| `/restart <name>` | Restart with ✅ Confirm / ❌ Cancel buttons |
| `/stop <name>` | Stop with confirmation buttons |
| `/start <name>` | Start with confirmation buttons |
| `/pull <name>` | Pull latest image and recreate (with confirmation) |
Tip: Partial names work - `/status rad` matches `radarr`
| Command | Description |
|---|---|
| `/server` | Server overview (CPU, memory, temps) |
| `/server detailed` | Full metrics including per-core temps |
| `/array` | Array status and disk health |
| `/disks` | Detailed disk information |
| Command | Description |
|---|---|
| `/mute <name> <duration>` | Mute container (e.g., `/mute plex 2h`) |
| `/unmute <name>` | Unmute a container |
| `/mute-server <duration>` | Mute server alerts |
| `/unmute-server` | Unmute server alerts |
| `/mute-array <duration>` | Mute array alerts |
| `/unmute-array` | Unmute array alerts |
| `/mutes` | Show all active mutes |
| `/ignore` | Select errors to ignore with ✅/❌ toggle buttons |
| `/ignores` | List all ignore patterns |
| `/cancel-kill` | Cancel pending memory pressure kill |
Duration formats: `30m`, `2h`, `1d`, `1w`
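If you script around the bot, the duration syntax is easy to reproduce. A hypothetical shell helper (not part of the bot) that converts a duration string to seconds:

```shell
# Hypothetical helper mirroring the bot's duration syntax (30m, 2h, 1d, 1w).
# Prints the duration in seconds, or nothing for an unknown unit.
duration_to_seconds() {
  local n="${1%?}"            # everything but the last character
  local unit="${1#"${1%?}"}"  # the last character (m, h, d, or w)
  case "$unit" in
    m) echo $(( n * 60 )) ;;
    h) echo $(( n * 3600 )) ;;
    d) echo $(( n * 86400 )) ;;
    w) echo $(( n * 604800 )) ;;
  esac
}

duration_to_seconds 30m   # 1800
duration_to_seconds 2h    # 7200
```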
| Command | Description |
|---|---|
| `/setup` | Re-run the setup wizard (merges with existing config) |
| `/cancel` | Exit the setup wizard mid-flow |
| `/manage` | Interactive dashboard: status, resources, ignores, mutes |
| `/health` | Bot version, uptime, and monitor status |
| `/model` | Switch LLM provider and model at runtime |
| `/help` | Browse commands by category with navigation buttons |
Instead of commands, you can ask questions naturally:
- "What's wrong with plex?"
- "Why is my server slow?"
- "Is anything crashing?"
- "Show me radarr logs"
- "Restart sonarr" (shows confirmation buttons)
Follow-up questions work too: say "restart it" after discussing a container.
Note: Requires at least one LLM provider to be configured (`ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, or `OLLAMA_HOST`). Use `/model` to switch providers.
All alerts include tappable inline buttons for quick actions, so there's no need to type commands.
```
🔴 CONTAINER CRASHED: radarr
Exit code: 137 (OOM killed)
Image: linuxserver/radarr:latest
Uptime: 2h 34m

[🔄 Restart] [📋 Logs] [🔍 Diagnose]
[🔇 Mute 1h] [🔇 Mute 24h]
```
Sent automatically when a previously crashed container starts successfully:
```
✅ radarr recovered and is running again.
```
Recovery alerts include a 5-minute cooldown to prevent spam if a container is flapping.
```
🔁🔴 RESTART LOOP: radarr
Crashed 5 times in the last 10 minutes!
Exit code: 137 (OOM killed)
Image: linuxserver/radarr:latest

[🔄 Restart] [📋 Logs] [🔍 Diagnose]
[🔇 Mute 1h] [🔇 Mute 24h]
```
```
⚠️ HIGH MEMORY USAGE: plex
Memory: 92% (threshold: 85%)
7.4GB / 8.0GB limit
Exceeded for: 3 minutes
CPU: 45% (normal)

[📋 Logs] [🔍 Diagnose]
[🔇 Mute 1h] [🔇 Mute 24h]
```
```
⚠️ ERRORS IN: sonarr
Found 3 errors in the last 15 minutes
Latest: Database connection failed: timeout

[🙈 Ignore Similar] [🔇 Mute 1h]
[📋 Logs] [🔍 Diagnose]
```
For a detailed walkthrough of all features, see the User Guide.
It covers:
- First-run setup and the interactive wizard
- Understanding each alert type and what to do
- Container management workflows (diagnose, restart, logs)
- Using the `/manage` dashboard
- Muting alerts and creating ignore patterns
- AI features and switching LLM providers
- Tips and best practices
- Check the container is running: `docker ps | grep unraid-monitor`
- Check logs for errors: `docker logs unraid-monitor-bot`
- Verify `TELEGRAM_BOT_TOKEN` is correct
- Verify your user ID is in `TELEGRAM_ALLOWED_USERS`
This means the container can't access the Docker socket.
1. Check your Docker socket GID:

   ```shell
   ls -ln /var/run/docker.sock
   ```

   Look at the 4th column (e.g., `281` on Unraid, `999` on Ubuntu).

2. If using docker-compose, set `DOCKER_GID` in `.env`:

   ```shell
   echo "DOCKER_GID=999" > .env
   ```

3. Rebuild the container:

   ```shell
   docker-compose build --no-cache
   docker-compose up -d
   ```

4. Last resort: edit `docker-compose.yml` and uncomment `user: root`.
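As a less drastic alternative to `user: root`, compose can grant the container membership of the socket's group explicitly. A sketch assuming the GID found in step 1 (adjust `281` to your system's value; whether this is needed depends on how the image sets up its user):

```yaml
services:
  unraid-monitor-bot:
    # add the docker.sock group inside the container
    group_add:
      - "281"   # GID from `ls -ln /var/run/docker.sock`
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
```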
- Verify at least one LLM key is set: `ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, or `OLLAMA_HOST`
- Check logs for API errors
- Use `/model` to see which providers are configured and switch between them
- If using Ollama, ensure the server is reachable and has models pulled
- The bot works without AI: you'll get basic alerts, but `/diagnose` and natural language chat won't work
- Verify `UNRAID_API_KEY` is set
- Check the `unraid` section in `config.yaml` has correct `host` and `port`
- If using self-signed certs, set `verify_ssl: false`
Check logs immediately after start:

```shell
docker logs unraid-monitor-bot
```

Common issues:

- Missing `TELEGRAM_BOT_TOKEN` or `TELEGRAM_ALLOWED_USERS`
- Invalid configuration in `config.yaml`
- Docker socket permission issues (see above)

Restart the container after editing config:

```shell
docker restart unraid-monitor-bot
```

All persistent data is stored in mounted volumes:
```
config/
├── config.yaml           # Main configuration
└── .env                  # Environment variables (secrets)

data/
├── ignored_errors.json   # Ignore patterns
├── mutes.json            # Container mutes
├── server_mutes.json     # Server mutes
├── array_mutes.json      # Array mutes
└── model_selection.json  # Active LLM provider/model choice
```
- Docker
- Telegram Bot Token
- (Optional) LLM provider for AI features: Anthropic API key, OpenAI API key, or Ollama instance
- (Optional) Unraid API key for server monitoring
MIT