Self-hosted news monitoring with LLM-powered novelty detection. Watches topics via RSS feeds and notifies you only when genuinely new information appears. BYOK (bring your own key).
- Define topics with RSS feed URLs, or let it auto-generate a Google News feed
- On a schedule, articles are fetched and compared against a knowledge state (a rolling summary of what's already known)
- An LLM decides if anything is actually new
- New info: notification with summary + sources. Nothing new: silence.
- Auto feeds (Google News) or manual RSS/Atom URLs
- Per-topic check intervals (10 minutes to 6 months, human-readable: `6h`, `1w 3d`, `2h 30m`)
- Topic tags
- 100+ notification services via Apprise (Discord, Slack, Telegram, email, ntfy, etc.)
- Custom JSON webhooks
- Notification retry queue
- Feed health dashboard
- Data export (JSON, CSV)
- Bulk check/delete
- 5 color themes (Nord, Dracula, Solarized Dark, High Contrast, Tokyo Night)
- In-app settings page
- CLI: `list`, `check`, `check-all`, `init`
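The check pipeline above can be sketched in a few lines of Python. This is illustrative only: `fetch_articles`, `llm_judge`, `notify`, and the shape of the topic dict are stand-ins, not Topic Watch's actual internals.

```python
# Minimal sketch of one check cycle: fetch articles, ask the LLM to
# compare them against the rolling knowledge state, notify only on
# genuine novelty. All function and field names are hypothetical.
def run_check(topic, fetch_articles, llm_judge, notify):
    articles = fetch_articles(topic["feed_urls"])
    # The LLM sees the rolling summary plus the new articles and
    # decides whether anything is actually new.
    verdict = llm_judge(knowledge_state=topic["knowledge_state"],
                        articles=articles)
    if verdict["is_new"]:
        notify(topic["name"], verdict["summary"], verdict["sources"])
        # Fold the new facts into the rolling summary for next time.
        topic["knowledge_state"] = verdict["updated_state"]
    return verdict["is_new"]
```

Nothing new means no notification and an unchanged knowledge state, which is the "silence" behavior described above.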
Topic Watch runs in Docker. If you don't have it yet:
- Linux: `curl -fsSL https://get.docker.com | sh`
- macOS: Download Docker Desktop and run the installer
- Windows: Download Docker Desktop and run the installer. The WSL 2 backend is recommended.
Make sure Docker is running before continuing.
Linux / macOS:

```bash
curl -fsSL https://raw.githubusercontent.com/0xzerolight/topic_watch/main/scripts/install.sh | bash
```

Windows (PowerShell):

```powershell
irm https://raw.githubusercontent.com/0xzerolight/topic_watch/main/scripts/install.ps1 | iex
```

The script pulls the image, starts the container, creates a desktop shortcut with auto-start, and opens the setup wizard. Set your LLM API key in the wizard.
Override the install location and port:

```bash
# Linux / macOS
TOPIC_WATCH_DIR=~/my-path TOPIC_WATCH_PORT=9000 curl -fsSL .../scripts/install.sh | bash
```

```powershell
# Windows (PowerShell)
$env:TOPIC_WATCH_DIR="C:\TopicWatch"; $env:TOPIC_WATCH_PORT="9000"; irm .../scripts/install.ps1 | iex
```

Manual setup
With Docker:

```bash
git clone https://github.com/0xzerolight/topic_watch.git
cd topic_watch
docker compose up -d
```

Without Docker:

```bash
git clone https://github.com/0xzerolight/topic_watch.git
cd topic_watch
python -m venv .venv && source .venv/bin/activate
pip install .
mkdir -p data && cp config.example.yml data/config.yml
uvicorn app.main:app --host 0.0.0.0 --port 8000
```

Then visit http://localhost:8000 to configure.
If you run Ollama locally, Topic Watch can use it for free LLM-powered novelty detection:
```bash
# 1. Start Ollama and pull a model (8B+ recommended for novelty detection)
ollama pull llama3.3

# 2. Start Topic Watch with the Ollama override config
cp docker-compose.override.example.yml docker-compose.override.yml
docker compose up -d
```

The override file sets `ollama/llama3.3` as the model and points to your local Ollama instance. No API key required.
Model recommendations: Models with 8B+ parameters and 8K+ context windows work best. Tested with llama3.3 (8B), mistral (7B), and qwen2.5 (7B). Smaller models may miss subtle novelty signals.
Settings live in `data/config.yml`. On first run, `config.example.yml` is copied there automatically. Edit via the web UI Settings page or directly in the file.
| Key | Type | Default | Description |
|---|---|---|---|
| `llm.model` | string | - | LiteLLM model string (e.g. `openai/gpt-5.4-nano`) |
| `llm.api_key` | string | - | API key for your LLM provider |
| `llm.base_url` | string | - | Base URL for self-hosted providers (Ollama, etc.) |
| `notifications.urls` | list | `[]` | Apprise notification URLs |
| `notifications.webhook_urls` | list | `[]` | Webhook endpoints for JSON POST (see Webhooks) |
| `check_interval` | string | `"6h"` | Default check interval. Units: `m` (minutes), `h` (hours), `d` (days), `w` (weeks), `M` (months). Combine: `1w 3d`, `2h 30m`. Min `10m`, max `6M`. |
| `max_articles_per_check` | int | `10` | Articles to process per check per topic (1-100) |
| `knowledge_state_max_tokens` | int | `2000` | Token budget for the knowledge state (500-10,000) |
| `article_retention_days` | int | `90` | Days to keep articles before cleanup (1-3,650) |
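The interval grammar takes only a few lines to parse. A sketch (not the project's actual parser) that returns total minutes; note the month multiplier of 30 days is an assumption for illustration:

```python
import re

# Unit multipliers in minutes. "M" (months) is approximated as 30 days
# here -- the real implementation may define it differently.
UNITS = {"m": 1, "h": 60, "d": 1440, "w": 10080, "M": 43200}

def parse_interval(text: str) -> int:
    """Parse strings like '6h', '1w 3d', '2h 30m' into total minutes."""
    total = 0
    for amount, unit in re.findall(r"(\d+)\s*([mhdwM])", text):
        total += int(amount) * UNITS[unit]
    # Enforce the documented bounds: min 10m, max 6M.
    if not 10 <= total <= 6 * UNITS["M"]:
        raise ValueError("interval must be between 10m and 6M")
    return total
```

For example, `parse_interval("2h 30m")` yields 150 minutes, and `"5m"` is rejected for being under the 10-minute floor.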
Advanced settings
| Key | Type | Default | Description |
|---|---|---|---|
| `db_path` | string | `data/topic_watch.db` | SQLite database path (relative or absolute) |
| `feed_fetch_timeout` | float | `15.0` | RSS feed fetch timeout (seconds) |
| `article_fetch_timeout` | float | `20.0` | Article content fetch timeout (seconds) |
| `llm_analysis_timeout` | int | `60` | LLM novelty analysis timeout (seconds) |
| `llm_knowledge_timeout` | int | `120` | LLM knowledge generation timeout (seconds) |
| `web_page_size` | int | `20` | Items per page in the web UI (5-200) |
| `feed_max_retries` | int | `2` | RSS feed fetch retries (1-10) |
| `content_fetch_concurrency` | int | `3` | Concurrent article content fetches (1-20) |
| `scheduler_misfire_grace_time` | int | `300` | APScheduler misfire grace time (seconds, 30-3,600) |
| `scheduler_jitter_seconds` | int | `30` | Random jitter per scheduler tick (seconds, 0-120) |
| `llm_max_retries` | int | `2` | LLM API call retries (0-10) |
| `llm_temperature` | float | `0.2` | LLM sampling temperature (0.0-2.0, lower = more factual) |
| `min_confidence_threshold` | float | `0.7` | Minimum LLM confidence to send notifications (0.0-1.0) |
| `min_relevance_threshold` | float | `0.5` | Minimum relevance to the topic description to send notifications (0.0-1.0) |
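The two threshold settings combine as a simple AND gate. A sketch of the idea (the real decision logic may weigh additional signals):

```python
def should_notify(confidence: float, relevance: float,
                  min_confidence: float = 0.7,
                  min_relevance: float = 0.5) -> bool:
    """Notify only when the LLM is confident the information is new
    AND it is relevant to the topic description.
    Defaults mirror the table above."""
    return confidence >= min_confidence and relevance >= min_relevance
```

Raising either threshold trades missed updates for fewer false alarms; lowering them does the reverse.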
All settings can be overridden with the `TOPIC_WATCH_` prefix. Use double underscores for nested keys:

```bash
TOPIC_WATCH_LLM__API_KEY=sk-abc123
TOPIC_WATCH_LLM__MODEL=openai/gpt-5.4-nano
TOPIC_WATCH_CHECK_INTERVAL=4h
TOPIC_WATCH_NOTIFICATIONS__WEBHOOK_URLS='["https://example.com/hook"]'
```

Environment-only settings:
| Variable | Default | Description |
|---|---|---|
| `TOPIC_WATCH_LOG_LEVEL` | `INFO` | `DEBUG`, `INFO`, `WARNING`, `ERROR` |
| `TOPIC_WATCH_LOG_FORMAT` | `text` | `text` or `json` |
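The double-underscore convention maps flat environment variables onto nested config keys, roughly like this (a sketch, not the actual loader):

```python
import os

PREFIX = "TOPIC_WATCH_"

def env_overrides(environ=os.environ) -> dict:
    """Turn TOPIC_WATCH_LLM__API_KEY=... into {'llm': {'api_key': ...}}."""
    config: dict = {}
    for name, value in environ.items():
        if not name.startswith(PREFIX):
            continue
        # Strip the prefix, lowercase, split nested levels on "__".
        path = name[len(PREFIX):].lower().split("__")
        node = config
        for key in path[:-1]:
            node = node.setdefault(key, {})
        node[path[-1]] = value
    return config
```

So `TOPIC_WATCH_CHECK_INTERVAL=4h` becomes a top-level `check_interval` key, while `TOPIC_WATCH_LLM__API_KEY` nests under `llm`.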
- Dashboard > Add Topic
- Fill in: Name, Description (what you care about in plain English), Feed Source (Automatic/Manual), Feed URLs (if Manual, one per line), Check Interval, Tags
- Save
The topic enters a "Researching" phase where it fetches articles and builds an initial knowledge state. This takes under a minute. After that, it enters the normal check cycle.
- Try appending `/rss`, `/feed`, or `/atom.xml` to a site URL
- Reddit: `https://www.reddit.com/r/SUBREDDIT/search.rss?q=QUERY&sort=new`
- Most blogs use `/feed` or `/index.xml`
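For the Reddit pattern, remember to URL-encode the query. A small helper (illustrative, not part of Topic Watch):

```python
from urllib.parse import quote

def reddit_search_feed(subreddit: str, query: str) -> str:
    """Build a Reddit search RSS URL, newest results first."""
    return (f"https://www.reddit.com/r/{subreddit}/search.rss"
            f"?q={quote(query)}&sort=new")
```

For example, `reddit_search_feed("selfhosted", "topic watch")` encodes the space in the query as `%20`.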
Uses LiteLLM. Anything LiteLLM supports works.
| Provider | Model String | Notes |
|---|---|---|
| OpenAI | `openai/gpt-5.4-nano` | Cheapest OpenAI option |
| Anthropic | `anthropic/claude-haiku-4-5` | Fast, good quality |
| Ollama | `ollama/llama3.3` | Free, local. Set `llm.base_url` |
| Google Gemini | `gemini/gemini-2.5-flash` | |
| Groq | `groq/llama-3.3-70b-versatile` | Very fast inference |
| DeepSeek | `deepseek/deepseek-chat` | Very cheap |
| Azure OpenAI | `azure/your-deployment` | |
| Cohere | `cohere_chat/command-a-03-2025` | |
| Together AI | `together_ai/meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8` | |
Example config for Ollama:

```yaml
llm:
  model: "ollama/llama3.3"
  api_key: "unused"
  base_url: "http://host.docker.internal:11434"  # or http://localhost:11434 outside Docker
```

100+ services supported via the Apprise URL format:
| Service | URL Format |
|---|---|
| Ntfy | `ntfy://your-topic` |
| Discord | `discord://webhook_id/webhook_token` |
| Telegram | `tgram://bot_token/chat_id` |
| Slack | `slack://token_a/token_b/token_c/channel` |
| Email (Gmail) | `mailto://user:app_password@gmail.com` |
| Pushover | `pover://user_key@api_token` |
Multiple URLs supported. Use the Test Notification button on the Settings page to verify.
```yaml
notifications:
  urls:
    - "ntfy://my-news-tracker"
    - "discord://webhook_id/webhook_token"
```

POST a JSON payload to any endpoint when new info is found:
```yaml
notifications:
  webhook_urls:
    - "https://your-server.com/webhook/topic-watch"
```

Payload:
```json
{
  "topic": "Topic Name",
  "reasoning": "Brief explanation of why this was flagged as new...",
  "summary": "...",
  "key_facts": ["...", "..."],
  "source_urls": ["https://..."],
  "confidence": 0.92,
  "timestamp": "2026-04-01T12:00:00+00:00"
}
```

10-second timeout per endpoint, concurrent delivery; failures are logged but non-blocking.
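A minimal receiver for that payload might validate the fields and filter on confidence before acting. This is a framework-free sketch; the function name and the `min_confidence` cutoff are illustrative choices, not part of Topic Watch:

```python
import json

# Fields a well-formed payload is expected to carry (from the example above).
REQUIRED = {"topic", "summary", "source_urls", "confidence", "timestamp"}

def handle_webhook(body: bytes, min_confidence: float = 0.8):
    """Parse a Topic Watch webhook POST body.

    Returns the parsed payload when it is well-formed and confident
    enough to act on, else None. Either way, respond 2xx quickly:
    the sender gives each endpoint only 10 seconds.
    """
    payload = json.loads(body)
    if not REQUIRED <= payload.keys():
        return None
    if payload["confidence"] < min_confidence:
        return None
    return payload
```

Plug this into whatever HTTP server you already run; the important part is answering fast and doing any slow work after the response.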
| Endpoint | Description |
|---|---|
| `GET /export/topics/json` | All topics |
| `GET /topics/{id}/export/json` | Single topic with articles, checks, and knowledge state |
| `GET /topics/{id}/export/csv` | Check history as CSV |
```bash
python -m app.cli list                 # List all topics
python -m app.cli check "Topic Name"   # Check a single topic
python -m app.cli check-all            # Check all topics
python -m app.cli init "Topic Name"    # Re-initialize the knowledge state
```

Docker (one-line install):
```bash
cd ~/topic-watch   # or your install directory
docker compose pull
docker compose up -d
```

The database is automatically backed up before any schema migration.
Docker (git clone):

```bash
cd topic_watch
git pull
docker compose up -d --build
```

Without Docker:

```bash
cd topic_watch
git pull
pip install .
# Restart your uvicorn process
```

Check the CHANGELOG for breaking changes before upgrading.
No built-in authentication by design (single-user tool).
- Localhost: safe as-is
- Remote: put it behind a reverse proxy with auth (Authelia, Authentik, Nginx basic auth, Caddy basicauth)
Caddy example:

```
topic-watch.example.com {
    basicauth {
        admin $2a$14$YOUR_HASHED_PASSWORD
    }
    reverse_proxy localhost:8000
}
```

Generate the hash with `caddy hash-password`.
Nginx example:

```nginx
server {
    listen 443 ssl;
    server_name topic-watch.example.com;
    auth_basic "Topic Watch";
    auth_basic_user_file /etc/nginx/.htpasswd;

    location / {
        proxy_pass http://localhost:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

Create credentials: `htpasswd -c /etc/nginx/.htpasswd admin`
API keys stored in data/config.yml (gitignored) or env vars. All data stays on your machine; outbound connections only go to RSS feeds, your LLM provider, and notification services.
See SECURITY.md for vulnerability reporting.
Config file not found - Run `mkdir -p data && cp config.example.yml data/config.yml`.
LLM errors / checks failing - Check your API key, make sure the model string has the provider prefix (`openai/gpt-5.4-nano`, not `gpt-5.4-nano`), and check logs with `docker compose logs -f`.
No notifications - Check `notifications.urls` in the config. Use the Test Notification button on the Settings page. Verify the Apprise URL format.
0 articles found - Verify the RSS URL works in a browser. Check the Feed Health page. Some sites block bots.
Topic stuck in "Researching" - Auto-recovers after 15 minutes (the topic is set to Error). Retry from the topic page. Usually an LLM connectivity issue.
Docker container exits - Run `docker compose logs` for details. Check that `data/config.yml` exists and `data/` is writable.
High memory - Lower `max_articles_per_check` or `content_fetch_concurrency`, or increase check intervals.
Cost? ~1,700 tokens per check. GPT-5.4 Nano: ~$0.0003/check. 5 topics, 4x/day = under $0.20/month. Ollama: free.
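The arithmetic behind that estimate, using the per-check price quoted above:

```python
# Rough monthly cost: 5 topics checked 4x/day at ~$0.0003 per check.
cost_per_check = 0.0003
checks_per_month = 5 * 4 * 30        # topics * checks/day * days
monthly_cost = cost_per_check * checks_per_month
print(f"${monthly_cost:.2f}/month")  # comfortably under $0.20
```

Doubling the topics or the check frequency scales the bill linearly, so even aggressive setups stay in pocket-change territory on a cheap model.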
Why not Google Alerts? Google Alerts sends every mention. Topic Watch only notifies when something is actually new.
Data privacy? Everything runs locally. Only outbound traffic is to your LLM provider and notification services.
No API key? Use Ollama or any local LLM. Set `llm.base_url` and put any string for `llm.api_key`.
No RSS feeds? Pick "Automatic" when adding a topic. Uses Google News RSS.
See CONTRIBUTING.md.
