A personal CS2 daily briefing and live alert system, delivered via Telegram.
*Example daily briefing — dark-themed, mobile-first HTML UI*
Overpass runs every morning, collecting CS2 data from multiple sources — match results, upcoming fixtures, news, Reddit highlights, YouTube uploads, podcasts, and Steam announcements. It feeds everything through an LLM editorial layer that writes a structured daily briefing, renders it as a self-contained HTML file, and pushes a Telegram notification with a one-line summary and a link.
It also includes a "This Day in CS" section: a daily historic moment drawn from a hand-curated YAML dataset covering significant moments in Counter-Strike history. Live alerts for configurable triggers (team results, roster moves, etc.) are in progress.
| Source | Method | Status |
|---|---|---|
| HLTV | Playwright scraper (custom) | ⚠️ Brittle (see caveats) |
| Liquipedia | MediaWiki API | ✅ Stable |
| Reddit (r/GlobalOffensive) | JSON endpoint | ✅ Stable |
| YouTube | Data API v3 | ✅ Stable |
| Podcasts | RSS / feedparser | ✅ Stable |
| Steam | ISteamNews API | ✅ Stable |
| Twitter/X | Nitter RSS | Off by default (enable in `config.yaml`) |
| This Day in CS | Curated YAML | ✅ Stable |
- Python 3.12+
- A Telegram bot token and chat ID
- API keys: Gemini (default LLM), YouTube Data API v3, Reddit (no OAuth needed — JSON endpoint only)
- Liquipedia contact info (required by their API terms)
- Playwright browsers installed (`playwright install chromium`) — needed for HLTV scraping
1. Clone the repo:

   ```
   $ git clone https://github.com/lindhammer/overpass.git
   $ cd overpass
   ```

2. Create and activate a virtual environment:

   ```
   $ python -m venv .venv
   $ source .venv/bin/activate   # Linux/macOS
   $ .venv\Scripts\activate      # Windows
   ```

3. Install the package:

   ```
   (.venv) $ pip install -e .
   ```

4. Install the Playwright browser:

   ```
   (.venv) $ playwright install chromium
   ```

5. Copy the config template and fill in your settings:

   ```
   $ cp config.example.yaml config.yaml
   ```

6. Copy the env template and fill in your API keys and tokens:

   ```
   $ cp .env.example .env
   ```

7. Run:

   ```
   (.venv) $ overpass
   # or:
   (.venv) $ python -m overpass.main
   ```

To see what a generated briefing looks like without configuring anything:

```
(.venv) $ overpass --demo
```

This generates a demo briefing at `output/briefings/demo.html` using hardcoded mock data. No API keys or config required.
config.yaml controls your watchlist teams, tracked channels, schedule, and LLM provider. See config.example.yaml for a fully annotated reference — every field is documented there.
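A rough sketch of what such a config might contain (the field names below are illustrative guesses; `config.example.yaml` is the authoritative schema):

```yaml
# Hypothetical shape; see config.example.yaml for the real field names.
watchlist:
  teams: ["Natus Vincere", "FaZe"]
youtube:
  channels: ["@HLTVorg"]
schedule:
  briefing_time: "07:30"
llm:
  provider: gemini
```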
Environment variables (set in `.env`; all required except where noted):

```
GEMINI_API_KEY
YOUTUBE_API_KEY
TELEGRAM_BOT_TOKEN
TELEGRAM_CHAT_ID
ANTHROPIC_API_KEY   # optional — Claude support is scaffolded but not yet selectable as a provider
```
Overpass is a three-layer pipeline: Collectors pull raw data from external sources in parallel; the Editorial layer passes collected items through an LLM to produce structured, readable summaries; the Delivery layer renders a self-contained HTML briefing and sends a Telegram notification. The LLM layer is provider-agnostic and defaults to Gemini (free tier). Claude support is scaffolded but not yet selectable as a provider.
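The three layers can be sketched roughly as follows. The collector names, the summarizer, and the renderer here are invented placeholders for illustration; the real implementation differs (and the Telegram push is omitted).

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical collectors; each returns a list of raw items.
def collect_matches():
    return [{"source": "liquipedia", "title": "Match A"}]

def collect_reddit():
    return [{"source": "reddit", "title": "Highlight B"}]

def summarize(items):
    # Stand-in for the LLM editorial layer, which turns raw items
    # into a structured briefing.
    return {"headline": f"{len(items)} items today", "items": items}

def render_html(briefing):
    # Stand-in for the delivery layer's self-contained HTML render.
    rows = "".join(f"<li>{i['title']}</li>" for i in briefing["items"])
    return (f"<html><body><h1>{briefing['headline']}</h1>"
            f"<ul>{rows}</ul></body></html>")

# Layer 1: collectors run in parallel.
with ThreadPoolExecutor() as pool:
    batches = pool.map(lambda f: f(), [collect_matches, collect_reddit])
items = [item for batch in batches for item in batch]

# Layer 2 (editorial) feeds layer 3 (delivery).
html = render_html(summarize(items))
```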
Docker Compose runs the private overpass-worker service alongside public Caddy static hosting. The worker generates briefings into a shared output volume, and Caddy serves that volume over HTTPS.
For public Telegram links, set web_base_url in config.yaml and replace the example domain in deploy/Caddyfile. Use config.example.yaml as the configuration reference.
Key commands:

```
docker compose build overpass-worker
docker compose up -d
docker compose logs -f overpass-worker
docker compose run --rm overpass-worker overpass-worker --run-now
```

See docs/DEPLOYMENT.md for full setup, DNS/router requirements, operations, verification, and troubleshooting.
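A minimal sketch of the two-service layout described above (the `caddy` service name, image tag, and mount paths are assumptions; the repo's actual compose file is authoritative):

```yaml
services:
  overpass-worker:
    build: .
    env_file: .env
    volumes:
      - briefings:/app/output          # worker writes briefings here
  caddy:
    image: caddy:2
    ports: ["80:80", "443:443"]
    volumes:
      - ./deploy/Caddyfile:/etc/caddy/Caddyfile
      - briefings:/srv/briefings:ro    # Caddy serves the same volume read-only
volumes:
  briefings:
```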
Working:
- Daily digest pipeline end-to-end
- HTML briefing generation
- Telegram delivery
- Liquipedia, Reddit, YouTube, Steam, Podcast, and This Day in CS collectors
- HLTV scraper (brittle — see caveats)
- Docker Compose homeserver deployment
In progress / coming:
- Live alerts
- Briefing archive UI
- Historical stats for LLM context
- Twitter/X integration (evaluating options)
Not started:
- Claude as alternative LLM provider (interface exists, not wired up)
⚠️ HLTV scraping is fragile. The HLTV collector uses Playwright and may break at any time due to anti-scrape measures, rate limiting, or layout changes. When HLTV is unavailable, Liquipedia is used as an automatic fallback for match data.
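In spirit, the fallback behaves like the sketch below; the function names are illustrative, not the project's actual API.

```python
import logging

def fetch_matches(scrape_hltv, fetch_liquipedia):
    """Try the primary HLTV scraper; fall back to Liquipedia on any failure."""
    try:
        return scrape_hltv(), "hltv"
    except Exception as exc:  # anti-scrape blocks, rate limits, layout changes
        logging.warning("HLTV scrape failed (%s); using Liquipedia", exc)
        return fetch_liquipedia(), "liquipedia"

# Usage: simulate an HLTV outage.
def broken_hltv():
    raise RuntimeError("blocked by anti-scrape measures")

matches, source = fetch_matches(broken_hltv, lambda: [{"match": "A vs B"}])
```

Because the fallback is wrapped around the whole scrape call, a mid-scrape Playwright crash degrades to Liquipedia data rather than failing the briefing.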
⚠️ Nitter (Twitter/X) availability is unreliable. The social collector will fail gracefully if no reachable Nitter instance is configured — it won't take the rest of the pipeline down. It is disabled by default; enable it in config.yaml.
This tool is built for personal use. Please be respectful of rate limits and API terms of service.
Contributions are welcome. For anything beyond small fixes, please open an issue first so we can discuss the approach. This is a personal hobby project — response times may vary.
