Local AI triage for Tesla Sentry events. Watches your SentryClips folder, sends each event's keyframes to a vision-language model, and classifies the event so you only get notified about the ones that actually matter.
Sentry mode is great in theory and exhausting in practice. A busy parking lot generates dozens of events per day; the vast majority are leaves, cats, or cars driving past. Existing OSS for Sentry footage handles either viewing (Sentry Studio, exportdash.cam) or search (SentrySearch — 4k★ in two months). The triage slot — "ignore the noise, surface the real events, give me a daily highlight reel" — is empty.
sentrytriage fills it.
```
TeslaUSB / USB drive
        │
        ▼
SentryClips/
├── 2026-05-08_14-22-31/
│   ├── front.mp4
│   ├── back.mp4
│   ├── left_repeater.mp4   (and pillars on HW4)
│   └── ...
        │
        ▼  folder watcher
sentrytriage daemon
        │
        ├── ffmpeg keyframe extract (4 frames × 2-6 cams)
        ├── VLM classify (gpt-4o-mini / Qwen2.5-VL via Ollama / Gemini Flash)
        │       → {interesting: bool, category, subjects, caption, confidence}
        ├── persist to SQLite
        └── (v0.2) suppress boring → BoringClips/, build daily highlight reel, push notify
```
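The keyframe-extract step in the diagram is simple to approximate. Here is a minimal sketch of evenly-spaced frame sampling via imageio; the function name and exact logic are illustrative, not the project's real sampler:

```python
import imageio.v2 as imageio
import numpy as np

def sample_keyframes(video_path: str, n_frames: int = 4) -> list[np.ndarray]:
    """Return n_frames RGB frames spaced evenly across the clip."""
    reader = imageio.get_reader(video_path)   # imageio-ffmpeg ships its own ffmpeg binary
    total = reader.count_frames()             # one decode pass to get an exact count
    step = (total - 1) / max(n_frames - 1, 1)
    frames = [reader.get_data(round(i * step)) for i in range(n_frames)]
    reader.close()
    return frames
```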
The classification prompt and Pydantic schema are designed for low false-positive rates: defaults to "not interesting" unless there's a real reason. Every threshold and the prompt itself live in editable files (`prompts/classify.md`, `config.example.toml`) so you can tune to your driveway.
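For reference, the verdict shape shown in the diagram maps naturally onto a Pydantic model. A minimal sketch (field names follow the verdict shape above; the example category values are illustrative, not the project's actual taxonomy):

```python
from pydantic import BaseModel, Field

class EventClassification(BaseModel):
    # Defaults encode the "not interesting unless proven otherwise" bias.
    interesting: bool = False
    category: str = "other"            # illustrative; the real taxonomy lives in prompts/classify.md
    subjects: list[str] = Field(default_factory=list)   # e.g. ["person near driver door"]
    caption: str = ""                  # one-line summary, also used for reel overlays
    confidence: float = Field(0.0, ge=0.0, le=1.0)
```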
What works today:
- 4 source plugins (added in v0.4 via `tesla-clip-tools` v0.2): `--source-type tesla|wyze|reolink|unifi`. The triage engine is unchanged across sources; only the folder-layout reader differs.
- Walk a `SentryClips/` directory and parse the canonical TeslaCam folder layout (4-cam HW3 + 6-cam HW4)
- Walk a Wyze SD-card layout (`<YYYYMMDD>/<HH>/<MM>.mp4`)
- Walk a flat Reolink-export directory (3 filename patterns supported)
- Walk a UniFi Protect export (flat or date-partitioned, with optional event-type tag)
- Extract evenly-spaced keyframes per camera via imageio (bundles ffmpeg, no system dep)
- Two VLM backends: OpenAI (gpt-4o-mini default) and Ollama (Qwen2.5-VL local — no API costs)
- `triage classify <event-folder>` — one-off classify (Tesla layout), prints the JSON verdict
- `triage watch <root> --source-type wyze --notify pushover` — polling daemon that classifies new events, persists to SQLite, and optionally pushes to Pushover or Telegram on every interesting event
- `triage reel` — concat all interesting events from the last 24h into a single reel mp4 with caption overlays
- `triage suppress` — move boring events (high-confidence `interesting: false`) into a sibling `BoringClips/` folder. Never deletes.
- `triage notify-test` — verify your notifier credentials before deploying
- `triage demo` (v0.6) — `triage demo-seed` populates the local SQLite with ~60 deterministic synthetic events; `triage demo` seeds-and-serves the FastAPI dashboard at `http://127.0.0.1:8001/`, opens your browser, and lets you click through interesting/boring filters and per-event drilldowns. Lets anyone (no Tesla, no API keys, no real cameras) see what triage looks like in 30 seconds.
- Web dashboard (v0.6) — `sentrytriage.web:app` (FastAPI + Jinja) renders a clean, dark dashboard with header stats, category histograms, recent-events tables, and per-event detail pages. JSON API at `/api/categories` and `/api/events`. Reads from the same SQLite the daemon writes, so it's live during `triage watch`.
- Thumbs feedback (v0.7) — every event detail page has 👍 / 👎 buttons that POST to `/events/{id}/feedback`. Feedback lands in a sibling `Feedback` table (the VLM's verdict is never overwritten), so you keep a clean record of where you and the model disagreed. The dashboard shows your agreement rate ("Agreement: 86%") and per-class override counts; the `/api/feedback/stats` endpoint exposes the same data for scripts.
- Events-per-day chart (v0.7) — inline SVG bar chart on the dashboard covering the last 14 days; no JS chart dependency, prints fine, hover for exact counts.
- Overrides export (v0.8) — the `/overrides` HTML page lists every event where you disagreed with the VLM; `/api/overrides` returns the same as JSON; `triage export-overrides --out training-data.jsonl` dumps a clean training-data file (one JSON object per override, with caption, subjects, source folder, and both verdicts) ready for the v0.9 prompt-tuning workflow. The dashboard has a "Recent overrides" panel with the 5 most recent disagreements.
- Prompt tuning (v0.9) — `triage tune-prompt` reads your overrides and appends them as few-shot "you got this wrong" examples under a fresh `## Examples from your overrides` section in the prompt. By default it writes to `prompts/classify.tuned.md` so you can diff first; `--apply` overwrites `prompts/classify.md` directly. Idempotent — re-running after more overrides replaces the previous section instead of duplicating it.
- A/B evaluation (v0.10) — `triage evaluate --backend mock` splits your overrides into a TRAIN set (used to tune the prompt) and a held-out TEST set (used only for evaluation). Reports baseline-vs-tuned accuracy on each, plus a per-category breakdown. The included mock classifier simulates a VLM that learns from few-shot examples in the prompt, so you can demo the entire feedback loop with no API key. The split is deterministic (`--random-seed 42`), and the CLI warns when the test set is too small to be meaningful — this keeps a "100% accuracy" headline from being a misleading overfit.
- Real `--backend openai|ollama` on `triage evaluate`. Until v0.10 the evaluate command only ran against the deterministic mock classifier — now it can also score the baseline-vs-tuned prompt against the actual VLM you'll deploy. The flow is hybrid, keyframes-first with a text-only fallback (the control flow is sketched just after this list):
  - If the event's source folder still exists on disk, the classifier samples keyframes (same code path as `triage classify`) and calls the VLM with images plus per-image captions. This is the apples-to-apples comparison.
  - If the folder is missing (e.g. you ran `triage demo-seed` with synthetic events, or you wiped `SentryClips/` since classification), the classifier falls back to a text-only call: it hands the VLM the stored caption + subjects + category and asks it to re-derive the verdict using the same `EventClassification` schema. Useful for prompt-tuning iteration without paying to re-encode every video.
  - If both paths fail (no folder, no caption, or a transient API error), the classifier returns `None` with a `warn[openai]: ...` line on stdout, and `compare_prompts` counts that event as a miss for both prompts (so the delta isn't poisoned).
- The CLI help text on `triage evaluate --backend` now lists all three backends and what each one needs.
- Embedded video playback (v0.12) — the per-event detail page now embeds an HTML5 `<video>` element per cam (with a lazy thumbnail strip up top, generated once via `imageio` and cached in a `.thumbs/` sibling dir). Two new routes — `GET /clips/{event_id}/{filename}` and `GET /clips/{event_id}/{filename}/thumb.jpg` — stream files via `FileResponse`, and both reject any path that resolves outside `SENTRYTRIAGE_CLIPS_ROOT` (default `~/Tesla/SentryClips`) with a 403 (the guard is sketched after this list). Demo mode keeps working: synthetic events with non-existent folders render a graceful "no playable videos found" placeholder instead of broken `<video>` tags.
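The hybrid evaluate fallback above reduces to a small decision function. A sketch of the control flow — `classify_frames` and `classify_text_only` are hypothetical stand-ins for the real VLM code paths, and the `Event` shape is assumed:

```python
from dataclasses import dataclass
from pathlib import Path

@dataclass
class Event:
    source_folder: str
    caption: str = ""

def classify_frames(folder: Path, prompt: str) -> dict | None:
    """Stand-in for the real keyframes-plus-images VLM call."""
    return {"interesting": False, "path": "frames"}

def classify_text_only(event: Event, prompt: str) -> dict | None:
    """Stand-in for the text-only re-derivation call."""
    return {"interesting": False, "path": "text"}

def classify_hybrid(event: Event, prompt: str) -> dict | None:
    """Keyframes-first, text-only fallback, None on hard failure."""
    try:
        folder = Path(event.source_folder)
        if folder.is_dir():
            return classify_frames(folder, prompt)    # apples-to-apples with `triage classify`
        if event.caption:
            return classify_text_only(event, prompt)  # folder gone: reuse stored caption/subjects
    except Exception as exc:                          # e.g. a transient API error
        print(f"warn[openai]: {exc}")
    return None  # counted as a miss for BOTH prompts so the delta isn't poisoned
```

The v0.12 clip-route traversal guard is also worth showing, because it's easy to get wrong. A minimal sketch under FastAPI, assuming only the route shape and env var named in the bullet above; the resolution logic is illustrative:

```python
import os
from pathlib import Path
from fastapi import FastAPI, HTTPException
from fastapi.responses import FileResponse

app = FastAPI()
CLIPS_ROOT = Path(os.environ.get("SENTRYTRIAGE_CLIPS_ROOT",
                                 Path.home() / "Tesla" / "SentryClips")).resolve()

@app.get("/clips/{event_id}/{filename}")
def stream_clip(event_id: str, filename: str) -> FileResponse:
    # Resolve symlinks and ".." segments BEFORE comparing against the root.
    candidate = (CLIPS_ROOT / event_id / filename).resolve()
    if not candidate.is_relative_to(CLIPS_ROOT):
        raise HTTPException(status_code=403, detail="path escapes clips root")
    if not candidate.is_file():
        raise HTTPException(status_code=404)
    return FileResponse(candidate)
```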
On the roadmap:

- Anthropic + Gemini VLM backends
- Discord + email notifiers
- SEI metadata-aware triage (suppress events recorded while moving, etc.)
- Multi-source dispatch: `--sources tesla,wyze --roots /Tesla/SentryClips,/wyze/SD` so one daemon triages everything
Try the demo (no Tesla, no API keys):

```sh
uv sync
uv run triage demo
```

This seeds the local SQLite with ~60 synthetic Sentry events (a mix of interesting / boring across all categories) and opens http://127.0.0.1:8001/ in your browser. The data is deterministic — the same seed produces the same dashboard each time, so you can take screenshots that won't drift. The launcher scripts at the workspace root (`start-triage-demo.ps1`, `start-triage-demo.sh`) wrap this for one-double-click setup.
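Because the demo serves the same JSON API as the real daemon, you can script against it while it runs. A minimal sketch using the endpoints listed earlier; the response and payload field names (`id`, `verdict`) are guesses, so check the `/events/{id}/feedback` handler for the real shapes:

```python
# Assumes `uv run triage demo` is serving on 127.0.0.1:8001.
import json
import urllib.request

BASE = "http://127.0.0.1:8001"

with urllib.request.urlopen(f"{BASE}/api/events") as resp:
    events = json.load(resp)          # assumed: a JSON list of event objects
print(f"{len(events)} events seeded")

# POST a thumbs-down on the first event. The endpoint is from the README;
# the payload shape below is an assumption.
event_id = events[0]["id"]
req = urllib.request.Request(
    f"{BASE}/events/{event_id}/feedback",
    data=json.dumps({"verdict": "down"}).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
urllib.request.urlopen(req)
```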
Full setup:

```sh
# Requires Python 3.12+ and ffmpeg on your PATH.
git clone https://github.com/Raymondriter/sentrytriage.git
cd sentrytriage
uv sync
cp config.example.toml config.toml  # edit to taste

# --- Hosted (OpenAI, default) ---
export OPENAI_API_KEY=sk-...
uv run triage classify "/path/to/SentryClips/2026-05-08_14-22-31"
uv run triage watch "/path/to/SentryClips" --notify pushover

# --- Local (Ollama, free) ---
ollama pull qwen2.5vl:7b
uv run triage watch "/path/to/SentryClips" --backend ollama --model qwen2.5vl:7b

# --- Daily reel + suppression ---
uv run triage reel --since-hours 24 --duration-seconds 60
uv run triage suppress --threshold 0.7

# --- Test notifier credentials ---
export PUSHOVER_TOKEN=... PUSHOVER_USER=...
uv run triage notify-test --backend pushover
```

Cost estimate (gpt-4o-mini, 4 keyframes × 2 cams = 8 images per event): roughly $0.001-0.003 per event. A busy day of 100 events is ~$0.10-0.30. The Ollama path (Qwen2.5-VL 7B on a Mac M2+) is free.
To generate synthetic test fixtures:

```sh
uv sync --extra fixture
uv run python tools/generate_fixture.py --root tests/fixtures/SentryClips
uv run triage classify "tests/fixtures/SentryClips/2026-05-08_14-22-31"
```

The synthetic clips exercise the full pipeline (sampler → VLM → store → reel), but the VLM verdicts won't be meaningful — the frames are color-coded animations, not real Sentry scenes. See `tests/fixtures/README.md`.
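The generator itself is easy to approximate if you want to adapt it to another camera layout: write a few seconds of color-coded frames per cam with imageio's ffmpeg writer. A sketch under the TeslaCam layout shown earlier; the real `tools/generate_fixture.py` may differ in details:

```python
# Sketch: write a tiny synthetic event folder in the TeslaCam layout.
from pathlib import Path
import imageio.v2 as imageio
import numpy as np

def write_synthetic_clip(path: Path, color: tuple[int, int, int],
                         seconds: int = 2, fps: int = 10) -> None:
    writer = imageio.get_writer(path, fps=fps)   # bundled ffmpeg, no system dep
    for i in range(seconds * fps):
        frame = np.full((240, 320, 3), color, dtype=np.uint8)
        frame[:, : (i * 320) // (seconds * fps)] //= 2   # moving edge = fake motion
        writer.append_data(frame)
    writer.close()

event = Path("tests/fixtures/SentryClips/2026-05-08_14-22-31")
event.mkdir(parents=True, exist_ok=True)
for cam, color in [("front", (200, 60, 60)), ("back", (60, 200, 60))]:
    write_synthetic_clip(event / f"{cam}.mp4", color)
```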
Design principles:

- Default to "not interesting". The whole point is to suppress noise. Tune the prompt down, not up.
- Operate on output `.mp4` files only. This is intentionally decoupled from the live Tesla / Fleet API surface, so Tesla can't break it with a firmware push. The TeslaCam folder layout has been stable for 6+ years.
- Source abstraction (`sources/base.py`). v0.2 adds `sources/wyze.py`, `sources/reolink.py`, and `sources/unifi.py`, so the same triage engine works for any IP camera output. (A sketch of the abstraction follows this list.)
- VLM backend abstraction (`vlm/base.py`). Swap OpenAI for Ollama / Gemini / Anthropic without touching the daemon.
- Structured output via Pydantic. Every verdict has the same shape; the model never returns prose.
- Never auto-delete. Suppression only moves clips between folders.
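The source abstraction keeps per-camera layout quirks in one place. A sketch of what `sources/base.py` plus a Wyze walker could look like; beyond the `sources/base.py` filename and the `<YYYYMMDD>/<HH>/<MM>.mp4` layout from the feature list, every name here is an assumption:

```python
from dataclasses import dataclass
from pathlib import Path
from typing import Iterator, Protocol

@dataclass
class SourceEvent:
    event_id: str
    videos: list[Path]   # one mp4 per camera (Tesla) or per minute (Wyze)

class Source(Protocol):
    """A source plugin answers exactly one question: what events exist under root?"""
    def walk(self, root: Path) -> Iterator[SourceEvent]: ...

class WyzeSource:
    """Wyze SD card: <YYYYMMDD>/<HH>/<MM>.mp4 -- one file per minute."""
    def walk(self, root: Path) -> Iterator[SourceEvent]:
        pattern = "[0-9]" * 8 + "/[0-9][0-9]/[0-9][0-9].mp4"
        for mp4 in sorted(root.glob(pattern)):
            day, hour = mp4.parent.parent.name, mp4.parent.name
            yield SourceEvent(event_id=f"{day}_{hour}-{mp4.stem}", videos=[mp4])
```

The triage engine only ever sees `SourceEvent`s, which is why adding Reolink or UniFi support touches nothing but a new walker.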
| Tool | What it does | Composes with sentrytriage? |
|---|---|---|
| SentrySearch | Natural-language search across Sentry library | Yes — they're complementary; triage filters, search retrieves |
| SentryBlur | Single-clip face / plate redaction | Yes — pipe interesting=true clips to SentryBlur before sharing |
| Sentry Studio | Cross-platform 6-cam viewer with SEI dashboard | Yes — Studio is the viewer; triage is the notifier |
| exportdash.cam | Browser-only WebCodecs export | Yes — different layer |
This project does not compete with any of them; it sits one layer above and routes attention.
Open issues for false positives ("this was just a leaf") and false negatives ("this should have been flagged interesting"). The prompt in `prompts/classify.md` is meant to be edited, and PRs that add `sources/*` plugins for other cameras (Wyze, Reolink, UniFi Protect, Ring) are very welcome.
The dashboard rendered against synthetic demo data (`triage demo`):

For an asciinema demo of the full classify → thumb → tune → evaluate loop, see `docs/asciinema/demo.cast`.
See CHANGELOG.md. Versions follow Keep a Changelog and the project uses SemVer.
MIT. See LICENSE.
GitHub Actions runs ruff + pytest on Python 3.12 and 3.13 on every push and PR. See `.github/workflows/ci.yml`. Until `tesla-clip-tools` is published to PyPI, the standalone CI strips the `[tool.uv.sources]` table and resolves it as a regular dependency; the workspace-level monorepo CI at `C:\Dev\tesla\.github\workflows\ci.yml` keeps using the path-editable sibling.


