Context
We're moving from a free/sponsored Sentry plan to the Team plan (~$300/yr). Current usage over the last billing period: 6k errors out of 50k quota. Every other included feature sits at zero usage:
| Feature | Usage | Included |
|---|---|---|
| Errors | 6k | 50k |
| Logs | 0 | 5 TB |
| Performance Units | 0 | 100k |
| Replays | 0 | 500 |
| Cron Monitors | 0 | 10 |
| Uptime Monitors | 0 | 25 |
| Attachments | 0 | 1 GB |
| Continuous Profile Hours | 0 | 750 hrs |
We should use what we're paying for.
1. Cron Monitors (high value, low effort)
What: Register the two background tickers as Sentry Cron Monitors so Sentry alerts us automatically when they miss a window or fail.
Where:
cmd/app/main.go:48 — Job scheduler (30s ticker, startJobScheduler)
cmd/app/main.go:163 — Health monitor (5min ticker, startHealthMonitoring)
How: Wrap each tick in a check-in pair via `sentry.CaptureCheckIn` (in-progress at start, ok/error at completion), with a monitor slug and `MonitorConfig`. ~20 lines each. Sentry will track the expected schedule, alert on misses, and show execution duration.
Effort: ~30 min
2. Frontend error tracking (high value, medium effort)
What: Add the Sentry browser JS SDK to capture frontend errors, unhandled promise rejections, and console errors from the dashboard and other pages.
Where:
dashboard.html
settings.html
web/static/js/ — currently zero Sentry integration on the frontend
How: Add `@sentry/browser` via CDN script tag, initialise with DSN + environment. No CSP changes needed: the allowlist in `internal/api/middleware.go:241-242` already permits `*.sentry.io` and `*.ingest.sentry.io`.
Effort: ~1 hr
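A sketch of the wiring, assuming the SDK is loaded from Sentry's public CDN; the pinned version and the DSN below are placeholders:

```html
<!-- Load the browser SDK before any app scripts so early errors are caught -->
<script
  src="https://browser.sentry-cdn.com/7.120.0/bundle.min.js"
  crossorigin="anonymous"
></script>
<script>
  Sentry.init({
    dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // placeholder DSN
    environment: "production", // e.g. injected via a template variable
  });
</script>
```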
3. Session Replays (medium value, trivial effort once #2 is done)
What: Enable session replays to see exactly what users did before hitting an error.
Where: Frontend Sentry init (from item #2 above).
How: Add replaysSessionSampleRate: 0.1 and replaysOnErrorSampleRate: 1.0 to the browser SDK config. Requires the @sentry/replay integration.
Effort: ~15 min (after frontend SDK is in place)
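The additions to the `Sentry.init` call from item #2 would look roughly like this (v7-style `new Sentry.Replay()` shown, matching the `@sentry/replay` package; in SDK v8 the integration was renamed `Sentry.replayIntegration()`):

```js
Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // placeholder DSN
  environment: "production",
  integrations: [new Sentry.Replay()], // needs the replay bundle loaded
  replaysSessionSampleRate: 0.1, // record 10% of ordinary sessions
  replaysOnErrorSampleRate: 1.0, // record every session that hits an error
});
```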
4. Uptime Monitors (medium value, zero code effort)
What: Configure Sentry to monitor our production endpoints externally.
Where: Sentry dashboard config (no code changes).
Candidates:
GET /health — basic liveness
GET / — homepage availability
GET /dashboard — authenticated app availability
How: Add via Sentry UI → Uptime Monitors. Set check intervals and alert thresholds.
Effort: ~10 min in Sentry UI
5. Increase tracing visibility (low effort, already wired)
What: We already set `TracesSampleRate: 0.1` in production and make `StartSpan` calls across the job manager (`internal/jobs/manager.go`), yet performance units show 0 used. Either the spans aren't completing properly or the sample rate is too low to register.
Where:
cmd/app/main.go:445 — TracesSampleRate config
internal/jobs/manager.go:381,468,527,599,687,1047 — existing StartSpan calls
How: Investigate why perf units show as 0 despite configured tracing. Verify spans are being flushed. Consider bumping to 0.2 in production if volume permits (we have 100k units included).
Effort: ~30 min to investigate and fix
6. Structured log ingestion (lower priority, larger effort)
What: Pipe application logs into Sentry Logs so errors and log context live in one UI.
Where: cmd/app/main.go — zerolog is the current logger. Sentry Go SDK supports log integration.
How: Add a Sentry log hook to zerolog that forwards Warn+ level entries. Correlates automatically with error events.
Effort: ~1-2 hrs. Lowest priority: 5 TB is included, but logs already work fine in the Fly dashboard.
Priority order
- Cron Monitors — highest signal-to-effort ratio, directly replaces hand-rolled stuck-job detection
- Uptime Monitors — zero code, immediate value
- Frontend SDK + Replays — unlocks visibility into user-facing issues we currently can't see
- Tracing investigation — we wired it up but it's not registering, worth fixing
- Log ingestion — nice-to-have, lowest priority