feat: add webhook and Slack notification channels#74
Conversation
Phase 2 of the notification roadmap. Adds per-monitor notification channel configuration supporting email, outgoing webhooks (HMAC-SHA256 signed with retry logic), and Slack (OAuth v2 with encrypted bot token storage). Includes delivery log with 30-day retention, tier gating (webhooks/Slack require Pro or Power), backwards compatibility with existing emailEnabled boolean, and full test coverage. https://claude.ai/code/session_01KuoUHkKehAzWNLszLvk4Js
📝 WalkthroughWalkthroughImplements multi-channel notification delivery system with support for email, webhook, and Slack channels. Adds Slack OAuth integration, delivery logging with retry mechanisms, webhook secret encryption, and corresponding client UI components for channel management and delivery log viewing. Changes
Sequence Diagram(s)sequenceDiagram
participant User
participant Client
participant Server
participant SlackOAuth as Slack OAuth
participant Database
User->>Client: Click "Connect Slack"
Client->>Server: GET /api/integrations/slack/install
Server->>Server: Sign OAuth state (HMAC)
Server-->>Client: Redirect to Slack OAuth
Client->>SlackOAuth: Redirect with state
User->>SlackOAuth: Authorize app
SlackOAuth->>Server: POST callback with code & state
Server->>Server: Verify state signature
Server->>SlackOAuth: Exchange code for token
SlackOAuth-->>Server: Return access_token
Server->>Server: Encrypt bot token (AES-256-GCM)
Server->>Database: Store SlackConnection (encrypted token)
Server-->>Client: Redirect to success
Client->>User: Display connected status
sequenceDiagram
participant Monitor as Monitor Service
participant NotifSvc as Notification Service
participant Webhook as Webhook Delivery
participant Slack as Slack Delivery
participant Email as Email Service
participant Database
Monitor->>NotifSvc: Change detected
NotifSvc->>Database: Query enabled notification channels
Database-->>NotifSvc: [email, webhook, slack]
par Multi-Channel Delivery
NotifSvc->>Webhook: deliver(monitor, change)
Webhook->>Webhook: Build & sign payload
Webhook->>Webhook: SSRF check on URL
Webhook-->>NotifSvc: {success, statusCode}
NotifSvc->>Database: Log webhook delivery
and
NotifSvc->>Slack: deliver(monitor, change, channelId, token)
Slack->>Slack: Build Block Kit message
Slack->>Slack: Decrypt bot token
Slack-->>NotifSvc: {success, slackTs}
NotifSvc->>Database: Log Slack delivery
and
NotifSvc->>Email: send(monitor, change)
Email-->>NotifSvc: {success}
NotifSvc->>Database: Log email delivery
end
NotifSvc-->>Monitor: All channels processed
sequenceDiagram
participant Scheduler as Scheduler Service
participant Database
participant Webhook as Webhook Delivery
Scheduler->>Scheduler: Every minute: check retries
Scheduler->>Database: getPendingWebhookRetries()
Database-->>Scheduler: DeliveryLogEntry[]
loop For each pending retry
Scheduler->>Scheduler: Apply backoff (1→5s, 2→30s, 3→120s)
Scheduler->>Scheduler: Validate monitor, channel, config
Scheduler->>Webhook: deliver(monitor, change, config)
alt Success
Webhook-->>Scheduler: {success: true, statusCode}
Scheduler->>Database: Update log (status=success, attempt++)
else Retry < 3
Webhook-->>Scheduler: {success: false, error}
Scheduler->>Database: Update log (status=pending, attempt++)
else Final Failure
Webhook-->>Scheduler: {success: false, error}
Scheduler->>Database: Update log (status=failed)
end
end
Estimated code review effort🎯 5 (Critical) | ⏱️ ~120 minutes 🚥 Pre-merge checks | ✅ 2 | ❌ 1❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings. ✨ Finishing Touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out. Comment |
There was a problem hiding this comment.
Actionable comments posted: 36
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.env.example:
- Around line 7-9: Normalize the SLACK_* entries in .env.example by removing the
spaced assignments and placing comments on separate lines so dotenv parsers
won't misread them; update SLACK_CLIENT_ID, SLACK_CLIENT_SECRET, and
SLACK_ENCRYPTION_KEY to use the standard VAR= format (no spaces around =) and
move explanatory notes or examples (like the AES-256-GCM generation command) to
a preceding or trailing comment line rather than inline after the assignment.
In `@client/src/components/DeliveryLog.tsx`:
- Around line 19-23: The statusColors mapping currently hard-codes palette
utility classes; replace those values in the statusColors Record<string,string>
with semantic token classes (e.g., use tokens like text-success, bg-success/10,
border-success/20 or text-error, bg-error/10, border-error/20, and text-warning,
bg-warning/10, border-warning/20—or your project's equivalents such as
text-primary/bg-secondary/text-muted-foreground) so the UI uses design-system
tokens instead of explicit green/red/yellow utilities; update the three entries
(success, failed, pending) inside statusColors and ensure any consumers of
statusColors (e.g., in DeliveryLog component render) still receive the same
keys.
- Around line 48-55: The icon-only channel filter Buttons in DeliveryLog.tsx
lack accessible names; update the Button (the element rendering inside the map
using variables ch, channelFilter, setChannelFilter and Icon) to include an
accessible name and state—for example add an aria-label built from the channel
value (e.g. "Filter by {ch}") and include aria-pressed={channelFilter === ch}
(or an equivalent aria attribute) so screen readers can identify and perceive
the toggle state; ensure you still call setChannelFilter(ch) in onClick.
In `@client/src/components/MonitorCard.tsx`:
- Around line 26-29: The per-card call to useNotificationChannels(monitor.id) in
MonitorCard.tsx causes N+1 requests; remove that hook from MonitorCard and
instead read precomputed channel flags from the monitor list or accept a batched
summary prop. Update MonitorCard to use monitor.notificationSummary (or props
like monitor.hasWebhook / monitor.hasSlack) rather than computing
hasWebhook/hasSlack locally, and implement the batching at the list level (e.g.,
a new useNotificationChannelsForMonitors that fetches summaries for all monitor
IDs and attaches them to each monitor before rendering) so only one network
request (or the monitor payload) is used for the entire grid.
In `@client/src/components/NotificationChannelsPanel.tsx`:
- Around line 105-108: Several interactive controls lack accessible names: add
explicit accessible labels to each unlabeled control (the Switch for email and
other channels, the secret reveal IconButton, and the Slack channel Select
trigger). For each Switch (e.g., the Switch using emailChannel and calling
handleToggleChannel("email", ...)), add aria-label or aria-labelledby that
describes the action (e.g., "Enable email notifications"); for the secret reveal
button component add aria-label like "Reveal secret" and ensure any state is
reflected with aria-pressed or aria-expanded as appropriate; for the Slack
select trigger add aria-label/aria-labelledby such as "Select Slack channel" or
attach a visible <label> with an id and reference it via aria-labelledby. Use
the same pattern for other instances referenced in this diff (lines around
125-131, 163-165, 190-200, 236-243) so every interactive element has a
programmatic accessible name.
- Around line 25-27: NotificationChannelsPanel currently derives isFreeTier from
the possibly-stale useAuth() cache, causing UI tier-gating to drift from server
state; instead, fetch a fresh tier from the server on mount or when the panel
opens (e.g., call the same server endpoint used for route enforcement or a
dedicated /me or /billing/plan API) and derive isFreeTier from that response;
update the component to use the fresh-tier value (and/or revalidate the auth
cache via the existing auth revalidation helper if available) rather than (user
as any)?.tier so the UI reflects the authoritative server-side tier.
- Around line 66-75: The handler handleRevealSecret calls
revealSecret.mutateAsync() without catching rejections; wrap the await in a
try/catch inside handleRevealSecret to handle errors from
revealSecret.mutateAsync(), e.g., catch the error and update UI state or show an
error notification instead of allowing an unhandled rejection, and ensure state
cleanup (setShowSecret/setRevealedSecret) is done appropriately on both success
and failure.
In `@client/src/hooks/use-notification-channels.ts`:
- Around line 113-115: The code appends the raw channel value to the URL
(variable url) before fetching; to fix, encode the channel when building the
query string (e.g., use encodeURIComponent(channel) or construct the query via
URLSearchParams) so special characters are escaped and the resulting URL remains
valid; update the logic around buildUrl(api.monitors.channels.deliveries.path, {
id: monitorId }) where channel is appended to ensure the query uses the encoded
channel value.
In `@client/src/hooks/use-slack.ts`:
- Around line 63-65: The onError handler in use-slack.ts assumes err.message
exists; update the onError function passed to the mutation so it safely extracts
a message (e.g., if err is an Error use err.message, otherwise coerce with
String(err) or JSON.stringify when appropriate) and pass that safeMessage into
the toast call (reference the onError callback and the toast invocation in
use-slack.ts to locate and update the logic).
In `@server/middleware/csrf.ts`:
- Line 14: Remove the unnecessary Slack callback exemption by deleting
'/api/integrations/slack/callback' from the EXEMPT_PATHS Set constant so the set
only contains ['/api/stripe/webhook', '/api/webhooks/resend']; update the
declaration of EXEMPT_PATHS (the Set constant) accordingly to keep CSRF
exemptions minimal and justified.
In `@server/routes.ts`:
- Around line 680-682: Replace the ad-hoc parsing of req.query.limit and
req.query.channel with a Zod-validated query schema (e.g.,
deliveriesQuerySchema) and parse/validate req.query before using values; define
the schema to coerce/parse limit to a number with default 50 and clamp/max 200
and channel as an optional string (or constrained set if channels are
enumerated), then use the parsed values (limit, channel) when calling
storage.getDeliveryLog(id, limit, channel) so all query params are centrally
validated per guidelines.
- Around line 804-809: The Slack disconnect handler leaves stale entries in the
in-memory slackChannelsCache and the map is unbounded; update the DELETE route
(the app.delete for api.integrations.slack.disconnect) to remove any cached
channels for the user after calling
storage.deleteSlackChannelsForUser/deleteSlackConnection (e.g., evict
slackChannelsCache entries keyed by the userId) and change the cache
implementation where slackChannelsCache is defined to enforce bounds or TTL
(implement size limit or LRU eviction and/or expiration) so the map cannot grow
unbounded and reconnects won't see stale channel data.
- Around line 690-693: The state signing currently falls back to an empty secret
making signatures predictable; change signSlackState to require a non-empty
SLACK_CLIENT_SECRET (throw an error if missing) instead of using "" and use that
secret in createHmac("sha256", secret). Also update the corresponding state
verification function (the peer function around lines 706-709, e.g.,
verifySlackState or similar) to likewise fail/throw when SLACK_CLIENT_SECRET is
not set before computing/validating the HMAC so the app fails closed when the
secret is missing.
- Around line 581-582: Replace plain error JSON responses like return
res.status(400).json({ message: "Invalid channel type. Must be email, webhook,
or slack." }); with the normalized shape return res.status(400).json({ message:
"Invalid channel type. Must be email, webhook, or slack.", code:
"INVALID_CHANNEL_TYPE" }); and do the same for all other occurrences in
server/routes.ts that currently return only { message } (the instances flagged
in the review); use a concise, constant-like error code (e.g.,
INVALID_CHANNEL_TYPE, MISSING_<FIELD>, NOT_FOUND, UNAUTHORIZED) derived from
each message so clients receive { message, code } consistently.
- Around line 589-621: Parse the request body first using
api.monitors.channels.put.input.parse to determine intent (enable/disable or
config remove) before enforcing the tier gate or running channel-specific
validation; only run the tier check (authStorage.getUser) and the webhook/slack
validators (webhookConfigInputSchema.parse, slackConfigInputSchema.parse) when
the parsed input indicates the channel is being enabled or a URL/header is being
set, and skip webhook SSRF check (isPrivateUrl) and secret generation
(storage.getMonitorChannels, generateWebhookSecret) when the user is disabling
the channel or removing config. This requires moving input parsing to the top,
branching on the parsed input's intent to short-circuit free-user disables, and
only executing existingWebhook/secret logic for webhook when actually
enabling/updating the URL.
- Around line 640-647: The DELETE handler registered with app.delete that
currently reads req.params.channel directly must validate that param with
channelTypeSchema before use; use
channelTypeSchema.safeParse(req.params.channel) (or parse within try/catch) and
if validation fails return res.status(400).json({ message: "Invalid channel" }),
otherwise use the validated value when calling storage.deleteMonitorChannel(id,
validatedChannel). Update the handler around the isAuthenticated route to
replace direct access to req.params.channel with the Zod-validated value so
invalid input never reaches storage.deleteMonitorChannel.
In `@server/services/notification.test.ts`:
- Around line 875-889: Add a new test case that simulates decryptToken throwing
while other channels remain enabled: in the new "decrypt failure isolation"
it-block, set mockGetMonitorChannels to return at least two channels (one slack
entry with config.channelId "C0123" and one other enabled channel such as
email), set mockGetSlackConnection to return a connection, have
mockDecryptToken.mockRejectedValue(new Error("decrypt failed")), then call
processChangeNotification(monitor, change, false) and assert that
mockDecryptToken was called, mockSlackDeliver was NOT called, the other channel
delivery mock (e.g., mockEmailDeliver) WAS called with the monitor and change,
and that the error logger (e.g., processLogger.error or mockLogger.error) was
called with a message referencing Slack/decrypt and the error; use the existing
symbols processChangeNotification, mockDecryptToken, mockSlackDeliver,
mockGetMonitorChannels, mockGetSlackConnection, and the other channel delivery
mock to locate/implement the test.
In `@server/services/notification.ts`:
- Around line 184-272: deliverToChannels currently dispatches paid channels
(webhook/slack) without verifying the user's current tier; before calling
deliverWebhook or deliverSlack (inside the enabledChannels map) fetch and verify
the user's subscription/tier (e.g., via storage.getUserSubscription or
storage.getUserPlan using monitor.userId) and skip or fail deliveries for
channels not allowed by that tier. Specifically, in deliverToChannels check
eligibility for ch.channel ("webhook" and "slack") and if disallowed, do not
call deliverWebhook/deliverSlack, instead add a delivery log via
storage.addDeliveryLog marking status "failed" or "skipped" with a response
explaining "tier_not_eligible", and emit a warning log; keep email handling
unchanged but use the same tier-check if email is also tier-gated.
- Around line 333-346: Slack digest sends via deliverSlack but never records
successes/failures in deliveryLog and swallows errors; update the "slack" case
to (1) decrypt the bot token using decryptToken(connection.botToken) as now, (2)
iterate changes and for each call deliverSlack(monitor, change,
slackConfig.channelId, botToken) inside its own try/catch so individual message
failures don't stop the loop, and (3) after each attempt push a delivery record
into deliveryLog (include monitor.id, change id or summary, channelId,
timestamp, status success|failure and error.message when caught). Use
storage.getSlackConnection and monitor/changes/slackConfig.channelId to locate
the code paths; ensure the outer catch still logs token decryption errors.
In `@server/services/scheduler.test.ts`:
- Line 121: Replace the current cleanup that uses Object.keys(...).forEach((k)
=> delete (cronCallbacks as any)[k]) with a version that doesn't return the
delete expression and removes the unsafe any cast; e.g. iterate with for (const
k of Object.keys(cronCallbacks)) { delete cronCallbacks[k as keyof typeof
cronCallbacks]; } or use Object.keys(cronCallbacks).forEach((k) => { delete
cronCallbacks[k as keyof typeof cronCallbacks]; }); targeting the cronCallbacks
symbol and ensuring proper typing instead of (cronCallbacks as any).
In `@server/services/scheduler.ts`:
- Around line 109-174: The retry loop in cron.schedule reads pending rows with
storage.getPendingWebhookRetries and later updates them, allowing races and
double-sends; change to an atomic claim-and-return pattern (e.g., add a
storage.claimPendingWebhookRetries or storage.claimWebhookRetry that uses UPDATE
... RETURNING or SELECT ... FOR UPDATE SKIP LOCKED) so each retry row is
marked/claimed exclusively before calling deliverWebhook; then operate on the
claimed rows (use entry.id, increment attempt in the same DB transaction or via
an atomic UPDATE when claiming) and only after deliverWebhook update the
delivery log to success/failed, ensuring attempt counts and status are stored
atomically to prevent TOCTOU duplicates.
- Around line 158-173: The current retry branch always marks failures as
"pending" (line using storage.updateDeliveryLog with status:"pending"), but
non-retryable failures must be marked "failed" immediately; modify the retry
logic in scheduler.ts to inspect the webhook result (e.g., result.retryable
boolean or result.statusCode) before requeueing: if result indicates a permanent
failure (SSRF block, non-retryable 4xx except 429, or result.retryable ===
false) call storage.updateDeliveryLog(entry.id, { status: "failed", attempt:
nextAttempt, response: { error: result.error } }) and log accordingly (use
monitor.id and new URL(config.url).hostname for context); otherwise keep the
existing pending/retry behavior and warning log that includes attempt and error.
Ensure you reference the same symbols: storage.updateDeliveryLog, entry.id,
nextAttempt, result.error, result.statusCode/result.retryable, monitor.id, and
config.url.
- Around line 141-143: Currently the retry logic loads the entire change history
via storage.getMonitorChanges(monitor.id) and then searches for the matching
change, which is inefficient; replace that pattern by calling a new or existing
storage.getMonitorChangeById(monitor.id, entry.changeId) (or implement that
method) and use its result directly instead of allChanges.find(...), updating
the code paths that reference monitor.id and entry.changeId (in the retry
processing function) to handle a possibly missing change the same way as before.
In `@server/services/slackDelivery.ts`:
- Around line 82-92: The fetch calls to Slack in
server/services/slackDelivery.ts (the POST to
"https://slack.com/api/chat.postMessage" that builds the body with channelId,
botToken and message, and the other fetch call later in the file) currently have
no timeout; update both fetch option objects to include signal:
AbortSignal.timeout(5000) so each call aborts after 5s, leaving all other
headers/body fields unchanged and using the existing botToken/channelId/message
variables to locate the correct call sites.
In `@server/services/webhookDelivery.test.ts`:
- Around line 177-186: Update the test "includes custom headers from config" to
also assert that security headers from the delivery implementation cannot be
overridden by config headers: after calling deliver(makeMonitor(), makeChange(),
config) and extracting options from mockFetch.mock.calls[0], check that
options.headers["X-FTC-Signature-256"], options.headers["Content-Type"], and
options.headers["User-Agent"] are the values produced by the deliver
implementation (not the values from config.headers); keep using the existing
test helpers (deliver, makeMonitor, makeChange, WebhookConfig, testConfig,
mockFetch) to locate and augment the assertions.
In `@server/services/webhookDelivery.ts`:
- Around line 61-87: Replace the raw fetch call with the SSRF-safe request
primitive (ssrfSafeFetch or validateUrlBeforeFetch from server/utils/ssrf.ts) so
validation and network resolution happen together at send time; keep the
existing payload, headers, method, redirect, timeout/AbortController
(controller.signal) and error handling, and remove/adjust the separate pre-check
(isPrivateUrl/ssrfError) so there is no TOCTOU/DNS-rebinding gap—call the
ssrf-safe wrapper with config.url, method: "POST", headers, body, signal:
controller.signal, and redirect: "manual" to perform atomic validation +
request.
- Around line 70-75: The current headers creation lets user-supplied
config.headers override mandatory headers (notably "X-FTC-Signature-256");
change the logic in webhookDelivery.ts where the headers const is built so that
config.headers cannot overwrite service-controlled keys: either spread
...(config.headers || {}) before the mandatory headers or explicitly strip
forbidden keys (e.g., "X-FTC-Signature-256", "Content-Type", "User-Agent") from
config.headers before merging, ensuring the final headers object always uses the
generated signature variable (signature) and service-controlled values.
In `@server/storage.ts`:
- Around line 59-60: The direct deletes on deliveryLog and notificationChannels
in deleteMonitor() can fail if those tables don’t exist during a partial
migration; add a readiness check before issuing deletes by testing for table
existence (e.g., using your DB client's schema/table-exists helper or an
information_schema query) and only call await
db.delete(deliveryLog).where(eq(deliveryLog.monitorId, id)) and await
db.delete(notificationChannels).where(eq(notificationChannels.monitorId, id)) if
their tables are present; alternatively guard with a try/catch that skips the
error for “table not found” to preserve backward compatibility.
In `@server/utils/encryption.ts`:
- Around line 7-17: The getEncryptionKey function validates SLACK_ENCRYPTION_KEY
on every call which delays failure to runtime; change it to validate and parse
the env var once at startup and cache the resulting Buffer for all callers.
Specifically, add a module-level cached variable (e.g., cachedEncryptionKey) and
initialize it by parsing process.env.SLACK_ENCRYPTION_KEY and asserting length
=== 32 when the module loads (or expose an init function that is called from app
startup), then have getEncryptionKey simply return the cached Buffer (or throw
if initialization failed) so misconfiguration fails fast and subsequent calls
are constant-time.
In `@shared/routes.ts`:
- Around line 165-215: Add explicit 403 forbidden responses to the
monitor-scoped route contracts (channels.list, channels.put, channels.delete,
channels.revealSecret, channels.deliveries) and update their response schemas to
include a 403 z.object({ message: z.string(), code: z.string().optional() });
then update the corresponding handlers (the server routes that read
notificationChannels and deliveryLog by monitor id) to enforce ownership by
verifying req.user.id (or equivalent) against the monitor ownerId before
returning data—if the user is not the owner return 403; for revealSecret also
enforce ownership (and keep the existing 429) and ensure all handlers never
return 200 for unauthorized owners.
- Around line 250-258: The install route's declared responses for path
'/api/integrations/slack/install' (the install entry in shared/routes.ts) are
missing the 400 response used by the handler for "Slack not configured", causing
a contract mismatch; update the install.responses to include a 400 schema (e.g.,
z.object({ message: z.string(), code: z.string().optional() }) or the
appropriate error schema used elsewhere) alongside the existing 501 and other
responses so clients/types match runtime behavior.
- Around line 66-71: The channelInputSchema currently uses z.union with
permissive schemas, allowing empty or extra properties to pass; update
emailConfigInputSchema, webhookConfigInputSchema, slackConfigInputSchema to be
strict (add .strict()) and make channelInputSchema itself strict as well (e.g.,
export const emailConfigInputSchema = z.object({...}).strict(); same for
webhookConfigInputSchema and slackConfigInputSchema), and ensure
channelInputSchema uses those strict schemas (channelInputSchema = z.object({
enabled: z.boolean(), config: z.union([webhookConfigInputSchema,
slackConfigInputSchema, emailConfigInputSchema]) }).strict()) so only exact
expected properties (and valid webhook URL via z.string().url()) are accepted.
In `@shared/schema.ts`:
- Line 265: The config JSONB currently stores raw channel/webhook secrets via
the config field (jsonb("config").$type<Record<string, unknown>>()), which
exposes signing keys; change the schema to remove or stop writing plaintext
secrets into config by adding a dedicated protected column (e.g.,
webhook_secret_encrypted) or using an encrypted type for that value instead of
jsonb, and update all readers/writers (the routes that read/write config) to
encrypt on write and decrypt on read using the app's secret-store/encryption
utility (or DB-side encryption/pgcrypto) so secrets are never persisted
plaintext in config. Ensure the unique symbol references: update
shared/schema.ts for jsonb("config") and the route handlers that currently
access config.secret to use the new encrypted column or SecretStore API.
- Around line 293-295: The existing index monitorCreatedIdx on delivery_log
covers (monitorId, createdAt) but retry processing queries filter by channel,
status, attempt and ORDER BY createdAt; add a new composite index in the same
index block for delivery_log (e.g., deliveryLogRetryIdx) that indexes (channel,
status, attempt, createdAt) so the WHERE predicate and ordering are covered and
avoids full table scans; place the new index alongside monitorCreatedIdx using
the same index(...) builder and give it a descriptive name (e.g.,
"delivery_log_retry_idx").
- Around line 263-289: The channel and status columns are currently plain text
and must be constrained; replace the freeform text types with Postgres enums and
use those enums in the table definitions: create a pgEnum for notification
channels (e.g., values "email","webhook","slack") and use it for
notificationChannels.channel and deliveryLog.channel, and create a pgEnum for
delivery statuses (e.g., "success","failed","pending") and use it for
deliveryLog.status; update any related type annotations (NotificationChannel and
delivery log select types) and references in code that expect those fields so
the DB enforces valid values.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 4c155fd7-e1e9-4f24-9219-5f6b206116eb
📒 Files selected for processing (22)
.env.exampleclient/src/components/DeliveryLog.tsxclient/src/components/MonitorCard.tsxclient/src/components/NotificationChannelsPanel.tsxclient/src/hooks/use-notification-channels.tsclient/src/hooks/use-slack.tsclient/src/pages/MonitorDetails.tsxserver/middleware/csrf.tsserver/routes.tsserver/services/notification.test.tsserver/services/notification.tsserver/services/scheduler.test.tsserver/services/scheduler.tsserver/services/slackDelivery.test.tsserver/services/slackDelivery.tsserver/services/webhookDelivery.test.tsserver/services/webhookDelivery.tsserver/storage.tsserver/utils/encryption.test.tsserver/utils/encryption.tsshared/routes.tsshared/schema.ts
.env.example
Outdated
| SLACK_CLIENT_ID= # Slack app OAuth client ID | ||
| SLACK_CLIENT_SECRET= # Slack app OAuth client secret | ||
| SLACK_ENCRYPTION_KEY= # 32 bytes hex for AES-256-GCM (generate: node -e "console.log(require('crypto').randomBytes(32).toString('hex'))") |
There was a problem hiding this comment.
Normalize Slack env placeholder formatting to avoid dotenv parsing/lint issues.
The current spaced assignments with inline comments are flagged and can behave inconsistently across dotenv parsers.
Suggested patch
-SLACK_CLIENT_ID= # Slack app OAuth client ID
-SLACK_CLIENT_SECRET= # Slack app OAuth client secret
-SLACK_ENCRYPTION_KEY= # 32 bytes hex for AES-256-GCM (generate: node -e "console.log(require('crypto').randomBytes(32).toString('hex'))")
+SLACK_CLIENT_ID="" # Slack app OAuth client ID
+SLACK_CLIENT_SECRET="" # Slack app OAuth client secret
+SLACK_ENCRYPTION_KEY="" # 32-byte hex for AES-256-GCM (generate: node -e "console.log(require('crypto').randomBytes(32).toString('hex'))")📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| SLACK_CLIENT_ID= # Slack app OAuth client ID | |
| SLACK_CLIENT_SECRET= # Slack app OAuth client secret | |
| SLACK_ENCRYPTION_KEY= # 32 bytes hex for AES-256-GCM (generate: node -e "console.log(require('crypto').randomBytes(32).toString('hex'))") | |
| SLACK_CLIENT_ID="" # Slack app OAuth client ID | |
| SLACK_CLIENT_SECRET="" # Slack app OAuth client secret | |
| SLACK_ENCRYPTION_KEY="" # 32-byte hex for AES-256-GCM (generate: node -e "console.log(require('crypto').randomBytes(32).toString('hex'))") |
🧰 Tools
🪛 dotenv-linter (4.0.0)
[warning] 7-7: [SpaceCharacter] The line has spaces around equal sign
(SpaceCharacter)
[warning] 7-7: [ValueWithoutQuotes] This value needs to be surrounded in quotes
(ValueWithoutQuotes)
[warning] 8-8: [SpaceCharacter] The line has spaces around equal sign
(SpaceCharacter)
[warning] 8-8: [ValueWithoutQuotes] This value needs to be surrounded in quotes
(ValueWithoutQuotes)
[warning] 9-9: [SpaceCharacter] The line has spaces around equal sign
(SpaceCharacter)
[warning] 9-9: [ValueWithoutQuotes] This value needs to be surrounded in quotes
(ValueWithoutQuotes)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.env.example around lines 7 - 9, Normalize the SLACK_* entries in
.env.example by removing the spaced assignments and placing comments on separate
lines so dotenv parsers won't misread them; update SLACK_CLIENT_ID,
SLACK_CLIENT_SECRET, and SLACK_ENCRYPTION_KEY to use the standard VAR= format
(no spaces around =) and move explanatory notes or examples (like the
AES-256-GCM generation command) to a preceding or trailing comment line rather
than inline after the assignment.
| const statusColors: Record<string, string> = { | ||
| success: "bg-green-500/10 text-green-500 border-green-500/20", | ||
| failed: "bg-red-500/10 text-red-500 border-red-500/20", | ||
| pending: "bg-yellow-500/10 text-yellow-500 border-yellow-500/20", | ||
| }; |
There was a problem hiding this comment.
Use semantic color tokens for status styling.
Line 19 to Line 23 hard-codes palette values (green/red/yellow utility classes) instead of semantic tokens.
As per coding guidelines, apply semantic color tokens (text-primary, text-muted-foreground, bg-secondary, etc.).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@client/src/components/DeliveryLog.tsx` around lines 19 - 23, The statusColors
mapping currently hard-codes palette utility classes; replace those values in
the statusColors Record<string,string> with semantic token classes (e.g., use
tokens like text-success, bg-success/10, border-success/20 or text-error,
bg-error/10, border-error/20, and text-warning, bg-warning/10,
border-warning/20—or your project's equivalents such as
text-primary/bg-secondary/text-muted-foreground) so the UI uses design-system
tokens instead of explicit green/red/yellow utilities; update the three entries
(success, failed, pending) inside statusColors and ensure any consumers of
statusColors (e.g., in DeliveryLog component render) still receive the same
keys.
| <Button | ||
| key={ch} | ||
| variant={channelFilter === ch ? "secondary" : "ghost"} | ||
| size="sm" | ||
| onClick={() => setChannelFilter(ch)} | ||
| > | ||
| <Icon className="h-4 w-4" /> | ||
| </Button> |
There was a problem hiding this comment.
Add accessible names to icon-only channel filter buttons.
Line 48 to Line 55 renders icon-only <Button> controls without an accessible label, which makes the filter actions ambiguous for screen readers.
Suggested fix
<Button
key={ch}
variant={channelFilter === ch ? "secondary" : "ghost"}
size="sm"
onClick={() => setChannelFilter(ch)}
+ aria-label={`Filter by ${ch} deliveries`}
+ title={`Filter by ${ch} deliveries`}
>
- <Icon className="h-4 w-4" />
+ <Icon className="h-4 w-4" aria-hidden="true" />
</Button>📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| <Button | |
| key={ch} | |
| variant={channelFilter === ch ? "secondary" : "ghost"} | |
| size="sm" | |
| onClick={() => setChannelFilter(ch)} | |
| > | |
| <Icon className="h-4 w-4" /> | |
| </Button> | |
| <Button | |
| key={ch} | |
| variant={channelFilter === ch ? "secondary" : "ghost"} | |
| size="sm" | |
| onClick={() => setChannelFilter(ch)} | |
| aria-label={`Filter by ${ch} deliveries`} | |
| title={`Filter by ${ch} deliveries`} | |
| > | |
| <Icon className="h-4 w-4" aria-hidden="true" /> | |
| </Button> |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@client/src/components/DeliveryLog.tsx` around lines 48 - 55, The icon-only
channel filter Buttons in DeliveryLog.tsx lack accessible names; update the
Button (the element rendering inside the map using variables ch, channelFilter,
setChannelFilter and Icon) to include an accessible name and state—for example
add an aria-label built from the channel value (e.g. "Filter by {ch}") and
include aria-pressed={channelFilter === ch} (or an equivalent aria attribute) so
screen readers can identify and perceive the toggle state; ensure you still call
setChannelFilter(ch) in onClick.
| const { data: channels = [] } = useNotificationChannels(monitor.id); | ||
| const hasWebhook = channels.some((c) => c.channel === "webhook" && c.enabled); | ||
| const hasSlack = channels.some((c) => c.channel === "slack" && c.enabled); | ||
| const [isEditing, setIsEditing] = useState(false); |
There was a problem hiding this comment.
🧹 Nitpick | 🔵 Trivial
This introduces additional N+1 request fan-out on dashboard renders.
useNotificationChannels(monitor.id) runs per card, so a monitor grid now adds one extra HTTP call per monitor. This will scale poorly for larger accounts.
Prefer returning channel-summary flags (email/webhook/slack enabled) in the monitor list payload, or add a batched channels-summary endpoint keyed by monitor IDs.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@client/src/components/MonitorCard.tsx` around lines 26 - 29, The per-card
call to useNotificationChannels(monitor.id) in MonitorCard.tsx causes N+1
requests; remove that hook from MonitorCard and instead read precomputed channel
flags from the monitor list or accept a batched summary prop. Update MonitorCard
to use monitor.notificationSummary (or props like monitor.hasWebhook /
monitor.hasSlack) rather than computing hasWebhook/hasSlack locally, and
implement the batching at the list level (e.g., a new
useNotificationChannelsForMonitors that fetches summaries for all monitor IDs
and attaches them to each monitor before rendering) so only one network request
(or the monitor payload) is used for the entire grid.
| const { user } = useAuth(); | ||
| const isFreeTier = (user as any)?.tier === "free" || !(user as any)?.tier; | ||
|
|
There was a problem hiding this comment.
Tier-gating UI can drift from server reality due stale auth cache.
Line 25 to Line 27 uses cached useAuth() tier for gating, while server routes enforce tier from fresh DB reads. This can show incorrect availability for several minutes after plan changes.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@client/src/components/NotificationChannelsPanel.tsx` around lines 25 - 27,
NotificationChannelsPanel currently derives isFreeTier from the possibly-stale
useAuth() cache, causing UI tier-gating to drift from server state; instead,
fetch a fresh tier from the server on mount or when the panel opens (e.g., call
the same server endpoint used for route enforcement or a dedicated /me or
/billing/plan API) and derive isFreeTier from that response; update the
component to use the fresh-tier value (and/or revalidate the auth cache via the
existing auth revalidation helper if available) rather than (user as any)?.tier
so the UI reflects the authoritative server-side tier.
| channels: { | ||
| list: { | ||
| method: 'GET' as const, | ||
| path: '/api/monitors/:id/channels', | ||
| responses: { | ||
| 200: z.array(z.custom<typeof notificationChannels.$inferSelect>()), | ||
| 404: errorSchemas.notFound, | ||
| 401: errorSchemas.unauthorized, | ||
| }, | ||
| }, | ||
| put: { | ||
| method: 'PUT' as const, | ||
| path: '/api/monitors/:id/channels/:channel', | ||
| input: channelInputSchema, | ||
| responses: { | ||
| 200: z.custom<typeof notificationChannels.$inferSelect>(), | ||
| 404: errorSchemas.notFound, | ||
| 401: errorSchemas.unauthorized, | ||
| 403: z.object({ message: z.string(), code: z.string().optional() }), | ||
| 422: errorSchemas.validation, | ||
| }, | ||
| }, | ||
| delete: { | ||
| method: 'DELETE' as const, | ||
| path: '/api/monitors/:id/channels/:channel', | ||
| responses: { | ||
| 204: z.void(), | ||
| 404: errorSchemas.notFound, | ||
| 401: errorSchemas.unauthorized, | ||
| }, | ||
| }, | ||
| revealSecret: { | ||
| method: 'POST' as const, | ||
| path: '/api/monitors/:id/channels/webhook/reveal-secret', | ||
| responses: { | ||
| 200: z.object({ secret: z.string() }), | ||
| 404: errorSchemas.notFound, | ||
| 401: errorSchemas.unauthorized, | ||
| 429: z.object({ message: z.string() }), | ||
| }, | ||
| }, | ||
| deliveries: { | ||
| method: 'GET' as const, | ||
| path: '/api/monitors/:id/deliveries', | ||
| responses: { | ||
| 200: z.array(z.custom<typeof deliveryLog.$inferSelect>()), | ||
| 404: errorSchemas.notFound, | ||
| 401: errorSchemas.unauthorized, | ||
| }, | ||
| }, | ||
| }, |
There was a problem hiding this comment.
Monitor-scoped channel/delivery endpoints need explicit forbidden contract and ownership enforcement (IDOR risk).
These endpoints expose monitor-associated data but currently only model 401/404. In the provided server/routes.ts snippet for GET /api/monitors/:id/channels, the handler reads channels by monitor ID without validating monitor ownership first, which can leak other users’ channel metadata.
Suggested contract patch in this file (handler must also enforce ownership)
list: {
method: 'GET' as const,
path: '/api/monitors/:id/channels',
responses: {
+ 403: z.object({ message: z.string() }),
},
},
@@
deliveries: {
method: 'GET' as const,
path: '/api/monitors/:id/deliveries',
responses: {
+ 403: z.object({ message: z.string() }),
},
},As per coding guidelines "Every route that accesses user data MUST check req.user and validate ownership (userId match)."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@shared/routes.ts` around lines 165 - 215, Add explicit 403 forbidden
responses to the monitor-scoped route contracts (channels.list, channels.put,
channels.delete, channels.revealSecret, channels.deliveries) and update their
response schemas to include a 403 z.object({ message: z.string(), code:
z.string().optional() }); then update the corresponding handlers (the server
routes that read notificationChannels and deliveryLog by monitor id) to enforce
ownership by verifying req.user.id (or equivalent) against the monitor ownerId
before returning data—if the user is not the owner return 403; for revealSecret
also enforce ownership (and keep the existing 429) and ensure all handlers never
return 200 for unauthorized owners.
| install: { | ||
| method: 'GET' as const, | ||
| path: '/api/integrations/slack/install', | ||
| responses: { | ||
| 302: z.void(), | ||
| 401: errorSchemas.unauthorized, | ||
| 403: z.object({ message: z.string(), code: z.string().optional() }), | ||
| 501: z.object({ message: z.string() }), | ||
| }, |
There was a problem hiding this comment.
Slack install response schema is inconsistent with route behavior.
The install handler returns 400 for “Slack not configured”, but this schema only advertises 501. That breaks shared API typing/expectations for clients.
Suggested contract patch
install: {
method: 'GET' as const,
path: '/api/integrations/slack/install',
responses: {
+ 400: z.object({ message: z.string() }),
},
},📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| install: { | |
| method: 'GET' as const, | |
| path: '/api/integrations/slack/install', | |
| responses: { | |
| 302: z.void(), | |
| 401: errorSchemas.unauthorized, | |
| 403: z.object({ message: z.string(), code: z.string().optional() }), | |
| 501: z.object({ message: z.string() }), | |
| }, | |
| install: { | |
| method: 'GET' as const, | |
| path: '/api/integrations/slack/install', | |
| responses: { | |
| 302: z.void(), | |
| 400: z.object({ message: z.string() }), | |
| 401: errorSchemas.unauthorized, | |
| 403: z.object({ message: z.string(), code: z.string().optional() }), | |
| 501: z.object({ message: z.string() }), | |
| }, |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@shared/routes.ts` around lines 250 - 258, The install route's declared
responses for path '/api/integrations/slack/install' (the install entry in
shared/routes.ts) are missing the 400 response used by the handler for "Slack
not configured", causing a contract mismatch; update the install.responses to
include a 400 schema (e.g., z.object({ message: z.string(), code:
z.string().optional() }) or the appropriate error schema used elsewhere)
alongside the existing 501 and other responses so clients/types match runtime
behavior.
| channel: text("channel").notNull(), // "email" | "webhook" | "slack" | ||
| enabled: boolean("enabled").default(true).notNull(), | ||
| config: jsonb("config").notNull().$type<Record<string, unknown>>(), | ||
| createdAt: timestamp("created_at").defaultNow().notNull(), | ||
| updatedAt: timestamp("updated_at").defaultNow().notNull(), | ||
| }, (table) => ({ | ||
| monitorIdx: index("notification_channels_monitor_idx").on(table.monitorId), | ||
| monitorChannelUniq: uniqueIndex("notification_channels_monitor_channel_uniq").on(table.monitorId, table.channel), | ||
| })); | ||
|
|
||
| export const notificationChannelsRelations = relations(notificationChannels, ({ one }) => ({ | ||
| monitor: one(monitors, { | ||
| fields: [notificationChannels.monitorId], | ||
| references: [monitors.id], | ||
| }), | ||
| })); | ||
|
|
||
| export type NotificationChannel = typeof notificationChannels.$inferSelect; | ||
|
|
||
| // Delivery log — records every notification delivery attempt across all channels | ||
| export const deliveryLog = pgTable("delivery_log", { | ||
| id: serial("id").primaryKey(), | ||
| monitorId: integer("monitor_id").notNull().references(() => monitors.id, { onDelete: "cascade" }), | ||
| changeId: integer("change_id").notNull().references(() => monitorChanges.id), | ||
| channel: text("channel").notNull(), // "email" | "webhook" | "slack" | ||
| status: text("status").notNull(), // "success" | "failed" | "pending" | ||
| attempt: integer("attempt").default(1).notNull(), |
There was a problem hiding this comment.
Constrain channel and status at the database level.
Line 263 and Line 288 are unconstrained text fields. Invalid values can be persisted and break delivery/retry logic. Enforce allowed values with DB-level constraints/enums.
As per coding guidelines: “shared/** ... Column types and constraints (notNull, default, unique) must match business logic.”
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@shared/schema.ts` around lines 263 - 289, The channel and status columns are
currently plain text and must be constrained; replace the freeform text types
with Postgres enums and use those enums in the table definitions: create a
pgEnum for notification channels (e.g., values "email","webhook","slack") and
use it for notificationChannels.channel and deliveryLog.channel, and create a
pgEnum for delivery statuses (e.g., "success","failed","pending") and use it for
deliveryLog.status; update any related type annotations (NotificationChannel and
delivery log select types) and references in code that expect those fields so
the DB enforces valid values.
| monitorId: integer("monitor_id").notNull().references(() => monitors.id, { onDelete: "cascade" }), | ||
| channel: text("channel").notNull(), // "email" | "webhook" | "slack" | ||
| enabled: boolean("enabled").default(true).notNull(), | ||
| config: jsonb("config").notNull().$type<Record<string, unknown>>(), |
There was a problem hiding this comment.
Webhook signing secrets are being stored plaintext in JSONB config.
Line 265 stores raw channel config, and webhook secret is written/read directly in routes. This exposes signing keys in DB dumps and internal reads. Store the secret encrypted at rest (or in a dedicated protected secret store/column).
As per coding guidelines: “server/** ... Secret exposure: Flag any hardcoded API keys, tokens, or secrets.”
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@shared/schema.ts` at line 265, The config JSONB currently stores raw
channel/webhook secrets via the config field
(jsonb("config").$type<Record<string, unknown>>()), which exposes signing keys;
change the schema to remove or stop writing plaintext secrets into config by
adding a dedicated protected column (e.g., webhook_secret_encrypted) or using an
encrypted type for that value instead of jsonb, and update all readers/writers
(the routes that read/write config) to encrypt on write and decrypt on read
using the app's secret-store/encryption utility (or DB-side encryption/pgcrypto)
so secrets are never persisted plaintext in config. Ensure the unique symbol
references: update shared/schema.ts for jsonb("config") and the route handlers
that currently access config.secret to use the new encrypted column or
SecretStore API.
| }, (table) => ({ | ||
| monitorCreatedIdx: index("delivery_log_monitor_created_idx").on(table.monitorId, table.createdAt), | ||
| })); |
There was a problem hiding this comment.
Add an index that matches webhook retry lookups.
Line 294 indexes (monitorId, createdAt), but retry processing filters by channel, status, and attempt, then orders by createdAt. Add a composite index for that predicate to prevent full scans as delivery_log grows.
As per coding guidelines: “shared/** ... Indexes should exist for frequently queried columns.”
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@shared/schema.ts` around lines 293 - 295, The existing index
monitorCreatedIdx on delivery_log covers (monitorId, createdAt) but retry
processing queries filter by channel, status, attempt and ORDER BY createdAt;
add a new composite index in the same index block for delivery_log (e.g.,
deliveryLogRetryIdx) that indexes (channel, status, attempt, createdAt) so the
WHERE predicate and ordering are covered and avoids full table scans; place the
new index alongside monitorCreatedIdx using the same index(...) builder and give
it a descriptive name (e.g., "delivery_log_retry_idx").
…th routes - Add 41 route-level tests for new notification channel CRUD endpoints (GET/PUT/DELETE channels, reveal-secret, deliveries) and Slack OAuth (install, callback, status, disconnect, channels) - Add 21 notification service tests covering: missing webhook config, missing Slack channelId, decryption failures, all-disabled channels, delivery log write failures, and digest-to-webhook/slack delivery - Add 6 webhook delivery edge case tests: header override behavior, 3xx redirect handling, null values, redirect mode, abort signal - Add 6 slack delivery edge case tests: missing channels key, null values, missing error field, non-Error throw handling All 859 tests pass across 24 test files. https://claude.ai/code/session_01KuoUHkKehAzWNLszLvk4Js
There was a problem hiding this comment.
Actionable comments posted: 6
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@server/routes.notificationChannels.test.ts`:
- Around line 766-809: Add a new test for the happy path of GET
/api/integrations/slack/callback that uses makeReq, ensureRoutes and callHandler
to simulate a valid OAuth response: set process.env.SLACK_CLIENT_SECRET and
construct a correct state HMAC for "user1", include a fake authorization code in
query, and stub any external token-exchange/network calls so the handler
receives tokens; then assert the handler redirects to a success URL (contains
"slack=success"), verifies token encryption/storage upsert was invoked (check
the repository/upsert or storage mock), and cleans up env vars and mocks after
the test to avoid flakiness. Ensure the test covers state verification, token
handling/encryption, and the final redirect.
- Around line 165-167: Tests are bypassing middleware because express-rate-limit
is mocked to a no-op and callHandler only invokes the last handler; change the
test harness to exercise the full middleware chain by either removing the
express-rate-limit mock or replacing it with the real middleware (or a mock that
maintains middleware semantics), and stop using callHandler to directly invoke
the final handler—mount the router on an Express app and issue requests via
supertest (or ensure callHandler invokes each middleware in sequence) so auth,
validation, and rate-limit middleware run; look for references to callHandler
and the express-rate-limit mock in the test file and update them accordingly.
In `@server/services/slackDelivery.test.ts`:
- Around line 109-124: The test for listChannels should also assert the outgoing
request contract to catch auth regressions: verify mockFetch (or fetch) was
called with the Slack conversations.list endpoint URL and that the request
options include the Authorization header set to `Bearer xoxb-token` and method
"GET" (or appropriate fetch init); update the test around listChannels to add
assertions on mockFetch.mock.calls[0] (or equivalent) to check the URL string
contains "conversations.list" and that headers.Authorization equals the expected
bearer token. Ensure these assertions coexist with the existing response parsing
assertions in the "returns channel list" test so the test both validates output
and the request shape used by listChannels.
- Around line 41-43: The tests currently use a global fetch stub plus
vi.clearAllMocks() in beforeEach, which allows mock implementations to leak
across tests; move the global fetch stub creation into the beforeEach block (so
each test gets a fresh stub) and replace vi.clearAllMocks() with
vi.resetAllMocks() to fully reset mock implementations and restore behaviour
between tests; update references to the global fetch stub and the beforeEach
setup in server/services/slackDelivery.test.ts and ensure any per-test
mockResolvedValue/mockRejectedValue calls continue to set behavior after the new
reset.
In `@server/services/webhookDelivery.test.ts`:
- Around line 71-86: Update the test for buildWebhookPayload to assert the exact
detectedAt value instead of only toBeDefined(): make the change fixture
deterministic (or extract the expected ISO string from the change created by
makeChange()) and replace the detectedAt assertion with a strict equality check
(e.g., expect(payload.detectedAt).toBe(expectedIsoString)); reference the
buildWebhookPayload call and makeChange()/makeMonitor() used in the test so the
expected timestamp is sourced from the fixture.
- Around line 188-199: The code currently spreads config.headers last in deliver
(server/services/webhookDelivery.ts), allowing custom headers to overwrite
security headers like "X-FTC-Signature-256", "Content-Type", and "User-Agent";
fix deliver so it merges headers with config.headers first and then writes the
security headers last (ensuring "X-FTC-Signature-256" and other security headers
cannot be overridden), and update the test in webhookDelivery.test.ts (the
"custom headers can override built-in headers" test) to assert that attempts to
set those security headers in WebhookConfig.headers do not replace the
authoritative values (e.g., expect options.headers["X-FTC-Signature-256"] to
equal the computed signature and Content-Type/User-Agent to be the enforced
values).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 02dd6ec8-d9d3-4b44-9cc5-ff516b3ba775
📒 Files selected for processing (4)
server/routes.notificationChannels.test.tsserver/services/notification.test.tsserver/services/slackDelivery.test.tsserver/services/webhookDelivery.test.ts
| vi.mock("express-rate-limit", () => ({ | ||
| default: () => (_req: any, _res: any, next: any) => next(), | ||
| })); |
There was a problem hiding this comment.
Middleware chain is bypassed, so security middleware behavior is not actually tested.
callHandler executes only the last handler, and express-rate-limit is mocked as pass-through. This skips auth/rate-limit/validation middleware execution and can mask security regressions in these routes.
Suggested harness fix
async function callHandler(method: string, path: string, req: any) {
const handlers = registeredRoutes[method]?.[path];
if (!handlers) throw new Error(`No handler for ${method.toUpperCase()} ${path}`);
const res = makeRes();
- const handler = handlers[handlers.length - 1];
- await handler(req, res);
+ let index = 0;
+ const next = async (err?: unknown): Promise<void> => {
+ if (err) throw err;
+ const handler = handlers[index++];
+ if (!handler) return;
+
+ // Express-style middleware: (req, res, next)
+ if (handler.length >= 3) {
+ await handler(req, res, next);
+ return;
+ }
+
+ // Terminal handler: (req, res)
+ await handler(req, res);
+ await next();
+ };
+
+ await next();
return res;
}As per coding guidelines: “Security-related tests: verify that SSRF, CSRF, auth bypass, and rate limiting tests exist and are thorough.”
Also applies to: 205-211
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/routes.notificationChannels.test.ts` around lines 165 - 167, Tests are
bypassing middleware because express-rate-limit is mocked to a no-op and
callHandler only invokes the last handler; change the test harness to exercise
the full middleware chain by either removing the express-rate-limit mock or
replacing it with the real middleware (or a mock that maintains middleware
semantics), and stop using callHandler to directly invoke the final
handler—mount the router on an Express app and issue requests via supertest (or
ensure callHandler invokes each middleware in sequence) so auth, validation, and
rate-limit middleware run; look for references to callHandler and the
express-rate-limit mock in the test file and update them accordingly.
| describe("GET /api/integrations/slack/callback", () => { | ||
| const ENDPOINT = "/api/integrations/slack/callback"; | ||
|
|
||
| beforeEach(async () => { | ||
| await ensureRoutes(); | ||
| resetMocks(); | ||
| }); | ||
|
|
||
| it("redirects with error when error param is present", async () => { | ||
| const req = makeReq("user1", { | ||
| query: { error: "access_denied" }, | ||
| protocol: "https", | ||
| get: () => "example.com", | ||
| }); | ||
| const res = await callHandler("get", ENDPOINT, req); | ||
| expect(res._redirectUrl).toContain("slack=error"); | ||
| expect(res._redirectUrl).toContain("access_denied"); | ||
| }); | ||
|
|
||
| it("redirects with error when state or code is missing", async () => { | ||
| const req = makeReq("user1", { | ||
| query: {}, | ||
| protocol: "https", | ||
| get: () => "example.com", | ||
| }); | ||
| const res = await callHandler("get", ENDPOINT, req); | ||
| expect(res._redirectUrl).toContain("slack=error"); | ||
| expect(res._redirectUrl).toContain("missing_params"); | ||
| }); | ||
|
|
||
| it("redirects with error when state HMAC is invalid", async () => { | ||
| process.env.SLACK_CLIENT_SECRET = "test-secret"; | ||
| const req = makeReq("user1", { | ||
| query: { code: "some-code", state: "user1:invalid-signature" }, | ||
| protocol: "https", | ||
| get: () => "example.com", | ||
| }); | ||
| const res = await callHandler("get", ENDPOINT, req); | ||
| expect(res._redirectUrl).toContain("slack=error"); | ||
| expect(res._redirectUrl).toContain("invalid_state"); | ||
|
|
||
| delete process.env.SLACK_CLIENT_SECRET; | ||
| }); | ||
| }); |
There was a problem hiding this comment.
Slack OAuth callback happy path is missing in tests.
Only callback error branches are asserted. Add a success-case test that validates state acceptance, token handling, encryption, storage upsert, and success redirect; this is the highest-risk OAuth path.
As per coding guidelines: “**/*.test.ts ... Test coverage of edge cases and error paths.” and “Security-related tests: verify that SSRF, CSRF, auth bypass, and rate limiting tests exist and are thorough.”
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/routes.notificationChannels.test.ts` around lines 766 - 809, Add a new
test for the happy path of GET /api/integrations/slack/callback that uses
makeReq, ensureRoutes and callHandler to simulate a valid OAuth response: set
process.env.SLACK_CLIENT_SECRET and construct a correct state HMAC for "user1",
include a fake authorization code in query, and stub any external
token-exchange/network calls so the handler receives tokens; then assert the
handler redirects to a success URL (contains "slack=success"), verifies token
encryption/storage upsert was invoked (check the repository/upsert or storage
mock), and cleans up env vars and mocks after the test to avoid flakiness.
Ensure the test covers state verification, token handling/encryption, and the
final redirect.
| beforeEach(() => { | ||
| vi.clearAllMocks(); | ||
| }); |
There was a problem hiding this comment.
🧩 Analysis chain
🏁 Script executed:
cat -n server/services/slackDelivery.test.tsRepository: bd73-com/fetchthechange
Length of output: 7449
🌐 Web query:
Vitest clearAllMocks vs resetAllMocks difference mock isolation
💡 Result:
What’s the difference?
-
vi.clearAllMocks()- Calls
.mockClear()on all spies/mocks. - Clears only “history” (calls, instances, results), keeps the current implementation (e.g.,
mockReturnValue,mockImplementationstay in place).
(vitest.dev)
- Calls
-
vi.resetAllMocks()- Calls
.mockReset()on all spies/mocks. - Clears history and resets implementations back to the mock’s original/default implementation, also clearing any “once” behaviors.
(vitest.dev)
- Calls
How this affects mock isolation between tests
- Use
clearAllMockswhen you want test isolation for assertions (“who was called how many times”) but you intentionally keep shared mock behavior across tests. - Use
resetAllMockswhen you want stronger isolation, ensuring that any per-test changes likemockReturnValue(...),mockImplementationOnce(...), etc. don’t leak into later tests.
(vitest.dev)
Config equivalents (automatic per-test)
test.clearMocks: true⇒ run the equivalent ofmockClearbefore each test. (vitest.dev)test.mockReset: true⇒ run the equivalent ofmockResetbefore each test. (vitest.dev)- (Related)
test.restoreMocks: true⇒ runsmockRestorebehavior for spies created withvi.spyOn(restores originals). (vitest.dev)
Citations:
- 1: https://vitest.dev/api/vi
- 2: https://vitest.dev/api/vi
- 3: https://vitest.dev/api/vi
- 4: https://vitest.dev/api/mock
- 5: https://vitest.dev/api/mock
- 6: https://vitest.dev/api/mock
Move global fetch stub into beforeEach and use resetAllMocks() instead of clearAllMocks().
The global stub at line 4 and vi.clearAllMocks() at line 42 create a mock isolation violation. clearAllMocks() only clears call history—it does not reset mock implementations (e.g., mockResolvedValue(), mockRejectedValue()) or restore global stubs. This allows mock state from one test to leak into the next. Although each test explicitly sets its own mock behavior, this pattern is fragile: a test that skips explicit setup could inherit leftover behavior from a previous test, causing false positives or false negatives.
Proposed fix
-import { describe, it, expect, vi, beforeEach } from "vitest";
+import { describe, it, expect, vi, beforeEach, afterEach } from "vitest";
const mockFetch = vi.fn();
-vi.stubGlobal("fetch", mockFetch);
import { deliver, listChannels } from "./slackDelivery";
import type { Monitor, MonitorChange } from "@shared/schema";
describe("slackDelivery", () => {
beforeEach(() => {
- vi.clearAllMocks();
+ vi.resetAllMocks();
+ vi.stubGlobal("fetch", mockFetch);
});
+
+ afterEach(() => {
+ vi.unstubAllGlobals();
+ });🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/services/slackDelivery.test.ts` around lines 41 - 43, The tests
currently use a global fetch stub plus vi.clearAllMocks() in beforeEach, which
allows mock implementations to leak across tests; move the global fetch stub
creation into the beforeEach block (so each test gets a fresh stub) and replace
vi.clearAllMocks() with vi.resetAllMocks() to fully reset mock implementations
and restore behaviour between tests; update references to the global fetch stub
and the beforeEach setup in server/services/slackDelivery.test.ts and ensure any
per-test mockResolvedValue/mockRejectedValue calls continue to set behavior
after the new reset.
| it("returns channel list", async () => { | ||
| mockFetch.mockResolvedValue({ | ||
| json: () => Promise.resolve({ | ||
| ok: true, | ||
| channels: [ | ||
| { id: "C001", name: "general" }, | ||
| { id: "C002", name: "alerts" }, | ||
| ], | ||
| }), | ||
| }); | ||
|
|
||
| const channels = await listChannels("xoxb-token"); | ||
| expect(channels).toHaveLength(2); | ||
| expect(channels[0]).toEqual({ id: "C001", name: "general" }); | ||
| expect(channels[1]).toEqual({ id: "C002", name: "alerts" }); | ||
| }); |
There was a problem hiding this comment.
🧹 Nitpick | 🔵 Trivial
Add request-contract assertions for listChannels to catch auth regressions.
This test validates parsed output well, but it should also assert endpoint and Authorization header (same rigor as deliver). That prevents silent breakage in Slack auth wiring.
Proposed assertion additions
it("returns channel list", async () => {
@@
const channels = await listChannels("xoxb-token");
+ expect(mockFetch).toHaveBeenCalledOnce();
+ const [url, options] = mockFetch.mock.calls[0];
+ expect(url).toBe("https://slack.com/api/conversations.list");
+ expect(options.headers.Authorization).toBe("Bearer xoxb-token");
expect(channels).toHaveLength(2);
expect(channels[0]).toEqual({ id: "C001", name: "general" });
expect(channels[1]).toEqual({ id: "C002", name: "alerts" });
});As per coding guidelines, "Assertions are specific (not just truthy/falsy)."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/services/slackDelivery.test.ts` around lines 109 - 124, The test for
listChannels should also assert the outgoing request contract to catch auth
regressions: verify mockFetch (or fetch) was called with the Slack
conversations.list endpoint URL and that the request options include the
Authorization header set to `Bearer xoxb-token` and method "GET" (or appropriate
fetch init); update the test around listChannels to add assertions on
mockFetch.mock.calls[0] (or equivalent) to check the URL string contains
"conversations.list" and that headers.Authorization equals the expected bearer
token. Ensure these assertions coexist with the existing response parsing
assertions in the "returns channel list" test so the test both validates output
and the request shape used by listChannels.
| describe("buildWebhookPayload", () => { | ||
| it("builds correct payload structure", () => { | ||
| const monitor = makeMonitor(); | ||
| const change = makeChange(); | ||
| const payload = buildWebhookPayload(monitor, change); | ||
|
|
||
| expect(payload.event).toBe("change.detected"); | ||
| expect(payload.monitorId).toBe(1); | ||
| expect(payload.monitorName).toBe("Test Monitor"); | ||
| expect(payload.url).toBe("https://example.com"); | ||
| expect(payload.oldValue).toBe("$19.99"); | ||
| expect(payload.newValue).toBe("$24.99"); | ||
| expect(payload.detectedAt).toBeDefined(); | ||
| expect(payload.timestamp).toBeDefined(); | ||
| }); | ||
| }); |
There was a problem hiding this comment.
🧹 Nitpick | 🔵 Trivial
Payload structure test is thorough.
Tests all expected fields: event, monitorId, monitorName, url, oldValue, newValue, detectedAt, timestamp. Consider adding an assertion that detectedAt equals the expected ISO string value rather than just toBeDefined() for more specific verification.
🔧 More precise timestamp assertion
- expect(payload.detectedAt).toBeDefined();
+ expect(payload.detectedAt).toBe("2024-01-01T00:00:00.000Z");📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| describe("buildWebhookPayload", () => { | |
| it("builds correct payload structure", () => { | |
| const monitor = makeMonitor(); | |
| const change = makeChange(); | |
| const payload = buildWebhookPayload(monitor, change); | |
| expect(payload.event).toBe("change.detected"); | |
| expect(payload.monitorId).toBe(1); | |
| expect(payload.monitorName).toBe("Test Monitor"); | |
| expect(payload.url).toBe("https://example.com"); | |
| expect(payload.oldValue).toBe("$19.99"); | |
| expect(payload.newValue).toBe("$24.99"); | |
| expect(payload.detectedAt).toBeDefined(); | |
| expect(payload.timestamp).toBeDefined(); | |
| }); | |
| }); | |
| describe("buildWebhookPayload", () => { | |
| it("builds correct payload structure", () => { | |
| const monitor = makeMonitor(); | |
| const change = makeChange(); | |
| const payload = buildWebhookPayload(monitor, change); | |
| expect(payload.event).toBe("change.detected"); | |
| expect(payload.monitorId).toBe(1); | |
| expect(payload.monitorName).toBe("Test Monitor"); | |
| expect(payload.url).toBe("https://example.com"); | |
| expect(payload.oldValue).toBe("$19.99"); | |
| expect(payload.newValue).toBe("$24.99"); | |
| expect(payload.detectedAt).toBe("2024-01-01T00:00:00.000Z"); | |
| expect(payload.timestamp).toBeDefined(); | |
| }); | |
| }); |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/services/webhookDelivery.test.ts` around lines 71 - 86, Update the
test for buildWebhookPayload to assert the exact detectedAt value instead of
only toBeDefined(): make the change fixture deterministic (or extract the
expected ISO string from the change created by makeChange()) and replace the
detectedAt assertion with a strict equality check (e.g.,
expect(payload.detectedAt).toBe(expectedIsoString)); reference the
buildWebhookPayload call and makeChange()/makeMonitor() used in the test so the
expected timestamp is sourced from the fixture.
- Use ssrfSafeFetch instead of raw fetch in webhook delivery to close TOCTOU/DNS-rebinding gap - Prevent custom headers from overriding security headers (signature, content-type, user-agent) by applying them last - Fail closed when SLACK_CLIENT_SECRET is missing instead of signing with empty string - Validate channel param with Zod on DELETE route - Clear Slack channels cache on disconnect to prevent stale data - Log Slack digest deliveries to delivery_log table - Add error handling to handleRevealSecret to prevent unhandled promise rejections - Remove unnecessary CSRF exemption for Slack callback GET route - Add error codes to all new JSON error responses per API contract - Normalize .env.example Slack vars for consistent dotenv parsing https://claude.ai/code/session_01KuoUHkKehAzWNLszLvk4Js
There was a problem hiding this comment.
♻️ Duplicate comments (9)
server/routes.notificationChannels.test.ts (2)
165-167: 🧹 Nitpick | 🔵 TrivialTest harness bypasses middleware chain.
The
callHandlerhelper (lines 205-211) only executes the last registered handler, skipping middleware likeisAuthenticatedand rate limiters. While mocking these as pass-through is acceptable for unit tests, it means security middleware behavior (e.g., correct middleware ordering, auth enforcement) is not actually verified.Consider either:
- Adding integration tests with a real Express app via supertest
- Modifying
callHandlerto execute the full middleware chainThis is a test coverage gap rather than a bug in the production code.
Also applies to: 205-211
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/routes.notificationChannels.test.ts` around lines 165 - 167, The test helper callHandler currently only invokes the last registered handler and thereby skips middleware like isAuthenticated and the express-rate-limit mock; update tests to either add an integration test using a real Express app (e.g., supertest) or modify the callHandler helper so it executes the full middleware chain in order (respecting middleware signatures and next() propagation) so that middleware functions such as isAuthenticated and the rate-limiter are actually run and asserted in unit tests.
767-809:⚠️ Potential issue | 🟠 MajorMissing: Slack OAuth callback success path test.
The callback tests only cover error branches (error param, missing params, invalid state). There's no test for the happy path that verifies:
- Valid state HMAC acceptance
- Token exchange handling
- Token encryption before storage
- Storage upsert invocation
- Success redirect to
/?slack=connectedThis is the highest-risk OAuth path and should have explicit coverage to catch regressions in authentication/authorization logic.
Suggested test case
it("handles successful OAuth callback with valid state", async () => { // Setup: compute valid HMAC for user const { createHmac } = await import("node:crypto"); process.env.SLACK_CLIENT_ID = "test-client-id"; process.env.SLACK_CLIENT_SECRET = "test-client-secret"; const userId = "user1"; const validSig = createHmac("sha256", "test-client-secret").update(userId).digest("hex"); // Mock token exchange response global.fetch = vi.fn().mockResolvedValueOnce({ json: async () => ({ ok: true, access_token: "xoxb-test-token", team: { id: "T001", name: "TestTeam" }, scope: "chat:write", }), }); const req = makeReq(userId, { query: { code: "auth-code", state: `${userId}:${validSig}` }, protocol: "https", get: () => "example.com", }); const res = await callHandler("get", ENDPOINT, req); expect(res._redirectUrl).toContain("slack=connected"); expect(mockEncryptToken).toHaveBeenCalledWith("xoxb-test-token"); expect(mockUpsertSlackConnection).toHaveBeenCalledWith(expect.objectContaining({ userId, teamId: "T001", })); // Cleanup delete process.env.SLACK_CLIENT_ID; delete process.env.SLACK_CLIENT_SECRET; });As per coding guidelines, security-related tests should be thorough, and OAuth flows are high-risk paths.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/routes.notificationChannels.test.ts` around lines 767 - 809, Add a happy-path test for GET /api/integrations/slack/callback that sets SLACK_CLIENT_ID and SLACK_CLIENT_SECRET, computes a valid HMAC for userId using createHmac("sha256", process.env.SLACK_CLIENT_SECRET).update(userId).digest("hex"), mocks global.fetch to return a successful Slack token response (ok:true, access_token, team.id/name, scope), calls the handler using makeReq and callHandler with query { code, state: `${userId}:${validSig}` }, then assert the response redirect contains "slack=connected" and that mockEncryptToken and mockUpsertSlackConnection were called with the expected token and connection object (including userId and teamId), and finally restore global.fetch and unset the env vars; reference ENDPOINT, makeReq, callHandler, mockEncryptToken, and mockUpsertSlackConnection.server/routes.ts (3)
684-686:⚠️ Potential issue | 🟡 MinorValidate query parameters with Zod schema.
The
limitandchannelquery parameters are parsed ad-hoc without schema validation. This violates the coding guideline requiring all request params/query values to be validated with Zod before use.Suggested fix
+ const deliveriesQuerySchema = z.object({ + limit: z.coerce.number().int().min(1).max(200).default(50), + channel: z.enum(["email", "webhook", "slack"]).optional(), + }); + // GET /api/monitors/:id/deliveries app.get(api.monitors.channels.deliveries.path, isAuthenticated, async (req: any, res) => { const id = Number(req.params.id); const monitor = await storage.getMonitor(id); if (!monitor) return res.status(404).json({ message: "Not found" }); if (String(monitor.userId) !== String(req.user.claims.sub)) return res.status(403).json({ message: "Forbidden" }); - const limit = Math.min(Number(req.query.limit) || 50, 200); - const channel = req.query.channel as string | undefined; + const parsed = deliveriesQuerySchema.safeParse(req.query); + if (!parsed.success) { + return res.status(400).json({ message: parsed.error.errors[0].message, code: "VALIDATION_ERROR" }); + } + const { limit, channel } = parsed.data; const entries = await storage.getDeliveryLog(id, limit, channel); res.json(entries); });As per coding guidelines, all request body, params, and query values MUST be validated with Zod schemas before use.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/routes.ts` around lines 684 - 686, Replace the ad-hoc parsing of req.query.limit and req.query.channel with a Zod-validated query schema (e.g., create a Zod object like { limit: z.preprocess(...), channel: z.string().optional() }) and use it to parse/validate req.query before using values; then derive const limit = Math.min(validated.limit, 200) and const channel = validated.channel and pass them to storage.getDeliveryLog(id, limit, channel) so all query params are validated (refer to req.query, limit, channel, and storage.getDeliveryLog in the route handler).
589-599:⚠️ Potential issue | 🟠 MajorTier gate blocks disable flows for downgraded users.
The current tier check at lines 590-599 runs before parsing the request body, blocking free-tier users from any webhook/slack operation—including disabling a channel they configured while on a paid plan.
If a user downgrades from Pro to Free, they cannot disable their webhook or Slack channels through this endpoint, leaving stale configurations that may continue to fail or consume resources.
Consider moving the tier check after input parsing and only enforcing it when
input.enabled === true:+ const input = api.monitors.channels.put.input.parse(req.body); + // Tier gating: webhook and slack require Pro or Power - if (channel !== "email") { + if (channel !== "email" && input.enabled) { const user = await authStorage.getUser(req.user.claims.sub); const tier = ((user as any)?.tier || "free") as UserTier; if (tier === "free") { return res.status(403).json({...}); } } - - const input = api.monitors.channels.put.input.parse(req.body);This allows downgraded users to gracefully disable or remove their paid-tier channel configurations.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/routes.ts` around lines 589 - 599, The tier-gate currently executed when channel !== "email" (using authStorage.getUser and req.user.claims.sub to derive tier) runs before the request body is parsed and therefore blocks downgraded users from disabling paid channels; move this tier check to after the request body/input is parsed and only enforce it when input.enabled === true (i.e., when trying to enable or create a webhook/Slack channel), leaving disable/remove operations allowed for free-tier users.
811-843:⚠️ Potential issue | 🟡 MinorUnbounded in-memory cache for Slack channels.
The
slackChannelsCacheMap (line 811) grows without bounds—one entry per user who fetches channels. While the cache has TTL logic (5 minutes), there's no size limit or LRU eviction. In a multi-tenant environment with many users, this could lead to memory exhaustion.Consider adding a max size or using an LRU cache library:
// Option: Simple size limit const MAX_CACHE_SIZE = 1000; if (slackChannelsCache.size >= MAX_CACHE_SIZE) { const oldestKey = slackChannelsCache.keys().next().value; slackChannelsCache.delete(oldestKey); } slackChannelsCache.set(userId, { data: channels, timestamp: Date.now() });🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/routes.ts` around lines 811 - 843, The slackChannelsCache Map currently grows unbounded; add a size-cap and eviction policy when creating/setting entries to prevent memory exhaustion. Replace or augment slackChannelsCache handling: either swap the Map for an LRU implementation (e.g., using an LRU cache library) or enforce a MAX_CACHE_SIZE when inserting in the block that calls slackChannelsCache.set(userId, ...); if cache size >= MAX_CACHE_SIZE evict the oldest/least-recently-used entry before setting the new one. Update both the declaration of slackChannelsCache and the insertion logic around listSlackChannels/ slackChannelsCache.set to implement the eviction semantics.server/services/notification.ts (2)
468-470:⚠️ Potential issue | 🟠 MajorQueue acknowledgment only considers email delivery outcome.
Lines 468-470 and 516-518 mark queue entries as delivered when
result.email?.success !== false, ignoring webhook and Slack outcomes. For monitors using only webhook or Slack (email disabled), entries are marked delivered even when those channels fail.Consider checking all configured channel outcomes:
- if (result.email?.success !== false) { + const anyChannelFailed = + (result.email && result.email.success === false) || + (result.webhook && !result.webhook.success) || + (result.slack && !result.slack.success); + if (!anyChannelFailed) { await storage.markQueueEntriesDelivered(entries.map((e) => e.id)); }Or at minimum, only mark delivered if at least one channel succeeded.
Also applies to: 516-518
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/services/notification.ts` around lines 468 - 470, The queue-acknowledgement logic currently only inspects result.email?.success before calling storage.markQueueEntriesDelivered(entries.map(e => e.id)), so monitors that use only webhook or Slack get acked even if those channels failed; update the logic in the places that call storage.markQueueEntriesDelivered (the blocks handling send results) to compute success across all configured channels (e.g., check result.email?.success, result.webhook?.success, result.slack?.success) and only mark entries delivered if at least one of the enabled channels returned success, or alternatively determine configured channels for the monitor and require at least one configured channel to succeed before calling storage.markQueueEntriesDelivered.
153-274:⚠️ Potential issue | 🟠 MajorMissing tier verification during channel delivery.
deliverToChannelsdispatches to webhook and Slack channels without verifying the monitor owner's current tier. If a user downgrades from Pro to Free after configuring paid channels, their webhook and Slack notifications continue to fire—bypassing the tier restriction enforced at configuration time.This is a privilege escalation gap: the routes block free users from configuring webhook/slack, but the delivery path doesn't enforce the same restriction.
Suggested fix
async function deliverToChannels( monitor: Monitor, change: MonitorChange, emailOverride?: string ): Promise<ChannelDeliveryResult> { const result: ChannelDeliveryResult = {}; + + // Fetch user tier for paid channel eligibility check + let userTier: string = "free"; + try { + const { authStorage } = await import("../replit_integrations/auth/storage"); + const user = await authStorage.getUser(monitor.userId); + userTier = (user as any)?.tier || "free"; + } catch { /* fall back to free */ } + let channels: NotificationChannel[]; // ... existing code ... const enabledChannels = channels.filter((c) => c.enabled); const deliveries = enabledChannels.map(async (ch) => { try { switch (ch.channel) { case "email": { // ... existing code ... } case "webhook": { + if (userTier === "free") { + console.warn(`[Notification] Skipping webhook (free tier) for monitor ${monitor.id}`); + await storage.addDeliveryLog({ + monitorId: monitor.id, + changeId: change.id, + channel: "webhook", + status: "failed", + response: { error: "tier_not_eligible" }, + }); + break; + } // ... existing webhook delivery code ... } case "slack": { + if (userTier === "free") { + // Similar skip logic + break; + } // ... existing slack delivery code ... } }As per coding guidelines, ensure no privilege escalation between tiers (free/pro/power).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/services/notification.ts` around lines 153 - 274, deliverToChannels currently sends webhook and slack notifications without verifying the monitor owner's current tier, allowing downgraded users to keep receiving paid-channel deliveries; fix by checking the user's current tier (e.g., fetch via storage.getUserById or storage.getUserTier using monitor.userId) before dispatching paid channels and skip or mark deliveries as failed/pending when the tier no longer permits the channel. Concretely, in deliverToChannels adjust the enabledChannels handling (the enabledChannels variable and the "webhook" and "slack" switch cases) to filter out or short-circuit webhook/slack when the fetched user tier is insufficient, and create appropriate storage.addDeliveryLog entries and warnings for skipped deliveries so they are recorded. Ensure you still allow email regardless of tier and reuse monitor.userId for the user lookup.client/src/components/NotificationChannelsPanel.tsx (1)
110-113:⚠️ Potential issue | 🟡 MinorAccessibility: Interactive controls missing accessible names.
Multiple interactive elements lack programmatic accessible names, which is a compliance issue for screen reader users:
- Email Switch (line 110-113) - needs
aria-label="Enable email notifications"- Webhook Switch (line 130-136) - needs
aria-label="Enable webhook notifications"- Secret reveal button (line 168-170) - needs accessible label describing action
- Slack Switch (line 195-205) - needs
aria-label="Enable Slack notifications"- Slack channel Select (line 241-247) - needs associated
<Label>withhtmlFororaria-labelSuggested fix for key elements
<Switch + aria-label="Enable email notifications" checked={emailChannel?.enabled ?? true} onCheckedChange={(checked) => handleToggleChannel("email", checked, {})} /> ... <Switch + aria-label="Enable webhook notifications" checked={webhookChannel?.enabled ?? false} ... - <Button variant="ghost" size="sm" onClick={handleRevealSecret} disabled={revealSecret.isPending}> + <Button variant="ghost" size="sm" onClick={handleRevealSecret} disabled={revealSecret.isPending} aria-label={showSecret ? "Hide webhook signing secret" : "Reveal webhook signing secret"}> ... <Switch + aria-label="Enable Slack notifications" checked={slackChannel?.enabled ?? false} ... + <Label htmlFor="slack-channel-select">Slack channel</Label> <Select value={selectedSlackChannel || (slackChannel?.config as any)?.channelId || ""} onValueChange={setSelectedSlackChannel} > - <SelectTrigger> + <SelectTrigger id="slack-channel-select">As per coding guidelines, interactive elements need proper ARIA labels and form inputs require associated labels.
Also applies to: 130-136, 168-170, 195-205, 241-247
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@client/src/components/NotificationChannelsPanel.tsx` around lines 110 - 113, In NotificationChannelsPanel.tsx, add programmatic accessible names to all interactive controls: give each Switch (the email Switch using handleToggleChannel("email", ...), the webhook Switch, and the Slack Switch) an aria-label (e.g., aria-label="Enable email notifications", "Enable webhook notifications", "Enable Slack notifications"); update the secret reveal button to include an accessible name that describes its action (e.g., aria-label="Reveal webhook secret" or similar); and ensure the Slack channel Select has either an associated <Label htmlFor="..."> that matches the Select's id or an aria-label on the Select itself so it is announced correctly by screen readers. Ensure you use the existing handlers (e.g., handleToggleChannel) and component instances when adding these attributes so no behavior changes.server/services/webhookDelivery.test.ts (1)
72-87: 🧹 Nitpick | 🔵 TrivialConsider stricter timestamp assertion.
Line 84 uses
toBeDefined()fordetectedAt, but sincemakeChange()creates a deterministic date (new Date("2024-01-01T00:00:00Z")), a strict equality check would catch regressions more precisely:- expect(payload.detectedAt).toBeDefined(); + expect(payload.detectedAt).toBe("2024-01-01T00:00:00.000Z");This is a minor improvement for test precision.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/services/webhookDelivery.test.ts` around lines 72 - 87, The test uses a loose assertion for the timestamp; replace the toBeDefined() check for payload.detectedAt with a strict equality check against the deterministic date produced by makeChange() (e.g., new Date("2024-01-01T00:00:00Z") or the exact value returned by makeChange()) so that buildWebhookPayload (invoked with makeMonitor() and makeChange()) asserts payload.detectedAt exactly equals the expected Date/ISO string and catches regressions.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@client/src/components/NotificationChannelsPanel.tsx`:
- Around line 110-113: In NotificationChannelsPanel.tsx, add programmatic
accessible names to all interactive controls: give each Switch (the email Switch
using handleToggleChannel("email", ...), the webhook Switch, and the Slack
Switch) an aria-label (e.g., aria-label="Enable email notifications", "Enable
webhook notifications", "Enable Slack notifications"); update the secret reveal
button to include an accessible name that describes its action (e.g.,
aria-label="Reveal webhook secret" or similar); and ensure the Slack channel
Select has either an associated <Label htmlFor="..."> that matches the Select's
id or an aria-label on the Select itself so it is announced correctly by screen
readers. Ensure you use the existing handlers (e.g., handleToggleChannel) and
component instances when adding these attributes so no behavior changes.
In `@server/routes.notificationChannels.test.ts`:
- Around line 165-167: The test helper callHandler currently only invokes the
last registered handler and thereby skips middleware like isAuthenticated and
the express-rate-limit mock; update tests to either add an integration test
using a real Express app (e.g., supertest) or modify the callHandler helper so
it executes the full middleware chain in order (respecting middleware signatures
and next() propagation) so that middleware functions such as isAuthenticated and
the rate-limiter are actually run and asserted in unit tests.
- Around line 767-809: Add a happy-path test for GET
/api/integrations/slack/callback that sets SLACK_CLIENT_ID and
SLACK_CLIENT_SECRET, computes a valid HMAC for userId using createHmac("sha256",
process.env.SLACK_CLIENT_SECRET).update(userId).digest("hex"), mocks
global.fetch to return a successful Slack token response (ok:true, access_token,
team.id/name, scope), calls the handler using makeReq and callHandler with query
{ code, state: `${userId}:${validSig}` }, then assert the response redirect
contains "slack=connected" and that mockEncryptToken and
mockUpsertSlackConnection were called with the expected token and connection
object (including userId and teamId), and finally restore global.fetch and unset
the env vars; reference ENDPOINT, makeReq, callHandler, mockEncryptToken, and
mockUpsertSlackConnection.
In `@server/routes.ts`:
- Around line 684-686: Replace the ad-hoc parsing of req.query.limit and
req.query.channel with a Zod-validated query schema (e.g., create a Zod object
like { limit: z.preprocess(...), channel: z.string().optional() }) and use it to
parse/validate req.query before using values; then derive const limit =
Math.min(validated.limit, 200) and const channel = validated.channel and pass
them to storage.getDeliveryLog(id, limit, channel) so all query params are
validated (refer to req.query, limit, channel, and storage.getDeliveryLog in the
route handler).
- Around line 589-599: The tier-gate currently executed when channel !== "email"
(using authStorage.getUser and req.user.claims.sub to derive tier) runs before
the request body is parsed and therefore blocks downgraded users from disabling
paid channels; move this tier check to after the request body/input is parsed
and only enforce it when input.enabled === true (i.e., when trying to enable or
create a webhook/Slack channel), leaving disable/remove operations allowed for
free-tier users.
- Around line 811-843: The slackChannelsCache Map currently grows unbounded; add
a size-cap and eviction policy when creating/setting entries to prevent memory
exhaustion. Replace or augment slackChannelsCache handling: either swap the Map
for an LRU implementation (e.g., using an LRU cache library) or enforce a
MAX_CACHE_SIZE when inserting in the block that calls
slackChannelsCache.set(userId, ...); if cache size >= MAX_CACHE_SIZE evict the
oldest/least-recently-used entry before setting the new one. Update both the
declaration of slackChannelsCache and the insertion logic around
listSlackChannels/ slackChannelsCache.set to implement the eviction semantics.
In `@server/services/notification.ts`:
- Around line 468-470: The queue-acknowledgement logic currently only inspects
result.email?.success before calling
storage.markQueueEntriesDelivered(entries.map(e => e.id)), so monitors that use
only webhook or Slack get acked even if those channels failed; update the logic
in the places that call storage.markQueueEntriesDelivered (the blocks handling
send results) to compute success across all configured channels (e.g., check
result.email?.success, result.webhook?.success, result.slack?.success) and only
mark entries delivered if at least one of the enabled channels returned success,
or alternatively determine configured channels for the monitor and require at
least one configured channel to succeed before calling
storage.markQueueEntriesDelivered.
- Around line 153-274: deliverToChannels currently sends webhook and slack
notifications without verifying the monitor owner's current tier, allowing
downgraded users to keep receiving paid-channel deliveries; fix by checking the
user's current tier (e.g., fetch via storage.getUserById or storage.getUserTier
using monitor.userId) before dispatching paid channels and skip or mark
deliveries as failed/pending when the tier no longer permits the channel.
Concretely, in deliverToChannels adjust the enabledChannels handling (the
enabledChannels variable and the "webhook" and "slack" switch cases) to filter
out or short-circuit webhook/slack when the fetched user tier is insufficient,
and create appropriate storage.addDeliveryLog entries and warnings for skipped
deliveries so they are recorded. Ensure you still allow email regardless of tier
and reuse monitor.userId for the user lookup.
In `@server/services/webhookDelivery.test.ts`:
- Around line 72-87: The test uses a loose assertion for the timestamp; replace
the toBeDefined() check for payload.detectedAt with a strict equality check
against the deterministic date produced by makeChange() (e.g., new
Date("2024-01-01T00:00:00Z") or the exact value returned by makeChange()) so
that buildWebhookPayload (invoked with makeMonitor() and makeChange()) asserts
payload.detectedAt exactly equals the expected Date/ISO string and catches
regressions.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 22501d78-6c0b-4ffb-9bcf-e54dbfe5a6fd
📒 Files selected for processing (7)
.env.exampleclient/src/components/NotificationChannelsPanel.tsxserver/routes.notificationChannels.test.tsserver/routes.tsserver/services/notification.tsserver/services/webhookDelivery.test.tsserver/services/webhookDelivery.ts
Summary
emailEnabledboolean — monitors without channel rows fall back to legacy behaviorChanges
Backend
shared/schema.ts): 3 new tables —notificationChannels,deliveryLog,slackConnectionsserver/storage.ts): 11 new methods for channel CRUD, delivery logging, Slack connectionsserver/utils/encryption.ts): AES-256-GCM for Slack bot tokensserver/services/webhookDelivery.ts): HMAC-SHA256 signing, SSRF validation, 5s timeoutserver/services/slackDelivery.ts): Block Kit messages viachat.postMessageserver/services/notification.ts): Multi-channel dispatch withPromise.allSettledfor failure isolationserver/services/scheduler.ts): Webhook retry cron (every minute) + delivery log cleanup (daily)server/routes.ts): 10 new API endpoints for channel management and Slack OAuthshared/routes.ts): Zod schemas for channel config inputFrontend
use-notification-channels.tsanduse-slack.tswith React QuerySecurity
isPrivateUrl()Test plan
npm run test)npm run build)https://claude.ai/code/session_01KuoUHkKehAzWNLszLvk4Js
Summary by CodeRabbit