📝 Walkthrough

Adds a Chat feature with webhook-based LLM integration: new API endpoints and an in-memory job store for webhook results, a chat client library with polling, five chat UI components, icon additions, styling/navigation updates, and environment variables for webhook configuration.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Browser
    participant ChatAPI as /api/llm/call (proxy)
    participant LLM as Upstream LLM Service
    participant Webhook as /api/llm/webhook/[id]
    participant JobStore as In-Memory Job Store
    Client->>ChatAPI: POST create LLM call (callback_url)
    ChatAPI->>ChatAPI: Append secret if configured
    ChatAPI->>LLM: Forward request
    LLM->>LLM: Process asynchronously
    Client->>ChatAPI: Poll GET /api/llm/call/[job_id]/result
    ChatAPI->>JobStore: getResult(job_id)
    JobStore-->>ChatAPI: 204 / no result
    LLM-->>Webhook: POST result to callback_url (secret)
    Webhook->>Webhook: Validate secret (query/header)
    Webhook->>JobStore: publish(job_id, record)
    Client->>ChatAPI: Poll GET /api/llm/call/[job_id]/result
    ChatAPI->>JobStore: getResult(job_id)
    JobStore-->>ChatAPI: Return record
    ChatAPI->>JobStore: clearResult(job_id)
    ChatAPI-->>Client: 200 { success, llm_response, ... }
```
```mermaid
sequenceDiagram
    participant User as User
    participant ChatPage as Chat Page
    participant ChatClient as chatClient
    participant APIRoute as /api/llm/call routes
    participant JobStore as Job Store
    User->>ChatPage: Type message + send
    ChatPage->>ChatPage: Append user message + pending assistant
    ChatPage->>ChatClient: createLLMCall(..., callback_url)
    ChatClient->>APIRoute: POST /api/llm/call
    APIRoute-->>ChatClient: { job_id, ... }
    ChatPage->>ChatClient: pollLLMCall(job_id)
    ChatClient->>APIRoute: GET /api/llm/call/[job_id]/result (loop)
    APIRoute->>JobStore: getResult(job_id)
    JobStore-->>APIRoute: LLMJobRecord (when webhook published)
    APIRoute-->>ChatClient: { success, llm_response }
    ChatClient->>ChatPage: extracted assistant text
    ChatPage->>ChatPage: Update assistant message (complete/error)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: 3 passed | 2 failed (1 warning, 1 inconclusive)
Actionable comments posted: 9
🧹 Nitpick comments (6)
app/components/user-menu/Branding.tsx (1)
9-12: Right-size `next/image` intrinsic dimensions to match rendered size.

Lines 9–11 declare a full intrinsic size (801x311) while CSS constrains display to `h-10`; this over-declares the image and causes `priority` to preload unnecessarily large variants. Align declared dimensions with rendered size using the aspect-ratio-preserving values below.

Suggested change:

```diff
-  width={801}
-  height={311}
+  width={104}
+  height={40}
+  sizes="104px"
   className="h-10 w-auto"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/components/user-menu/Branding.tsx` around lines 9 - 12, The next/image in the Branding component is declaring intrinsic dimensions width={801} height={311} while CSS forces a small rendered height via className="h-10 w-auto", causing oversized preloads; update the intrinsic width/height to scaled values that preserve the original aspect ratio and match the rendered size (e.g., compute new width/height consistent with h-10 and the original 801x311 ratio) so the Image component in Branding.tsx uses appropriately smaller intrinsic dimensions and avoids loading unnecessarily large priority variants.

app/components/chat/ChatConfigPicker.tsx (1)
108-121: Add ARIA menu semantics for better accessibility.

Consider adding `aria-expanded`/`aria-haspopup` on the trigger and `role` attributes on the menu/menu items to improve screen-reader navigation.

Also applies to: 149-153, 187-193
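As a sketch of the suggested wiring (the helper name and `menuId` are illustrative, not the component's actual code), the trigger's ARIA attributes could be computed like this:

```typescript
// Hypothetical helper sketching the suggested ARIA attributes for the
// ChatConfigPicker trigger; the real component would spread these props
// onto its button and give the menu container the matching id.
function triggerAriaProps(open: boolean, menuId: string) {
  return {
    "aria-expanded": open,            // reflects current open/closed state
    "aria-haspopup": "menu" as const, // announces that a menu will open
    "aria-controls": menuId,          // points at the menu container's id
  };
}
```

The menu container would then render with `role="menu"` and `id={menuId}`, and each selectable entry with `role="menuitem"`.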
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/components/chat/ChatConfigPicker.tsx` around lines 108 - 121, The trigger button that toggles the menu (the element using onClick={() => !disabled && setOpen(prev => !prev)} and the label/Icons) should include aria-expanded={open} and aria-haspopup="menu" (and an aria-controls pointing to the menu's id); the menu container (the div rendered when open) should have role="menu" and a unique id, and each selectable entry rendered inside that container should have role="menuitem" (or role="menuitemradio"/"menuitemcheckbox" as appropriate) plus tabIndex and keyboard handling; update the components around setOpen/open, the trigger button and the menu div (and the item render logic around lines referenced) to add these attributes so screen-readers can correctly detect and navigate the menu.

app/api/llm/call/route.ts (1)
14-21: Consider using a pure function instead of mutating the body.

The `appendSecretToCallback` function mutates its input parameter. While this works correctly, a pure function returning the modified body would be more predictable and easier to test.

♻️ Optional refactor to pure function:

```diff
-function appendSecretToCallback(body: Record<string, unknown>): void {
+function withSecretCallback(body: Record<string, unknown>): Record<string, unknown> {
   const secret = process.env.WEBHOOK_SECRET;
-  if (!secret) return;
+  if (!secret) return body;
   const url = body.callback_url;
-  if (typeof url !== "string" || url.length === 0) return;
+  if (typeof url !== "string" || url.length === 0) return body;
   const sep = url.includes("?") ? "&" : "?";
-  body.callback_url = `${url}${sep}secret=${encodeURIComponent(secret)}`;
+  return { ...body, callback_url: `${url}${sep}secret=${encodeURIComponent(secret)}` };
 }
```

Then in POST:

```diff
 const body = (await request.json()) as Record<string, unknown>;
-appendSecretToCallback(body);
+const finalBody = withSecretCallback(body);
 const { status, data } = await apiClient(request, "/api/v1/llm/call", {
   method: "POST",
-  body: JSON.stringify(body),
+  body: JSON.stringify(finalBody),
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/api/llm/call/route.ts` around lines 14 - 21, The helper appendSecretToCallback currently mutates its input body; change it to a pure function that returns a new body object (e.g., function appendSecretToCallback(body: Record<string, unknown>): Record<string, unknown>) that leaves the original untouched, only adding/overwriting callback_url in the returned object when WEBHOOK_SECRET exists and url is valid; update callers (the POST handler) to use the returned value instead of relying on in-place mutation and preserve the same runtime behavior and types (encodeURIComponent, same separator logic).

app/globals.css (1)
82-90: Consider adding dark mode accent color overrides.

The dark mode media query updates background, foreground, muted, and border colors but doesn't override the accent colors (`--accent`, `--accent-hover`). The new blue accent (#1f4496) may have insufficient contrast on dark backgrounds.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/globals.css` around lines 82 - 90, The dark-mode :root block updates background/foreground colors but doesn't override the accent variables, so --accent and --accent-hover stay as the light-theme blue and may lack contrast; add overrides for --accent and --accent-hover inside the `@media` (prefers-color-scheme: dark) :root (the CSS variables named --accent and --accent-hover) and pick darker-theme-friendly values (e.g., a lighter/brighter blue variant or contrast-optimized hex) to ensure sufficient contrast on the dark backgrounds and update any related accent variables the same way.

app/components/chat/ChatMessageList.tsx (1)
18-20: Auto-scroll triggers on any message change, not just appends.

The effect depends on the entire `messages` array reference. This means editing an existing message's content or status (e.g., when a pending message receives its response) will also trigger a scroll. This is likely the intended behavior for a chat UI, but if you want scroll-to-bottom only on new messages, consider tracking `messages.length` or the last message's `id` instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/components/chat/ChatMessageList.tsx` around lines 18 - 20, The current useEffect in ChatMessageList that calls bottomRef.current?.scrollIntoView({ behavior: "smooth", block: "end" }) depends on the whole messages array and will run on any change; change the dependency to a more specific signal (e.g., messages.length or the id of the last message such as messages[messages.length-1]?.id) so scrolling only happens when a message is appended, not when an existing message is updated; update the dependency array and ensure bottomRef and messages are still referenced inside the effect (keep the same effect body but replace [messages] with [messages.length] or [messages.at(-1)?.id]).

app/api/llm/webhook/[callback_id]/route.ts (1)
28-37: Consider constant-time comparison for webhook secret.

The direct string equality comparison (`provided === expected`) is vulnerable to timing attacks, which could allow an attacker to guess the secret character by character by measuring response times. For webhook secrets in production, use a constant-time comparison.

🔒 Suggested fix using crypto.timingSafeEqual:

```diff
+import { timingSafeEqual } from "crypto";
+
+function safeCompare(a: string, b: string): boolean {
+  if (a.length !== b.length) return false;
+  return timingSafeEqual(Buffer.from(a), Buffer.from(b));
+}
+
 function isAuthorized(request: Request): boolean {
   const expected = process.env.WEBHOOK_SECRET;
   if (!expected) return true;
   const url = new URL(request.url);
   const provided =
     url.searchParams.get("secret") ||
     request.headers.get("x-webhook-secret") ||
     "";
-  return provided === expected;
+  return safeCompare(provided, expected);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/api/llm/webhook/`[callback_id]/route.ts around lines 28 - 37, Replace the direct equality check in isAuthorized with a constant-time comparison: keep the early return when process.env.WEBHOOK_SECRET is missing, otherwise obtain expected and provided, convert both to Buffers (e.g., Buffer.from(expected) and Buffer.from(provided || "")), and use crypto.timingSafeEqual after first checking lengths match (if lengths differ return false) to avoid throwing; ensure you import/require Node's crypto and reference the isAuthorized function, expected and provided variables when updating the logic.
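For reference, a self-contained version of such a constant-time check (module layout is illustrative; the review's suggested route code would call it from `isAuthorized`):

```typescript
import { timingSafeEqual } from "node:crypto";

// Constant-time string comparison: reject on length mismatch first
// (timingSafeEqual throws on unequal-length buffers), then compare
// every byte in constant time regardless of where the strings differ.
function safeCompare(a: string, b: string): boolean {
  const bufA = Buffer.from(a);
  const bufB = Buffer.from(b);
  if (bufA.length !== bufB.length) return false;
  return timingSafeEqual(bufA, bufB);
}
```

Note the length check still leaks the secret's length via timing; that is generally considered acceptable for webhook secrets.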
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.env.example:
- Line 3: The GUARDRAILS_TOKEN environment variable line currently has spaces
around the equals sign; change the assignment to use dotenv-compatible KEY=value
formatting by removing the spaces so the line reads GUARDRAILS_TOKEN=<value or
empty> (i.e., no spaces around '=') to fix linting and dotenv parsing issues.
In `@app/api/llm/call/[job_id]/result/route.ts`:
- Around line 19-27: Replace the non-atomic getResult() then clearResult()
pattern with a new atomic popResult(jobId) helper in the job store (e.g., the
module that currently exports getResult/clearResult) that retrieves the
LLMJobRecord, deletes it from store.results, and clears/deletes any associated
timer from store.timers in one operation; then update the route handler to call
popResult(job_id) instead of getResult() followed by clearResult() so a single
call both returns and removes the record atomically (preserve the same return
semantics as getResult).
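A minimal sketch of what such an atomic helper could look like (store shape simplified here; the actual `LLMJobRecord` type and timer handling live in `app/lib/llmJobStore.ts`):

```typescript
// Simplified stand-ins for the job store internals.
type LLMJobRecord = { payload: unknown };

const results = new Map<string, LLMJobRecord>();
const timers = new Map<string, ReturnType<typeof setTimeout>>();

// Retrieve-and-delete in one call, so two concurrent polls cannot both
// observe the same record between a separate get and clear.
function popResult(jobId: string): LLMJobRecord | null {
  const record = results.get(jobId) ?? null;
  if (record !== null) {
    results.delete(jobId);
    const timer = timers.get(jobId);
    if (timer) {
      clearTimeout(timer);
      timers.delete(jobId);
    }
  }
  return record;
}
```

The route handler would then call `popResult(job_id)` once instead of `getResult()` followed by `clearResult()`.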
In `@app/components/chat/ChatInput.tsx`:
- Around line 68-71: The button's onClick currently calls onSend directly; add
the same defensive guard used in handleKeyDown so the click handler checks
disabled/isPending and that value.trim() is non-empty before invoking onSend.
Update the button's onClick in ChatInput.tsx to mirror the condition (e.g., if
(!disabled && !isPending && value.trim()) onSend()), using the existing
canSend/disabled/isPending/value variables so clicks won't trigger sends when
the component is not ready.
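The guard described above can be factored into a small pure predicate; this is a sketch, not the component's actual code (`canSend` is the review's suggested name):

```typescript
// Pure predicate mirroring the keyboard handler's guard: sending is
// allowed only when the input is enabled, no request is pending, and
// the trimmed draft is non-empty.
function canSend(disabled: boolean, isPending: boolean, value: string): boolean {
  return !disabled && !isPending && value.trim().length > 0;
}

// In the click handler: if (canSend(disabled, isPending, value)) onSend();
```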
In `@app/components/user-menu/Branding.tsx`:
- Line 8: The logo image in the Branding component uses a terse alt ("Kaapi");
update the alt attribute on the <img> in Branding (or the component that renders
the logo) to a more descriptive label such as "Kaapi — [short descriptive
tagline]" or "Kaapi logo" so screen readers convey intent; modify the alt value
in Branding.tsx where the image is rendered (the alt prop on the logo <img>)
accordingly.
In `@app/lib/chatClient.ts`:
- Around line 76-83: fetchWebhookResult does not accept or pass an AbortSignal
so its fetch call cannot be cancelled; modify fetchWebhookResult to accept a
signal?: AbortSignal parameter and forward it into the fetch init (signal:
signal) and then update callers (e.g., pollLLMCall and other polling helpers
referenced around lines 110-128 and 150-159) to pass their AbortSignal through;
ensure all fetch calls in functions named fetchWebhookResult (and similar
polling fetch helpers) receive and use the provided signal so in-flight requests
are aborted when signal.aborted is triggered.
- Around line 11-12: This module is client-side but still uses apiFetch and raw
fetch, which bypasses the app's 401/token-refresh flow; replace uses of apiFetch
in createLLMCall and the raw fetch calls in the polling logic (and the
occurrences noted around lines 31-39 and 76-98) with clientFetch so token
refresh and AUTH_EXPIRED_EVENT handling are applied. Import clientFetch from the
same api client module, pass the same endpoint and options you currently send to
apiFetch/fetch, and ensure response-stream handling (for streaming/polling in
createLLMCall) remains identical after switching to clientFetch (i.e., use
clientFetch(...).then(res => res.body / res.json() as before). Keep function
names createLLMCall and the polling loop intact; only replace the network call
helpers.
- Around line 225-267: The configToBlob function currently drops
SavedConfig.promptContent; update configToBlob to copy config.promptContent into
blob.completion.prompt_template.template (creating prompt_template if missing)
so the resulting ConfigBlob preserves the saved prompt template; specifically,
in configToBlob (function name) when building the blob variable, if
config.promptContent is set/non-empty, set blob.completion.prompt_template = {
template: config.promptContent } (or merge into an existing prompt_template) so
chat requests retain the saved prompt template.
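A sketch of the described fix, with the types simplified (the real `SavedConfig`/`ConfigBlob` shapes live in the app's code):

```typescript
// Simplified shapes for illustration only.
type SavedConfig = { promptContent?: string };
type ConfigBlob = { completion: { prompt_template?: { template: string } } };

// Carry promptContent into the blob's prompt_template, creating it when
// missing and merging into an existing one otherwise; returns the blob
// unchanged when there is no saved prompt.
function withPromptTemplate(blob: ConfigBlob, config: SavedConfig): ConfigBlob {
  if (!config.promptContent) return blob;
  return {
    ...blob,
    completion: {
      ...blob.completion,
      prompt_template: {
        ...(blob.completion.prompt_template ?? {}),
        template: config.promptContent,
      },
    },
  };
}
```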
In `@app/lib/llmJobStore.ts`:
- Around line 39-45: The current implementation uses process-local storage via
globalRef/__llmJobStore with Store.results and Store.timers which breaks in
multi-instance/serverless setups; replace this in-memory Store with a
pluggable/shared implementation (e.g., Redis/Upstash) and update publish() and
getResult() to use that backing store: define an abstract JobStore interface
matching the existing methods/shape (results map, expiration/timers behavior),
provide a Redis-backed JobStore implementation, wire initialization where
globalRef.__llmJobStore is set to the Redis-backed instance (or a factory)
instead of an in-memory Map, and update any code referencing store, results,
timers, publish(), and getResult() to call the new shared store methods so
callbacks and polls work across instances.
In `@app/page.tsx`:
- Around line 172-174: The finally block is clearing isPending for stale/aborted
requests; fix by making the AbortController instance local (const controller =
new AbortController()), assign it to abortRef.current when starting the request,
and in the finally handlers only clear isPending (and any UI state) if
abortRef.current === controller so only the most recent request can disable the
composer/config picker; apply the same guard for the other similar block that
uses abortRef and isPending.
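The latest-request-wins pattern described above can be sketched outside React (`AbortController` is a Web/Node global; the function names are illustrative, with `active` standing in for `abortRef.current`):

```typescript
// Starting a new request aborts the previous one; only the request that
// is still "current" at finish time may clear shared pending state.
let active: AbortController | null = null;

function startRequest(): AbortController {
  const controller = new AbortController();
  active?.abort(); // cancel any in-flight predecessor
  active = controller;
  return controller;
}

// Returns true only for the most recent request, so a stale finally
// block cannot reset isPending while a newer request is in flight.
function finishRequest(controller: AbortController): boolean {
  if (active === controller) {
    active = null;
    return true; // safe to setIsPending(false) here
  }
  return false;
}
```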
---
Nitpick comments:
In `@app/api/llm/call/route.ts`:
- Around line 14-21: The helper appendSecretToCallback currently mutates its
input body; change it to a pure function that returns a new body object (e.g.,
function appendSecretToCallback(body: Record<string, unknown>): Record<string,
unknown>) that leaves the original untouched, only adding/overwriting
callback_url in the returned object when WEBHOOK_SECRET exists and url is valid;
update callers (the POST handler) to use the returned value instead of relying
on in-place mutation and preserve the same runtime behavior and types
(encodeURIComponent, same separator logic).
In `@app/api/llm/webhook/[callback_id]/route.ts`:
- Around line 28-37: Replace the direct equality check in isAuthorized with a
constant-time comparison: keep the early return when process.env.WEBHOOK_SECRET
is missing, otherwise obtain expected and provided, convert both to Buffers
(e.g., Buffer.from(expected) and Buffer.from(provided || "")), and use
crypto.timingSafeEqual after first checking lengths match (if lengths differ
return false) to avoid throwing; ensure you import/require Node's crypto and
reference the isAuthorized function, expected and provided variables when
updating the logic.
In `@app/components/chat/ChatConfigPicker.tsx`:
- Around line 108-121: The trigger button that toggles the menu (the element
using onClick={() => !disabled && setOpen(prev => !prev)} and the label/Icons)
should include aria-expanded={open} and aria-haspopup="menu" (and an
aria-controls pointing to the menu's id); the menu container (the div rendered
when open) should have role="menu" and a unique id, and each selectable entry
rendered inside that container should have role="menuitem" (or
role="menuitemradio"/"menuitemcheckbox" as appropriate) plus tabIndex and
keyboard handling; update the components around setOpen/open, the trigger button
and the menu div (and the item render logic around lines referenced) to add
these attributes so screen-readers can correctly detect and navigate the menu.
In `@app/components/chat/ChatMessageList.tsx`:
- Around line 18-20: The current useEffect in ChatMessageList that calls
bottomRef.current?.scrollIntoView({ behavior: "smooth", block: "end" }) depends
on the whole messages array and will run on any change; change the dependency to
a more specific signal (e.g., messages.length or the id of the last message such
as messages[messages.length-1]?.id) so scrolling only happens when a message is
appended, not when an existing message is updated; update the dependency array
and ensure bottomRef and messages are still referenced inside the effect (keep
the same effect body but replace [messages] with [messages.length] or
[messages.at(-1)?.id]).
In `@app/components/user-menu/Branding.tsx`:
- Around line 9-12: The next/image in the Branding component is declaring
intrinsic dimensions width={801} height={311} while CSS forces a small rendered
height via className="h-10 w-auto", causing oversized preloads; update the
intrinsic width/height to scaled values that preserve the original aspect ratio
and match the rendered size (e.g., compute new width/height consistent with h-10
and the original 801x311 ratio) so the Image component in Branding.tsx uses
appropriately smaller intrinsic dimensions and avoids loading unnecessarily
large priority variants.
In `@app/globals.css`:
- Around line 82-90: The dark-mode :root block updates background/foreground
colors but doesn't override the accent variables, so --accent and --accent-hover
stay as the light-theme blue and may lack contrast; add overrides for --accent
and --accent-hover inside the `@media` (prefers-color-scheme: dark) :root (the CSS
variables named --accent and --accent-hover) and pick darker-theme-friendly
values (e.g., a lighter/brighter blue variant or contrast-optimized hex) to
ensure sufficient contrast on the dark backgrounds and update any related accent
variables the same way.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: f05d161c-5c1b-45b1-858e-65012dff0129
⛔ Files ignored due to path filters (1)
`public/kaapi-logo.png` is excluded by `!**/*.png`
📒 Files selected for processing (28)
- `.env.example`
- `app/(main)/document/page.tsx`
- `app/api/llm/call/[job_id]/result/route.ts`
- `app/api/llm/call/[job_id]/route.ts`
- `app/api/llm/call/route.ts`
- `app/api/llm/webhook/[callback_id]/route.ts`
- `app/components/Button.tsx`
- `app/components/Sidebar.tsx`
- `app/components/auth/TokenVerifyPage.tsx`
- `app/components/chat/ChatConfigPicker.tsx`
- `app/components/chat/ChatEmptyState.tsx`
- `app/components/chat/ChatInput.tsx`
- `app/components/chat/ChatMessage.tsx`
- `app/components/chat/ChatMessageList.tsx`
- `app/components/chat/index.ts`
- `app/components/icons/index.tsx`
- `app/components/icons/sidebar/ChatIcon.tsx`
- `app/components/icons/sidebar/SendIcon.tsx`
- `app/components/settings/SettingsSidebar.tsx`
- `app/components/user-menu/Branding.tsx`
- `app/globals.css`
- `app/lib/chatClient.ts`
- `app/lib/colors.ts`
- `app/lib/llmJobStore.ts`
- `app/lib/navConfig.ts`
- `app/lib/types/chat.ts`
- `app/page.tsx`
- `middleware.ts`
Actionable comments posted: 1
♻️ Duplicate comments (4)
app/lib/chatClient.ts (3)
240-246: ⚠️ Potential issue | 🟠 Major

`promptContent` is not preserved in ConfigBlob.

The `configToBlob` function drops `config.promptContent`, meaning chat requests built from saved configs may ignore the saved prompt template. This was flagged in a previous review.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/lib/chatClient.ts` around lines 240 - 246, The ConfigBlob created in configToBlob currently omits config.promptContent so saved prompt templates are lost; update the blob construction in configToBlob (and the ConfigBlob/completion type if needed) to include completion.promptContent = config.promptContent when present, preserving the original prompt template for later chat requests (ensure you update any related types/interfaces like ConfigBlob and the call sites that consume completion.promptContent).
75-82: ⚠️ Potential issue | 🟠 Major

Abort signal not passed to fetch.

The `fetchWebhookResult` function accepts no signal parameter, so in-flight polling requests cannot be cancelled when the user aborts. This was flagged in a previous review.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/lib/chatClient.ts` around lines 75 - 82, The fetchWebhookResult function currently cannot be cancelled; add an optional parameter (e.g., signal?: AbortSignal) to fetchWebhookResult(jobId: string, apiKey: string, signal?: AbortSignal): Promise<LLMCallStatusResponse | null> and pass that signal into the fetch options (include signal alongside headers and credentials) so in-flight requests can be aborted; also update any callers (polling helpers) to forward their AbortSignal when invoking fetchWebhookResult so user aborts actually cancel the request.
30-38: ⚠️ Potential issue | 🟠 Major

Should use `clientFetch` for browser-side API calls.

This module is client-side but uses `apiFetch`, bypassing the app's standard 401 refresh and auth-expired handling. This was flagged in a previous review.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/lib/chatClient.ts` around lines 30 - 38, The createLLMCall function in app/lib/chatClient.ts is using apiFetch (server-style) instead of the browser-safe clientFetch, so replace the apiFetch call in createLLMCall with clientFetch("/api/llm/call", { method: "POST", body: JSON.stringify(body) }) and let the app's clientFetch handle auth/401 refresh; if callers currently pass an apiKey and you must preserve that behavior, forward it as a header (e.g., "x-api-key") in the init object; update the function signature accordingly (remove apiKey if unused) and ensure the returned type remains Promise<LLMCallCreateResponse>.

app/page.tsx (1)
230-235: ⚠️ Potential issue | 🟠 Major

`isPending` is still cleared by stale requests.

The `finally` block correctly guards `abortRef.current = null` with the controller check, but `setIsPending(false)` on line 234 executes unconditionally. If request A is aborted because request B starts, A's finally block still clears `isPending`, re-enabling UI controls while B is in-flight.

🐛 Suggested fix:

```diff
 } finally {
   if (abortRef.current === controller) {
     abortRef.current = null;
+    setIsPending(false);
   }
-  setIsPending(false);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/page.tsx` around lines 230 - 235, The finally block unconditionally calls setIsPending(false) causing stale request A to clear the pending state for a newer request B; update the finally to only clear isPending when the finishing controller is still the active one (same guard used for abortRef.current): move or wrap setIsPending(false) inside the existing if (abortRef.current === controller) block (or check the same condition before calling setIsPending) so only the current request can reset the pending state; reference abortRef, controller, and setIsPending in the change.
🧹 Nitpick comments (7)
app/components/Sidebar.tsx (2)
115-123: Consider standardizing icon sizing in the map.

The `ChatIcon` addition follows the existing pattern where most icons don't have explicit sizing. Note that `GearIcon` has `className="w-5 h-5"` while others rely on default sizing. This inconsistency was pre-existing but could be standardized for uniform rendering.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/components/Sidebar.tsx` around lines 115 - 123, The icon map (iconMap) has inconsistent sizing—GearIcon is the only entry with an explicit className while others (e.g., ChatIcon, ClipboardIcon, DocumentFileIcon, BookOpenIcon, ShieldCheckIcon, SlidersIcon) rely on defaults; update the map to standardize sizes by either removing the explicit className from GearIcon or applying a shared size (e.g., a single className like "w-5 h-5") to all icons, or wrap each icon with a small Icon component/utility that enforces the uniform size, so rendering is consistent across the icons.
33-39: Consider reading `sidebarCollapsed` from context instead of props.

The component receives `collapsed` as a prop (line 34) but writes state changes via `setSidebarCollapsed` from `useApp()` (line 39). Meanwhile, `PageHeader` reads and writes directly from the context. This asymmetry works but could be simplified by using the context consistently for both read and write operations.

♻️ Suggested refactor:

```diff
 export default function Sidebar({
-  collapsed,
   activeRoute = "/",
 }: SidebarProps) {
   const router = useRouter();
   const { currentUser, googleProfile, isAuthenticated, logout } = useAuth();
-  const { setSidebarCollapsed } = useApp();
+  const { sidebarCollapsed: collapsed, setSidebarCollapsed } = useApp();
```

This would also require updating `SidebarProps` to remove `collapsed` if it's no longer needed as a prop.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/components/Sidebar.tsx` around lines 33 - 39, The Sidebar component currently accepts a collapsed prop but uses setSidebarCollapsed from useApp(); change Sidebar to read the collapsed state from context instead of via the collapsed prop: remove the collapsed prop from SidebarProps and any callers that pass it, use useApp() inside Sidebar to get the current sidebarCollapsed value (alongside setSidebarCollapsed), and update any references that relied on the prop to use that context value; ensure PageHeader and Sidebar both use the same context-based state (useApp, setSidebarCollapsed) to keep behavior consistent.

app/api/llm/webhook/[callback_id]/route.ts (1)
17-26: Consider constant-time comparison for webhook secret.

The secret comparison on line 25 uses `===` which is vulnerable to timing attacks. While the risk is low for webhook secrets (attacker would need many requests and precise timing), using constant-time comparison is a security best practice.

🔒 Suggested fix using crypto.timingSafeEqual:

```diff
+import { timingSafeEqual } from "crypto";
+
+function safeCompare(a: string, b: string): boolean {
+  if (a.length !== b.length) return false;
+  return timingSafeEqual(Buffer.from(a), Buffer.from(b));
+}
+
 function isAuthorized(request: Request): boolean {
   const expected = process.env.WEBHOOK_SECRET;
   if (!expected) return true;
   const url = new URL(request.url);
   const provided =
     url.searchParams.get("secret") ||
     request.headers.get("x-webhook-secret") ||
     "";
-  return provided === expected;
+  return safeCompare(provided, expected);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/api/llm/webhook/[callback_id]/route.ts` around lines 17 - 26, The isAuthorized function currently does a plain === comparison of the webhook secret; replace that with a constant-time comparison using crypto.timingSafeEqual: import Node's crypto, convert both provided and expected to Buffers (e.g., Buffer.from(...)) and if their lengths differ return false, otherwise call crypto.timingSafeEqual(bufProvided, bufExpected) and return that boolean; keep the existing behavior when process.env.WEBHOOK_SECRET is unset (still return true) and ensure you handle empty/null provided values safely before buffering.

app/api/llm/call/route.ts (1)
26-31: Consider extracting upstream error details on non-2xx responses.

When the upstream `/api/v1/llm/call` returns an error status, the current code passes through `data` directly. Per coding guidelines, error messages should be extracted using `body.error || body.message || body.detail`. If `data` contains structured error info, consider normalizing it for consistency.

♻️ Suggested improvement:

```diff
 const { status, data } = await apiClient(request, "/api/v1/llm/call", {
   method: "POST",
   body: JSON.stringify(body),
 });

+if (!data?.success && status >= 400) {
+  const errorMsg = data?.error || data?.message || data?.detail || "Request failed";
+  return NextResponse.json(
+    { success: false, error: errorMsg, data: null },
+    { status },
+  );
+}
+
 return NextResponse.json(data, { status });
```

As per coding guidelines: Extract error messages from API responses using the pattern `body.error || body.message || body.detail` when adding new API routes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/api/llm/call/route.ts` around lines 26 - 31, The route currently proxies the upstream response directly (const { status, data } = await apiClient(...); return NextResponse.json(data, { status });) which can expose inconsistent error structures; update the handler to detect non-2xx statuses and normalize the error payload by extracting the message from data.error || data.message || data.detail (falling back to a generic message), then return a consistent JSON error object (e.g., { error: message, details: data }) with the original status using NextResponse.json; keep usage of apiClient, status, data and NextResponse.json to locate and modify the code.

app/lib/types/chat.ts (1)
11-19: Consider making `status` required on `ChatMessage`.

The `status` field is marked optional (`status?: ChatMessageStatus`), but the UI code in `page.tsx` always sets it ("pending", "complete", "error"). Making it required would provide better type safety and document the invariant.

💡 Suggested change:

```diff
 export interface ChatMessage {
   id: string;
   role: ChatRole;
   content: string;
   createdAt: number;
-  status?: ChatMessageStatus;
+  status: ChatMessageStatus;
   jobId?: string;
   error?: string;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/lib/types/chat.ts` around lines 11 - 19, The ChatMessage interface currently makes status optional but the UI (e.g., page.tsx) always sets it; update the ChatMessage type by removing the optional modifier from status (make status: ChatMessageStatus) and then update all message creation sites (places that construct ChatMessage objects, such as in page.tsx/new message builders or any tests) to explicitly supply a valid ChatMessageStatus value ("pending"|"complete"|"error" or the enum members) so the code typechecks and the invariant is documented.

app/lib/chatClient.ts (1)
232-238: Only the first tool's `max_num_results` is used.

When multiple tools exist, `max_num_results` is taken only from `tools[0]` (line 236), ignoring values from other tools. This may be intentional, but could lead to unexpected behavior if tools have different limits.

💡 Consider documenting or handling multiple tools

```diff
 const tools: Tool[] = config.tools ?? [];
 if (tools.length > 0) {
   const knowledge_base_ids = tools.flatMap((t) => t.knowledge_base_ids);
   if (knowledge_base_ids.length > 0) {
     params.knowledge_base_ids = knowledge_base_ids;
-    params.max_num_results = tools[0].max_num_results;
+    // Use the maximum value across all tools, or the first tool's value
+    params.max_num_results = Math.max(...tools.map((t) => t.max_num_results ?? 0));
   }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/lib/chatClient.ts` around lines 232-238, the current code only uses `tools[0].max_num_results` when setting `params.max_num_results`, which ignores other tools' limits; update the logic in the block that processes tools (the variables `tools`, `knowledge_base_ids`, `params`) to compute a combined value (e.g., take the maximum of all tools' `max_num_results` via `tools.map((t) => t.max_num_results)` and `Math.max`, or pick another aggregation like the minimum if you prefer conservative limits) and assign that aggregated value to `params.max_num_results` instead of `tools[0].max_num_results`; ensure you still collect `knowledge_base_ids` with `tools.flatMap` as before.

app/components/chat/ChatEmptyState.tsx (1)
46-59: Consider dark mode compatibility for hover states.

The `hover:bg-neutral-50` class uses a hardcoded light color that may not adapt well in dark mode contexts. Consider using a theme-aware token like `hover:bg-bg-secondary` or similar if dark mode support is planned.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/components/chat/ChatEmptyState.tsx` around lines 46-59, the hover state on the suggestion buttons uses a hardcoded light color (class `hover:bg-neutral-50`) which breaks in dark mode; update the button classes rendered in the SUGGESTIONS map (the element using onSuggestion, isAuthenticated, hasConfig) to use a theme-aware token such as `hover:bg-bg-secondary` (or add a dark variant like `hover:dark:bg-bg-secondary` / `hover:dark:bg-neutral-800`) instead of `hover:bg-neutral-50` so the hover background adapts to dark/light themes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/lib/chatClient.ts`:
- Around line 143-152: The promise-based polling wait in chatClient.ts leaks
abort listeners because the onAbort handler is only removed when abort fires;
fix it by capturing the onAbort function in a variable and, when the timer
resolves (before calling resolve), remove the abort listener via
signal.removeEventListener("abort", onAbort) and clear the timeout; ensure you
guard signal exists and only add/remove the listener around
signal.addEventListener and signal.removeEventListener so the listener is
cleaned up whether the timer fires or abort occurs.
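The cleanup described above can be sketched outside the app as a small abortable delay helper. This is illustrative only, not the actual `chatClient.ts` code: the `delay` name and exact shape are assumptions, but the pattern (capture the handler, remove the listener on the success path, clear the timer on abort) is the one the prompt asks for.

```typescript
// Abortable sleep that never leaks its "abort" listener: the handler is
// removed whether the timer fires first or the signal aborts first.
function delay(ms: number, signal?: AbortSignal): Promise<void> {
  return new Promise((resolve, reject) => {
    if (signal?.aborted) {
      // Already aborted: reject immediately, never schedule a timer.
      reject(new Error("aborted"));
      return;
    }
    const onAbort = () => {
      clearTimeout(timer); // do not let the timer fire after an abort
      reject(new Error("aborted"));
    };
    const timer = setTimeout(() => {
      // Success path: detach the listener so repeated polls against a
      // long-lived signal do not accumulate handlers.
      signal?.removeEventListener("abort", onAbort);
      resolve();
    }, ms);
    // { once: true } removes the listener automatically on the abort path.
    signal?.addEventListener("abort", onAbort, { once: true });
  });
}
```

Because the polling loop calls this helper many times against one shared signal, the success-path `removeEventListener` is what prevents the listener count from growing on every iteration.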
---
Duplicate comments:
In `@app/lib/chatClient.ts`:
- Around line 240-246: The ConfigBlob created in configToBlob currently omits
config.promptContent so saved prompt templates are lost; update the blob
construction in configToBlob (and the ConfigBlob/completion type if needed) to
include completion.promptContent = config.promptContent when present, preserving
the original prompt template for later chat requests (ensure you update any
related types/interfaces like ConfigBlob and the call sites that consume
completion.promptContent).
- Around line 75-82: The fetchWebhookResult function currently cannot be
cancelled; add an optional parameter (e.g., signal?: AbortSignal) to
fetchWebhookResult(jobId: string, apiKey: string, signal?: AbortSignal):
Promise<LLMCallStatusResponse | null> and pass that signal into the fetch
options (include signal alongside headers and credentials) so in-flight requests
can be aborted; also update any callers (polling helpers) to forward their
AbortSignal when invoking fetchWebhookResult so user aborts actually cancel the
request.
- Around line 30-38: The createLLMCall function in app/lib/chatClient.ts is
using apiFetch (server-style) instead of the browser-safe clientFetch, so
replace the apiFetch call in createLLMCall with clientFetch("/api/llm/call", {
method: "POST", body: JSON.stringify(body) }) and let the app's clientFetch
handle auth/401 refresh; if callers currently pass an apiKey and you must
preserve that behavior, forward it as a header (e.g., "x-api-key") in the init
object; update the function signature accordingly (remove apiKey if unused) and
ensure the returned type remains Promise<LLMCallCreateResponse>.
In `@app/page.tsx`:
- Around line 230-235: The finally block unconditionally calls
setIsPending(false) causing stale request A to clear the pending state for a
newer request B; update the finally to only clear isPending when the finishing
controller is still the active one (same guard used for abortRef.current): move
or wrap setIsPending(false) inside the existing if (abortRef.current ===
controller) block (or check the same condition before calling setIsPending) so
only the current request can reset the pending state; reference abortRef,
controller, and setIsPending in the change.
---
Nitpick comments:
In `@app/api/llm/call/route.ts`:
- Around line 26-31: The route currently proxies the upstream response directly
(const { status, data } = await apiClient(...); return NextResponse.json(data, {
status });) which can expose inconsistent error structures; update the handler
to detect non-2xx statuses and normalize the error payload by extracting the
message from data.error || data.message || data.detail (falling back to a
generic message), then return a consistent JSON error object (e.g., { error:
message, details: data }) with the original status using NextResponse.json; keep
usage of apiClient, status, data and NextResponse.json to locate and modify the
code.
In `@app/api/llm/webhook/[callback_id]/route.ts`:
- Around line 17-26: The isAuthorized function currently does a plain ===
comparison of the webhook secret; replace that with a constant-time comparison
using crypto.timingSafeEqual: import Node's crypto, convert both provided and
expected to Buffers (e.g., Buffer.from(...)) and if their lengths differ return
false, otherwise call crypto.timingSafeEqual(bufProvided, bufExpected) and
return that boolean; keep the existing behavior when process.env.WEBHOOK_SECRET
is unset (still return true) and ensure you handle empty/null provided values
safely before buffering.
In `@app/components/chat/ChatEmptyState.tsx`:
- Around line 46-59: The hover state on the suggestion buttons uses a hardcoded
light color (class hover:bg-neutral-50) which breaks in dark mode; update the
button classes rendered in the SUGGESTIONS map (the element using onSuggestion,
isAuthenticated, hasConfig) to use a theme-aware token such as
hover:bg-bg-secondary (or add a dark variant like hover:dark:bg-bg-secondary /
hover:dark:bg-neutral-800) instead of hover:bg-neutral-50 so the hover
background adapts to dark/light themes.
In `@app/components/Sidebar.tsx`:
- Around line 115-123: The icon map (iconMap) has inconsistent sizing—GearIcon
is the only entry with an explicit className while others (e.g., ChatIcon,
ClipboardIcon, DocumentFileIcon, BookOpenIcon, ShieldCheckIcon, SlidersIcon)
rely on defaults; update the map to standardize sizes by either removing the
explicit className from GearIcon or applying a shared size (e.g., a single
className like "w-5 h-5") to all icons, or wrap each icon with a small Icon
component/utility that enforces the uniform size, so rendering is consistent
across the icons.
- Around line 33-39: The Sidebar component currently accepts a collapsed prop
but uses setSidebarCollapsed from useApp(); change Sidebar to read the collapsed
state from context instead of via the collapsed prop: remove the collapsed prop
from SidebarProps and any callers that pass it, use useApp() inside Sidebar to
get the current sidebarCollapsed value (alongside setSidebarCollapsed), and
update any references that relied on the prop to use that context value; ensure
PageHeader and Sidebar both use the same context-based state (useApp,
setSidebarCollapsed) to keep behavior consistent.
In `@app/lib/chatClient.ts`:
- Around line 232-238: The current code only uses tools[0].max_num_results when
setting params.max_num_results which ignores other tools' limits; update the
logic in the block that processes tools (the variables tools,
knowledge_base_ids, params) to compute a combined value (e.g., take the maximum
of all tools' max_num_results via tools.map(t => t.max_num_results) and
Math.max, or pick another aggregation like the minimum if you prefer
conservative limits) and assign that aggregated value to params.max_num_results
instead of tools[0].max_num_results; ensure you still collect knowledge_base_ids
with tools.flatMap as before.
In `@app/lib/types/chat.ts`:
- Around line 11-19: The ChatMessage interface currently makes status optional
but the UI (e.g., page.tsx) always sets it; update the ChatMessage type by
removing the optional modifier from status (make status: ChatMessageStatus) and
then update all message creation sites (places that construct ChatMessage
objects, such as in page.tsx/new message builders or any tests) to explicitly
supply a valid ChatMessageStatus value ("pending"|"complete"|"error" or the enum
members) so the code typechecks and the invariant is documented.
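For the webhook-secret nitpick above, a constant-time comparison might look like the sketch below. The `isAuthorized` name follows the review; `secretsMatch` is a hypothetical helper, and only Node's `crypto.timingSafeEqual` is assumed. Note that `timingSafeEqual` throws on buffers of different lengths, so the length check must come first (leaking only the length, which is the standard trade-off):

```typescript
import { timingSafeEqual } from "node:crypto";

// Constant-time equality for short secrets. Returns false rather than
// throwing when lengths differ or the provided value is missing.
function secretsMatch(provided: string | null | undefined, expected: string): boolean {
  if (!provided) return false;
  const a = Buffer.from(provided);
  const b = Buffer.from(expected);
  if (a.length !== b.length) return false; // timingSafeEqual would throw here
  return timingSafeEqual(a, b);
}

function isAuthorized(provided: string | null | undefined): boolean {
  const expected = process.env.WEBHOOK_SECRET;
  if (!expected) return true; // keep existing behavior when no secret is set
  return secretsMatch(provided, expected);
}
```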
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 0718bf9b-4790-477c-817e-163c86605459
📒 Files selected for processing (15)
- app/api/llm/call/[job_id]/result/route.ts
- app/api/llm/call/route.ts
- app/api/llm/webhook/[callback_id]/route.ts
- app/components/PageHeader.tsx
- app/components/Sidebar.tsx
- app/components/chat/ChatEmptyState.tsx
- app/components/chat/ChatMessage.tsx
- app/components/document/DocumentListing.tsx
- app/components/user-menu/Branding.tsx
- app/globals.css
- app/lib/chatClient.ts
- app/lib/llmJobStore.ts
- app/lib/types/chat.ts
- app/page.tsx
- middleware.ts
✅ Files skipped from review due to trivial changes (1)
- app/components/document/DocumentListing.tsx
🚧 Files skipped from review as they are similar to previous changes (5)
- middleware.ts
- app/api/llm/call/[job_id]/result/route.ts
- app/components/user-menu/Branding.tsx
- app/components/chat/ChatMessage.tsx
- app/globals.css
Issue: #126
Summary: