
Chat Interface: LLM Call Test #127

Open
Ayush8923 wants to merge 11 commits into main from feat/chat-llm-call

Conversation

Collaborator

@Ayush8923 Ayush8923 commented Apr 28, 2026

Issue: #126

Summary:

@Ayush8923 Ayush8923 linked an issue Apr 28, 2026 that may be closed by this pull request

coderabbitai Bot commented Apr 28, 2026

📝 Walkthrough

Adds a Chat feature with webhook-based LLM integration: new API endpoints and in-memory job store for webhook results, a chat client library with polling, five chat UI components, icon additions, styling/navigation updates, and environment variables for webhook configuration.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Environment & Config**<br>`.env.example`, `app/lib/colors.ts`, `app/lib/navConfig.ts` | Adds NEXT_PUBLIC_APP_URL and WEBHOOK_SECRET to the env example; updates accent color values; inserts a Chat nav item at `/` and reorders Evaluations. |
| **API Routes (LLM & Webhook)**<br>`app/api/llm/call/route.ts`, `app/api/llm/call/[job_id]/route.ts`, `app/api/llm/call/[job_id]/result/route.ts`, `app/api/llm/webhook/[callback_id]/route.ts` | Introduces POST to create LLM calls (appends webhook secret when configured), GET proxy/status for job_id, a GET result polling endpoint (returns 204 if not ready), and a POST webhook endpoint that validates an optional secret and publishes job results to the in-process store. |
| **Server Job Store & Types**<br>`app/lib/llmJobStore.ts`, `app/lib/types/chat.ts` | Adds an in-memory LLM job store with a 10m TTL and a typed domain model for chat/LLM requests, responses, polling envelopes, and helper types. |
| **Chat Client Library**<br>`app/lib/chatClient.ts` | New chat BFF client: createLLMCall, callback id/url builders, pollLLMCall with timeout/abort, response extraction, and config→blob mapping. |
| **Chat UI Components**<br>`app/components/chat/...`, `app/components/chat/index.ts` | Adds ChatConfigPicker, ChatEmptyState, ChatInput, ChatMessage, ChatMessageList and a barrel export. Includes lazy version loading, suggestions, an auto-resize textarea, message rendering, and auto-scroll behavior. |
| **Page & App Wiring**<br>`app/page.tsx`, `middleware.ts`, `app/components/auth/TokenVerifyPage.tsx`, `app/components/settings/SettingsSidebar.tsx` | Converts the home page to ChatPage with session state, config persistence, and the LLM call lifecycle with polling; makes `/` public in middleware; changes post-verify/dashboard redirects to `/`. |
| **Sidebar, Branding & Styling**<br>`app/components/Sidebar.tsx`, `app/components/user-menu/Branding.tsx`, `app/components/document/page.tsx`, `app/globals.css`, `app/lib/colors.ts`, `app/components/Button.tsx` | Sidebar styling and behavior updates (collapse control, new icon mapping, accent-based active styles); branding switched to an image logo; document and border tokens updated; global accent tokens and chat-pulse keyframes added; a new secondary button variant. |
| **Icons**<br>`app/components/icons/sidebar/ChatIcon.tsx`, `app/components/icons/sidebar/SendIcon.tsx`, `app/components/icons/index.tsx` | Adds ChatIcon and SendIcon components and re-exports them from the icon index. |
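
To make the "Server Job Store & Types" row concrete, here is a minimal sketch of an in-memory job store with a 10-minute TTL. It assumes the publish/getResult/clearResult surface described in this review; the actual `app/lib/llmJobStore.ts` may differ in names and record shape.

```ts
// Hypothetical record shape; the PR's LLMJobRecord is likely richer.
type LLMJobRecord = { jobId: string; payload: unknown; receivedAt: number };

const TTL_MS = 10 * 60 * 1000; // the 10m TTL from the walkthrough

const results = new Map<string, LLMJobRecord>();
const timers = new Map<string, ReturnType<typeof setTimeout>>();

export function publish(jobId: string, record: LLMJobRecord): void {
  clearTimeout(timers.get(jobId)); // re-publishing refreshes the TTL
  results.set(jobId, record);
  timers.set(jobId, setTimeout(() => clearResult(jobId), TTL_MS));
}

export function getResult(jobId: string): LLMJobRecord | undefined {
  return results.get(jobId);
}

export function clearResult(jobId: string): void {
  clearTimeout(timers.get(jobId));
  timers.delete(jobId);
  results.delete(jobId);
}
```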

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Browser
    participant ChatAPI as /api/llm/call (proxy)
    participant LLM as Upstream LLM Service
    participant Webhook as /api/llm/webhook/[id]
    participant JobStore as In-Memory Job Store

    Client->>ChatAPI: POST create LLM call (callback_url)
    ChatAPI->>ChatAPI: Append secret if configured
    ChatAPI->>LLM: Forward request
    LLM->>LLM: Process asynchronously
    Client->>ChatAPI: Poll GET /api/llm/call/[job_id]/result
    ChatAPI->>JobStore: getResult(job_id)
    JobStore-->>ChatAPI: 204 / no result
    LLM-->>Webhook: POST result to callback_url (secret)
    Webhook->>Webhook: Validate secret (query/header)
    Webhook->>JobStore: publish(job_id, record)
    Client->>ChatAPI: Poll GET /api/llm/call/[job_id]/result
    ChatAPI->>JobStore: getResult(job_id)
    JobStore-->>ChatAPI: Return record
    ChatAPI->>JobStore: clearResult(job_id)
    ChatAPI-->>Client: 200 { success, llm_response, ... }
```
```mermaid
sequenceDiagram
    participant User as User
    participant ChatPage as Chat Page
    participant ChatClient as chatClient
    participant APIRoute as /api/llm/call routes
    participant JobStore as Job Store

    User->>ChatPage: Type message + send
    ChatPage->>ChatPage: Append user message + pending assistant
    ChatPage->>ChatClient: createLLMCall(..., callback_url)
    ChatClient->>APIRoute: POST /api/llm/call
    APIRoute-->>ChatClient: { job_id, ... }
    ChatPage->>ChatClient: pollLLMCall(job_id)
    ChatClient->>APIRoute: GET /api/llm/call/[job_id]/result (loop)
    APIRoute->>JobStore: getResult(job_id)
    JobStore-->>APIRoute: LLMJobRecord (when webhook published)
    APIRoute-->>ChatClient: { success, llm_response }
    ChatClient->>ChatPage: extracted assistant text
    ChatPage->>ChatPage: Update assistant message (complete/error)
```
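
The polling half of the second diagram can be summarized in a short loop. This is a sketch only: the walkthrough confirms pollLLMCall supports timeout and abort, but the interval, endpoint handling, and error shapes below are assumptions.

```ts
async function pollResult(
  jobId: string,
  opts: { intervalMs?: number; timeoutMs?: number; signal?: AbortSignal } = {},
): Promise<unknown> {
  const { intervalMs = 1000, timeoutMs = 60_000, signal } = opts;
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (signal?.aborted) throw new Error("polling aborted");
    const res = await fetch(`/api/llm/call/${jobId}/result`, { signal });
    if (res.status === 200) return res.json(); // webhook result published
    if (res.status !== 204) throw new Error(`poll failed: ${res.status}`);
    await new Promise((r) => setTimeout(r, intervalMs)); // 204: not ready yet
  }
  throw new Error("polling timed out");
}
```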

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes


Suggested labels

enhancement, ready-for-review

Suggested reviewers

  • Prajna1999
  • vprashrex

Poem

🐰 Hopped in with a tiny ping,
Built a chat that makes hearts sing.
Webhooks hop, and polls go round,
Bubbles, icons — joy unbound! 🥕✨

🚥 Pre-merge checks | ✅ 3 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 16.22%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |
| Title check | ❓ Inconclusive | The PR title "Chat Interface: LLM Call Test" is vague and does not accurately reflect the scope of changes. The changeset includes extensive chat UI components, webhook infrastructure, API routes, styling updates, navigation refactoring, and color scheme changes, which is far more comprehensive than the title suggests. | Consider a more descriptive title such as "Implement chat interface with LLM webhook integration and API routes" or "Add chat feature with webhook-based LLM result delivery". |

✅ Passed checks (3 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Linked Issues check | ✅ Passed | Check skipped because no linked issues were found for this pull request. |
| Out of Scope Changes check | ✅ Passed | Check skipped because no linked issues were found for this pull request. |
| Description Check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
📝 Generate docstrings
  • Create stacked PR
  • Commit on current branch
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Commit unit tests in branch feat/chat-llm-call

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@Ayush8923 Ayush8923 self-assigned this Apr 28, 2026

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 9

🧹 Nitpick comments (6)
app/components/user-menu/Branding.tsx (1)

9-12: Right-size next/image intrinsic dimensions to match rendered size.

Lines 9–11 declare a full intrinsic size (801x311) while CSS constrains display to h-10; this over-declares the image and causes priority to preload unnecessarily large variants. Align declared dimensions with rendered size using the aspect-ratio-preserving values below.

Suggested change:

```diff
-        width={801}
-        height={311}
+        width={104}
+        height={40}
+        sizes="104px"
         className="h-10 w-auto"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/components/user-menu/Branding.tsx` around lines 9 - 12, The next/image in
the Branding component is declaring intrinsic dimensions width={801}
height={311} while CSS forces a small rendered height via className="h-10
w-auto", causing oversized preloads; update the intrinsic width/height to scaled
values that preserve the original aspect ratio and match the rendered size
(e.g., compute new width/height consistent with h-10 and the original 801x311
ratio) so the Image component in Branding.tsx uses appropriately smaller
intrinsic dimensions and avoids loading unnecessarily large priority variants.
app/components/chat/ChatConfigPicker.tsx (1)

108-121: Add ARIA menu semantics for better accessibility.

Consider adding aria-expanded/aria-haspopup on the trigger and role attributes on the menu/menu items to improve screen-reader navigation.

Also applies to: 149-153, 187-193

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/components/chat/ChatConfigPicker.tsx` around lines 108 - 121, The trigger
button that toggles the menu (the element using onClick={() => !disabled &&
setOpen(prev => !prev)} and the label/Icons) should include aria-expanded={open}
and aria-haspopup="menu" (and an aria-controls pointing to the menu's id); the
menu container (the div rendered when open) should have role="menu" and a unique
id, and each selectable entry rendered inside that container should have
role="menuitem" (or role="menuitemradio"/"menuitemcheckbox" as appropriate) plus
tabIndex and keyboard handling; update the components around setOpen/open, the
trigger button and the menu div (and the item render logic around lines
referenced) to add these attributes so screen-readers can correctly detect and
navigate the menu.
app/api/llm/call/route.ts (1)

14-21: Consider using a pure function instead of mutating the body.

The appendSecretToCallback function mutates its input parameter. While this works correctly, a pure function returning the modified body would be more predictable and easier to test.

♻️ Optional refactor to pure function
```diff
-function appendSecretToCallback(body: Record<string, unknown>): void {
+function withSecretCallback(body: Record<string, unknown>): Record<string, unknown> {
   const secret = process.env.WEBHOOK_SECRET;
-  if (!secret) return;
+  if (!secret) return body;
   const url = body.callback_url;
-  if (typeof url !== "string" || url.length === 0) return;
+  if (typeof url !== "string" || url.length === 0) return body;
   const sep = url.includes("?") ? "&" : "?";
-  body.callback_url = `${url}${sep}secret=${encodeURIComponent(secret)}`;
+  return { ...body, callback_url: `${url}${sep}secret=${encodeURIComponent(secret)}` };
 }
```

Then in POST:

```diff
     const body = (await request.json()) as Record<string, unknown>;
-    appendSecretToCallback(body);
+    const finalBody = withSecretCallback(body);

     const { status, data } = await apiClient(request, "/api/v1/llm/call", {
       method: "POST",
-      body: JSON.stringify(body),
+      body: JSON.stringify(finalBody),
     });
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/api/llm/call/route.ts` around lines 14 - 21, The helper
appendSecretToCallback currently mutates its input body; change it to a pure
function that returns a new body object (e.g., function
appendSecretToCallback(body: Record<string, unknown>): Record<string, unknown>)
that leaves the original untouched, only adding/overwriting callback_url in the
returned object when WEBHOOK_SECRET exists and url is valid; update callers (the
POST handler) to use the returned value instead of relying on in-place mutation
and preserve the same runtime behavior and types (encodeURIComponent, same
separator logic).
app/globals.css (1)

82-90: Consider adding dark mode accent color overrides.

The dark mode media query updates background, foreground, muted, and border colors but doesn't override the accent colors (--accent, --accent-hover). The new blue accent (#1f4496) may have insufficient contrast on dark backgrounds.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/globals.css` around lines 82 - 90, The dark-mode :root block updates
background/foreground colors but doesn't override the accent variables, so
--accent and --accent-hover stay as the light-theme blue and may lack contrast;
add overrides for --accent and --accent-hover inside the `@media`
(prefers-color-scheme: dark) :root (the CSS variables named --accent and
--accent-hover) and pick darker-theme-friendly values (e.g., a lighter/brighter
blue variant or contrast-optimized hex) to ensure sufficient contrast on the
dark backgrounds and update any related accent variables the same way.
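
A sketch of the override the prompt describes. The variable names come from the review; the hex values are placeholders that would still need a contrast check:

```css
@media (prefers-color-scheme: dark) {
  :root {
    --accent: #6b8ee6;       /* lighter blue for dark backgrounds (placeholder) */
    --accent-hover: #86a3ec; /* brighter hover state (placeholder) */
  }
}
```
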
app/components/chat/ChatMessageList.tsx (1)

18-20: Auto-scroll triggers on any message change, not just appends.

The effect depends on the entire messages array reference. This means editing an existing message's content or status (e.g., when a pending message receives its response) will also trigger a scroll. This is likely the intended behavior for a chat UI, but if you want scroll-to-bottom only on new messages, consider tracking messages.length or the last message's id instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/components/chat/ChatMessageList.tsx` around lines 18 - 20, The current
useEffect in ChatMessageList that calls bottomRef.current?.scrollIntoView({
behavior: "smooth", block: "end" }) depends on the whole messages array and will
run on any change; change the dependency to a more specific signal (e.g.,
messages.length or the id of the last message such as
messages[messages.length-1]?.id) so scrolling only happens when a message is
appended, not when an existing message is updated; update the dependency array
and ensure bottomRef and messages are still referenced inside the effect (keep
the same effect body but replace [messages] with [messages.length] or
[messages.at(-1)?.id]).
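
The suggested change amounts to a one-line dependency swap. This fragment reuses the names quoted in the prompt (bottomRef, messages) and is the effect only, not the full component:

```tsx
// Depend on the append signal rather than the array reference, so in-place
// updates (e.g. a pending message completing) no longer retrigger the scroll.
useEffect(() => {
  bottomRef.current?.scrollIntoView({ behavior: "smooth", block: "end" });
}, [messages.length]); // previously [messages]
```
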
app/api/llm/webhook/[callback_id]/route.ts (1)

28-37: Consider constant-time comparison for webhook secret.

The direct string equality comparison (provided === expected) is vulnerable to timing attacks, which could allow an attacker to guess the secret character by character by measuring response times. For webhook secrets in production, use a constant-time comparison.

🔒 Suggested fix using crypto.timingSafeEqual
```diff
+import { timingSafeEqual } from "crypto";
+
+function safeCompare(a: string, b: string): boolean {
+  if (a.length !== b.length) return false;
+  return timingSafeEqual(Buffer.from(a), Buffer.from(b));
+}
+
 function isAuthorized(request: Request): boolean {
   const expected = process.env.WEBHOOK_SECRET;
   if (!expected) return true;
   const url = new URL(request.url);
   const provided =
     url.searchParams.get("secret") ||
     request.headers.get("x-webhook-secret") ||
     "";
-  return provided === expected;
+  return safeCompare(provided, expected);
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/api/llm/webhook/`[callback_id]/route.ts around lines 28 - 37, Replace the
direct equality check in isAuthorized with a constant-time comparison: keep the
early return when process.env.WEBHOOK_SECRET is missing, otherwise obtain
expected and provided, convert both to Buffers (e.g., Buffer.from(expected) and
Buffer.from(provided || "")), and use crypto.timingSafeEqual after first
checking lengths match (if lengths differ return false) to avoid throwing;
ensure you import/require Node's crypto and reference the isAuthorized function,
expected and provided variables when updating the logic.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.env.example:
- Line 3: The GUARDRAILS_TOKEN environment variable line currently has spaces
around the equals sign; change the assignment to use dotenv-compatible KEY=value
formatting by removing the spaces so the line reads GUARDRAILS_TOKEN=<value or
empty> (i.e., no spaces around '=') to fix linting and dotenv parsing issues.

In `@app/api/llm/call/`[job_id]/result/route.ts:
- Around line 19-27: Replace the non-atomic getResult() then clearResult()
pattern with a new atomic popResult(jobId) helper in the job store (e.g., the
module that currently exports getResult/clearResult) that retrieves the
LLMJobRecord, deletes it from store.results, and clears/deletes any associated
timer from store.timers in one operation; then update the route handler to call
popResult(job_id) instead of getResult() followed by clearResult() so a single
call both returns and removes the record atomically (preserve the same return
semantics as getResult).

In `@app/components/chat/ChatInput.tsx`:
- Around line 68-71: The button's onClick currently calls onSend directly; add
the same defensive guard used in handleKeyDown so the click handler checks
disabled/isPending and that value.trim() is non-empty before invoking onSend.
Update the button's onClick in ChatInput.tsx to mirror the condition (e.g., if
(!disabled && !isPending && value.trim()) onSend()), using the existing
canSend/disabled/isPending/value variables so clicks won't trigger sends when
the component is not ready.

In `@app/components/user-menu/Branding.tsx`:
- Line 8: The logo image in the Branding component uses a terse alt ("Kaapi");
update the alt attribute on the <img> in Branding (or the component that renders
the logo) to a more descriptive label such as "Kaapi — [short descriptive
tagline]" or "Kaapi logo" so screen readers convey intent; modify the alt value
in Branding.tsx where the image is rendered (the alt prop on the logo <img>)
accordingly.

In `@app/lib/chatClient.ts`:
- Around line 76-83: fetchWebhookResult does not accept or pass an AbortSignal
so its fetch call cannot be cancelled; modify fetchWebhookResult to accept a
signal?: AbortSignal parameter and forward it into the fetch init (signal:
signal) and then update callers (e.g., pollLLMCall and other polling helpers
referenced around lines 110-128 and 150-159) to pass their AbortSignal through;
ensure all fetch calls in functions named fetchWebhookResult (and similar
polling fetch helpers) receive and use the provided signal so in-flight requests
are aborted when signal.aborted is triggered.
- Around line 11-12: This module is client-side but still uses apiFetch and raw
fetch, which bypasses the app's 401/token-refresh flow; replace uses of apiFetch
in createLLMCall and the raw fetch calls in the polling logic (and the
occurrences noted around lines 31-39 and 76-98) with clientFetch so token
refresh and AUTH_EXPIRED_EVENT handling are applied. Import clientFetch from the
same api client module, pass the same endpoint and options you currently send to
apiFetch/fetch, and ensure response-stream handling (for streaming/polling in
createLLMCall) remains identical after switching to clientFetch (i.e., use
clientFetch(...).then(res => res.body / res.json() as before). Keep function
names createLLMCall and the polling loop intact; only replace the network call
helpers.
- Around line 225-267: The configToBlob function currently drops
SavedConfig.promptContent; update configToBlob to copy config.promptContent into
blob.completion.prompt_template.template (creating prompt_template if missing)
so the resulting ConfigBlob preserves the saved prompt template; specifically,
in configToBlob (function name) when building the blob variable, if
config.promptContent is set/non-empty, set blob.completion.prompt_template = {
template: config.promptContent } (or merge into an existing prompt_template) so
chat requests retain the saved prompt template.

In `@app/lib/llmJobStore.ts`:
- Around line 39-45: The current implementation uses process-local storage via
globalRef/__llmJobStore with Store.results and Store.timers which breaks in
multi-instance/serverless setups; replace this in-memory Store with a
pluggable/shared implementation (e.g., Redis/Upstash) and update publish() and
getResult() to use that backing store: define an abstract JobStore interface
matching the existing methods/shape (results map, expiration/timers behavior),
provide a Redis-backed JobStore implementation, wire initialization where
globalRef.__llmJobStore is set to the Redis-backed instance (or a factory)
instead of an in-memory Map, and update any code referencing store, results,
timers, publish(), and getResult() to call the new shared store methods so
callbacks and polls work across instances.

In `@app/page.tsx`:
- Around line 172-174: The finally block is clearing isPending for stale/aborted
requests; fix by making the AbortController instance local (const controller =
new AbortController()), assign it to abortRef.current when starting the request,
and in the finally handlers only clear isPending (and any UI state) if
abortRef.current === controller so only the most recent request can disable the
composer/config picker; apply the same guard for the other similar block that
uses abortRef and isPending.
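
Taken together, the popResult and pluggable-store prompts above amount to one interface. A sketch under those assumptions (method names follow the prompts; the Redis-backed implementation is left out):

```ts
type LLMJobRecord = Record<string, unknown>; // placeholder shape

interface JobStore {
  publish(jobId: string, record: LLMJobRecord): Promise<void>;
  getResult(jobId: string): Promise<LLMJobRecord | null>;
  popResult(jobId: string): Promise<LLMJobRecord | null>; // get + clear in one step
}

const TTL_MS = 10 * 60 * 1000;

class InMemoryJobStore implements JobStore {
  private results = new Map<string, LLMJobRecord>();
  private timers = new Map<string, ReturnType<typeof setTimeout>>();

  async publish(jobId: string, record: LLMJobRecord): Promise<void> {
    clearTimeout(this.timers.get(jobId));
    this.results.set(jobId, record);
    this.timers.set(jobId, setTimeout(() => void this.popResult(jobId), TTL_MS));
  }

  async getResult(jobId: string): Promise<LLMJobRecord | null> {
    return this.results.get(jobId) ?? null;
  }

  async popResult(jobId: string): Promise<LLMJobRecord | null> {
    // Retrieve and delete together so the result route cannot race between
    // a getResult() and a separate clearResult().
    const record = this.results.get(jobId) ?? null;
    if (record) {
      this.results.delete(jobId);
      clearTimeout(this.timers.get(jobId));
      this.timers.delete(jobId);
    }
    return record;
  }
}
```

A Redis- or Upstash-backed class implementing the same interface would let webhook publishes and result polls land on different instances.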

---

Nitpick comments:
In `@app/api/llm/call/route.ts`:
- Around line 14-21: The helper appendSecretToCallback currently mutates its
input body; change it to a pure function that returns a new body object (e.g.,
function appendSecretToCallback(body: Record<string, unknown>): Record<string,
unknown>) that leaves the original untouched, only adding/overwriting
callback_url in the returned object when WEBHOOK_SECRET exists and url is valid;
update callers (the POST handler) to use the returned value instead of relying
on in-place mutation and preserve the same runtime behavior and types
(encodeURIComponent, same separator logic).

In `@app/api/llm/webhook/`[callback_id]/route.ts:
- Around line 28-37: Replace the direct equality check in isAuthorized with a
constant-time comparison: keep the early return when process.env.WEBHOOK_SECRET
is missing, otherwise obtain expected and provided, convert both to Buffers
(e.g., Buffer.from(expected) and Buffer.from(provided || "")), and use
crypto.timingSafeEqual after first checking lengths match (if lengths differ
return false) to avoid throwing; ensure you import/require Node's crypto and
reference the isAuthorized function, expected and provided variables when
updating the logic.

In `@app/components/chat/ChatConfigPicker.tsx`:
- Around line 108-121: The trigger button that toggles the menu (the element
using onClick={() => !disabled && setOpen(prev => !prev)} and the label/Icons)
should include aria-expanded={open} and aria-haspopup="menu" (and an
aria-controls pointing to the menu's id); the menu container (the div rendered
when open) should have role="menu" and a unique id, and each selectable entry
rendered inside that container should have role="menuitem" (or
role="menuitemradio"/"menuitemcheckbox" as appropriate) plus tabIndex and
keyboard handling; update the components around setOpen/open, the trigger button
and the menu div (and the item render logic around lines referenced) to add
these attributes so screen-readers can correctly detect and navigate the menu.

In `@app/components/chat/ChatMessageList.tsx`:
- Around line 18-20: The current useEffect in ChatMessageList that calls
bottomRef.current?.scrollIntoView({ behavior: "smooth", block: "end" }) depends
on the whole messages array and will run on any change; change the dependency to
a more specific signal (e.g., messages.length or the id of the last message such
as messages[messages.length-1]?.id) so scrolling only happens when a message is
appended, not when an existing message is updated; update the dependency array
and ensure bottomRef and messages are still referenced inside the effect (keep
the same effect body but replace [messages] with [messages.length] or
[messages.at(-1)?.id]).

In `@app/components/user-menu/Branding.tsx`:
- Around line 9-12: The next/image in the Branding component is declaring
intrinsic dimensions width={801} height={311} while CSS forces a small rendered
height via className="h-10 w-auto", causing oversized preloads; update the
intrinsic width/height to scaled values that preserve the original aspect ratio
and match the rendered size (e.g., compute new width/height consistent with h-10
and the original 801x311 ratio) so the Image component in Branding.tsx uses
appropriately smaller intrinsic dimensions and avoids loading unnecessarily
large priority variants.

In `@app/globals.css`:
- Around line 82-90: The dark-mode :root block updates background/foreground
colors but doesn't override the accent variables, so --accent and --accent-hover
stay as the light-theme blue and may lack contrast; add overrides for --accent
and --accent-hover inside the `@media` (prefers-color-scheme: dark) :root (the CSS
variables named --accent and --accent-hover) and pick darker-theme-friendly
values (e.g., a lighter/brighter blue variant or contrast-optimized hex) to
ensure sufficient contrast on the dark backgrounds and update any related accent
variables the same way.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: f05d161c-5c1b-45b1-858e-65012dff0129

📥 Commits

Reviewing files that changed from the base of the PR and between 2884ac6 and 9d681b4.

⛔ Files ignored due to path filters (1)
  • public/kaapi-logo.png is excluded by !**/*.png
📒 Files selected for processing (28)
  • .env.example
  • app/(main)/document/page.tsx
  • app/api/llm/call/[job_id]/result/route.ts
  • app/api/llm/call/[job_id]/route.ts
  • app/api/llm/call/route.ts
  • app/api/llm/webhook/[callback_id]/route.ts
  • app/components/Button.tsx
  • app/components/Sidebar.tsx
  • app/components/auth/TokenVerifyPage.tsx
  • app/components/chat/ChatConfigPicker.tsx
  • app/components/chat/ChatEmptyState.tsx
  • app/components/chat/ChatInput.tsx
  • app/components/chat/ChatMessage.tsx
  • app/components/chat/ChatMessageList.tsx
  • app/components/chat/index.ts
  • app/components/icons/index.tsx
  • app/components/icons/sidebar/ChatIcon.tsx
  • app/components/icons/sidebar/SendIcon.tsx
  • app/components/settings/SettingsSidebar.tsx
  • app/components/user-menu/Branding.tsx
  • app/globals.css
  • app/lib/chatClient.ts
  • app/lib/colors.ts
  • app/lib/llmJobStore.ts
  • app/lib/navConfig.ts
  • app/lib/types/chat.ts
  • app/page.tsx
  • middleware.ts


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (4)
app/lib/chatClient.ts (3)

240-246: ⚠️ Potential issue | 🟠 Major

promptContent is not preserved in ConfigBlob.

The configToBlob function drops config.promptContent, meaning chat requests built from saved configs may ignore the saved prompt template. This was flagged in a previous review.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/lib/chatClient.ts` around lines 240 - 246, The ConfigBlob created in
configToBlob currently omits config.promptContent so saved prompt templates are
lost; update the blob construction in configToBlob (and the
ConfigBlob/completion type if needed) to include completion.promptContent =
config.promptContent when present, preserving the original prompt template for
later chat requests (ensure you update any related types/interfaces like
ConfigBlob and the call sites that consume completion.promptContent).
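
A sketch of the fix under the blob shape the first review's inline prompt describes (completion.prompt_template.template); the SavedConfig/ConfigBlob types here are assumptions, not the real ones:

```ts
type SavedConfig = { promptContent?: string /* ...other saved fields... */ };
type ConfigBlob = {
  completion: { prompt_template?: { template?: string } /* ...other params... */ };
};

function configToBlob(config: SavedConfig): ConfigBlob {
  const blob: ConfigBlob = { completion: {} }; // existing mapping elided
  if (config.promptContent) {
    // Preserve the saved prompt template instead of dropping it.
    blob.completion.prompt_template = {
      ...blob.completion.prompt_template,
      template: config.promptContent,
    };
  }
  return blob;
}
```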

75-82: ⚠️ Potential issue | 🟠 Major

Abort signal not passed to fetch.

The fetchWebhookResult function accepts no signal parameter, so in-flight polling requests cannot be cancelled when the user aborts. This was flagged in a previous review.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/lib/chatClient.ts` around lines 75 - 82, The fetchWebhookResult function
currently cannot be cancelled; add an optional parameter (e.g., signal?:
AbortSignal) to fetchWebhookResult(jobId: string, apiKey: string, signal?:
AbortSignal): Promise<LLMCallStatusResponse | null> and pass that signal into
the fetch options (include signal alongside headers and credentials) so
in-flight requests can be aborted; also update any callers (polling helpers) to
forward their AbortSignal when invoking fetchWebhookResult so user aborts
actually cancel the request.
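
The prompt's signature, sketched out; the endpoint path matches the PR's result route, while the header name and response shape are illustrative:

```ts
type LLMCallStatusResponse = { success: boolean; llm_response?: unknown };

async function fetchWebhookResult(
  jobId: string,
  apiKey: string,
  signal?: AbortSignal,
): Promise<LLMCallStatusResponse | null> {
  const res = await fetch(`/api/llm/call/${jobId}/result`, {
    headers: { "x-api-key": apiKey }, // assumed header name
    credentials: "include",
    signal, // aborting the controller now cancels the in-flight request
  });
  if (res.status === 204) return null; // webhook result not published yet
  if (!res.ok) throw new Error(`result fetch failed: ${res.status}`);
  return (await res.json()) as LLMCallStatusResponse;
}
```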

30-38: ⚠️ Potential issue | 🟠 Major

Should use clientFetch for browser-side API calls.

This module is client-side but uses apiFetch, bypassing the app's standard 401 refresh and auth-expired handling. This was flagged in a previous review.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/lib/chatClient.ts` around lines 30 - 38, The createLLMCall function in
app/lib/chatClient.ts is using apiFetch (server-style) instead of the
browser-safe clientFetch, so replace the apiFetch call in createLLMCall with
clientFetch("/api/llm/call", { method: "POST", body: JSON.stringify(body) }) and
let the app's clientFetch handle auth/401 refresh; if callers currently pass an
apiKey and you must preserve that behavior, forward it as a header (e.g.,
"x-api-key") in the init object; update the function signature accordingly
(remove apiKey if unused) and ensure the returned type remains
Promise<LLMCallCreateResponse>.
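
A sketch of the swap; clientFetch is the app's own wrapper per the review, but its module path and exact signature are assumed here to be fetch-like:

```ts
import { clientFetch } from "./apiClient"; // assumed path for the app's wrapper

type LLMCallCreateResponse = { job_id: string };

async function createLLMCall(
  body: Record<string, unknown>,
): Promise<LLMCallCreateResponse> {
  // clientFetch is assumed to apply the 401-refresh / AUTH_EXPIRED_EVENT flow.
  const res = await clientFetch("/api/llm/call", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`create call failed: ${res.status}`);
  return (await res.json()) as LLMCallCreateResponse;
}
```
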
app/page.tsx (1)

230-235: ⚠️ Potential issue | 🟠 Major

isPending is still cleared by stale requests.

The finally block correctly guards abortRef.current = null with the controller check, but setIsPending(false) on line 234 executes unconditionally. If request A is aborted because request B starts, A's finally block still clears isPending, re-enabling UI controls while B is in-flight.

🐛 Suggested fix
```diff
       } finally {
         if (abortRef.current === controller) {
           abortRef.current = null;
+          setIsPending(false);
         }
-        setIsPending(false);
       }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/page.tsx` around lines 230 - 235, The finally block unconditionally calls
setIsPending(false) causing stale request A to clear the pending state for a
newer request B; update the finally to only clear isPending when the finishing
controller is still the active one (same guard used for abortRef.current): move
or wrap setIsPending(false) inside the existing if (abortRef.current ===
controller) block (or check the same condition before calling setIsPending) so
only the current request can reset the pending state; reference abortRef,
controller, and setIsPending in the change.
🧹 Nitpick comments (7)
app/components/Sidebar.tsx (2)

115-123: Consider standardizing icon sizing in the map.

The ChatIcon addition follows the existing pattern where most icons don't have explicit sizing. Note that GearIcon has className="w-5 h-5" while others rely on default sizing. This inconsistency was pre-existing but could be standardized for uniform rendering.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/components/Sidebar.tsx` around lines 115 - 123, The icon map (iconMap)
has inconsistent sizing—GearIcon is the only entry with an explicit className
while others (e.g., ChatIcon, ClipboardIcon, DocumentFileIcon, BookOpenIcon,
ShieldCheckIcon, SlidersIcon) rely on defaults; update the map to standardize
sizes by either removing the explicit className from GearIcon or applying a
shared size (e.g., a single className like "w-5 h-5") to all icons, or wrap each
icon with a small Icon component/utility that enforces the uniform size, so
rendering is consistent across the icons.

33-39: Consider reading sidebarCollapsed from context instead of props.

The component receives collapsed as a prop (line 34) but writes state changes via setSidebarCollapsed from useApp() (line 39). Meanwhile, PageHeader reads and writes directly from the context. This asymmetry works but could be simplified by using the context consistently for both read and write operations.

♻️ Suggested refactor
```diff
 export default function Sidebar({
-  collapsed,
   activeRoute = "/",
 }: SidebarProps) {
   const router = useRouter();
   const { currentUser, googleProfile, isAuthenticated, logout } = useAuth();
-  const { setSidebarCollapsed } = useApp();
+  const { sidebarCollapsed: collapsed, setSidebarCollapsed } = useApp();
```

This would also require updating SidebarProps to remove collapsed if it's no longer needed as a prop.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/components/Sidebar.tsx` around lines 33 - 39, The Sidebar component
currently accepts a collapsed prop but uses setSidebarCollapsed from useApp();
change Sidebar to read the collapsed state from context instead of via the
collapsed prop: remove the collapsed prop from SidebarProps and any callers that
pass it, use useApp() inside Sidebar to get the current sidebarCollapsed value
(alongside setSidebarCollapsed), and update any references that relied on the
prop to use that context value; ensure PageHeader and Sidebar both use the same
context-based state (useApp, setSidebarCollapsed) to keep behavior consistent.
app/api/llm/webhook/[callback_id]/route.ts (1)

17-26: Consider constant-time comparison for webhook secret.

The secret comparison on line 25 uses === which is vulnerable to timing attacks. While the risk is low for webhook secrets (attacker would need many requests and precise timing), using constant-time comparison is a security best practice.

🔒 Suggested fix using crypto.timingSafeEqual
```diff
+import { timingSafeEqual } from "crypto";
+
+function safeCompare(a: string, b: string): boolean {
+  if (a.length !== b.length) return false;
+  return timingSafeEqual(Buffer.from(a), Buffer.from(b));
+}
+
 function isAuthorized(request: Request): boolean {
   const expected = process.env.WEBHOOK_SECRET;
   if (!expected) return true;
   const url = new URL(request.url);
   const provided =
     url.searchParams.get("secret") ||
     request.headers.get("x-webhook-secret") ||
     "";
-  return provided === expected;
+  return safeCompare(provided, expected);
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/api/llm/webhook/`[callback_id]/route.ts around lines 17 - 26, The
isAuthorized function currently does a plain === comparison of the webhook
secret; replace that with a constant-time comparison using
crypto.timingSafeEqual: import Node's crypto, convert both provided and expected
to Buffers (e.g., Buffer.from(...)) and if their lengths differ return false,
otherwise call crypto.timingSafeEqual(bufProvided, bufExpected) and return that
boolean; keep the existing behavior when process.env.WEBHOOK_SECRET is unset
(still return true) and ensure you handle empty/null provided values safely
before buffering.
app/api/llm/call/route.ts (1)

26-31: Consider extracting upstream error details on non-2xx responses.

When the upstream /api/v1/llm/call returns an error status, the current code passes through data directly. Per coding guidelines, error messages should be extracted using body.error || body.message || body.detail. If data contains structured error info, consider normalizing it for consistency.

♻️ Suggested improvement
```diff
     const { status, data } = await apiClient(request, "/api/v1/llm/call", {
       method: "POST",
       body: JSON.stringify(body),
     });

+    if (!data?.success && status >= 400) {
+      const errorMsg = data?.error || data?.message || data?.detail || "Request failed";
+      return NextResponse.json(
+        { success: false, error: errorMsg, data: null },
+        { status },
+      );
+    }
+
     return NextResponse.json(data, { status });
```

As per coding guidelines: Extract error messages from API responses using the pattern body.error || body.message || body.detail when adding new API routes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/api/llm/call/route.ts` around lines 26 - 31, The route currently proxies
the upstream response directly (const { status, data } = await apiClient(...);
return NextResponse.json(data, { status });) which can expose inconsistent error
structures; update the handler to detect non-2xx statuses and normalize the
error payload by extracting the message from data.error || data.message ||
data.detail (falling back to a generic message), then return a consistent JSON
error object (e.g., { error: message, details: data }) with the original status
using NextResponse.json; keep usage of apiClient, status, data and
NextResponse.json to locate and modify the code.
app/lib/types/chat.ts (1)

11-19: Consider making status required on ChatMessage.

The status field is marked optional (status?: ChatMessageStatus), but the UI code in page.tsx always sets it ("pending", "complete", "error"). Making it required would provide better type safety and document the invariant.

💡 Suggested change
```diff
 export interface ChatMessage {
   id: string;
   role: ChatRole;
   content: string;
   createdAt: number;
-  status?: ChatMessageStatus;
+  status: ChatMessageStatus;
   jobId?: string;
   error?: string;
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/lib/types/chat.ts` around lines 11 - 19, The ChatMessage interface
currently makes status optional but the UI (e.g., page.tsx) always sets it;
update the ChatMessage type by removing the optional modifier from status (make
status: ChatMessageStatus) and then update all message creation sites (places
that construct ChatMessage objects, such as in page.tsx/new message builders or
any tests) to explicitly supply a valid ChatMessageStatus value
("pending"|"complete"|"error" or the enum members) so the code typechecks and
the invariant is documented.
app/lib/chatClient.ts (1)

232-238: Only first tool's max_num_results is used.

When multiple tools exist, max_num_results is taken only from tools[0] (line 236), ignoring values from other tools. This may be intentional, but could lead to unexpected behavior if tools have different limits.

💡 Consider documenting or handling multiple tools
```diff
   const tools: Tool[] = config.tools ?? [];
   if (tools.length > 0) {
     const knowledge_base_ids = tools.flatMap((t) => t.knowledge_base_ids);
     if (knowledge_base_ids.length > 0) {
       params.knowledge_base_ids = knowledge_base_ids;
-      params.max_num_results = tools[0].max_num_results;
+      // Use the maximum value across all tools, or the first tool's value
+      params.max_num_results = Math.max(...tools.map((t) => t.max_num_results ?? 0));
     }
   }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/lib/chatClient.ts` around lines 232 - 238, The current code only uses
tools[0].max_num_results when setting params.max_num_results which ignores other
tools' limits; update the logic in the block that processes tools (the variables
tools, knowledge_base_ids, params) to compute a combined value (e.g., take the
maximum of all tools' max_num_results via tools.map(t => t.max_num_results) and
Math.max, or pick another aggregation like the minimum if you prefer
conservative limits) and assign that aggregated value to params.max_num_results
instead of tools[0].max_num_results; ensure you still collect knowledge_base_ids
with tools.flatMap as before.
app/components/chat/ChatEmptyState.tsx (1)

46-59: Consider dark mode compatibility for hover states.

The hover:bg-neutral-50 class uses a hardcoded light color that may not adapt well in dark mode contexts. Consider using a theme-aware token like hover:bg-bg-secondary or similar if dark mode support is planned.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/components/chat/ChatEmptyState.tsx` around lines 46 - 59, The hover state
on the suggestion buttons uses a hardcoded light color (class
hover:bg-neutral-50) which breaks in dark mode; update the button classes
rendered in the SUGGESTIONS map (the element using onSuggestion,
isAuthenticated, hasConfig) to use a theme-aware token such as
hover:bg-bg-secondary (or add a dark variant like hover:dark:bg-bg-secondary /
hover:dark:bg-neutral-800) instead of hover:bg-neutral-50 so the hover
background adapts to dark/light themes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@app/lib/chatClient.ts`:
- Around line 143-152: The promise-based polling wait in chatClient.ts leaks
abort listeners because the onAbort handler is only removed when abort fires;
fix it by capturing the onAbort function in a variable and, when the timer
resolves (before calling resolve), remove the abort listener via
signal.removeEventListener("abort", onAbort) and clear the timeout; ensure you
guard signal exists and only add/remove the listener around
signal.addEventListener and signal.removeEventListener so the listener is
cleaned up whether the timer fires or abort occurs.
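
A sketch of a leak-free cancellable delay in the shape this prompt describes: the abort listener is removed whether the timer fires or the signal aborts.

```ts
function delay(ms: number, signal?: AbortSignal): Promise<void> {
  return new Promise<void>((resolve, reject) => {
    const onAbort = () => {
      clearTimeout(timer);
      reject(new DOMException("Aborted", "AbortError"));
    };
    const timer = setTimeout(() => {
      signal?.removeEventListener("abort", onAbort); // clean up on normal completion
      resolve();
    }, ms);
    if (signal) {
      if (signal.aborted) { onAbort(); return; }
      signal.addEventListener("abort", onAbort, { once: true });
    }
  });
}
```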

---

Duplicate comments:
In `@app/lib/chatClient.ts`:
- Around line 240-246: The ConfigBlob created in configToBlob currently omits
config.promptContent so saved prompt templates are lost; update the blob
construction in configToBlob (and the ConfigBlob/completion type if needed) to
include completion.promptContent = config.promptContent when present, preserving
the original prompt template for later chat requests (ensure you update any
related types/interfaces like ConfigBlob and the call sites that consume
completion.promptContent).
- Around line 75-82: The fetchWebhookResult function currently cannot be
cancelled; add an optional parameter (e.g., signal?: AbortSignal) to
fetchWebhookResult(jobId: string, apiKey: string, signal?: AbortSignal):
Promise<LLMCallStatusResponse | null> and pass that signal into the fetch
options (include signal alongside headers and credentials) so in-flight requests
can be aborted; also update any callers (polling helpers) to forward their
AbortSignal when invoking fetchWebhookResult so user aborts actually cancel the
request.
- Around line 30-38: The createLLMCall function in app/lib/chatClient.ts is
using apiFetch (server-style) instead of the browser-safe clientFetch, so
replace the apiFetch call in createLLMCall with clientFetch("/api/llm/call", {
method: "POST", body: JSON.stringify(body) }) and let the app's clientFetch
handle auth/401 refresh; if callers currently pass an apiKey and you must
preserve that behavior, forward it as a header (e.g., "x-api-key") in the init
object; update the function signature accordingly (remove apiKey if unused) and
ensure the returned type remains Promise<LLMCallCreateResponse>.

In `@app/page.tsx`:
- Around line 230-235: The finally block unconditionally calls
setIsPending(false) causing stale request A to clear the pending state for a
newer request B; update the finally to only clear isPending when the finishing
controller is still the active one (same guard used for abortRef.current): move
or wrap setIsPending(false) inside the existing if (abortRef.current ===
controller) block (or check the same condition before calling setIsPending) so
only the current request can reset the pending state; reference abortRef,
controller, and setIsPending in the change.

---

Nitpick comments:
In `@app/api/llm/call/route.ts`:
- Around line 26-31: The route currently proxies the upstream response directly
(const { status, data } = await apiClient(...); return NextResponse.json(data, {
status });) which can expose inconsistent error structures; update the handler
to detect non-2xx statuses and normalize the error payload by extracting the
message from data.error || data.message || data.detail (falling back to a
generic message), then return a consistent JSON error object (e.g., { error:
message, details: data }) with the original status using NextResponse.json; keep
usage of apiClient, status, data and NextResponse.json to locate and modify the
code.

In `@app/api/llm/webhook/`[callback_id]/route.ts:
- Around line 17-26: The isAuthorized function currently does a plain ===
comparison of the webhook secret; replace that with a constant-time comparison
using crypto.timingSafeEqual: import Node's crypto, convert both provided and
expected to Buffers (e.g., Buffer.from(...)) and if their lengths differ return
false, otherwise call crypto.timingSafeEqual(bufProvided, bufExpected) and
return that boolean; keep the existing behavior when process.env.WEBHOOK_SECRET
is unset (still return true) and ensure you handle empty/null provided values
safely before buffering.

In `@app/components/chat/ChatEmptyState.tsx`:
- Around line 46-59: The hover state on the suggestion buttons uses a hardcoded
light color (class hover:bg-neutral-50) which breaks in dark mode; update the
button classes rendered in the SUGGESTIONS map (the element using onSuggestion,
isAuthenticated, hasConfig) to use a theme-aware token such as
hover:bg-bg-secondary (or add a dark variant like hover:dark:bg-bg-secondary /
hover:dark:bg-neutral-800) instead of hover:bg-neutral-50 so the hover
background adapts to dark/light themes.

In `@app/components/Sidebar.tsx`:
- Around line 115-123: The icon map (iconMap) has inconsistent sizing—GearIcon
is the only entry with an explicit className while others (e.g., ChatIcon,
ClipboardIcon, DocumentFileIcon, BookOpenIcon, ShieldCheckIcon, SlidersIcon)
rely on defaults; update the map to standardize sizes by either removing the
explicit className from GearIcon or applying a shared size (e.g., a single
className like "w-5 h-5") to all icons, or wrap each icon with a small Icon
component/utility that enforces the uniform size, so rendering is consistent
across the icons.
- Around line 33-39: The Sidebar component currently accepts a collapsed prop
but uses setSidebarCollapsed from useApp(); change Sidebar to read the collapsed
state from context instead of via the collapsed prop: remove the collapsed prop
from SidebarProps and any callers that pass it, use useApp() inside Sidebar to
get the current sidebarCollapsed value (alongside setSidebarCollapsed), and
update any references that relied on the prop to use that context value; ensure
PageHeader and Sidebar both use the same context-based state (useApp,
setSidebarCollapsed) to keep behavior consistent.

In `@app/lib/chatClient.ts`:
- Around line 232-238: The current code only uses tools[0].max_num_results when
setting params.max_num_results which ignores other tools' limits; update the
logic in the block that processes tools (the variables tools,
knowledge_base_ids, params) to compute a combined value (e.g., take the maximum
of all tools' max_num_results via tools.map(t => t.max_num_results) and
Math.max, or pick another aggregation like the minimum if you prefer
conservative limits) and assign that aggregated value to params.max_num_results
instead of tools[0].max_num_results; ensure you still collect knowledge_base_ids
with tools.flatMap as before.

In `@app/lib/types/chat.ts`:
- Around line 11-19: The ChatMessage interface currently makes status optional
but the UI (e.g., page.tsx) always sets it; update the ChatMessage type by
removing the optional modifier from status (make status: ChatMessageStatus) and
then update all message creation sites (places that construct ChatMessage
objects, such as in page.tsx/new message builders or any tests) to explicitly
supply a valid ChatMessageStatus value ("pending"|"complete"|"error" or the enum
members) so the code typechecks and the invariant is documented.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 0718bf9b-4790-477c-817e-163c86605459

📥 Commits

Reviewing files that changed from the base of the PR and between 9d681b4 and 5c2adfc.

📒 Files selected for processing (15)
  • app/api/llm/call/[job_id]/result/route.ts
  • app/api/llm/call/route.ts
  • app/api/llm/webhook/[callback_id]/route.ts
  • app/components/PageHeader.tsx
  • app/components/Sidebar.tsx
  • app/components/chat/ChatEmptyState.tsx
  • app/components/chat/ChatMessage.tsx
  • app/components/document/DocumentListing.tsx
  • app/components/user-menu/Branding.tsx
  • app/globals.css
  • app/lib/chatClient.ts
  • app/lib/llmJobStore.ts
  • app/lib/types/chat.ts
  • app/page.tsx
  • middleware.ts
✅ Files skipped from review due to trivial changes (1)
  • app/components/document/DocumentListing.tsx
🚧 Files skipped from review as they are similar to previous changes (5)
  • middleware.ts
  • app/api/llm/call/[job_id]/result/route.ts
  • app/components/user-menu/Branding.tsx
  • app/components/chat/ChatMessage.tsx
  • app/globals.css


Development

Successfully merging this pull request may close these issues.

Chat Interface: Enhance user interaction
