
feat: chat assistant #83

Merged: gaboesquivel merged 9 commits into main from chat-assistant on Feb 13, 2026

Conversation

gaboesquivel (Member) commented Feb 13, 2026

Summary by CodeRabbit

  • New Features

    • Chat Assistant on the dashboard with conversation UI, streaming-aware messages, Markdown rendering, auto-scroll, prompt suggestions, and conversation download as Markdown.
    • New prompt input with send/stop toggle and improved multiline handling.
    • Magic-link sign-in now sets auth cookies on redirect for smoother login.
  • Tests

    • New E2E Playwright suites and helpers for Chat Assistant and magic-link flows; E2E config adds JWT secret and server reuse support.
  • Chores

    • Client-side chat hook and related runtime deps added.

vercel bot commented Feb 13, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

Project | Deployment | Actions | Updated (UTC)
basilic-fastify | Ready | Preview, Comment | Feb 13, 2026 8:07am

2 Skipped Deployments
Project | Deployment | Actions | Updated (UTC)
basilic-docs | Ignored | Preview | Feb 13, 2026 8:07am
basilic-next | Ignored | Preview | Feb 13, 2026 8:07am

coderabbitai bot commented Feb 13, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Use the checkboxes below for quick actions:

  • ▶️ Resume reviews
  • 🔍 Trigger review

Walkthrough

Normalizes dual message formats and SSE streaming for /ai/chat, adds a Chat Assistant UI with supporting components and E2E tests, switches magic-link to cookie-based tokens, exposes baseUrl/getAuthToken for client hooks, and updates test/build environments (JWT_SECRET, TS resolution, transpile rules).

Changes

Cohort / File(s) Summary
Workflow & Test env
/.github/workflows/e2e-test.yml, apps/next/playwright.config.ts, turbo.json, apps/fastify/scripts/generate-openapi.ts
Adds JWT_SECRET to CI/test and OpenAPI generation stubs; Playwright webServer env extended (DATABASE_URL, JWT_SECRET, NODE_ENV, optional OPEN_ROUTER_API_KEY) and reuseServer toggle added.
Fastify API & OpenAPI
apps/fastify/src/routes/ai/chat.ts, apps/fastify/src/routes/ai/chat.test.ts, apps/fastify/openapi/openapi.json
Accepts both UIMessage (parts) and CoreMessage (content), validates/normalizes messages (1–50), updates streaming flow to use smoothStream/streamText → SSE UI responses, and adds SSE/UIMessage tests; OpenAPI schema relaxed to anyOf.
Vitest resolution
apps/fastify/vitest.config.ts
Adds toTsPath helper and rules to rewrite .js imports to .ts (special-case db/index.js) for test resolution.
Next UI components
apps/next/components/ai-elements/conversation.tsx, apps/next/components/ai-elements/message.tsx, apps/next/components/ai-elements/prompt-input.tsx
Introduces Conversation, ConversationContent, ConversationEmptyState, ConversationDownload, Message, MessageContent, MessageResponse, PromptInputTextarea, and PromptInputSubmit with streaming/Markdown support and download/export utilities.
Dashboard integration
apps/next/components/dashboard/chat-assistant.tsx, apps/next/components/dashboard/dashboard-content.tsx
Adds ChatAssistant sheet UI (suggestions, auto-scroll, streaming indicators, error banner, input wiring) and mounts it into DashboardContent.
Auth / Cookies & Providers
apps/next/app/api/auth/[...path]/handlers/magic-link.ts, apps/next/lib/auth-server.ts, apps/next/components/providers.tsx
Magic-link now sets auth/refresh cookies via setAuthCookiesOnResponse and uses NextResponse.redirect; providers add baseUrl and getAuthToken and use credentials: 'include' for token requests.
E2E tests & helpers
apps/next/e2e/auth-helpers.ts, apps/next/e2e/chat-assistant.spec.ts, apps/next/e2e/magic-link-auth.spec.ts
Adds authHelpers for Playwright and a chat-assistant E2E suite, and simplifies magic-link tests (fewer fixed timeouts, text-based locators).
React package & hooks
packages/react/src/hooks/use-chat.ts, packages/react/src/setup.ts, packages/react/src/index.ts, packages/react/src/components/login-form.spec.tsx
Adds useChatFromConfig (builds authenticated transport to /ai/chat using getAuthToken/baseUrl), exposes new config fields, and updates tests to wrap mutations in act.
Package manifests & build config
apps/next/package.json, packages/react/package.json, packages/ui/package.json, apps/next/next.config.mjs
Adds runtime deps (ai-elements, react-markdown, use-stick-to-bottom, ai, streamdown libs), updates transpilePackages, and appends 'source' to webpack conditionNames.
Misc config & allowlists
.gitleaks.toml, apps/fastify/scripts/generate-openapi.ts, packages/react/package.json
Adds gitleaks allowlist for .env.test.example, stubs ENCRYPTION_KEY/JWT_SECRET for OpenAPI generation, and adjusts package deps/devDeps.

Sequence Diagram(s)

sequenceDiagram
    participant Client as Next.js Client
    participant Hook as useChatFromConfig
    participant Fastify as Fastify /ai/chat
    participant LLM as LLM Provider

    Client->>Hook: sendMessage(rawMessages)
    Hook->>Hook: getAuthToken()
    Hook->>Fastify: POST /ai/chat (Authorization: Bearer token, Accept:text/event-stream)
    Fastify->>Fastify: resolveMessages(rawMessages) -> ModelMessage[]
    Fastify->>LLM: streamText(model, messages)
    LLM-->>Fastify: streaming chunks/events
    Fastify-->>Client: text/event-stream (data: {...})
    Client->>Client: parse SSE lines, render streaming MessageResponse
    Note over LLM,Client: repeat until [DONE]
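As a concrete illustration of the flow above, here is a minimal client-side sketch that POSTs to /ai/chat with plain fetch and parses the SSE stream. It is only a sketch: baseUrl and getAuthToken stand in for the values exposed by the provider config, and the real UI goes through useChatFromConfig rather than this hand-rolled reader.

// Sketch only: POST to /ai/chat with a Bearer token and read `data:` lines until [DONE].
async function streamChat(baseUrl: string, getAuthToken: () => Promise<string | null>) {
  const token = await getAuthToken()
  const res = await fetch(`${baseUrl}/ai/chat`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Accept: 'text/event-stream',
      ...(token ? { Authorization: `Bearer ${token}` } : {}),
    },
    body: JSON.stringify({ messages: [{ role: 'user', content: 'Hello' }] }),
  })
  if (!res.ok || !res.body) throw new Error(`chat request failed: ${res.status}`)

  const reader = res.body.pipeThrough(new TextDecoderStream()).getReader()
  let buffer = ''
  for (;;) {
    const { value, done } = await reader.read()
    if (done) break
    buffer += value
    const lines = buffer.split('\n')
    buffer = lines.pop() ?? '' // keep any partial line for the next chunk
    for (const line of lines) {
      if (!line.startsWith('data:')) continue
      const payload = line.slice(5).trim()
      if (payload === '[DONE]') return
      console.log('event', JSON.parse(payload)) // e.g. { type: 'text-delta', ... }
    }
  }
}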

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

Poem

A rabbit hums a streaming tune, 🐇
Cookies tucked beneath the moon,
Parts and content hop in line,
Components sparkle, tests align,
Hooray — the assistant chats just fine! 🎉

🚥 Pre-merge checks | ✅ 3 passed | ❌ 1 failed

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: docstring coverage is 12.50%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (3 passed)
  • Description Check ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: the PR title 'feat: chat assistant' directly matches the main feature being introduced, a new chat assistant component with UI, authentication, and streaming capabilities across the codebase.
  • Merge Conflict Detection ✅ Passed: no merge conflicts detected when merging into main.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


No actionable comments were generated in the recent review. 🎉


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

coderabbitai bot left a comment

Actionable comments posted: 9

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
packages/ui/package.json (1)

14-54: ⚠️ Potential issue | 🟡 Minor

Remove unused radix-ui unified package from dependencies.

The unified radix-ui package (line 54) is not imported or used anywhere in the codebase. All imports use the individual @radix-ui/react-* packages instead (packages/ui/src/radix/index.tsx re-exports them, and components import directly from individual packages). Remove the unused dependency to keep the dependency tree clean.

apps/next/app/api/auth/[...path]/handlers/magic-link.ts (1)

40-57: ⚠️ Potential issue | 🟠 Major

Avoid logging the raw response object — it may contain token values.

Line 43 logs the full response object in the error branch. If the response has a partially valid shape (e.g., token present but wrong type), the actual token value could end up in logs.

🔒 Suggested fix — log only structural metadata
       logger.error(
         {
-          response,
           responseType: typeof response,
           hasToken: response && typeof response === 'object' && 'token' in response,
           hasRefreshToken: response && typeof response === 'object' && 'refreshToken' in response,

As per coding guidelines, "SECURITY: Verify no secrets logged (use logger redaction features)".

🤖 Fix all issues with AI agents
In `@apps/fastify/openapi/openapi.json`:
- Around line 62-66: The OpenAPI schema for the "messages" array is too
permissive because "items: {}" accepts any JSON; update the "messages" schema to
constrain each item to an object and add a sensible "maxItems" (e.g., 50) to
prevent unbounded payloads. Specifically, replace "items: {}" with an object
schema (e.g., "items": { "type": "object", "properties": { "role": { "type":
"string" }, "content": { "type": "string" }, "name": { "type": "string" } },
"required": ["role","content"] }) and add "maxItems": 50; align the fields with
the runtime validations in resolveMessages so gateway/client generators and docs
match the route handler expectations.

In `@apps/fastify/src/routes/ai/chat.ts`:
- Around line 72-97: The function resolveMessages currently lies about its
return shape and unsafely casts convertToModelMessages results to
{role:string;content:string}[], so change resolveMessages to return
Promise<ModelMessage[]> (the actual type returned by convertToModelMessages),
remove the unsafe cast when returning convertToModelMessages(...) and stop
asserting the result as {role:string;content:string}[] elsewhere (e.g., the cast
at the later call site that re-casts to ModelMessage[]); keep the same logic for
UIMessage vs CoreMessage checks (use isUIMessage, isCoreMessage) and preserve
the convertToModelMessages options (tools, ignoreIncompleteToolCalls) but update
any callers to accept ModelMessage[] instead of the narrowed string-only content
type.

In `@apps/next/components/dashboard/chat-assistant.tsx`:
- Around line 86-101: The isStreamingAssistant flag is currently set using only
global status and causes every assistant message to be treated as streaming;
change it to be true only for the assistant message that is the last assistant
in messages. Before mapping (or at the start of the map), compute the last
assistant message (e.g., find the last entry where message.role === 'assistant')
and then set isStreamingAssistant = status === 'streaming' && message.id ===
lastAssistantMessage.id (or compare indexes) so MessageResponse only receives
isStreaming for that final assistant message.

In `@apps/next/e2e/chat-assistant.spec.ts`:
- Around line 50-55: The test is accepting auth error banners as a passing
outcome which masks broken auth; update the assertions to require the assistant
response instead of OR'ing with the auth error: replace the combined expect that
uses
sheet.locator('[data-role="assistant"]').or(sheet.getByText(/UNAUTHORIZED|Authentication
required/)) with a direct assertion that the assistant element
(sheet.locator('[data-role="assistant"]')) is visible and contains the expected
assistant content; if auth failures are an environmental possibility, wrap the
test in a conditional guard (e.g., skip/conditional based on an environment
flag) or add a separate test that explicitly asserts the auth error using
sheet.getByText(/UNAUTHORIZED|Authentication required/) so tests for assistant
behavior and auth behavior are separated (also apply the same change to the
other occurrence around the second block noted).

In `@apps/next/playwright.config.ts`:
- Line 4: Add the four environment variables referenced in
apps/next/playwright.config.ts to the test:e2e task's env list in turbo.json:
include PLAYWRIGHT_REUSE_SERVER (used where const reuseServer is defined),
JWT_SECRET, PLAYWRIGHT_HTML_REPORT_OPEN, and PLAYWRIGHT_TEST_BASE_URL so Turbo
will treat changes to these values as inputs to the test:e2e pipeline and avoid
serving stale cached results.

In `@packages/react/package.json`:
- Around line 28-31: The package.json has a version mismatch between the
`@ai-sdk/react` dependency and the ai dependency; update the "ai" entry to the ai
version that `@ai-sdk/react`@3.0.87 pins to (e.g., change "ai": "^6.0.81" to "ai":
"^6.0.87" or the exact pinned 6.0.x that corresponds to `@ai-sdk/react`@3.0.87) so
that "@ai-sdk/react" and "ai" match and avoid duplicate dependency trees.

In `@packages/react/src/components/login-form.spec.tsx`:
- Around line 214-223: The call to capturedOnError triggers a component state
update but isn’t wrapped in React's act(), causing warnings; update the test to
wrap the capturedOnError invocation inside await act(async () => { ... }) just
like the capturedOnSuccess case, ensuring you call capturedOnError(...) inside
the act callback and await the act call so the component updates are flushed
(reference capturedOnError and the existing capturedOnSuccess/act pattern in the
spec).

In `@packages/ui/package.json`:
- Line 45: Remove the unused "ai" dependency entry from the UI package.json
dependencies block; edit the package.json to delete the line containing "ai":
"^6.0.85" so it is no longer listed, then run your package manager's install
(e.g., npm install or pnpm install) and update lockfile to ensure the dependency
graph is clean.
- Line 75: Add a brief explanatory comment next to the pinned pre-release
dependency "@typescript/native-preview": "7.0.0-dev.20260124.1" describing why
this specific preview build is required (e.g., to enable a TypeScript preview
feature, test a breaking change, or maintain consistency across the monorepo),
note that it is a temporary/experimental pin and any constraints (compatibility
or update plan), and reference that this same pin exists in the root
package.json so maintainers know to update both places together.
🧹 Nitpick comments (18)
apps/fastify/vitest.config.ts (2)

26-35: Unreachable branch on line 28.

The else if (id.startsWith('/') && !id.startsWith('/@')) branch on line 28 is unreachable: path.isAbsolute(id) on line 26 already returns true for any string starting with '/' (on all platforms in Node.js), so it will always match first.

If you intend to handle Vite root-relative paths (e.g., /src/foo.js) that don't start with projectRoot, consider reordering the branches or merging the logic. Otherwise, this dead branch can be removed.


129-133: Broad regex could match unintended paths.

The pattern ^(.*\/)db\/index\.js$ matches any import ending in /db/index.js, not just project-local ones. In practice this is unlikely to collide with a dependency import, but you could tighten it to anchor against src/ or the project root if this ever causes false rewrites.
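If tightening is ever needed, a hypothetical variant anchored to the source tree (assuming the db module lives under src/) would be:

// Hypothetical: only rewrite db/index.js imports under the project's src/ tree,
// leaving any dependency path that happens to end in /db/index.js alone.
const dbIndexPattern = /^(.*\/)src\/db\/index\.js$/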

apps/fastify/src/routes/ai/chat.ts (3)

62-70: isCoreMessage only accepts string content, rejecting valid multimodal CoreMessage payloads.

The AI SDK's CoreMessage type allows content to be either a string or an array of content parts (for images, etc.). The check typeof content === 'string' on line 67 will cause valid multimodal messages to fall through to the error on line 94, returning a 400.

If multimodal support is not needed now, consider adding a comment documenting this intentional limitation. Otherwise, broaden the guard:

 function isCoreMessage(msg: unknown): msg is { role: string; content: string } {
   return (
     typeof msg === 'object' &&
     msg !== null &&
-    'content' in msg &&
-    typeof (msg as { content?: unknown }).content === 'string' &&
+    'content' in msg &&
+    (typeof (msg as { content?: unknown }).content === 'string' ||
+      Array.isArray((msg as { content?: unknown }).content)) &&
     'role' in msg
   )
 }

191-203: Streaming path looks correct; minor note on error handling.

The streamText → toUIMessageStreamResponse → manual header propagation pattern is reasonable. However, if streamText throws synchronously (e.g., invalid model), the error will propagate as an unhandled rejection since there's no try/catch around this block. The non-streaming path on line 205 is similarly unguarded but would produce a 500 via Fastify's default handler. Consider whether a try/catch with a structured error response is warranted here for consistency.
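If that guard is added, a rough sketch could look like the following (logging field names and the error message are assumptions, not the route's actual conventions):

// Sketch only: return a structured 500 if streamText throws before any bytes are flushed;
// once streaming has started, the status code can no longer be changed.
try {
  const result = streamText({
    ...baseOptions,
    experimental_transform: smoothStream({ delayInMs: 15, chunking: 'word' }),
  })
  return reply.send(result.toUIMessageStreamResponse())
} catch (error) {
  request.log.error({ err: error }, 'ai chat streaming failed')
  return reply.code(500).send({ error: 'Failed to generate a response' })
}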


22-22: Type.Any() disables all Fastify schema validation for message items.

This is consistent with the OpenAPI change but means the server accepts any payload shape in messages (numbers, strings, deeply nested objects, etc.) with validation deferred entirely to resolveMessages. This increases the attack surface for malformed payloads reaching application code. Consider using Type.Object({}, { additionalProperties: true }) to at least enforce that each item is an object.
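For illustration, a slightly stricter TypeBox shape, mirroring the 1–50 bound that resolveMessages already enforces, might look like this (the property layout is a sketch, not the route's actual schema):

import { Type } from '@sinclair/typebox'

// Sketch: require an object per item and bound the array length;
// detailed shape validation still happens in resolveMessages.
const chatBodySchema = Type.Object({
  messages: Type.Array(Type.Object({}, { additionalProperties: true }), {
    minItems: 1,
    maxItems: 50,
  }),
})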

apps/fastify/src/routes/ai/chat.test.ts (1)

49-58: SSE test may silently pass without verifying any streamed content chunks.

If the first (and only) data: line is [DONE], the assertion inside the if block on lines 53-57 is skipped entirely. The test would pass having only verified that some data: line exists, without confirming any actual content was streamed. Consider asserting that at least one non-[DONE] data line is present:

Suggested improvement
     const lines = body.split('\n').filter(line => line.startsWith('data:'))
     expect(lines.length).toBeGreaterThan(0)
-    const firstData = lines[0]?.replace(/^data:\s*/, '')
-    if (firstData && firstData !== '[DONE]') {
-      const parsed = JSON.parse(firstData) as { type?: string }
-      expect(parsed.type).toBeDefined()
-      expect(['start', 'text-start', 'text-delta', 'text-end', 'finish', 'finish-step']).toContain(
-        parsed.type,
-      )
-    }
+    const dataLines = lines
+      .map(line => line.replace(/^data:\s*/, ''))
+      .filter(d => d && d !== '[DONE]')
+    expect(dataLines.length).toBeGreaterThan(0)
+    const parsed = JSON.parse(dataLines[0]) as { type?: string }
+    expect(parsed.type).toBeDefined()
+    expect(['start', 'text-start', 'text-delta', 'text-end', 'finish', 'finish-step']).toContain(
+      parsed.type,
+    )
apps/next/app/api/auth/[...path]/handlers/magic-link.ts (1)

61-63: Redundant as unknown as cast after the type guard.

The type guard on lines 33-59 already confirms response.token and response.refreshToken are strings. You can access them directly without the double cast.

♻️ Suggested simplification
-    const accessToken = (response as unknown as { token: string; refreshToken: string }).token
-    const refreshToken = (response as unknown as { token: string; refreshToken: string })
-      .refreshToken
+    const accessToken = response.token as string
+    const refreshToken = response.refreshToken as string
packages/react/src/setup.ts (1)

77-84: Consider documenting that getAuthToken should be a stable reference.

Since ReactApiProvider wraps the config in useMemo(() => createReactApiConfig(config), [config]), an unstable getAuthToken callback (new function each render) would invalidate the memo and trigger re-renders of all context consumers. Consumers should wrap it in useCallback.
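For example, a consumer-side sketch (the provider's prop shape, import path, and baseUrl value are assumptions based on the description above):

import { useCallback } from 'react'
import { ReactApiProvider } from '@repo/react' // import path assumed

function AppProviders({ children }: { children: React.ReactNode }) {
  // Memoize so the provider's useMemo'd config is not rebuilt on every render.
  const getAuthToken = useCallback(async () => {
    const response = await fetch('/api/auth/get-session', { credentials: 'include' })
    if (!response.ok) return null
    const data = await response.json()
    return data.token ?? null
  }, [])

  return (
    <ReactApiProvider config={{ baseUrl: '/api', getAuthToken }}>
      {children}
    </ReactApiProvider>
  )
}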

apps/next/components/providers.tsx (1)

14-29: Consider deduplicating the session-fetch logic.

Both getAuthToken (line 14) and getRefreshToken (line 24) fetch the same /api/auth/get-session endpoint with identical options and only differ in the extracted field. If the session endpoint is called frequently (e.g., on every authenticated request), this could double the requests.

A shared helper would reduce duplication and could allow a single fetch to return both tokens:

♻️ Optional refactor
-async function getAuthToken() {
-  const response = await fetch('/api/auth/get-session', { credentials: 'include' })
-  if (!response.ok) return null
-  const data = await response.json()
-  return data.token ?? null
-}
+async function getSessionData() {
+  const response = await fetch('/api/auth/get-session', { credentials: 'include' })
+  if (!response.ok) return null
+  return response.json()
+}
+
+async function getAuthToken() {
+  const data = await getSessionData()
+  return data?.token ?? null
+}

 const coreClient = createClient({
   baseUrl: env.NEXT_PUBLIC_API_URL,
   getAuthToken,
   getRefreshToken: async () => {
-    const response = await fetch('/api/auth/get-session', { credentials: 'include' })
-    if (!response.ok) return null
-    const data = await response.json()
+    const data = await getSessionData()
     return data?.refreshToken ?? null
   },
apps/next/components/ai-elements/prompt-input.tsx (2)

15-28: Ambiguous export name Input — consider renaming to PromptInputForm.

The name Input is generic and easily confused with the HTML <input> element or the Input component from @repo/ui/components/input.tsx. Since this is actually a form wrapper, a more descriptive name would prevent import conflicts and improve readability at call sites.

♻️ Suggested rename
-export function Input({ onSubmit, className, children, ...props }: PromptInputFormProps) {
+export function PromptInputForm({ onSubmit, className, children, ...props }: PromptInputFormProps) {

38-61: Synthetic ChangeEvent construction is brittle but functional.

Object.assign({}, el, { value: newValue }) creates a plain object copy of the textarea element. This works for controlled components that only read event.target.value, but it won't behave correctly if any consumer inspects other DOM properties or methods on target. The cast through unknown (line 53) masks this.

This is a well-known pattern for programmatic textarea manipulation in React, so it's acceptable here, but worth a brief inline comment noting the limitation.
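Such a comment could sit directly on the construction, roughly like this (variable and prop names are placeholders for whatever the component actually uses):

// NOTE: `target` is a plain copy of the textarea, not the live DOM node. Controlled
// consumers that only read event.target.value are fine; anything that inspects other
// DOM properties or calls methods on target will not see the real element.
const syntheticTarget = Object.assign({}, el, { value: newValue })
const changeEvent = {
  target: syntheticTarget,
  currentTarget: syntheticTarget,
} as unknown as React.ChangeEvent<HTMLTextAreaElement>
onChange?.(changeEvent)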

apps/next/components/ai-elements/message.tsx (1)

14-29: forwardRef is no longer required in React 19.

In React 19, ref is a regular prop. You can simplify Message by accepting ref directly in the props instead of using forwardRef. The same applies to Conversation in conversation.tsx.

♻️ Simplified version
-export const Message = forwardRef<HTMLDivElement, MessageProps>(
-  ({ from = 'assistant', className, ...props }, ref) => (
+export function Message({ from = 'assistant', className, ref, ...props }: MessageProps & { ref?: React.Ref<HTMLDivElement> }) {
+  return (
     <div
       ref={ref}
       className={cn(
         'flex w-full',
         from === 'user' && 'justify-end',
         from === 'assistant' && 'justify-start',
         className,
       )}
       data-role={from}
       {...props}
     />
-  ),
-)
-Message.displayName = 'Message'
+  )
+}
apps/next/components/ai-elements/conversation.tsx (1)

11-21: Same forwardRef note as message.tsx — optional simplification available with React 19.

Since this project uses React 19.2.3, ref can be accepted as a regular prop.

packages/react/src/hooks/use-chat.ts (1)

27-44: Streaming detection logic is complex — consider simplifying.

Three separate conditions determine streaming intent (line 27's options.stream, header X-Stream, and keepStream). The stream variable on line 27 defaults to true when options.stream is not explicitly false, which means streaming is always on unless opted out. Given that, the additional checks on lines 38-39 for X-Stream header and keepStream may be redundant — they can only matter when stream is false, which is the opt-out case.

If the intent is to allow per-request overrides of the opt-out, a brief comment explaining when X-Stream or keepStream would be set (and by whom) would help future maintainers.

apps/next/e2e/chat-assistant.spec.ts (1)

10-11: Prefer semantic locators over text= selectors.

Playwright docs recommend getByText() / getByRole() over page.locator('text=...') for readability and resilience.

Suggested fix
-    await expect(page.locator('text=Dashboard')).toBeVisible({ timeout: 10000 })
-    await expect(page.locator(`text=Welcome back, ${authHelpers.TEST_EMAIL}`)).toBeVisible({
+    await expect(page.getByText('Dashboard')).toBeVisible({ timeout: 10000 })
+    await expect(page.getByText(`Welcome back, ${authHelpers.TEST_EMAIL}`)).toBeVisible({
apps/next/e2e/auth-helpers.ts (2)

54-59: Simplify the URL wait — expectedPath is a constant, no need for RegExp.

The static analysis tool flagged RegExp construction from a variable (ReDoS risk). While it's safe here since expectedPath is a hardcoded constant, the indirection is unnecessary.

Suggested fix
   async verifyMagicLink(page: Page, token: string) {
     const verifyUrl = `/api/auth/magic-link/verify?token=${encodeURIComponent(token)}&callbackURL=/`
     await page.goto(verifyUrl)
-    const expectedPath = '/' // from callbackURL in verifyUrl
-    await page.waitForURL(new RegExp(`^\\${expectedPath}$`), { timeout: 5000 })
+    await page.waitForURL('/', { timeout: 5000 })
   },

3-4: Consider reading API_URL from an environment variable.

Hardcoding http://localhost:3001 works for local dev but will break if the API runs on a different port or host in CI.
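For instance (PLAYWRIGHT_API_URL is a hypothetical variable name):

// Local default, overridable in CI.
const API_URL = process.env.PLAYWRIGHT_API_URL ?? 'http://localhost:3001'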

apps/next/components/dashboard/chat-assistant.tsx (1)

109-122: Heavy as casts for tool parts reduce type safety.

Consider defining a narrow type guard (e.g., isToolOutput(part)) to replace the inline casts on lines 113-116. This improves readability and lets TypeScript validate the access.
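A rough shape for such a guard, following the fields already probed by the inline casts (the real part types come from the AI SDK):

// Sketch: narrow message parts that carry a completed tool output.
interface ToolOutputPart {
  type: string
  state: 'output-available'
  output: unknown
}

function isToolOutput(part: { type?: unknown }): part is ToolOutputPart {
  return (
    typeof part.type === 'string' &&
    part.type.startsWith('tool-') &&
    'state' in part &&
    (part as { state?: unknown }).state === 'output-available' &&
    (part as { output?: unknown }).output != null
  )
}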

coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@apps/fastify/src/routes/ai/chat.ts`:
- Around line 199-211: The streaming branch manually extracts
headers/status/body from result.toUIMessageStreamResponse() and sets them on
reply.raw; instead, call reply.send() with the Web Response object directly to
let Fastify handle headers, status, and ReadableStream body. In the block where
shouldStream is true, replace the manual header/status/body manipulation with:
const response = result.toUIMessageStreamResponse(); return
reply.send(response); (remove reply.raw.setHeader, reply.raw.statusCode, and the
as never cast), keeping references to streamText, baseOptions, smoothStream and
toUIMessageStreamResponse.
🧹 Nitpick comments (8)
apps/next/e2e/chat-assistant.spec.ts (2)

4-14: Serial mode shares browser context — beforeEach re-login is appropriate but adds latency.

With mode: 'serial', all tests share a single page instance. The beforeEach performs a full login cycle for every test, which is correct for isolation but expensive. Since state (cookies/session) persists across serial tests, you could login once in beforeAll and just navigate to / in beforeEach — cutting per-test overhead significantly.

That said, the current approach is safe and correct.
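If that optimization is wanted, a sketch under the serial-mode, shared-page assumption (authHelpers.login is a hypothetical one-shot wrapper around the existing magic-link helpers):

import { test, type Page } from '@playwright/test'
import { authHelpers } from './auth-helpers'

test.describe.configure({ mode: 'serial' })

let page: Page

test.beforeAll(async ({ browser }) => {
  page = await browser.newPage()
  await authHelpers.login(page) // hypothetical helper: request + verify magic link once
})

test.beforeEach(async () => {
  await page.goto('/') // session cookies persist across serial tests
})

test.afterAll(async () => {
  await page.close()
})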


10-13: Prefer accessible locators over text= selector for consistency.

Lines 10–13 use page.locator('text=...') while the rest of the file uses getByRole/getByText. Consider switching to page.getByText('Dashboard') for consistency with the Playwright best-practice recommendation of using accessible locators.

Suggested diff
-    await expect(page.locator('text=Dashboard')).toBeVisible({ timeout: 10000 })
-    await expect(page.locator(`text=Welcome back, ${authHelpers.TEST_EMAIL}`)).toBeVisible({
+    await expect(page.getByText('Dashboard')).toBeVisible({ timeout: 10000 })
+    await expect(page.getByText(`Welcome back, ${authHelpers.TEST_EMAIL}`)).toBeVisible({
       timeout: 5000,
     })
apps/next/components/dashboard/chat-assistant.tsx (3)

60-60: Use findLast() instead of copying and reversing the array.

[...messages].reverse().find(...) allocates a new array on every render. Array.prototype.findLast() is cleaner and avoids the copy.

Suggested diff
-  const lastAssistantMessage = [...messages].reverse().find(m => m.role === 'assistant')
+  const lastAssistantMessage = messages.findLast(m => m.role === 'assistant')

113-126: Repeated type assertions for tool-output parts are fragile.

The (p as { state: string; output?: unknown }) cast appears twice and (p as { output?: unknown }) also appears. Consider extracting a type guard or a local typed variable after the initial check to avoid repeated unsafe casts.

Suggested approach
                    } else if (
                      typeof p.type === 'string' &&
                      p.type.startsWith('tool-') &&
-                      'state' in p &&
-                      (p as { state: string; output?: unknown }).state === 'output-available' &&
-                      (p as { output?: unknown }).output != null
+                      'state' in p && 'output' in p
                    ) {
-                      const output = (p as { output?: unknown }).output
+                      const toolPart = p as { state: string; output: unknown }
+                      if (toolPart.state !== 'output-available' || toolPart.output == null) return
+                      const output = toolPart.output
                       const str = typeof output === 'string' ? output : JSON.stringify(output)

164-178: Suggestions remain visible after the conversation has started.

The SUGGESTIONS bar is always rendered regardless of message count. If the intent is to guide new users, consider hiding it once messages.length > 0 to reclaim vertical space in the chat. If this is intentional (persistent quick-actions), disregard.
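If hiding them is the intent, it is a one-line condition (the component and handler names here are placeholders):

{/* Sketch: show quick-start suggestions only while the conversation is empty */}
{messages.length === 0 && (
  <SuggestionsBar suggestions={SUGGESTIONS} onSelect={handleSuggestion} />
)}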

apps/fastify/openapi/openapi.json (1)

90-93: The parts array still lacks maxItems and has unconstrained items: {}.

The static analysis tool correctly flags this: parts accepts arrays of any size containing any JSON value. While runtime validation exists in resolveMessages, the OpenAPI contract should still provide a reasonable bound to protect against oversized payloads at the gateway/documentation level.

Suggested improvement
                            "parts": {
                              "type": "array",
-                             "items": {}
+                             "maxItems": 100,
+                             "items": {
+                               "type": "object"
+                             }
                            }
apps/fastify/src/routes/ai/chat.ts (2)

64-71: isUIMessage doesn't verify role exists, unlike isCoreMessage.

The isCoreMessage guard checks for both role and content, but isUIMessage only checks for parts. While the TypeBox schema enforces role at the route boundary, this guard is less robust if reused outside the validated path. For symmetry and defensive correctness:

Suggested fix
 function isUIMessage(msg: unknown): msg is { role: string; parts: unknown[] } {
   return (
     typeof msg === 'object' &&
     msg !== null &&
+    'role' in msg &&
     'parts' in msg &&
     Array.isArray((msg as { parts?: unknown[] }).parts)
   )
 }

177-177: Nit: verbose type annotation can be simplified.

Awaited<ReturnType<typeof resolveMessages>> is just ModelMessage[]. The direct type is clearer:

-      let messages: Awaited<ReturnType<typeof resolveMessages>>
+      let messages: ModelMessage[]

Comment on lines 199 to 211
 if (shouldStream) {
-  const result = streamText(baseOptions)
-  reply.header('Content-Type', 'text/event-stream')
-  reply.header('Cache-Control', 'no-cache')
-  reply.header('Connection', 'keep-alive')
-  const nodeStream = Readable.from(result.textStream)
-  return reply.send(nodeStream as never)
+  const result = streamText({
+    ...baseOptions,
+    experimental_transform: smoothStream({
+      delayInMs: 15,
+      chunking: 'word',
+    }),
+  })
+  const response = result.toUIMessageStreamResponse()
+  for (const [k, v] of response.headers) reply.raw.setHeader(k, v)
+  reply.raw.statusCode = response.status
+  return reply.send(response.body as never)
 }

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

head -50 apps/fastify/src/routes/ai/chat.ts

Repository: blockmatic/basilic

Length of output: 1561


🏁 Script executed:

cat package.json | grep -A 2 -B 2 fastify

Repository: blockmatic/basilic

Length of output: 464


🏁 Script executed:

fd package.json | head -5

Repository: blockmatic/basilic

Length of output: 173


🏁 Script executed:

cat apps/fastify/package.json | grep -E '"fastify"|"ai"'

Repository: blockmatic/basilic

Length of output: 107


🏁 Script executed:

sed -n '199,215p' apps/fastify/src/routes/ai/chat.ts

Repository: blockmatic/basilic

Length of output: 636


🌐 Web query:

Fastify 5 reply.send() Web ReadableStream support

💡 Result:

Fastify v5 does support sending a Web/WHATWG ReadableStream via reply.send().

  • In the v5 Reply reference, ReadableStream is explicitly supported: Fastify treats it like a Node stream, and the payload is considered pre-serialized (so it’s sent unmodified and won’t go through response validation/serialization). [1]
  • Fastify v5 also supports sending a Fetch API Response object (e.g., new Response(readableStream, { headers, status })) via reply.send(), also treated as pre-serialized. [1]

Example (from the v5 docs):

const fs = require('node:fs')
const { ReadableStream } = require('node:stream/web')

fastify.get('/streams', (req, reply) => {
  const stream = fs.createReadStream('some-file')
  reply.header('Content-Type', 'application/octet-stream')
  reply.send(ReadableStream.from(stream))
})

[1]

Caveat: ecosystem plugins may not fully support Web ReadableStreams. For example, @fastify/compress can error when the client requests compression (e.g. curl --compressed) and the payload is a ReadableStream. [2] In those cases, a common workaround is to convert to a Node stream (e.g. Readable.fromWeb(res.body)) before reply.send(), or disable compression for that route.

Sources: Fastify v5 Reply docs [1]; @fastify/compress issue about ReadableStream + compression [2].


🌐 Web query:

toUIMessageStreamResponse() AI SDK return type

💡 Result:

In the Vercel AI SDK (ai package), toUIMessageStreamResponse() returns a standard Web Response object.

TypeScript signature:

toUIMessageStreamResponse(
  options?: ResponseInit & UIMessageStreamOptions
): Response

[1]


🏁 Script executed:

rg -A 10 "toUIMessageStreamResponse" --type ts

Repository: blockmatic/basilic

Length of output: 808


🏁 Script executed:

rg -B 5 "reply.raw.setHeader" apps/fastify/src/routes/ai/chat.ts

Repository: blockmatic/basilic

Length of output: 276


🏁 Script executed:

rg "@fastify/compress" apps/fastify/

Repository: blockmatic/basilic

Length of output: 44


🌐 Web query:

Fastify 5 reply.send() Web Response object support

💡 Result:

Fastify v5 does support sending a WHATWG “Web Response” (Response) object via reply.send().

  • reply.send(response) accepts a Response instance and will use it to set the payload, status code, and headers in one step. The payload inside Response is treated as pre-serialized (so Fastify sends it unmodified and skips response validation). [1]
  • Caveat: when you send a Response, the status code and headers won’t be reflected in reply.statusCode / reply.getHeaders() the same way, because Response headers/status are read-only and this can be confusing in hooks like onSend. [1]
  • Related: Fastify v5 also documents support for WHATWG ReadableStream via reply.send(). [1]

Example (from the docs, adapted):

fastify.get('/file', async (req, reply) => {
  const { ReadableStream } = require('node:stream/web')
  const fs = require('node:fs')

  const readableStream = ReadableStream.from(fs.createReadStream('some-file'))
  const response = new Response(readableStream, {
    status: 200,
    headers: { 'content-type': 'application/octet-stream' }
  })

  return reply.send(response)
})

[1]

Sources: Fastify Reply reference (reply.send() → “Response”, “ReadableStream”). [1]


🏁 Script executed:

cat apps/fastify/src/routes/ai/chat.ts | grep -E "import.*fastify|FastifyPluginAsync" | head -5

Repository: blockmatic/basilic

Length of output: 242


🏁 Script executed:

rg "toUIMessageStreamResponse|reply.send" apps/fastify/src/routes --type ts -B 2 -A 2

Repository: blockmatic/basilic

Length of output: 916


Send the Web Response object directly instead of manually extracting headers, status, and body.

Fastify 5 natively supports sending Web Response objects, which automatically handles the status code, headers, and ReadableStream body in one call. The current approach of manually setting reply.raw.statusCode and reply.raw.setHeader() then calling reply.send() mixes low-level Node.js APIs with Fastify's reply methods, risking header conflicts and losing type safety (hence the as never cast).

Replace with:

       const result = streamText({
         ...baseOptions,
         experimental_transform: smoothStream({
           delayInMs: 15,
           chunking: 'word',
         }),
       })
       const response = result.toUIMessageStreamResponse()
-      for (const [k, v] of response.headers) reply.raw.setHeader(k, v)
-      reply.raw.statusCode = response.status
-      return reply.send(response.body as never)
+      return reply.send(response)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
       if (shouldStream) {
         const result = streamText({
           ...baseOptions,
           experimental_transform: smoothStream({
             delayInMs: 15,
             chunking: 'word',
           }),
         })
         const response = result.toUIMessageStreamResponse()
-        for (const [k, v] of response.headers) reply.raw.setHeader(k, v)
-        reply.raw.statusCode = response.status
-        return reply.send(response.body as never)
+        return reply.send(response)
       }

coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@apps/next/e2e/chat-assistant.spec.ts`:
- Around line 51-55: Replace the racy point-in-time check using
errorEl.isVisible() with a racing wait between the error and success locators:
use sheet.getByTestId('chat-error').waitFor({state: 'visible'}) and the
assistant locator (assistantEl) waitFor({state: 'visible'}) in a Promise.race
(or equivalent) so whichever appears first determines the outcome; if the error
locator wins, read its textContent and throw a descriptive Error, otherwise
continue when assistantEl becomes visible. Apply this change for both
occurrences that use errorEl/assistantEl (the blocks around the existing
isVisible checks).
🧹 Nitpick comments (1)
apps/next/e2e/chat-assistant.spec.ts (1)

39-40: 90-second test timeout is generous — consider whether it's justified.

test.setTimeout(90000) is set for both chat tests. Given the 60s locator timeout already acts as the effective bound, this is fine, but if AI response times are typically much shorter, a tighter test timeout (e.g., 70s) would surface regressions faster in CI.

Also applies to: 62-63

Comment on lines +51 to +55
    // Fail fast if API error is shown
    const errorEl = sheet.getByTestId('chat-error')
    if (await errorEl.isVisible()) {
      throw new Error(`Chat API error: ${await errorEl.textContent()}`)
    }

⚠️ Potential issue | 🟡 Minor

The isVisible() "fail fast" check is racy and may not catch errors.

errorEl.isVisible() is a point-in-time snapshot, not a Playwright auto-retrying assertion. It runs immediately after the click/send — before the API call has had time to return an error. This means:

  1. If the error renders after this check runs, it's missed, and you wait the full 60s before the assistantEl assertion fails with a less informative message.
  2. It can never reliably "fail fast" unless the error is already in the DOM.

A more reliable approach is to race the two locators:

Proposed fix
-    // Fail fast if API error is shown
-    const errorEl = sheet.getByTestId('chat-error')
-    if (await errorEl.isVisible()) {
-      throw new Error(`Chat API error: ${await errorEl.textContent()}`)
-    }
-    // Assistant must respond (auth is validated in beforeEach)
-    const assistantEl = sheet.locator('[data-role="assistant"]')
-    await expect(assistantEl).toBeVisible({ timeout: 60000 })
-    await expect(assistantEl).not.toBeEmpty()
+    // Race: either assistant responds or an API error appears
+    const assistantEl = sheet.locator('[data-role="assistant"]')
+    const errorEl = sheet.getByTestId('chat-error')
+    const winner = await Promise.race([
+      assistantEl.waitFor({ state: 'visible', timeout: 60000 }).then(() => 'assistant' as const),
+      errorEl.waitFor({ state: 'visible', timeout: 60000 }).then(() => 'error' as const),
+    ])
+    if (winner === 'error') {
+      throw new Error(`Chat API error: ${await errorEl.textContent()}`)
+    }
+    await expect(assistantEl).not.toBeEmpty()

Apply the same pattern in both tests (lines 51–59 and 77–85).

Also applies to: 77-81


gaboesquivel merged commit 5200223 into main on Feb 13, 2026. 9 checks passed.
gaboesquivel deleted the chat-assistant branch on February 13, 2026 at 15:55.