feat: chat assistant #83
The latest updates on your projects. Learn more about Vercel for GitHub.
2 Skipped Deployments
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings. Use the following commands to manage reviews:
Use the checkboxes below for quick actions:
Walkthrough

Normalizes dual message formats and SSE streaming for /ai/chat, adds a Chat Assistant UI with supporting components and E2E tests, switches magic-link to cookie-based tokens, exposes baseUrl/getAuthToken for client hooks, and updates test/build environments (JWT_SECRET, TS resolution, transpile rules).
Sequence Diagram(s)

```mermaid
sequenceDiagram
  participant Client as Next.js Client
  participant Hook as useChatFromConfig
  participant Fastify as Fastify /ai/chat
  participant LLM as LLM Provider
  Client->>Hook: sendMessage(rawMessages)
  Hook->>Hook: getAuthToken()
  Hook->>Fastify: POST /ai/chat (Authorization: Bearer token, Accept: text/event-stream)
  Fastify->>Fastify: resolveMessages(rawMessages) -> ModelMessage[]
  Fastify->>LLM: streamText(model, messages)
  LLM-->>Fastify: streaming chunks/events
  Fastify-->>Client: text/event-stream (data: {...})
  Client->>Client: parse SSE lines, render streaming MessageResponse
  Note over LLM,Client: repeat until [DONE]
```
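The client-side "parse SSE lines" step in the diagram can be sketched as a small helper. This is a hedged illustration, not the actual `useChatFromConfig` internals: it assumes each event arrives as a `data: <json>` line and that the stream ends with `data: [DONE]`, as shown in the diagram.

```typescript
// Illustrative sketch of the SSE parsing step from the diagram above.
// Assumes "data: <json>" lines terminated by "data: [DONE]"; the real
// hook implementation may differ.
function parseSseChunk(chunk: string): unknown[] {
  const events: unknown[] = []
  for (const line of chunk.split('\n')) {
    if (!line.startsWith('data:')) continue
    const payload = line.slice('data:'.length).trim()
    if (payload === '[DONE]') break
    events.push(JSON.parse(payload))
  }
  return events
}
```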
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
🚥 Pre-merge checks | ✅ 3 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (3 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches
🧪 Generate unit tests (beta)
No actionable comments were generated in the recent review. 🎉 Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
packages/ui/package.json (1)
14-54: ⚠️ Potential issue | 🟡 Minor: Remove unused `radix-ui` unified package from dependencies.

The unified `radix-ui` package (line 54) is not imported or used anywhere in the codebase. All imports use the individual `@radix-ui/react-*` packages instead (packages/ui/src/radix/index.tsx re-exports them, and components import directly from individual packages). Remove the unused dependency to keep the dependency tree clean.

apps/next/app/api/auth/[...path]/handlers/magic-link.ts (1)
40-57: ⚠️ Potential issue | 🟠 Major: Avoid logging the raw `response` object — it may contain token values.

Line 43 logs the full `response` object in the error branch. If the response has a partially valid shape (e.g., token present but wrong type), the actual token value could end up in logs.

🔒 Suggested fix — log only structural metadata

```diff
 logger.error(
   {
-    response,
+    responseType: typeof response,
+    hasToken: response && typeof response === 'object' && 'token' in response,
+    hasRefreshToken:
+      response && typeof response === 'object' && 'refreshToken' in response,
```

As per coding guidelines, "SECURITY: Verify no secrets logged (use logger redaction features)".
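The structural-metadata idea in the suggested fix can be expressed as a small standalone helper. This is an illustrative sketch (the helper name is hypothetical) that derives only the booleans shown in the diff, so the raw token values never reach the logger:

```typescript
// Hypothetical helper: derive only structural metadata from a response
// object so that logs never contain raw token values.
function structuralMeta(response: unknown) {
  const isObj = response !== null && typeof response === 'object'
  return {
    responseType: typeof response,
    hasToken: isObj && 'token' in (response as object),
    hasRefreshToken: isObj && 'refreshToken' in (response as object),
  }
}
```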
🤖 Fix all issues with AI agents
In `@apps/fastify/openapi/openapi.json`:
- Around line 62-66: The OpenAPI schema for the "messages" array is too
permissive because "items: {}" accepts any JSON; update the "messages" schema to
constrain each item to an object and add a sensible "maxItems" (e.g., 50) to
prevent unbounded payloads. Specifically, replace "items: {}" with an object
schema (e.g., "items": { "type": "object", "properties": { "role": { "type":
"string" }, "content": { "type": "string" }, "name": { "type": "string" } },
"required": ["role","content"] }) and add "maxItems": 50; align the fields with
the runtime validations in resolveMessages so gateway/client generators and docs
match the route handler expectations.
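Assembled from the prompt above, the suggested `messages` schema would look roughly like this. The field names and the `maxItems` value come from the prompt itself; treat this as a sketch to be aligned with the runtime checks in `resolveMessages`, not the final schema:

```json
{
  "messages": {
    "type": "array",
    "maxItems": 50,
    "items": {
      "type": "object",
      "required": ["role", "content"],
      "properties": {
        "role": { "type": "string" },
        "content": { "type": "string" },
        "name": { "type": "string" }
      }
    }
  }
}
```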
In `@apps/fastify/src/routes/ai/chat.ts`:
- Around line 72-97: The function resolveMessages currently lies about its
return shape and unsafely casts convertToModelMessages results to
{role:string;content:string}[], so change resolveMessages to return
Promise<ModelMessage[]> (the actual type returned by convertToModelMessages),
remove the unsafe cast when returning convertToModelMessages(...) and stop
asserting the result as {role:string;content:string}[] elsewhere (e.g., the cast
at the later call site that re-casts to ModelMessage[]); keep the same logic for
UIMessage vs CoreMessage checks (use isUIMessage, isCoreMessage) and preserve
the convertToModelMessages options (tools, ignoreIncompleteToolCalls) but update
any callers to accept ModelMessage[] instead of the narrowed string-only content
type.
In `@apps/next/components/dashboard/chat-assistant.tsx`:
- Around line 86-101: The isStreamingAssistant flag is currently set using only
global status and causes every assistant message to be treated as streaming;
change it to be true only for the assistant message that is the last assistant
in messages. Before mapping (or at the start of the map), compute the last
assistant message (e.g., find the last entry where message.role === 'assistant')
and then set isStreamingAssistant = status === 'streaming' && message.id ===
lastAssistantMessage.id (or compare indexes) so MessageResponse only receives
isStreaming for that final assistant message.
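The fix described above can be sketched as a pure function. The message shape and the `status === 'streaming'` comparison are taken from the prompt; the function and type names here are illustrative, not the component's actual code:

```typescript
// Illustrative sketch: only the last assistant message is flagged as
// streaming. The ChatMessage shape is an assumption for this example.
type ChatMessage = { id: string; role: 'user' | 'assistant' }

// Find the id of the last assistant message without copying the array.
function lastAssistantId(messages: ChatMessage[]): string | undefined {
  for (let i = messages.length - 1; i >= 0; i--) {
    const m = messages[i]
    if (m && m.role === 'assistant') return m.id
  }
  return undefined
}

function streamingFlags(messages: ChatMessage[], status: string): Record<string, boolean> {
  const lastId = lastAssistantId(messages)
  const flags: Record<string, boolean> = {}
  for (const m of messages) {
    flags[m.id] = status === 'streaming' && m.role === 'assistant' && m.id === lastId
  }
  return flags
}
```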
In `@apps/next/e2e/chat-assistant.spec.ts`:
- Around line 50-55: The test is accepting auth error banners as a passing
outcome which masks broken auth; update the assertions to require the assistant
response instead of OR'ing with the auth error: replace the combined expect that
uses
sheet.locator('[data-role="assistant"]').or(sheet.getByText(/UNAUTHORIZED|Authentication
required/)) with a direct assertion that the assistant element
(sheet.locator('[data-role="assistant"]')) is visible and contains the expected
assistant content; if auth failures are an environmental possibility, wrap the
test in a conditional guard (e.g., skip/conditional based on an environment
flag) or add a separate test that explicitly asserts the auth error using
sheet.getByText(/UNAUTHORIZED|Authentication required/) so tests for assistant
behavior and auth behavior are separated (also apply the same change to the
other occurrence around the second block noted).
In `@apps/next/playwright.config.ts`:
- Line 4: Add the four environment variables referenced in
apps/next/playwright.config.ts to the test:e2e task's env list in turbo.json:
include PLAYWRIGHT_REUSE_SERVER (used where const reuseServer is defined),
JWT_SECRET, PLAYWRIGHT_HTML_REPORT_OPEN, and PLAYWRIGHT_TEST_BASE_URL so Turbo
will treat changes to these values as inputs to the test:e2e pipeline and avoid
serving stale cached results.
In `@packages/react/package.json`:
- Around line 28-31: The package.json has a version mismatch between the
`@ai-sdk/react` dependency and the ai dependency; update the "ai" entry to the ai
version that `@ai-sdk/react`@3.0.87 pins to (e.g., change "ai": "^6.0.81" to "ai":
"^6.0.87" or the exact pinned 6.0.x that corresponds to `@ai-sdk/react`@3.0.87) so
that "@ai-sdk/react" and "ai" match and avoid duplicate dependency trees.
In `@packages/react/src/components/login-form.spec.tsx`:
- Around line 214-223: The call to capturedOnError triggers a component state
update but isn’t wrapped in React's act(), causing warnings; update the test to
wrap the capturedOnError invocation inside await act(async () => { ... }) just
like the capturedOnSuccess case, ensuring you call capturedOnError(...) inside
the act callback and await the act call so the component updates are flushed
(reference capturedOnError and the existing capturedOnSuccess/act pattern in the
spec).
In `@packages/ui/package.json`:
- Line 45: Remove the unused "ai" dependency entry from the UI package.json
dependencies block; edit the package.json to delete the line containing "ai":
"^6.0.85" so it is no longer listed, then run your package manager's install
(e.g., npm install or pnpm install) and update lockfile to ensure the dependency
graph is clean.
- Line 75: Add a brief explanatory comment next to the pinned pre-release
dependency "@typescript/native-preview": "7.0.0-dev.20260124.1" describing why
this specific preview build is required (e.g., to enable a TypeScript preview
feature, test a breaking change, or maintain consistency across the monorepo),
note that it is a temporary/experimental pin and any constraints (compatibility
or update plan), and reference that this same pin exists in the root
package.json so maintainers know to update both places together.
🧹 Nitpick comments (18)
apps/fastify/vitest.config.ts (2)
26-35: Unreachable branch on line 28.

The `else if (id.startsWith('/') && !id.startsWith('/@'))` branch on line 28 is unreachable: `path.isAbsolute(id)` on line 26 already returns `true` for any string starting with `'/'` (on all platforms in Node.js), so it will always match first.

If you intend to handle Vite root-relative paths (e.g., `/src/foo.js`) that don't start with `projectRoot`, consider reordering the branches or merging the logic. Otherwise, this dead branch can be removed.
129-133: Broad regex could match unintended paths.

The pattern `^(.*\/)db\/index\.js$` matches any import ending in `/db/index.js`, not just project-local ones. In practice this is unlikely to collide with a dependency import, but you could tighten it to anchor against `src/` or the project root if this ever causes false rewrites.

apps/fastify/src/routes/ai/chat.ts (3)
62-70: `isCoreMessage` only accepts string content, rejecting valid multimodal `CoreMessage` payloads.

The AI SDK's `CoreMessage` type allows `content` to be either a `string` or an array of content parts (for images, etc.). The check `typeof content === 'string'` on line 67 will cause valid multimodal messages to fall through to the error on line 94, returning a 400.

If multimodal support is not needed now, consider adding a comment documenting this intentional limitation. Otherwise, broaden the guard:

```diff
 function isCoreMessage(msg: unknown): msg is { role: string; content: string } {
   return (
     typeof msg === 'object' &&
     msg !== null &&
-    'content' in msg &&
-    typeof (msg as { content?: unknown }).content === 'string' &&
+    'content' in msg &&
+    (typeof (msg as { content?: unknown }).content === 'string' ||
+      Array.isArray((msg as { content?: unknown }).content)) &&
     'role' in msg
   )
 }
```
191-203: Streaming path looks correct; minor note on error handling.

The `streamText` → `toUIMessageStreamResponse` → manual header propagation pattern is reasonable. However, if `streamText` throws synchronously (e.g., invalid model), the error will propagate as an unhandled rejection since there's no try/catch around this block. The non-streaming path on line 205 is similarly unguarded but would produce a 500 via Fastify's default handler. Consider whether a try/catch with a structured error response is warranted here for consistency.
22-22: `Type.Any()` disables all Fastify schema validation for message items.

This is consistent with the OpenAPI change but means the server accepts any payload shape in messages (numbers, strings, deeply nested objects, etc.) with validation deferred entirely to `resolveMessages`. This increases the attack surface for malformed payloads reaching application code. Consider using `Type.Object({}, { additionalProperties: true })` to at least enforce that each item is an object.
49-58: SSE test may silently pass without verifying any streamed content chunks.

If the first (and only) `data:` line is `[DONE]`, the assertion inside the `if` block on lines 53-57 is skipped entirely. The test would pass having only verified that some `data:` line exists, without confirming any actual content was streamed. Consider asserting that at least one non-`[DONE]` data line is present:

Suggested improvement

```diff
 const lines = body.split('\n').filter(line => line.startsWith('data:'))
 expect(lines.length).toBeGreaterThan(0)
-const firstData = lines[0]?.replace(/^data:\s*/, '')
-if (firstData && firstData !== '[DONE]') {
-  const parsed = JSON.parse(firstData) as { type?: string }
-  expect(parsed.type).toBeDefined()
-  expect(['start', 'text-start', 'text-delta', 'text-end', 'finish', 'finish-step']).toContain(
-    parsed.type,
-  )
-}
+const dataLines = lines
+  .map(line => line.replace(/^data:\s*/, ''))
+  .filter(d => d && d !== '[DONE]')
+expect(dataLines.length).toBeGreaterThan(0)
+const parsed = JSON.parse(dataLines[0]) as { type?: string }
+expect(parsed.type).toBeDefined()
+expect(['start', 'text-start', 'text-delta', 'text-end', 'finish', 'finish-step']).toContain(
+  parsed.type,
+)
```

apps/next/app/api/auth/[...path]/handlers/magic-link.ts (1)
61-63: Redundant `as unknown as` cast after the type guard.

The type guard on lines 33-59 already confirms `response.token` and `response.refreshToken` are strings. You can access them directly without the double cast.

♻️ Suggested simplification

```diff
-const accessToken = (response as unknown as { token: string; refreshToken: string }).token
-const refreshToken = (response as unknown as { token: string; refreshToken: string })
-  .refreshToken
+const accessToken = response.token as string
+const refreshToken = response.refreshToken as string
```

packages/react/src/setup.ts (1)
77-84: Consider documenting that `getAuthToken` should be a stable reference.

Since `ReactApiProvider` wraps the config in `useMemo(() => createReactApiConfig(config), [config])`, an unstable `getAuthToken` callback (new function each render) would invalidate the memo and trigger re-renders of all context consumers. Consumers should wrap it in `useCallback`.

apps/next/components/providers.tsx (1)
14-29: Consider deduplicating the session-fetch logic.

Both `getAuthToken` (line 14) and `getRefreshToken` (line 24) fetch the same `/api/auth/get-session` endpoint with identical options and only differ in the extracted field. If the session endpoint is called frequently (e.g., on every authenticated request), this could double the requests.

A shared helper would reduce duplication and could allow a single fetch to return both tokens:

♻️ Optional refactor

```diff
-async function getAuthToken() {
-  const response = await fetch('/api/auth/get-session', { credentials: 'include' })
-  if (!response.ok) return null
-  const data = await response.json()
-  return data.token ?? null
-}
+async function getSessionData() {
+  const response = await fetch('/api/auth/get-session', { credentials: 'include' })
+  if (!response.ok) return null
+  return response.json()
+}
+
+async function getAuthToken() {
+  const data = await getSessionData()
+  return data?.token ?? null
+}

 const coreClient = createClient({
   baseUrl: env.NEXT_PUBLIC_API_URL,
   getAuthToken,
   getRefreshToken: async () => {
-    const response = await fetch('/api/auth/get-session', { credentials: 'include' })
-    if (!response.ok) return null
-    const data = await response.json()
+    const data = await getSessionData()
     return data?.refreshToken ?? null
   },
```
15-28: Ambiguous export name `Input` — consider renaming to `PromptInputForm`.

The name `Input` is generic and easily confused with the HTML `<input>` element or the `Input` component from `@repo/ui/components/input.tsx`. Since this is actually a form wrapper, a more descriptive name would prevent import conflicts and improve readability at call sites.

♻️ Suggested rename

```diff
-export function Input({ onSubmit, className, children, ...props }: PromptInputFormProps) {
+export function PromptInputForm({ onSubmit, className, children, ...props }: PromptInputFormProps) {
```
38-61: Synthetic `ChangeEvent` construction is brittle but functional.

`Object.assign({}, el, { value: newValue })` creates a plain object copy of the textarea element. This works for controlled components that only read `event.target.value`, but it won't behave correctly if any consumer inspects other DOM properties or methods on `target`. The cast through `unknown` (line 53) masks this.

This is a well-known pattern for programmatic textarea manipulation in React, so it's acceptable here, but worth a brief inline comment noting the limitation.
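The limitation can be demonstrated outside the DOM with a class standing in for the textarea element (the `FakeTextarea` name is illustrative): `Object.assign` copies only own enumerable properties, so methods that live on the prototype, as DOM methods do, are lost on the synthetic target.

```typescript
// FakeTextarea stands in for a real DOM element; its method lives on the
// prototype, just like real DOM element methods do.
class FakeTextarea {
  value = 'old'
  focus(): string {
    return 'focused'
  }
}

const el = new FakeTextarea()
// Plain-object copy, as in the synthetic ChangeEvent pattern under review.
const syntheticTarget = Object.assign({}, el, { value: 'new' })
// syntheticTarget.value is 'new', but focus() is gone: Object.assign copies
// own enumerable properties only, not prototype methods.
```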
apps/next/components/ai-elements/message.tsx (1)
14-29: `forwardRef` is no longer required in React 19.

In React 19, `ref` is a regular prop. You can simplify `Message` by accepting `ref` directly in the props instead of using `forwardRef`. The same applies to `Conversation` in conversation.tsx.

♻️ Simplified version

```diff
-export const Message = forwardRef<HTMLDivElement, MessageProps>(
-  ({ from = 'assistant', className, ...props }, ref) => (
+export function Message({ from = 'assistant', className, ref, ...props }: MessageProps & { ref?: React.Ref<HTMLDivElement> }) {
+  return (
     <div
       ref={ref}
       className={cn(
         'flex w-full',
         from === 'user' && 'justify-end',
         from === 'assistant' && 'justify-start',
         className,
       )}
       data-role={from}
       {...props}
     />
-  ),
-)
-Message.displayName = 'Message'
+  )
+}
```

apps/next/components/ai-elements/conversation.tsx (1)
11-21: Same `forwardRef` note as message.tsx — optional simplification available with React 19.

Since this project uses React 19.2.3, `ref` can be accepted as a regular prop.

packages/react/src/hooks/use-chat.ts (1)
27-44: Streaming detection logic is complex — consider simplifying.

Three separate conditions determine streaming intent (line 27's `options.stream`, header `X-Stream`, and `keepStream`). The `stream` variable on line 27 defaults to `true` when `options.stream` is not explicitly `false`, which means streaming is always on unless opted out. Given that, the additional checks on lines 38-39 for the `X-Stream` header and `keepStream` may be redundant — they can only matter when `stream` is `false`, which is the opt-out case.

If the intent is to allow per-request overrides of the opt-out, a brief comment explaining when `X-Stream` or `keepStream` would be set (and by whom) would help future maintainers.

apps/next/e2e/chat-assistant.spec.ts (1)
10-11: Prefer semantic locators over `text=` selectors.

Playwright docs recommend `getByText()`/`getByRole()` over `page.locator('text=...')` for readability and resilience.

Suggested fix

```diff
-await expect(page.locator('text=Dashboard')).toBeVisible({ timeout: 10000 })
-await expect(page.locator(`text=Welcome back, ${authHelpers.TEST_EMAIL}`)).toBeVisible({
+await expect(page.getByText('Dashboard')).toBeVisible({ timeout: 10000 })
+await expect(page.getByText(`Welcome back, ${authHelpers.TEST_EMAIL}`)).toBeVisible({
```
54-59: Simplify the URL wait — `expectedPath` is a constant, no need for RegExp.

The static analysis tool flagged RegExp construction from a variable (ReDoS risk). While it's safe here since `expectedPath` is a hardcoded constant, the indirection is unnecessary.

Suggested fix

```diff
 async verifyMagicLink(page: Page, token: string) {
   const verifyUrl = `/api/auth/magic-link/verify?token=${encodeURIComponent(token)}&callbackURL=/`
   await page.goto(verifyUrl)
-  const expectedPath = '/' // from callbackURL in verifyUrl
-  await page.waitForURL(new RegExp(`^\\${expectedPath}$`), { timeout: 5000 })
+  await page.waitForURL('/', { timeout: 5000 })
 },
```
3-4: Consider reading `API_URL` from an environment variable.

Hardcoding `http://localhost:3001` works for local dev but will break if the API runs on a different port or host in CI.

apps/next/components/dashboard/chat-assistant.tsx (1)
109-122: Heavy `as` casts for tool parts reduce type safety.

Consider defining a narrow type guard (e.g., `isToolOutput(part)`) to replace the inline casts on lines 113-116. This improves readability and lets TypeScript validate the access.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@apps/fastify/src/routes/ai/chat.ts`:
- Around line 199-211: The streaming branch manually extracts
headers/status/body from result.toUIMessageStreamResponse() and sets them on
reply.raw; instead, call reply.send() with the Web Response object directly to
let Fastify handle headers, status, and ReadableStream body. In the block where
shouldStream is true, replace the manual header/status/body manipulation with:
const response = result.toUIMessageStreamResponse(); return
reply.send(response); (remove reply.raw.setHeader, reply.raw.statusCode, and the
as never cast), keeping references to streamText, baseOptions, smoothStream and
toUIMessageStreamResponse.
🧹 Nitpick comments (8)
apps/next/e2e/chat-assistant.spec.ts (2)
4-14: Serial mode shares browser context — `beforeEach` re-login is appropriate but adds latency.

With `mode: 'serial'`, all tests share a single `page` instance. The `beforeEach` performs a full login cycle for every test, which is correct for isolation but expensive. Since state (cookies/session) persists across serial tests, you could login once in `beforeAll` and just navigate to `/` in `beforeEach` — cutting per-test overhead significantly.

That said, the current approach is safe and correct.
10-13: Prefer accessible locators over `text=` selector for consistency.

Lines 10–13 use `page.locator('text=...')` while the rest of the file uses `getByRole`/`getByText`. Consider switching to `page.getByText('Dashboard')` for consistency with the Playwright best-practice recommendation of using accessible locators.

Suggested diff

```diff
-await expect(page.locator('text=Dashboard')).toBeVisible({ timeout: 10000 })
-await expect(page.locator(`text=Welcome back, ${authHelpers.TEST_EMAIL}`)).toBeVisible({
+await expect(page.getByText('Dashboard')).toBeVisible({ timeout: 10000 })
+await expect(page.getByText(`Welcome back, ${authHelpers.TEST_EMAIL}`)).toBeVisible({
   timeout: 5000,
 })
```

apps/next/components/dashboard/chat-assistant.tsx (3)
60-60: Use `findLast()` instead of copying and reversing the array.

`[...messages].reverse().find(...)` allocates a new array on every render. `Array.prototype.findLast()` is cleaner and avoids the copy.

Suggested diff

```diff
-const lastAssistantMessage = [...messages].reverse().find(m => m.role === 'assistant')
+const lastAssistantMessage = messages.findLast(m => m.role === 'assistant')
```
113-126: Repeated type assertions for tool-output parts are fragile.

The `(p as { state: string; output?: unknown })` cast appears twice and `(p as { output?: unknown })` also appears. Consider extracting a type guard or a local typed variable after the initial check to avoid repeated unsafe casts.

Suggested approach

```diff
 } else if (
   typeof p.type === 'string' &&
   p.type.startsWith('tool-') &&
-  'state' in p &&
-  (p as { state: string; output?: unknown }).state === 'output-available' &&
-  (p as { output?: unknown }).output != null
+  'state' in p && 'output' in p
 ) {
-  const output = (p as { output?: unknown }).output
+  const toolPart = p as { state: string; output: unknown }
+  if (toolPart.state !== 'output-available' || toolPart.output == null) return
+  const output = toolPart.output
   const str = typeof output === 'string' ? output : JSON.stringify(output)
```
164-178: Suggestions remain visible after the conversation has started.

The SUGGESTIONS bar is always rendered regardless of message count. If the intent is to guide new users, consider hiding it once `messages.length > 0` to reclaim vertical space in the chat. If this is intentional (persistent quick-actions), disregard.

apps/fastify/openapi/openapi.json (1)
90-93: The `parts` array still lacks `maxItems` and has unconstrained `items: {}`.

The static analysis tool correctly flags this: `parts` accepts arrays of any size containing any JSON value. While runtime validation exists in `resolveMessages`, the OpenAPI contract should still provide a reasonable bound to protect against oversized payloads at the gateway/documentation level.

Suggested improvement

```diff
 "parts": {
   "type": "array",
-  "items": {}
+  "maxItems": 100,
+  "items": {
+    "type": "object"
+  }
 }
```
64-71: `isUIMessage` doesn't verify `role` exists, unlike `isCoreMessage`.

The `isCoreMessage` guard checks for both `role` and `content`, but `isUIMessage` only checks for `parts`. While the TypeBox schema enforces `role` at the route boundary, this guard is less robust if reused outside the validated path. For symmetry and defensive correctness:

Suggested fix

```diff
 function isUIMessage(msg: unknown): msg is { role: string; parts: unknown[] } {
   return (
     typeof msg === 'object' &&
     msg !== null &&
+    'role' in msg &&
     'parts' in msg &&
     Array.isArray((msg as { parts?: unknown[] }).parts)
   )
 }
```
177-177: Nit: verbose type annotation can be simplified.

`Awaited<ReturnType<typeof resolveMessages>>` is just `ModelMessage[]`. The direct type is clearer:

```diff
-let messages: Awaited<ReturnType<typeof resolveMessages>>
+let messages: ModelMessage[]
```
```diff
 if (shouldStream) {
-  const result = streamText(baseOptions)
-  reply.header('Content-Type', 'text/event-stream')
-  reply.header('Cache-Control', 'no-cache')
-  reply.header('Connection', 'keep-alive')
-  const nodeStream = Readable.from(result.textStream)
-  return reply.send(nodeStream as never)
+  const result = streamText({
+    ...baseOptions,
+    experimental_transform: smoothStream({
+      delayInMs: 15,
+      chunking: 'word',
+    }),
+  })
+  const response = result.toUIMessageStreamResponse()
+  for (const [k, v] of response.headers) reply.raw.setHeader(k, v)
+  reply.raw.statusCode = response.status
+  return reply.send(response.body as never)
 }
```
🧩 Analysis chain
🏁 Script executed:
```shell
head -50 apps/fastify/src/routes/ai/chat.ts
```

Repository: blockmatic/basilic
Length of output: 1561
🏁 Script executed:
```shell
cat package.json | grep -A 2 -B 2 fastify
```

Repository: blockmatic/basilic
Length of output: 464
🏁 Script executed:
```shell
fd package.json | head -5
```

Repository: blockmatic/basilic
Length of output: 173
🏁 Script executed:
```shell
cat apps/fastify/package.json | grep -E '"fastify"|"ai"'
```

Repository: blockmatic/basilic
Length of output: 107
🏁 Script executed:
```shell
sed -n '199,215p' apps/fastify/src/routes/ai/chat.ts
```

Repository: blockmatic/basilic
Length of output: 636
🌐 Web query:
Fastify 5 reply.send() Web ReadableStream support
💡 Result:
Fastify v5 does support sending a Web/WHATWG ReadableStream via reply.send().
- In the v5 Reply reference, `ReadableStream` is explicitly supported: Fastify treats it like a Node stream, and the payload is considered pre-serialized (so it’s sent unmodified and won’t go through response validation/serialization). [1]
- Fastify v5 also supports sending a Fetch API `Response` object (e.g., `new Response(readableStream, { headers, status })`) via `reply.send()`, also treated as pre-serialized. [1]
Example (from the v5 docs):
```javascript
const fs = require('node:fs')
const { ReadableStream } = require('node:stream/web')
fastify.get('/streams', (req, reply) => {
  const stream = fs.createReadStream('some-file')
  reply.header('Content-Type', 'application/octet-stream')
  reply.send(ReadableStream.from(stream))
})
```

[1]
Caveat: ecosystem plugins may not fully support Web ReadableStreams. For example, @fastify/compress can error when the client requests compression (e.g. curl --compressed) and the payload is a ReadableStream. [2] In those cases, a common workaround is to convert to a Node stream (e.g. Readable.fromWeb(res.body)) before reply.send(), or disable compression for that route.
Sources: Fastify v5 Reply docs [1]; @fastify/compress issue about ReadableStream + compression [2].
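The workaround mentioned in the caveat can be sketched as follows. This is a minimal illustration using Node's built-in `Readable.fromWeb` (available since Node 17); the `Response` constructed here is just a stand-in for an upstream fetch result, not code from this PR:

```typescript
import { Readable } from 'node:stream'
import type { ReadableStream as WebReadableStream } from 'node:stream/web'

// Stand-in for an upstream fetch result whose body is a Web ReadableStream
// (global Response requires Node 18+).
const response = new Response('hello from upstream')

// Convert the Web stream to a Node stream before reply.send(), as suggested
// for compatibility with plugins like @fastify/compress.
const nodeStream = Readable.fromWeb(response.body as unknown as WebReadableStream)
```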
🌐 Web query:
toUIMessageStreamResponse() AI SDK return type
💡 Result:
In the Vercel AI SDK (ai package), toUIMessageStreamResponse() returns a standard Web Response object.
TypeScript signature:
```typescript
toUIMessageStreamResponse(
  options?: ResponseInit & UIMessageStreamOptions
): Response
```

[1]
🏁 Script executed:
```shell
rg -A 10 "toUIMessageStreamResponse" --type ts
```

Repository: blockmatic/basilic
Length of output: 808
🏁 Script executed:
```shell
rg -B 5 "reply.raw.setHeader" apps/fastify/src/routes/ai/chat.ts
```

Repository: blockmatic/basilic
Length of output: 276
🏁 Script executed:
```shell
rg "@fastify/compress" apps/fastify/
```

Repository: blockmatic/basilic
Length of output: 44
🌐 Web query:
Fastify 5 reply.send() Web Response object support
💡 Result:
Fastify v5 does support sending a WHATWG “Web Response” (Response) object via reply.send().
- `reply.send(response)` accepts a `Response` instance and will use it to set the payload, status code, and headers in one step. The payload inside `Response` is treated as pre-serialized (so Fastify sends it unmodified and skips response validation). [1]
- Caveat: when you send a `Response`, the status code and headers won’t be reflected in `reply.statusCode`/`reply.getHeaders()` the same way, because `Response` headers/status are read-only and this can be confusing in hooks like `onSend`. [1]
- Related: Fastify v5 also documents support for WHATWG `ReadableStream` via `reply.send()`. [1]
Example (from the docs, adapted):
```javascript
fastify.get('/file', async (req, reply) => {
  const { ReadableStream } = require('node:stream/web')
  const fs = require('node:fs')
  const readableStream = ReadableStream.from(fs.createReadStream('some-file'))
  const response = new Response(readableStream, {
    status: 200,
    headers: { 'content-type': 'application/octet-stream' }
  })
  return reply.send(response)
})
```

[1]
Sources: Fastify Reply reference (reply.send() → “Response”, “ReadableStream”). [1]
🏁 Script executed:
```shell
cat apps/fastify/src/routes/ai/chat.ts | grep -E "import.*fastify|FastifyPluginAsync" | head -5
```

Repository: blockmatic/basilic
Length of output: 242
🏁 Script executed:
```shell
rg "toUIMessageStreamResponse|reply.send" apps/fastify/src/routes --type ts -B 2 -A 2
```

Repository: blockmatic/basilic
Length of output: 916
Send the Web Response object directly instead of manually extracting headers, status, and body.
Fastify 5 natively supports sending Web Response objects, which automatically handles the status code, headers, and ReadableStream body in one call. The current approach of manually setting reply.raw.statusCode and reply.raw.setHeader() then calling reply.send() mixes low-level Node.js APIs with Fastify's reply methods, risking header conflicts and losing type safety (hence the as never cast).
Replace with:
```diff
 const result = streamText({
   ...baseOptions,
   experimental_transform: smoothStream({
     delayInMs: 15,
     chunking: 'word',
   }),
 })
 const response = result.toUIMessageStreamResponse()
-for (const [k, v] of response.headers) reply.raw.setHeader(k, v)
-reply.raw.statusCode = response.status
-return reply.send(response.body as never)
+return reply.send(response)
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```suggestion
if (shouldStream) {
  const result = streamText({
    ...baseOptions,
    experimental_transform: smoothStream({
      delayInMs: 15,
      chunking: 'word',
    }),
  })
  const response = result.toUIMessageStreamResponse()
  return reply.send(response)
}
```
🤖 Prompt for AI Agents
In `@apps/fastify/src/routes/ai/chat.ts` around lines 199 - 211, The streaming
branch manually extracts headers/status/body from
result.toUIMessageStreamResponse() and sets them on reply.raw; instead, call
reply.send() with the Web Response object directly to let Fastify handle
headers, status, and ReadableStream body. In the block where shouldStream is
true, replace the manual header/status/body manipulation with: const response =
result.toUIMessageStreamResponse(); return reply.send(response); (remove
reply.raw.setHeader, reply.raw.statusCode, and the as never cast), keeping
references to streamText, baseOptions, smoothStream and
toUIMessageStreamResponse.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@apps/next/e2e/chat-assistant.spec.ts`:
- Around line 51-55: Replace the racy point-in-time check using
errorEl.isVisible() with a racing wait between the error and success locators:
use sheet.getByTestId('chat-error').waitFor({state: 'visible'}) and the
assistant locator (assistantEl) waitFor({state: 'visible'}) in a Promise.race
(or equivalent) so whichever appears first determines the outcome; if the error
locator wins, read its textContent and throw a descriptive Error, otherwise
continue when assistantEl becomes visible. Apply this change for both
occurrences that use errorEl/assistantEl (the blocks around the existing
isVisible checks).
🧹 Nitpick comments (1)
apps/next/e2e/chat-assistant.spec.ts (1)
39-40: 90-second test timeout is generous — consider whether it's justified.

`test.setTimeout(90000)` is set for both chat tests. Given the 60s locator timeout already acts as the effective bound, this is fine, but if AI response times are typically much shorter, a tighter test timeout (e.g., 70s) would surface regressions faster in CI.

Also applies to: 62-63
```typescript
// Fail fast if API error is shown
const errorEl = sheet.getByTestId('chat-error')
if (await errorEl.isVisible()) {
  throw new Error(`Chat API error: ${await errorEl.textContent()}`)
}
```
The `isVisible()` "fail fast" check is racy and may not catch errors.

`errorEl.isVisible()` is a point-in-time snapshot, not a Playwright auto-retrying assertion. It runs immediately after the click/send — before the API call has had time to return an error. This means:

- If the error renders after this check runs, it's missed, and you wait the full 60s before the `assistantEl` assertion fails with a less informative message.
- It can never reliably "fail fast" unless the error is already in the DOM.
A more reliable approach is to race the two locators:
Proposed fix
```diff
-// Fail fast if API error is shown
-const errorEl = sheet.getByTestId('chat-error')
-if (await errorEl.isVisible()) {
-  throw new Error(`Chat API error: ${await errorEl.textContent()}`)
-}
-// Assistant must respond (auth is validated in beforeEach)
-const assistantEl = sheet.locator('[data-role="assistant"]')
-await expect(assistantEl).toBeVisible({ timeout: 60000 })
-await expect(assistantEl).not.toBeEmpty()
+// Race: either assistant responds or an API error appears
+const assistantEl = sheet.locator('[data-role="assistant"]')
+const errorEl = sheet.getByTestId('chat-error')
+const winner = await Promise.race([
+  assistantEl.waitFor({ state: 'visible', timeout: 60000 }).then(() => 'assistant' as const),
+  errorEl.waitFor({ state: 'visible', timeout: 60000 }).then(() => 'error' as const),
+])
+if (winner === 'error') {
+  throw new Error(`Chat API error: ${await errorEl.textContent()}`)
+}
+await expect(assistantEl).not.toBeEmpty()
```

Apply the same pattern in both tests (lines 51–59 and 77–85).
+ await expect(assistantEl).not.toBeEmpty()Apply the same pattern in both tests (lines 51–59 and 77–85).
Also applies to: 77-81
🤖 Prompt for AI Agents
In `@apps/next/e2e/chat-assistant.spec.ts` around lines 51 - 55, Replace the racy
point-in-time check using errorEl.isVisible() with a racing wait between the
error and success locators: use sheet.getByTestId('chat-error').waitFor({state:
'visible'}) and the assistant locator (assistantEl) waitFor({state: 'visible'})
in a Promise.race (or equivalent) so whichever appears first determines the
outcome; if the error locator wins, read its textContent and throw a descriptive
Error, otherwise continue when assistantEl becomes visible. Apply this change
for both occurrences that use errorEl/assistantEl (the blocks around the
existing isVisible checks).
Summary by CodeRabbit
New Features
Tests
Chores