feat(chat): json-render user info card #100
- Add @json-render/core and @json-render/react to apps/next
- Backend: getAccountInfo returns { __render, spec, summary } with flat spec
- Frontend: UserInfo catalog/registry, Renderer for getAccountInfo tool output
- E2E: strict assertions for user-info-card, chat-error
- Backend unit test: Who am I? returns 200
Caution: Review failed. The pull request is closed.

Walkthrough

Adds structured user-info rendering: the backend produces a user-info spec for chat tools, the frontend registers a UserInfo component and renders spec outputs via a JSON renderer, tests and E2E flows are updated, and related test/CI/config and dependency adjustments are applied.

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant Backend as Chat Backend
    participant Frontend as Chat Assistant
    participant Renderer as JSON Renderer
    participant Component as UserInfo Component
    User->>Backend: POST /ai/chat "Who am I?"
    Backend->>Backend: createAccountInfoTool() -> buildUserInfoSpec(user)
    Backend-->>Frontend: Response includes { __render: "user-info", spec, summary }
    Frontend->>Frontend: isUserInfoSpecOutput() detects spec
    Frontend->>Renderer: Render spec (StateProvider / VisibilityProvider)
    Renderer->>Component: Resolve via userInfoRegistry
    Component->>User: Display user info card (avatar, name, email, joinedAt)
```
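The `{ __render, spec, summary }` payload shown in the diagram can be sketched in plain TypeScript. This is a hypothetical illustration — the field names inside `spec` and the exact shape of the real `buildUserInfoSpec` may differ from the repo's implementation:

```typescript
type UserInfoSpec = {
  avatarUrl?: string
  name: string
  email: string
  joinedAt: string // ISO date string
}

type UserInfoToolOutput = {
  __render: 'user-info' // discriminator the frontend matches on
  spec: UserInfoSpec // flat spec consumed by the JSON renderer
  summary: string // plain-text fallback for non-rendering clients
}

// Hypothetical builder mirroring the diagram's buildUserInfoSpec(user).
function buildUserInfoSpec(user: {
  name: string
  email: string
  joinedAt: string
  avatarUrl?: string
}): UserInfoToolOutput {
  return {
    __render: 'user-info',
    spec: {
      avatarUrl: user.avatarUrl,
      name: user.name,
      email: user.email,
      joinedAt: user.joinedAt,
    },
    summary: `${user.name} <${user.email}>, joined ${user.joinedAt}`,
  }
}

const out = buildUserInfoSpec({
  name: 'Jane',
  email: 'jane@example.com',
  joinedAt: '2024-01-01',
})
console.log(out.summary) // → Jane <jane@example.com>, joined 2024-01-01
```

Keeping `summary` alongside `spec` lets clients that do not register the renderer still show a meaningful text reply.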
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: 1 passed | 2 failed (1 warning, 1 inconclusive)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
apps/fastify/src/routes/ai/chat.ts (1)
Lines 243-244: ⚠️ Potential issue | 🟠 Major

`generateText` without `stopWhen` returns empty text when the model calls a tool.

By default, `generateText` runs with `stopWhen: stepCountIs(1)` — a single model generation step. When the model calls a tool in that step, the generation ends and `text` is empty because no second step occurs to generate text after the tool executes. The non-streaming path at lines 243–244 hits this case when users ask "Who am I?" — the model calls `getAccountInfo`, but the route returns `{ text: '' }`.

The streaming path works around this by rendering tool outputs directly via UIMessageStream events, independent of `result.text`.

Fix by adding `stopWhen` with `stepCountIs()` to enable multi-step tool-calling:

🐛 Proposed fix

```diff
- import { generateText, ... } from 'ai'
+ import { generateText, stepCountIs, ... } from 'ai'

  const result = await generateText({
    ...baseOptions,
+   stopWhen: stepCountIs(5),
  })
  return reply.code(200).send({ text: result.text })
```

(Adjust the step count as needed for your use case. A higher step count allows more tool-calling loops before the model generates final text.)
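To see why a single-step run yields empty text, here is a toy loop in plain TypeScript — not the `ai` SDK; `modelStep` and `runAgent` are illustrative names — that mimics `stopWhen`-style step counting:

```typescript
type Step = { toolCall?: string; text: string }

// Illustrative stand-in for one model generation step: the first step
// "calls a tool" and produces no text; a later step produces text.
function modelStep(stepIndex: number): Step {
  return stepIndex === 0
    ? { toolCall: 'getAccountInfo', text: '' }
    : { text: 'You are Jane (jane@example.com).' }
}

// Mimics the generation loop: stop once maxSteps is reached,
// or as soon as a step produces text without calling a tool.
function runAgent(maxSteps: number): string {
  let text = ''
  for (let i = 0; i < maxSteps; i++) {
    const step = modelStep(i)
    text = step.text
    if (!step.toolCall) break // final text reached, stop looping
  }
  return text
}

console.log(JSON.stringify(runAgent(1))) // single step ends on the tool call: ""
console.log(JSON.stringify(runAgent(5))) // multi-step continues to the final text
```

With a budget of one step, the loop ends immediately after the tool call and the accumulated text is empty — the same failure mode as the non-streaming route above.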
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/fastify/src/routes/ai/chat.ts` around lines 243 - 244, The non-streaming branch calls generateText(baseOptions) which uses the default stopWhen (stepCountIs(1)), so when the model invokes a tool in the first step no subsequent generation happens and result.text is empty; update the call to generateText to pass an explicit stopWhen using stepCountIs(n) (e.g., stepCountIs(2) or higher) so the agent can perform tool-calls and then produce final text—modify the invocation around generateText/baseOptions to merge in stopWhen: stepCountIs(...) before sending reply.
🧹 Nitpick comments (2)
apps/fastify/src/routes/ai/chat.test.ts (1)
Lines 122-135: Consider adding a non-empty text assertion for consistency.

All other non-streaming 200 tests (e.g., lines 77, 96) include `expect(data.text.length).toBeGreaterThan(0)`, but this one only checks `toBeTypeOf('string')`. The omission may be intentional (tool-call responses can return empty text in a single-step `generateText` run — see the related concern in `chat.ts`), but the inconsistency is worth noting. If the intent is to verify the LLM produced a meaningful reply after tool execution, the assertion should be tightened.

♻️ Suggested assertion

```diff
  expect(data.text).toBeTypeOf('string')
+ expect(data.text.length).toBeGreaterThan(0)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/fastify/src/routes/ai/chat.test.ts` around lines 122 - 135, The test "should return 200 when user asks who am I (getAccountInfo tool)" currently only asserts that data.text is a string; add a non-empty check to match other non-streaming 200 tests by asserting expect(data.text.length).toBeGreaterThan(0) after the ChatResponseSchema.parse check (or, if empty text is intentionally allowed for tool-only responses, add a clarifying comment explaining that exception next to the test and keep the existing assertion). Reference: the test case name, ChatResponseSchema.parse, and the route POST '/ai/chat' / generateText behavior in chat.ts.

apps/next/components/dashboard/chat-assistant.tsx (1)
Lines 138-157: Redundant `output.spec` guard after `isUserInfoSpecOutput`.

`isUserInfoSpecOutput` already verifies that `output.spec` is a non-null object, so the additional `&& output.spec` on line 141 is dead code and TypeScript should narrow the type correctly on its own.

♻️ Suggested simplification

```diff
  if (
    toolName === 'getAccountInfo' &&
-   isUserInfoSpecOutput(output) &&
-   output.spec
+   isUserInfoSpecOutput(output)
  ) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/next/components/dashboard/chat-assistant.tsx` around lines 138 - 157, Remove the redundant truthy check "&& output.spec" in the conditional that already calls isUserInfoSpecOutput(output); update the if to just check toolName === 'getAccountInfo' && isUserInfoSpecOutput(output) so TypeScript's type guard can narrow output and the block using output.spec (Renderer with userInfoRegistry inside StateProvider/VisibilityProvider) remains unchanged; ensure hasContent and elements.push logic is left intact and only the extra "&& output.spec" is removed.
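For context, a user-defined type guard of roughly this shape is what lets TypeScript narrow `output` without the extra truthy check. This is a hypothetical sketch — the repo's actual `isUserInfoSpecOutput` may differ in detail:

```typescript
type UserInfoSpecOutput = {
  __render: 'user-info'
  spec: Record<string, unknown>
  summary: string
}

// Hypothetical guard: it checks every field it promises, including
// that `spec` is a non-null object, so callers need no extra
// `&& output.spec` after a successful guard.
function isUserInfoSpecOutput(output: unknown): output is UserInfoSpecOutput {
  if (typeof output !== 'object' || output === null) return false
  const o = output as Record<string, unknown>
  return (
    o.__render === 'user-info' &&
    typeof o.spec === 'object' &&
    o.spec !== null &&
    typeof o.summary === 'string'
  )
}

const sample: unknown = {
  __render: 'user-info',
  spec: { name: 'Jane' },
  summary: 'Jane',
}
if (isUserInfoSpecOutput(sample)) {
  // `sample.spec` is already narrowed to a non-null object here.
  console.log(Object.keys(sample.spec))
}
```

Because the guard's return type is `output is UserInfoSpecOutput`, the compiler treats `output.spec` as defined inside the `if` block, which is exactly why the extra `&& output.spec` is dead code.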
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/next/e2e/chat-assistant.spec.ts`:
- Around line 22-26: The negative assertion on sheet.getByTestId('chat-error')
runs too early; move the await
expect(sheet.getByTestId('chat-error')).not.toBeVisible() so it executes after
the assistant appears (await
expect(sheet.locator('[data-role="assistant"]')).toBeVisible(...)) and after the
success check (await
expect(sheet.getByTestId('user-info-card')).toBeVisible(...)) to ensure any late
errors are caught; update the test ordering in chat-assistant.spec.ts
accordingly so the error check runs after those visible assertions.
---
Outside diff comments:
In `@apps/fastify/src/routes/ai/chat.ts`:
- Around line 243-244: The non-streaming branch calls generateText(baseOptions)
which uses the default stopWhen (stepCountIs(1)), so when the model invokes a
tool in the first step no subsequent generation happens and result.text is
empty; update the call to generateText to pass an explicit stopWhen using
stepCountIs(n) (e.g., stepCountIs(2) or higher) so the agent can perform
tool-calls and then produce final text—modify the invocation around
generateText/baseOptions to merge in stopWhen: stepCountIs(...) before sending
reply.
---
Nitpick comments:
In `@apps/fastify/src/routes/ai/chat.test.ts`:
- Around line 122-135: The test "should return 200 when user asks who am I
(getAccountInfo tool)" currently only asserts that data.text is a string; add a
non-empty check to match other non-streaming 200 tests by asserting
expect(data.text.length).toBeGreaterThan(0) after the ChatResponseSchema.parse
check (or, if empty text is intentionally allowed for tool-only responses, add a
clarify comment explaining that exception next to the test and keep the existing
assertion). Reference: the test case name, ChatResponseSchema.parse, and the
route POST '/ai/chat' / generateText behavior in chat.ts.
In `@apps/next/components/dashboard/chat-assistant.tsx`:
- Around line 138-157: Remove the redundant truthy check "&& output.spec" in the
conditional that already calls isUserInfoSpecOutput(output); update the if to
just check toolName === 'getAccountInfo' && isUserInfoSpecOutput(output) so
TypeScript's type guard can narrow output and the block using output.spec
(Renderer with userInfoRegistry inside StateProvider/VisibilityProvider) remains
unchanged; ensure hasContent and elements.push logic is left intact and only the
extra "&& output.spec" is removed.
```ts
await expect(sheet.getByTestId('chat-error')).not.toBeVisible()
await expect(sheet.locator('[data-role="assistant"]')).toBeVisible({
  timeout: 60000,
})
const errorEl = sheet.getByTestId('chat-error')
if (await errorEl.isVisible()) {
  throw new Error(`Chat API error: ${await errorEl.textContent()}`)
}

const assistantEl = sheet.locator('[data-role="assistant"]')
await expect(assistantEl).toBeVisible({ timeout: 60000 })
await expect(assistantEl).not.toBeEmpty()
await expect(sheet.getByTestId('user-info-card')).toBeVisible({ timeout: 60000 })
```
chat-error negative assertion fires before the AI response arrives — fix ordering before un-skipping.
The not.toBeVisible() check on line 22 runs immediately after the user message appears (line 21), before the AI has had time to call the tool, receive a result, or encounter an error. An error that materialises 5–30 seconds later will never be caught here — the assertion silently passes.
Move the error check to after the success assertions, or at minimum after the assistant message is visible:
🛠️ Suggested order

```diff
  await expect(
    sheet.locator('[data-role="user"]').filter({ hasText: 'Who am I?' }),
  ).toBeVisible({ timeout: 10000 })
- await expect(sheet.getByTestId('chat-error')).not.toBeVisible()
  await expect(sheet.locator('[data-role="assistant"]')).toBeVisible({
    timeout: 60000,
  })
  await expect(sheet.getByTestId('user-info-card')).toBeVisible({ timeout: 60000 })
+ await expect(sheet.getByTestId('chat-error')).not.toBeVisible()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/next/e2e/chat-assistant.spec.ts` around lines 22 - 26, The negative
assertion on sheet.getByTestId('chat-error') runs too early; move the await
expect(sheet.getByTestId('chat-error')).not.toBeVisible() so it executes after
the assistant appears (await
expect(sheet.locator('[data-role="assistant"]')).toBeVisible(...)) and after the
success check (await
expect(sheet.getByTestId('user-info-card')).toBeVisible(...)) to ensure any late
errors are caught; update the test ordering in chat-assistant.spec.ts
accordingly so the error check runs after those visible assertions.
- Use INVALID_TOKEN/FAILED_VERIFY codes for magic link redirect
- Add FAILED_VERIFY mapping on login page
- Disable wallet-solana and wallet-metamask E2E (require extension setup)
- Ignore tar GHSA in osv-scanner (override in package.json, @vercel/fun transitive)
Summary by CodeRabbit
New Features
Tests
UX / Error Messages
Dependencies