
Conversation

@laciferin2024
Contributor

@laciferin2024 laciferin2024 commented Sep 16, 2025

PR Checklist

Please read and check all that apply.

Changesets

  • This PR includes a Changeset under .changeset/ describing the change (patch/minor/major) with a clear, user-focused summary
  • OR this PR is docs/tests-only and I added the skip-changeset label

Quality

  • UI builds locally: bun run build (Vite)
  • E2E or unit tests added/updated where applicable (playwright, vitest if used)
  • No breaking changes to public interfaces without a major bump

Notes

  • Add a changeset via: bun run changeset
  • Policy and examples: see aidocs/changesets.md

Summary by CodeRabbit

  • New Features

    • "Stream (beta)" toggle for live token-by-token assistant output and a Receipt modal showing per-run costs, tokens, headers, and export options.
    • Auto-run option to automatically run the initial prompt.
    • Model selector now populates dynamically from the service so available models update at runtime.
  • Bug Fixes / Reliability

    • Abort-able streaming with improved error handling; empty placeholder replies removed if no content arrives.
    • Non-streaming flow remains supported.
  • Documentation

    • Updated sample and marketing examples to use new model identifiers.
  • Chores

    • Added local settings file declaring allowed build/typecheck commands.

@vercel

vercel bot commented Sep 16, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

Project: console | Deployment: Ready | Preview: Ready | Comments: Comment | Updated (UTC): Oct 16, 2025 9:44pm

@coderabbitai
Contributor

coderabbitai bot commented Sep 16, 2025

Walkthrough

Replaces the static model list with runtime model discovery and caching, migrates streaming and completion calls from Chat Completions to the Responses API, adds a user-facing "Stream (beta)" toggle and an autorun prop, and implements fetch/SSE streaming with incremental token appends and rich per-run receipt metadata.
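As a rough illustration of the fetch/SSE pattern this walkthrough describes, here is a minimal sketch in TypeScript. It assumes an OpenAI-compatible /responses endpoint that emits data: frames containing response.output_text.delta events; the function name, parameters, and onDelta callback are illustrative rather than the PR's actual code.

```ts
// Minimal SSE streaming sketch (assumed endpoint, event shape, and parameter names).
export async function streamResponses(
  baseUrl: string,
  apiKey: string,
  payload: Record<string, unknown>,
  onDelta: (text: string) => void,
  signal?: AbortSignal
): Promise<void> {
  const res = await fetch(`${baseUrl}/responses`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Accept: "text/event-stream",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({ ...payload, stream: true }),
    signal,
  })
  if (!res.ok || !res.body) throw new Error(`HTTP ${res.status}`)

  const reader = res.body.getReader()
  const decoder = new TextDecoder()
  let buffer = ""

  // Parse one SSE block: each "data:" line carries a JSON event or the [DONE] sentinel.
  const parseBlock = (block: string): "DONE" | undefined => {
    for (const line of block.split("\n")) {
      const trimmed = line.trim()
      if (!trimmed.startsWith("data:")) continue
      const data = trimmed.slice(5).trim()
      if (!data) continue
      if (data === "[DONE]") return "DONE"
      try {
        const event = JSON.parse(data) as { type?: string; delta?: string }
        if (event.type === "response.output_text.delta" && typeof event.delta === "string") {
          onDelta(event.delta) // incremental token append
        }
      } catch {
        // ignore malformed frames
      }
    }
    return undefined
  }

  while (true) {
    const { done, value } = await reader.read()
    if (done) break
    buffer += decoder.decode(value, { stream: true })
    let idx: number
    // Complete SSE events are separated by a blank line.
    while ((idx = buffer.indexOf("\n\n")) !== -1) {
      const block = buffer.slice(0, idx)
      buffer = buffer.slice(idx + 2)
      if (parseBlock(block) === "DONE") return
    }
  }
  // Flush trailing bytes in case the final frame was not newline-terminated.
  buffer += decoder.decode()
  if (buffer.trim()) parseBlock(buffer)
}
```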

Changes

  • Chat playground (streaming + autorun), src/components/ChatPlayground.tsx: Adds enableStreaming toggle and autorun?: boolean prop; implements fetch/SSE streaming to ${OPENAI_URL}/responses with AbortController, creates a placeholder assistant message, appends streamed response.output_text.delta tokens, parses cost/tx/refund/callsRemaining headers into lastRun, and preserves the non-streaming path using client.responses.create.
  • Code playground (Responses API + client change), src/components/CodePlayground.tsx: Replaces createOpenAI/streamText with default OpenAI instantiation (dangerouslyAllowBrowser: true), calls openai.responses.create({ ..., stream }), iterates response events (response.output_text.delta) for streaming, and updates payload/response parsing to the Responses API shape.
  • Dedicated streaming playground, src/components/OpenAIStreamingPlayground.tsx: Switches streaming from chat completions to the Responses API, adapts the payload (instructions/input) and event loop to handle response.output_text.delta events and append deltas; preserves the overall error/finalization flow.
  • Model discovery & defaults, src/config/models.ts: Removes static SUPPORTED_MODELS and isSupportedModel; introduces getAvailableModels(auth?) with a 10-minute in-memory cache, invalidateModelsCache(), and runtime model parsing (extractModelId); updates DEFAULT_MODEL/CODING_MODEL to new bare IDs (e.g., "gpt-4.1-mini"). A minimal cache sketch follows this list.
  • Chat completion model loading, src/components/ChatCompletion.tsx: Replaces the direct /models fetch and filtering with getAvailableModels(auth), initializes availableModels to [DEFAULT_MODEL], and updates model selection logic and error logging to use centralized model discovery.
  • Marketing / examples update, src/components/MarketingContent.tsx and src/pages/Index.tsx: Updates sample code model strings from unreal::mixtral-8x22b-instruct to mixtral-8x22b-instruct in multiple examples.
  • Local permissions config, .claude/settings.local.json: Adds a local Claude settings file with an allow list for specific Bash commands (build/typecheck) and empty deny/ask lists.
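A minimal sketch of the 10-minute in-memory cache pattern described for getAvailableModels above, assuming an OpenAI-compatible /models endpoint that returns { data: [{ id: string }] }; the baseUrl parameter and the bare-ID fallback value are assumptions for illustration, not the file's exact implementation.

```ts
// Sketch of runtime model discovery with a 10-minute in-memory cache
// (assumed /models response shape and fallback value).
type ModelsCache = { models: string[]; expiresAt: number }

const TEN_MIN = 10 * 60 * 1000
let modelsCache: ModelsCache | null = null

export async function getAvailableModels(baseUrl: string, auth?: string): Promise<string[]> {
  const now = Date.now()
  if (modelsCache && modelsCache.expiresAt > now) return modelsCache.models

  try {
    const res = await fetch(`${baseUrl}/models`, {
      headers: auth ? { Authorization: `Bearer ${auth}` } : undefined,
    })
    const json = (await res.json()) as { data?: Array<string | { id?: string }> }
    const models = (json.data ?? [])
      .map((m) => (typeof m === "string" ? m : m.id))
      .filter((id): id is string => typeof id === "string")
    if (models.length > 0) {
      modelsCache = { models, expiresAt: now + TEN_MIN }
      return models
    }
  } catch (e) {
    console.warn("getAvailableModels: fetch failed, falling back to default", e)
  }
  return ["gpt-4.1-mini"] // assumed default fallback, mirroring DEFAULT_MODEL above
}

export function invalidateModelsCache(): void {
  modelsCache = null
}
```

Callers that change credentials can call invalidateModelsCache() to force a refetch on the next request.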

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor User
  participant UI as Chat UI
  participant Logic as ChatPlayground Logic
  participant API as OPENAI_URL/responses

  User->>UI: Submit prompt
  UI->>Logic: onSend(message, enableStreaming)
  alt Streaming enabled (SSE fetch)
    Logic->>API: POST (stream: true) via fetch (AbortController)
    API-->>Logic: SSE events (data lines / event objects)
    loop per event
      alt event.type == response.output_text.delta
        Logic->>UI: append event.delta to placeholder assistant message
      else other events
        Note right of Logic: ignore / handle meta events
      end
    end
    API-->>Logic: stream end + headers
    Logic->>Logic: parse headers (price, tx, refund, callsRemaining) -> update lastRun
    Logic->>UI: finalize assistant message and receipt
  else Non-streaming (Responses API)
    Logic->>API: client.responses.create({stream:false, instructions, input})
    API-->>Logic: full response (output_text) + headers
    Logic->>UI: add assistant message from output_text
    Logic->>Logic: parse headers -> update lastRun
  end
  opt Cancel
    User->>UI: Cancel
    UI->>Logic: abort()
    Logic--x API: AbortController signal
    Logic->>UI: cleanup placeholder if no chunks arrived
  end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

Poem

I twitch my whiskers at each stream,
Tokens tumble, stitch a gleam.
Headers count my carrot store,
Abort or run — I hop for more.
Switch on the stream, and code will dream. 🐇✨

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
  • Description Check (⚠️ Warning). Explanation: The PR description is just the repository checklist template left unfilled and does not summarize the actual code changes. It omits information required by the template, such as a Changeset under .changeset/, build/test confirmations, and explicit notes about breaking or public API changes, even though the diff shows exported API alterations (for example, getAvailableModels added, SUPPORTED_MODELS removed, and DEFAULT_MODEL/CODING_MODEL changed). Because of these omissions the description is insufficient for review. Resolution: Replace the template with a filled-in PR description that summarizes the change, lists key affected files/components, and explicitly calls out public API or breaking changes and any migration steps; add a Changeset under .changeset/ (or add the skip-changeset label); run and report the result of bun run build and any tests (or add tests); and mark or explain the Quality checklist items so reviewers can validate build/test status and compatibility. After these updates the description can be re-evaluated and should meet the repository requirements.
  • Docstring Coverage (⚠️ Warning). Docstring coverage is 0.00%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (1 passed)
  • Title Check (✅ Passed). The title "feat: stream mode ai playground" succinctly identifies the primary change in this PR (adding streaming support to the AI playground) and aligns with the edits described in the diff. It is concise, focused, and not misleading about the main intent of the changes, and clear enough for a teammate scanning history to understand the primary change.
✨ Finishing touches
  • 📝 Generate Docstrings
🧪 Generate unit tests
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch feat/stream-openai-playground

Tip

👮 Agentic pre-merge checks are now available in preview!

Pro plan users can now enable pre-merge checks in their settings to enforce checklists before merging PRs.

  • Built-in checks – Quickly apply ready-made checks to enforce title conventions, require pull request descriptions that follow templates, validate linked issues for compliance, and more.
  • Custom agentic checks – Define your own rules using CodeRabbit’s advanced agentic capabilities to enforce organization-specific policies and workflows. For example, you can instruct CodeRabbit’s agent to verify that API documentation is updated whenever API schema files are modified in a PR. Note: Up to 5 custom checks are currently allowed during the preview period. Pricing for this feature will be announced in a few weeks.

Please see the documentation for more information.

Example:

reviews:
  pre_merge_checks:
    custom_checks:
      - name: "Undocumented Breaking Changes"
        mode: "warning"
        instructions: |
          Pass/fail criteria: All breaking changes to public APIs, CLI flags, environment variables, configuration keys, database schemas, or HTTP/GraphQL endpoints must be documented in the "Breaking Change" section of the PR description and in CHANGELOG.md. Exclude purely internal or private changes (e.g., code not exported from package entry points or explicitly marked as internal).

Please share your feedback with us on this Discord post.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@laciferin2024 laciferin2024 changed the title from "stream: mode ai playgrond" to "feat: stream mode ai playground" on Sep 16, 2025
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (7)
src/components/ChatPlayground.tsx (7)

91-91: Persist the streaming preference across reloads

Remember the user’s choice for better UX.

Apply:

-  const [enableStreaming, setEnableStreaming] = useState(false)
+  const [enableStreaming, setEnableStreaming] = useState<boolean>(() => {
+    try { return localStorage.getItem("enableStreaming") === "1" } catch { return false }
+  })
+  useEffect(() => {
+    try { localStorage.setItem("enableStreaming", enableStreaming ? "1" : "0") } catch {}
+  }, [enableStreaming])

703-714: Add explicit SSE Accept header (improves proxy compatibility)

Some gateways require it; harmless elsewhere.

Apply:

-          const headers: Record<string, string> = {
-            "Content-Type": "application/json",
-          }
+          const headers: Record<string, string> = {
+            "Content-Type": "application/json",
+            "Accept": "text/event-stream",
+          }

643-662: Target placeholder by id, not index (avoids stale index bugs)

Indices can drift; ids are stable and also simplify cleanup.

Apply:

-        const appendChunk = (assistantIndex: number, chunk: string) => {
+        const appendChunk = (assistantId: string, chunk: string) => {
           if (!chunk) return
           hasStreamed = true
           setMessages((prev) => {
-            const next = [...prev]
-            const current = next[assistantIndex]
-            if (!current || current.role !== "assistant") return prev
-            const existing = (current as unknown as { parts?: TextPart[] }).parts || []
-            const existingText = existing
-              .filter((p): p is TextPart => Boolean(p) && p.type === "text")
-              .map((p) => p.text ?? "")
-              .join("")
-            const updated: UIMessage = {
-              ...current,
-              parts: [{ type: "text", text: existingText + chunk }],
-            }
-            next[assistantIndex] = updated
-            return next
+            return prev.map((m) => {
+              if (m.id !== assistantId || m.role !== "assistant") return m
+              const existingText = ((m as unknown as { parts?: TextPart[] }).parts ?? [])
+                .filter((p): p is TextPart => Boolean(p) && p.type === "text")
+                .map((p) => p.text ?? "")
+                .join("")
+              return { ...m, parts: [{ type: "text", text: existingText + chunk }] }
+            })
           })
         }
...
-          const assistantIndex = history.length
-          setMessages((prev) => [
-            ...history,
-            { id: makeId(), role: "assistant", parts: [{ type: "text", text: "" }] },
-          ])
+          const placeholderId = makeId()
+          setMessages((prev) => [
+            ...prev,
+            { id: placeholderId, role: "assistant", parts: [{ type: "text", text: "" }] },
+          ])
...
-                if (plain) appendChunk(assistantIndex, plain)
+                if (plain) appendChunk(placeholderId, plain)

Also applies to: 696-702, 789-789


780-793: Improve SSE parsing: fallback field and error events

Some upstreams send choices[0].text or {error:{message}} in data frames.

Apply:

-                // OpenAI chat.completions: choices[0].delta.content
-                const delta = json?.choices?.[0]?.delta?.content
-                const plain = typeof delta === "string" ? delta : ""
+                // OpenAI chat.completions: choices[0].delta.content
+                const delta = json?.choices?.[0]?.delta?.content
+                const altText = json?.choices?.[0]?.text
+                const plain =
+                  typeof delta === "string" && delta.length > 0
+                    ? delta
+                    : typeof altText === "string"
+                    ? altText
+                    : ""
+                if ((json as any)?.error?.message) {
+                  throw new Error(String((json as any).error.message))
+                }

849-855: Clean up empty placeholder on user abort as well

Matches your error path behavior; otherwise an empty assistant bubble can linger.

Apply:

           if (
             name === "AbortError" ||
             /abort(ed)?/i.test(msgText)
           ) {
             // User-initiated stop
+            if (!hasStreamed) {
+              setMessages((prev) => {
+                const next = [...prev]
+                const last = next[next.length - 1] as UIMessage | undefined
+                const isEmptyAssistant =
+                  last && last.role === "assistant" && !getTextFromMessage(last as UIMessage)
+                if (isEmptyAssistant) next.pop()
+                return next
+              })
+            }
             setIsStreaming(false)
             return
           }

709-713: (Optional) Ask backend for usage in-stream

If supported by your proxy, include usage in the final SSE frame to populate lastRun.usage without a second call.

Apply:

             body: JSON.stringify({
               model,
               messages: openaiMessages,
               stream: true,
+              stream_options: { include_usage: true }
             }),

1365-1382: Disable the toggle while streaming to avoid mid-flight mode changes

Prevents confusing state during an active request.

Apply:

                 <Switch
                   checked={enableStreaming}
                   onCheckedChange={(c) => setEnableStreaming(Boolean(c))}
                   aria-label="Toggle streaming"
+                  disabled={isStreaming}
                 />
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3ee583a and fe713cc.

📒 Files selected for processing (1)
  • src/components/ChatPlayground.tsx (5 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
src/components/ChatPlayground.tsx (3)
src/lib/api.ts (1)
  • getCurrentChainId (469-485)
src/config/unreal.ts (1)
  • OPENAI_URL (8-9)
src/components/ui/switch.tsx (1)
  • Switch (27-27)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: E2E Tests
🔇 Additional comments (2)
src/components/ChatPlayground.tsx (2)

61-61: LGTM: Switch import

Import looks correct and consistent with your UI kit.


1106-1106: LGTM: hook deps include enableStreaming

Prevents stale captures when toggling the mode.

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Signed-off-by: Hiro <laciferin2024@protonmail.com>
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (1)
src/components/ChatPlayground.tsx (1)

799-826: Resolved: break after [DONE] and flush trailing decoder bytes.

This addresses the prior hang and decoder flush issue from the earlier review comment. Nicely done.

🧹 Nitpick comments (5)
src/components/ChatPlayground.tsx (5)

643-662: Avoid O(n) scans per streamed chunk.

appendChunk rebuilds text by filtering/joining all parts on every chunk. Since we only keep a single text part, directly read/append to that part to cut work on long streams.

Apply:

-            const existing = (current as unknown as { parts?: TextPart[] }).parts || []
-            const existingText = existing
-              .filter((p): p is TextPart => Boolean(p) && p.type === "text")
-              .map((p) => p.text ?? "")
-              .join("")
+            const parts = (current as unknown as { parts?: TextPart[] }).parts || []
+            const head = parts[0]
+            const existingText =
+              head && head.type === "text" && typeof head.text === "string"
+                ? head.text
+                : ""
             const updated: UIMessage = {
               ...current,
               parts: [{ type: "text", text: existingText + chunk }],
             }

771-795: SSE: Handle multi-line data: frames (optional).

Some SSE emitters send multiple data: lines per event that must be concatenated with \n. Current logic parses each line independently. Join data: lines within a block before JSON.parse for extra robustness.

-            const lines = block.split("\n")
-            for (const line of lines) {
-              const trimmed = line.trim()
-              if (!trimmed.startsWith("data:")) continue
-              const data = trimmed.slice(5).trim()
-              if (!data) continue
-              if (data === "[DONE]") return "DONE" as const
-              try {
-                const json = JSON.parse(data) as {
+            const lines = block.split("\n")
+            const dataLines = lines
+              .map((l) => l.trim())
+              .filter((l) => l.startsWith("data:"))
+              .map((l) => l.slice(5).trim())
+            if (dataLines.length === 0) return
+            if (dataLines.length === 1 && dataLines[0] === "[DONE]") return "DONE" as const
+            try {
+              const json = JSON.parse(dataLines.join("\n")) as {
                   id?: string
                   model?: string
                   choices?: Array<{ delta?: { content?: string }; text?: string }>
-                }
-                if (json?.model && !modelSeen) modelSeen = json.model
-                // OpenAI chat.completions: choices[0].delta.content
-                const delta = json?.choices?.[0]?.delta?.content
-                const plain = typeof delta === "string" ? delta : ""
-                if (plain) appendChunk(assistantIndex, plain)
-              } catch (_e) {
-                // ignore malformed chunks
-              }
-            }
+              }
+              if (json?.model && !modelSeen) modelSeen = json.model
+              const delta = json?.choices?.[0]?.delta?.content
+              const plain = typeof delta === "string" ? delta : ""
+              if (plain) appendChunk(assistantIndex, plain)
+            } catch (_e) {
+              // ignore malformed chunks
+            }

852-854: Remove empty placeholder when stream ends with zero tokens.

If the server finishes without emitting any content, we keep an empty assistant bubble. Drop it before returning.

-          setIsStreaming(false)
-          return
+          if (!hasStreamed) {
+            setMessages((prev) => {
+              const next = [...prev]
+              const last = next[next.length - 1] as UIMessage | undefined
+              const isEmptyAssistant =
+                last && last.role === "assistant" && !getTextFromMessage(last as UIMessage)
+              if (isEmptyAssistant) next.pop()
+              return next
+            })
+          }
+          setIsStreaming(false)
+          return

855-865: On user abort, also remove an empty placeholder.

Aborting before the first chunk leaves a blank assistant message. Apply the same cleanup in the abort branch.

           if (
             name === "AbortError" ||
             /abort(ed)?/i.test(msgText)
           ) {
-            // User-initiated stop
+            // User-initiated stop; remove empty placeholder if nothing streamed
+            if (!hasStreamed) {
+              setMessages((prev) => {
+                const next = [...prev]
+                const last = next[next.length - 1] as UIMessage | undefined
+                const isEmptyAssistant =
+                  last && last.role === "assistant" && !getTextFromMessage(last as UIMessage)
+                if (isEmptyAssistant) next.pop()
+                return next
+              })
+            }
             setIsStreaming(false)
             return
           }

1375-1392: Disable the toggle while a stream is active.

Prevents mid-flight UX confusion (toggle suggests mode change applies immediately). Minor UX polish.

-                <Switch
+                <Switch
                   checked={enableStreaming}
                   onCheckedChange={(c) => setEnableStreaming(Boolean(c))}
                   aria-label="Toggle streaming"
+                  disabled={isStreaming}
                 />
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fe713cc and fa199da.

📒 Files selected for processing (1)
  • src/components/ChatPlayground.tsx (5 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
src/components/ChatPlayground.tsx (2)
src/lib/api.ts (1)
  • getCurrentChainId (469-485)
src/config/unreal.ts (1)
  • OPENAI_URL (8-9)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: E2E Tests
🔇 Additional comments (4)
src/components/ChatPlayground.tsx (4)

61-61: Import for Switch looks correct.

Shadcn Switch import path and usage elsewhere align. No issues.


91-91: Gated beta flag added cleanly.

Local state enableStreaming is scoped and memoized via deps later. Good.


841-850: Explorer URL fallback when header URL is missing.

If the server omits *-tx-url headers but provides hashes (common), derive URLs via getExplorerTxUrl using the chain id.

           setLastRun({
@@
-            priceTx: { hash: priceTxHash, url: priceTxUrl },
-            costTx: { hash: costTxHash, url: costTxUrl },
+            priceTx: {
+              hash: priceTxHash,
+              url: priceTxUrl || getExplorerTxUrl(chainIdParsed ?? chainId, priceTxHash),
+            },
+            costTx: {
+              hash: costTxHash,
+              url: costTxUrl || getExplorerTxUrl(chainIdParsed ?? chainId, costTxHash),
+            },
@@
-            refund: {
-              amount: refundAmount,
-              tx: { hash: refundTxHash, url: refundTxUrl },
-            },
+            refund: {
+              amount: refundAmount,
+              tx: {
+                hash: refundTxHash,
+                url: refundTxUrl || getExplorerTxUrl(chainIdParsed ?? chainId, refundTxHash),
+              },
+            },

1116-1116: Dependency array includes the new gate.

enableStreaming is included; memoization will reflect mode toggles. Good.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/components/ChatPlayground.tsx (1)

895-905: Use Responses API response type for completion (you're calling client.responses.create)

completion is declared as OpenAI.Chat.ChatCompletion but is assigned from client.responses.create(...).withResponse().data — update the type to the Responses API return type.

Location: src/components/ChatPlayground.tsx:897

-        let completion: OpenAI.Chat.ChatCompletion | null = null
+        let completion: OpenAI.Responses.Response | null = null

If your SDK lacks OpenAI.Responses.Response, use:

-        let completion: OpenAI.Chat.ChatCompletion | null = null
+        let completion: Awaited<ReturnType<typeof client.responses.create>> | null = null
🧹 Nitpick comments (14)
.claude/settings.local.json (1)

3-10: Add Bun commands to allowlist (project uses Bun).

Your checklist and scripts reference Bun, but this allowlist only permits npm/npx. Add Bun equivalents to avoid friction.

Apply:

   "permissions": {
     "allow": [
       "Bash(npm run build:*)",
       "Bash(npm run typecheck:*)",
       "Bash(npm run type-check:*)",
-      "Bash(npx tsc:*)"
+      "Bash(npx tsc:*)",
+      "Bash(bun run build:*)",
+      "Bash(bun run typecheck:*)",
+      "Bash(bun run type-check:*)",
+      "Bash(bunx tsc:*)"
     ],
src/components/MarketingContent.tsx (1)

16-19: Docs consistency: update examples to Responses API (project migrated).

UI/playgrounds now use responses; these examples still show Chat Completions. Align to avoid confusion.

Apply:

-    curl: `curl -X POST "https://openai.unreal.art/v1/chat/completions" \
+    curl: `curl -X POST "https://openai.unreal.art/v1/responses" \
   -H "Authorization: Bearer your-api-key-here" \
   -H "Content-Type: application/json" \
   -d '{
-    "model": "mixtral-8x22b-instruct",
-    "messages": [{"role": "user", "content": "Hello world!"}]
+    "model": "mixtral-8x22b-instruct",
+    "instructions": "You are a helpful assistant.",
+    "input": "Hello world!"
   }'`,
-response = client.chat.completions.create(
-    model="mixtral-8x22b-instruct",
-    messages=[{"role": "user", "content": "Hello world!"}]
-)
-
-print(response.choices[0].message.content)`,
+response = client.responses.create(
+    model="mixtral-8x22b-instruct",
+    instructions="You are a helpful assistant.",
+    input="Hello world!"
+)
+
+print(response.output_text)`,
-const response = await client.chat.completions.create({
-  model: 'mixtral-8x22b-instruct',
-  messages: [{ role: 'user', content: 'Hello world!' }]
-});
-
-console.log(response.choices[0].message.content);`,
+const response = await client.responses.create({
+  model: 'mixtral-8x22b-instruct',
+  instructions: 'You are a helpful assistant.',
+  input: 'Hello world!'
+});
+
+console.log(response.output_text);`,

Also applies to: 27-33, 40-45

src/components/OpenAIStreamingPlayground.tsx (2)

85-90: Handle completion/error events for better UX.

Append deltas as you do, but also surface response.completed (usage, model) and response.error to set an error message immediately.

Example:

-      for await (const event of stream) {
-        if (event.type === 'response.output_text.delta') {
-          setResponse((prev) => prev + event.delta)
-        }
-      }
+      for await (const event of stream) {
+        if (event.type === 'response.output_text.delta') {
+          setResponse((prev) => prev + event.delta)
+        } else if (event.type === 'response.error') {
+          setError(event.error?.message || 'Stream error'); break
+        }
+      }

55-66: Optional: wire AbortController for cancel.

Small addition lets you reuse the Chat “Stop” UX pattern here later.
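A small sketch of how that AbortController wiring could look, assuming a React hook that owns the controller and exposes a stop handler; the hook and handler names are hypothetical.

```ts
// Sketch: cancelable streaming request owned by a React hook (hypothetical names).
import { useRef } from "react"

export function useCancelableStream() {
  const abortRef = useRef<AbortController | null>(null)

  // Starts a run, aborting any request still in flight from a previous run.
  const start = async (run: (signal: AbortSignal) => Promise<void>) => {
    abortRef.current?.abort()
    const controller = new AbortController()
    abortRef.current = controller
    try {
      await run(controller.signal)
    } finally {
      if (abortRef.current === controller) abortRef.current = null
    }
  }

  // Wire this to a Stop button; fetch calls using the signal reject with AbortError.
  const handleStop = () => abortRef.current?.abort()

  return { start, handleStop }
}
```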

src/components/ChatPlayground.tsx (4)

804-808: SSE parsing: handle CRLF delimiters, not just LF.

indexOf("\n\n") misses \r\n\r\n streams, delaying token display until close.

Apply:

-            let idx
-            // Loop to handle multiple events per chunk
-            while ((idx = buffer.indexOf("\n\n")) !== -1) {
+            let idx
+            const sep = /\r?\n\r?\n/
+            // Loop to handle multiple events per chunk
+            while ((idx = buffer.search(sep)) !== -1) {
               const block = buffer.slice(0, idx)
-              buffer = buffer.slice(idx + 2)
+              const m = buffer.match(sep)
+              buffer = buffer.slice(idx + (m ? m[0].length : 2))
               const status = flushEvents(block)

771-795: Capture usage/model from completion event in streaming mode.

Store usage and final model from response.completed to populate receipts (parity with non‑streaming path).

Apply:

-          let modelSeen: string | undefined
+          let modelSeen: string | undefined
+          let usageSeen:
+            | { prompt_tokens?: number; completion_tokens?: number; total_tokens?: number }
+            | undefined
...
-                if (json?.model && !modelSeen) modelSeen = json.model
+                if (json?.model && !modelSeen) modelSeen = json.model
                 // OpenAI Responses API: event.delta for response.output_text.delta
                 const plain = json?.type === 'response.output_text.delta' && typeof json.delta === "string" ? json.delta : ""
                 if (plain) appendChunk(assistantIndex, plain)
+                if (json?.type === 'response.completed' && json?.response?.usage) {
+                  usageSeen = json.response.usage as typeof usageSeen
+                }
...
-          setLastRun({
+          setLastRun({
             ...
-            usage: undefined,
+            usage: usageSeen,
             model: modelSeen || model,

Also applies to: 827-851


709-714: Optional: preserve roles without flattening.

You flatten history into a single string. If the backend accepts structured content, prefer structured input to retain roles and enable future tool calls.
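For illustration, a hedged sketch of what structured input could look like if the backend's Responses endpoint accepts role-tagged message arrays (an assumption about this proxy); the helper name and message type are hypothetical.

```ts
// Sketch: role-tagged Responses API input instead of a flattened string
// (assumes the backend accepts message arrays; helper and type names are hypothetical).
type ChatMessage = { role: "system" | "user" | "assistant"; content: string }

function buildResponsesPayload(model: string, history: ChatMessage[]) {
  const system = history.find((m) => m.role === "system")
  return {
    model,
    instructions: system?.content ?? "You are a helpful assistant.",
    // Each turn keeps its role rather than being concatenated into one string.
    input: history
      .filter((m) => m.role !== "system")
      .map((m) => ({ role: m.role, content: m.content })),
  }
}
```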


1375-1392: Tooltip copy tweak (nit).

“Receipts still use response headers” is implementation detail; consider “Costs shown from response headers” for clarity.

src/components/CodePlayground.tsx (2)

70-74: Also surface stream errors and completion for clarity.

Mirror the pattern used in Chat: handle response.error and optionally log completion metadata.

Apply:

-      for await (const event of stream) {
-        if (event.type === 'response.output_text.delta') {
-          setResponse((prev) => prev + event.delta)
-        }
-      }
+      for await (const event of stream) {
+        if (event.type === 'response.output_text.delta') {
+          setResponse((prev) => prev + event.delta)
+        } else if (event.type === 'response.error') {
+          setError(event.error?.message || 'Stream error'); break
+        }
+      }

54-66: Optional: add AbortController for cancel parity with Chat.

Lets you add a Stop button later without changing the streaming loop.

src/config/models.ts (4)

5-33: Eliminate the duplicated model ID source-of-truth; derive the type from the array

Defining both a literal union and the array is drift-prone. Recommend deriving the type from SUPPORTED_MODELS and removing this union.

Apply this diff to remove the union:

-export type UnrealModelId =
-  | "reel"
-  | "reel-v1"
-  | "r1-1776"
-  | "flux-1-dev-fp8"
-  | "llama4-scout-instruct-basic"
-  | "llama4-maverick-instruct-basic"
-  | "firesearch-ocr-v6"
-  | "llama-v3p1-405b-instruct"
-  | "mixtral-8x22b-instruct"
-  | "flux-kontext-max"
-  | "qwen3-coder-480b-a35b-instruct"
-  | "qwen3-235b-a22b-instruct-2507"
-  | "deepseek-r1-0528"
-  | "deepseek-r1-basic"
-  | "llama-v3p1-70b-instruct"
-  | "llama-v3p3-70b-instruct"
-  | "deepseek-r1"
-  | "qwen3-30b-a3b"
-  | "qwen3-30b-a3b-thinking-2507"
-  | "glm-4p5"
-  | "dobby-unhinged-llama-3-3-70b-new"
-  | "flux-1-schnell-fp8"
-  | "flux-kontext-pro"
-  | "dobby-mini-unhinged-plus-llama-3-1-8b"
-  | "deepseek-v3"
-  | "qwen3-235b-a22b"
-  | "kimi-k2-instruct"
-  | "qwen2p5-vl-32b-instruct"
-  | "playground-v2-5-1024px-aesthetic"

Add this just after the SUPPORTED_MODELS declaration (outside this range):

export type UnrealModelId = typeof SUPPORTED_MODELS[number];

63-64: Vision model listed; ensure UI gating

qwen2p5-vl-32b-instruct is VL. Confirm the streaming playground gates tools/UI when images aren’t supported yet, similar to the commented image model below.


71-73: Use the normalizer and a Set for O(1) checks and backward compatibility

This keeps the type guard accurate while accepting legacy-prefixed inputs.

 export function isSupportedModel(model: string): model is UnrealModelId {
-  return (SUPPORTED_MODELS as readonly string[]).includes(model)
+  return normalizeModelId(model) !== null
 }

67-70: Optional: add dev-time asserts for DEFAULT_MODEL/CODING_MODEL — repo clean of legacy prefixes

SUPPORTED_MODELS contains "mixtral-8x22b-instruct" and "qwen3-coder-480b-a35b-instruct" and isSupportedModel() exists in src/config/models.ts; ripgrep found no "unreal::" occurrences.

Apply this diff (optional guardrail):

 export const DEFAULT_MODEL: UnrealModelId = "mixtral-8x22b-instruct"
 export const CODING_MODEL: UnrealModelId =
   "qwen3-coder-480b-a35b-instruct"
+if (process.env.NODE_ENV !== "production") {
+  if (!isSupportedModel(DEFAULT_MODEL)) {
+    throw new Error(`DEFAULT_MODEL not in SUPPORTED_MODELS: ${DEFAULT_MODEL}`)
+  }
+  if (!isSupportedModel(CODING_MODEL)) {
+    throw new Error(`CODING_MODEL not in SUPPORTED_MODELS: ${CODING_MODEL}`)
+  }
+}

src/config/models.ts references are correct; no legacy unreal:: IDs found.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fa199da and 9edd8b8.

📒 Files selected for processing (7)
  • .claude/settings.local.json (1 hunks)
  • src/components/ChatPlayground.tsx (7 hunks)
  • src/components/CodePlayground.tsx (3 hunks)
  • src/components/MarketingContent.tsx (3 hunks)
  • src/components/OpenAIStreamingPlayground.tsx (1 hunks)
  • src/config/models.ts (1 hunks)
  • src/pages/Index.tsx (3 hunks)
✅ Files skipped from review due to trivial changes (1)
  • src/pages/Index.tsx
🧰 Additional context used
🧬 Code graph analysis (3)
src/components/OpenAIStreamingPlayground.tsx (1)
src/config/models.ts (1)
  • CODING_MODEL (68-69)
src/components/CodePlayground.tsx (2)
src/config/unreal.ts (1)
  • OPENAI_URL (8-9)
src/config/models.ts (1)
  • DEFAULT_MODEL (67-67)
src/components/ChatPlayground.tsx (2)
src/lib/api.ts (1)
  • getCurrentChainId (469-485)
src/config/unreal.ts (1)
  • OPENAI_URL (8-9)
🔇 Additional comments (1)
.claude/settings.local.json (1)

1-12: Confirm committing a “local” settings file is intentional.

If this is environment‑specific, consider committing a .example and git‑ignoring local overrides.

Comment on lines 35 to 65
export const SUPPORTED_MODELS: Readonly<UnrealModelId[]> = [
// "unreal::reel",
// "unreal::reel-v1",
"unreal::r1-1776",
"unreal::flux-1-dev-fp8",
"unreal::llama4-scout-instruct-basic",
"unreal::llama4-maverick-instruct-basic",
"unreal::firesearch-ocr-v6",
"unreal::llama-v3p1-405b-instruct",
"unreal::mixtral-8x22b-instruct",
"unreal::flux-kontext-max",
"unreal::qwen3-coder-480b-a35b-instruct",
"unreal::qwen3-235b-a22b-instruct-2507",
"unreal::deepseek-r1-0528",
"unreal::deepseek-r1-basic",
"unreal::llama-v3p1-70b-instruct",
"unreal::llama-v3p3-70b-instruct",
"unreal::deepseek-r1",
"unreal::qwen3-30b-a3b",
"unreal::qwen3-30b-a3b-thinking-2507",
"unreal::glm-4p5",
"unreal::dobby-unhinged-llama-3-3-70b-new",
"unreal::flux-1-schnell-fp8",
"unreal::flux-kontext-pro",
"unreal::dobby-mini-unhinged-plus-llama-3-1-8b",
"unreal::deepseek-v3",
"unreal::qwen3-235b-a22b",
"unreal::kimi-k2-instruct",
"unreal::qwen2p5-vl-32b-instruct",
// "unreal::playground-v2-5-1024px-aesthetic", //FIXME: later after UI support for image
// "reel",
// "reel-v1",
"r1-1776",
"flux-1-dev-fp8",
"llama4-scout-instruct-basic",
"llama4-maverick-instruct-basic",
"firesearch-ocr-v6",
"llama-v3p1-405b-instruct",
"mixtral-8x22b-instruct",
"flux-kontext-max",
"qwen3-coder-480b-a35b-instruct",
"qwen3-235b-a22b-instruct-2507",
"deepseek-r1-0528",
"deepseek-r1-basic",
"llama-v3p1-70b-instruct",
"llama-v3p3-70b-instruct",
"deepseek-r1",
"qwen3-30b-a3b",
"qwen3-30b-a3b-thinking-2507",
"glm-4p5",
"dobby-unhinged-llama-3-3-70b-new",
"flux-1-schnell-fp8",
"flux-kontext-pro",
"dobby-mini-unhinged-plus-llama-3-1-8b",
"deepseek-v3",
"qwen3-235b-a22b",
"kimi-k2-instruct",
"qwen2p5-vl-32b-instruct",
// "playground-v2-5-1024px-aesthetic", //FIXME: later after UI support for image
] as const
Contributor


⚠️ Potential issue

Support legacy 'unreal::' IDs to avoid breaking persisted settings, deep links, or cached runs

If any stored value or incoming param still uses the old prefix, isSupportedModel will reject it. Normalize aliasing for backward compatibility.

Outside-of-range additions (place below the array):

export const SUPPORTED_MODELS_SET = new Set<string>(SUPPORTED_MODELS as readonly string[]);

export const LEGACY_MODEL_PREFIX = "unreal::";

export function normalizeModelId(input: string): UnrealModelId | null {
  const maybe = input.startsWith(LEGACY_MODEL_PREFIX)
    ? input.slice(LEGACY_MODEL_PREFIX.length)
    : input;
  return SUPPORTED_MODELS_SET.has(maybe) ? (maybe as UnrealModelId) : null;
}
🤖 Prompt for AI Agents
In src/config/models.ts around lines 35 to 65, the SUPPORTED_MODELS list
currently rejects legacy IDs prefixed with "unreal::" which breaks persisted
settings and deep links; add backward-compatibility utilities by exporting a Set
of supported model strings (SUPPORTED_MODELS_SET), a LEGACY_MODEL_PREFIX
constant set to "unreal::", and a normalizeModelId(input: string) function that
strips the legacy prefix if present and returns the matching UnrealModelId or
null; place these exports below the SUPPORTED_MODELS array and ensure types use
string/UnrealModelId as appropriate so existing callers can call
normalizeModelId to accept legacy IDs.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (2)
src/components/ChatPlayground.tsx (1)

587-831: Enhance SSE stream processing for data: [DONE] handling.

The current implementation doesn't properly break the outer loop when "[DONE]" is received, which could cause the UI to hang if the server delays closing the stream. Additionally, any trailing bytes in the TextDecoder buffer aren't flushed.

Apply this diff to properly handle the "[DONE]" signal and flush the decoder:

           let buffer = ""
           let modelSeen: string | undefined
+          let finished = false
           const flushEvents = (block: string) => {
             const lines = block.split("\n")
             for (const line of lines) {
               const trimmed = line.trim()
               if (!trimmed.startsWith("data:")) continue
               const data = trimmed.slice(5).trim()
               if (!data) continue
               if (data === "[DONE]") return "DONE" as const
               try {
                 const json = JSON.parse(data) as {
                   type?: string
                   delta?: string
                   model?: string
                 }
                 if (json?.model && !modelSeen) modelSeen = json.model
                 // OpenAI Responses API: event.delta for response.output_text.delta
                 const plain = json?.type === 'response.output_text.delta' && typeof json.delta === "string" ? json.delta : ""
                 if (plain) appendChunk(assistantIndex, plain)
               } catch (_e) {
                 // ignore malformed chunks
               }
             }
             return undefined
           }

-          let finished = false
-
           while (true) {
             const { done, value } = await reader.read()
             if (done) break
             buffer += decoder.decode(value, { stream: true })
             // Process complete SSE events separated by blank lines
             let idx
             // Loop to handle multiple events per chunk
             while ((idx = buffer.indexOf("\n\n")) !== -1) {
               const block = buffer.slice(0, idx)
               buffer = buffer.slice(idx + 2)
               const status = flushEvents(block)
               if (status === "DONE") {
                 // Drain remaining
                 buffer = ""
                 finished = true
                 break
               }
             }
             if (finished) break
           }

           // Flush any remaining decoded bytes (in case the final chunk wasn't newline-terminated)
           const tail = decoder.decode()
           if (tail) {
             buffer += tail
             if (buffer) void flushEvents(buffer)
           }
src/config/models.ts (1)

73-97: Add backward compatibility for legacy model IDs.

The removal of model ID validation could break existing integrations that use the old "unreal::" prefix format.

Add legacy model ID support after the getAvailableModels function:

 export async function getAvailableModels(
   auth?: string
 ): Promise<UnrealModelId[]> {
   // ... existing implementation ...
 }

+export const LEGACY_MODEL_PREFIX = "unreal::"
+
+/**
+ * Normalizes model IDs by removing legacy prefixes
+ * @param input - The model ID to normalize
+ * @returns Normalized model ID or null if invalid
+ */
+export function normalizeModelId(input: string): UnrealModelId | null {
+  const normalized = input.startsWith(LEGACY_MODEL_PREFIX)
+    ? input.slice(LEGACY_MODEL_PREFIX.length)
+    : input
+  
+  // Optionally validate against available models
+  // const available = await getAvailableModels()
+  // return available.includes(normalized) ? normalized : null
+  
+  return normalized
+}
🧹 Nitpick comments (8)
src/components/ChatCompletion.tsx (1)

36-40: Consider type assertion safety.

The type assertions as UnrealModelId[] and as string[] are unnecessary since getAvailableModels already returns Promise<UnrealModelId[]> and UnrealModelId is defined as string.

Apply this diff to remove unnecessary type assertions:

-        const models = await getAvailableModels(auth);
-        if (models && models.length > 0) {
-          setAvailableModels(models as UnrealModelId[]);
-          setModel((prev) => ((models as string[]).includes(prev) ? prev : (models[0] as UnrealModelId)));
+        const models = await getAvailableModels(auth);
+        if (models && models.length > 0) {
+          setAvailableModels(models);
+          setModel((prev) => (models.includes(prev) ? prev : models[0]));
src/components/ChatPlayground.tsx (3)

279-283: Remove unnecessary type assertions for cleaner code.

Similar to the ChatCompletion component, the type assertions here are redundant.

Apply this diff to simplify:

-          setAvailableModels(models as UnrealModelId[])
+          setAvailableModels(models)
           setModel((prev) =>
-            (models as string[]).includes(prev)
-              ? prev
-              : (models[0] as UnrealModelId)
+            models.includes(prev) ? prev : models[0]
           )

658-660: Improve message format for non-streamed API compatibility.

The current approach of concatenating messages with role prefixes might not preserve conversation context well, especially for multi-turn conversations.

Consider preserving the conversation structure better:

-              instructions: openaiMessages.find(m => m.role === "system")?.content || "You are a helpful assistant.",
-              input: openaiMessages.filter(m => m.role !== "system").map(m => `${m.role}: ${m.content}`).join("\n"),
+              instructions: openaiMessages.find(m => m.role === "system")?.content || "You are a helpful assistant.",
+              input: openaiMessages
+                .filter(m => m.role !== "system")
+                .map(m => m.content)
+                .join("\n\n"),
+              // Optionally include conversation_history if the API supports it
+              conversation_history: openaiMessages.filter(m => m.role !== "system"),

693-714: Consider extracting header parsing logic.

The header parsing logic is duplicated between streaming and non-streaming paths. Consider extracting it to a reusable function.

Add a helper function to reduce duplication:

+  const parseResponseHeaders = (headers: Record<string, string>) => {
+    const num = (s?: string) => {
+      if (s == null) return undefined
+      const n = Number(s)
+      return Number.isFinite(n) ? n : undefined
+    }
+    return {
+      headerInputCost: num(headers["openai-input-cost"]),
+      headerOutputCost: num(headers["openai-output-cost"]),
+      headerTotalCost: num(headers["openai-total-cost"]),
+      priceTxHash: headers["openai-price-tx"],
+      priceTxUrl: headers["openai-price-tx-url"],
+      costTxHash: headers["openai-cost-tx"],
+      costTxUrl: headers["openai-cost-tx-url"],
+      refundAmount: num(headers["openai-refund-amount"]),
+      refundTxHash: headers["openai-refund-tx"],
+      refundTxUrl: headers["openai-refund-tx-url"],
+      paymentTokenUrl: headers["openai-payment-token"],
+      chainName: headers["openai-chain"],
+      chainIdParsed: headers["openai-chain-id"]
+        ? Number.isFinite(Number(headers["openai-chain-id"]))
+          ? Number(headers["openai-chain-id"])
+          : undefined
+        : undefined,
+      callsRemaining: num(headers["openai-calls-remaining"]) ?? undefined,
+      requestIdHeader: headers["x-request-id"] || headers["openai-request-id"] || null,
+    }
+  }

           // Parse explicit headers for price/cost and transactions
-          const num = (s?: string) => {
-            if (s == null) return undefined
-            const n = Number(s)
-            return Number.isFinite(n) ? n : undefined
-          }
-          const headerInputCost = num(headersObj["openai-input-cost"]) // UNREAL
-          const headerOutputCost = num(headersObj["openai-output-cost"]) // UNREAL
-          const headerTotalCost = num(headersObj["openai-total-cost"]) // UNREAL
-          const priceTxHash = headersObj["openai-price-tx"]
-          const priceTxUrl = headersObj["openai-price-tx-url"]
-          const costTxHash = headersObj["openai-cost-tx"]
-          const costTxUrl = headersObj["openai-cost-tx-url"]
-          const refundAmount = num(headersObj["openai-refund-amount"]) // UNREAL
-          const refundTxHash = headersObj["openai-refund-tx"]
-          const refundTxUrl = headersObj["openai-refund-tx-url"]
-          const paymentTokenUrl = headersObj["openai-payment-token"]
-          const chainName = headersObj["openai-chain"]
-          const chainIdHeader = headersObj["openai-chain-id"]
-          const chainIdParsed = chainIdHeader
-            ? Number.isFinite(Number(chainIdHeader))
-              ? Number(chainIdHeader)
-              : undefined
-            : undefined
-          const callsRemaining = num(headersObj["openai-calls-remaining"]) ?? undefined
-          const requestIdHeader =
-            headersObj["x-request-id"] || headersObj["openai-request-id"] || null
+          const parsedHeaders = parseResponseHeaders(headersObj)
+          const {
+            headerInputCost,
+            headerOutputCost,
+            headerTotalCost,
+            priceTxHash,
+            priceTxUrl,
+            costTxHash,
+            costTxUrl,
+            refundAmount,
+            refundTxHash,
+            refundTxUrl,
+            paymentTokenUrl,
+            chainName,
+            chainIdParsed,
+            callsRemaining,
+            requestIdHeader,
+          } = parsedHeaders
src/config/models.ts (4)

7-7: Consider adding JSDoc for the changed type definition.

The UnrealModelId type changed from a union of specific models to a generic string. This is a breaking change that should be documented.

Add documentation to clarify the change:

 // Model identifiers are determined at runtime from /v1/models
+/**
+ * UnrealModelId represents a model identifier string.
+ * Previously a union of specific models, now dynamically determined from the API.
+ * @since 2.0.0 - Changed from union type to string for runtime discovery
+ */
 export type UnrealModelId = string

18-28: Enhance type safety for model extraction.

The extractModelId function uses loose typing. Consider strengthening the type definitions.

Improve type safety with better typing:

-type ApiModelLike =
-  | { id?: unknown; model?: unknown; name?: unknown }
-  | string
-  | null
-  | undefined
+type ApiModelObject = {
+  id?: string | unknown
+  model?: string | unknown
+  name?: string | unknown
+}
+
+type ApiModelLike = ApiModelObject | string | null | undefined

 const extractModelId = (m: ApiModelLike): string | undefined => {
   if (typeof m === "string") return m
   if (m && typeof m === "object") {
-    const obj = m as { id?: unknown; model?: unknown; name?: unknown }
+    const obj = m as ApiModelObject
     const candidate = [obj.id, obj.model, obj.name].find(
       (v): v is string => typeof v === "string"
     )
     return candidate
   }
   return undefined
 }

85-93: Improve error handling granularity.

The catch block swallows all errors and only logs a warning. Consider distinguishing between network errors and parsing errors for better debugging.

Enhance error handling:

   try {
     const models = await fetchModelsFromApi(auth)
     if (models.length > 0) {
       _modelsCache = { models, expiresAt: now + TEN_MIN }
       return models
     }
   } catch (e) {
-    // Swallow and fallback below
-    console.warn("getAvailableModels: failed to fetch from API", e)
+    // Log different error types for better debugging
+    if (e instanceof TypeError && e.message.includes('fetch')) {
+      console.warn("getAvailableModels: network error fetching models", e)
+    } else if (e instanceof SyntaxError) {
+      console.warn("getAvailableModels: invalid JSON response from models API", e)
+    } else {
+      console.warn("getAvailableModels: unexpected error fetching models", e)
+    }
   }

99-101: Consider adding cache invalidation on auth changes.

The cache invalidation function could be automatically called when authentication changes to ensure fresh model lists for different users.

Consider adding an auth-aware cache invalidation pattern that components can use:

 export function invalidateModelsCache(): void {
   _modelsCache = null
 }
+
+/**
+ * Invalidates cache if the auth token has changed
+ * @param newAuth - The new authentication token
+ * @returns Whether the cache was invalidated
+ */
+export function invalidateCacheIfAuthChanged(newAuth?: string): boolean {
+  // Store last auth in module scope (consider WeakMap for multiple instances)
+  // This is a simplified example - adjust based on your auth management
+  if (_lastAuth !== newAuth) {
+    _lastAuth = newAuth
+    invalidateModelsCache()
+    return true
+  }
+  return false
+}
+
+let _lastAuth: string | undefined
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9edd8b8 and f41a9a1.

📒 Files selected for processing (3)
  • src/components/ChatCompletion.tsx (3 hunks)
  • src/components/ChatPlayground.tsx (10 hunks)
  • src/config/models.ts (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (3)
src/components/ChatPlayground.tsx (5)
src/config/models.ts (3)
  • DEFAULT_MODEL (103-103)
  • getAvailableModels (73-97)
  • UnrealModelId (7-7)
src/lib/api.ts (1)
  • getCurrentChainId (469-485)
src/config/unreal.ts (1)
  • OPENAI_URL (8-9)
.types/app/src/components/ui/tooltip.d.ts (3)
  • Tooltip (7-7)
  • TooltipTrigger (7-7)
  • TooltipContent (7-7)
src/components/ui/switch.tsx (1)
  • Switch (27-27)
src/config/models.ts (1)
src/config/unreal.ts (1)
  • OPENAI_URL (8-9)
src/components/ChatCompletion.tsx (1)
src/config/models.ts (3)
  • UnrealModelId (7-7)
  • DEFAULT_MODEL (103-103)
  • getAvailableModels (73-97)
🔇 Additional comments (2)
src/components/ChatCompletion.tsx (1)

9-10: LGTM! Clean migration to dynamic model loading.

The switch from static SUPPORTED_MODELS to dynamic getAvailableModels improves runtime flexibility and aligns with the model discovery pattern used across components.

src/components/ChatPlayground.tsx (1)

1324-1340: Great UX addition with the streaming toggle!

The streaming mode toggle with tooltip provides clear user control and sets appropriate expectations with the "beta" label.

@laciferin2024 laciferin2024 merged commit 067f7f8 into main Oct 16, 2025
5 of 7 checks passed
@coderabbitai coderabbitai bot mentioned this pull request Nov 9, 2025
5 tasks