Conversation
CodeCapy Review ₍ᐢ•(ܫ)•ᐢ₎
Codebase Summary

ZapDev is an AI-powered development platform that enables real-time web application creation through conversational interactions with AI agents. Users can generate code, view live previews, and manage projects within an integrated interface featuring a file explorer and split-pane code preview. The platform leverages multiple AI providers (OpenRouter and now Cerebras for the z-ai/glm-4.7 model) to generate code fragments using a conversational agent.

PR Changes

The pull request introduces changes to the AI agent client logic by adding a new API client for the Cerebras provider, which is used when the model is 'z-ai/glm-4.7'. The env.example file is updated with a new CEREBRAS_API_KEY variable. In several files (src/agents/client.ts, src/agents/code-agent.ts, and src/agents/types.ts), the code now dynamically selects between the OpenRouter and Cerebras providers based on the model id. This change ensures that the Cerebras endpoint is used whenever the z-ai/glm-4.7 model is requested; otherwise OpenRouter remains the default.

Setup Instructions
Generated Test Cases

1: AI Agent Code Generation with Selected Model ❗️❗️❗️

Description: Tests the complete user workflow of submitting a code generation task through the AI agent interface and verifies that the proper code preview is rendered. It also implicitly checks that the system selects the correct API client based on the model chosen (e.g. openai/gpt-5-nano).

Prerequisites:
Steps:
Expected Result: The user should see the generated code rendered in the code preview area alongside the file explorer layout. Progress status messages should be visible during processing, and once completed, the output is clearly presented.

2: Fallback to OpenRouter for Non-Cerebras Models ❗️❗️❗️

Description: Verifies that when a model other than 'z-ai/glm-4.7' is selected (e.g., 'google/gemini-2.5-flash-lite'), the system uses the OpenRouter provider correctly.

Prerequisites:
Steps:
Expected Result: The generated code output appears in the preview pane, and the process completes successfully, indicating that the OpenRouter client was used for the selected model.

3: Error Handling for Missing Cerebras API Key ❗️❗️

Description: Checks how the application handles errors when the Cerebras API key is not provided while requesting the model 'z-ai/glm-4.7'. This ensures that meaningful error feedback is provided to the user.

Prerequisites:
Steps:
Expected Result: The user sees an error message indicating that the API key for Cerebras is missing or invalid. The error should be displayed clearly in the interface so that the user understands the request could not be processed due to missing credentials.

4: Live Preview Layout Integrity During Code Generation ❗️❗️

Description: Verifies that the live preview and split-pane layout (including file explorer and code preview areas) remain intact during and after the code generation process.

Prerequisites:
Steps:
Expected Result: The UI should maintain its split-pane layout throughout the code generation process. Both the file explorer and the code preview should be clearly visible, with no visual disruptions or misalignments.

Raw Changes Analyzed

File: env.example
Changes:
@@ -21,6 +21,9 @@ NEXT_PUBLIC_POLAR_SERVER="production" # "sandbox" for testing, "production" fo
OPENROUTER_API_KEY=""
OPENROUTER_BASE_URL="https://openrouter.ai/api/v1"
+# Cerebras API (Z.AI GLM 4.7 model - ultra-fast inference)
+CEREBRAS_API_KEY="" # Get from https://cloud.cerebras.ai
+
# E2B
E2B_API_KEY=""
File: src/agents/client.ts
Changes:
@@ -1,11 +1,19 @@
import { createOpenAI } from "@ai-sdk/openai";
-// Use OpenAI provider with OpenRouter's API endpoint
export const openrouter = createOpenAI({
apiKey: process.env.OPENROUTER_API_KEY!,
baseURL: "https://openrouter.ai/api/v1",
});
+export const cerebras = createOpenAI({
+ apiKey: process.env.CEREBRAS_API_KEY || "",
+ baseURL: "https://api.cerebras.ai/v1",
+});
+
export function getModel(modelId: string) {
return openrouter(modelId);
}
+
+export function getClientForModel(modelId: string) {
+ return modelId === "z-ai/glm-4.7" ? cerebras : openrouter;
+}
File: src/agents/code-agent.ts
Changes:
@@ -4,7 +4,7 @@ import { ConvexHttpClient } from "convex/browser";
import { api } from "@/convex/_generated/api";
import type { Id } from "@/convex/_generated/dataModel";
-import { openrouter } from "./client";
+import { getClientForModel } from "./client";
import { createAgentTools } from "./tools";
import {
type Framework,
@@ -134,7 +134,9 @@ async function detectFramework(prompt: string): Promise<Framework> {
cacheKey,
async () => {
const { text } = await generateText({
- model: openrouter.chat("google/gemini-2.5-flash-lite"),
+ model: getClientForModel("google/gemini-2.5-flash-lite").chat(
+ "google/gemini-2.5-flash-lite"
+ ),
system: FRAMEWORK_SELECTOR_PROMPT,
prompt,
temperature: 0.3,
@@ -159,13 +161,17 @@ async function generateFragmentMetadata(
try {
const [titleResult, responseResult] = await Promise.all([
generateText({
- model: openrouter.chat("openai/gpt-5-nano"),
+ model: getClientForModel("openai/gpt-5-nano").chat(
+ "openai/gpt-5-nano"
+ ),
system: FRAGMENT_TITLE_PROMPT,
prompt: summary,
temperature: 0.3,
}),
generateText({
- model: openrouter.chat("openai/gpt-5-nano"),
+ model: getClientForModel("openai/gpt-5-nano").chat(
+ "openai/gpt-5-nano"
+ ),
system: RESPONSE_PROMPT,
prompt: summary,
temperature: 0.3,
@@ -419,20 +425,16 @@ export async function* runCodeAgent(
temperature: modelConfig.temperature,
};
- if ("frequencyPenalty" in modelConfig) {
+ if (
+ modelConfig.supportsFrequencyPenalty &&
+ "frequencyPenalty" in modelConfig
+ ) {
modelOptions.frequencyPenalty = modelConfig.frequencyPenalty;
}
- if (selectedModel === "z-ai/glm-4.7") {
- modelOptions.provider = {
- order: ["Z.AI"],
- allow_fallbacks: false,
- };
- }
-
console.log("[DEBUG] Beginning AI stream...");
const result = streamText({
- model: openrouter.chat(selectedModel),
+ model: getClientForModel(selectedModel).chat(selectedModel),
system: frameworkPrompt,
messages,
tools,
@@ -492,7 +494,7 @@ export async function* runCodeAgent(
yield { type: "status", data: "Generating summary..." };
const followUp = await generateText({
- model: openrouter.chat(selectedModel),
+ model: getClientForModel(selectedModel).chat(selectedModel),
system: frameworkPrompt,
messages: [
...messages,
@@ -577,7 +579,7 @@ ${validationErrors || lastErrorMessage || "No error details provided."}
5. PROVIDE SUMMARY with <task_summary> once fixed`;
const fixResult = await generateText({
- model: openrouter.chat(selectedModel),
+ model: getClientForModel(selectedModel).chat(selectedModel),
system: frameworkPrompt,
messages: [
...messages,
@@ -902,7 +904,7 @@ REQUIRED ACTIONS:
5. Provide a <task_summary> explaining what was fixed`;
const result = await generateText({
- model: openrouter.chat(fragmentModel),
+ model: getClientForModel(fragmentModel).chat(fragmentModel),
system: frameworkPrompt,
messages: [{ role: "user", content: fixPrompt }],
tools,
File: src/agents/types.ts
Changes:
@@ -31,27 +31,30 @@ export const MODEL_CONFIGS = {
provider: "anthropic",
description: "Fast and efficient for most coding tasks",
temperature: 0.7,
+ supportsFrequencyPenalty: true,
frequencyPenalty: 0.5,
},
"openai/gpt-5.1-codex": {
name: "GPT-5.1 Codex",
provider: "openai",
description: "OpenAI's flagship model for complex tasks",
temperature: 0.7,
+ supportsFrequencyPenalty: true,
frequencyPenalty: 0.5,
},
"z-ai/glm-4.7": {
name: "Z-AI GLM 4.7",
- provider: "z-ai",
- description: "Ultra-fast inference for speed-critical tasks",
+ provider: "cerebras",
+ description: "Ultra-fast inference for speed-critical tasks via Cerebras",
temperature: 0.7,
- frequencyPenalty: 0.5,
+ supportsFrequencyPenalty: false,
},
"moonshotai/kimi-k2-0905": {
name: "Kimi K2",
provider: "moonshot",
description: "Specialized for coding tasks",
temperature: 0.7,
+ supportsFrequencyPenalty: true,
frequencyPenalty: 0.5,
},
"google/gemini-3-pro-preview": {
📝 Walkthrough

Added Cerebras integration: new env var, package dependency, a Cerebras client and a getClientForModel(modelId) factory; updated model configs (including a key rename) and routed code-agent model calls through the factory with conditional frequencyPenalty handling.
Sequence Diagram

sequenceDiagram
participant CodeAgent as Code Agent
participant Factory as getClientForModel()
participant Cerebras as Cerebras Client
participant OpenRouter as OpenRouter Client
CodeAgent->>Factory: getClientForModel("zai-glm-4.7")
Factory-->>CodeAgent: return Cerebras client
CodeAgent->>Cerebras: chat("zai-glm-4.7", params)
Cerebras-->>CodeAgent: response
CodeAgent->>Factory: getClientForModel("openai/gpt-5.1-codex")
Factory-->>CodeAgent: return OpenRouter client
CodeAgent->>OpenRouter: chat("openai/gpt-5.1-codex", params)
OpenRouter-->>CodeAgent: response
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
🚀 Launching Scrapybara desktop...
❌ Something went wrong:
Deployment failed

This pull request failed while building automatically on Stormkit. You can preview the logs using the following link.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 8ecb820932
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In @src/agents/client.ts:
- Around line 8-11: The Cerebras client initialization uses the wrong model ID
and allows an empty API key; update any occurrence of the model string
"z-ai/glm-4.7" to the correct "zai-glm-4.7" (search for the symbol/model literal
used with the cerebras client), and make the API key required when creating the
client (change process.env.CEREBRAS_API_KEY || "" to a runtime-checked value —
either use process.env.CEREBRAS_API_KEY! or explicitly throw a clear error if
CEREBRAS_API_KEY is missing) so the Cerebras client fails fast with a helpful
message instead of silently initializing and later returning 401s.
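A minimal sketch of the fail-fast validation the comment asks for (the requireEnv helper is hypothetical, not code from this PR; it simply refuses to build a client from a missing key):

```typescript
// Hypothetical helper, not part of the PR: look up a required key in an
// env-like record and throw a descriptive error instead of returning "".
function requireEnv(
  env: Record<string, string | undefined>,
  name: string
): string {
  const value = env[name];
  if (!value) {
    throw new Error(
      `${name} is not set. Add it to your environment (see env.example).`
    );
  }
  return value;
}

// Usage sketch: validate before constructing the Cerebras client, e.g.
// const cerebras = createCerebras({
//   apiKey: requireEnv(process.env, "CEREBRAS_API_KEY"),
// });
```

This surfaces the misconfiguration at startup rather than as a 401 deep inside a generation run.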
🧹 Nitpick comments (1)
src/agents/code-agent.ts (1)
7-7: Client routing + frequencyPenalty gating are consistent; consider a small DRY/type-safety cleanup.

The switch to getClientForModel(...).chat(...) and the supportsFrequencyPenalty guard read cleanly. Two optional follow-ups:

- Reduce repetition by caching const chatModel = getClientForModel(selectedModel).chat(selectedModel) once per run.
- Consider typing modelOptions more narrowly than Record<string, unknown> so option names stay type-checked.

Also applies to: 136-143, 163-179, 428-433, 436-443, 496-514, 581-592, 906-913
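The narrower typing the nitpick suggests could look roughly like this (the ModelConfig and ModelOptions interfaces below are illustrative assumptions for the sketch, not the PR's actual types; the field names mirror the diff):

```typescript
// Illustrative types: field names mirror the diff, but these interfaces
// are assumptions for the sketch, not code from src/agents/types.ts.
interface ModelConfig {
  temperature: number;
  supportsFrequencyPenalty: boolean;
  frequencyPenalty?: number;
}

interface ModelOptions {
  temperature: number;
  frequencyPenalty?: number; // only present when the model supports it
}

// Mirrors the gating in runCodeAgent: the penalty is copied over only
// when the config both supports and defines it.
function buildModelOptions(config: ModelConfig): ModelOptions {
  const options: ModelOptions = { temperature: config.temperature };
  if (
    config.supportsFrequencyPenalty &&
    config.frequencyPenalty !== undefined
  ) {
    options.frequencyPenalty = config.frequencyPenalty;
  }
  return options;
}
```

With a narrow type, a typo such as options.frequencyPenality becomes a compile error instead of a silently ignored request parameter.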
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (4)
- env.example
- src/agents/client.ts
- src/agents/code-agent.ts
- src/agents/types.ts
🧰 Additional context used
📓 Path-based instructions (2)
src/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
TypeScript strict mode enabled in ESLint with no-explicit-any (warn) and no-unused-vars (error, except underscore-prefixed)
Use modern framework patterns: Next.js App Router and React hooks
Files:
- src/agents/client.ts
- src/agents/types.ts
- src/agents/code-agent.ts
**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
Strict TypeScript usage - avoid using the any type in code
Files:
- src/agents/client.ts
- src/agents/types.ts
- src/agents/code-agent.ts
🧠 Learnings (3)
📚 Learning: 2025-12-14T11:08:35.008Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-12-14T11:08:35.008Z
Learning: Applies to src/inngest/**/*.{ts,tsx} : AI code generation agents must follow framework-specific prompts from `src/prompts/` directory
Applied to files:
src/agents/code-agent.ts
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Applies to convex/**/*.{ts,tsx} : Use the `api` object from `convex/_generated/api.ts` to call public functions registered with `query`, `mutation`, or `action`
Applied to files:
src/agents/code-agent.ts
📚 Learning: 2025-12-14T11:08:17.520Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: .cursor/rules/convex_rules.mdc:0-0
Timestamp: 2025-12-14T11:08:17.520Z
Learning: Applies to convex/**/*.{ts,tsx} : Use the `internal` object from `convex/_generated/api.ts` to call internal functions registered with `internalQuery`, `internalMutation`, or `internalAction`
Applied to files:
src/agents/code-agent.ts
🧬 Code graph analysis (1)
src/agents/code-agent.ts (1)
src/agents/client.ts (1)
getClientForModel (17-19)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Codacy Security Scan
🔇 Additional comments (2)
env.example (1)
24-26: CEREBRAS_API_KEY addition looks good (server-only).

Nice to keep it non-NEXT_PUBLIC_*. Just ensure your deploy target (Vercel/Fly/etc.) and any internal setup docs mention this new required secret for z-ai/glm-4.7.

src/agents/types.ts (1)
34-36: MODEL_CONFIGS changes are coherent; watch for "provider vs routing" drift.

The supportsFrequencyPenalty additions and explicitly disabling it for z-ai/glm-4.7 align well with the new conditional application. One thing to keep in mind: provider: "cerebras" is now informational unless there is other code routing by provider; right now routing is by modelId equality, so avoid relying on provider for client selection unless you refactor intentionally.

Also applies to: 42-44, 45-51, 57-59
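The "routing is by modelId equality" point can be sketched in plain TypeScript (the string return values below are stand-ins for the PR's actual client objects):

```typescript
// Stand-in for getClientForModel: selection keys off the exact model id
// string, not the informational `provider` field in MODEL_CONFIGS.
type Provider = "cerebras" | "openrouter";

function providerForModel(modelId: string): Provider {
  return modelId === "zai-glm-4.7" ? "cerebras" : "openrouter";
}
```

Because the comparison is an exact string match, any future model-id rename (like z-ai/glm-4.7 to zai-glm-4.7 in this PR) must be applied to the routing literal as well, or the model silently falls back to OpenRouter.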
Replace custom OpenAI client configuration with the official Cerebras AI SDK for better compatibility and support. Update model ID from 'z-ai/glm-4.7' to 'zai-glm-4.7' to match Cerebras SDK naming conventions.

Changes:
- Install @ai-sdk/cerebras package (v2.0.5)
- Update client.ts to use createCerebras() instead of createOpenAI() with custom baseURL
- Update model ID across types, UI components, and tests
- Maintain backward compatibility with existing model selection logic

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
CodeCapy Review ₍ᐢ•(ܫ)•ᐢ₎
Codebase Summary

ZapDev is an AI-powered development platform that lets users create web applications using conversational AI agents in real-time sandboxes. The platform supports various AI providers and models to generate code and manage projects effectively.

PR Changes

This PR introduces integration for the Cerebras AI model via intelligent provider routing, automatically selecting the optimal AI provider based on model requirements. It also enhances frequency penalty configuration handling across models to improve response quality. Changes include updates to model identifiers (e.g. renaming 'z-ai/glm-4.7' to 'zai-glm-4.7'), modifications in agent client routing, and adjustments in configuration for frequency penalty application.

Setup Instructions
Generated Test Cases

1: Verify Model Dropdown Lists Updated Models ❗️❗️❗️

Description: This test verifies that both the Project Form and Message Form UI display the updated model identifiers and descriptions, including the renamed 'Z-AI GLM 4.7' (now 'zai-glm-4.7'). It ensures that users see the correct model names, images, and descriptions when selecting an AI model.

Prerequisites:
Steps:
Expected Result: The dropdown menus in both the Project Form and Message Form display the updated model identifiers, particularly showing 'Z-AI GLM 4.7' with the id 'zai-glm-4.7' and the correct icon and description.

2: Validate Cerebras Provider Routing for Z-AI Model ❗️❗️❗️

Description: This test ensures that when the user selects the 'Z-AI GLM 4.7' model (now 'zai-glm-4.7'), the application routes AI requests to the Cerebras provider instead of the default OpenRouter. It simulates a real conversation to verify proper backend routing and response generation.

Prerequisites:
Steps:
Expected Result: The conversation uses the Cerebras provider routing for the 'zai-glm-4.7' model. The user sees a response generated without error, confirming that the intelligent provider routing is working as expected.

3: Ensure Frequency Penalty Configuration Handling ❗️❗️

Description: This test checks that the application handles frequency penalty settings correctly. For models that support frequency penalty (for example, GPT-5.1 Codex), the configuration is applied; for models that do not support it (like the Cerebras-hosted 'zai-glm-4.7'), the parameter is ignored. This is verified indirectly through the smooth generation of AI responses.

Prerequisites:
Steps:
Expected Result: Responses are generated correctly for both types of models. For models supporting frequency penalty, the configuration is applied; for the Cerebras model, the system ignores the frequency penalty parameter. No error messages related to frequency penalty appear.

4: Handle Missing Cerebras API Key Gracefully ❗️❗️

Description: This test verifies that when the Cerebras API key is missing from the environment and a user selects the 'Z-AI GLM 4.7' model, the application handles the error gracefully by showing an appropriate error message, preventing a crash or unclear failure.

Prerequisites:
Steps:
Expected Result: The application displays a clear and user-friendly error message indicating that the Cerebras API key is missing. The UI remains responsive and does not crash, allowing the user to select another model or update the configuration.

Raw Changes Analyzed

File: bun.lock
Changes:
@@ -5,6 +5,7 @@
"": {
"name": "vibe",
"dependencies": {
+ "@ai-sdk/cerebras": "^2.0.5",
"@ai-sdk/openai": "^3.0.2",
"@clerk/backend": "^2.29.0",
"@clerk/nextjs": "^6.36.5",
@@ -114,13 +115,17 @@
"esbuild": "0.25.4",
},
"packages": {
+ "@ai-sdk/cerebras": ["@ai-sdk/cerebras@2.0.5", "", { "dependencies": { "@ai-sdk/openai-compatible": "2.0.4", "@ai-sdk/provider": "3.0.2", "@ai-sdk/provider-utils": "4.0.4" }, "peerDependencies": { "zod": "^3.25.76 || ^4.1.8" } }, "sha512-z7+btMNpeiOoVyXtMW+P1ZEWT1iJsUSlMtW1dCC67+t56GpTT+S7X++ROe5zbmNCVqQwd9iQTsEmj09H5y7eBg=="],
+
"@ai-sdk/gateway": ["@ai-sdk/gateway@3.0.4", "", { "dependencies": { "@ai-sdk/provider": "3.0.1", "@ai-sdk/provider-utils": "4.0.2", "@vercel/oidc": "3.0.5" }, "peerDependencies": { "zod": "^3.25.76 || ^4.1.8" } }, "sha512-OlccjNYZ5+4FaNyvs0kb3N5H6U/QCKlKPTGsgUo8IZkqfMQu8ALI1XD6l/BCuTKto+OO9xUPObT/W7JhbqJ5nA=="],
"@ai-sdk/openai": ["@ai-sdk/openai@3.0.2", "", { "dependencies": { "@ai-sdk/provider": "3.0.1", "@ai-sdk/provider-utils": "4.0.2" }, "peerDependencies": { "zod": "^3.25.76 || ^4.1.8" } }, "sha512-GONwavgSWtcWO+t9+GpGK8l7nIYh+zNtCL/NYDSeHxHiw6ksQS9XMRWrZyE5NpJ0EXNxSAWCHIDmb1WvTqhq9Q=="],
- "@ai-sdk/provider": ["@ai-sdk/provider@3.0.1", "", { "dependencies": { "json-schema": "^0.4.0" } }, "sha512-2lR4w7mr9XrydzxBSjir4N6YMGdXD+Np1Sh0RXABh7tWdNFFwIeRI1Q+SaYZMbfL8Pg8RRLcrxQm51yxTLhokg=="],
+ "@ai-sdk/openai-compatible": ["@ai-sdk/openai-compatible@2.0.4", "", { "dependencies": { "@ai-sdk/provider": "3.0.2", "@ai-sdk/provider-utils": "4.0.4" }, "peerDependencies": { "zod": "^3.25.76 || ^4.1.8" } }, "sha512-kzsXyybJKM3wtUtGZkNbvmpDwqpsvg/hTjlPZe3s/bCx3enVdAlRtXD853nnj6mZjteNCDLoR2OgVLuDpyRN5Q=="],
+
+ "@ai-sdk/provider": ["@ai-sdk/provider@3.0.2", "", { "dependencies": { "json-schema": "^0.4.0" } }, "sha512-HrEmNt/BH/hkQ7zpi2o6N3k1ZR1QTb7z85WYhYygiTxOQuaml4CMtHCWRbric5WPU+RNsYI7r1EpyVQMKO1pYw=="],
- "@ai-sdk/provider-utils": ["@ai-sdk/provider-utils@4.0.2", "", { "dependencies": { "@ai-sdk/provider": "3.0.1", "@standard-schema/spec": "^1.1.0", "eventsource-parser": "^3.0.6" }, "peerDependencies": { "zod": "^3.25.76 || ^4.1.8" } }, "sha512-KaykkuRBdF/ffpI5bwpL4aSCmO/99p8/ci+VeHwJO8tmvXtiVAb99QeyvvvXmL61e9Zrvv4GBGoajW19xdjkVQ=="],
+ "@ai-sdk/provider-utils": ["@ai-sdk/provider-utils@4.0.4", "", { "dependencies": { "@ai-sdk/provider": "3.0.2", "@standard-schema/spec": "^1.1.0", "eventsource-parser": "^3.0.6" }, "peerDependencies": { "zod": "^3.25.76 || ^4.1.8" } }, "sha512-VxhX0B/dWGbpNHxrKCWUAJKXIXV015J4e7qYjdIU9lLWeptk0KMLGcqkB4wFxff5Njqur8dt8wRi1MN9lZtDqg=="],
"@alloc/quick-lru": ["@alloc/quick-lru@5.2.0", "", {}, "sha512-UrcABB+4bUrFABwbluTIBErXwvbsU/V7TZWfmbgJfbkwiBuziS9gxdODUyuiecfdGQ85jglMW6juS3+z5TsKLw=="],
@@ -2512,6 +2517,14 @@
"zod-validation-error": ["zod-validation-error@4.0.2", "", { "peerDependencies": { "zod": "^3.25.0 || ^4.0.0" } }, "sha512-Q6/nZLe6jxuU80qb/4uJ4t5v2VEZ44lzQjPDhYJNztRQ4wyWc6VF3D3Kb/fAuPetZQnhS3hnajCf9CsWesghLQ=="],
+ "@ai-sdk/gateway/@ai-sdk/provider": ["@ai-sdk/provider@3.0.1", "", { "dependencies": { "json-schema": "^0.4.0" } }, "sha512-2lR4w7mr9XrydzxBSjir4N6YMGdXD+Np1Sh0RXABh7tWdNFFwIeRI1Q+SaYZMbfL8Pg8RRLcrxQm51yxTLhokg=="],
+
+ "@ai-sdk/gateway/@ai-sdk/provider-utils": ["@ai-sdk/provider-utils@4.0.2", "", { "dependencies": { "@ai-sdk/provider": "3.0.1", "@standard-schema/spec": "^1.1.0", "eventsource-parser": "^3.0.6" }, "peerDependencies": { "zod": "^3.25.76 || ^4.1.8" } }, "sha512-KaykkuRBdF/ffpI5bwpL4aSCmO/99p8/ci+VeHwJO8tmvXtiVAb99QeyvvvXmL61e9Zrvv4GBGoajW19xdjkVQ=="],
+
+ "@ai-sdk/openai/@ai-sdk/provider": ["@ai-sdk/provider@3.0.1", "", { "dependencies": { "json-schema": "^0.4.0" } }, "sha512-2lR4w7mr9XrydzxBSjir4N6YMGdXD+Np1Sh0RXABh7tWdNFFwIeRI1Q+SaYZMbfL8Pg8RRLcrxQm51yxTLhokg=="],
+
+ "@ai-sdk/openai/@ai-sdk/provider-utils": ["@ai-sdk/provider-utils@4.0.2", "", { "dependencies": { "@ai-sdk/provider": "3.0.1", "@standard-schema/spec": "^1.1.0", "eventsource-parser": "^3.0.6" }, "peerDependencies": { "zod": "^3.25.76 || ^4.1.8" } }, "sha512-KaykkuRBdF/ffpI5bwpL4aSCmO/99p8/ci+VeHwJO8tmvXtiVAb99QeyvvvXmL61e9Zrvv4GBGoajW19xdjkVQ=="],
+
"@ai-sdk/provider-utils/@standard-schema/spec": ["@standard-schema/spec@1.1.0", "", {}, "sha512-l2aFy5jALhniG5HgqrD6jXLi/rUWrKvqN/qJx6yoJsgKhblVd+iqqU4RCXavm/jPityDo5TCvKMnpjKnOriy0w=="],
"@aws-crypto/sha256-browser/@smithy/util-utf8": ["@smithy/util-utf8@2.3.0", "", { "dependencies": { "@smithy/util-buffer-from": "^2.2.0", "tslib": "^2.6.2" } }, "sha512-R8Rdn8Hy72KKcebgLiv8jQcQkXoLMOGGv5uI1/k0l+snqkOzQ1R0ChUBCxWMlBsFMekWjq0wRudIweFs7sKT5A=="],
@@ -2668,6 +2681,10 @@
"@vercel/functions/@vercel/oidc": ["@vercel/oidc@2.0.2", "", { "dependencies": { "@types/ms": "2.1.0", "ms": "2.1.3" } }, "sha512-59PBFx3T+k5hLTEWa3ggiMpGRz1OVvl9eN8SUai+A43IsqiOuAe7qPBf+cray/Fj6mkgnxm/D7IAtjc8zSHi7g=="],
+ "ai/@ai-sdk/provider": ["@ai-sdk/provider@3.0.1", "", { "dependencies": { "json-schema": "^0.4.0" } }, "sha512-2lR4w7mr9XrydzxBSjir4N6YMGdXD+Np1Sh0RXABh7tWdNFFwIeRI1Q+SaYZMbfL8Pg8RRLcrxQm51yxTLhokg=="],
+
+ "ai/@ai-sdk/provider-utils": ["@ai-sdk/provider-utils@4.0.2", "", { "dependencies": { "@ai-sdk/provider": "3.0.1", "@standard-schema/spec": "^1.1.0", "eventsource-parser": "^3.0.6" }, "peerDependencies": { "zod": "^3.25.76 || ^4.1.8" } }, "sha512-KaykkuRBdF/ffpI5bwpL4aSCmO/99p8/ci+VeHwJO8tmvXtiVAb99QeyvvvXmL61e9Zrvv4GBGoajW19xdjkVQ=="],
+
"ajv-formats/ajv": ["ajv@8.17.1", "", { "dependencies": { "fast-deep-equal": "^3.1.3", "fast-uri": "^3.0.1", "json-schema-traverse": "^1.0.0", "require-from-string": "^2.0.2" } }, "sha512-B/gBuNg5SiMTrPkC+A2+cW0RszwxYmn6VYxB/inlBStS5nx6xHIt/ehKRhIMhqusl7a8LjQoZnjCs5vhwxOQ1g=="],
"anymatch/picomatch": ["picomatch@2.3.1", "", {}, "sha512-JU3teHTNjmE2VCGFzuY8EXzCDVwEqB2a8fsIvwaStHhAWJEeVd1o1QD80CU6+ZdEXXSLbSsuLwJjkCBWqRQUVA=="],
@@ -2842,6 +2859,10 @@
"yup/type-fest": ["type-fest@2.19.0", "", {}, "sha512-RAH822pAdBgcNMAfWnCBU3CFZcfZ/i1eZjwFU/dsLKumyuuP3niueg2UAukXYF0E2AAoc82ZSSf9J0WQBinzHA=="],
+ "@ai-sdk/gateway/@ai-sdk/provider-utils/@standard-schema/spec": ["@standard-schema/spec@1.1.0", "", {}, "sha512-l2aFy5jALhniG5HgqrD6jXLi/rUWrKvqN/qJx6yoJsgKhblVd+iqqU4RCXavm/jPityDo5TCvKMnpjKnOriy0w=="],
+
+ "@ai-sdk/openai/@ai-sdk/provider-utils/@standard-schema/spec": ["@standard-schema/spec@1.1.0", "", {}, "sha512-l2aFy5jALhniG5HgqrD6jXLi/rUWrKvqN/qJx6yoJsgKhblVd+iqqU4RCXavm/jPityDo5TCvKMnpjKnOriy0w=="],
+
"@aws-crypto/sha256-browser/@smithy/util-utf8/@smithy/util-buffer-from": ["@smithy/util-buffer-from@2.2.0", "", { "dependencies": { "@smithy/is-array-buffer": "^2.2.0", "tslib": "^2.6.2" } }, "sha512-IJdWBbTcMQ6DA0gdNhh/BwrLkDR+ADW5Kr1aZmd4k3DIF6ezMV4R2NIAmT08wQJ3yUK82thHWmC/TnK/wpMMIA=="],
"@aws-crypto/util/@smithy/util-utf8/@smithy/util-buffer-from": ["@smithy/util-buffer-from@2.2.0", "", { "dependencies": { "@smithy/is-array-buffer": "^2.2.0", "tslib": "^2.6.2" } }, "sha512-IJdWBbTcMQ6DA0gdNhh/BwrLkDR+ADW5Kr1aZmd4k3DIF6ezMV4R2NIAmT08wQJ3yUK82thHWmC/TnK/wpMMIA=="],
@@ -2906,6 +2927,8 @@
"@typescript-eslint/typescript-estree/minimatch/brace-expansion": ["brace-expansion@2.0.2", "", { "dependencies": { "balanced-match": "^1.0.0" } }, "sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ=="],
+ "ai/@ai-sdk/provider-utils/@standard-schema/spec": ["@standard-schema/spec@1.1.0", "", {}, "sha512-l2aFy5jALhniG5HgqrD6jXLi/rUWrKvqN/qJx6yoJsgKhblVd+iqqU4RCXavm/jPityDo5TCvKMnpjKnOriy0w=="],
+
"ajv-formats/ajv/json-schema-traverse": ["json-schema-traverse@1.0.0", "", {}, "sha512-NM8/P9n3XjXhIZn1lLhkFaACTOURQXjWhV4BA/RnOv8xvgqtqpAX9IO4mRQxSx1Rlo4tqzeqb0sOlruaOy3dug=="],
"body-parser/debug/ms": ["ms@2.0.0", "", {}, "sha512-Tpp60P6IUJDTuOq/5Z8cdskzJujfwqfOTkrwIwj7IRISpnkJnT6SyJ4PCPnGMoFjC9ddhal5KVIYtAt97ix05A=="],
File: env.example
Changes:
@@ -21,6 +21,9 @@ NEXT_PUBLIC_POLAR_SERVER="production" # "sandbox" for testing, "production" fo
OPENROUTER_API_KEY=""
OPENROUTER_BASE_URL="https://openrouter.ai/api/v1"
+# Cerebras API (Z.AI GLM 4.7 model - ultra-fast inference)
+CEREBRAS_API_KEY="" # Get from https://cloud.cerebras.ai
+
# E2B
E2B_API_KEY=""
File: package.json
Changes:
@@ -12,6 +12,7 @@
"convex:deploy": "bunx convex deploy"
},
"dependencies": {
+ "@ai-sdk/cerebras": "^2.0.5",
"@ai-sdk/openai": "^3.0.2",
"@clerk/backend": "^2.29.0",
"@clerk/nextjs": "^6.36.5",
File: src/agents/client.ts
Changes:
@@ -1,11 +1,19 @@
import { createOpenAI } from "@ai-sdk/openai";
+import { createCerebras } from "@ai-sdk/cerebras";
-// Use OpenAI provider with OpenRouter's API endpoint
export const openrouter = createOpenAI({
apiKey: process.env.OPENROUTER_API_KEY!,
baseURL: "https://openrouter.ai/api/v1",
});
+export const cerebras = createCerebras({
+ apiKey: process.env.CEREBRAS_API_KEY || "",
+});
+
export function getModel(modelId: string) {
return openrouter(modelId);
}
+
+export function getClientForModel(modelId: string) {
+ return modelId === "zai-glm-4.7" ? cerebras : openrouter;
+}
File: src/agents/code-agent.ts
Changes:
@@ -4,7 +4,7 @@ import { ConvexHttpClient } from "convex/browser";
import { api } from "@/convex/_generated/api";
import type { Id } from "@/convex/_generated/dataModel";
-import { openrouter } from "./client";
+import { getClientForModel } from "./client";
import { createAgentTools } from "./tools";
import {
type Framework,
@@ -134,7 +134,9 @@ async function detectFramework(prompt: string): Promise<Framework> {
cacheKey,
async () => {
const { text } = await generateText({
- model: openrouter.chat("google/gemini-2.5-flash-lite"),
+ model: getClientForModel("google/gemini-2.5-flash-lite").chat(
+ "google/gemini-2.5-flash-lite"
+ ),
system: FRAMEWORK_SELECTOR_PROMPT,
prompt,
temperature: 0.3,
@@ -159,13 +161,17 @@ async function generateFragmentMetadata(
try {
const [titleResult, responseResult] = await Promise.all([
generateText({
- model: openrouter.chat("openai/gpt-5-nano"),
+ model: getClientForModel("openai/gpt-5-nano").chat(
+ "openai/gpt-5-nano"
+ ),
system: FRAGMENT_TITLE_PROMPT,
prompt: summary,
temperature: 0.3,
}),
generateText({
- model: openrouter.chat("openai/gpt-5-nano"),
+ model: getClientForModel("openai/gpt-5-nano").chat(
+ "openai/gpt-5-nano"
+ ),
system: RESPONSE_PROMPT,
prompt: summary,
temperature: 0.3,
@@ -419,20 +425,16 @@ export async function* runCodeAgent(
temperature: modelConfig.temperature,
};
- if ("frequencyPenalty" in modelConfig) {
+ if (
+ modelConfig.supportsFrequencyPenalty &&
+ "frequencyPenalty" in modelConfig
+ ) {
modelOptions.frequencyPenalty = modelConfig.frequencyPenalty;
}
- if (selectedModel === "z-ai/glm-4.7") {
- modelOptions.provider = {
- order: ["Z.AI"],
- allow_fallbacks: false,
- };
- }
-
console.log("[DEBUG] Beginning AI stream...");
const result = streamText({
- model: openrouter.chat(selectedModel),
+ model: getClientForModel(selectedModel).chat(selectedModel),
system: frameworkPrompt,
messages,
tools,
@@ -492,7 +494,7 @@ export async function* runCodeAgent(
yield { type: "status", data: "Generating summary..." };
const followUp = await generateText({
- model: openrouter.chat(selectedModel),
+ model: getClientForModel(selectedModel).chat(selectedModel),
system: frameworkPrompt,
messages: [
...messages,
@@ -577,7 +579,7 @@ ${validationErrors || lastErrorMessage || "No error details provided."}
5. PROVIDE SUMMARY with <task_summary> once fixed`;
const fixResult = await generateText({
- model: openrouter.chat(selectedModel),
+ model: getClientForModel(selectedModel).chat(selectedModel),
system: frameworkPrompt,
messages: [
...messages,
@@ -902,7 +904,7 @@ REQUIRED ACTIONS:
5. Provide a <task_summary> explaining what was fixed`;
const result = await generateText({
- model: openrouter.chat(fragmentModel),
+ model: getClientForModel(fragmentModel).chat(fragmentModel),
system: frameworkPrompt,
messages: [{ role: "user", content: fixPrompt }],
tools,
File: src/agents/types.ts
Changes:
@@ -31,27 +31,30 @@ export const MODEL_CONFIGS = {
provider: "anthropic",
description: "Fast and efficient for most coding tasks",
temperature: 0.7,
+ supportsFrequencyPenalty: true,
frequencyPenalty: 0.5,
},
"openai/gpt-5.1-codex": {
name: "GPT-5.1 Codex",
provider: "openai",
description: "OpenAI's flagship model for complex tasks",
temperature: 0.7,
+ supportsFrequencyPenalty: true,
frequencyPenalty: 0.5,
},
- "z-ai/glm-4.7": {
+ "zai-glm-4.7": {
name: "Z-AI GLM 4.7",
- provider: "z-ai",
- description: "Ultra-fast inference for speed-critical tasks",
+ provider: "cerebras",
+ description: "Ultra-fast inference for speed-critical tasks via Cerebras",
temperature: 0.7,
- frequencyPenalty: 0.5,
+ supportsFrequencyPenalty: false,
},
"moonshotai/kimi-k2-0905": {
name: "Kimi K2",
provider: "moonshot",
description: "Specialized for coding tasks",
temperature: 0.7,
+ supportsFrequencyPenalty: true,
frequencyPenalty: 0.5,
},
"google/gemini-3-pro-preview": {
@@ -125,7 +128,7 @@ export function selectModelForTask(
);
if (needsSpeed && !hasComplexityIndicators) {
- chosenModel = "z-ai/glm-4.7";
+ chosenModel = "zai-glm-4.7";
}
if (hasComplexityIndicators || isVeryLongPrompt) {
File: src/modules/home/ui/components/project-form.tsx
Changes:
@@ -64,7 +64,7 @@ export const ProjectForm = () => {
{ id: "anthropic/claude-haiku-4.5" as ModelId, name: "Claude Haiku 4.5", image: "/haiku.svg", description: "Fast and efficient" },
{ id: "google/gemini-3-pro-preview" as ModelId, name: "Gemini 3 Pro", image: "/gemini.svg", description: "Google's most intelligent model with state-of-the-art reasoning" },
{ id: "openai/gpt-5.1-codex" as ModelId, name: "GPT-5.1 Codex", image: "/openai.svg", description: "OpenAI's flagship model for complex tasks" },
- { id: "z-ai/glm-4.7" as ModelId, name: "Z-AI GLM 4.7", image: "/globe.svg", description: "Ultra-fast inference for speed-critical tasks" },
+ { id: "zai-glm-4.7" as ModelId, name: "Z-AI GLM 4.7", image: "/globe.svg", description: "Ultra-fast inference for speed-critical tasks" },
{ id: "moonshotai/kimi-k2-0905" as ModelId, name: "Kimi K2", image: "/globe.svg", description: "Specialized for coding tasks" },
];
File: src/modules/projects/ui/components/message-form.tsx
Changes:
@@ -62,7 +62,7 @@ export const MessageForm = ({ projectId, onStreamingFiles }: Props) => {
{ id: "anthropic/claude-haiku-4.5" as ModelId, name: "Claude Haiku 4.5", image: "/haiku.svg", description: "Fast and efficient" },
{ id: "google/gemini-3-pro-preview" as ModelId, name: "Gemini 3 Pro", image: "/gemini.svg", description: "Google's most intelligent model with state-of-the-art reasoning" },
{ id: "openai/gpt-5.1-codex" as ModelId, name: "GPT-5.1 Codex", image: "/openai.svg", description: "OpenAI's flagship model for complex tasks" },
- { id: "z-ai/glm-4.7" as ModelId, name: "Z-AI GLM 4.7", image: "/globe.svg", description: "Ultra-fast inference for speed-critical tasks" },
+ { id: "zai-glm-4.7" as ModelId, name: "Z-AI GLM 4.7", image: "/globe.svg", description: "Ultra-fast inference for speed-critical tasks" },
{ id: "moonshotai/kimi-k2-0905" as ModelId, name: "Kimi K2", image: "/globe.svg", description: "Specialized for coding tasks" },
];
File: tests/model-selection.test.ts
Changes:
@@ -23,7 +23,7 @@ describe('Model Selection Logic', () => {
const prompt = 'Need a quick prototype landing page mockup.';
const result = selectModelForTask(prompt);
- expect(result).toBe('z-ai/glm-4.7');
+ expect(result).toBe('zai-glm-4.7');
});
it('keeps Claude Haiku when complexity indicators are present even if speed is requested', () => {
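The heuristic these tests exercise can be sketched as follows. The keyword lists, length threshold, and the model chosen on the complexity branch are illustrative guesses, not the repository's actual implementation in src/agents/types.ts:

```typescript
// Hypothetical sketch of selectModelForTask; keyword regexes and the
// 1000-character threshold are assumptions for illustration only.
function selectModelForTask(prompt: string): string {
  const p = prompt.toLowerCase();
  const needsSpeed = /\b(quick|fast|prototype|mockup)\b/.test(p);
  const hasComplexityIndicators = /\b(architecture|refactor|security|complex)\b/.test(p);
  const isVeryLongPrompt = prompt.length > 1000;

  let chosenModel = "anthropic/claude-haiku-4.5"; // default fast generalist
  if (needsSpeed && !hasComplexityIndicators) {
    chosenModel = "zai-glm-4.7"; // Cerebras-served model for speed-critical tasks
  }
  if (hasComplexityIndicators || isVeryLongPrompt) {
    // Keep the default for complex work; the real branch may pick a
    // different model.
    chosenModel = "anthropic/claude-haiku-4.5";
  }
  return chosenModel;
}
```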
🚀 Launching Scrapybara desktop...
❌ Something went wrong:
Deployment failed: This pull request failed while building automatically on Stormkit. You can preview the logs using the following link.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tests/model-selection.test.ts (1)
1-2: Fix incorrect import paths in test file.
Lines 1-2 import from '../src/inngest/functions' and '../src/inngest/types', but the src/inngest/ directory does not exist in the repository. Both selectModelForTask and MODEL_CONFIGS are defined in src/agents/types.ts (lines 72 and 28), and the Framework type is also exported from src/agents/types.ts (line 3). Update the imports to:
import { selectModelForTask, MODEL_CONFIGS } from '../src/agents/types';
import type { Framework } from '../src/agents/types';
🧹 Nitpick comments (1)
src/modules/projects/ui/components/message-form.tsx (1)
60-67: Model ID rename is correct; consider extracting shared modelOptions.
The model ID change from "z-ai/glm-4.7" to "zai-glm-4.7" aligns with the type definition updates.
The modelOptions array is duplicated between this file and src/modules/home/ui/components/project-form.tsx (lines 62-69). Consider extracting it to a shared constant to maintain consistency and reduce duplication.
♻️ Suggested refactor to share modelOptions
Create a shared file (e.g., src/agents/model-options.ts):
import type { ModelId } from "@/agents/types";

export const MODEL_OPTIONS = [
  { id: "auto" as ModelId, name: "Auto", image: "/auto.svg", description: "Auto-selects the best model" },
  { id: "anthropic/claude-haiku-4.5" as ModelId, name: "Claude Haiku 4.5", image: "/haiku.svg", description: "Fast and efficient" },
  { id: "google/gemini-3-pro-preview" as ModelId, name: "Gemini 3 Pro", image: "/gemini.svg", description: "Google's most intelligent model with state-of-the-art reasoning" },
  { id: "openai/gpt-5.1-codex" as ModelId, name: "GPT-5.1 Codex", image: "/openai.svg", description: "OpenAI's flagship model for complex tasks" },
  { id: "zai-glm-4.7" as ModelId, name: "Z-AI GLM 4.7", image: "/globe.svg", description: "Ultra-fast inference for speed-critical tasks" },
  { id: "moonshotai/kimi-k2-0905" as ModelId, name: "Kimi K2", image: "/globe.svg", description: "Specialized for coding tasks" },
] as const;
Then import and use in both components:
+import { MODEL_OPTIONS } from "@/agents/model-options";
-  const modelOptions = [...];
+  const modelOptions = MODEL_OPTIONS;
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)
bun.lock is excluded by !**/*.lock
📒 Files selected for processing (6)
package.json
src/agents/client.ts
src/agents/types.ts
src/modules/home/ui/components/project-form.tsx
src/modules/projects/ui/components/message-form.tsx
tests/model-selection.test.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- src/agents/client.ts
🧰 Additional context used
📓 Path-based instructions (4)
tests/**/*.{test,spec}.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Write all tests in the /tests/ directory using Jest patterns; include security, sanitization, and file operation tests
Files:
tests/model-selection.test.ts
**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
Strict TypeScript usage: avoid using the any type in code
Files:
tests/model-selection.test.ts
src/modules/home/ui/components/project-form.tsx
src/modules/projects/ui/components/message-form.tsx
src/agents/types.ts
src/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
TypeScript strict mode enabled in ESLint with no-explicit-any (warn) and no-unused-vars (error, except underscore-prefixed)
Use modern framework patterns: Next.js App Router and React hooks
Files:
src/modules/home/ui/components/project-form.tsxsrc/modules/projects/ui/components/message-form.tsxsrc/agents/types.ts
package.json
📄 CodeRabbit inference engine (CLAUDE.md)
Always use bun for package management (bun install, bun add, bun remove). Never use npm or yarn.
Files:
package.json
🧠 Learnings (2)
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to src/prompts/framework-selector.ts : Support framework auto-detection priority: Explicit user mention → default Next.js → Enterprise indicators (Angular) → Material Design preference (Angular/Vue) → Performance critical (Svelte)
Applied to files:
tests/model-selection.test.ts
src/agents/types.ts
📚 Learning: 2025-12-14T11:07:46.225Z
Learnt from: CR
Repo: Jackson57279/zapdev PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-14T11:07:46.225Z
Learning: Applies to convex/**/*.ts : Use Convex database (`src/api/convex/*`) for all new data operations; never use deprecated PostgreSQL schema (`prisma/`)
Applied to files:
package.json
🧬 Code graph analysis (2)
src/modules/home/ui/components/project-form.tsx (2)
src/agents/types.ts (1)
ModelId (70-70)
src/agents/index.ts (1)
ModelId (7-7)
src/modules/projects/ui/components/message-form.tsx (2)
src/agents/types.ts (1)
ModelId (70-70)
src/agents/index.ts (1)
ModelId (7-7)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Codacy Security Scan
🔇 Additional comments (7)
src/modules/home/ui/components/project-form.tsx (1)
67-67: Model ID rename looks good and aligns with type definitions.
The model ID change from "z-ai/glm-4.7" to "zai-glm-4.7" is consistent with the updated key in MODEL_CONFIGS (src/agents/types.ts) and properly typed as ModelId.
src/agents/types.ts (4)
34-35: LGTM: supportsFrequencyPenalty flag additions.
The new supportsFrequencyPenalty flags correctly indicate which models support this parameter. Models with this flag set to true also have corresponding frequencyPenalty values configured.
Also applies to: 42-43
57-58: LGTM: supportsFrequencyPenalty flag addition for Kimi K2.
The flag addition is consistent with the pattern established for other models and correctly indicates frequency penalty support.
131-131: LGTM: selectModelForTask correctly uses renamed model key.
The speed-focused model selection path now correctly references "zai-glm-4.7", consistent with the renamed key in MODEL_CONFIGS.
45-51: Model successfully migrated to Cerebras provider.
The key rename from "z-ai/glm-4.7" to "zai-glm-4.7" and the provider change to "cerebras" are correctly implemented. The removal of frequencyPenalty and the addition of supportsFrequencyPenalty: false properly reflect that Cerebras doesn't support this parameter. The Cerebras client is properly imported and initialized in client.ts, with correct routing logic for the model.
tests/model-selection.test.ts (1)
26-26: LGTM: Test expectation correctly updated.
The test expectation now uses 'zai-glm-4.7' instead of 'z-ai/glm-4.7', matching the renamed model key in MODEL_CONFIGS.
15-15: Version 2.0.5 is valid and current.
The package is published and is the latest stable release. No security advisories found. No action needed.
Summary by CodeRabbit
New Features
Improvements
Chores
Tests
✏️ Tip: You can customize this high-level summary in your review settings.