Conversation
…diting - Enhanced file filtering with dual-mode approach (whitelist + blacklist) - Added fallback mechanism to prevent data loss when all files filtered - Improved download with error handling and 'Download All Files' option - Updated AI prompts with explicit page.tsx editing instructions - Added post-execution validation to check if entry points modified - Enhanced logging throughout file normalization and download pipeline
…rties in listForUser query
- Removed deprecated env.example file. - Replaced "Qwen 3 Max" with "Gemini 3 Pro" in model configurations. - Implemented usage checks to enforce Pro plan restrictions for the Gemini model in project and message forms. - Updated tests to reflect the new model preference for coding-focused tasks.
- Added a check for UUID format in the Polar product ID. - Implemented error handling and user guidance for invalid product ID formats. - Enhanced logging to assist users in locating the correct Product ID in the Polar Dashboard.
- Updated validation logic to allow both UUIDs and production IDs starting with 'prod_'. - Improved error messaging to guide users on valid product ID formats. - Removed outdated error handling for UUIDs, streamlining the configuration process.
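The dual-format check described in these commits can be sketched as follows. The function names and error copy here are illustrative, not the actual implementation; the validation rule (accept either a UUID or an ID starting with `prod_`) is taken from the commit messages.

```typescript
// Sketch of the dual-format Polar product ID check.
const UUID_RE =
  /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

function isValidPolarProductId(id: string): boolean {
  // Accept either a raw UUID or an ID prefixed with "prod_"
  return UUID_RE.test(id) || id.startsWith("prod_");
}

function validatePolarProductId(id: string): void {
  if (!isValidPolarProductId(id)) {
    throw new Error(
      `Invalid Polar product ID "${id}": expected a UUID or an ID starting ` +
        `with "prod_". Copy the Product ID from the Polar Dashboard.`
    );
  }
}
```

Keeping the check permissive (two accepted formats, one clear error message) matches the commits' goal of guiding users to the right Product ID rather than rejecting unfamiliar formats outright.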
…dling - Added checks to ensure the correct usage of production and sandbox product IDs based on the environment. - Enhanced error messaging for configuration mismatches to guide users in resolving issues. - Updated Polar client initialization to dynamically set the server based on the NODE_ENV variable.
- Updated the Polar client to support sandbox product IDs even in production environments, enabling testing and staging builds. - Added logging to notify when a sandbox environment is being used in production due to a detected sandbox Product ID. - Enhanced the checkout route documentation to clarify the handling of sandbox IDs.
…andling - Updated the Polar client to allow explicit specification of 'sandbox' or 'production' environments during initialization. - Simplified the checkout route by removing outdated environment mismatch checks and enhancing logging for better clarity on the server being used. - Improved documentation for the Polar client to reflect the new parameter for environment selection.
- Changed the expected format of the POLAR_ACCESS_TOKEN to start with "polar_oat_" for production. - Updated error handling to issue a warning instead of failing, allowing for flexibility with custom tokens or future formats. - Provided guidance for verifying the Organization Access Token in case of authentication failures.
- Added `@types/canvas-confetti` and `canvas-confetti` to dependencies. - Improved subscription access validation to match product names case-insensitively, allowing for variations like "Pro Plan" and "Enterprise Plan". - Updated success URLs in checkout routes to redirect to the home page instead of the dashboard.
- Updated the `hasProAccess` function to accept an optional `userId` parameter, allowing for more flexible access checks. - Modified calls to `hasProAccess` in `usage.ts` to pass the `userId`, ensuring accurate subscription validation. - Simplified environment handling in Polar client initialization by defaulting to production, removing unnecessary checks for sandbox IDs.
- Commented out the Pro plan enforcement logic for the Gemini model in `functions.ts`, `project-form.tsx`, and `message-form.tsx`. - Updated related UI components to reflect the temporary removal of access restrictions for non-Pro users.
- Replaced `requireAuth` with `getCurrentUserId` in `messages.ts` and `projects.ts` to streamline user authentication. - Updated error handling to return null or an empty array instead of throwing errors for unauthorized access or missing projects. - Enhanced project header component to handle undefined and null project states, providing user feedback when a project is not found.
Gemini API doesn't support penalty parameters. This commit: - Removes frequency_penalty from Gemini model configuration - Makes frequency_penalty conditionally applied only when defined - Fixes API error: "Penalty is not enabled for this model" 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude <noreply@anthropic.com>
…d add new Prism languages, and improve download utility error handling for file normalization failures.
When using an organization access token, the organizationId is implicit from the token. Passing it explicitly causes a 422 validation error.
…tatus instead of `useConvexAuth`
…el adapter configuration - Fix getModelAdapter() to properly instantiate Gemini and OpenAI adapters with correct parameter structure - Add early validation of AI_GATEWAY_API_KEY environment variable - Add try-catch error handling around model adapter instantiation with detailed error messages - Add comprehensive error handling to framework selector agent with fallback to Next.js - Add error handling to main network.run() calls with informative error messages - Add error handling to auto-fix loop to break on network errors - Add error handling and logging to fragment title/response generator agents - Improve debugging output with '[DEBUG]' prefixes for adapter initialization Root cause: Inngest agent-kit expects specific parameter structures for adapters that don't match the spread configuration pattern. The commonConfig object was spreading parameters incorrectly, and the error handling was insufficient to provide useful debugging information.
… Stack authentication and tRPC.
…t before CommandExitError exceptions
…152) Co-authored-by: tembo[bot] <208362400+tembo[bot]@users.noreply.github.com>
📝 Walkthrough

This PR implements a major architectural refactoring: removing Inngest-based event workflows in favor of direct API-based agent orchestration, integrating bot protection via BotID across API routes, introducing spec-driven code generation with approval workflows, establishing Convex-backed rate limiting and job queuing systems, and enhancing SEO and AI agent discovery with structured data and route improvements.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as User Browser
    participant API as /api/agent/generate
    participant Gateway as AI Gateway
    participant Sandbox as E2B Sandbox
    participant Convex as Convex Backend
    participant Stream as SSE Stream

    User->>API: POST { prompt, framework, projectId, ... }
    Note over API: Model Selection & Framework Detection
    API->>Convex: Query project & user context
    Convex-->>API: Project data, framework
    API->>API: selectModelForTask(prompt, framework)
    API->>Gateway: getAIProvider(modelId)

    Note over API: Sandbox Preparation
    API->>Sandbox: createSandboxWithRetry()
    Sandbox-->>API: sandbox instance
    API->>Sandbox: ensureDevServerRunning()
    Sandbox-->>API: dev server ready

    Note over API: Agent Loop & File Generation
    API->>Stream: emit({ status: 'running', framework })
    API->>Gateway: runAgentLoop(prompt, tools, context)
    loop Agent Iterations
        Gateway-->>API: AI response with tool calls
        API->>Stream: emit({ tool: 'create_files', files: [...] })
        API->>Sandbox: executeTools(fileOps, terminal)
        Sandbox-->>API: tool results
    end
    Gateway-->>API: final AI output
    API->>Stream: emit({ status: 'validating' })

    Note over API: Validation & Auto-fix
    API->>Sandbox: lint check
    API->>Sandbox: build check
    alt Errors Found
        API->>Gateway: runAgentLoop(fix prompt)
        Gateway-->>API: fixed code
    end

    Note over API: Fragment Creation
    API->>Convex: createMessage & createFragment
    Convex-->>API: fragment ID
    API->>Stream: emit({ complete: true, fragment })
    Stream-->>User: SSE close
```
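The streaming flow in the diagram can be consumed client-side roughly like this. This is a sketch: the endpoint path and event fields (`status`, `complete`) mirror the diagram, but the real payload shapes may differ.

```typescript
// Parse one SSE frame ("data: {...}\n\n") into a JSON event object.
function parseSSEFrame(frame: string): Record<string, unknown> | null {
  const line = frame.split("\n").find((l) => l.startsWith("data: "));
  if (!line) return null;
  try {
    return JSON.parse(line.slice("data: ".length));
  } catch {
    return null;
  }
}

// Example browser-side consumer: POST the prompt, then read frames
// from the response body until a `complete` event arrives.
async function streamGeneration(prompt: string): Promise<void> {
  const res = await fetch("/api/agent/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    let sep: number;
    while ((sep = buffer.indexOf("\n\n")) !== -1) {
      const event = parseSSEFrame(buffer.slice(0, sep));
      buffer = buffer.slice(sep + 2);
      if (event?.complete) return; // fragment is ready
      // otherwise update UI from { status, tool, files, ... }
    }
  }
}
```

Because POST bodies rule out the built-in `EventSource`, the sketch reads the response stream manually and splits on the blank-line frame delimiter.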
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs
Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
💡 Codex Review
zapdev/tests/model-selection.test.ts
Lines 1 to 2 in 0921410
The test suite now imports `selectModelForTask` and `Framework` from `../src/inngest/functions/types`, but the entire `src/inngest` directory was removed in this commit. Running `bun run test` will immediately fail with a module-not-found error before any assertions run. The same stale import path is used in other tests (`tests/security.test.ts`, `tests/file-operations.test.ts`), so none of the suites can execute until the imports are updated to the new agent implementation location (e.g., `src/app/api/agent/generate/route.ts`) or the tests are rewritten.
Actionable comments posted: 5
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (10)
src/app/api/import/github/process/route.ts (2)
17-19: Remove unreachable dead code. The `if (false)` block will never execute and serves no purpose. Remove it to improve code clarity.

🔎 Proposed fix

```diff
-  if (false) {
-    return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
-  }
-
```
82-86: Critical: GitHub import processing is not implemented. The import record is created in Convex with `PENDING` status, but there is no mechanism to process it. The commented Inngest job trigger (lines 83-86) was removed, and no alternative processing logic has been implemented. The `markProcessing`, `markComplete`, and `markFailed` mutations exist but are never called for GitHub imports.

This breaks the GitHub import feature: imports remain in `PENDING` status indefinitely with no actual repository processing occurring. Users will see "GitHub repository import started" but nothing will happen.

Implement a processing mechanism to handle pending GitHub imports. This could use the existing `jobQueue` infrastructure or another background job system, but it must be wired up from the import creation to process the repository data.

src/app/api/import/github/callback/route.ts (1)
24-26: Remove dead code block. The `if (false)` condition is always false, making this block unreachable. Remove this dead code.

🔎 Proposed fix

```diff
-  if (false) {
-    return NextResponse.redirect(new URL("/", request.url));
-  }
-
```

convex/projects.ts (1)
283-302: Add missing `returns:` validator and fix unsafe null spreads in consuming functions. The `get` function lacks a required `returns:` validator per coding guidelines. Additionally, two action functions (`createWithMessage` and `createWithMessageAndAttachments`) unsafely spread the result with `...project` without handling the null case; if the query returns `null`, this will cause a runtime error when merging into the return object.

Required changes:
- Add a `returns:` validator to the `get` query (it should validate either a project object or null)
- Add null checks in both action functions before spreading the project result, or handle the null case explicitly

The behavioral change to return `null` instead of throwing is acceptable since callers like `project-header.tsx` and `[projectId]/page.tsx` already check for null values. However, internal Convex functions must also handle the null case.

src/app/api/import/figma/callback/route.ts (1)
24-26: Remove dead code. The `if (false)` block is unreachable code and should be removed.

🔎 Proposed fix

```diff
-  if (false) {
-    return NextResponse.redirect(new URL("/", request.url));
-  }
-
```

src/app/api/import/github/repos/route.ts (1)
29-31: Remove unreachable dead code. The `if (false)` block is unreachable and serves no purpose. Remove it to improve code clarity.

🔎 Proposed fix

```diff
-  if (false) {
-    return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
-  }
-
```

src/app/api/import/figma/files/route.ts (1)
17-19: Remove unreachable dead code. The `if (false)` block is unreachable and should be removed to improve code clarity.

🔎 Proposed fix

```diff
-  if (false) {
-    return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
-  }
-
```

tests/model-selection.test.ts (1)
1-2: Critical: Update import paths after Inngest removal. The import paths reference the old `inngest/functions` and `inngest/types` modules, but the PR removes Inngest. According to the relevant code snippets, `MODEL_CONFIGS` and `selectModelForTask` are now exported from `src/app/api/agent/generate/route.ts`.

🔎 Proposed fix

```diff
-import { selectModelForTask, MODEL_CONFIGS } from '../src/inngest/functions';
-import type { Framework } from '../src/inngest/types';
+import { selectModelForTask, MODEL_CONFIGS } from '../src/app/api/agent/generate/route';
+import type { Framework } from '../src/app/api/agent/generate/route';
```

src/app/api/import/github/auth/route.ts (1)
30-32: Remove unreachable dead code. The `if (false)` condition creates unreachable code that should be removed for clarity; or, if it's a placeholder for future logic, add a TODO comment explaining its purpose.

🔎 Suggested fix

If this is truly unused:

```diff
   const userId = stackUser.id;

-  if (false) {
-    return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
-  }

   if (!GITHUB_CLIENT_ID) {
```

Or if it's a placeholder:

```diff
   const userId = stackUser.id;

+  // TODO: Add additional authorization check here
   if (false) {
     return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
   }
```
3-27: Missing authentication check on agent trigger endpoint. This route accepts POST requests to trigger AI code generation but doesn't verify the caller's identity. Compared to other routes like `transfer-sandbox` and `agent/token` that include bot verification or user authentication, this endpoint appears unprotected.

Consider adding authentication to prevent unauthorized agent triggering:

🔎 Suggested fix

```diff
 import { NextRequest, NextResponse } from "next/server";
+import { checkBotId } from "botid/server";
+import { getUser } from "@stackframe/stack";

 export async function POST(request: NextRequest) {
   try {
+    const botVerification = await checkBotId();
+    if (botVerification.isBot) {
+      return NextResponse.json(
+        { error: "Access denied" },
+        { status: 403 }
+      );
+    }
+
+    const user = await getUser();
+    if (!user) {
+      return NextResponse.json(
+        { error: "Unauthorized" },
+        { status: 401 }
+      );
+    }
+
     const body = await request.json();
```
🟠 Major comments (27)
src/app/api/import/github/process/route.ts-35-35 (1)
35-35: Remove type casts to restore type safety. The `(api as any)` casts on lines 35 and 64 bypass TypeScript's type checking and violate the coding guidelines for strict TypeScript usage. The `api` object from `convex/_generated/api` should have proper types for `oauth.getConnection` and `imports.createImport`.

As per coding guidelines, strict TypeScript usage requires avoiding the `any` type.

🔎 Proposed fix

```diff
-  const connection = await convex.query((api as any).oauth.getConnection, {
+  const connection = await convex.query(api.oauth.getConnection, {
     provider: "github",
   });
```

```diff
-  const importRecord = await convex.mutation((api as any).imports.createImport, {
+  const importRecord = await convex.mutation(api.imports.createImport, {
     projectId,
```

Also applies to: 64-64
mcp/playwright-server.ts-11-23 (1)
11-23: Missing browser cleanup and error handling. The browser instance is never closed, which can lead to resource leaks. Additionally, `getPage()` doesn't handle browser launch failures gracefully.

🔎 Suggested improvements

```ts
// Add cleanup on process exit
process.on('SIGINT', async () => {
  if (browser) {
    await browser.close();
  }
  process.exit(0);
});

process.on('SIGTERM', async () => {
  if (browser) {
    await browser.close();
  }
  process.exit(0);
});

// Wrap getPage with error handling
async function getPage(): Promise<Page> {
  try {
    if (!browser) {
      browser = await chromium.launch({ headless: true });
    }
    if (!page) {
      const context = await browser.newContext();
      page = await context.newPage();
    }
    return page;
  } catch (error) {
    console.error("[MCP] Failed to initialize browser:", error);
    throw error;
  }
}
```
2-2: Avoid `@ts-nocheck`; it violates TypeScript strict mode guidelines. The `@ts-nocheck` directive disables all TypeScript checking for this file, which conflicts with the coding guideline: "Strict TypeScript usage - avoid using `any` type in code."

Consider fixing the type issues instead of suppressing them entirely.
src/lib/polar-client.ts-138-159 (1)
138-159: Error handling may swallow legitimate API errors. The catch block on line 146 assumes any error means "customer doesn't exist," but this could hide legitimate API errors like rate limiting, authentication failures, or network issues.

🔎 Suggested improvement

```diff
   } catch (error) {
-    // Customer doesn't exist, create one
-    console.log(`Creating new Polar customer for user ${customerData.externalId}`);
+    // Check if this is a "not found" error specifically
+    const isNotFoundError = error instanceof Error &&
+      (error.message.includes('not found') || error.message.includes('404'));
+
+    if (!isNotFoundError) {
+      console.error(`Failed to fetch Polar customer: ${error}`);
+      throw error;
+    }
+
+    console.log(`Creating new Polar customer for user ${customerData.externalId}`);
```
1-3: Duplication with test-stack.ts and unconventional location. Two concerns:

- Duplication: This file appears to duplicate `test-stack.ts`, with the only difference being the module system (CommonJS vs ES6). Having both versions creates a maintenance burden.
- Location: Like `test-stack.ts`, this file is in the project root rather than a dedicated test directory.

Recommendations:
- Consolidate to a single test file (prefer TypeScript for consistency with the codebase)
- Move it to an appropriate directory (`tests/`, `scripts/`, or `__tests__/`)
- If both module systems are required for specific runtime environments, document why in a comment
convex/jobQueue.ts-157-184 (1)
157-184: Add return type validator. The `markFailed` mutation is missing a `returns` validator.

As per coding guidelines, all Convex functions require return validators. See the pattern from the `markCompleted` comment.

convex/jobQueue.ts-82-93 (1)
82-93: Avoid using `.filter()` in queries; use indexes instead. This query uses `.filter()` to exclude completed jobs, which violates Convex coding guidelines. Per guidelines: "Do NOT use `filter` in queries; instead define an index in the schema and use `withIndex` for query filtering."

Consider adding a compound index on `(userId, status)` in the schema and querying for non-COMPLETED statuses explicitly, or restructure to query by status if the cardinality allows.

As per coding guidelines, filters should be replaced with schema indexes for query performance.
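The index-backed pattern this guideline calls for can be sketched as follows. This is a sketch only: the `by_user_status` index name, the status literals, and the `v.any()` return validator are assumptions based on the review, not the actual schema.

```typescript
import { query } from "./_generated/server";
import { v } from "convex/values";

export const getUserJobs = query({
  args: { userId: v.string() },
  returns: v.array(v.any()),
  handler: async (ctx, args) => {
    // One indexed range scan per non-completed status, instead of
    // scanning all of the user's jobs and filtering in memory.
    // Assumes the schema defines:
    //   .index("by_user_status", ["userId", "status"])
    const statuses = ["PENDING", "PROCESSING", "FAILED"] as const;
    const results = await Promise.all(
      statuses.map((status) =>
        ctx.db
          .query("jobQueue")
          .withIndex("by_user_status", (q) =>
            q.eq("userId", args.userId).eq("status", status)
          )
          .collect()
      )
    );
    return results.flat();
  },
});
```

Each `withIndex` call becomes an efficient range read, so the query cost scales with the number of matching jobs rather than the user's total job count.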
convex/jobQueue.ts-98-109 (1)
98-109: Avoid using `.filter()` in queries; use indexes instead. Similar to `getUserJobs`, this query uses `.filter()` to exclude completed jobs, which violates Convex coding guidelines.

As per coding guidelines, add a compound index on `(projectId, status)` in the schema for better query performance.

convex/jobQueue.ts-189-237 (1)
189-237: Add return type validator and consider scalability. Two issues:

- Missing `returns` validator (same as other mutations)
- Line 192 uses `.collect()` on all jobs, which could cause performance issues as the queue grows

For large job queues, collecting all records to compute stats could time out or consume excessive memory.

Recommendations:
- Add a `returns` validator for the stats object
- Consider implementing aggregation using separate queries per status with indexes, or maintain running counts in a separate stats table that updates with each job state transition

As per coding guidelines, all Convex functions require return validators.
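The running-counter alternative mentioned above can be sketched like this. The `jobStats` table, its `by_status` index, and the mutation name are all hypothetical; the point is that stats reads never need to `.collect()` the whole queue.

```typescript
import { internalMutation } from "./_generated/server";
import { v } from "convex/values";

// Called on every job state transition: decrement the old status
// counter (if any) and increment the new one.
export const bumpJobStats = internalMutation({
  args: {
    from: v.optional(v.string()), // previous status, if any
    to: v.string(),               // new status
  },
  returns: v.null(),
  handler: async (ctx, args) => {
    for (const [status, delta] of [
      [args.from, -1],
      [args.to, 1],
    ] as const) {
      if (!status) continue;
      const row = await ctx.db
        .query("jobStats")
        .withIndex("by_status", (q) => q.eq("status", status))
        .unique();
      if (row) {
        await ctx.db.patch(row._id, { count: row.count + delta });
      } else {
        await ctx.db.insert("jobStats", { status, count: Math.max(delta, 0) });
      }
    }
    return null;
  },
});
```

With this in place, the stats query reads at most one small row per status instead of every job record.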
convex/jobQueue.ts-137-152 (1)
137-152: Add return type validator. The `markCompleted` mutation is missing a `returns` validator. Per guidelines: "ALWAYS include argument and return validators for all Convex functions."

Add returns validator

```diff
 export const markCompleted = mutation({
   args: {
     jobId: v.id("jobQueue"),
   },
+  returns: v.union(v.object({
+    _id: v.id("jobQueue"),
+    type: v.string(),
+    projectId: v.id("projects"),
+    userId: v.string(),
+    payload: v.any(),
+    priority: v.union(v.literal("high"), v.literal("normal"), v.literal("low")),
+    status: v.union(v.literal("PENDING"), v.literal("PROCESSING"), v.literal("COMPLETED"), v.literal("FAILED")),
+    attempts: v.number(),
+    maxAttempts: v.number(),
+    createdAt: v.number(),
+    updatedAt: v.number(),
+    sandboxId: v.optional(v.string()),
+    error: v.optional(v.string()),
+    processedAt: v.optional(v.number()),
+    _creationTime: v.number(),
+  }), v.null()),
   handler: async (ctx, args) => {
```

As per coding guidelines, all Convex functions require return validators.
convex/jobQueue.ts-242-279 (1)
242-279: Avoid `.filter()` and add return type validator. Two issues:

- Uses `.filter()`, which violates guidelines (should use indexes)
- Missing `returns` validator

Recommendations:
- Add a compound index on `(status, processedAt)` to efficiently query old completed/failed jobs
- Add a returns validator:

```diff
 export const cleanup = mutation({
   args: {},
+  returns: v.object({
+    deletedCount: v.number(),
+    totalOld: v.number(),
+    timestamp: v.number(),
+  }),
   handler: async (ctx) => {
```

As per coding guidelines, avoid filters in queries and include return validators.
convex/messages.ts-411-451 (1)
411-451: Replace `any` type with proper TypeScript types. The `ctx` parameter is typed as `any`, which violates the strict TypeScript guidelines. This reduces type safety and could lead to runtime errors.

Recommended fix

```diff
+import type { MutationCtx } from "./_generated/server";
+
 export const addAttachmentInternal = async (
-  ctx: any,
+  ctx: MutationCtx,
   userId: string,
-  messageId: string,
+  messageId: Id<"messages">,
   attachmentData: {
-    type: string;
+    type: "IMAGE" | "FIGMA_FILE" | "GITHUB_REPO";
     url: string;
     size: number;
     width?: number;
     height?: number;
-    importId?: any;
+    importId?: Id<"imports">;
     sourceMetadata?: any;
   }
-): Promise<string> => {
+): Promise<Id<"attachments">> => {
   // Verify message ownership
-  const message = await ctx.db.get(messageId as any);
+  const message = await ctx.db.get(messageId);
   if (!message) {
     throw new Error("Message not found");
   }

   const project = await ctx.db.get(message.projectId);
   if (!project || project.userId !== userId) {
     throw new Error("Unauthorized");
   }

   const now = Date.now();
   const attachmentId = await ctx.db.insert("attachments", {
-    messageId: messageId as any,
+    messageId,
     type: attachmentData.type,
```

As per coding guidelines, avoid `any` type and use strict TypeScript types.

Committable suggestion skipped: line range outside the PR's diff.
src/modules/home/ui/components/project-form.tsx-299-317 (1)
299-317: Complete the pro-tier restriction implementation. The code has the infrastructure for locking pro-only models (`isProOnly` flag, PRO badge, disabled state) but intentionally disables enforcement with `isLocked = false`. While the comment indicates this is temporary, it creates UX confusion where users see a PRO badge but can select the model anyway.

🔎 Proper enforcement implementation

```diff
 {modelOptions.map((option) => {
   const isSelected = selectedModel === option.id;
-  // const isGemini = option.id === "google/gemini-3-pro-preview";
-  const isLocked = false; // Gemini 3 Pro restriction removed temporarily
+  const isLocked = option.isProOnly && !isProUser; // Check actual pro status

   return (
```

You'll need to:
- Get pro user status (e.g., from useQuery or context)
- Use the `isProOnly` flag from model options
- Consistently enforce on both frontend and backend
Committable suggestion skipped: line range outside the PR's diff.
src/stack.ts-1-14 (1)
1-14: Fallback values could mask configuration issues in production. The placeholder fallback values (`"placeholder-key"`, `"placeholder-secret"`, UUID defaults) will cause authentication failures if the environment variables aren't properly set. In production, it's better to fail fast with clear error messages than to silently use invalid credentials.

🔎 Add validation for production

```diff
 import { StackServerApp } from "@stackframe/stack";

+// Validate required environment variables in production
+if (process.env.NODE_ENV === "production") {
+  const required = [
+    "NEXT_PUBLIC_STACK_PROJECT_ID",
+    "NEXT_PUBLIC_STACK_PUBLISHABLE_CLIENT_KEY",
+    "STACK_SECRET_SERVER_KEY"
+  ];
+
+  const missing = required.filter(key => !process.env[key]);
+  if (missing.length > 0) {
+    throw new Error(
+      `Missing required Stack environment variables: ${missing.join(", ")}`
+    );
+  }
+}
+
 export const stackServerApp = new StackServerApp({
   tokenStore: "nextjs-cookie",
   urls: {
     signIn: "/handler/sign-in",
     signUp: "/handler/sign-up",
     afterSignIn: "/",
     afterSignUp: "/",
   },
   projectId: process.env.NEXT_PUBLIC_STACK_PROJECT_ID || "00000000-0000-4000-8000-000000000000",
   publishableClientKey: process.env.NEXT_PUBLIC_STACK_PUBLISHABLE_CLIENT_KEY || "placeholder-key",
   secretServerKey: process.env.STACK_SECRET_SERVER_KEY || "placeholder-secret",
 });
```
13-15: Circuit breaker state is mocked in an admin monitoring dashboard.The comment indicates this is a placeholder for production use. For an admin health dashboard to be useful, you should fetch the actual circuit breaker state. Consider exposing the state via a Convex query or API endpoint.
Would you like me to help create a Convex query that exposes the circuit breaker state, or open an issue to track this?
explanations/E2B_ERROR_PREVENTION_QUICK_REFERENCE.md-12-14 (1)
12-14: Remove or rewrite documentation references to the deleted Inngest integration. This quick reference document was added in the same commit that removes Inngest, but it contains code examples and file references that no longer exist. Lines 12-14 and 72-79 show imports from `@/inngest/circuit-breaker`, which doesn't exist. Lines 116, 125, and 133 reference `src/inngest/` files that were deleted. Either remove this documentation or rewrite it to reflect the post-Inngest E2B error handling approach.

convex/e2bRateLimits.ts-109-137 (1)
109-137: Consider using `internalMutation` for the cleanup cron job. Per coding guidelines, use `internalMutation` for private functions that are not part of the public API. Since `cleanup` is intended for cron job use only, it should not be exposed publicly.

🔎 Suggested fix

```diff
-import { mutation, query, internalMutation } from "./_generated/server";
+import { mutation, query, internalMutation } from "./_generated/server";

-export const cleanup = mutation({
+export const cleanup = internalMutation({
   args: {},
+  returns: v.object({
+    deletedCount: v.number(),
+    totalOldRecords: v.number(),
+    timestamp: v.number(),
+  }),
   handler: async (ctx) => {
```

As per coding guidelines for Convex functions.
src/modules/projects/ui/components/spec-planning-card.tsx-66-101 (1)
66-101: Missing loading state and unhandled fetch in `handleReject`. Unlike `handleApprove`, the `handleReject` function:

- Doesn't set a loading state before/during the fetch call (only resets `isRejecting` after)
- Doesn't await or handle the fetch response

This could lead to UI state issues if the API call fails silently.

🔎 Suggested fix

```diff
 const handleReject = async () => {
   if (!feedback.trim()) {
     toast.error("Please provide feedback for revision");
     return;
   }

   try {
+    setIsRejecting(true); // Show loading state during API call
     const result = await rejectSpecMutation({
       messageId,
       feedback: feedback.trim(),
     });

     // Trigger spec re-generation with feedback
-    await fetch("/api/inngest/trigger", {
+    const response = await fetch("/api/inngest/trigger", {
       method: "POST",
       headers: { "Content-Type": "application/json" },
       body: JSON.stringify({
         projectId: result.projectId,
         value: `${result.messageContent}\n\nUser Feedback: ${feedback}`,
-        model: result.selectedModel || "openai/gpt-5.1-codex",
+        model: result.selectedModel || "auto",
         messageId,
         isSpecRevision: true,
       }),
     });

+    if (!response.ok) {
+      throw new Error("Failed to trigger spec revision");
+    }
+
     toast.success("Spec rejected. AI is revising based on your feedback...");
-    setIsRejecting(false);
     setFeedback("");
   } catch (error) {
     if (error instanceof Error) {
       toast.error(error.message);
     } else {
       toast.error("Failed to reject spec");
     }
+  } finally {
+    setIsRejecting(false);
   }
 };
```

src/modules/projects/ui/components/spec-planning-card.tsx-42-52 (1)
42-52: Fetch response not awaited in `handleApprove`. The fetch call to `/api/inngest/trigger` does not have its response checked, so errors won't be caught and the success toast may show prematurely before the request completes.

🔎 Suggested fix

```diff
     // Trigger code generation via Inngest
-    await fetch("/api/inngest/trigger", {
+    const response = await fetch("/api/inngest/trigger", {
       method: "POST",
       headers: { "Content-Type": "application/json" },
       body: JSON.stringify({
         projectId: result.projectId,
         value: result.messageContent,
         model: result.selectedModel || "auto",
         specContent: result.specContent,
         isFromApprovedSpec: true,
       }),
     });

+    if (!response.ok) {
+      throw new Error("Failed to trigger code generation");
+    }
```

src/app/api/webhooks/polar/route.ts-33-40 (1)
33-40: Logging sensitive webhook data may expose PII in production logs. Logging the full headers and body preview (including user metadata) could expose sensitive information like emails, user IDs, and tokens in production logs. Consider:

- Removing or reducing this logging for production
- Using a debug flag to conditionally enable verbose logging
- Redacting sensitive fields before logging

🔎 Proposed fix using conditional debug logging

```diff
+const DEBUG_WEBHOOKS = process.env.DEBUG_POLAR_WEBHOOKS === "true";
+
 export async function POST(request: NextRequest) {
   try {
     console.log("🔔 Webhook POST request received at /api/webhooks/polar");
-    console.log("URL:", request.url);
-    console.log("Headers:", Object.fromEntries(request.headers.entries()));
+    if (DEBUG_WEBHOOKS) {
+      console.log("URL:", request.url);
+      // Omit sensitive headers like authorization
+      console.log("Content-Type:", request.headers.get("content-type"));
+    }

     const body = await request.text();
-    console.log("Body length:", body.length);
-    console.log("Body preview:", body.substring(0, 200));
+    if (DEBUG_WEBHOOKS) {
+      console.log("Body length:", body.length);
+    }
```

Committable suggestion skipped: line range outside the PR's diff.
src/app/api/polar/create-checkout/route.ts-57-58 (1)
57-58: Hardcoded production server prevents sandbox testing during development. The `createPolarClient` function accepts a `server` parameter with options `"sandbox" | "production"` (as defined in `src/lib/polar-client.ts`), but the route hardcodes it to `"production"`. This prevents testing the checkout flow in sandbox without code changes and risks creating real records in development. Use `process.env.NODE_ENV` to toggle between environments, following the pattern used elsewhere in the codebase (e.g., `src/app/api/import/figma/auth/route.ts`).

convex/specs.ts-6-41 (1)
6-41: Missing `returns` validator on mutation. Per the coding guidelines, all Convex functions must include explicit `returns` validators. The `updateSpec` mutation is missing a return type validator.

🔎 Proposed fix

```diff
 export const updateSpec = mutation({
   args: {
     messageId: v.id("messages"),
     specContent: v.string(),
     status: v.union(
       v.literal("PLANNING"),
       v.literal("AWAITING_APPROVAL"),
       v.literal("APPROVED"),
       v.literal("REJECTED")
     ),
   },
+  returns: v.object({ success: v.boolean() }),
   handler: async (ctx, args) => {
```

As per coding guidelines, always include argument and return validators for all Convex functions.
convex/specs.ts-44-82 (1)
44-82: Missing `returns` validator on `approveSpec` mutation. The return object shape should be explicitly validated.

🔎 Proposed fix

```diff
 export const approveSpec = mutation({
   args: {
     messageId: v.id("messages"),
   },
+  returns: v.object({
+    success: v.boolean(),
+    projectId: v.id("projects"),
+    messageContent: v.string(),
+    specContent: v.optional(v.string()),
+    selectedModel: v.optional(v.string()),
+  }),
   handler: async (ctx, args) => {
```

As per coding guidelines, always include return validators for Convex functions.
src/app/api/agent/generate/route.ts-549-605 (1)

549-605: Agent loop lacks timeout protection.

The `runAgentLoop` function iterates up to `MAX_AGENT_ITERATIONS` (20) times, but there's no overall timeout. If the AI model responds slowly or gets stuck in a tool-call loop, each iteration could take arbitrarily long. Consider adding an overall timeout for the agent loop.

🔎 Proposed fix

```diff
+const AGENT_LOOP_TIMEOUT_MS = 5 * 60 * 1000; // 5 minutes
+
 async function runAgentLoop(
   model: ReturnType<typeof getAIProvider>,
   systemPrompt: string,
   initialMessages: Message[],
   sandboxId: string,
   state: AgentState,
   sendProgress: (event: ProgressEvent) => Promise<void>,
   temperature: number,
 ): Promise<string> {
   const messages = [...initialMessages];
   let fullResponse = '';
   const tools = createAgentTools(sandboxId, state, sendProgress);
+  const startTime = Date.now();

   for (let iteration = 0; iteration < MAX_AGENT_ITERATIONS; iteration++) {
+    if (Date.now() - startTime > AGENT_LOOP_TIMEOUT_MS) {
+      throw new Error('Agent loop timeout exceeded');
+    }
+
     const result = await generateText({
```

convex/specs.ts-128-152 (1)

128-152: Missing `returns` validator on `getSpec` query.

🔎 Proposed fix

```diff
 export const getSpec = query({
   args: {
     messageId: v.id("messages"),
   },
+  returns: v.object({
+    specMode: v.optional(v.union(
+      v.literal("PLANNING"),
+      v.literal("AWAITING_APPROVAL"),
+      v.literal("APPROVED"),
+      v.literal("REJECTED")
+    )),
+    specContent: v.optional(v.string()),
+    selectedModel: v.optional(v.string()),
+  }),
   handler: async (ctx, args) => {
```

As per coding guidelines, always include return validators for Convex functions.
convex/specs.ts-85-125 (1)

85-125: Missing `returns` validator on `rejectSpec` mutation.

🔎 Proposed fix

```diff
 export const rejectSpec = mutation({
   args: {
     messageId: v.id("messages"),
     feedback: v.string(),
   },
+  returns: v.object({
+    success: v.boolean(),
+    projectId: v.id("projects"),
+    messageContent: v.string(),
+    specContent: v.optional(v.string()),
+    feedback: v.string(),
+    selectedModel: v.optional(v.string()),
+  }),
   handler: async (ctx, args) => {
```

As per coding guidelines, always include return validators for Convex functions.
src/app/api/agent/generate/route.ts-216-223 (1)

216-223: Memory leak risk with sandbox cache.

`SANDBOX_CACHE` is a module-level `Map` that grows unboundedly. While `clearCacheEntry` schedules deletion after 5 minutes, the cache is never cleaned up when:

- Sandbox creation fails after caching
- The sandbox is killed externally
- The server runs for extended periods with high traffic

Consider adding a maximum cache size or using a proper LRU cache.

🔎 Proposed fix

```diff
 const SANDBOX_CACHE = new Map<string, Sandbox>();
 const CACHE_EXPIRY = 5 * 60 * 1000;
+const MAX_CACHE_SIZE = 100;

 function clearCacheEntry(sandboxId: string) {
   setTimeout(() => {
     SANDBOX_CACHE.delete(sandboxId);
   }, CACHE_EXPIRY);
 }
+
+function addToCache(sandboxId: string, sandbox: Sandbox) {
+  // Evict oldest entries if cache is full
+  if (SANDBOX_CACHE.size >= MAX_CACHE_SIZE) {
+    const firstKey = SANDBOX_CACHE.keys().next().value;
+    if (firstKey) SANDBOX_CACHE.delete(firstKey);
+  }
+  SANDBOX_CACHE.set(sandboxId, sandbox);
+  clearCacheEntry(sandboxId);
+}
```

Committable suggestion skipped: line range outside the PR's diff.
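The eviction idea in that diff can be checked in isolation. A minimal sketch, assuming only standard `Map` semantics — `Sandbox` instances are replaced by plain strings, and all names here are illustrative:

```typescript
// Bounded cache sketch: a Map preserves insertion order, so the first key
// returned by keys() is the oldest entry and can be evicted first.
const MAX_CACHE_SIZE = 3;
const cache = new Map<string, string>();

function addToCache(key: string, value: string): void {
  if (cache.size >= MAX_CACHE_SIZE) {
    const oldestKey = cache.keys().next().value;
    if (oldestKey !== undefined) cache.delete(oldestKey);
  }
  cache.set(key, value);
}

addToCache("sb-1", "a");
addToCache("sb-2", "b");
addToCache("sb-3", "c");
addToCache("sb-4", "d"); // evicts "sb-1", the oldest entry
```

Note this is FIFO eviction, not true LRU: reads don't refresh an entry's position. A real LRU would `delete` and re-`set` an entry on each cache hit.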
🟡 Minor comments (13)
build_output.txt-1-2 (1)

1-2: Remove accidentally committed build output file.

This file contains Windows TIMEOUT command error output and should not be committed to version control. It appears to be an accidental inclusion from a local build or script execution. Please remove this file and consider adding `build_output.txt` to `.gitignore` to prevent future accidental commits.

src/app/api/import/figma/callback/route.ts-90-90 (1)

90-90: Remove unnecessary `as any` cast on Convex API.

The `api.oauth.storeConnection` mutation is properly typed in the generated Convex API and can be accessed directly without the `as any` cast. Other mutations in the codebase (e.g., `api.subscriptions.createOrUpdateSubscription`, `api.usage.resetUsage`) are called without casts. Change line 90 to `await convex.mutation(api.oauth.storeConnection, {`.
144-148: Logging user identifiers may have compliance implications.Lines 144, 148, and 157 log
externalIdwhich represents user identifiers. Consider whether this aligns with your PII/GDPR compliance requirements. Based on learnings, logging user identifiers like emails should be reviewed.src/modules/home/ui/components/project-form.tsx-67-69 (1)
67-69: Model pro-tier restrictions are not enforced in UI or backend.The
isProOnly: trueflag on Gemini is set but never used—no code checks it. TheisLockedvariable at line 299 is hardcoded tofalse, so the PRO badge never displays. More critically, the backend'sgenerateroute accepts the model selection from the client without validating the user's subscription tier, allowing any user to request Gemini regardless of their plan.The mismatch:
- UI defines
isProOnlybut doesn't enforce itisLocked = falsemeans no visual restriction- Backend has no tier check before processing pro-only models
Add subscription validation in the generate route handler before allowing pro-only model selection.
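A server-side gate along these lines would close the gap. A hedged sketch — the model IDs, `PRO_ONLY_MODELS`, and the plan flag are assumptions for illustration, not the project's actual API:

```typescript
// Server-side model gating sketch: never trust the client's selection.
// PRO_ONLY_MODELS and the model IDs are illustrative assumptions.
const PRO_ONLY_MODELS = new Set(['google/gemini-3-pro']);

function resolveModel(
  requested: string,
  hasProPlan: boolean,
  fallback: string,
): string {
  if (PRO_ONLY_MODELS.has(requested) && !hasProPlan) {
    // Downgrade (or alternatively throw a 403) when a free-tier user
    // requests a pro-only model.
    return fallback;
  }
  return requested;
}
```

The same check belongs in the route handler, after loading the user's subscription, so the UI flag becomes a hint rather than the only line of defense.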
src/app/ai-info/page.tsx-53-59 (1)

53-59: Hardcoded aggregate rating values may be misleading.

The `aggregateRating` contains hardcoded values (`ratingValue: '4.8'`, `reviewCount: '2350'`) that appear fabricated. If these aren't based on actual user reviews, this could violate Google's structured data guidelines and potentially be considered misleading to search engines. Consider either removing this section until real review data is available, or fetching actual ratings from a data source.
src/components/file-explorer.tsx-102-102 (1)

102-102: Syntax issue: double semicolons.

Line 102 has `};;`, which is unnecessary and may trigger linter warnings.

🔎 Suggested fix

```diff
-};;
+};
```
7-35: Missing return validators on Convex functions.Per coding guidelines, all Convex functions must include
returnsvalidators. Each function here is missing its return type validator.🔎 Example fix for recordRequest
export const recordRequest = mutation({ args: { operation: v.string(), }, + returns: v.object({ + recorded: v.boolean(), + timestamp: v.number(), + }), handler: async (ctx, args) => {Apply similar patterns for
checkRateLimit,getStats, andcleanup.Also applies to: 40-68, 73-103, 109-137
src/lib/seo.ts-379-385 (1)

379-385: Hardcoded aggregate rating may be misleading.

The `ratingValue: '4.8'` and `reviewCount: '2350'` are hardcoded values. If these don't reflect actual user reviews, this could be considered misleading to search engines and users. Consider either:

- Removing the aggregate rating until real review data is available
- Adding a comment explaining the source of these numbers
explanations/SPEC_MODE_IMPLEMENTATION.md-6-7 (1)

6-7: Implementation date appears incorrect.

The date "November 16, 2025" seems to be a typo or placeholder. Given the current date is December 2025, this should likely be "December 2025" or a past date when this was actually implemented.
explanations/E2B_ERROR_PREVENTION_IMPLEMENTATION.md-526-528 (1)

526-528: Remove local file system path from documentation.

The path `/home/dih/.factory/specs/...` should be removed from the documentation, as it exposes a developer's local file system structure.

🔎 Proposed fix

```diff
-**Specification**: `/home/dih/.factory/specs/2025-11-16-e2b-internal-server-error-prevention-strategy.md`
+**Specification**: See internal specification documents
```
308-308: Remove local file system path from documentation.The path
/home/dih/.factory/specs/...exposes a local developer's file system structure. This should be removed or replaced with a relative/generic path.🔎 Proposed fix
-## Documentation References - -- Original Spec: `/home/dih/.factory/specs/2025-11-16-spec-mode-for-gpt-5-1-codex-with-planning-ui.md` +## Documentation References + +- Original Spec: See internal specification documentssrc/app/api/webhooks/polar/route.ts-19-29 (1)
19-29: CORS headers likely unnecessary on webhook endpoint; remove if not required.Polar webhooks are server-to-server requests and don't require CORS headers. If this endpoint is only consumed by Polar's servers, remove the OPTIONS handler and CORS headers entirely. If browser-based access is needed, restrict
Access-Control-Allow-Originto specific domains rather than using"*".src/app/api/agent/generate/route.ts-90-101 (1)
90-101: Race condition in lazy singleton initialization.The
getConvexClient()function has a potential race condition. If two concurrent requests call this function simultaneously beforeconvexClientis initialized, both may create separate instances.While this is unlikely to cause functional issues with
ConvexHttpClient, it's a subtle concurrency bug.🔎 Proposed fix (initialize at module load)
-let convexClient: ConvexHttpClient | null = null; - -function getConvexClient(): ConvexHttpClient { - if (!convexClient) { - const url = process.env.NEXT_PUBLIC_CONVEX_URL; - if (!url) { - throw new Error('NEXT_PUBLIC_CONVEX_URL environment variable is not set'); - } - convexClient = new ConvexHttpClient(url); - } - return convexClient; -} +function getConvexClient(): ConvexHttpClient { + const url = process.env.NEXT_PUBLIC_CONVEX_URL; + if (!url) { + throw new Error('NEXT_PUBLIC_CONVEX_URL environment variable is not set'); + } + return new ConvexHttpClient(url); +}Alternatively, if connection reuse is important, use a module-level constant initialized at import time.
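A third race-free pattern is to memoize a single construction promise, so concurrent callers share one in-flight initialization. A hedged sketch — the `ConvexHttpClient` is replaced by a plain object, and the names are illustrative rather than from the codebase:

```typescript
// Memoized-promise singleton: the first caller creates the promise;
// all later (and concurrent) callers receive the same one, because the
// assignment of the promise itself is synchronous.
type Client = { url: string };

let clientPromise: Promise<Client> | null = null;

function getClient(url: string): Promise<Client> {
  if (!clientPromise) {
    clientPromise = Promise.resolve({ url }); // construction runs exactly once
  }
  return clientPromise;
}
```

This matters more when construction is genuinely async (opening a connection); for a cheap constructor like `ConvexHttpClient`, per-call construction is simpler.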
```typescript
    return {
      count,
      limit: args.maxPerHour,
      exceeded,
      remaining,
      resetAt: hourAgo + 60 * 60 * 1000, // When the oldest record expires
    };
  },
```
Bug: `resetAt` calculation is incorrect.

The formula `hourAgo + 60 * 60 * 1000` equals `now`, not when the rate limit resets. The reset time should be based on when the oldest request in the window expires.
🔎 Suggested fix

```diff
+      // Find when the oldest request will expire (allowing a new slot)
+      const oldestTimestamp = recentRequests.length > 0
+        ? Math.min(...recentRequests.map(r => r.timestamp))
+        : now;
+      const resetAt = oldestTimestamp + 60 * 60 * 1000;
+
       return {
         count,
         limit: args.maxPerHour,
         exceeded,
         remaining,
-        resetAt: hourAgo + 60 * 60 * 1000, // When the oldest record expires
+        resetAt,
       };
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In convex/e2bRateLimits.ts around lines 60-67, the resetAt is incorrectly set to
hourAgo + 60*60*1000 (which equals now); change it to compute when the oldest
request in the rolling hour window will expire by taking the timestamp of the
oldest recorded request (or if no records, use Date.now()) and adding the window
length (60*60*1000) to that value so resetAt = oldestTimestamp + 60*60*1000;
ensure you handle the empty-records case and use consistent time units (ms).
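The corrected sliding-window arithmetic can be checked standalone. A minimal sketch of the math only — the function and argument names are illustrative, not the Convex handler itself:

```typescript
const WINDOW_MS = 60 * 60 * 1000; // one-hour rolling window

function checkWindow(timestamps: number[], maxPerHour: number, now: number) {
  const recent = timestamps.filter((t) => t > now - WINDOW_MS);
  const count = recent.length;
  // A slot frees up when the oldest request leaves the window;
  // `hourAgo + WINDOW_MS` would always just equal `now`.
  const oldest = recent.length > 0 ? Math.min(...recent) : now;
  return {
    count,
    exceeded: count >= maxPerHour,
    remaining: Math.max(0, maxPerHour - count),
    resetAt: oldest + WINDOW_MS,
  };
}
```

With an empty window, `resetAt` degenerates to `now + WINDOW_MS`, which is a safe upper bound rather than a meaningful reset time.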
```diff
@@ -0,0 +1,273 @@
+# Inngest AgentKit Debugging Guide
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Check if the documentation file exists
echo "=== Checking if explanations/AGENTKIT_DEBUGGING_GUIDE.md exists ==="
if [ -f "explanations/AGENTKIT_DEBUGGING_GUIDE.md" ]; then
  echo "File exists"
  wc -l explanations/AGENTKIT_DEBUGGING_GUIDE.md
else
  echo "File does not exist"
fi

echo ""
echo "=== Checking for Inngest-related files ==="
find . -type f -name "*inngest*" 2>/dev/null | head -20

echo ""
echo "=== Checking for inngest directory ==="
ls -la src/ 2>/dev/null | grep inngest

echo ""
echo "=== Checking for inngest imports in TypeScript/TSX files ==="
rg "from.*inngest|import.*inngest" --type ts --type tsx --max-count 10 2>/dev/null || echo "No inngest imports found"

echo ""
echo "=== Checking PR description or commit messages ==="
git log --oneline -20 2>/dev/null || echo "Cannot access git history"
```

Repository: Jackson57279/zapdev
Length of output: 491
🏁 Script executed:

```shell
#!/bin/bash
# Read the file content to verify references
echo "=== File content with line numbers ==="
cat -n explanations/AGENTKIT_DEBUGGING_GUIDE.md | head -100

echo ""
echo "=== Checking lines around references mentioned in review ==="
echo "Lines 38, 67, 109, 270:"
sed -n '35,40p; 65,70p; 106,112p; 268,273p' explanations/AGENTKIT_DEBUGGING_GUIDE.md
```

Repository: Jackson57279/zapdev
Length of output: 4593
🏁 Script executed:

```shell
#!/bin/bash
# Check if the referenced file exists
echo "=== Checking for explanations/INNGEST_AGENTKIT_FIX.md ==="
if [ -f "explanations/INNGEST_AGENTKIT_FIX.md" ]; then
  echo "File exists"
  head -20 explanations/INNGEST_AGENTKIT_FIX.md
else
  echo "File does not exist"
fi

echo ""
echo "=== Summary of all .md files in explanations directory ==="
ls -lh explanations/*.md 2>/dev/null | awk '{print $9, "(" $5 ")"}'
```

Repository: Jackson57279/zapdev
Length of output: 4218
This documentation should be removed as part of the Inngest removal.

This PR removes Inngest, but `explanations/AGENTKIT_DEBUGGING_GUIDE.md` exclusively documents Inngest AgentKit debugging. The file contains broken references to deleted files:

- `src/inngest/functions.ts` (lines 38, 109, 271)
- `bunx inngest dev` command (line 67)
- `explanations/INNGEST_AGENTKIT_FIX.md` (line 270)

Since the `src/inngest/` directory no longer exists and there are no Inngest imports in the codebase, this file is obsolete and should be deleted.
🤖 Prompt for AI Agents
In explanations/AGENTKIT_DEBUGGING_GUIDE.md at lines 1-1 this file documents
Inngest AgentKit only and references deleted files/commands
(src/inngest/functions.ts, bunx inngest dev,
explanations/INNGEST_AGENTKIT_FIX.md); remove this obsolete file from the repo
as part of the Inngest removal to eliminate broken references and dead
documentation.
# Fixing Inngest AgentKit "Error making AI request"

## Problem Description

The application was throwing a generic error message: **"Error making AI request"** when attempting to run code generation through the Inngest agent functions. This error provided no meaningful context about what went wrong, making debugging extremely difficult.

## Root Cause Analysis

After investigation, **three critical issues** were identified in the Inngest AgentKit setup:

### Issue 1: Incorrect Model Adapter Configuration

**Location:** `src/inngest/functions.ts` - `getModelAdapter()` function (line ~220)

The function was using an incorrect configuration pattern:

```typescript
// BROKEN - This was the old code
const commonConfig = {
  model: modelId,
  apiKey: process.env.AI_GATEWAY_API_KEY!,
  baseUrl: process.env.AI_GATEWAY_BASE_URL || "https://ai-gateway.vercel.sh/v1",
  defaultParameters: {
    temperature: temperature ?? config?.temperature ?? 0.7,
  },
};

if (isGoogleModel) {
  return gemini({
    ...commonConfig, // ❌ WRONG: spreading entire config
    defaultParameters: {
      generationConfig: {
        temperature: commonConfig.defaultParameters.temperature,
      },
    },
  });
}

return openai(commonConfig); // ❌ WRONG: expects different parameter structure
```

**Problems:**
1. **Gemini adapter** expects specific parameter names (`apiKey`, `baseUrl`, `model`) at root level, not spread from `commonConfig`
2. **Parameter overwriting** - spreading `commonConfig` and then overwriting `defaultParameters` causes loss of configuration
3. **Inconsistent parameter structure** - Gemini expects a nested `generationConfig` structure, but OpenAI expects flat `temperature`
4. **Missing error handling** - No try-catch around adapter initialization to provide helpful error messages

### Issue 2: Missing Environment Variable Validation

The code wasn't validating that `AI_GATEWAY_API_KEY` was set before attempting to use it, leading to confusing runtime errors.

### Issue 3: Insufficient Error Handling

The agent lifecycle and `network.run()` calls had minimal error handling, allowing generic errors to bubble up without context.

## Solution Implemented

### Step 1: Fix `getModelAdapter()` Function

```typescript
function getModelAdapter(
  modelId: keyof typeof MODEL_CONFIGS | string,
  temperature?: number,
) {
  // Validate environment variables early
  const apiKey = process.env.AI_GATEWAY_API_KEY;
  if (!apiKey) {
    throw new Error(
      "AI_GATEWAY_API_KEY environment variable is not set. Cannot initialize AI models."
    );
  }

  const baseUrl =
    process.env.AI_GATEWAY_BASE_URL || "https://ai-gateway.vercel.sh/v1";

  const config =
    modelId in MODEL_CONFIGS
      ? MODEL_CONFIGS[modelId as keyof typeof MODEL_CONFIGS]
      : null;

  const temp = temperature ?? config?.temperature ?? 0.7;

  const isGoogleModel =
    config?.provider === "google" ||
    modelId.startsWith("google/") ||
    modelId.includes("gemini");

  if (isGoogleModel) {
    console.log("[DEBUG] Initializing Gemini adapter for model:", modelId);
    try {
      return gemini({
        apiKey, // ✅ Correct: explicit parameters
        baseUrl,
        model: modelId,
        defaultParameters: {
          generationConfig: {
            temperature: temp,
          },
        },
      });
    } catch (error) {
      const errorMessage =
        error instanceof Error ? error.message : String(error);
      throw new Error(
        `Failed to initialize Gemini adapter for model "${modelId}": ${errorMessage}`
      );
    }
  }

  // Use OpenAI adapter for all other models
  console.log("[DEBUG] Initializing OpenAI-compatible adapter for model:", modelId);
  try {
    return openai({
      apiKey, // ✅ Correct: explicit parameters
      baseUrl,
      model: modelId,
      defaultParameters: {
        temperature: temp,
      },
    });
  } catch (error) {
    const errorMessage =
      error instanceof Error ? error.message : String(error);
    throw new Error(
      `Failed to initialize OpenAI adapter for model "${modelId}": ${errorMessage}`
    );
  }
}
```

**Key Changes:**
- ✅ Early validation of `AI_GATEWAY_API_KEY`
- ✅ Explicit parameter passing (no spread operator)
- ✅ Correct parameter structure for each adapter type
- ✅ Try-catch with meaningful error messages
- ✅ Debug logging for adapter initialization

### Step 2: Add Error Handling to Framework Selector

```typescript
if (!project?.framework) {
  try {
    const frameworkSelectorAgent = createAgent({...});
    // ... framework selection logic ...
  } catch (frameworkError) {
    const errorMessage =
      frameworkError instanceof Error
        ? frameworkError.message
        : String(frameworkError);
    console.error("[ERROR] Framework selection failed:", errorMessage);
    console.warn("[WARN] Falling back to default framework (Next.js)");
    selectedFramework = "nextjs"; // Graceful fallback
  }
}
```

### Step 3: Add Error Handling to Main Network Run

```typescript
console.log("[DEBUG] Running network with input:", event.data.value);
let result;
try {
  result = await network.run(event.data.value, { state });
} catch (error) {
  const errorMessage =
    error instanceof Error ? error.message : String(error);
  console.error("[ERROR] Network run failed with error:", errorMessage);
  if (error instanceof Error && error.stack) {
    console.error("[ERROR] Stack trace:", error.stack);
  }
  throw new Error(
    `Code generation failed: ${errorMessage}. Please ensure API credentials are valid and try again.`
  );
}
```

### Step 4: Add Error Handling to Auto-fix Loop

```typescript
try {
  result = await network.run(
    `CRITICAL ERROR DETECTED...`,
    { state: result.state },
  );
} catch (autoFixError) {
  const fixErrorMessage =
    autoFixError instanceof Error
      ? autoFixError.message
      : String(autoFixError);
  console.error(
    `[ERROR] Auto-fix attempt ${autoFixAttempts} failed:`,
    fixErrorMessage
  );
  break; // Exit auto-fix loop on network error
}
```

### Step 5: Add Error Handling to Metadata Generators

```typescript
if (!isError && hasSummary && hasFiles) {
  try {
    let titleModel;
    try {
      titleModel = getModelAdapter("google/gemini-2.5-flash-lite", 0.3);
    } catch (adapterError) {
      const errorMessage =
        adapterError instanceof Error
          ? adapterError.message
          : String(adapterError);
      console.error(
        "[ERROR] Failed to initialize model adapter for metadata generation:",
        errorMessage
      );
      throw adapterError;
    }

    const fragmentTitleGenerator = createAgent({...});
    const responseGenerator = createAgent({...});

    // ... run agents ...
  } catch (gatewayError) {
    const errorMessage =
      gatewayError instanceof Error
        ? gatewayError.message
        : String(gatewayError);
    console.error(
      "[ERROR] Failed to generate fragment metadata:",
      errorMessage
    );
    // Continue with undefined output instead of failing
    fragmentTitleOutput = undefined;
    responseOutput = undefined;
  }
}
```

## Benefits of the Fix

1. **Correct adapter configuration** - Models are properly initialized with correct parameters
2. **Early validation** - Environment variables are checked before use
3. **Detailed error messages** - When errors occur, developers get meaningful context
4. **Graceful fallbacks** - Framework selector falls back to Next.js on error
5. **Better debugging** - Debug logging helps trace execution flow
6. **Improved reliability** - Auto-fix loop gracefully exits on network errors

## Testing the Fix

To verify the fix works:

1. **Check environment variables** are set:
   ```bash
   echo $AI_GATEWAY_API_KEY
   echo $AI_GATEWAY_BASE_URL
   ```

2. **Run a code generation request** and observe:
   - `[DEBUG]` logs showing adapter initialization
   - Clear error messages if the API key is missing
   - Framework selector logs showing the selected framework
   - Successful `network.run()` execution

3. **Monitor logs** for error patterns:
   - If you see `[ERROR] Network run failed with error:`, check the following error message
   - If you see `[ERROR] Framework selection failed:`, the system will fall back to Next.js
   - All errors now include stack traces for better debugging

## Related Files

- `src/inngest/functions.ts` - Main Inngest function with agent orchestration
- `src/prompt.ts` - Prompt exports
- `src/prompts/` - Framework-specific prompts
- `package.json` - Dependencies including `@inngest/agent-kit`

## Environment Variables Required

```bash
AI_GATEWAY_API_KEY=your_api_key_here
AI_GATEWAY_BASE_URL=https://ai-gateway.vercel.sh/v1
NEXT_PUBLIC_CONVEX_URL=your_convex_url
E2B_API_KEY=your_e2b_api_key
```

## Migration Notes

This is a **drop-in fix** with no breaking changes:
- No API changes
- No configuration changes needed
- No database migrations required
- Backwards compatible with existing projects

Simply deploy the updated `src/inngest/functions.ts` file.
Critical: Documentation inconsistent with PR objectives.
This PR is titled "Remove inngest" and the AI summary states: "removing Inngest-based event workflows in favor of direct API-based agent orchestration." However, this file provides comprehensive documentation on fixing and maintaining Inngest integration, including model adapter configuration, error handling, and environment variable setup.
This creates a significant inconsistency:
- If Inngest is being removed, this documentation should not be added
- If Inngest is being retained, the PR title and objectives are misleading
Based on learnings, Inngest was previously used for job orchestration and AI agent workflows, which aligns with the PR's goal to replace it with direct API orchestration.
Recommendation: Remove this documentation file from the PR, or clarify why Inngest fix documentation is being added when the PR objective is to remove Inngest.
🤖 Prompt for AI Agents
In explanations/INNGEST_AGENTKIT_FIX.md around lines 1 to 292, the documentation
added describes fixes and maintenance for an Inngest-based integration while the
PR is titled and described as "Remove inngest", causing a misleading
inconsistency; either remove this documentation file from the PR or update the
PR metadata and title to reflect that Inngest support is being retained, and if
retaining keep the documentation but move it to a dedicated "inngest-fixes" or
docs-only PR and add a short rationale in the PR description explaining why
Inngest content is included.
```json
  "dependencies": {
    "@ai-sdk/anthropic": "^3.0.1",
    "@ai-sdk/google": "^3.0.1",
    "@ai-sdk/openai": "^3.0.1",
```
🧩 Analysis chain
🌐 Web query:
@ai-sdk/anthropic @ai-sdk/google @ai-sdk/openai latest version npm
💡 Result:
Latest npm versions (as of Dec 27, 2025):
- @ai-sdk/openai — 2.0.24. [1]
- @ai-sdk/anthropic — 2.0.17. [2]
- @ai-sdk/google — 2.0.11. [3]
Sources: [1] npmjs — @ai-sdk/openai, [2] npmjs — @ai-sdk/anthropic, [3] npmjs — @ai-sdk/google.
🏁 Script executed:

```shell
# Read the full package.json to see actual versions
cat package.json
```

Repository: Jackson57279/zapdev
Length of output: 4166
Update AI SDK package versions to match published releases.
The specified versions (`@ai-sdk/anthropic`, `@ai-sdk/google`, `@ai-sdk/openai` at `^3.0.1`) exceed the latest published npm versions (2.0.17, 2.0.11, and 2.0.24 respectively). Package installation will fail with these versions. Update to:

```json
    "@ai-sdk/anthropic": "^2.0.17",
    "@ai-sdk/google": "^2.0.11",
    "@ai-sdk/openai": "^2.0.24",
```
🤖 Prompt for AI Agents
In package.json around lines 15 to 18, the @ai-sdk dependency versions are set
to ^3.0.1 which do not exist on npm and will cause install failures; update the
three entries to the latest published release versions by changing
"@ai-sdk/anthropic" to ^2.0.17, "@ai-sdk/google" to ^2.0.11, and
"@ai-sdk/openai" to ^2.0.24, then run npm install (or yarn) and verify the
lockfile is updated.
```typescript
export async function POST(req: NextRequest) {
  const encoder = new TextEncoder();
  const stream = new TransformStream();
  const writer = stream.writable.getWriter();

  const sendProgress = async (event: ProgressEvent) => {
    const message = `data: ${JSON.stringify(event)}\n\n`;
    await writer.write(encoder.encode(message));
  };

  (async () => {
    const convex = getConvexClient();
    let sandboxId: string | null = null;
    let selectedFramework: Framework = 'nextjs';
    let selectedModel: keyof typeof MODEL_CONFIGS = 'anthropic/claude-haiku-4.5';

    try {
      const body = await req.json();
      const {
        projectId,
        value: prompt,
        model: requestedModel = 'auto',
        mode = 'fast',
      } = body;
```
Missing authentication on public API endpoint.
The POST handler accepts requests without any authentication verification. The handler extracts projectId from the request body and queries project data, but never verifies that the requesting user has permission to access or generate code for this project.
This is a security risk: any unauthenticated request could trigger expensive sandbox creation and AI model calls.
🔎 Proposed fix

```diff
+import { auth } from '@clerk/nextjs/server';
+
 export async function POST(req: NextRequest) {
   const encoder = new TextEncoder();
   const stream = new TransformStream();
   const writer = stream.writable.getWriter();

   const sendProgress = async (event: ProgressEvent) => {
     const message = `data: ${JSON.stringify(event)}\n\n`;
     await writer.write(encoder.encode(message));
   };

   (async () => {
     const convex = getConvexClient();
     let sandboxId: string | null = null;
     let selectedFramework: Framework = 'nextjs';
     let selectedModel: keyof typeof MODEL_CONFIGS = 'anthropic/claude-haiku-4.5';

     try {
+      const { userId } = await auth();
+      if (!userId) {
+        await sendProgress({ type: 'error', message: 'Unauthorized' });
+        return;
+      }
+
       const body = await req.json();
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
import { auth } from '@clerk/nextjs/server';

export async function POST(req: NextRequest) {
  const encoder = new TextEncoder();
  const stream = new TransformStream();
  const writer = stream.writable.getWriter();

  const sendProgress = async (event: ProgressEvent) => {
    const message = `data: ${JSON.stringify(event)}\n\n`;
    await writer.write(encoder.encode(message));
  };

  (async () => {
    const convex = getConvexClient();
    let sandboxId: string | null = null;
    let selectedFramework: Framework = 'nextjs';
    let selectedModel: keyof typeof MODEL_CONFIGS = 'anthropic/claude-haiku-4.5';

    try {
      const { userId } = await auth();
      if (!userId) {
        await sendProgress({ type: 'error', message: 'Unauthorized' });
        return;
      }

      const body = await req.json();
      const {
        projectId,
        value: prompt,
        model: requestedModel = 'auto',
        mode = 'fast',
      } = body;
```
🤖 Prompt for AI Agents
In src/app/api/agent/generate/route.ts around lines 607 to 631, the POST handler
lacks any authentication/authorization checks before reading projectId and
performing expensive work; add an auth guard at the start of the handler that
(1) extracts and validates the request identity (session token, API key, or
existing auth helper used elsewhere in the codebase), (2) loads the requesting
user and verifies they have permission to access the provided projectId (return
401 if unauthenticated, 403 if not authorized), and (3) short-circuits and
closes the streaming response when unauthorized so sandbox creation and model
calls never run; use the app's existing auth utilities and project-access check
functions and log auth failures.
Summary by CodeRabbit
New Features
Improvements
Documentation