## Walkthrough

Adds an AI chat feature: a new `ChatInterface` UI component integrated into the Discover page and a Supabase Edge Function (`/ai`) using Hono that calls Gemini and optionally TMDB/Supabase to build context. Updates configs for Deno/Supabase, editor tooling, linting, TypeScript exclusions, and repo ignores. Minor UI and utility additions.

## Changes
## Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor U as User
    participant D as Discover Page
    participant CI as ChatInterface (React)
    participant EF as Supabase Edge Function (/ai)
    participant SB as Supabase DB
    participant TM as TMDB API
    participant GM as Google Gemini
    U->>D: Open Discover
    D->>CI: Render with userId/userEmail
    U->>CI: Enter message / click quick prompt
    CI->>EF: POST /functions/v1/ai/chat {message, userId?}
    note over EF: Validate input, read env<br/>Configure Supabase connection
    alt userId provided
        EF->>SB: Fetch interests/likes/watched/searches
        SB-->>EF: User context data
    end
    opt TMDB_API_KEY present
        EF->>TM: Fetch trending/popular
        TM-->>EF: Top results
    end
    EF->>GM: Generate content (prompt with contexts)
    GM-->>EF: AI text response
    EF-->>CI: { response }
    CI-->>U: Render AI message
    rect rgba(250,230,230,0.6)
        alt Error
            EF-->>CI: 500 { error }
            CI-->>U: Fallback error message
        end
    end
```
## Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

## Possibly related PRs

## Poem

## Pre-merge checks and finishing touches

❌ Failed checks (1 inconclusive)
✅ Passed checks (2 passed)

✨ Finishing touches
- 🧪 Generate unit tests
**Actionable comments posted: 20**

> **Caution**
> Some comments are outside the diff and can't be posted inline due to platform limitations.

### ⚠️ Outside diff range comments (4)
**utils/supabase/server.ts (1)**

**6-8:** `cookies()` is sync; remove `await` and `async`. The current code makes the factory unnecessarily async and may fail TS checks.

Apply this diff:

```diff
-export async function createClient() {
-  const cookieStore = await cookies();
+export function createClient() {
+  const cookieStore = cookies();
```

**components/RecommendationFilter.tsx (1)**
**111-119:** Click bubbling toggles the panel twice. Clicks on Clear and Chevron also trigger the header's `onClick`, causing a double-toggle or an unintended collapse.

Apply this diff:

```diff
-          <button
-            onClick={clearAllFilters}
+          <button
+            onClick={(e) => { e.stopPropagation(); clearAllFilters(); }}
             className="p-1.5 text-gray-400 hover:text-gray-600 dark:hover:text-gray-300 transition-colors"
             title="Clear filters"
           >
             <X size={14} />
           </button>
@@
-          <button
-            onClick={() => setIsExpanded(!isExpanded)}
+          <button
+            onClick={(e) => { e.stopPropagation(); setIsExpanded(v => !v); }}
             className="p-1.5 hover:bg-gray-100 dark:hover:bg-gray-800 rounded-lg transition-colors"
             title={isExpanded ? "Hide filters" : "Show filters"}
           >
-            <ChevronDown
+            <ChevronDown
               className={`transform transition-transform duration-200 ${isExpanded ? 'rotate-180' : ''}`}
               size={20}
-              color="orange"
             />
           </button>
```

Also applies to: 121-129
**utils/supabase/queries.ts (2)**

**3-16:** Fix the double `await` and use the server client in server contexts. Remove the redundant `await` and prefer the server Supabase client module to ensure cookies/RLS work in API routes.

```diff
-import { createClient } from "@/utils/supabase/client";
+import { createClient } from "@/utils/supabase/server";
@@
-  const { data, error } = await (await supabase)
+  const { data, error } = await supabase
     .from("user_interests")
     .select("genre_id")
     .eq("user_id", userId);
```

If this module is shared by browser code too, consider injecting the client instead (accept a Supabase client param) to avoid mixing server/client imports. Based on learnings.
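The injection idea could look like this minimal sketch. The helper accepts any object exposing the small query surface it needs, so server code passes the server client and browser code the browser client; `InterestsClient` and `toGenreIds` are illustrative names, not part of the PR.

```typescript
// Rows and result shape mirror what a Supabase select returns.
type InterestRow = { genre_id: number };
type InterestResult = { data: InterestRow[] | null; error: unknown };

// Minimal structural interface: both the server and browser Supabase
// clients satisfy it without any runtime wiring changes.
export interface InterestsClient {
  from(table: string): {
    select(columns: string): {
      eq(column: string, value: string): Promise<InterestResult>;
    };
  };
}

// Pure mapping step, easy to unit-test without any client at all.
export function toGenreIds(result: InterestResult): number[] {
  if (result.error || !result.data) return [];
  return result.data.map((row) => row.genre_id);
}

export async function getUserInterests(client: InterestsClient, userId: string): Promise<number[]> {
  const result = await client.from("user_interests").select("genre_id").eq("user_id", userId);
  return toGenreIds(result);
}
```

Because the interface is structural, each caller decides which client to construct, and the query module never imports `@/utils/supabase/client` or `.../server` itself.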
**69-91:** `logUserSearch`: require a valid user and coerce types. Avoid inserting rows with `user_id = null` and ensure `clicked` is boolean.

```diff
 export async function logUserSearch(
   userId: string | undefined,
   query: string,
   movieId?: number | null,
   genreIds?: number[] | null,
   clicked?: boolean
 ) {
   const supabase = createClient();
-  const { error } = await supabase.from('user_searches').insert([
+  if (!userId) {
+    // silently skip or return an error, depending on your UX
+    return;
+  }
+  const { error } = await supabase.from('user_searches').insert([
     {
       user_id: userId,
       query,
       movie_id: movieId ?? null,
       genre_ids: genreIds ?? null,
-      clicked
+      clicked: !!clicked
     }
   ])
```
### 🧹 Nitpick comments (38)
**package.json (1)**

**26-26:** Consider pinning LangChain to a patch range to reduce churn. LangChain releases can move quickly; switching from caret to tilde minimizes unexpected breaks without sacrificing security updates.

Apply:

```diff
-    "langchain": "^0.3.34",
+    "langchain": "~0.3.34",
```

**supabase/functions/ai/deno.json (1)**
**1-3:** Add minimal Deno compiler/lint settings for better DX. Initialize libs and enable fmt/lint so CI and editors catch issues early.

Apply:

```diff
 {
-  "imports": {}
+  "imports": {},
+  "compilerOptions": {
+    "lib": ["deno.ns", "dom"]
+  },
+  "fmt": {
+    "lineWidth": 100
+  },
+  "lint": {
+    "rules": {
+      "tags": ["recommended"]
+    }
+  }
 }
```

**eslint.config.mjs (1)**
**22-22:** LGTM: ignoring Supabase functions is appropriate here. It prevents Node ESLint rules from flagging Deno code. Consider adding a separate Deno lint task (`deno lint`) for that folder.

Optional `package.json` script:

```diff
+    "lint:deno": "deno lint supabase/functions"
```
**supabase/functions/ai/.npmrc (1)**

**1-3:** Keep or remove the function-level `.npmrc` based on your private package needs. Deno (and Supabase Edge Functions) respect `.npmrc` for `npm:` specifiers, so this comment-only file serves as a template for private registry auth. If you're not importing private packages, remove it to avoid confusion; otherwise, populate it per Deno's private registry support and the Supabase docs: supabase.com/docs/guides/functions/import-maps#importing-from-private-registries

**.vscode/settings.json (2)**
**6-20:** Unnecessarily broad `deno.unstable` flags. This enables capabilities (ffi, fs, net, webgpu, unsafe-proto, etc.) that won't run on Supabase Edge Functions and can mask deploy-time failures. Trim to only what you actually use.

Apply this minimal diff (or remove the block entirely if not needed):

```diff
-  "deno.unstable": [
-    "bare-node-builtins",
-    "byonm",
-    "sloppy-imports",
-    "unsafe-proto",
-    "webgpu",
-    "broadcast-channel",
-    "worker-options",
-    "cron",
-    "kv",
-    "ffi",
-    "fs",
-    "http",
-    "net"
-  ],
+  // "deno.unstable": ["kv"], // keep only if you actually use it
```
**21-23:** The formatter applies to all TS files. This sets Deno as the default formatter for every TS file, including Node/Next code. Prefer moving this to `supabase/functions/.vscode/settings.json`.

```diff
-  "[typescript]": {
-    "editor.defaultFormatter": "denoland.vscode-deno"
-  },
+  // Consider relocating this block to supabase/functions/.vscode/settings.json
```

**utils/supabase/server.ts (1)**
**10-12:** Env presence check (optional). Guard missing `NEXT_PUBLIC_SUPABASE_URL`/`ANON_KEY` early with a clear error.

```diff
-  return createServerClient(
-    process.env.NEXT_PUBLIC_SUPABASE_URL!,
-    process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
+  const url = process.env.NEXT_PUBLIC_SUPABASE_URL;
+  const key = process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY;
+  if (!url || !key) throw new Error("Supabase env vars are not set.");
+  return createServerClient(
+    url,
+    key,
```

**components/RecommendationFilter.tsx (1)**
**93-94:** Make the header control accessible. Add button semantics and keyboard support; keep the existing `onClick`.

```diff
-      <div className="flex items-center justify-between p-3 cursor-pointer" onClick={() => setIsExpanded(!isExpanded)}>
+      <div
+        className="flex items-center justify-between p-3 cursor-pointer"
+        role="button"
+        tabIndex={0}
+        aria-expanded={isExpanded}
+        onKeyDown={(e) => { if (e.key === 'Enter' || e.key === ' ') { e.preventDefault(); setIsExpanded(v => !v); } }}
+        onClick={() => setIsExpanded(!isExpanded)}
+      >
```

**utils/ai/model.ts (3)**
**24-29:** Harden temperature parsing and clamp to [0, 1]. Avoid passing NaN or out-of-range values.

```diff
-  const temperature =
-    options?.temperature ??
-    (process.env.GEMINI_TEMPERATURE ? Number(process.env.GEMINI_TEMPERATURE) : 0.2);
+  const envTemp = process.env.GEMINI_TEMPERATURE;
+  const parsed = envTemp !== undefined ? Number(envTemp) : undefined;
+  const baseTemp = options?.temperature ?? (Number.isFinite(parsed!) ? parsed : 0.2);
+  const temperature = Math.min(1, Math.max(0, baseTemp as number));
```
**17-22:** Support common env var names for the API key. Many setups use `GOOGLE_API_KEY`; accept both.

```diff
-  const apiKey = process.env.GEMINI_API_KEY;
+  const apiKey = process.env.GEMINI_API_KEY || process.env.GOOGLE_API_KEY;
```
**43-43:** Top-level instantiation throws if the key is missing. This can crash builds/tests that don't set the key. Consider lazy initialization via a getter to fail at first use instead of at import time.

```diff
-export const Gemini = createGeminiModel();
+let _Gemini: ChatGoogleGenerativeAI | null = null;
+export function getGemini() {
+  if (!_Gemini) _Gemini = createGeminiModel();
+  return _Gemini;
+}
+// (Optionally keep `export const Gemini = getGemini();` if the current API expects it.)
```

**utils/ai/chains/chat.ts (2)**
**10-18:** Add basic validation, safer replacements, and error handling. Trim inputs, tolerate missing placeholders, and catch model errors to avoid noisy upstream failures.

```diff
 export async function chatChain({ context, question }: ChatInput) {
-  const prompt = generalQAPrompt
-    .replace("{{context}}", context)
-    .replace("{{question}}", question);
-
-  const response = await Gemini.invoke(prompt);
-
-  return response;
+  const safeContext = (context ?? "").trim();
+  const safeQuestion = (question ?? "").trim();
+  const prompt = generalQAPrompt
+    .replaceAll("{{context}}", safeContext)
+    .replaceAll("{{question}}", safeQuestion);
+  try {
+    const response = await Gemini.invoke(prompt);
+    return response;
+  } catch (err) {
+    // Surface a clean, typed error for callers
+    throw new Error("chatChain: AI invocation failed");
+  }
 }
```
**10-18:** Consider capping context length to stay within model token limits. Truncate overly large context strings to avoid model errors/timeouts.
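One possible shape for that cap, as a sketch: the character budget here is an assumed heuristic, not a measured token limit for Gemini.

```typescript
// Hypothetical helper: cap a serialized context string before it is
// interpolated into the prompt. 15000 characters is an assumption.
export function capContext(context: string, maxChars = 15000): string {
  if (context.length <= maxChars) return context;
  // Reserve room for the marker so the result never exceeds maxChars.
  return context.slice(0, maxChars - 3) + "...";
}
```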
**utils/ai/chains/recommend.ts (2)**

**10-22:** Add validation, safer replacements, and error handling. Mirror the chat chain hardening to keep behavior consistent.

```diff
 export async function recommendMoviesChain({
   userProfile,
   availableMovies,
 }: RecommendInput) {
   // Fill the template
-  const prompt = recommendMoviesPrompt
-    .replace("{{userProfile}}", userProfile)
-    .replace("{{availableMovies}}", availableMovies);
-
-  const response = await Gemini.invoke(prompt);
-
-  return response; // raw text from Gemini
+  const profile = (userProfile ?? "").trim();
+  const movies = (availableMovies ?? "").trim();
+  const prompt = recommendMoviesPrompt
+    .replaceAll("{{userProfile}}", profile)
+    .replaceAll("{{availableMovies}}", movies);
+  try {
+    const response = await Gemini.invoke(prompt);
+    return response; // raw text from Gemini
+  } catch {
+    throw new Error("recommendMoviesChain: AI invocation failed");
+  }
 }
```
**5-13:** Prefer structured output (JSON) from the model for downstream parsing. If feasible, update the prompt to request JSON and parse it here to return typed results.

**components/AI/ChatInterface.tsx (12)**
**29-31:** Remove the unused environment variables tied to the fetch approach. They become obsolete after switching to `supabase.functions.invoke`.

```diff
-const supabaseUrl = process.env.NEXT_PUBLIC_SUPABASE_URL;
-const supabaseAnonKey = process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY;
+// Supabase client is created on demand in handleSendMessage()
```
**140-151:** Improve the toggle button's accessibility label. Reflect the open/close state for screen readers.

```diff
       <button
         onClick={() => setIsOpen(!isOpen)}
         className="bg-orange-500 hover:bg-orange-600 text-white rounded-full p-4 shadow-lg transition-all duration-300 hover:scale-110 group"
-        aria-label="Open chat"
+        aria-label={isOpen ? "Close chat" : "Open chat"}
       >
```
**119-124:** Use `onKeyDown`; `onKeyPress` is deprecated. This aligns with React 18 behavior.

```diff
-  const handleKeyPress = (e: React.KeyboardEvent) => {
+  const handleKeyDown = (e: React.KeyboardEvent) => {
     if (e.key === 'Enter' && !e.shiftKey) {
       e.preventDefault();
       handleSendMessage();
     }
   };
```

And update the input prop accordingly (see below).

**245-253:** Wire the updated keyboard handler.

```diff
-            onKeyPress={handleKeyPress}
+            onKeyDown={handleKeyDown}
```
**176-223:** Add an ARIA live region to announce new messages. This improves the screen reader experience without visual changes.

```diff
-        <div className="flex-1 overflow-y-auto p-4 space-y-4">
+        <div
+          className="flex-1 overflow-y-auto p-4 space-y-4"
+          role="log"
+          aria-live="polite"
+          aria-relevant="additions text"
+        >
```
**3-3:** Remove the unused import if it is no longer needed. If you don't use `createClient` after the refactors elsewhere, drop it to keep the file clean.

**61-73:** Guard against rapid double-submits. You already gate on `isLoading`; consider disabling the input immediately (optimistically) to avoid race clicks on slow devices.

**75-118:** Optional: add an `AbortController` to cancel the in-flight request on unmount. This prevents setting state on an unmounted component if the user closes the chat mid-request.
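A sketch of that pattern: one controller per request, aborted either when a new request starts or from the React effect cleanup on unmount. `createRequestGuard` is a hypothetical helper name, not code from the PR.

```typescript
// Hypothetical request guard: keeps at most one in-flight request alive.
// begin() aborts any previous request and hands back a fresh AbortSignal to
// pass to fetch(); cancel() is intended for the effect cleanup on unmount.
export function createRequestGuard() {
  let controller: AbortController | null = null;
  return {
    begin(): AbortSignal {
      controller?.abort();
      controller = new AbortController();
      return controller.signal;
    },
    cancel(): void {
      controller?.abort();
    },
  };
}
```

In the component, `handleSendMessage` would call `guard.begin()` and pass the signal to its request, while a `useEffect(() => guard.cancel, [])` cleanup aborts on unmount.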
**195-206:** Time format i18n. Consider using `Intl.DateTimeFormat` with the locale from `navigator` for consistency across browsers.

**224-241:** UX: quick questions could submit directly. Consider dispatching immediately on click to reduce friction.

**126-131:** Minor: externalize the canned prompts. Move `quickQuestions` to a constants module for reuse/testing.

**77-93:** Security logging hygiene. Avoid logging full error bodies from the function if they may contain sensitive tokens; log minimal context.
**app/api/ai/chat/route.ts (4)**

**9-13:** Avoid logging PII/model output in production. Message/user markers, fetched counts, and full model output can leak PII. Gate logs by env and redact values.

```diff
+  const isDev = process.env.NODE_ENV !== "production";
-  console.log("=== Chat API Debug Start ===");
+  isDev && console.debug("=== Chat API Debug Start ===");
-  console.log("Received:", { message: message?.substring(0, 50), userId });
+  isDev && console.debug("Received:", { message: text?.substring(0, 50), userId: userId && "[session]" });
...
-  console.log("User data fetched:", {
+  isDev && console.debug("User data fetched:", {
     interestsCount: interests.length,
     likesCount: likes.length,
     searchesCount: searches.length,
     watchedCount: watched.length
   });
...
-  console.log("Movie data fetched:", {
+  isDev && console.debug("Movie data fetched:", {
     trendingCount: trending.results?.length || 0,
     popularCount: popular.results?.length || 0
   });
...
-  console.log("Context built successfully, length:", context.length);
+  isDev && console.debug("Context length:", context.length);
...
-  console.log("Calling AI chat chain...");
+  isDev && console.debug("Calling AI chat chain...");
-  console.log("AI response received:", aiResponse.content);
+  isDev && console.debug("AI response received (truncated):", resultText?.slice(0, 200));
-  console.error("=== Chat API Error ===");
-  console.error("Error details:", error);
-  console.error("Stack trace:", error instanceof Error ? error.stack : 'No stack trace');
+  console.error("=== Chat API Error ===");
+  isDev && console.error("Error details:", error);
+  isDev && console.error("Stack:", error instanceof Error ? error.stack : 'No stack trace');
```

Also applies to: 56-73, 155-161, 173-181, 187-195
**56-73:** Reduce latency: fetch heavy lists only when needed. Trending/popular and movie detail calls run for every request with a `userId`, even for non-recommendation Q&A. Consider lazy-fetching when the question implies recommendations.

Also applies to: 91-117, 137-153
**165-171:** Check the env earlier for fast-fail. You can short-circuit before doing any external fetch when `GEMINI_API_KEY` is missing.
**24-46:** Bound the context size. Even with truncation, the JSON context can grow rapidly. Cap it (e.g., 10–15 KB) to control token usage and latency.

```diff
-      context = JSON.stringify(contextData);
+      context = JSON.stringify(contextData);
+      if (context.length > 15000) {
+        context = context.slice(0, 15000) + "...";
+      }
```

**utils/ai/prompts.ts (3)**
**23-37:** Constrain recommendations to the provided list to avoid hallucinations. Explicitly tell the model to choose only from `availableMovies` and to ask clarifying questions if none match.

```diff
 Based on the user profile, recommend 5 movies from the above list.
 Return each as:
 - Title
 - Short description
 - Why it matches the user
+Rules:
+- Only select from the "available movies" list provided above.
+- If none are a good fit, ask one clarifying question instead of inventing titles.
```
**58-69:** Add a guardrail against out-of-context suggestions. Make it clear the model must not recommend content outside the supplied lists when lists are present.

```diff
 8. When recommending multiple movies, present them in a clear, readable format
+9. Do not recommend titles outside the provided lists (trending/popular) if those lists are present in context
```
**1-16:** Nit: add an end-of-file newline and keep the placeholder braces consistent in comments/examples.

Also applies to: 70-106
**utils/supabase/queries.ts (2)**

**18-33:** Standardize error-handling semantics across the getters. `getUserInterests` throws while the others return `[]`. Pick one convention (prefer `[]` plus a log) so callers don't need per-function special cases.

```diff
-  if (error) throw error;
+  if (error) {
+    console.error("Error fetching interests:", error);
+    return [];
+  }
```

Also update callers if you keep throwing.

Also applies to: 35-50, 52-66
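One way to encode a single convention is a shared unwrap helper that every getter funnels through. The names and the result shape here are illustrative sketches, not code from the PR.

```typescript
// Hypothetical shared helper: log the failure and degrade to an empty list,
// so no caller needs per-getter special cases. `SelectResult` mirrors the
// { data, error } shape that Supabase selects return.
type SelectResult<T> = { data: T[] | null; error: { message: string } | null };

export function rowsOrEmpty<T>(label: string, result: SelectResult<T>): T[] {
  if (result.error) {
    console.error(`Error fetching ${label}:`, result.error.message);
    return [];
  }
  return result.data ?? [];
}
```

Each getter then becomes `return rowsOrEmpty("interests", await query)` and the throw-vs-empty decision lives in exactly one place.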
**18-33:** DB/RLS/indexing advice. Ensure RLS prevents cross-user reads/writes (policies using `auth.uid()`), and add indexes:

- `user_movie_interactions (user_id, action)`
- `user_interests (user_id)`
- `user_searches (user_id, created_at desc)`

Also applies to: 35-50
**supabase/functions/ai/index.ts (2)**

**25-31:** Harden input validation and size-limit the prompt. Validate the type and length of `message` to avoid abuse and runaway costs.

```diff
 const { message } = await c.req.json()
 ...
-if (!message) {
+if (typeof message !== 'string' || !message.trim()) {
   return c.json({ error: 'Message is required' }, 400)
 }
+if (message.length > 2000) {
+  return c.json({ error: 'Message too long' }, 400)
+}
```
**112-123:** Parallelize the TMDB requests and add a network timeout. This prevents head-of-line blocking and stuck requests.

```diff
-const trendingResponse = await fetch(
-  `https://api.themoviedb.org/3/trending/movie/day?api_key=${tmdbKey}`
-)
-const trending = await trendingResponse.json()
-
-const popularResponse = await fetch(
-  `https://api.themoviedb.org/3/movie/popular?api_key=${tmdbKey}`
-)
-const popular = await popularResponse.json()
+const tmdbTimeoutMs = 8000
+const tmdbController = new AbortController()
+const tmdbTimer = setTimeout(() => tmdbController.abort('timeout'), tmdbTimeoutMs)
+const [trendingResponse, popularResponse] = await Promise.all([
+  fetch(`https://api.themoviedb.org/3/trending/movie/day?api_key=${tmdbKey}`, { signal: tmdbController.signal }),
+  fetch(`https://api.themoviedb.org/3/movie/popular?api_key=${tmdbKey}`, { signal: tmdbController.signal }),
+])
+clearTimeout(tmdbTimer)
+const [trending, popular] = await Promise.all([trendingResponse.json(), popularResponse.json()])
```

Also applies to: 124-140
### 📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

**⛔ Files ignored due to path filters (1)**

- `package-lock.json` is excluded by `!**/package-lock.json`

**📒 Files selected for processing (24)**

- `.vscode/extensions.json` (1 hunks)
- `.vscode/settings.json` (1 hunks)
- `app/api/ai/chat/route.ts` (1 hunks)
- `app/discover/page.tsx` (3 hunks)
- `app/globals.css` (1 hunks)
- `app/services/queryClient.ts` (1 hunks)
- `components/AI/ChatInterface.tsx` (1 hunks)
- `components/RecommendationFilter.tsx` (6 hunks)
- `eslint.config.mjs` (1 hunks)
- `package.json` (2 hunks)
- `supabase/.gitignore` (1 hunks)
- `supabase/config.toml` (1 hunks)
- `supabase/functions/_shared/cors.ts` (1 hunks)
- `supabase/functions/ai/.npmrc` (1 hunks)
- `supabase/functions/ai/deno.json` (1 hunks)
- `supabase/functions/ai/index.ts` (1 hunks)
- `utils/ai/chains/chat.ts` (1 hunks)
- `utils/ai/chains/explain.ts` (1 hunks)
- `utils/ai/chains/recommend.ts` (1 hunks)
- `utils/ai/index.ts` (1 hunks)
- `utils/ai/model.ts` (1 hunks)
- `utils/ai/prompts.ts` (1 hunks)
- `utils/supabase/queries.ts` (1 hunks)
- `utils/supabase/server.ts` (1 hunks)
### 🧰 Additional context used

**🧬 Code graph analysis (5)**

**utils/ai/chains/explain.ts (2)**
- utils/ai/prompts.ts (1): `explainMoviePrompt` (7-16)
- utils/ai/model.ts (1): `Gemini` (43-43)

**utils/ai/chains/chat.ts (3)**
- utils/ai/index.ts (1): `chatChain` (4-4)
- utils/ai/prompts.ts (1): `generalQAPrompt` (42-69)
- utils/ai/model.ts (1): `Gemini` (43-43)

**utils/ai/chains/recommend.ts (2)**
- utils/ai/prompts.ts (1): `recommendMoviesPrompt` (23-37)
- utils/ai/model.ts (1): `Gemini` (43-43)

**app/api/ai/chat/route.ts (3)**
- utils/supabase/queries.ts (4): `getUserInterests` (5-16), `getUserLikes` (18-33), `getUserSearches` (52-66), `getUserWatched` (35-50)
- app/services/queryClient.ts (1): `QueryService` (18-41)
- utils/ai/chains/chat.ts (1): `chatChain` (10-18)

**supabase/functions/ai/index.ts (1)**
- utils/supabase/server.ts (1): `createClient` (6-31)
**🪛 ast-grep (0.39.5)**

components/AI/ChatInterface.tsx

[warning] 196-196: Usage of `dangerouslySetInnerHTML` detected. This bypasses React's built-in XSS protection. Always sanitize HTML content using libraries like DOMPurify before injecting it into the DOM to prevent XSS attacks.
Context: `dangerouslySetInnerHTML`
Note: [CWE-79] Improper Neutralization of Input During Web Page Generation
References:
- https://reactjs.org/docs/dom-elements.html#dangerouslysetinnerhtml
- https://cwe.mitre.org/data/definitions/79.html
(react-unsafe-html-injection)

**🪛 Biome (2.1.2)**

components/AI/ChatInterface.tsx

[error] 197-197: Avoid passing content using the `dangerouslySetInnerHTML` prop. Setting content using code can expose users to cross-site scripting (XSS) attacks.
(lint/security/noDangerouslySetInnerHtml)
### 🔇 Additional comments (9)

**app/services/queryClient.ts (1)**

**1-1:** LGTM! The header comment addition is fine and keeps the file behavior unchanged.

**.vscode/extensions.json (1)**

**1-3:** Ensure Deno is scoped to `supabase/functions` only. In `.vscode/settings.json`, confirm these entries exist and are set as follows:

```json
{
  "deno.enable": false,
  "deno.enablePaths": ["supabase/functions"],
  "deno.lint": true,
  "deno.unstable": true
}
```

**components/AI/ChatInterface.tsx (3)**
**95-103:** Adjust data access after switching to `supabase.functions.invoke`. Use the returned data to populate the AI message.

```diff
-      const data = await response.json();
       const aiMessage: Message = {
         id: (Date.now() + 1).toString(),
-        content: data.response || "I'm sorry, I couldn't process your request at the moment.",
+        content: (data as any)?.response || "I'm sorry, I couldn't process your request at the moment.",
         sender: 'ai',
         timestamp: new Date(),
       };
```

Please confirm the Edge Function response shape is `{ response: string }`.
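A defensive unwrap keeps the UI resilient whatever the payload turns out to be; the helper name and fallback text below are illustrative.

```typescript
// Hypothetical guard around the invoke() payload: only trust it when it is
// an object carrying a non-empty string `response`; otherwise fall back.
export function extractResponse(
  data: unknown,
  fallback = "I'm sorry, I couldn't process your request at the moment."
): string {
  if (data && typeof data === "object" && "response" in data) {
    const value = (data as { response: unknown }).response;
    if (typeof value === "string" && value.trim()) return value;
  }
  return fallback;
}
```

With this in place, the `(data as any)?.response || ...` expression becomes `extractResponse(data)` and the cast disappears.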
**195-199:** Style note: `whitespace-pre-wrap` already preserves newlines; there is no need to insert `<br>`s in the formatting logic.

**1-10:** The icon imports look good and are tree-shakable. No concerns.
**app/api/ai/chat/route.ts (1)**

**11-13:** No changes needed. The server-side helper at `utils/supabase/server.ts` already exports an async `createClient()` using `createServerClient` with Next.js `cookies` support, so the setup is correct.

**supabase/functions/ai/index.ts (3)**
**19-21:** Health endpoint looks good.

**13-17:** Update the CORS origins: remove the trailing slash and add 127.0.0.1.

```diff
 app.use('/*', cors({
-  origin: ['http://localhost:3000', 'https://seen-it-aymo.vercel.app/'],
+  origin: ['http://localhost:3000', 'http://127.0.0.1:3000', 'https://seen-it-aymo.vercel.app'],
   allowHeaders: ['Content-Type', 'Authorization'],
   allowMethods: ['POST', 'GET', 'OPTIONS'],
 }))
```

Run a preflight against your deployed function (replace `<project>`) and confirm `Access-Control-Allow-Origin` matches:

```shell
curl -i -X OPTIONS 'https://<project>.functions.supabase.co/ai/chat' \
  -H 'Origin: https://seen-it-aymo.vercel.app' \
  -H 'Access-Control-Request-Method: POST'
```
**181-190:** The `safetySettings` categories are valid as-is: `HARM_CATEGORY_HARASSMENT` and `HARM_CATEGORY_HATE_SPEECH` remain supported by the latest Gemini generateContent API. No changes needed.
```json
  "[typescript]": {
    "editor.defaultFormatter": "denoland.vscode-deno"
  },
  "deno.enable": true
```
Deno is enabled for the whole workspace — likely to break Node/Next TypeScript tooling.

Scope Deno to only `supabase/functions` via `deno.enablePaths` and turn off global enablement.

Apply this diff:

```diff
-  "deno.enable": true
+  "deno.enable": false
```

Optional: move the Deno-specific formatter to `supabase/functions/.vscode/settings.json` to avoid formatting non-Deno TS files.
**📝 Committable suggestion**

> ‼️ **IMPORTANT**
> Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```diff
-  "deno.enable": true
+  "deno.enable": false
```
**🤖 Prompt for AI Agents**

```
In .vscode/settings.json around line 24, Deno is enabled globally
("deno.enable": true) which will interfere with Node/Next TypeScript tooling;
change this to disable global Deno (set to false or remove the key) and add a
"deno.enablePaths" entry scoped to only "supabase/functions" so Deno is only
active for that folder; optionally move any Deno-specific formatter settings
into supabase/functions/.vscode/settings.json to avoid applying Deno formatting
to non-Deno TS files.
```
**app/api/ai/chat/route.ts** (Outdated)

```ts
    const { message, userId } = await request.json();
    console.log("Received:", { message: message?.substring(0, 50), userId });
```
**Blocker: Authorization flaw — `userId` is trusted from the request body.**

Anyone can supply another user's ID and fetch their interests/likes/watched/searches. Derive `userId` from a trusted server-side session (Supabase/NextAuth) and ignore any client-provided `userId`.

Apply this diff (adjust the import/path to your server helper):

```diff
 import { NextRequest, NextResponse } from "next/server";
 import { getUserInterests, getUserLikes, getUserSearches, getUserWatched } from "@/utils/supabase/queries";
 import { QueryService } from "@/app/services/queryClient";
 import { chatChain } from "@/utils/ai";
+import { createClient as createServerSupabaseClient } from "@/utils/supabase/server";

 export async function POST(request: NextRequest) {
   try {
     console.log("=== Chat API Debug Start ===");
-    const { message, userId } = await request.json();
-    console.log("Received:", { message: message?.substring(0, 50), userId });
+    const { message } = await request.json();
+    const supabase = createServerSupabaseClient();
+    const { data: { user } } = await supabase.auth.getUser();
+    const userId = user?.id;
+    console.log("Received:", { message: message?.substring(0, 50), userId: userId ? "[session]" : undefined });

     if (!message) {
       return NextResponse.json(
         { error: "Message is required" },
         { status: 400 }
       );
     }

     // Build context from user data if userId is provided
     let context = "General movie database context.";
-    if (userId) {
+    if (userId) {
       try {
         console.log("Fetching user data for:", userId);
```

If you don't have a server client yet, I can wire it up. Based on learnings.

Also applies to: 24-162
**app/api/ai/chat/route.ts** (Outdated)

```ts
    if (!message) {
      return NextResponse.json(
        { error: "Message is required" },
        { status: 400 }
      );
    }
```
**🛠️ Refactor suggestion | 🟠 Major**

Harden input validation and size limits. Trim the input, enforce a reasonable max length, and return 413 for oversized prompts to protect the model/backend.

Apply:

```diff
-    if (!message) {
+    const text = typeof message === "string" ? message.trim() : "";
+    if (!text) {
       return NextResponse.json(
         { error: "Message is required" },
         { status: 400 }
       );
     }
+    if (text.length > 2000) {
+      return NextResponse.json(
+        { error: "Message too long" },
+        { status: 413 }
+      );
+    }
```

And later pass `text` to the chain instead of `message`.
**📝 Committable suggestion**

> ‼️ **IMPORTANT**
> Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```ts
    const text = typeof message === "string" ? message.trim() : "";
    if (!text) {
      return NextResponse.json(
        { error: "Message is required" },
        { status: 400 }
      );
    }
    if (text.length > 2000) {
      return NextResponse.json(
        { error: "Message too long" },
        { status: 413 }
      );
    }
```
**🤖 Prompt for AI Agents**

```
In app/api/ai/chat/route.ts around lines 14 to 19, the code currently only
checks for presence of message; trim the incoming message, validate length
against a reasonable MAX_PROMPT_LENGTH (e.g. define a constant), return
NextResponse.json({ error: "Message is required" }, { status: 400 }) if empty
after trim, and return status 413 with a clear error when trimmed text exceeds
MAX_PROMPT_LENGTH; finally, use the trimmed variable (e.g. text) when passing
input to the chain instead of the original message.
```
**app/api/ai/chat/route.ts** (Outdated)

```ts
    const aiResponse = await chatChain({
      context,
      question: message,
    });

    console.log("AI response received:", aiResponse.content);

    return NextResponse.json({
      response: aiResponse.content || "I'm sorry, I couldn't generate a response at the moment."
    });
```
**Make `chatChain` return handling robust (string vs object).**

`chatChain` currently returns `response` (see `utils/ai/chains/chat.ts`). Here you assume `.content`. Guard both cases.

```diff
-    const aiResponse = await chatChain({
-      context,
-      question: message,
-    });
-
-    console.log("AI response received:", aiResponse.content);
-
-    return NextResponse.json({
-      response: aiResponse.content || "I'm sorry, I couldn't generate a response at the moment."
-    });
+    const aiResponse = await chatChain({ context, question: text });
+    const resultText = typeof aiResponse === "string" ? aiResponse : aiResponse?.content ?? "";
+    console.log("AI response received:", resultText?.slice(0, 200));
+    return NextResponse.json({
+      response: resultText || "I'm sorry, I couldn't generate a response at the moment."
+    });
```

Also applies to: 180-181
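The same guard can live in a small helper so both call sites share it; the name below is illustrative, not code from the PR.

```typescript
// Hypothetical unwrap for chatChain's return value: accept either a plain
// string or a message-like object carrying a string `content` field.
export function resolveAiText(aiResponse: unknown): string {
  if (typeof aiResponse === "string") return aiResponse;
  if (aiResponse && typeof aiResponse === "object") {
    const content = (aiResponse as { content?: unknown }).content;
    if (typeof content === "string") return content;
  }
  return "";
}
```

The route then reads `const resultText = resolveAiText(aiResponse);` and remains correct even if the chain's return type changes later.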
**🤖 Prompt for AI Agents**

```
In app/api/ai/chat/route.ts around lines 175-184 (also address 180-181), the
code assumes chatChain returns an object with a .content property which can
throw if it returns a string or null; update handling to safely unwrap the
response: check for null/undefined, then if aiResponse is a string use it
directly, otherwise if it's an object use aiResponse.content (with a fallback to
an empty string), log the resolved string value instead of assuming .content,
and return that resolved string or the existing fallback message to
NextResponse.json.
```
```ts
  const formatAIResponse = (text: string): string => {
    return text
      // Convert **bold** to proper formatting (we'll handle this in CSS)
      .replace(/\*\*(.*?)\*\*/g, '<strong>$1</strong>')
      // Convert *italic* to proper formatting
      .replace(/\*(.*?)\*/g, '<em>$1</em>')
      // Ensure proper line breaks
      .replace(/\n\n/g, '<br><br>')
      .replace(/\n/g, '<br>');
  };
```
**Replace the HTML string formatter with a safe inline parser.**

Supports bold and italic, and preserves newlines via CSS without injecting HTML.

```diff
-const formatAIResponse = (text: string): string => {
-  return text
-    // Convert **bold** to proper formatting (we'll handle this in CSS)
-    .replace(/\*\*(.*?)\*\*/g, '<strong>$1</strong>')
-    // Convert *italic* to proper formatting
-    .replace(/\*(.*?)\*/g, '<em>$1</em>')
-    // Ensure proper line breaks
-    .replace(/\n\n/g, '<br><br>')
-    .replace(/\n/g, '<br>');
-};
+const renderAIResponse = (text: string): React.ReactNode[] => {
+  const nodes: React.ReactNode[] = [];
+  const re = /(\*\*[^*]+\*\*|\*[^*]+\*)/g;
+  let last = 0;
+  let m: RegExpExecArray | null;
+  while ((m = re.exec(text)) !== null) {
+    if (m.index > last) nodes.push(text.slice(last, m.index));
+    const token = m[0];
+    if (token.startsWith("**")) {
+      nodes.push(<strong key={nodes.length}>{token.slice(2, -2)}</strong>);
+    } else {
+      nodes.push(<em key={nodes.length}>{token.slice(1, -1)}</em>);
+    }
+    last = m.index + token.length;
+  }
+  if (last < text.length) nodes.push(text.slice(last));
+  return nodes;
+};
```

**📝 Committable suggestion**
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| const formatAIResponse = (text: string): string => { | |
| return text | |
| // Convert **bold** to proper formatting (we'll handle this in CSS) | |
| .replace(/\*\*(.*?)\*\*/g, '<strong>$1</strong>') | |
| // Convert *italic* to proper formatting | |
| .replace(/\*(.*?)\*/g, '<em>$1</em>') | |
| // Ensure proper line breaks | |
| .replace(/\n\n/g, '<br><br>') | |
| .replace(/\n/g, '<br>'); | |
| }; | |
| const renderAIResponse = (text: string): React.ReactNode[] => { | |
| const nodes: React.ReactNode[] = []; | |
| const re = /(\*\*[^*]+\*\*|\*[^*]+\*)/g; | |
| let last = 0; | |
| let m: RegExpExecArray | null; | |
| while ((m = re.exec(text)) !== null) { | |
| // Push any plain-text preceding the next markup token | |
| if (m.index > last) { | |
| nodes.push(text.slice(last, m.index)); | |
| } | |
| const token = m[0]; | |
| if (token.startsWith("**")) { | |
| // **bold** | |
| nodes.push( | |
| <strong key={nodes.length}> | |
| {token.slice(2, -2)} | |
| </strong> | |
| ); | |
| } else { | |
| // *italic* | |
| nodes.push( | |
| <em key={nodes.length}> | |
| {token.slice(1, -1)} | |
| </em> | |
| ); | |
| } | |
| last = m.index + token.length; | |
| } | |
| // Push any remaining plain-text after the last token | |
| if (last < text.length) { | |
| nodes.push(text.slice(last)); | |
| } | |
| return nodes; | |
| }; |
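The token regex can be exercised on its own, outside React. `String.split` with a capturing group keeps the matched tokens, which shows the parser only ever slices the input and never manufactures markup:

```typescript
// Same token pattern as the parser above, run standalone.
const tokenRe = /(\*\*[^*]+\*\*|\*[^*]+\*)/g;

const parts = "Try **Dune** or *Arrival* tonight"
  .split(tokenRe)
  .filter(Boolean); // drop empty strings produced at token boundaries

console.log(parts);
// ["Try ", "**Dune**", " or ", "*Arrival*", " tonight"]
```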
utils/ai/chains/explain.ts
Outdated
```ts
const prompt = explainMoviePrompt
  .replace("{{movieTitle}}", movieTitle)
  .replace("{{movieContext}}", movieContext);
```
Protect AI prompt substitution from special $ patterns
String.prototype.replace treats `$` sequences in the replacement string as special tokens (`$&`, `` $` ``, `$'`, `$$`, and `$n` when the pattern has capture groups). Because movieContext is stringified JSON, any such sequence in it gets expanded or mangled when we inject it, so Gemini never sees the full context. Use a functional replacer (or another dollar-safe approach) to preserve the raw payload.
```diff
-  const prompt = explainMoviePrompt
-    .replace("{{movieTitle}}", movieTitle)
-    .replace("{{movieContext}}", movieContext);
+  const prompt = explainMoviePrompt
+    .replace(/{{movieTitle}}/g, () => movieTitle)
+    .replace(/{{movieContext}}/g, () => movieContext);
```
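A minimal demonstration of the `$`-token behavior (the template and values here are made up for illustration):

```typescript
// "$&" in a replacement *string* expands to the matched text; a function
// replacer returns its value verbatim, so it is dollar-safe.
const tpl = "Context: {{ctx}}";
const ctx = "budget $& revenue";

const naive = tpl.replace("{{ctx}}", ctx);
const safe = tpl.replace("{{ctx}}", () => ctx);

console.log(naive); // "Context: budget {{ctx}} revenue"
console.log(safe);  // "Context: budget $& revenue"
```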
🤖 Prompt for AI Agents
In utils/ai/chains/explain.ts around lines 14 to 16, the replacement of
"{{movieContext}}" uses a string replacement which treats dollar sequences in
the replacement as special tokens and mangles JSON like "$200"; switch to a
dollar-safe replacer by passing a function as the replacement for movieContext
(or pre-escape dollars by replacing `$` with `$$`) so the raw movieContext is
preserved when injected into explainMoviePrompt.
utils/ai/model.ts
Outdated
```ts
// utils/ai/model.ts
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
```
🛠️ Refactor suggestion | 🟠 Major
Server-only guard missing.
Add Next.js server-only import to prevent client bundling.
```diff
 // utils/ai/model.ts
 import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
+import "server-only";
```
🤖 Prompt for AI Agents
In utils/ai/model.ts lines 1-3, the file imports a server-only package but lacks
the Next.js server-only guard; add import 'server-only' as the very first import
in this module (before any other imports) to mark it server-only and prevent
client bundling.
utils/ai/model.ts
Outdated
```ts
  );
}

const modelName = options?.modelName ?? process.env.GEMINI_MODEL ?? "gemini-pro";
```
🛠️ Refactor suggestion | 🟠 Major
🧩 Analysis chain
Model default may be outdated. Verify current model IDs.
Consider defaulting to a current 1.5 model (e.g., pro/flash) if appropriate.
🌐 Web query:
What are the current, recommended Gemini chat model IDs supported by @langchain/google-genai (e.g., "gemini-1.5-pro", "gemini-1.5-flash") and is "gemini-pro" deprecated?
💡 Result:
Short answer:
- Recommended model IDs (use the official Google model names): gemini-1.5-pro, gemini-1.5-flash (and their variant suffixes like -001 / -latest), and newer 2.x names such as gemini-2.0-flash and the 2.5 family (gemini-2.5-pro / gemini-2.5-flash). Check Google's Gemini models page for exact stable/latest variant IDs. [1][2]
- "gemini-pro" is a LangChain/SDK shorthand used in examples but not the canonical Google model identifier. Prefer the full model IDs (e.g., "gemini-1.5-pro" or "gemini-2.5-pro-001"). There's no single global "gemini-pro" model name in Google's API; use the specific model code or variant instead. [0][1]
Sources
- LangChain @langchain/google-genai docs (examples showing model param usage). [0]
- Google Gemini API models page (official model IDs, stable/latest variants, deprecation notices). [1][2]
References
[0] LangChain Google Generative AI docs — ChatGoogleGenerativeAI / examples.
[1] Google AI — Gemini models (model names, variants, deprecation info).
[2] Google AI changelog / news (model lifecycle updates).
Default to a full Gemini model ID
Replace the fallback "gemini-pro" with a canonical Google Gemini model identifier, for example:
```ts
const modelName = options?.modelName
  ?? process.env.GEMINI_MODEL
  ?? "gemini-1.5-pro";
```

Or pick the latest stable variant (e.g. "gemini-2.5-pro-001").
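Sketched as a standalone function (the `ModelOptions` shape and `resolveModelName` name are assumed from the snippet above, and the default is only a placeholder to verify against Google's current model list):

```typescript
// Fallback chain: explicit option, then environment variable, then a
// canonical default model ID.
interface ModelOptions {
  modelName?: string;
}

function resolveModelName(options?: ModelOptions): string {
  return (
    options?.modelName ??
    process.env.GEMINI_MODEL ??
    "gemini-1.5-pro" // assumed default; confirm against the Gemini models page
  );
}

console.log(resolveModelName({ modelName: "gemini-2.5-pro" }));
```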
🤖 Prompt for AI Agents
In utils/ai/model.ts around line 24, the fallback model name currently uses the
non-canonical string "gemini-pro"; update the fallback to a full, canonical
Google Gemini model identifier (for example use "gemini-1.5-pro" or the latest
stable variant like "gemini-2.5-pro-001") so that when options.modelName and
process.env.GEMINI_MODEL are absent the code selects a valid, specific Gemini
model ID.
```diff
@@ -1,3 +1,5 @@
+// utils/supabase/queries.ts
+
 import { createClient } from "@/utils/supabase/client";
```
🛠️ Refactor suggestion | 🟠 Major
🧩 Analysis chain
Confirm correct Supabase helper import
If this file is consumed in the browser, keep client; if only server, prefer server. Verify usage sites.
🏁 Script executed:
```bash
#!/bin/bash
# Find all imports of utils/supabase/queries and list server/client usage
rg -nP 'from\s+[\"\\\']@/utils/supabase/queries[\"\\\']' -C2
rg -nP 'from\s+[\"\\\']@/utils/supabase/(server|client)[\"\\\']' utils/supabase -C2
```

Length of output: 170
🏁 Script executed:
```bash
#!/bin/bash
set -euxo pipefail
# Find all imports of the Supabase queries helper across the repo
rg --color=never -nP "from\s+['\"]@/utils/supabase/queries['\"]" -C2 .
# Find all direct imports of the client helper
rg --color=never -nP "from\s+['\"]@/utils/supabase/client['\"]" -C2 utils/supabase
# Find all direct imports of the server helper
rg --color=never -nP "from\s+['\"]@/utils/supabase/server['\"]" -C2 utils/supabase
```

Length of output: 3171
Separate Supabase query helpers by runtime context
The utils/supabase/queries.ts module is imported in both server (e.g. app/api/ai/chat/route.ts) and client components (e.g. app/favorites/page.tsx, app/discover/page.tsx, components/Home/HomeHeroSection.tsx, components/Search/SearchComponent.tsx, app/profile/page.tsx), yet it always uses the client helper. Server‐side usage must use the server helper to avoid runtime errors or credential exposure.
• Create queries.server.ts that imports createServerClient from @/utils/supabase/server for all server‐only functions.
• Create queries.client.ts that imports createClient from @/utils/supabase/client for client‐only functions.
• Update each usage site to import from the appropriate module based on its runtime context.
🤖 Prompt for AI Agents
In utils/supabase/queries.ts around line 3, this module currently imports the
client helper and is used from both server and client runtimes; split
runtime-specific helpers by creating two new modules: create queries.server.ts
that imports createServerClient from @/utils/supabase/server and moves all
server-only functions there, and create queries.client.ts that imports
createClient from @/utils/supabase/client and moves client-only functions there;
then update all import sites to reference queries.server.ts from server routes
(e.g., app/api/ai/chat/route.ts) and queries.client.ts from client components
(e.g., app/favorites/page.tsx, app/discover/page.tsx, components/*,
app/profile/page.tsx) to ensure server code uses the server helper and client
code uses the client helper.
```diff
@@ -1,3 +1,5 @@
+// utils/supabase/server.ts
+
 import { createServerClient } from "@supabase/ssr";
```
🛠️ Refactor suggestion | 🟠 Major
Enforce server-only usage.
Prevent accidental client imports.
Apply this diff:
```diff
 import { createServerClient } from "@supabase/ssr";
 import { cookies } from "next/headers";
+import "server-only";
```

🤖 Prompt for AI Agents
In utils/supabase/server.ts around line 3, ensure this module can only be used
on the server by replacing the raw re-export of createServerClient with a
guarded factory: import createServerClient as before, then export a function
that checks if (typeof window !== "undefined") throw new
Error("utils/supabase/server must be used on the server"); otherwise call and
return createServerClient(...) so any accidental client-side import fails fast
and prevents client usage.
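A minimal sketch of the runtime guard that prompt describes (the `assertServerOnly` name is an assumption; in the real module the check would run at import time, before creating the Supabase client):

```typescript
// Fail fast if a server-only module is evaluated in a browser bundle,
// where `window` is defined on the global object.
function assertServerOnly(moduleName: string): void {
  const g = globalThis as { window?: unknown };
  if (typeof g.window !== "undefined") {
    throw new Error(`${moduleName} must only be used on the server`);
  }
}

assertServerOnly("utils/supabase/server"); // no-op under Node
```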
Force-pushed from 839460a to 7613b38
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (3)
- `.gitignore` (1 hunks)
- `package.json` (2 hunks)
- `tsconfig.json` (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- .gitignore
```json
    "embla-carousel-react": "^8.6.0",
    "hono": "^4.9.9",
    "langchain": "^0.3.34",
    "lucide-react": "^0.543.0",
```
Install @langchain/core to satisfy LangChain's peer dependency
Both langchain and @langchain/google-genai require @langchain/core to be installed explicitly. Without it, npm install will warn and the runtime import will fail when these packages resolve their peer dependency. (js.langchain.com)
Apply this diff to add the missing dependency:
```diff
     "hono": "^4.9.9",
     "langchain": "^0.3.34",
+    "@langchain/core": "^0.3.0",
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In package.json at line ~28, add the missing runtime dependency
"@langchain/core" to dependencies to satisfy the peer dependency required by
langchain and @langchain/google-genai; set its version to match the installed
langchain package version (or a compatible semver range), save it under
"dependencies", then run npm install (or yarn) to update lockfile so that
installs no longer warn or fail at runtime.
Summary by CodeRabbit