Sweetmantech/myc 3645 migrate all tools from chat into the new vercel api codebase #37
Conversation
…ate-tools-imagetools: Migrate tools - imageTools
Walkthrough

Replaces inline validation in the image generation route with a validator; adds an image generation + upload helper; registers new MCP tools for images, catalogs, and sora2; adds catalog analysis, Supabase helpers, OpenAI video wrappers, Telegram utilities, and related validation schemas; introduces a DEFAULT_MODEL constant.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant APIRoute as Image API Route
    participant Validator as validateGenerateImageQuery
    participant ImageProcessor as generateAndProcessImage
    participant ImageGen as generateImage
    participant Uploader as uploadImageAndCreateMoment

    Client->>APIRoute: GET /api/image/generate?prompt=...&account_id=...(&files=...)
    APIRoute->>Validator: validateGenerateImageQuery(request)
    Validator->>Validator: parse searchParams & safeParse schema
    alt validation fails
        Validator-->>APIRoute: NextResponse (400 + CORS)
        APIRoute-->>Client: 400 error
    else validation succeeds
        Validator-->>APIRoute: { prompt, account_id }
        APIRoute->>ImageProcessor: generateAndProcessImage(prompt, account_id, files?)
        ImageProcessor->>ImageProcessor: validate inputs, map FileInput→FilePart
        ImageProcessor->>ImageGen: generateImage(prompt, fileParts)
        ImageGen-->>ImageProcessor: { image } (or error)
        alt image generated
            ImageProcessor->>Uploader: uploadImageAndCreateMoment({ image, prompt, account: account_id })
            Uploader-->>ImageProcessor: { imageUrl }
            ImageProcessor-->>APIRoute: { imageUrl }
            APIRoute-->>Client: 200 { imageUrl }
        else generation failed
            ImageProcessor-->>APIRoute: throw / error
            APIRoute-->>Client: 500 error
        end
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 0
🧹 Nitpick comments (2)
lib/mcp/tools/images/registerEditImageTool.ts (1)
18-31: Avoid hard-coding PNG MIME type unless inputs are guaranteed PNG

The `files` array always uses `type: "image/png"`:

```ts
[
  {
    url: args.imageUrl,
    type: "image/png",
  },
]
```

If callers can supply non-PNG images (e.g., JPEG/WebP), this MIME type may be inaccurate and could affect downstream handling in `generateAndProcessImage` / the underlying model.

If inputs aren't guaranteed PNG, consider either:

- Deriving the media type from the URL or metadata, or
- Extending the schema to accept a `mimeType`/`imageType` field and wiring that through instead of hard-coding `"image/png"`.

lib/image/generateAndProcessImage.ts (1)
14-40: Align JSDoc with return type and verify `FilePart` expectations for file inputs

Two small points here:

- The JSDoc `@returns` says "The Arweave URL of the generated/processed image", but the function actually returns a `GenerateAndProcessImageResult` object with an `imageUrl` field. Consider updating the comment to reflect the object shape.
- When mapping `FileInput` to `FilePart`:

```ts
const fileParts: FilePart[] | undefined = files?.map(file => ({
  type: "file" as const,
  data: file.url,
  mediaType: file.type,
}));
```

`data` is being set to a URL string. Please double-check the `FilePart` type from the `ai` package to confirm that passing a URL as `data` with `mediaType` is supported; if it expects binary content instead, you may need to adjust this mapping (e.g., fetch or otherwise supply the actual file bytes, or use the recommended URL mechanism for that library).
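If `FilePart` does turn out to require raw bytes rather than a URL string, one possible adjustment is sketched below. This is an illustration only, not the library's documented usage: the `toFileParts` helper name is invented here, and it assumes `FileInput` carries `url` and `type` fields as in the mapping above.

```typescript
import type { FilePart } from "ai";

// Hypothetical input shape, mirroring the mapping quoted above.
interface FileInput {
  url: string;
  type: string;
}

// Fetch each file and hand the model raw bytes instead of a URL string.
async function toFileParts(files?: FileInput[]): Promise<FilePart[] | undefined> {
  if (!files) return undefined;
  return Promise.all(
    files.map(async file => {
      const response = await fetch(file.url);
      if (!response.ok) {
        throw new Error(`Failed to fetch ${file.url}: ${response.status}`);
      }
      const data = new Uint8Array(await response.arrayBuffer());
      return { type: "file" as const, data, mediaType: file.type };
    }),
  );
}
```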
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
- app/api/image/generate/route.ts (2 hunks)
- lib/image/generateAndProcessImage.ts (1 hunks)
- lib/image/validateEditImageQuery.ts (1 hunks)
- lib/image/validateGenerateImageQuery.ts (1 hunks)
- lib/mcp/tools/images/index.ts (1 hunks)
- lib/mcp/tools/images/registerEditImageTool.ts (1 hunks)
- lib/mcp/tools/images/registerGenerateImageTool.ts (1 hunks)
- lib/mcp/tools/index.ts (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (7)
lib/mcp/tools/images/registerEditImageTool.ts (2)
- lib/image/validateEditImageQuery.ts (2)
  - editImageQuerySchema (4-6)
  - EditImageQuery (8-8)
- lib/image/generateAndProcessImage.ts (2)
  - GenerateAndProcessImageResult (5-7)
  - generateAndProcessImage (22-59)

lib/mcp/tools/images/registerGenerateImageTool.ts (2)
- lib/image/validateGenerateImageQuery.ts (2)
  - generateImageQuerySchema (5-15)
  - GenerateImageQuery (17-17)
- lib/image/generateAndProcessImage.ts (2)
  - GenerateAndProcessImageResult (5-7)
  - generateAndProcessImage (22-59)

lib/image/generateAndProcessImage.ts (1)
- lib/arweave/uploadImageAndCreateMoment.ts (1)
  - uploadImageAndCreateMoment (27-74)

lib/mcp/tools/images/index.ts (2)
- lib/mcp/tools/images/registerGenerateImageTool.ts (1)
  - registerGenerateImageTool (17-40)
- lib/mcp/tools/images/registerEditImageTool.ts (1)
  - registerEditImageTool (14-43)

lib/image/validateGenerateImageQuery.ts (1)
- lib/networking/getCorsHeaders.ts (1)
  - getCorsHeaders (6-12)

app/api/image/generate/route.ts (1)
- lib/image/validateGenerateImageQuery.ts (1)
  - validateGenerateImageQuery (25-54)

lib/mcp/tools/index.ts (1)
- lib/mcp/tools/images/index.ts (1)
  - registerAllImageTools (10-13)
🔇 Additional comments (6)
lib/mcp/tools/index.ts (1)
5-17: Image tools correctly integrated into global MCP tool registration

Importing `registerAllImageTools` and invoking it inside `registerAllTools` keeps the single entry point pattern for tool setup and cleanly adds the new image capabilities without changing the existing API surface.

app/api/image/generate/route.ts (1)
4-34: Centralized query validation and early-return flow look solid

Delegating query parsing to `validateGenerateImageQuery` and short-circuiting when it returns a `NextResponse` simplifies the handler and ensures consistent 400 responses with CORS headers. Using the validated `{ prompt, account_id }` while still reading `files` from `searchParams` preserves previous behavior.
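For readers unfamiliar with that union-return pattern, here is a minimal sketch of how the route can consume it (signatures taken from the code-graph sections in this review; the handler body is elided and illustrative):

```typescript
import { NextRequest, NextResponse } from "next/server";
import { validateGenerateImageQuery } from "@/lib/image/validateGenerateImageQuery";

export async function GET(request: NextRequest) {
  const result = validateGenerateImageQuery(request);
  // On failure the validator returns a ready-made 400 response with CORS headers...
  if (result instanceof NextResponse) return result;
  // ...on success it returns the parsed, typed query values.
  const { prompt, account_id } = result;
  // ...generate and upload the image, then respond with 200 { imageUrl }
}
```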
lib/mcp/tools/images/index.ts (1)

1-13: Image tool aggregator is clean and consistent

`registerAllImageTools` cleanly groups the generate and edit image tools behind a single entry point, matching the pattern used for other tool families.

lib/mcp/tools/images/registerGenerateImageTool.ts (1)

1-40: `generate_image` tool registration and handler wiring look correct

The tool uses the shared `generateImageQuerySchema`, passes `prompt` and `account_id` through to `generateAndProcessImage`, and returns a well-formed MCP response payload. The structure is consistent with the rest of the MCP tooling.

lib/image/validateGenerateImageQuery.ts (1)

5-53: Shared query validation helper is well-structured

The `generateImageQuerySchema` and `validateGenerateImageQuery` helper cleanly centralize query parsing and error handling. Using `safeParse` on `searchParams` and returning a structured 400 with CORS headers on failure should make both the route handler and any future consumers more consistent and easier to maintain.

lib/image/validateEditImageQuery.ts (1)

4-6: > Likely an incorrect or invalid review comment.
```ts
[
  {
    url: args.imageUrl,
    type: "image/png",
```
The `edit_image` tool hardcodes the image MIME type as `"image/png"` when passing the image to `generateAndProcessImage`, which will fail for images in other formats (JPG, WebP, GIF, etc.).
📝 Patch Details
```diff
diff --git a/lib/image/getMimeTypeFromUrl.ts b/lib/image/getMimeTypeFromUrl.ts
new file mode 100644
index 0000000..87a288b
--- /dev/null
+++ b/lib/image/getMimeTypeFromUrl.ts
@@ -0,0 +1,44 @@
+/**
+ * Extracts the MIME type from an image URL based on file extension.
+ * Supports PNG, JPEG, WebP, GIF, SVG, HEIC, and HEIF formats.
+ * Falls back to image/jpeg if the extension cannot be determined.
+ *
+ * @param url - The image URL
+ * @returns The MIME type for the image
+ */
+export function getMimeTypeFromUrl(url: string): string {
+  try {
+    // Extract the path from the URL
+    const urlObj = new URL(url);
+    const pathname = urlObj.pathname.toLowerCase();
+
+    // Extract file extension (handles query parameters)
+    const match = pathname.match(/\.([a-z0-9]+)(?:\?|$)/i);
+    const extension = match?.[1]?.toLowerCase();
+
+    // Map extensions to MIME types
+    switch (extension) {
+      case "png":
+        return "image/png";
+      case "jpg":
+      case "jpeg":
+        return "image/jpeg";
+      case "webp":
+        return "image/webp";
+      case "gif":
+        return "image/gif";
+      case "svg":
+        return "image/svg+xml";
+      case "heic":
+        return "image/heic";
+      case "heif":
+        return "image/heif";
+      default:
+        // Fallback to JPEG if extension cannot be determined
+        return "image/jpeg";
+    }
+  } catch {
+    // If URL parsing fails, default to image/jpeg
+    return "image/jpeg";
+  }
+}
diff --git a/lib/mcp/tools/images/registerEditImageTool.ts b/lib/mcp/tools/images/registerEditImageTool.ts
index e7a00cf..177fa29 100644
--- a/lib/mcp/tools/images/registerEditImageTool.ts
+++ b/lib/mcp/tools/images/registerEditImageTool.ts
@@ -4,6 +4,7 @@ import {
   type GenerateAndProcessImageResult,
 } from "@/lib/image/generateAndProcessImage";
 import { editImageQuerySchema, type EditImageQuery } from "@/lib/image/validateEditImageQuery";
+import { getMimeTypeFromUrl } from "@/lib/image/getMimeTypeFromUrl";

 /**
  * Registers the "edit_image" tool on the MCP server.
@@ -25,7 +26,7 @@ export function registerEditImageTool(server: McpServer): void {
         [
           {
             url: args.imageUrl,
-            type: "image/png",
+            type: getMimeTypeFromUrl(args.imageUrl),
           },
         ],
       );
```
Analysis
Hardcoded MIME type in edit_image tool fails for non-PNG images
What fails: The `registerEditImageTool()` function hardcodes `type: "image/png"` for all images passed to the Gemini API, causing JPEG, WebP, GIF, and other image formats to be sent with incorrect MIME type metadata.
How to reproduce:
```ts
// Call edit_image with a JPEG image URL
const result = await registerEditImageTool(server);
// Tool receives a URL like: https://example.com/photo.jpg
// But declares it as: type: "image/png"
```

Result: The Gemini API receives incorrect MIME type metadata for non-PNG images. According to Google Gemini API validation behavior, the API validates MIME types and can reject requests with unsupported types or mismatched content types, leading to API errors or incorrect image processing.
Expected: The MIME type should be extracted from the URL to accurately reflect the actual image format (`image/jpeg` for .jpg files, `image/webp` for .webp files, etc.), or users should be able to specify the format explicitly.

Fix: Created a `getMimeTypeFromUrl()` utility function in lib/image/getMimeTypeFromUrl.ts that extracts the MIME type from the URL file extension with proper fallback handling, and updated `registerEditImageTool()` to use it instead of hardcoding `image/png`.
…ate-tools-catalogtools: Migrate tools - catalogTools
Actionable comments posted: 7
🧹 Nitpick comments (12)
lib/catalog/processBatchesInParallel.ts (1)
18-27: Consider limiting concurrent batch processing for large datasets.

When `songs.length` is very large (e.g., 10,000 songs), this creates 100 concurrent AI API calls via `Promise.all`. This could hit rate limits, cause memory pressure, or result in timeouts.

Consider using a concurrency limiter for large batch counts:

```ts
import pLimit from 'p-limit';

const MAX_CONCURRENT_BATCHES = 5;

export async function processBatchesInParallel(
  songs: CatalogSongWithArtists[],
  criteria: string,
): Promise<CatalogSongWithArtists[]> {
  const batches = [];
  for (let i = 0; i < songs.length; i += BATCH_SIZE) {
    batches.push(songs.slice(i, i + BATCH_SIZE));
  }

  const limit = pLimit(MAX_CONCURRENT_BATCHES);
  const batchPromises = batches.map(batch =>
    limit(() => analyzeCatalogBatch(batch, criteria)),
  );

  const results = await Promise.all(batchPromises);
  return results.flat();
}
```

Alternatively, if you prefer not to add a dependency, implement a simple queue:

```ts
const MAX_CONCURRENT_BATCHES = 5;

export async function processBatchesInParallel(
  songs: CatalogSongWithArtists[],
  criteria: string,
): Promise<CatalogSongWithArtists[]> {
  const batches = [];
  for (let i = 0; i < songs.length; i += BATCH_SIZE) {
    batches.push(songs.slice(i, i + BATCH_SIZE));
  }

  const results: CatalogSongWithArtists[][] = [];
  for (let i = 0; i < batches.length; i += MAX_CONCURRENT_BATCHES) {
    const chunk = batches.slice(i, i + MAX_CONCURRENT_BATCHES);
    const chunkResults = await Promise.all(
      chunk.map(batch => analyzeCatalogBatch(batch, criteria)),
    );
    results.push(...chunkResults);
  }
  return results.flat();
}
```

lib/catalog/analyzeCatalogBatch.ts (1)
26-40: Consider token limit implications for large batches.

With `BATCH_SIZE = 100`, the prompt could become very large when songs have long names or many artists. While 100 songs is reasonable for most cases, consider monitoring prompt token usage to ensure it stays within model limits (especially for older or smaller models).

Optional: Add logging to track prompt size for monitoring:

```ts
const songsData = songs.map(s => ({
  isrc: s.isrc,
  name: s.name,
  artist: s.artists.map(a => a.name).join(", "),
}));
const songsJson = JSON.stringify(songsData, null, 2);

// Optional: Log for monitoring
if (process.env.NODE_ENV === 'development') {
  console.log(`Batch analysis prompt size: ~${songsJson.length} chars for ${songs.length} songs`);
}

const { object } = await generateObject({
  model: DEFAULT_MODEL,
  schema: z.object({
    selected_song_isrcs: z
      .array(z.string())
      .describe("Array of song ISRCs that match the criteria"),
  }),
  prompt: `Given these songs and the criteria: "${criteria}", select the song ISRCs that are most relevant.

Songs:
${songsJson}

Return only the ISRCs of songs that match the criteria.`,
});
```

lib/mcp/tools/catalogs/registerInsertCatalogSongsTool.ts (1)
34-45: Consider adding error handling for the async operation.

If `insertCatalogSongsFunction` throws, the error will propagate unhandled. Wrapping in try-catch with a meaningful error response improves reliability and user feedback.

```diff
 async (args: InsertCatalogSongsQuery) => {
-  const response = await insertCatalogSongsFunction(args.songs);
-
-  return {
-    content: [
-      {
-        type: "text",
-        text: JSON.stringify(response),
-      },
-    ],
-  };
+  try {
+    const response = await insertCatalogSongsFunction(args.songs);
+    return {
+      content: [
+        {
+          type: "text",
+          text: JSON.stringify(response),
+        },
+      ],
+    };
+  } catch (error) {
+    return {
+      content: [
+        {
+          type: "text",
+          text: JSON.stringify({
+            error: error instanceof Error ? error.message : "Failed to insert catalog songs",
+          }),
+        },
+      ],
+      isError: true,
+    };
+  }
 },
```

lib/catalog/validateGetCatalogsQuery.ts (1)
26-38: The `missing_fields` value is an array, which may not match the expected response format.

`firstError.path` is a `(string | number)[]`. If the intent is to return field names as a list, this is fine. However, if consumers expect a string, this could cause issues. Consider whether this should be `firstError.path.join('.')` or kept as-is with documentation.

```diff
   return NextResponse.json(
     {
       status: "error",
-      missing_fields: firstError.path,
+      missing_fields: firstError.path.length > 0 ? firstError.path : [firstError.code],
       error: firstError.message,
     },
```

lib/catalog/validateInsertCatalogSongsQuery.ts (1)
3-11: Consider adding optional fields supported by `insertCatalogSongsFunction`.

Based on the relevant code snippet from lib/catalog/insertCatalogSongs.ts, the function accepts additional optional fields (`name`, `album`, `notes`, `artists`). If these should be exposed via the MCP tool, extend the schema:

```diff
 const catalogSongInputSchema = z.object({
   catalog_id: z
     .string()
     .min(1, "catalog_id is required")
     .describe(
       "Catalog ID to which the song will be added. Get this from the select_catalogs tool.",
     ),
   isrc: z.string().min(1, "isrc is required").describe("Song ISRC to associate to the catalog"),
+  name: z.string().optional().describe("Song name"),
+  album: z.string().optional().describe("Album name"),
+  notes: z.string().optional().describe("Additional notes"),
+  artists: z.array(z.string()).optional().describe("Array of artist names"),
 });
```

lib/catalog/analyzeFullCatalog.ts (2)
34-44: Unbounded parallel fetches may overwhelm the database for large catalogs.

For catalogs with many songs (e.g., 10,000 songs = 100 pages), `Promise.all` fires all requests simultaneously. Consider chunking the parallel requests to limit concurrency.

```diff
+const MAX_CONCURRENT_PAGES = 10;
+
+async function fetchPagesInChunks(
+  catalogId: string,
+  pageNumbers: number[],
+  firstPageSongs: CatalogSongWithArtists[],
+): Promise<CatalogSongWithArtists[]> {
+  const results: CatalogSongWithArtists[] = [];
+  for (let i = 0; i < pageNumbers.length; i += MAX_CONCURRENT_PAGES) {
+    const chunk = pageNumbers.slice(i, i + MAX_CONCURRENT_PAGES);
+    const chunkResults = await Promise.all(
+      chunk.map(async pageNum =>
+        pageNum === 1
+          ? firstPageSongs
+          : (await selectCatalogSongsWithArtists({ catalogId, page: pageNum, limit: BATCH_SIZE })).songs,
+      ),
+    );
+    results.push(...chunkResults.flat());
+  }
+  return results;
+}
```

Then replace lines 36-44 with:

```ts
const allSongs = await fetchPagesInChunks(catalogId, pageNumbers, firstPage.songs);
```
21-28: Add error handling for database query failures.

If `selectCatalogSongsWithArtists` throws (network issue, database error), the error propagates unhandled. Consider wrapping with try-catch and providing a meaningful error message.
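A minimal sketch of that wrapping, assuming the `selectCatalogSongsWithArtists` call shape shown elsewhere in this review (`BATCH_SIZE` and the error wording are illustrative):

```typescript
import { selectCatalogSongsWithArtists } from "@/lib/supabase/catalog_songs/selectCatalogSongsWithArtists";

const BATCH_SIZE = 100; // assumed to match the module's batch size

// Convert low-level query failures into a contextual error for callers.
async function fetchFirstPageOrThrow(catalogId: string) {
  try {
    return await selectCatalogSongsWithArtists({ catalogId, page: 1, limit: BATCH_SIZE });
  } catch (error) {
    throw new Error(
      `Failed to fetch songs for catalog ${catalogId}: ${
        error instanceof Error ? error.message : String(error)
      }`,
    );
  }
}
```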
lib/supabase/account_catalogs/selectAccountCatalogs.ts (1)

21-55: Consider adding pagination support for large datasets.

The function returns all matching rows without pagination. For accounts with many catalogs, this could return large datasets. Consider adding optional `page` and `limit` parameters similar to `selectCatalogSongsWithArtists`.
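A rough sketch of what an optional `page`/`limit` variant could look like, using Supabase's `range` filter. The table and column names here are assumptions based on this review, not verified against the schema:

```typescript
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_KEY!);

// Hypothetical paginated variant of selectAccountCatalogs.
async function selectAccountCatalogsPage(accountIds: string[], page = 1, limit = 100) {
  const from = (page - 1) * limit;
  const { data, error } = await supabase
    .from("account_catalogs")
    .select("catalog, catalogs(*)") // embedded relation, per the type discussed below
    .in("account", accountIds) // column name assumed
    .range(from, from + limit - 1); // inclusive row window for this page
  if (error) throw error;
  return data;
}
```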
lib/catalog/insertCatalogSongs.ts (4)

36-52: ISRC grouping silently drops falsy ISRCs and overwrites metadata on duplicates

`dataByIsrc`:

- Skips entries where `song.isrc` is falsy, but `isrc` is typed as a required `string`. If empty strings are possible from upstream, those rows will be ignored without signaling an error.
- For multiple rows with the same ISRC but different `name`/`album`/`notes`/`artists`, the last one wins.

If this is not intended, consider:

- Validating `isrc` earlier and rejecting/flagging invalid rows instead of silently dropping them.
- Deciding a deterministic precedence strategy (e.g., first non-empty, or merging `artists` across duplicates) and documenting it.

57-66: Consider deduplicating `(catalog_id, isrc)` pairs before calling `insertCatalogSongs`

You currently call `insertCatalogSongs` with one row per input element, which means duplicates in the input (same catalog and ISRC) will produce duplicate relationship inserts unless `insertCatalogSongs` itself handles upserts/constraints.

If `insertCatalogSongs` expects unique pairs, consider:

- Deduping via a `Set` keyed by `${catalog_id}:${isrc}` (see the sketch after these comments), or
- Letting `insertCatalogSongs` accept a `distinct` flag and handle it centrally.

This avoids potential constraint violations and unnecessary work if clients send redundant rows.

68-75: Returned "created catalog songs" may include pre-existing relationships

The combination of:

- `selectCatalogSongsWithArtists({ isrcs })`, and
- filtering only by `catalog_id` (`uniqueCatalogIds.includes(catalogSong.catalog_id)`)

means the result can include catalog-song rows that already existed for those ISRCs before this call, as long as they match one of the catalogs in `uniqueCatalogIds`.

If the intent is "full current state for those catalogs and ISRCs", this is fine; if the intent is "only relationships newly created by this operation", you'll likely need to:

- Use the return value of `insertCatalogSongs` (if it returns inserted rows), or
- Filter by a more specific key set (e.g., the exact `(catalog_id, isrc)` pairs you attempted to insert, possibly tied to a timestamp or primary key).

Worth clarifying the expected semantics for callers.

77-85: Pagination fields are effectively static single-page metadata

The pagination object always reports:

- `page: 1`
- `total_pages: 1`
- `limit: filteredCatalogSongs.length`

So this is logically "all results, single page", regardless of input size. If the callers only ever use this in a non-paginated context, that's fine, but if they expect real pagination semantics (e.g., they pass page/limit somewhere upstream), consider:

- Threading requested `page`/`limit` through the function and applying slicing here, or
- Removing/renaming the pagination wrapper to avoid implying generic paging support.
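As a concrete illustration of the `Set`-based dedup suggested in the 57-66 comment above, a minimal sketch (assuming input rows carry `catalog_id` and `isrc` as in the validation schema; the helper name is invented):

```typescript
interface CatalogSongInput {
  catalog_id: string;
  isrc: string;
}

// Drop repeated (catalog_id, isrc) pairs while preserving first-seen order.
function dedupeCatalogSongs(songs: CatalogSongInput[]): CatalogSongInput[] {
  const seen = new Set<string>();
  return songs.filter(song => {
    const key = `${song.catalog_id}:${song.isrc}`;
    if (seen.has(key)) return false;
    seen.add(key);
    return true;
  });
}
```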
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (16)
- lib/catalog/analyzeCatalogBatch.ts (1 hunks)
- lib/catalog/analyzeFullCatalog.ts (1 hunks)
- lib/catalog/insertCatalogSongs.ts (1 hunks)
- lib/catalog/processBatchesInParallel.ts (1 hunks)
- lib/catalog/refineResults.ts (1 hunks)
- lib/catalog/validateGetCatalogSongsQuery.ts (1 hunks)
- lib/catalog/validateGetCatalogsQuery.ts (1 hunks)
- lib/catalog/validateInsertCatalogSongsQuery.ts (1 hunks)
- lib/const.ts (1 hunks)
- lib/mcp/tools/catalogs/index.ts (1 hunks)
- lib/mcp/tools/catalogs/registerGetCatalogSongsTool.ts (1 hunks)
- lib/mcp/tools/catalogs/registerGetCatalogsTool.ts (1 hunks)
- lib/mcp/tools/catalogs/registerInsertCatalogSongsTool.ts (1 hunks)
- lib/mcp/tools/index.ts (2 hunks)
- lib/supabase/account_catalogs/selectAccountCatalogs.ts (1 hunks)
- lib/supabase/catalog_songs/selectCatalogSongsWithArtists.ts (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (11)
lib/mcp/tools/catalogs/registerGetCatalogSongsTool.ts (2)
- lib/catalog/validateGetCatalogSongsQuery.ts (2)
  - getCatalogSongsQuerySchema (3-16)
  - GetCatalogSongsQuery (18-18)
- lib/catalog/analyzeFullCatalog.ts (1)
  - analyzeFullCatalog (21-54)

lib/catalog/analyzeCatalogBatch.ts (2)
- lib/supabase/catalog_songs/selectCatalogSongsWithArtists.ts (1)
  - CatalogSongWithArtists (4-8)
- lib/const.ts (1)
  - DEFAULT_MODEL (8-8)

lib/catalog/refineResults.ts (2)
- lib/supabase/catalog_songs/selectCatalogSongsWithArtists.ts (1)
  - CatalogSongWithArtists (4-8)
- lib/catalog/processBatchesInParallel.ts (1)
  - processBatchesInParallel (14-27)

lib/catalog/processBatchesInParallel.ts (2)
- lib/supabase/catalog_songs/selectCatalogSongsWithArtists.ts (1)
  - CatalogSongWithArtists (4-8)
- lib/catalog/analyzeCatalogBatch.ts (1)
  - analyzeCatalogBatch (14-44)

lib/mcp/tools/catalogs/registerGetCatalogsTool.ts (2)
- lib/catalog/validateGetCatalogsQuery.ts (2)
  - getCatalogsQuerySchema (5-10)
  - GetCatalogsQuery (12-12)
- lib/supabase/account_catalogs/selectAccountCatalogs.ts (1)
  - selectAccountCatalogs (21-55)

lib/mcp/tools/catalogs/registerInsertCatalogSongsTool.ts (2)
- lib/catalog/validateInsertCatalogSongsQuery.ts (2)
  - insertCatalogSongsQuerySchema (13-18)
  - InsertCatalogSongsQuery (20-20)
- lib/catalog/insertCatalogSongs.ts (1)
  - insertCatalogSongsFunction (32-86)

lib/catalog/validateGetCatalogsQuery.ts (1)
- lib/networking/getCorsHeaders.ts (1)
  - getCorsHeaders (6-12)

lib/mcp/tools/index.ts (2)
- lib/mcp/tools/images/index.ts (1)
  - registerAllImageTools (10-13)
- lib/mcp/tools/catalogs/index.ts (1)
  - registerAllCatalogTools (11-15)

lib/supabase/account_catalogs/selectAccountCatalogs.ts (1)
- types/database.types.ts (1)
  - Tables (3755-3780)

lib/catalog/analyzeFullCatalog.ts (2)
- lib/supabase/catalog_songs/selectCatalogSongsWithArtists.ts (2)
  - CatalogSongWithArtists (4-8)
  - selectCatalogSongsWithArtists (42-113)
- lib/catalog/refineResults.ts (1)
  - refineResults (14-25)

lib/mcp/tools/catalogs/index.ts (3)
- lib/mcp/tools/catalogs/registerGetCatalogsTool.ts (1)
  - registerGetCatalogsTool (13-42)
- lib/mcp/tools/catalogs/registerGetCatalogSongsTool.ts (1)
  - registerGetCatalogSongsTool (14-54)
- lib/mcp/tools/catalogs/registerInsertCatalogSongsTool.ts (1)
  - registerInsertCatalogSongsTool (14-47)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Vercel Agent Review
🔇 Additional comments (6)
lib/const.ts (1)
8-8: DEFAULT_MODEL is fine; confirm this is truly the global default across environments

The literal `"openai/gpt-5-mini"` as a shared `DEFAULT_MODEL` export looks reasonable. Just make sure this is the model you want in all environments (dev/stage/prod); if not, consider `process.env.DEFAULT_MODEL ?? "openai/gpt-5-mini"` so you can override without code changes.

lib/mcp/tools/index.ts (1)

5-6: LGTM! Clean integration of new tool categories.

The imports and registration calls for image and catalog tools follow the existing pattern and integrate cleanly with the tool registration flow.

Also applies to: 18-19

lib/supabase/catalog_songs/selectCatalogSongsWithArtists.ts (1)

4-8: LGTM! Appropriate type exposure.

Making `CatalogSongWithArtists` publicly exported enables downstream catalog processing modules to reference this type directly, which aligns with the broader migration objectives.

lib/mcp/tools/catalogs/index.ts (1)

1-15: LGTM! Clean barrel module pattern.

The catalog tool registration aggregation follows the established pattern and provides a clean single entry point for registering all catalog-related MCP tools.

lib/catalog/validateGetCatalogSongsQuery.ts (1)

1-18: LGTM!

Clean Zod schema definition with appropriate validations and descriptive metadata. The schema correctly enforces required fields and provides helpful descriptions for MCP tool documentation.

lib/catalog/insertCatalogSongs.ts (1)

1-6: Imports and overall structure look consistent

The imports match usage in this module, and the high-level flow (normalize → processSongsInput → insert relationships → re-query with artists) is coherent.
```ts
export async function analyzeCatalogBatch(
  songs: CatalogSongWithArtists[],
  criteria: string,
): Promise<CatalogSongWithArtists[]> {
  // Use AI to select relevant songs from this batch
  const { object } = await generateObject({
    model: DEFAULT_MODEL,
    schema: z.object({
      selected_song_isrcs: z
        .array(z.string())
        .describe("Array of song ISRCs that match the criteria"),
    }),
    prompt: `Given these songs and the criteria: "${criteria}", select the song ISRCs that are most relevant.

Songs:
${JSON.stringify(
  songs.map(s => ({
    isrc: s.isrc,
    name: s.name,
    artist: s.artists.map(a => a.name).join(", "),
  })),
  null,
  2,
)}

Return only the ISRCs of songs that match the criteria.`,
  });

  // Filter songs based on AI selection
  return songs.filter(song => (object.selected_song_isrcs as string[]).includes(song.isrc));
}
```
🛠️ Refactor suggestion | 🟠 Major
Add error handling for AI generation call.
The `generateObject` call lacks error handling. AI API calls can fail due to rate limits, network issues, invalid prompts, or model errors.
Wrap the AI call in a try-catch:
```diff
 export async function analyzeCatalogBatch(
   songs: CatalogSongWithArtists[],
   criteria: string,
 ): Promise<CatalogSongWithArtists[]> {
+  try {
     // Use AI to select relevant songs from this batch
     const { object } = await generateObject({
       model: DEFAULT_MODEL,
       schema: z.object({
         selected_song_isrcs: z
           .array(z.string())
           .describe("Array of song ISRCs that match the criteria"),
       }),
       prompt: `Given these songs and the criteria: "${criteria}", select the song ISRCs that are most relevant.

 Songs:
 ${JSON.stringify(
   songs.map(s => ({
     isrc: s.isrc,
     name: s.name,
     artist: s.artists.map(a => a.name).join(", "),
   })),
   null,
   2,
 )}

 Return only the ISRCs of songs that match the criteria.`,
     });

     // Filter songs based on AI selection
     const selectedIsrcs = object.selected_song_isrcs;
     if (!Array.isArray(selectedIsrcs)) {
       console.warn('AI response did not include valid selected_song_isrcs array');
       return [];
     }

     return songs.filter(song => selectedIsrcs.includes(song.isrc));
+  } catch (error) {
+    console.error(`Failed to analyze batch of ${songs.length} songs:`, error);
+    // Return empty array or all songs depending on desired failure mode
+    return []; // Conservative: exclude batch on error
+    // OR: return songs; // Permissive: include all songs on error
+  }
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
export async function analyzeCatalogBatch(
  songs: CatalogSongWithArtists[],
  criteria: string,
): Promise<CatalogSongWithArtists[]> {
  try {
    // Use AI to select relevant songs from this batch
    const { object } = await generateObject({
      model: DEFAULT_MODEL,
      schema: z.object({
        selected_song_isrcs: z
          .array(z.string())
          .describe("Array of song ISRCs that match the criteria"),
      }),
      prompt: `Given these songs and the criteria: "${criteria}", select the song ISRCs that are most relevant.

Songs:
${JSON.stringify(
  songs.map(s => ({
    isrc: s.isrc,
    name: s.name,
    artist: s.artists.map(a => a.name).join(", "),
  })),
  null,
  2,
)}

Return only the ISRCs of songs that match the criteria.`,
    });

    // Filter songs based on AI selection
    const selectedIsrcs = object.selected_song_isrcs;
    if (!Array.isArray(selectedIsrcs)) {
      console.warn('AI response did not include valid selected_song_isrcs array');
      return [];
    }

    return songs.filter(song => selectedIsrcs.includes(song.isrc));
  } catch (error) {
    console.error(`Failed to analyze batch of ${songs.length} songs:`, error);
    // Return empty array or all songs depending on desired failure mode
    return []; // Conservative: exclude batch on error
    // OR: return songs; // Permissive: include all songs on error
  }
}
```
```ts
  });

  // Filter songs based on AI selection
  return songs.filter(song => (object.selected_song_isrcs as string[]).includes(song.isrc));
```
🛠️ Refactor suggestion | 🟠 Major
Validate AI response before type assertion.
Line 43 uses a type assertion without validating the structure. While the schema constrains `selected_song_isrcs` to be an array of strings, runtime validation adds robustness against unexpected AI responses.
Apply this diff to add validation:
```diff
   // Filter songs based on AI selection
-  return songs.filter(song => (object.selected_song_isrcs as string[]).includes(song.isrc));
+  const selectedIsrcs = object.selected_song_isrcs;
+  if (!Array.isArray(selectedIsrcs)) {
+    console.warn('AI response did not include valid selected_song_isrcs array');
+    return [];
+  }
+
+  return songs.filter(song => selectedIsrcs.includes(song.isrc));
 }
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In lib/catalog/analyzeCatalogBatch.ts around line 43, the code blindly asserts
object.selected_song_isrcs as string[] and uses .includes, which can throw if
the AI response is malformed; validate at runtime by checking
Array.isArray(object.selected_song_isrcs) and that every element is a string (or
coerce/filter non-strings), assign a safe string[] (e.g., [] when invalid), then
use that validated array in songs.filter(...). If invalid, either log/record the
anomaly or fall back to an empty list so the filter is safe.
```ts
export async function refineResults(
  songs: CatalogSongWithArtists[],
  criteria: string,
): Promise<CatalogSongWithArtists[]> {
  if (songs.length <= MAX_RESULTS) return songs;

  // Process in parallel batches - AI naturally selects best matches from whatever set it's given
  const filtered = await processBatchesInParallel(songs, criteria);

  // Recursively refine if still too many
  return refineResults(filtered, criteria);
}
```
Add recursion depth limit to prevent stack overflow.
The recursive refinement lacks a depth limit and assumes `processBatchesInParallel` will always reduce the result count. If AI filtering is ineffective or inconsistent, this could lead to excessive recursion or even stack overflow.
Apply this diff to add a depth limit:
```diff
+const MAX_DEPTH = 10;
+
 /**
  * Recursively filters song selection until results are under MAX_RESULTS
  * Single Responsibility: Ensure result count stays within LLM context limits
  *
  * @param songs - The songs to refine
  * @param criteria - The criteria to use to refine the songs
+ * @param depth - Current recursion depth (internal use)
  * @returns The refined songs
  */
 export async function refineResults(
   songs: CatalogSongWithArtists[],
   criteria: string,
+  depth: number = 0,
 ): Promise<CatalogSongWithArtists[]> {
   if (songs.length <= MAX_RESULTS) return songs;
+
+  if (depth >= MAX_DEPTH) {
+    console.warn(`Max refinement depth (${MAX_DEPTH}) reached with ${songs.length} songs. Truncating results.`);
+    return songs.slice(0, MAX_RESULTS);
+  }

   // Process in parallel batches - AI naturally selects best matches from whatever set it's given
   const filtered = await processBatchesInParallel(songs, criteria);
+
+  // Guard against cases where filtering doesn't reduce the count
+  if (filtered.length >= songs.length) {
+    console.warn(`Refinement made no progress. Truncating ${filtered.length} songs to ${MAX_RESULTS}.`);
+    return filtered.slice(0, MAX_RESULTS);
+  }

   // Recursively refine if still too many
-  return refineResults(filtered, criteria);
+  return refineResults(filtered, criteria, depth + 1);
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
const MAX_DEPTH = 10;

export async function refineResults(
  songs: CatalogSongWithArtists[],
  criteria: string,
  depth: number = 0,
): Promise<CatalogSongWithArtists[]> {
  if (songs.length <= MAX_RESULTS) return songs;

  if (depth >= MAX_DEPTH) {
    console.warn(`Max refinement depth (${MAX_DEPTH}) reached with ${songs.length} songs. Truncating results.`);
    return songs.slice(0, MAX_RESULTS);
  }

  // Process in parallel batches - AI naturally selects best matches from whatever set it's given
  const filtered = await processBatchesInParallel(songs, criteria);

  // Guard against cases where filtering doesn't reduce the count
  if (filtered.length >= songs.length) {
    console.warn(`Refinement made no progress. Truncating ${filtered.length} songs to ${MAX_RESULTS}.`);
    return filtered.slice(0, MAX_RESULTS);
  }

  // Recursively refine if still too many
  return refineResults(filtered, criteria, depth + 1);
}
```
🤖 Prompt for AI Agents
In lib/catalog/refineResults.ts around lines 14-25, the recursive refineResults
has no depth limit; add a MAX_RECURSION_DEPTH constant and an optional depth
parameter (e.g. depth = 0) to the function signature, increment depth on each
recursive call and stop recursing when depth >= MAX_RECURSION_DEPTH by returning
the current songs (or logging/warning) instead of recursing further; update the
initial call sites to use the new signature where necessary so recursion is
bounded.
```ts
export function validateGetCatalogsQuery(request: NextRequest): NextResponse | GetCatalogsQuery {
  const { searchParams } = new URL(request.url);
  const params = Object.fromEntries(searchParams.entries());

  const validationResult = getCatalogsQuerySchema.safeParse(params);

  if (!validationResult.success) {
    const firstError = validationResult.error.issues[0];
    return NextResponse.json(
      {
        status: "error",
        missing_fields: firstError.path,
        error: firstError.message,
      },
      {
        status: 400,
        headers: getCorsHeaders(),
      },
    );
  }

  return validationResult.data;
}
```
🧩 Analysis chain
🏁 Script executed:

```sh
# Search for all usages of validateGetCatalogsQuery
rg -n "validateGetCatalogsQuery" --type=ts --type=tsx --type=js --type=jsx
```

Repository: Recoupable-com/Recoup-API
Length of output: 96

🏁 Script executed:

```sh
# Search for all usages of validateGetCatalogsQuery
rg -n "validateGetCatalogsQuery"
```

Repository: Recoupable-com/Recoup-API
Length of output: 310

🏁 Script executed:

```sh
# Read the registerGetCatalogsTool.ts file to see how validateGetCatalogsQuery is used
cat -n lib/mcp/tools/catalogs/registerGetCatalogsTool.ts
```

Repository: Recoupable-com/Recoup-API
Length of output: 1799

🏁 Script executed:

```sh
# Search for route handler files that might use validateGetCatalogsQuery
# Look in common API route locations
find . -type f -name "route.ts" -o -name "route.js" | head -20
```

Repository: Recoupable-com/Recoup-API
Length of output: 631

🏁 Script executed:

```sh
# Check the catalogs route
cat -n app/api/catalogs/songs/route.ts
```

Repository: Recoupable-com/Recoup-API
Length of output: 1907

🏁 Script executed:

```sh
# Search for catalogs-related route files
find . -type f -path "*/api/catalogs*" -name "*.ts"
```

Repository: Recoupable-com/Recoup-API
Length of output: 102

🏁 Script executed:

```sh
# Search for validateGetCatalogsQuery in the handler files
rg -n "validateGetCatalogsQuery" lib/songs/
```

Repository: Recoupable-com/Recoup-API
Length of output: 51
Remove the `validateGetCatalogsQuery` function—it has no consumers.

The function is never called anywhere in the codebase. While imported in registerGetCatalogsTool.ts, only the `getCatalogsQuerySchema` and `GetCatalogsQuery` type are used; the validation function itself is dead code. MCP tools use the schema directly, and no route handlers invoke this function.
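For illustration, the trimmed import in registerGetCatalogsTool.ts would keep only the schema and type; the import path is inferred from the code-graph section above:

```typescript
import {
  getCatalogsQuerySchema,
  type GetCatalogsQuery,
} from "@/lib/catalog/validateGetCatalogsQuery";
```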
🤖 Prompt for AI Agents
In lib/catalog/validateGetCatalogsQuery.ts around lines 20 to 42, the
validateGetCatalogsQuery function is unused and should be removed; delete the
entire file to eliminate dead code and update any imports across the repo —
specifically remove the import of validateGetCatalogsQuery from
registerGetCatalogsTool.ts (leaving getCatalogsQuerySchema and GetCatalogsQuery
type imports intact) and run a quick search to ensure no other references remain
before committing.
```ts
async (args: GetCatalogSongsQuery) => {
  const response = await analyzeFullCatalog({
    catalogId: args.catalog_id,
    criteria: args.criteria,
  });

  return {
    content: [
      {
        type: "text",
        text: JSON.stringify(response),
      },
    ],
  };
},
```
🛠️ Refactor suggestion | 🟠 Major
Add error handling for async operations.
The handler calls `analyzeFullCatalog` without error handling. If the catalog analysis fails (e.g., database connection issues, AI model failures), the error will propagate unhandled to the MCP framework.
Apply this pattern to add error handling:
```diff
 async (args: GetCatalogSongsQuery) => {
+  try {
     const response = await analyzeFullCatalog({
       catalogId: args.catalog_id,
       criteria: args.criteria,
     });

     return {
       content: [
         {
           type: "text",
           text: JSON.stringify(response),
         },
       ],
     };
+  } catch (error) {
+    return {
+      content: [
+        {
+          type: "text",
+          text: JSON.stringify({
+            error: error instanceof Error ? error.message : "Failed to analyze catalog"
+          }),
+        },
+      ],
+      isError: true,
+    };
+  }
 },
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
async (args: GetCatalogSongsQuery) => {
  try {
    const response = await analyzeFullCatalog({
      catalogId: args.catalog_id,
      criteria: args.criteria,
    });

    return {
      content: [
        {
          type: "text",
          text: JSON.stringify(response),
        },
      ],
    };
  } catch (error) {
    return {
      content: [
        {
          type: "text",
          text: JSON.stringify({
            error: error instanceof Error ? error.message : "Failed to analyze catalog"
          }),
        },
      ],
      isError: true,
    };
  }
},
```
🤖 Prompt for AI Agents
In lib/mcp/tools/catalogs/registerGetCatalogSongsTool.ts around lines 38–52,
wrap the analyzeFullCatalog call in a try/catch: call analyzeFullCatalog inside
try, and in catch log the error (using the available logger or console.error)
and return a structured error response (e.g., content with type "text" and text
set to a JSON string containing an error message and optional error details)
instead of letting the exception propagate; ensure the success return stays in
the try block and the catch returns the same response shape the MCP expects.
```ts
  async (args: GetCatalogsQuery) => {
    const data = await selectAccountCatalogs({ accountIds: [args.account_id] });

    return {
      content: [
        {
          type: "text",
          text: JSON.stringify(data),
        },
      ],
    };
  },
);
```
🛠️ Refactor suggestion | 🟠 Major
Add error handling for database operations.
The handler calls `selectAccountCatalogs` without error handling. Database queries can fail due to connection issues, permission errors, or invalid data.
Apply this pattern:
```diff
 async (args: GetCatalogsQuery) => {
+  try {
     const data = await selectAccountCatalogs({ accountIds: [args.account_id] });

     return {
       content: [
         {
           type: "text",
           text: JSON.stringify(data),
         },
       ],
     };
+  } catch (error) {
+    return {
+      content: [
+        {
+          type: "text",
+          text: JSON.stringify({
+            error: error instanceof Error ? error.message : "Failed to fetch catalogs"
+          }),
+        },
+      ],
+      isError: true,
+    };
+  }
 },
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
async (args: GetCatalogsQuery) => {
  try {
    const data = await selectAccountCatalogs({ accountIds: [args.account_id] });

    return {
      content: [
        {
          type: "text",
          text: JSON.stringify(data),
        },
      ],
    };
  } catch (error) {
    return {
      content: [
        {
          type: "text",
          text: JSON.stringify({
            error: error instanceof Error ? error.message : "Failed to fetch catalogs"
          }),
        },
      ],
      isError: true,
    };
  }
},
```
🤖 Prompt for AI Agents
In lib/mcp/tools/catalogs/registerGetCatalogsTool.ts around lines 29 to 41, the
call to selectAccountCatalogs lacks error handling; wrap the await
selectAccountCatalogs(...) call in a try/catch, log the caught error (using the
module's logger or console if none), and return a safe failure response (e.g.,
content with an error text message or a standardized error object) instead of
letting the exception bubble; ensure any sensitive error details are not exposed
to the client and optionally include an internal error code for telemetry.
```ts
type AccountCatalogWithCatalog = {
  catalog: string;
  catalogs: Tables<"catalogs">[];
};
```
🧩 Analysis chain
🏁 Script executed:

```sh
cat lib/supabase/account_catalogs/selectAccountCatalogs.ts
```

Repository: Recoupable-com/Recoup-API
Length of output: 1351

🏁 Script executed:

```sh
# Search for database schema, migrations, or relationship definitions
fd -t f -e sql -e ts | grep -i "schema\|migration\|database" | head -20
```

Repository: Recoupable-com/Recoup-API
Length of output: 92

🏁 Script executed:

```sh
# Look for database types file and check the relationship
fd -t f "database.types.ts"
```

Repository: Recoupable-com/Recoup-API
Length of output: 92

🏁 Script executed:

```sh
# Search for how AccountCatalogWithCatalog is actually used in the codebase
rg -n "AccountCatalogWithCatalog" --type=ts -B 2 -A 3
```

Repository: Recoupable-com/Recoup-API
Length of output: 1094

🏁 Script executed:

```sh
cat types/database.types.ts | head -200
```

Repository: Recoupable-com/Recoup-API
Length of output: 5968

🏁 Script executed:

```sh
rg -n "catalogs:" types/database.types.ts -A 50 | head -80
```

Repository: Recoupable-com/Recoup-API
Length of output: 2406

🏁 Script executed:

```sh
# Check if there are any tests or other usages that might reveal the actual runtime behavior
rg -n "selectAccountCatalogs" --type=ts -B 2 -A 5
```

Repository: Recoupable-com/Recoup-API
Length of output: 2214
Type definition does not match query result shape.
The `catalog` column in `account_catalogs` has a many-to-one relationship to `catalogs`. When selecting this relationship via `catalogs!inner (...)`, Supabase returns a single nested object per row, not an array. The type should be corrected:
```diff
 type AccountCatalogWithCatalog = {
   catalog: string;
-  catalogs: Tables<"catalogs">[];
+  catalogs: Tables<"catalogs">;
 };
```

The `!inner` modifier filters rows where the relationship exists but does not affect the cardinality of the relationship itself.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
type AccountCatalogWithCatalog = {
  catalog: string;
  catalogs: Tables<"catalogs">;
};
```
🤖 Prompt for AI Agents
In lib/supabase/account_catalogs/selectAccountCatalogs.ts around lines 4 to 7,
the AccountCatalogWithCatalog type declares catalogs as an array but the
Supabase relationship selected via catalogs!inner returns a single nested object
per row; update the type so catalogs is a single Tables<"catalogs"> (not an
array) and keep catalog (ID) as-is; no runtime logic change needed because
!inner filters existence but does not change cardinality.
…ate-tools-sora2tools: Sweetmantech/myc 3684 migrate tools sora2tools
Actionable comments posted: 2
🧹 Nitpick comments (3)
lib/openai/getOpenAIErrorMessage.ts (1)
7-31: LGTM with minor observation on case sensitivity.

The error handling logic is clean and covers common OpenAI error scenarios. One consideration: `includes()` is case-sensitive, so error messages like "Rate Limit" or "RATE LIMIT" would not match. OpenAI typically uses lowercase, but if robustness is a concern, consider using a case-insensitive check:

```ts
const lowerMessage = errorMessage.toLowerCase();
if (lowerMessage.includes("rate limit")) { ... }
```

This is optional since OpenAI's current error messages are consistently lowercase.
lib/openai/retrieveVideoContent.ts (2)
35-46: Consider memory implications for large videos.

Converting the entire video to a base64 data URL keeps the full video in memory (and base64 encoding increases size by ~33%). For large videos (e.g., 100MB+), this could cause memory pressure.
If large video support is expected, consider:
- Streaming the content directly to storage instead of buffering
- Returning only metadata and a signed URL for download
This may be acceptable for the current use case if video sizes are bounded.
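If streaming to storage is preferred over buffering, a rough sketch using Node's stream utilities might look like the following. The destination path and helper name are placeholders, not part of the codebase:

```typescript
import { createWriteStream } from "node:fs";
import { Readable } from "node:stream";
import { pipeline } from "node:stream/promises";

// Stream the video body straight to a file (or any writable sink)
// instead of buffering the whole payload into a base64 data URL.
async function streamVideoToFile(response: Response, destPath: string): Promise<void> {
  if (!response.body) throw new Error("Response has no body to stream");
  const nodeStream = Readable.fromWeb(
    response.body as unknown as import("node:stream/web").ReadableStream,
  ); // bridge from web stream to Node stream
  await pipeline(nodeStream, createWriteStream(destPath));
}
```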
21-26: Consider adding a request timeout.

The fetch call has no timeout, which could cause indefinite hangs on slow or unresponsive connections. Consider using `AbortController` with a timeout:

```diff
+  const controller = new AbortController();
+  const timeoutId = setTimeout(() => controller.abort(), 300000); // 5 min timeout
+
   const response = await fetch(`https://api.openai.com/v1/videos/${videoId}/content`, {
     method: "GET",
     headers: {
       Authorization: `Bearer ${apiKey}`,
     },
+    signal: controller.signal,
   });
+
+  clearTimeout(timeoutId);
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (14)
- lib/mcp/tools/index.ts (2 hunks)
- lib/mcp/tools/sora2/index.ts (1 hunks)
- lib/mcp/tools/sora2/registerGenerateVideoTool.ts (1 hunks)
- lib/mcp/tools/sora2/registerRetrieveVideoContentTool.ts (1 hunks)
- lib/mcp/tools/sora2/registerRetrieveVideoTool.ts (1 hunks)
- lib/openai/generateVideo.ts (1 hunks)
- lib/openai/getOpenAIErrorMessage.ts (1 hunks)
- lib/openai/retrieveVideo.ts (1 hunks)
- lib/openai/retrieveVideoContent.ts (1 hunks)
- lib/video/generateVideo.ts (1 hunks)
- lib/video/retrieveVideo.ts (1 hunks)
- lib/video/retrieveVideoContent.ts (1 hunks)
- lib/video/validateGenerateVideoQuery.ts (1 hunks)
- lib/video/validateRetrieveVideoQuery.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- lib/mcp/tools/index.ts
🧰 Additional context used
🧬 Code graph analysis (8)
lib/openai/retrieveVideo.ts (1)
- lib/openai/generateVideo.ts (1)
  - GenerateVideoResponse (8-18)

lib/mcp/tools/sora2/index.ts (3)
- lib/mcp/tools/sora2/registerGenerateVideoTool.ts (1)
  - registerGenerateVideoTool (14-42)
- lib/mcp/tools/sora2/registerRetrieveVideoTool.ts (1)
  - registerRetrieveVideoTool (14-44)
- lib/mcp/tools/sora2/registerRetrieveVideoContentTool.ts (1)
  - registerRetrieveVideoContentTool (14-45)

lib/mcp/tools/sora2/registerRetrieveVideoTool.ts (2)
- lib/video/validateRetrieveVideoQuery.ts (2)
  - retrieveVideoQuerySchema (3-8)
  - RetrieveVideoQuery (10-10)
- lib/video/retrieveVideo.ts (1)
  - retrieveVideoFunction (18-61)

lib/mcp/tools/sora2/registerGenerateVideoTool.ts (2)
- lib/video/validateGenerateVideoQuery.ts (2)
  - generateVideoQuerySchema (3-17)
  - GenerateVideoQuery (19-19)
- lib/video/generateVideo.ts (1)
  - generateVideoFunction (20-66)

lib/video/generateVideo.ts (3)
- lib/openai/generateVideo.ts (1)
  - GenerateVideoResponse (8-18)
- lib/video/validateGenerateVideoQuery.ts (1)
  - GenerateVideoQuery (19-19)
- lib/openai/getOpenAIErrorMessage.ts (1)
  - getOpenAIErrorMessage (7-31)

lib/video/retrieveVideoContent.ts (2)
- lib/video/validateRetrieveVideoQuery.ts (1)
  - RetrieveVideoQuery (10-10)
- lib/openai/getOpenAIErrorMessage.ts (1)
  - getOpenAIErrorMessage (7-31)

lib/mcp/tools/sora2/registerRetrieveVideoContentTool.ts (2)
- lib/video/validateRetrieveVideoQuery.ts (2)
  - retrieveVideoQuerySchema (3-8)
  - RetrieveVideoQuery (10-10)
- lib/video/retrieveVideoContent.ts (1)
  - retrieveVideoContentFunction (22-54)

lib/video/retrieveVideo.ts (3)
- lib/openai/generateVideo.ts (1)
  - GenerateVideoResponse (8-18)
- lib/video/validateRetrieveVideoQuery.ts (1)
  - RetrieveVideoQuery (10-10)
- lib/openai/getOpenAIErrorMessage.ts (1)
  - getOpenAIErrorMessage (7-31)
🔇 Additional comments (9)

lib/video/validateRetrieveVideoQuery.ts (1)

1-10: LGTM! Clean validation schema following the established project pattern. The `min(1)` constraint ensures the video ID is non-empty, and the `.describe()` call provides useful documentation for MCP tool consumers.

lib/video/retrieveVideoContent.ts (1)

22-54: LGTM! Well-structured wrapper function with proper error handling. The try/catch ensures errors are caught, logged, and transformed into user-friendly messages via `getOpenAIErrorMessage`. The structured return type with separate success/error fields makes it easy for consumers to handle both cases.

lib/mcp/tools/sora2/index.ts (1)

1-15: LGTM! Clean aggregator pattern for registering all Sora2-related MCP tools. This follows the established project convention and keeps the registration logic modular.

lib/mcp/tools/sora2/registerGenerateVideoTool.ts (1)

14-41: MCP `generate_sora_2_video` tool wiring looks consistent and type-safe. Registration correctly uses `generateVideoQuerySchema` for validation, a typed `GenerateVideoQuery` handler, and returns a JSON-serialized `VideoGenerationResult` in the standard `{ content: [{ type: "text", text }] }` shape. No issues from this file alone.

lib/mcp/tools/sora2/registerRetrieveVideoTool.ts (1)

14-43: `retrieve_sora_2_video` MCP tool wiring matches the rest of the Sora2 suite. The tool correctly uses `retrieveVideoQuerySchema` for input validation, passes a typed `RetrieveVideoQuery` into `retrieveVideoFunction`, and returns the normalized result as JSON text. This is consistent and ready to use.

lib/mcp/tools/sora2/registerRetrieveVideoContentTool.ts (1)

14-45: `retrieve_sora_2_video_content` MCP tool registration is consistent and clear. The tool uses the shared `retrieveVideoQuerySchema`, delegates to `retrieveVideoContentFunction`, and returns a JSON-serialized result. The description documents the dependencies on `generate_sora_2_video` / `retrieve_sora_2_video` nicely. No changes needed here.

lib/video/generateVideo.ts (1)

20-65: Video generation wrapper is well-structured and consistent with the retrieval flow. The function cleanly normalizes the OpenAI response into `VideoGenerationResult`, uses sensible defaults (`seconds`, `size`), and provides informative messages for both completed and queued jobs. The error path is symmetric and leverages `getOpenAIErrorMessage` while keeping a stable result shape. The only follow-up is to keep this `seconds` default in sync with the validation schema/docs (already noted there); otherwise this looks good.

lib/video/retrieveVideo.ts (1)

18-60: `retrieveVideoFunction` normalization and error handling look solid. This wrapper cleanly converts the low-level OpenAI response into a stable `RetrieveVideoResult`, with clear status messaging for completed/processing/queued states. The error path correctly logs, uses `getOpenAIErrorMessage`, and preserves the input `video_id` while marking status as `"failed"`. This fits well with the rest of the video tooling.

lib/openai/retrieveVideo.ts (1)

9-31: `retrieveVideo` implementation is correct and well-structured. The endpoint `GET /v1/videos/{videoId}` and response shape match the OpenAI API contract. Error handling with guarded JSON parsing is sound. Consider adding an `Accept: "application/json"` header for explicit expectations, though it is optional.
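If it helps, a minimal sketch of that optional header, assuming the same endpoint and env-var conventions used elsewhere in this PR (the loose return type is deliberate; the exact response interface lives in the file under review):

```typescript
export async function retrieveVideo(videoId: string): Promise<unknown> {
  const apiKey = process.env.OPENAI_API_KEY;
  if (!apiKey) {
    throw new Error("OPENAI_API_KEY environment variable is not set");
  }

  const response = await fetch(`https://api.openai.com/v1/videos/${videoId}`, {
    headers: {
      Authorization: `Bearer ${apiKey}`,
      Accept: "application/json", // explicit expectation; optional per the comment above
    },
  });

  if (!response.ok) {
    // Guarded JSON parse mirrors the error handling described above
    const errorData = await response.json().catch(() => ({}));
    throw new Error(
      (errorData as { error?: { message?: string } }).error?.message ||
        `API request failed with status ${response.status}`,
    );
  }

  return response.json();
}
```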
```typescript
export interface GenerateVideoRequest {
  model: string;
  prompt: string;
  seconds: number;
  size: string;
}

export interface GenerateVideoResponse {
  id: string;
  object: string;
  model: string;
  status: string;
  progress?: number;
  created_at?: number;
  size: string;
  seconds: string;
  quality?: string;
}

/**
 * Generates a video using OpenAI's API
 *
 * @param request - The request parameters for video generation
 * @returns The video generation response
 */
export async function generateVideo(request: GenerateVideoRequest): Promise<GenerateVideoResponse> {
  const apiKey = process.env.OPENAI_API_KEY;

  if (!apiKey) {
    throw new Error("OPENAI_API_KEY environment variable is not set");
  }

  const formData = new FormData();
  formData.append("model", request.model);
  formData.append("prompt", request.prompt);
  formData.append("seconds", request.seconds.toString());
  formData.append("size", request.size);

  const response = await fetch("https://api.openai.com/v1/videos", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
    },
    body: formData,
  });

  if (!response.ok) {
    const errorData = await response.json().catch(() => ({}));
    throw new Error(
      errorData.error?.message || `API request failed with status ${response.status}`,
    );
  }

  const data: GenerateVideoResponse = await response.json();
  return data;
}
```
🧩 Analysis chain
🌐 Web query:
OpenAI video generation API endpoint specification POST /v1/videos parameters response format 2024 2025
💡 Result:
Current (OpenAI Platform) specification for POST /v1/videos — summary of request parameters and returned response (as documented by OpenAI). Sources: OpenAI API reference. [1]
POST https://api.openai.com/v1/videos
Request (multipart/form-data or JSON where noted)
- model (string, optional) — video model to use (allowed: sora-2, sora-2-pro). Defaults to sora-2. [1]
- prompt (string, required) — text prompt describing the video to generate. [1]
- input_reference (file, optional) — image/file reference to guide generation. [1]
- seconds (string, optional) — clip duration in seconds (allowed values: "4", "8", "12"). Defaults to "4". [1]
- size (string, optional) — output resolution (allowed values: "720x1280", "1280x720", "1024x1792", "1792x1024"). Defaults to "720x1280". [1]
Example curl request
- curl https://api.openai.com/v1/videos -H "Authorization: Bearer $OPENAI_API_KEY" -F "model=sora-2" -F "prompt=…" (example in docs). [1]
Response (video job object)
- id (string) — video job id (e.g., "video_123"). [1]
- object (string) — "video". [1]
- model (string) — model used (e.g., "sora-2"). [1]
- status (string) — lifecycle status (e.g., "queued", "completed", "failed"). [1]
- progress (integer) — approximate completion percent. [1]
- created_at (integer) — unix timestamp (seconds) when job created. [1]
- completed_at (integer, optional) — unix timestamp when finished. [1]
- expires_at (integer, optional) — unix timestamp when downloadable assets expire. [1]
- prompt (string) — original prompt. [1]
- seconds (string) — duration (e.g., "8"). [1]
- size (string) — resolution (e.g., "1024x1792"). [1]
- remixed_from_video_id (string, optional) — if a remix. [1]
- error (object, optional) — error payload if generation failed. [1]
Example response JSON (from docs)
{
"id": "video_123",
"object": "video",
"model": "sora-2",
"status": "queued",
"progress": 0,
"created_at": 1712697600,
"size": "1024x1792",
"seconds": "8",
"quality": "standard"
} [1]
Related endpoints
- POST /v1/videos/{video_id}/remix — create a remix of a completed video. [1]
- GET /v1/videos — list jobs; GET /v1/videos/{video_id} — retrieve job metadata; GET /v1/videos/{video_id}/content — download video bytes; DELETE /v1/videos/{video_id} — delete job. [1]
If you want the raw live docs or examples for 2024 vs 2025 snapshots, I can fetch archived or changelog entries and show differences. Sources: OpenAI API reference (Platform docs). [1]
Sources
[1] OpenAI Platform — Videos API reference (Create/list/retrieve/download/delete/remix) — platform.openai.com/docs/api-reference/videos.
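Read as TypeScript, the documented job object maps to roughly the following shape. This is a sketch derived only from the fields quoted above; the status union is not exhaustive in the docs, so it is kept as a plain string:

```typescript
// Sketch of the video job object per the OpenAI reference quoted above.
// Optionality follows the docs; fields not listed there are omitted.
export interface VideoJob {
  id: string;                     // e.g. "video_123"
  object: "video";
  model: string;                  // "sora-2" or "sora-2-pro"
  status: string;                 // e.g. "queued", "completed", "failed"
  progress?: number;              // approximate completion percent
  created_at: number;             // unix timestamp (seconds)
  completed_at?: number;          // unix timestamp when finished
  expires_at?: number;            // when downloadable assets expire
  prompt?: string;                // original prompt
  seconds: "4" | "8" | "12";      // clip duration, returned as a string
  size: "720x1280" | "1280x720" | "1024x1792" | "1792x1024";
  remixed_from_video_id?: string; // present if this job is a remix
  error?: { message?: string };   // error payload if generation failed
}
```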
API contract has typing mismatches; update request/response interfaces to match OpenAI spec.

The endpoint and FormData approach are correct, but the code has several discrepancies with the OpenAI video API:

- Request: `seconds` must be validated to only accept `"4"`, `"8"`, or `"12"`; currently any number converts to string. Also, `model` should be optional (defaults to `sora-2`), not required in `GenerateVideoRequest`.
- Response: Add optional fields `completed_at`, `expires_at` (both unix timestamps), `prompt`, and `remixed_from_video_id` to `GenerateVideoResponse`. The `quality` field in the example may need clarification against the schema.
- Runtime: `fetch` and `FormData` support is fine for Node 18+ / edge runtimes (standard now); no polyfill needed.

Update the interfaces to match the OpenAI API reference and add validation for the seconds enum values.
🤖 Prompt for AI Agents
In lib/openai/generateVideo.ts around lines 1 to 56, update the TypeScript
interfaces and add runtime validation: change GenerateVideoRequest.model to
optional and treat missing model as 'sora-2' when building the FormData; change
seconds to a string union type of "4" | "8" | "12" (not any number) and validate
at runtime to throw if request.seconds is not one of those values; extend
GenerateVideoResponse with optional completed_at and expires_at (unix
timestamps), prompt, and remixed_from_video_id fields, keep quality as optional
string; ensure when appending to FormData you append the validated/normalized
seconds string and model (defaulting to 'sora-2') before making the fetch call.
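Concretely, the fix might look like the following sketch, written against the spec quoted above; `ALLOWED_SECONDS` and `buildVideoFormData` are illustrative names, not existing code, and the sketch would replace the interfaces shown earlier:

```typescript
type VideoSeconds = "4" | "8" | "12";
const ALLOWED_SECONDS: readonly string[] = ["4", "8", "12"];

export interface GenerateVideoRequest {
  model?: string;         // optional; the API defaults to "sora-2"
  prompt: string;
  seconds?: VideoSeconds; // string enum per the spec, not an open number
  size?: string;
}

function buildVideoFormData(request: GenerateVideoRequest): FormData {
  // Validate and normalize before touching the network
  const seconds = request.seconds ?? "4";
  if (!ALLOWED_SECONDS.includes(seconds)) {
    throw new Error(`seconds must be one of ${ALLOWED_SECONDS.join(", ")}`);
  }

  const formData = new FormData();
  formData.append("model", request.model ?? "sora-2");
  formData.append("prompt", request.prompt);
  formData.append("seconds", seconds);
  if (request.size) formData.append("size", request.size);
  return formData;
}
```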
```typescript
export const generateVideoQuerySchema = z.object({
  prompt: z
    .string()
    .min(1, "Prompt is required")
    .describe("Text description of the video to generate"),
  seconds: z
    .number()
    .min(4)
    .optional()
    .describe("Duration of the video in seconds (default: 4, max: 20)"),
  size: z
    .enum(["720x1280", "1280x720"])
    .optional()
    .describe("Size of the video: 720x1280 (default portrait), 1280x720 (landscape)"),
});

export type GenerateVideoQuery = z.infer<typeof generateVideoQuerySchema>;
```
Align documented defaults with implementation and enforce the described max duration.

Two things to tidy up here:

- The JSDoc/description says `seconds` has a default of 4, but `generateVideoFunction` currently defaults `seconds = 8`. Please align either the description or the implementation so callers aren't surprised by the actual default.
- The description mentions "max: 20", but the schema only enforces `.min(4)`. You can enforce the upper bound directly in the schema:

```diff
 seconds: z
   .number()
-  .min(4)
+  .min(4)
+  .max(20)
   .optional()
```

🤖 Prompt for AI Agents
In lib/video/validateGenerateVideoQuery.ts around lines 3 to 19, the seconds
field's schema and description are inconsistent with the implementation and lack
an upper bound; update the seconds schema to enforce .max(20) and set a default
that matches the implementation (e.g., .default(8)) and update the description
text to "Duration of the video in seconds (default: 8, max: 20)" so the schema
enforces the upper bound and the documented default matches runtime behavior.
```diff
-    .min(4)
-    .optional()
-    .describe("Duration of the video in seconds (default: 4, max: 20)"),
+    .min(4, "Minimum 4 seconds")
+    .max(20, "Maximum 20 seconds")
+    .optional()
+    .describe("Duration of the video in seconds (default: 8, max: 20)"),
```
The video `seconds` validation schema is missing the `.max(20)` constraint that's mentioned in the description, and the documented default value (4) doesn't match the actual default value used in the code (8).

Analysis

Missing maximum validation and default value mismatch in video generation schema

What fails: The Zod schema in lib/video/validateGenerateVideoQuery.ts (lines 10-12) lacks the `.max(20)` constraint on the `seconds` field, allowing API clients to request video durations beyond the documented 20-second limit. Additionally, the schema description states "default: 4" but the actual default value used in lib/video/generateVideo.ts line 24 is 8 seconds.

How to reproduce:

```typescript
// This should fail validation but currently passes:
generateVideoQuerySchema.safeParse({
  prompt: "A cat",
  seconds: 25, // Above documented max of 20
});
// Result: { success: true, data: { prompt: "A cat", seconds: 25 } }
```

Result:

- Request with `seconds: 25` passes validation when it should fail
- Schema description incorrectly documents the default as 4 seconds when the actual default is 8 seconds

Expected:

- Requests with `seconds > 20` should be rejected with a validation error
- Schema description should match the actual default value (8) used in generateVideo.ts

Fix applied (full sketch below):

- Added `.max(20, "Maximum 20 seconds")` to the schema
- Added `.min(4, "Minimum 4 seconds")` with an error message for clarity
- Updated description from "default: 4" to "default: 8" to match the implementation
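Putting the pieces together, the patched schema and the now-failing reproduction would look like this sketch (assumes `zod`, as elsewhere in the PR):

```typescript
import { z } from "zod";

export const generateVideoQuerySchema = z.object({
  prompt: z.string().min(1, "Prompt is required"),
  seconds: z
    .number()
    .min(4, "Minimum 4 seconds")
    .max(20, "Maximum 20 seconds") // upper bound now enforced
    .optional()
    .describe("Duration of the video in seconds (default: 8, max: 20)"),
  size: z.enum(["720x1280", "1280x720"]).optional(),
});

// The reproduction from above now fails as expected:
const result = generateVideoQuerySchema.safeParse({ prompt: "A cat", seconds: 25 });
console.log(result.success); // false — rejected with "Maximum 20 seconds"
```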
…ate-tool-contact_team Migrate tool - contact_team
Actionable comments posted: 0
🧹 Nitpick comments (2)
lib/telegram/client.ts (1)

1-15: Consider avoiding a module‑level throw for missing TELEGRAM_BOT_TOKEN. Throwing at import time means any code that indirectly imports this module (even if it never sends a Telegram message) will crash if `TELEGRAM_BOT_TOKEN` isn't set. For local dev/tests or environments where contact tools are disabled, that can be brittle.

You could instead:

- Lazily construct the client when first needed, or
- Keep the check but wrap client access in a small getter that throws if the token is missing, limiting failures to actual Telegram usage paths (see the sketch after this comment).

This preserves fail‑fast behavior for the feature without coupling process startup to this env var.
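A minimal sketch of the lazy-getter option, assuming the standard `node-telegram-bot-api` constructor; `getTelegramClient` is an illustrative name, not existing code:

```typescript
import TelegramBot from "node-telegram-bot-api";

let client: TelegramBot | null = null;

// The token is only checked when a caller actually needs the client,
// so merely importing this module can never crash the process.
export function getTelegramClient(): TelegramBot {
  if (client) return client;

  const token = process.env.TELEGRAM_BOT_TOKEN;
  if (!token) {
    throw new Error("TELEGRAM_BOT_TOKEN environment variable is not set");
  }

  client = new TelegramBot(token, { polling: false });
  return client;
}
```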
lib/telegram/trimMessage.ts (1)

1-16: Derive trim length from a suffix constant instead of the hardcoded 15. The current logic is correct, but the `15` is a magic number. Defining the suffix once and using its length would make future changes safer and clearer.

```diff
-// Telegram message length limit is 4096 characters
-const MAX_MESSAGE_LENGTH = 4000; // Using slightly less than 4096 to be safe
+// Telegram message length limit is 4096 characters
+const MAX_MESSAGE_LENGTH = 4000; // Using slightly less than 4096 to be safe
+const TRIM_SUFFIX = "...[trimmed]";
@@
-  if (text.length <= MAX_MESSAGE_LENGTH) {
+  if (text.length <= MAX_MESSAGE_LENGTH) {
     return text;
   }
-  return text.substring(0, MAX_MESSAGE_LENGTH - 15) + "...[trimmed]";
+  return text.substring(0, MAX_MESSAGE_LENGTH - TRIM_SUFFIX.length) + TRIM_SUFFIX;
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
- `pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (8)
- lib/contact/contactTeam.ts (1 hunks)
- lib/contact/validateContactTeamQuery.ts (1 hunks)
- lib/mcp/tools/index.ts (2 hunks)
- lib/mcp/tools/registerContactTeamTool.ts (1 hunks)
- lib/telegram/client.ts (1 hunks)
- lib/telegram/sendMessage.ts (1 hunks)
- lib/telegram/trimMessage.ts (1 hunks)
- package.json (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- lib/mcp/tools/index.ts
🧰 Additional context used
🧬 Code graph analysis (3)
lib/mcp/tools/registerContactTeamTool.ts (2)
- lib/contact/validateContactTeamQuery.ts (2): `contactTeamQuerySchema` (3-12), `ContactTeamQuery` (14-14)
- lib/contact/contactTeam.ts (1): `contactTeam` (15-43)

lib/contact/contactTeam.ts (2)
- lib/contact/validateContactTeamQuery.ts (1): `ContactTeamQuery` (14-14)
- lib/telegram/sendMessage.ts (1): `sendMessage` (12-23)

lib/telegram/sendMessage.ts (1)
- lib/telegram/trimMessage.ts (1): `trimMessage` (10-16)
🔇 Additional comments (5)

package.json (1)

31-31: Telegram dependency addition looks appropriate. Adding `node-telegram-bot-api` aligns with the new Telegram client and messaging helpers; no issues from this diff alone.

lib/mcp/tools/registerContactTeamTool.ts (1)

1-35: Contact‑team tool registration looks consistent and well‑typed. Using `contactTeamQuerySchema` as `inputSchema` and delegating to `contactTeam` keeps validation and behavior nicely separated; wrapping the result in a single text content node is consistent with other tools.

lib/telegram/sendMessage.ts (1)

1-23: sendMessage helper is cohesive and safely validates configuration. Checking `TELEGRAM_CHAT_ID` at call time, trimming via `trimMessage`, and delegating to the shared client keeps this helper focused and robust; the flow looks good.

lib/contact/validateContactTeamQuery.ts (1)

1-14: ContactTeamQuery schema is clear and sufficient for current usage. The Zod schema captures the required message and optional context cleanly, and the inferred `ContactTeamQuery` type fits the downstream `contactTeam` implementation.

lib/contact/contactTeam.ts (1)

1-43: contactTeam implementation cleanly bridges MCP input to Telegram. Message formatting, fallbacks for missing metadata, and the success/failure envelope with logging are all sensible; behavior looks solid for the new tool.
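For readers following along, the envelope pattern described here looks roughly like this sketch; the field names and import path are illustrative, not the exact implementation, and only `sendMessage` is taken from the files above:

```typescript
import { sendMessage } from "@/lib/telegram/sendMessage"; // path assumed

export interface ContactTeamResult {
  success: boolean;
  message: string;
}

export async function contactTeam(query: {
  message: string;
  context?: string; // optional metadata with a fallback below
}): Promise<ContactTeamResult> {
  // Format the outgoing message, falling back when metadata is missing
  const text = [
    "New message for the team:",
    query.message,
    `Context: ${query.context ?? "(none provided)"}`,
  ].join("\n");

  try {
    await sendMessage(text);
    return { success: true, message: "Message delivered to the team." };
  } catch (error) {
    console.error("contactTeam failed:", error);
    return { success: false, message: "Failed to deliver the message." };
  }
}
```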
Summary by CodeRabbit