
Conversation


@sweetmantech sweetmantech commented Dec 7, 2025

Summary by CodeRabbit

  • New Features
    • Image generation & editing via prompts with optional file uploads and validation
    • Programmatic image, video, catalog, and contact-team tools exposed through MCP
    • Video generation, status retrieval, and downloadable video content (OpenAI-backed)
    • AI-driven catalog analysis with parallel batch processing and bulk song insertion
    • Telegram-based “contact team” messaging and message-trimming for long texts
    • New default AI model and improved OpenAI error messaging for clearer feedback

✏️ Tip: You can customize this high-level summary in your review settings.


vercel bot commented Dec 7, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

Project | Status | Preview | Updated (UTC)
recoup-api | ✅ Ready | Preview | Dec 8, 2025 2:50am

@chatgpt-codex-connector

You have reached your Codex usage limits for code reviews. You can see your limits in the Codex usage dashboard.
To continue using code reviews, you can upgrade your account or add credits to your account and enable them for code reviews in your settings.


coderabbitai bot commented Dec 7, 2025

Walkthrough

Replaces inline validation in image generation route with a validator; adds image generation + upload helper; registers new MCP tools for images, catalogs, and sora2; adds catalog analysis, Supabase helpers, OpenAI video wrappers, Telegram utilities, and related validation schemas; introduces DEFAULT_MODEL constant.

Changes

Cohort / File(s) Summary
API Route
app/api/image/generate/route.ts
Replaced inline Zod validation with validateGenerateImageQuery(request); GET now short-circuits with a NextResponse on validation failure or uses parsed { prompt, account_id } and files from searchParams on success.
Image validation & helpers
lib/image/validateGenerateImageQuery.ts, lib/image/validateEditImageQuery.ts, lib/image/generateAndProcessImage.ts
Added generateImageQuerySchema, editImageQuerySchema, validateGenerateImageQuery(request); new generateAndProcessImage(prompt, accountId, files?) that calls generateImage then uploadImageAndCreateMoment, exports types GenerateAndProcessImageResult and FileInput.
MCP image tools
lib/mcp/tools/images/registerGenerateImageTool.ts, lib/mcp/tools/images/registerEditImageTool.ts, lib/mcp/tools/images/index.ts
New tool registrations generate_image and edit_image; handlers call generateAndProcessImage; index exposes registerAllImageTools.
MCP tool registry
lib/mcp/tools/index.ts
registerAllTools now imports and invokes registerAllImageTools, registerAllCatalogTools, registerAllSora2Tools, and registerContactTeamTool.
Catalog analysis & batching
lib/catalog/analyzeCatalogBatch.ts, lib/catalog/processBatchesInParallel.ts, lib/catalog/refineResults.ts, lib/catalog/analyzeFullCatalog.ts
New AI-driven batch analyzer, parallel batch processor, recursive refine step (MAX_RESULTS), and full-catalog analyzer that pages and refines results.
Catalog insertion & validation
lib/catalog/insertCatalogSongs.ts, lib/catalog/validateGetCatalogSongsQuery.ts, lib/catalog/validateGetCatalogsQuery.ts, lib/catalog/validateInsertCatalogSongsQuery.ts
New insert flow for catalog songs and Zod validators/types for get-catalogs, get-catalog-songs, and insert-catalog-songs queries.
Supabase helpers & types
lib/supabase/account_catalogs/selectAccountCatalogs.ts, lib/supabase/catalog_songs/selectCatalogSongsWithArtists.ts
Added selectAccountCatalogs (query with catalogs relation); made CatalogSongWithArtists type exported.
MCP catalog tools
lib/mcp/tools/catalogs/index.ts, lib/mcp/tools/catalogs/registerGetCatalogsTool.ts, lib/mcp/tools/catalogs/registerGetCatalogSongsTool.ts, lib/mcp/tools/catalogs/registerInsertCatalogSongsTool.ts
New MCP tools: select_catalogs, select_catalog_songs, insert_catalog_songs; each uses input schema and returns JSON-stringified payloads.
Sora2 video MCP tools
lib/mcp/tools/sora2/index.ts, lib/mcp/tools/sora2/registerGenerateVideoTool.ts, lib/mcp/tools/sora2/registerRetrieveVideoTool.ts, lib/mcp/tools/sora2/registerRetrieveVideoContentTool.ts
New barrel and registrations for Sora 2 video tools wired to video service functions and schemas.
OpenAI video wrappers
lib/openai/generateVideo.ts, lib/openai/retrieveVideo.ts, lib/openai/retrieveVideoContent.ts, lib/openai/getOpenAIErrorMessage.ts
New wrappers for OpenAI video endpoints (generate, retrieve, retrieve content), error-message mapping, and content → Blob/dataUrl handling.
Video service adapters & validators
lib/video/generateVideo.ts, lib/video/retrieveVideo.ts, lib/video/retrieveVideoContent.ts, lib/video/validateGenerateVideoQuery.ts, lib/video/validateRetrieveVideoQuery.ts
High-level video functions that map OpenAI responses to service-friendly result types, plus Zod validators for video queries.
Contact & Telegram utilities
lib/contact/validateContactTeamQuery.ts, lib/contact/contactTeam.ts, lib/telegram/client.ts, lib/telegram/sendMessage.ts, lib/telegram/trimMessage.ts, lib/mcp/tools/registerContactTeamTool.ts
New contact-team flow: validator, contactTeam (sends Telegram message), Telegram client and sendMessage helper, trimming utility, and MCP tool registration.
Constants & package
lib/const.ts, package.json
Added export const DEFAULT_MODEL = "openai/gpt-5-mini"; and added runtime dependency node-telegram-bot-api (^0.66.0).
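The message-trimming utility listed in the Contact & Telegram row can be sketched as follows. The 4096-character cap is Telegram's documented per-message limit; the function shape and ellipsis suffix are assumptions, not the actual `lib/telegram/trimMessage.ts` implementation:

```typescript
// Sketch of a Telegram message-trimming helper. 4096 is Telegram's documented
// hard cap for text messages; the ellipsis suffix is an assumption.
const TELEGRAM_MAX_LENGTH = 4096;

function trimMessage(text: string, maxLength: number = TELEGRAM_MAX_LENGTH): string {
  if (text.length <= maxLength) return text;
  // Reserve one character for the ellipsis so the result stays within the cap.
  return text.slice(0, maxLength - 1) + "…";
}
```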

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant APIRoute as Image API Route
    participant Validator as validateGenerateImageQuery
    participant ImageProcessor as generateAndProcessImage
    participant ImageGen as generateImage
    participant Uploader as uploadImageAndCreateMoment

    Client->>APIRoute: GET /api/image/generate?prompt=...&account_id=...(&files=...)
    APIRoute->>Validator: validateGenerateImageQuery(request)
    Validator->>Validator: parse searchParams & safeParse schema
    alt validation fails
        Validator-->>APIRoute: NextResponse (400 + CORS)
        APIRoute-->>Client: 400 error
    else validation succeeds
        Validator-->>APIRoute: { prompt, account_id }
        APIRoute->>ImageProcessor: generateAndProcessImage(prompt, account_id, files?)
        ImageProcessor->>ImageProcessor: validate inputs, map FileInput→FilePart
        ImageProcessor->>ImageGen: generateImage(prompt, fileParts)
        ImageGen-->>ImageProcessor: { image } (or error)
        alt image generated
            ImageProcessor->>Uploader: uploadImageAndCreateMoment({ image, prompt, account: account_id })
            Uploader-->>ImageProcessor: { imageUrl }
            ImageProcessor-->>APIRoute: { imageUrl }
            APIRoute-->>Client: 200 { imageUrl }
        else generation failed
            ImageProcessor-->>APIRoute: throw / error
            APIRoute-->>Client: 500 error
        end
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

  • Areas to focus review on:
    • lib/image/validateGenerateImageQuery.ts — error mapping, CORS headers, NextResponse shape.
    • lib/image/generateAndProcessImage.ts — FileInput→FilePart conversion, handling of falsy generateImage results, upload contract.
    • lib/supabase/account_catalogs/selectAccountCatalogs.ts — query composition, relation selection, error handling and typing.
    • Catalog recursion/parallelization — refineResults.ts, processBatchesInParallel.ts, analyzeFullCatalog.ts for termination, batching, and parallel fetch semantics.
    • OpenAI wrappers and content handling — retrieveVideoContent.ts Blob→dataUrl conversion and large response handling.
    • Telegram integration — environment checks in lib/telegram/client.ts and message trimming logic.
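The termination property worth verifying in refineResults.ts can be illustrated with a standalone sketch (the signature and MAX_RESULTS value are assumptions from the walkthrough; the real function is async and calls processBatchesInParallel, shown synchronously here for brevity):

```typescript
// Minimal termination sketch for a recursive refine step. The guard against
// a non-shrinking result set is the key property: without it, a model that
// returns its input unchanged would recurse forever.
type Song = { isrc: string };

const MAX_RESULTS = 100;

function refineResults(
  songs: Song[],
  criteria: string,
  narrow: (songs: Song[], criteria: string) => Song[],
): Song[] {
  if (songs.length <= MAX_RESULTS) return songs;
  const narrowed = narrow(songs, criteria);
  // Guard: if narrowing made no progress, truncate instead of recursing.
  if (narrowed.length >= songs.length) return narrowed.slice(0, MAX_RESULTS);
  return refineResults(narrowed, criteria, narrow);
}
```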

Possibly related PRs

Poem

🐰 I hopped through code with bright-eyed cheer,
Parsed prompts and files, made images appear,
I stitched tools and bots and batched each song,
Sent moments to moments — the pipeline’s strong,
A happy rabbit, coding with a carrot near. 🥕

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
Check name | Status | Explanation
Description Check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled.
Title Check | ✅ Passed | The PR title accurately describes the main objective (migrating all tools from chat into the new Vercel API codebase) and is directly supported by the extensive changeset across image, catalog, video, and contact tools.
Docstring Coverage | ✅ Passed | Docstring coverage is 100.00%, which meets the required 80.00% threshold.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch sweetmantech/myc-3645-migrate-all-tools-from-chat-into-the-new-vercel-api-codebase

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
lib/mcp/tools/images/registerEditImageTool.ts (1)

18-31: Avoid hard-coding PNG MIME type unless inputs are guaranteed PNG

The files array always uses type: "image/png":

[
  {
    url: args.imageUrl,
    type: "image/png",
  },
]

If callers can supply non-PNG images (e.g., JPEG/WebP), this MIME type may be inaccurate and could affect downstream handling in generateAndProcessImage / the underlying model.

If inputs aren’t guaranteed PNG, consider either:

  • Deriving the media type from the URL or metadata, or
  • Extending the schema to accept a mimeType/imageType field and wiring that through instead of hard-coding "image/png".
lib/image/generateAndProcessImage.ts (1)

14-40: Align JSDoc with return type and verify FilePart expectations for file inputs

Two small points here:

  1. The JSDoc @returns says “The Arweave URL of the generated/processed image”, but the function actually returns a GenerateAndProcessImageResult object with an imageUrl field. Consider updating the comment to reflect the object shape.

  2. When mapping FileInput to FilePart:

const fileParts: FilePart[] | undefined = files?.map(file => ({
  type: "file" as const,
  data: file.url,
  mediaType: file.type,
}));

data is being set to a URL string. Please double-check the FilePart type from the ai package to confirm that passing a URL as data with mediaType is supported; if it expects binary content instead, you may need to adjust this mapping (e.g., fetch or otherwise supply the actual file bytes, or use the recommended URL mechanism for that library).
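If the installed FilePart type turns out to accept URL objects rather than plain strings, the mapping could be adjusted along these lines (a sketch under that assumption; FilePartLike mirrors, but is not, the ai package's type):

```typescript
// Hedged sketch: wrap the URL string in a URL instance so malformed URLs fail
// fast, and keep the mediaType alongside it. Whether the ai package's FilePart
// accepts a URL instance here is an assumption to verify against its types.
type FilePartLike = { type: "file"; data: URL; mediaType: string };
type FileInput = { url: string; type: string };

function toFileParts(files?: FileInput[]): FilePartLike[] | undefined {
  return files?.map(file => ({
    type: "file" as const,
    data: new URL(file.url), // throws early on malformed URLs
    mediaType: file.type,
  }));
}
```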

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3b05f47 and 956b52d.

📒 Files selected for processing (8)
  • app/api/image/generate/route.ts (2 hunks)
  • lib/image/generateAndProcessImage.ts (1 hunks)
  • lib/image/validateEditImageQuery.ts (1 hunks)
  • lib/image/validateGenerateImageQuery.ts (1 hunks)
  • lib/mcp/tools/images/index.ts (1 hunks)
  • lib/mcp/tools/images/registerEditImageTool.ts (1 hunks)
  • lib/mcp/tools/images/registerGenerateImageTool.ts (1 hunks)
  • lib/mcp/tools/index.ts (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (7)
lib/mcp/tools/images/registerEditImageTool.ts (2)
lib/image/validateEditImageQuery.ts (2)
  • editImageQuerySchema (4-6)
  • EditImageQuery (8-8)
lib/image/generateAndProcessImage.ts (2)
  • GenerateAndProcessImageResult (5-7)
  • generateAndProcessImage (22-59)
lib/mcp/tools/images/registerGenerateImageTool.ts (2)
lib/image/validateGenerateImageQuery.ts (2)
  • generateImageQuerySchema (5-15)
  • GenerateImageQuery (17-17)
lib/image/generateAndProcessImage.ts (2)
  • GenerateAndProcessImageResult (5-7)
  • generateAndProcessImage (22-59)
lib/image/generateAndProcessImage.ts (1)
lib/arweave/uploadImageAndCreateMoment.ts (1)
  • uploadImageAndCreateMoment (27-74)
lib/mcp/tools/images/index.ts (2)
lib/mcp/tools/images/registerGenerateImageTool.ts (1)
  • registerGenerateImageTool (17-40)
lib/mcp/tools/images/registerEditImageTool.ts (1)
  • registerEditImageTool (14-43)
lib/image/validateGenerateImageQuery.ts (1)
lib/networking/getCorsHeaders.ts (1)
  • getCorsHeaders (6-12)
app/api/image/generate/route.ts (1)
lib/image/validateGenerateImageQuery.ts (1)
  • validateGenerateImageQuery (25-54)
lib/mcp/tools/index.ts (1)
lib/mcp/tools/images/index.ts (1)
  • registerAllImageTools (10-13)
🔇 Additional comments (6)
lib/mcp/tools/index.ts (1)

5-17: Image tools correctly integrated into global MCP tool registration

Importing registerAllImageTools and invoking it inside registerAllTools keeps the single entry point pattern for tool setup and cleanly adds the new image capabilities without changing the existing API surface.

app/api/image/generate/route.ts (1)

4-34: Centralized query validation and early-return flow look solid

Delegating query parsing to validateGenerateImageQuery and short-circuiting when it returns a NextResponse simplifies the handler and ensures consistent 400 responses with CORS headers. Using the validated { prompt, account_id } while still reading files from searchParams preserves previous behavior.

lib/mcp/tools/images/index.ts (1)

1-13: Image tool aggregator is clean and consistent

registerAllImageTools cleanly groups the generate and edit image tools behind a single entry point, matching the pattern used for other tool families.

lib/mcp/tools/images/registerGenerateImageTool.ts (1)

1-40: generate_image tool registration and handler wiring look correct

The tool uses the shared generateImageQuerySchema, passes prompt and account_id through to generateAndProcessImage, and returns a well-formed MCP response payload. The structure is consistent with the rest of the MCP tooling.

lib/image/validateGenerateImageQuery.ts (1)

5-53: Shared query validation helper is well-structured

The generateImageQuerySchema and validateGenerateImageQuery helper cleanly centralize query parsing and error handling. Using safeParse on searchParams and returning a structured 400 with CORS headers on failure should make both the route handler and any future consumers more consistent and easier to maintain.

lib/image/validateEditImageQuery.ts (1)

4-6: > Likely an incorrect or invalid review comment.

[
  {
    url: args.imageUrl,
    type: "image/png",
  },
]

The edit_image tool hardcodes the image MIME type as "image/png" when passing the image to generateAndProcessImage, which will fail for images in other formats (JPG, WebP, GIF, etc.).

View Details
📝 Patch Details
diff --git a/lib/image/getMimeTypeFromUrl.ts b/lib/image/getMimeTypeFromUrl.ts
new file mode 100644
index 0000000..87a288b
--- /dev/null
+++ b/lib/image/getMimeTypeFromUrl.ts
@@ -0,0 +1,44 @@
+/**
+ * Extracts the MIME type from an image URL based on file extension.
+ * Supports PNG, JPEG, WebP, GIF, SVG, HEIC, and HEIF formats.
+ * Falls back to image/jpeg if the extension cannot be determined.
+ *
+ * @param url - The image URL
+ * @returns The MIME type for the image
+ */
+export function getMimeTypeFromUrl(url: string): string {
+  try {
+    // Extract the path from the URL
+    const urlObj = new URL(url);
+    const pathname = urlObj.pathname.toLowerCase();
+
+    // Extract file extension (handles query parameters)
+    const match = pathname.match(/\.([a-z0-9]+)(?:\?|$)/i);
+    const extension = match?.[1]?.toLowerCase();
+
+    // Map extensions to MIME types
+    switch (extension) {
+      case "png":
+        return "image/png";
+      case "jpg":
+      case "jpeg":
+        return "image/jpeg";
+      case "webp":
+        return "image/webp";
+      case "gif":
+        return "image/gif";
+      case "svg":
+        return "image/svg+xml";
+      case "heic":
+        return "image/heic";
+      case "heif":
+        return "image/heif";
+      default:
+        // Fallback to JPEG if extension cannot be determined
+        return "image/jpeg";
+    }
+  } catch {
+    // If URL parsing fails, default to image/jpeg
+    return "image/jpeg";
+  }
+}
diff --git a/lib/mcp/tools/images/registerEditImageTool.ts b/lib/mcp/tools/images/registerEditImageTool.ts
index e7a00cf..177fa29 100644
--- a/lib/mcp/tools/images/registerEditImageTool.ts
+++ b/lib/mcp/tools/images/registerEditImageTool.ts
@@ -4,6 +4,7 @@ import {
   type GenerateAndProcessImageResult,
 } from "@/lib/image/generateAndProcessImage";
 import { editImageQuerySchema, type EditImageQuery } from "@/lib/image/validateEditImageQuery";
+import { getMimeTypeFromUrl } from "@/lib/image/getMimeTypeFromUrl";
 
 /**
  * Registers the "edit_image" tool on the MCP server.
@@ -25,7 +26,7 @@ export function registerEditImageTool(server: McpServer): void {
         [
           {
             url: args.imageUrl,
-            type: "image/png",
+            type: getMimeTypeFromUrl(args.imageUrl),
           },
         ],
       );

Analysis

Hardcoded MIME type in edit_image tool fails for non-PNG images

What fails: The registerEditImageTool() function hardcodes type: "image/png" for all images passed to the Gemini API, causing JPEG, WebP, GIF, and other image formats to be sent with incorrect MIME type metadata.

How to reproduce:

// Register the edit_image tool, then invoke it with a JPEG image URL
registerEditImageTool(server);
// The tool receives a URL like: https://example.com/photo.jpg
// but declares it as: type: "image/png"

Result: The Gemini API receives incorrect MIME type metadata for non-PNG images. Gemini validates MIME types and can reject requests with unsupported or mismatched content types, leading to API errors or incorrect image processing.

Expected: The MIME type should be extracted from the URL to accurately reflect the actual image format (image/jpeg for .jpg files, image/webp for .webp files, etc.), or users should be able to specify the format explicitly.

Fix: Created getMimeTypeFromUrl() utility function in lib/image/getMimeTypeFromUrl.ts that extracts the MIME type from the URL file extension with proper fallback handling, and updated registerEditImageTool() to use it instead of hardcoding image/png.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 7

🧹 Nitpick comments (12)
lib/catalog/processBatchesInParallel.ts (1)

18-27: Consider limiting concurrent batch processing for large datasets.

When songs.length is very large (e.g., 10,000 songs), this creates 100 concurrent AI API calls via Promise.all. This could hit rate limits, cause memory pressure, or result in timeouts.

Consider using a concurrency limiter for large batch counts:

import pLimit from 'p-limit';

const MAX_CONCURRENT_BATCHES = 5;

export async function processBatchesInParallel(
  songs: CatalogSongWithArtists[],
  criteria: string,
): Promise<CatalogSongWithArtists[]> {
  const batches = [];
  for (let i = 0; i < songs.length; i += BATCH_SIZE) {
    batches.push(songs.slice(i, i + BATCH_SIZE));
  }

  const limit = pLimit(MAX_CONCURRENT_BATCHES);
  const batchPromises = batches.map(batch => 
    limit(() => analyzeCatalogBatch(batch, criteria))
  );

  const results = await Promise.all(batchPromises);
  return results.flat();
}

Alternatively, if you prefer not to add a dependency, implement a simple queue:

const MAX_CONCURRENT_BATCHES = 5;

export async function processBatchesInParallel(
  songs: CatalogSongWithArtists[],
  criteria: string,
): Promise<CatalogSongWithArtists[]> {
  const batches = [];
  for (let i = 0; i < songs.length; i += BATCH_SIZE) {
    batches.push(songs.slice(i, i + BATCH_SIZE));
  }

  const results: CatalogSongWithArtists[][] = [];
  
  for (let i = 0; i < batches.length; i += MAX_CONCURRENT_BATCHES) {
    const chunk = batches.slice(i, i + MAX_CONCURRENT_BATCHES);
    const chunkResults = await Promise.all(
      chunk.map(batch => analyzeCatalogBatch(batch, criteria))
    );
    results.push(...chunkResults);
  }

  return results.flat();
}
lib/catalog/analyzeCatalogBatch.ts (1)

26-40: Consider token limit implications for large batches.

With BATCH_SIZE = 100, the prompt could become very large when songs have long names or many artists. While 100 songs is reasonable for most cases, consider monitoring prompt token usage to ensure it stays within model limits (especially for older or smaller models).

Optional: Add logging to track prompt size for monitoring:

const songsData = songs.map(s => ({
  isrc: s.isrc,
  name: s.name,
  artist: s.artists.map(a => a.name).join(", "),
}));

const songsJson = JSON.stringify(songsData, null, 2);

// Optional: Log for monitoring
if (process.env.NODE_ENV === 'development') {
  console.log(`Batch analysis prompt size: ~${songsJson.length} chars for ${songs.length} songs`);
}

const { object } = await generateObject({
  model: DEFAULT_MODEL,
  schema: z.object({
    selected_song_isrcs: z
      .array(z.string())
      .describe("Array of song ISRCs that match the criteria"),
  }),
  prompt: `Given these songs and the criteria: "${criteria}", select the song ISRCs that are most relevant.
      
Songs:
${songsJson}

Return only the ISRCs of songs that match the criteria.`,
});
lib/mcp/tools/catalogs/registerInsertCatalogSongsTool.ts (1)

34-45: Consider adding error handling for the async operation.

If insertCatalogSongsFunction throws, the error will propagate unhandled. Wrapping in try-catch with a meaningful error response improves reliability and user feedback.

     async (args: InsertCatalogSongsQuery) => {
-      const response = await insertCatalogSongsFunction(args.songs);
-
-      return {
-        content: [
-          {
-            type: "text",
-            text: JSON.stringify(response),
-          },
-        ],
-      };
+      try {
+        const response = await insertCatalogSongsFunction(args.songs);
+        return {
+          content: [
+            {
+              type: "text",
+              text: JSON.stringify(response),
+            },
+          ],
+        };
+      } catch (error) {
+        return {
+          content: [
+            {
+              type: "text",
+              text: JSON.stringify({
+                error: error instanceof Error ? error.message : "Failed to insert catalog songs",
+              }),
+            },
+          ],
+          isError: true,
+        };
+      }
     },
lib/catalog/validateGetCatalogsQuery.ts (1)

26-38: The missing_fields value is an array, which may not match the expected response format.

firstError.path is a (string | number)[]. If the intent is to return field names as a list, this is fine. However, if consumers expect a string, this could cause issues. Consider whether this should be firstError.path.join('.') or kept as-is with documentation.

       return NextResponse.json(
         {
           status: "error",
-          missing_fields: firstError.path,
+          missing_fields: firstError.path.length > 0 ? firstError.path : [firstError.code],
           error: firstError.message,
         },
lib/catalog/validateInsertCatalogSongsQuery.ts (1)

3-11: Consider adding optional fields supported by insertCatalogSongsFunction.

Based on the relevant code snippet from lib/catalog/insertCatalogSongs.ts, the function accepts additional optional fields (name, album, notes, artists). If these should be exposed via the MCP tool, extend the schema:

 const catalogSongInputSchema = z.object({
   catalog_id: z
     .string()
     .min(1, "catalog_id is required")
     .describe(
       "Catalog ID to which the song will be added. Get this from the select_catalogs tool.",
     ),
   isrc: z.string().min(1, "isrc is required").describe("Song ISRC to associate to the catalog"),
+  name: z.string().optional().describe("Song name"),
+  album: z.string().optional().describe("Album name"),
+  notes: z.string().optional().describe("Additional notes"),
+  artists: z.array(z.string()).optional().describe("Array of artist names"),
 });
lib/catalog/analyzeFullCatalog.ts (2)

34-44: Unbounded parallel fetches may overwhelm the database for large catalogs.

For catalogs with many songs (e.g., 10,000 songs = 100 pages), Promise.all fires all requests simultaneously. Consider chunking the parallel requests to limit concurrency.

+const MAX_CONCURRENT_PAGES = 10;
+
+async function fetchPagesInChunks(
+  catalogId: string,
+  pageNumbers: number[],
+  firstPageSongs: CatalogSongWithArtists[],
+): Promise<CatalogSongWithArtists[]> {
+  const results: CatalogSongWithArtists[] = [];
+  for (let i = 0; i < pageNumbers.length; i += MAX_CONCURRENT_PAGES) {
+    const chunk = pageNumbers.slice(i, i + MAX_CONCURRENT_PAGES);
+    const chunkResults = await Promise.all(
+      chunk.map(async pageNum =>
+        pageNum === 1
+          ? firstPageSongs
+          : (await selectCatalogSongsWithArtists({ catalogId, page: pageNum, limit: BATCH_SIZE })).songs,
+      ),
+    );
+    results.push(...chunkResults.flat());
+  }
+  return results;
+}

Then replace lines 36-44 with:

const allSongs = await fetchPagesInChunks(catalogId, pageNumbers, firstPage.songs);

21-28: Add error handling for database query failures.

If selectCatalogSongsWithArtists throws (network issue, database error), the error propagates unhandled. Consider wrapping with try-catch and providing a meaningful error message.

lib/supabase/account_catalogs/selectAccountCatalogs.ts (1)

21-55: Consider adding pagination support for large datasets.

The function returns all matching rows without pagination. For accounts with many catalogs, this could return large datasets. Consider adding optional page and limit parameters similar to selectCatalogSongsWithArtists.
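If pagination is added, the (page, limit) pair translates into the inclusive row bounds that Supabase's .range(from, to) expects. A small sketch of that arithmetic, assuming 1-indexed pages:

```typescript
// Convert a 1-indexed (page, limit) pair into the inclusive [from, to] bounds
// used by Supabase's .range(). Illustrative helper, not existing code.
function toRange(page: number, limit: number): { from: number; to: number } {
  const from = (page - 1) * limit;
  return { from, to: from + limit - 1 };
}
```

The query would then chain `.range(from, to)` onto the existing select, mirroring how selectCatalogSongsWithArtists presumably pages.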

lib/catalog/insertCatalogSongs.ts (4)

36-52: ISRC grouping silently drops falsy ISRCs and overwrites metadata on duplicates

dataByIsrc:

  • Skips entries where song.isrc is falsy, but isrc is typed as a required string. If empty strings are possible from upstream, those rows will be ignored without signaling an error.
  • For multiple rows with the same ISRC but different name/album/notes/artists, the last one wins.

If this is not intended, consider:

  • Validating isrc earlier and rejecting/flagging invalid rows instead of silently dropping them.
  • Deciding a deterministic precedence strategy (e.g., first non-empty, or merging artists across duplicates) and documenting it.
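One possible deterministic strategy, sketched under assumed types (not the actual dataByIsrc code): reject falsy ISRCs loudly, let the first non-empty metadata win, and merge artists across duplicates:

```typescript
// Sketch of a deterministic ISRC grouping strategy per the comment above.
// The SongRow shape is an assumption based on the summary.
type SongRow = { isrc: string; name?: string; artists?: string[] };

function groupByIsrc(rows: SongRow[]): Map<string, SongRow> {
  const byIsrc = new Map<string, SongRow>();
  for (const row of rows) {
    // Fail loudly instead of silently dropping rows with a falsy ISRC.
    if (!row.isrc) throw new Error(`Row "${row.name ?? "unknown"}" has no ISRC`);
    const existing = byIsrc.get(row.isrc);
    if (!existing) {
      byIsrc.set(row.isrc, { ...row, artists: [...(row.artists ?? [])] });
      continue;
    }
    // First non-empty value wins; artists are merged across duplicates.
    existing.name = existing.name || row.name;
    existing.artists = [...new Set([...(existing.artists ?? []), ...(row.artists ?? [])])];
  }
  return byIsrc;
}
```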

57-66: Consider deduplicating (catalog_id, isrc) pairs before calling insertCatalogSongs

You currently call insertCatalogSongs with one row per input element, which means duplicates in the input (same catalog and ISRC) will produce duplicate relationship inserts unless insertCatalogSongs itself handles upserts/constraints.

If insertCatalogSongs expects unique pairs, consider:

  • Deduping via a Set keyed by ${catalog_id}:${isrc}, or
  • Letting insertCatalogSongs accept a distinct flag and handle it centrally.

This avoids potential constraint violations and unnecessary work if clients send redundant rows.
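A minimal sketch of the Set-based dedupe option (the CatalogSongInput shape is assumed from the comment):

```typescript
// Deduplicate (catalog_id, isrc) pairs before insertion, keeping the first
// occurrence of each pair. Illustrative, not the existing insert flow.
type CatalogSongInput = { catalog_id: string; isrc: string };

function dedupeCatalogSongs(songs: CatalogSongInput[]): CatalogSongInput[] {
  const seen = new Set<string>();
  return songs.filter(song => {
    const key = `${song.catalog_id}:${song.isrc}`;
    if (seen.has(key)) return false;
    seen.add(key);
    return true;
  });
}
```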


68-75: Returned “created catalog songs” may include pre-existing relationships

The combination of

  • selectCatalogSongsWithArtists({ isrcs }), and
  • filtering only by catalog_id (uniqueCatalogIds.includes(catalogSong.catalog_id))

means the result can include catalog-song rows that already existed for those ISRCs before this call, as long as they match one of the catalogs in uniqueCatalogIds.

If the intent is “full current state for those catalogs and ISRCs”, this is fine; if the intent is “only relationships newly created by this operation”, you’ll likely need to:

  • Use the return value of insertCatalogSongs (if it returns inserted rows), or
  • Filter by a more specific key set (e.g., the exact (catalog_id, isrc) pairs you attempted to insert, possibly tied to a timestamp or primary key).

Worth clarifying the expected semantics for callers.


77-85: Pagination fields are effectively static single-page metadata

The pagination object always reports:

  • page: 1
  • total_pages: 1
  • limit: filteredCatalogSongs.length

So this is logically “all results, single page”, regardless of input size. If the callers only ever use this in a non-paginated context, that’s fine, but if they expect real pagination semantics (e.g., they pass page/limit somewhere upstream), consider:

  • Threading requested page/limit through the function and applying slicing here, or
  • Removing/renaming the pagination wrapper to avoid implying generic paging support.
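If real paging semantics are wanted, a generic slice-based helper along these lines would do (names are illustrative, not the existing code):

```typescript
// Apply real (page, limit) semantics to an in-memory result set and report
// honest pagination metadata. Pages are 1-indexed.
function paginate<T>(items: T[], page: number, limit: number) {
  const total_pages = Math.max(1, Math.ceil(items.length / limit));
  const start = (page - 1) * limit;
  return {
    songs: items.slice(start, start + limit),
    pagination: { page, limit, total_pages, total_count: items.length },
  };
}
```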
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 956b52d and 158bc0b.

📒 Files selected for processing (16)
  • lib/catalog/analyzeCatalogBatch.ts (1 hunks)
  • lib/catalog/analyzeFullCatalog.ts (1 hunks)
  • lib/catalog/insertCatalogSongs.ts (1 hunks)
  • lib/catalog/processBatchesInParallel.ts (1 hunks)
  • lib/catalog/refineResults.ts (1 hunks)
  • lib/catalog/validateGetCatalogSongsQuery.ts (1 hunks)
  • lib/catalog/validateGetCatalogsQuery.ts (1 hunks)
  • lib/catalog/validateInsertCatalogSongsQuery.ts (1 hunks)
  • lib/const.ts (1 hunks)
  • lib/mcp/tools/catalogs/index.ts (1 hunks)
  • lib/mcp/tools/catalogs/registerGetCatalogSongsTool.ts (1 hunks)
  • lib/mcp/tools/catalogs/registerGetCatalogsTool.ts (1 hunks)
  • lib/mcp/tools/catalogs/registerInsertCatalogSongsTool.ts (1 hunks)
  • lib/mcp/tools/index.ts (2 hunks)
  • lib/supabase/account_catalogs/selectAccountCatalogs.ts (1 hunks)
  • lib/supabase/catalog_songs/selectCatalogSongsWithArtists.ts (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (11)
lib/mcp/tools/catalogs/registerGetCatalogSongsTool.ts (2)
lib/catalog/validateGetCatalogSongsQuery.ts (2)
  • getCatalogSongsQuerySchema (3-16)
  • GetCatalogSongsQuery (18-18)
lib/catalog/analyzeFullCatalog.ts (1)
  • analyzeFullCatalog (21-54)
lib/catalog/analyzeCatalogBatch.ts (2)
lib/supabase/catalog_songs/selectCatalogSongsWithArtists.ts (1)
  • CatalogSongWithArtists (4-8)
lib/const.ts (1)
  • DEFAULT_MODEL (8-8)
lib/catalog/refineResults.ts (2)
lib/supabase/catalog_songs/selectCatalogSongsWithArtists.ts (1)
  • CatalogSongWithArtists (4-8)
lib/catalog/processBatchesInParallel.ts (1)
  • processBatchesInParallel (14-27)
lib/catalog/processBatchesInParallel.ts (2)
lib/supabase/catalog_songs/selectCatalogSongsWithArtists.ts (1)
  • CatalogSongWithArtists (4-8)
lib/catalog/analyzeCatalogBatch.ts (1)
  • analyzeCatalogBatch (14-44)
lib/mcp/tools/catalogs/registerGetCatalogsTool.ts (2)
lib/catalog/validateGetCatalogsQuery.ts (2)
  • getCatalogsQuerySchema (5-10)
  • GetCatalogsQuery (12-12)
lib/supabase/account_catalogs/selectAccountCatalogs.ts (1)
  • selectAccountCatalogs (21-55)
lib/mcp/tools/catalogs/registerInsertCatalogSongsTool.ts (2)
lib/catalog/validateInsertCatalogSongsQuery.ts (2)
  • insertCatalogSongsQuerySchema (13-18)
  • InsertCatalogSongsQuery (20-20)
lib/catalog/insertCatalogSongs.ts (1)
  • insertCatalogSongsFunction (32-86)
lib/catalog/validateGetCatalogsQuery.ts (1)
lib/networking/getCorsHeaders.ts (1)
  • getCorsHeaders (6-12)
lib/mcp/tools/index.ts (2)
lib/mcp/tools/images/index.ts (1)
  • registerAllImageTools (10-13)
lib/mcp/tools/catalogs/index.ts (1)
  • registerAllCatalogTools (11-15)
lib/supabase/account_catalogs/selectAccountCatalogs.ts (1)
types/database.types.ts (1)
  • Tables (3755-3780)
lib/catalog/analyzeFullCatalog.ts (2)
lib/supabase/catalog_songs/selectCatalogSongsWithArtists.ts (2)
  • CatalogSongWithArtists (4-8)
  • selectCatalogSongsWithArtists (42-113)
lib/catalog/refineResults.ts (1)
  • refineResults (14-25)
lib/mcp/tools/catalogs/index.ts (3)
lib/mcp/tools/catalogs/registerGetCatalogsTool.ts (1)
  • registerGetCatalogsTool (13-42)
lib/mcp/tools/catalogs/registerGetCatalogSongsTool.ts (1)
  • registerGetCatalogSongsTool (14-54)
lib/mcp/tools/catalogs/registerInsertCatalogSongsTool.ts (1)
  • registerInsertCatalogSongsTool (14-47)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Vercel Agent Review
🔇 Additional comments (6)
lib/const.ts (1)

8-8: DEFAULT_MODEL is fine; confirm this is truly the global default across environments

The literal "openai/gpt-5-mini" as a shared DEFAULT_MODEL export looks reasonable. Just make sure this is the model you want in all environments (dev/stage/prod); if not, consider process.env.DEFAULT_MODEL ?? "openai/gpt-5-mini" so you can override without code changes.
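
As a quick sketch of that override pattern (the DEFAULT_MODEL env var name and the helper function are illustrative, not part of this PR):

```typescript
// Hypothetical helper: resolve the default model from an env map,
// falling back to the hard-coded literal when no override is set.
// In lib/const.ts this would be called as resolveDefaultModel(process.env).
export function resolveDefaultModel(env: Record<string, string | undefined>): string {
  // Treat blank values as unset so an empty var does not disable the fallback.
  const override = env.DEFAULT_MODEL?.trim();
  return override ? override : "openai/gpt-5-mini";
}
```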

lib/mcp/tools/index.ts (1)

5-6: LGTM! Clean integration of new tool categories.

The imports and registration calls for image and catalog tools follow the existing pattern and integrate cleanly with the tool registration flow.

Also applies to: 18-19

lib/supabase/catalog_songs/selectCatalogSongsWithArtists.ts (1)

4-8: LGTM! Appropriate type exposure.

Making CatalogSongWithArtists publicly exported enables downstream catalog processing modules to reference this type directly, which aligns with the broader migration objectives.

lib/mcp/tools/catalogs/index.ts (1)

1-15: LGTM! Clean barrel module pattern.

The catalog tool registration aggregation follows the established pattern and provides a clean single entry point for registering all catalog-related MCP tools.

lib/catalog/validateGetCatalogSongsQuery.ts (1)

1-18: LGTM!

Clean Zod schema definition with appropriate validations and descriptive metadata. The schema correctly enforces required fields and provides helpful descriptions for MCP tool documentation.

lib/catalog/insertCatalogSongs.ts (1)

1-6: Imports and overall structure look consistent

The imports match usage in this module, and the high-level flow (normalize → processSongsInput → insert relationships → re-query with artists) is coherent.

Comment on lines +14 to +44
export async function analyzeCatalogBatch(
  songs: CatalogSongWithArtists[],
  criteria: string,
): Promise<CatalogSongWithArtists[]> {
  // Use AI to select relevant songs from this batch
  const { object } = await generateObject({
    model: DEFAULT_MODEL,
    schema: z.object({
      selected_song_isrcs: z
        .array(z.string())
        .describe("Array of song ISRCs that match the criteria"),
    }),
    prompt: `Given these songs and the criteria: "${criteria}", select the song ISRCs that are most relevant.

Songs:
${JSON.stringify(
  songs.map(s => ({
    isrc: s.isrc,
    name: s.name,
    artist: s.artists.map(a => a.name).join(", "),
  })),
  null,
  2,
)}

Return only the ISRCs of songs that match the criteria.`,
  });

  // Filter songs based on AI selection
  return songs.filter(song => (object.selected_song_isrcs as string[]).includes(song.isrc));
}

🛠️ Refactor suggestion | 🟠 Major

Add error handling for AI generation call.

The generateObject call lacks error handling. AI API calls can fail due to rate limits, network issues, invalid prompts, or model errors.

Wrap the AI call in a try-catch:

 export async function analyzeCatalogBatch(
   songs: CatalogSongWithArtists[],
   criteria: string,
 ): Promise<CatalogSongWithArtists[]> {
+  try {
     // Use AI to select relevant songs from this batch
     const { object } = await generateObject({
       model: DEFAULT_MODEL,
       schema: z.object({
         selected_song_isrcs: z
           .array(z.string())
           .describe("Array of song ISRCs that match the criteria"),
       }),
       prompt: `Given these songs and the criteria: "${criteria}", select the song ISRCs that are most relevant.
           
 Songs:
 ${JSON.stringify(
   songs.map(s => ({
     isrc: s.isrc,
     name: s.name,
     artist: s.artists.map(a => a.name).join(", "),
   })),
   null,
   2,
 )}
 
 Return only the ISRCs of songs that match the criteria.`,
     });
 
     // Filter songs based on AI selection
     const selectedIsrcs = object.selected_song_isrcs;
     if (!Array.isArray(selectedIsrcs)) {
       console.warn('AI response did not include valid selected_song_isrcs array');
       return [];
     }
     
     return songs.filter(song => selectedIsrcs.includes(song.isrc));
+  } catch (error) {
+    console.error(`Failed to analyze batch of ${songs.length} songs:`, error);
+    // Return empty array or all songs depending on desired failure mode
+    return []; // Conservative: exclude batch on error
+    // OR: return songs; // Permissive: include all songs on error
+  }
 }

  });

  // Filter songs based on AI selection
  return songs.filter(song => (object.selected_song_isrcs as string[]).includes(song.isrc));

🛠️ Refactor suggestion | 🟠 Major

Validate AI response before type assertion.

Line 43 uses a type assertion without validating the structure. While the schema constrains selected_song_isrcs to be an array of strings, runtime validation adds robustness against unexpected AI responses.

Apply this diff to add validation:

   // Filter songs based on AI selection
-  return songs.filter(song => (object.selected_song_isrcs as string[]).includes(song.isrc));
+  const selectedIsrcs = object.selected_song_isrcs;
+  if (!Array.isArray(selectedIsrcs)) {
+    console.warn('AI response did not include valid selected_song_isrcs array');
+    return [];
+  }
+  
+  return songs.filter(song => selectedIsrcs.includes(song.isrc));
}

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In lib/catalog/analyzeCatalogBatch.ts around line 43, the code blindly asserts
object.selected_song_isrcs as string[] and uses .includes, which can throw if
the AI response is malformed; validate at runtime by checking
Array.isArray(object.selected_song_isrcs) and that every element is a string (or
coerce/filter non-strings), assign a safe string[] (e.g., [] when invalid), then
use that validated array in songs.filter(...). If invalid, either log/record the
anomaly or fall back to an empty list so the filter is safe.
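
A small standalone guard in the spirit of that prompt (the function name is illustrative; the real fix would live next to the generateObject call):

```typescript
// Coerce an unknown AI response field into a safe string[]:
// non-arrays become [], and non-string elements are dropped.
export function asStringArray(value: unknown): string[] {
  if (!Array.isArray(value)) return [];
  return value.filter((v): v is string => typeof v === "string");
}
```

The filter then becomes `songs.filter(song => asStringArray(object.selected_song_isrcs).includes(song.isrc))`, which cannot throw on a malformed response.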

Comment on lines +14 to +25
export async function refineResults(
songs: CatalogSongWithArtists[],
criteria: string,
): Promise<CatalogSongWithArtists[]> {
if (songs.length <= MAX_RESULTS) return songs;

// Process in parallel batches - AI naturally selects best matches from whatever set it's given
const filtered = await processBatchesInParallel(songs, criteria);

// Recursively refine if still too many
return refineResults(filtered, criteria);
}

⚠️ Potential issue | 🟠 Major

Add recursion depth limit to prevent stack overflow.

The recursive refinement lacks a depth limit and assumes processBatchesInParallel will always reduce the result count. If AI filtering is ineffective or inconsistent, this could lead to excessive recursion or even stack overflow.

Apply this diff to add a depth limit:

+const MAX_DEPTH = 10;
+
 /**
  * Recursively filters song selection until results are under MAX_RESULTS
  * Single Responsibility: Ensure result count stays within LLM context limits
  *
  * @param songs - The songs to refine
  * @param criteria - The criteria to use to refine the songs
+ * @param depth - Current recursion depth (internal use)
  * @returns The refined songs
  */
 export async function refineResults(
   songs: CatalogSongWithArtists[],
   criteria: string,
+  depth: number = 0,
 ): Promise<CatalogSongWithArtists[]> {
   if (songs.length <= MAX_RESULTS) return songs;
+  
+  if (depth >= MAX_DEPTH) {
+    console.warn(`Max refinement depth (${MAX_DEPTH}) reached with ${songs.length} songs. Truncating results.`);
+    return songs.slice(0, MAX_RESULTS);
+  }
 
   // Process in parallel batches - AI naturally selects best matches from whatever set it's given
   const filtered = await processBatchesInParallel(songs, criteria);
+  
+  // Guard against cases where filtering doesn't reduce the count
+  if (filtered.length >= songs.length) {
+    console.warn(`Refinement made no progress. Truncating ${filtered.length} songs to ${MAX_RESULTS}.`);
+    return filtered.slice(0, MAX_RESULTS);
+  }
 
   // Recursively refine if still too many
-  return refineResults(filtered, criteria);
+  return refineResults(filtered, criteria, depth + 1);
 }
🤖 Prompt for AI Agents
In lib/catalog/refineResults.ts around lines 14-25, the recursive refineResults
has no depth limit; add a MAX_RECURSION_DEPTH constant and an optional depth
parameter (e.g. depth = 0) to the function signature, increment depth on each
recursive call and stop recursing when depth >= MAX_RECURSION_DEPTH by returning
the current songs (or logging/warning) instead of recursing further; update the
initial call sites to use the new signature where necessary so recursion is
bounded.
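
The bounded pattern can be sketched generically; refineBounded, the step callback, and the constants here are illustrative stand-ins for refineResults / processBatchesInParallel, not code from this PR:

```typescript
// Generic bounded refinement: stop when under `max`, when the depth budget
// is exhausted, or when a step makes no progress; truncate in the latter cases.
export async function refineBounded<T>(
  items: T[],
  step: (items: T[]) => Promise<T[]>,
  max: number,
  depth = 0,
  maxDepth = 10,
): Promise<T[]> {
  if (items.length <= max) return items;
  if (depth >= maxDepth) return items.slice(0, max);
  const next = await step(items);
  // Guard against steps that fail to shrink the input.
  if (next.length >= items.length) return next.slice(0, max);
  return refineBounded(next, step, max, depth + 1, maxDepth);
}
```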

Comment on lines +20 to +42
export function validateGetCatalogsQuery(request: NextRequest): NextResponse | GetCatalogsQuery {
  const { searchParams } = new URL(request.url);
  const params = Object.fromEntries(searchParams.entries());

  const validationResult = getCatalogsQuerySchema.safeParse(params);

  if (!validationResult.success) {
    const firstError = validationResult.error.issues[0];
    return NextResponse.json(
      {
        status: "error",
        missing_fields: firstError.path,
        error: firstError.message,
      },
      {
        status: 400,
        headers: getCorsHeaders(),
      },
    );
  }

  return validationResult.data;
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# Search for all usages of validateGetCatalogsQuery
rg -n "validateGetCatalogsQuery" --type=ts --type=tsx --type=js --type=jsx

Repository: Recoupable-com/Recoup-API

Length of output: 96


🏁 Script executed:

# Search for all usages of validateGetCatalogsQuery
rg -n "validateGetCatalogsQuery"

Repository: Recoupable-com/Recoup-API

Length of output: 310


🏁 Script executed:

# Read the registerGetCatalogsTool.ts file to see how validateGetCatalogsQuery is used
cat -n lib/mcp/tools/catalogs/registerGetCatalogsTool.ts

Repository: Recoupable-com/Recoup-API

Length of output: 1799


🏁 Script executed:

# Search for route handler files that might use validateGetCatalogsQuery
# Look in common API route locations
find . -type f -name "route.ts" -o -name "route.js" | head -20

Repository: Recoupable-com/Recoup-API

Length of output: 631


🏁 Script executed:

# Check the catalogs route
cat -n app/api/catalogs/songs/route.ts

Repository: Recoupable-com/Recoup-API

Length of output: 1907


🏁 Script executed:

# Search for catalogs-related route files
find . -type f -path "*/api/catalogs*" -name "*.ts"

Repository: Recoupable-com/Recoup-API

Length of output: 102


🏁 Script executed:

# Search for validateGetCatalogsQuery in the handler files
rg -n "validateGetCatalogsQuery" lib/songs/

Repository: Recoupable-com/Recoup-API

Length of output: 51


Remove the validateGetCatalogsQuery function; it has no consumers.

The function is never called anywhere in the codebase. Although the module is imported in registerGetCatalogsTool.ts, only the getCatalogsQuerySchema and GetCatalogsQuery type are used there; the validation function itself is dead code. MCP tools use the schema directly, and no route handlers invoke this function.

🤖 Prompt for AI Agents
In lib/catalog/validateGetCatalogsQuery.ts around lines 20 to 42, the
validateGetCatalogsQuery function is unused and should be removed; delete the
entire file to eliminate dead code and update any imports across the repo —
specifically remove the import of validateGetCatalogsQuery from
registerGetCatalogsTool.ts (leaving getCatalogsQuerySchema and GetCatalogsQuery
type imports intact) and run a quick search to ensure no other references remain
before committing.

Comment on lines +38 to +52
async (args: GetCatalogSongsQuery) => {
  const response = await analyzeFullCatalog({
    catalogId: args.catalog_id,
    criteria: args.criteria,
  });

  return {
    content: [
      {
        type: "text",
        text: JSON.stringify(response),
      },
    ],
  };
},

🛠️ Refactor suggestion | 🟠 Major

Add error handling for async operations.

The handler calls analyzeFullCatalog without error handling. If the catalog analysis fails (e.g., database connection issues, AI model failures), the error will propagate unhandled to the MCP framework.

Apply this pattern to add error handling:

     async (args: GetCatalogSongsQuery) => {
+      try {
       const response = await analyzeFullCatalog({
         catalogId: args.catalog_id,
         criteria: args.criteria,
       });
 
       return {
         content: [
           {
             type: "text",
             text: JSON.stringify(response),
           },
         ],
       };
+      } catch (error) {
+        return {
+          content: [
+            {
+              type: "text",
+              text: JSON.stringify({
+                error: error instanceof Error ? error.message : "Failed to analyze catalog"
+              }),
+            },
+          ],
+          isError: true,
+        };
+      }
     },
🤖 Prompt for AI Agents
In lib/mcp/tools/catalogs/registerGetCatalogSongsTool.ts around lines 38–52,
wrap the analyzeFullCatalog call in a try/catch: call analyzeFullCatalog inside
try, and in catch log the error (using the available logger or console.error)
and return a structured error response (e.g., content with type "text" and text
set to a JSON string containing an error message and optional error details)
instead of letting the exception propagate; ensure the success return stays in
the try block and the catch returns the same response shape the MCP expects.

Comment on lines +29 to +41
async (args: GetCatalogsQuery) => {
  const data = await selectAccountCatalogs({ accountIds: [args.account_id] });

  return {
    content: [
      {
        type: "text",
        text: JSON.stringify(data),
      },
    ],
  };
},
);

🛠️ Refactor suggestion | 🟠 Major

Add error handling for database operations.

The handler calls selectAccountCatalogs without error handling. Database queries can fail due to connection issues, permission errors, or invalid data.

Apply this pattern:

     async (args: GetCatalogsQuery) => {
+      try {
       const data = await selectAccountCatalogs({ accountIds: [args.account_id] });
 
       return {
         content: [
           {
             type: "text",
             text: JSON.stringify(data),
           },
         ],
       };
+      } catch (error) {
+        return {
+          content: [
+            {
+              type: "text",
+              text: JSON.stringify({
+                error: error instanceof Error ? error.message : "Failed to fetch catalogs"
+              }),
+            },
+          ],
+          isError: true,
+        };
+      }
     },
🤖 Prompt for AI Agents
In lib/mcp/tools/catalogs/registerGetCatalogsTool.ts around lines 29 to 41, the
call to selectAccountCatalogs lacks error handling; wrap the await
selectAccountCatalogs(...) call in a try/catch, log the caught error (using the
module's logger or console if none), and return a safe failure response (e.g.,
content with an error text message or a standardized error object) instead of
letting the exception bubble; ensure any sensitive error details are not exposed
to the client and optionally include an internal error code for telemetry.

Comment on lines +4 to +7
type AccountCatalogWithCatalog = {
  catalog: string;
  catalogs: Tables<"catalogs">[];
};

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

cat lib/supabase/account_catalogs/selectAccountCatalogs.ts

Repository: Recoupable-com/Recoup-API

Length of output: 1351


🏁 Script executed:

# Search for database schema, migrations, or relationship definitions
fd -t f -e sql -e ts | grep -i "schema\|migration\|database" | head -20

Repository: Recoupable-com/Recoup-API

Length of output: 92


🏁 Script executed:

# Look for database types file and check the relationship
fd -t f "database.types.ts"

Repository: Recoupable-com/Recoup-API

Length of output: 92


🏁 Script executed:

# Search for how AccountCatalogWithCatalog is actually used in the codebase
rg -n "AccountCatalogWithCatalog" --type=ts -B 2 -A 3

Repository: Recoupable-com/Recoup-API

Length of output: 1094


🏁 Script executed:

cat types/database.types.ts | head -200

Repository: Recoupable-com/Recoup-API

Length of output: 5968


🏁 Script executed:

rg -n "catalogs:" types/database.types.ts -A 50 | head -80

Repository: Recoupable-com/Recoup-API

Length of output: 2406


🏁 Script executed:

# Check if there are any tests or other usages that might reveal the actual runtime behavior
rg -n "selectAccountCatalogs" --type=ts -B 2 -A 5

Repository: Recoupable-com/Recoup-API

Length of output: 2214


Type definition does not match query result shape.

The catalog column in account_catalogs has a many-to-one relationship to catalogs. When selecting this relationship via catalogs!inner (...), Supabase returns a single nested object per row, not an array. The type should be corrected:

type AccountCatalogWithCatalog = {
  catalog: string;
-  catalogs: Tables<"catalogs">[];
+  catalogs: Tables<"catalogs">;
};

The !inner modifier filters rows where the relationship exists but does not affect the cardinality of the relationship itself.

🤖 Prompt for AI Agents
In lib/supabase/account_catalogs/selectAccountCatalogs.ts around lines 4 to 7,
the AccountCatalogWithCatalog type declares catalogs as an array but the
Supabase relationship selected via catalogs!inner returns a single nested object
per row; update the type so catalogs is a single Tables<"catalogs"> (not an
array) and keep catalog (ID) as-is; no runtime logic change needed because
!inner filters existence but does not change cardinality.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (3)
lib/openai/getOpenAIErrorMessage.ts (1)

7-31: LGTM with minor observation on case sensitivity.

The error handling logic is clean and covers common OpenAI error scenarios. One consideration: includes() is case-sensitive, so error messages like "Rate Limit" or "RATE LIMIT" would not match. OpenAI typically uses lowercase, but if robustness is a concern, consider using a case-insensitive check:

const lowerMessage = errorMessage.toLowerCase();
if (lowerMessage.includes("rate limit")) { ... }

This is optional since OpenAI's current error messages are consistently lowercase.
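
Sketched as a standalone function (the matched substrings and messages are illustrative, not the actual contents of getOpenAIErrorMessage):

```typescript
// Case-insensitive matching: normalize once, then test lowercase substrings.
export function classifyOpenAIError(errorMessage: string): string {
  const lower = errorMessage.toLowerCase();
  if (lower.includes("rate limit")) return "Rate limit exceeded. Please try again later.";
  if (lower.includes("invalid api key")) return "Invalid OpenAI API key.";
  return errorMessage;
}
```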

lib/openai/retrieveVideoContent.ts (2)

35-46: Consider memory implications for large videos.

Converting the entire video to a base64 data URL keeps the full video in memory (and base64 encoding increases size by ~33%). For large videos (e.g., 100MB+), this could cause memory pressure.

If large video support is expected, consider:

  1. Streaming the content directly to storage instead of buffering
  2. Returning only metadata and a signed URL for download

This may be acceptable for the current use case if video sizes are bounded.
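
The ~33% figure follows from base64 encoding every 3 input bytes into 4 output characters; a tiny helper makes the overhead concrete (the function name is illustrative):

```typescript
// Padded base64 output length for n input bytes: ceil(n / 3) * 4 characters.
// A 100 MB video therefore becomes a ~133 MB string before the data-URL prefix.
export function base64EncodedLength(byteLength: number): number {
  return Math.ceil(byteLength / 3) * 4;
}
```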


21-26: Consider adding a request timeout.

The fetch call has no timeout, which could cause indefinite hangs on slow or unresponsive connections. Consider using AbortController with a timeout:

+  const controller = new AbortController();
+  const timeoutId = setTimeout(() => controller.abort(), 300000); // 5 min timeout
+
   const response = await fetch(`https://api.openai.com/v1/videos/${videoId}/content`, {
     method: "GET",
     headers: {
       Authorization: `Bearer ${apiKey}`,
     },
+    signal: controller.signal,
   });
+
+  clearTimeout(timeoutId);
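
The same AbortController pattern can be factored into a reusable wrapper; withTimeout here is an illustrative helper under stated assumptions, not code from this PR:

```typescript
// Run an abortable operation with a deadline: the signal is aborted after `ms`,
// and the timer is always cleared so it cannot keep the process alive.
export async function withTimeout<T>(
  run: (signal: AbortSignal) => Promise<T>,
  ms: number,
): Promise<T> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), ms);
  try {
    return await run(controller.signal);
  } finally {
    clearTimeout(timer);
  }
}
```

A fetch call would then pass the provided signal through, e.g. withTimeout(signal => fetch(url, { signal }), 300_000).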
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 158bc0b and b7a4c0c.

📒 Files selected for processing (14)
  • lib/mcp/tools/index.ts (2 hunks)
  • lib/mcp/tools/sora2/index.ts (1 hunks)
  • lib/mcp/tools/sora2/registerGenerateVideoTool.ts (1 hunks)
  • lib/mcp/tools/sora2/registerRetrieveVideoContentTool.ts (1 hunks)
  • lib/mcp/tools/sora2/registerRetrieveVideoTool.ts (1 hunks)
  • lib/openai/generateVideo.ts (1 hunks)
  • lib/openai/getOpenAIErrorMessage.ts (1 hunks)
  • lib/openai/retrieveVideo.ts (1 hunks)
  • lib/openai/retrieveVideoContent.ts (1 hunks)
  • lib/video/generateVideo.ts (1 hunks)
  • lib/video/retrieveVideo.ts (1 hunks)
  • lib/video/retrieveVideoContent.ts (1 hunks)
  • lib/video/validateGenerateVideoQuery.ts (1 hunks)
  • lib/video/validateRetrieveVideoQuery.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • lib/mcp/tools/index.ts
🧰 Additional context used
🧬 Code graph analysis (8)
lib/openai/retrieveVideo.ts (1)
lib/openai/generateVideo.ts (1)
  • GenerateVideoResponse (8-18)
lib/mcp/tools/sora2/index.ts (3)
lib/mcp/tools/sora2/registerGenerateVideoTool.ts (1)
  • registerGenerateVideoTool (14-42)
lib/mcp/tools/sora2/registerRetrieveVideoTool.ts (1)
  • registerRetrieveVideoTool (14-44)
lib/mcp/tools/sora2/registerRetrieveVideoContentTool.ts (1)
  • registerRetrieveVideoContentTool (14-45)
lib/mcp/tools/sora2/registerRetrieveVideoTool.ts (2)
lib/video/validateRetrieveVideoQuery.ts (2)
  • retrieveVideoQuerySchema (3-8)
  • RetrieveVideoQuery (10-10)
lib/video/retrieveVideo.ts (1)
  • retrieveVideoFunction (18-61)
lib/mcp/tools/sora2/registerGenerateVideoTool.ts (2)
lib/video/validateGenerateVideoQuery.ts (2)
  • generateVideoQuerySchema (3-17)
  • GenerateVideoQuery (19-19)
lib/video/generateVideo.ts (1)
  • generateVideoFunction (20-66)
lib/video/generateVideo.ts (3)
lib/openai/generateVideo.ts (1)
  • GenerateVideoResponse (8-18)
lib/video/validateGenerateVideoQuery.ts (1)
  • GenerateVideoQuery (19-19)
lib/openai/getOpenAIErrorMessage.ts (1)
  • getOpenAIErrorMessage (7-31)
lib/video/retrieveVideoContent.ts (2)
lib/video/validateRetrieveVideoQuery.ts (1)
  • RetrieveVideoQuery (10-10)
lib/openai/getOpenAIErrorMessage.ts (1)
  • getOpenAIErrorMessage (7-31)
lib/mcp/tools/sora2/registerRetrieveVideoContentTool.ts (2)
lib/video/validateRetrieveVideoQuery.ts (2)
  • retrieveVideoQuerySchema (3-8)
  • RetrieveVideoQuery (10-10)
lib/video/retrieveVideoContent.ts (1)
  • retrieveVideoContentFunction (22-54)
lib/video/retrieveVideo.ts (3)
lib/openai/generateVideo.ts (1)
  • GenerateVideoResponse (8-18)
lib/video/validateRetrieveVideoQuery.ts (1)
  • RetrieveVideoQuery (10-10)
lib/openai/getOpenAIErrorMessage.ts (1)
  • getOpenAIErrorMessage (7-31)
🔇 Additional comments (9)
lib/video/validateRetrieveVideoQuery.ts (1)

1-10: LGTM!

Clean validation schema following the established project pattern. The min(1) constraint ensures the video ID is non-empty, and the .describe() provides useful documentation for MCP tool consumers.

lib/video/retrieveVideoContent.ts (1)

22-54: LGTM!

Well-structured wrapper function with proper error handling. The try/catch ensures errors are caught, logged, and transformed into user-friendly messages via getOpenAIErrorMessage. The structured return type with separate success/error fields makes it easy for consumers to handle both cases.

lib/mcp/tools/sora2/index.ts (1)

1-15: LGTM!

Clean aggregator pattern for registering all Sora2-related MCP tools. This follows the established project convention and keeps the registration logic modular.

lib/mcp/tools/sora2/registerGenerateVideoTool.ts (1)

14-41: MCP generate_sora_2_video tool wiring looks consistent and type-safe.

Registration correctly uses generateVideoQuerySchema for validation, a typed GenerateVideoQuery handler, and returns a JSON-serialized VideoGenerationResult in the standard { content: [{ type: "text", text }] } shape. No issues from this file alone.

lib/mcp/tools/sora2/registerRetrieveVideoTool.ts (1)

14-43: retrieve_sora_2_video MCP tool wiring matches the rest of the Sora2 suite.

The tool correctly uses retrieveVideoQuerySchema for input validation, passes a typed RetrieveVideoQuery into retrieveVideoFunction, and returns the normalized result as JSON text. This is consistent and ready to use.

lib/mcp/tools/sora2/registerRetrieveVideoContentTool.ts (1)

14-45: retrieve_sora_2_video_content MCP tool registration is consistent and clear.

The tool uses the shared retrieveVideoQuerySchema, delegates to retrieveVideoContentFunction, and returns a JSON-serialized result. The description documents dependencies on generate_sora_2_video / retrieve_sora_2_video nicely. No changes needed here.

lib/video/generateVideo.ts (1)

20-65: Video generation wrapper is well-structured and consistent with the retrieval flow.

The function cleanly normalizes the OpenAI response into VideoGenerationResult, uses sensible defaults (seconds, size), and provides informative messages for both completed and queued jobs. The error path is symmetric and leverages getOpenAIErrorMessage while keeping a stable result shape. The only follow-up is to keep this seconds default in sync with the validation schema/docs (already noted there); otherwise this looks good.

lib/video/retrieveVideo.ts (1)

18-60: retrieveVideoFunction normalization and error handling look solid.

This wrapper cleanly converts the low-level OpenAI response into a stable RetrieveVideoResult, with clear status messaging for completed/processing/queued states. The error path correctly logs, uses getOpenAIErrorMessage, and preserves the input video_id while marking status as "failed". This fits well with the rest of the video tooling.

lib/openai/retrieveVideo.ts (1)

9-31: retrieveVideo implementation is correct and well-structured.

The endpoint GET /v1/videos/{videoId} and response shape match the OpenAI API contract. Error handling with guarded JSON parsing is sound. Consider adding an Accept: "application/json" header for explicit expectations, though it is optional.

Comment on lines +1 to +56
export interface GenerateVideoRequest {
model: string;
prompt: string;
seconds: number;
size: string;
}

export interface GenerateVideoResponse {
id: string;
object: string;
model: string;
status: string;
progress?: number;
created_at?: number;
size: string;
seconds: string;
quality?: string;
}

/**
* Generates a video using OpenAI's API
*
* @param request - The request parameters for video generation
* @returns The video generation response
*/
export async function generateVideo(request: GenerateVideoRequest): Promise<GenerateVideoResponse> {
const apiKey = process.env.OPENAI_API_KEY;

if (!apiKey) {
throw new Error("OPENAI_API_KEY environment variable is not set");
}

const formData = new FormData();
formData.append("model", request.model);
formData.append("prompt", request.prompt);
formData.append("seconds", request.seconds.toString());
formData.append("size", request.size);

const response = await fetch("https://api.openai.com/v1/videos", {
method: "POST",
headers: {
Authorization: `Bearer ${apiKey}`,
},
body: formData,
});

if (!response.ok) {
const errorData = await response.json().catch(() => ({}));
throw new Error(
errorData.error?.message || `API request failed with status ${response.status}`,
);
}

const data: GenerateVideoResponse = await response.json();
return data;
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

OpenAI video generation API endpoint specification POST /v1/videos parameters response format 2024 2025

💡 Result:

Current (OpenAI Platform) specification for POST /v1/videos — summary of request parameters and returned response (as documented by OpenAI). Sources: OpenAI API reference. [1]

POST https://api.openai.com/v1/videos

Request (multipart/form-data or JSON where noted)

  • model (string, optional) — video model to use (allowed: sora-2, sora-2-pro). Defaults to sora-2. [1]
  • prompt (string, required) — text prompt describing the video to generate. [1]
  • input_reference (file, optional) — image/file reference to guide generation. [1]
  • seconds (string, optional) — clip duration in seconds (allowed values: "4", "8", "12"). Defaults to "4". [1]
  • size (string, optional) — output resolution (allowed values: "720x1280", "1280x720", "1024x1792", "1792x1024"). Defaults to "720x1280". [1]

Example curl request

Response (video job object)

  • id (string) — video job id (e.g., "video_123"). [1]
  • object (string) — "video". [1]
  • model (string) — model used (e.g., "sora-2"). [1]
  • status (string) — lifecycle status (e.g., "queued", "completed", "failed"). [1]
  • progress (integer) — approximate completion percent. [1]
  • created_at (integer) — unix timestamp (seconds) when job created. [1]
  • completed_at (integer, optional) — unix timestamp when finished. [1]
  • expires_at (integer, optional) — unix timestamp when downloadable assets expire. [1]
  • prompt (string) — original prompt. [1]
  • seconds (string) — duration (e.g., "8"). [1]
  • size (string) — resolution (e.g., "1024x1792"). [1]
  • remixed_from_video_id (string, optional) — if a remix. [1]
  • error (object, optional) — error payload if generation failed. [1]

Example response JSON (from docs)
{
  "id": "video_123",
  "object": "video",
  "model": "sora-2",
  "status": "queued",
  "progress": 0,
  "created_at": 1712697600,
  "size": "1024x1792",
  "seconds": "8",
  "quality": "standard"
} [1]

Related endpoints

  • POST /v1/videos/{video_id}/remix — create a remix of a completed video. [1]
  • GET /v1/videos — list jobs; GET /v1/videos/{video_id} — retrieve job metadata; GET /v1/videos/{video_id}/content — download video bytes; DELETE /v1/videos/{video_id} — delete job. [1]

If you want the raw live docs or examples for 2024 vs 2025 snapshots, I can fetch archived or changelog entries and show differences. Sources: OpenAI API reference (Platform docs). [1]

Sources
[1] OpenAI Platform — Videos API reference (Create/list/retrieve/download/delete/remix) — platform.openai.com/docs/api-reference/videos.


API contract has typing mismatches; update request/response interfaces to match OpenAI spec.

The endpoint and FormData approach are correct, but the code has several discrepancies with the OpenAI video API:

  • Request: seconds must be validated to only accept "4", "8", or "12"; currently any number converts to string. Also, model should be optional (defaults to sora-2), not required in GenerateVideoRequest.
  • Response: Add optional fields completed_at, expires_at (both unix timestamps), prompt, and remixed_from_video_id to GenerateVideoResponse. The quality field in the example may need clarification against the schema.
  • Runtime: fetch and FormData support is fine for Node 18+ / edge runtimes (standard now); no polyfill needed.

Update the interfaces to match the OpenAI API reference and add validation for the seconds enum values.

🤖 Prompt for AI Agents
In lib/openai/generateVideo.ts around lines 1 to 56, update the TypeScript
interfaces and add runtime validation: change GenerateVideoRequest.model to
optional and treat missing model as 'sora-2' when building the FormData; change
seconds to a string union type of "4" | "8" | "12" (not any number) and validate
at runtime to throw if request.seconds is not one of those values; extend
GenerateVideoResponse with optional completed_at and expires_at (unix
timestamps), prompt, and remixed_from_video_id fields, keep quality as optional
string; ensure when appending to FormData you append the validated/normalized
seconds string and model (defaulting to 'sora-2') before making the fetch call.
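Applying that prompt, the corrected contract might look like the sketch below. This is an illustration against the documented spec, not the merged code, and the runtime check exists for untyped callers who bypass the union type:

```typescript
// Per the OpenAI docs, seconds is an enum of string values and model is optional.
type VideoSeconds = "4" | "8" | "12";

interface GenerateVideoRequestFixed {
  model?: "sora-2" | "sora-2-pro"; // defaults to "sora-2"
  prompt: string;
  seconds?: VideoSeconds; // defaults to "4"
  size?: string; // defaults to "720x1280"
}

// Normalize defaults and validate seconds at runtime before building FormData.
function normalizeRequest(req: GenerateVideoRequestFixed) {
  const seconds = req.seconds ?? "4";
  if (!["4", "8", "12"].includes(seconds)) {
    throw new Error(`Invalid seconds value: ${seconds}. Allowed: 4, 8, 12`);
  }
  return {
    model: req.model ?? "sora-2",
    prompt: req.prompt,
    seconds,
    size: req.size ?? "720x1280",
  };
}
```

The normalized object can then be appended field-by-field to the FormData exactly as the existing code does.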

Comment on lines +3 to +19
export const generateVideoQuerySchema = z.object({
prompt: z
.string()
.min(1, "Prompt is required")
.describe("Text description of the video to generate"),
seconds: z
.number()
.min(4)
.optional()
.describe("Duration of the video in seconds (default: 4, max: 20)"),
size: z
.enum(["720x1280", "1280x720"])
.optional()
.describe("Size of the video: 720x1280 (default portrait), 1280x720 (landscape)"),
});

export type GenerateVideoQuery = z.infer<typeof generateVideoQuerySchema>;

⚠️ Potential issue | 🟠 Major

Align documented defaults with implementation and enforce the described max duration.

Two things to tidy up here:

  • The JSDoc/description says seconds has a default of 4, but generateVideoFunction currently defaults seconds = 8. Please align either the description or the implementation so callers aren’t surprised by the actual default.
  • The description mentions “max: 20”, but the schema only enforces .min(4). You can enforce the upper bound directly in the schema:
  seconds: z
     .number()
-    .min(4)
+    .min(4)
+    .max(20)
     .optional()
🤖 Prompt for AI Agents
In lib/video/validateGenerateVideoQuery.ts around lines 3 to 19, the seconds
field's schema and description are inconsistent with the implementation and lack
an upper bound; update the seconds schema to enforce .max(20) and set a default
that matches the implementation (e.g., .default(8)) and update the description
text to "Duration of the video in seconds (default: 8, max: 20)" so the schema
enforces the upper bound and the documented default matches runtime behavior.

Comment on lines +10 to +12
.min(4)
.optional()
.describe("Duration of the video in seconds (default: 4, max: 20)"),

Suggested change
-.min(4)
-.optional()
-.describe("Duration of the video in seconds (default: 4, max: 20)"),
+.min(4, "Minimum 4 seconds")
+.max(20, "Maximum 20 seconds")
+.optional()
+.describe("Duration of the video in seconds (default: 8, max: 20)"),

The video seconds validation schema is missing the .max(20) constraint that's mentioned in the description, and the documented default value (4) doesn't match the actual default value used in the code (8).

Analysis

Missing maximum validation and default value mismatch in video generation schema

What fails: The Zod schema in lib/video/validateGenerateVideoQuery.ts (lines 10-12) lacks the .max(20) constraint on the seconds field, allowing API clients to request video durations beyond the documented 20-second limit. Additionally, the schema description states "default: 4" but the actual default value used in lib/video/generateVideo.ts line 24 is 8 seconds.

How to reproduce:

// This should fail validation but currently passes:
generateVideoQuerySchema.safeParse({
  prompt: "A cat",
  seconds: 25  // Above documented max of 20
});
// Result: { success: true, data: { prompt: "A cat", seconds: 25 } }

Result:

  • Request with seconds: 25 passes validation when it should fail
  • Schema description incorrectly documents default as 4 seconds when actual default is 8 seconds

Expected:

  • Requests with seconds > 20 should be rejected with validation error
  • Schema description should match actual default value (8) used in generateVideo.ts

Fix applied:

  • Added .max(20, "Maximum 20 seconds") to the schema
  • Added .min(4, "Minimum 4 seconds") with error message for clarity
  • Updated description from "default: 4" to "default: 8" to match implementation
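Without pulling in Zod, the intended bounds check reduces to the plain-TypeScript restatement below. This mirrors the suggested schema for illustration only; the repo's actual validation stays in `generateVideoQuerySchema`:

```typescript
// Mirrors the suggested schema: optional seconds, default 8, allowed range [4, 20].
function parseSeconds(input?: number): { success: boolean; value?: number; error?: string } {
  const value = input ?? 8; // default aligned with generateVideo.ts
  if (value < 4) return { success: false, error: "Minimum 4 seconds" };
  if (value > 20) return { success: false, error: "Maximum 20 seconds" };
  return { success: true, value };
}
```

With the `.max(20)` enforced, the reproduction above (`seconds: 25`) fails validation instead of passing through to the API.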

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
lib/telegram/client.ts (1)

1-15: Consider avoiding module‑level throw for missing TELEGRAM_BOT_TOKEN

Throwing at import time means any code that indirectly imports this module (even if it never sends a Telegram message) will crash if TELEGRAM_BOT_TOKEN isn’t set. For local dev/tests or environments where contact tools are disabled, that can be brittle.

You could instead:

  • Lazily construct the client when first needed, or
  • Keep the check but wrap client access in a small getter that throws if the token is missing, limiting failures to actual Telegram usage paths.

This preserves fail‑fast behavior for the feature without coupling process startup to this env var.
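A lazy-getter variant of that suggestion is sketched below. The `TelegramBot` class here is a hypothetical stand-in for the `node-telegram-bot-api` constructor, just to keep the sketch self-contained:

```typescript
// Stand-in for the node-telegram-bot-api constructor; illustrative only.
class TelegramBot {
  constructor(public token: string) {}
}

let client: TelegramBot | null = null;

// Failure is deferred to first use, so merely importing this module is always safe.
function getTelegramClient(): TelegramBot {
  if (client) return client;
  const token = process.env.TELEGRAM_BOT_TOKEN;
  if (!token) {
    throw new Error("TELEGRAM_BOT_TOKEN environment variable is not set");
  }
  client = new TelegramBot(token);
  return client;
}
```

Callers such as sendMessage would swap the module-level `client` import for `getTelegramClient()`, keeping the fail-fast error but scoping it to actual Telegram usage.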

lib/telegram/trimMessage.ts (1)

1-16: Derive trim length from a suffix constant instead of hardcoded 15

The current logic is correct, but the 15 is a magic number. Defining the suffix once and using its length would make future changes safer and clearer.

-// Telegram message length limit is 4096 characters
-const MAX_MESSAGE_LENGTH = 4000; // Using slightly less than 4096 to be safe
+// Telegram message length limit is 4096 characters
+const MAX_MESSAGE_LENGTH = 4000; // Using slightly less than 4096 to be safe
+const TRIM_SUFFIX = "...[trimmed]";
@@
-  if (text.length <= MAX_MESSAGE_LENGTH) {
+  if (text.length <= MAX_MESSAGE_LENGTH) {
     return text;
   }
 
-  return text.substring(0, MAX_MESSAGE_LENGTH - 15) + "...[trimmed]";
+  return text.substring(0, MAX_MESSAGE_LENGTH - TRIM_SUFFIX.length) + TRIM_SUFFIX;
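The suffix-derived version behaves like this self-contained restatement of the diff above:

```typescript
// Telegram's hard limit is 4096; keep headroom and derive the cut from the suffix.
const MAX_MESSAGE_LENGTH = 4000;
const TRIM_SUFFIX = "...[trimmed]";

function trimMessage(text: string): string {
  if (text.length <= MAX_MESSAGE_LENGTH) {
    return text;
  }
  // Cut so that text plus suffix lands exactly at the limit.
  return text.substring(0, MAX_MESSAGE_LENGTH - TRIM_SUFFIX.length) + TRIM_SUFFIX;
}
```

If the suffix ever changes, the cut point follows automatically, which is the whole point of replacing the hardcoded 15.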
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b7a4c0c and 3a7ca52.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (8)
  • lib/contact/contactTeam.ts (1 hunks)
  • lib/contact/validateContactTeamQuery.ts (1 hunks)
  • lib/mcp/tools/index.ts (2 hunks)
  • lib/mcp/tools/registerContactTeamTool.ts (1 hunks)
  • lib/telegram/client.ts (1 hunks)
  • lib/telegram/sendMessage.ts (1 hunks)
  • lib/telegram/trimMessage.ts (1 hunks)
  • package.json (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • lib/mcp/tools/index.ts
🧰 Additional context used
🧬 Code graph analysis (3)
lib/mcp/tools/registerContactTeamTool.ts (2)
lib/contact/validateContactTeamQuery.ts (2)
  • contactTeamQuerySchema (3-12)
  • ContactTeamQuery (14-14)
lib/contact/contactTeam.ts (1)
  • contactTeam (15-43)
lib/contact/contactTeam.ts (2)
lib/contact/validateContactTeamQuery.ts (1)
  • ContactTeamQuery (14-14)
lib/telegram/sendMessage.ts (1)
  • sendMessage (12-23)
lib/telegram/sendMessage.ts (1)
lib/telegram/trimMessage.ts (1)
  • trimMessage (10-16)
🔇 Additional comments (5)
package.json (1)

31-31: Telegram dependency addition looks appropriate

Adding node-telegram-bot-api aligns with the new Telegram client and messaging helpers; no issues from this diff alone.

lib/mcp/tools/registerContactTeamTool.ts (1)

1-35: Contact‑team tool registration looks consistent and well‑typed

Using contactTeamQuerySchema as inputSchema and delegating to contactTeam keeps validation and behavior nicely separated; wrapping the result in a single text content node is consistent with other tools.

lib/telegram/sendMessage.ts (1)

1-23: sendMessage helper is cohesive and safely validates configuration

Checking TELEGRAM_CHAT_ID at call time, trimming via trimMessage, and delegating to the shared client keeps this helper focused and robust; the flow looks good.

lib/contact/validateContactTeamQuery.ts (1)

1-14: ContactTeamQuery schema is clear and sufficient for current usage

The Zod schema captures the required message and optional context cleanly, and the inferred ContactTeamQuery type fits the downstream contactTeam implementation.

lib/contact/contactTeam.ts (1)

1-43: contactTeam implementation cleanly bridges MCP input to Telegram

Message formatting, fallbacks for missing metadata, and the success/failure envelope with logging are all sensible; behavior looks solid for the new tool.

@sweetmantech sweetmantech merged commit b1ebc78 into main Dec 8, 2025
3 checks passed