Walkthrough

This update introduces a comprehensive recap scheduling and management system across the backend and frontend. It adds a robust backend scheduler service integrated with an external scheduler SDK, new database tables and migrations for recap state tracking, and a suite of API endpoints for managing recap configurations. The frontend gains a new RecapManager component for creating, editing, and running recap configurations.
Sequence Diagram(s)

Recap Job Scheduling and Execution Flow

```mermaid
sequenceDiagram
    participant User
    participant Frontend
    participant Backend API
    participant SchedulerService
    participant SchedulerSDK
    participant FeedRepository
    participant ProcessorService
    User->>Frontend: Add/Update/Delete Recap Config
    Frontend->>Backend API: POST/PUT/DELETE /api/feeds/:id/recap
    Backend API->>FeedRepository: Update feed config
    Backend API->>SchedulerService: syncFeedSchedules(feedId)
    SchedulerService->>FeedRepository: getAllRecapStatesForFeed(feedId)
    SchedulerService->>SchedulerSDK: create/update/delete jobs as needed
    SchedulerService->>FeedRepository: upsert/delete recap state records
    SchedulerSDK->>SchedulerService: Triggers recap job (HTTP call)
    SchedulerService->>FeedRepository: getRecapState(feedId, recapId)
    SchedulerService->>FeedRepository: getApprovedSubmissionsSince(feedId, lastSuccess)
    SchedulerService->>ProcessorService: process submissions
    alt Success
        SchedulerService->>FeedRepository: updateRecapCompletion(feedId, recapId, now)
    else Failure
        SchedulerService->>FeedRepository: updateRecapError(feedId, recapId, error)
    end
```
Actionable comments posted: 11
♻️ Duplicate comments (1)
backend/src/services/db/repositories/feed.repository.ts (1)
231-250: ⚠️ Potential issue: Same write-connection issue as above

`updateRecapError` also lacks `isWrite = true`; apply the identical patch.
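The read/write routing behind this fix can be illustrated with a minimal standalone sketch. The pool names and the shape of `executeOperation` here are hypothetical, not the repository's actual implementation; the point is only how a forgotten `isWrite` flag sends a write to the wrong pool.

```typescript
// Hypothetical sketch: a helper that routes reads to a replica
// and writes to the primary. Pool names and shapes are illustrative only.
type Db = { name: string };

const readPool: Db = { name: "read-replica" };
const writePool: Db = { name: "primary" };

function executeOperation<T>(operation: (db: Db) => T, isWrite = false): T {
  // Forgetting `isWrite = true` silently routes an UPDATE to the replica.
  return operation(isWrite ? writePool : readPool);
}

console.log(executeOperation((db) => db.name));       // read-replica
console.log(executeOperation((db) => db.name, true)); // primary
```

On a read-only replica the misrouted UPDATE fails, which is why the review flags the missing flag rather than a wrong query.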
🧹 Nitpick comments (13)
frontend/src/components/FeedList.tsx (1)
10-10: Avoid leaking inconsistent endpoint comments.

The TODO comment indicates a plan to change `/api/feeds` to `/feed`. If this is tracked in an issue or forthcoming PR, reference the issue ID here. Otherwise, remove or replace it with a more actionable TODO (e.g., `// TODO (FIXME-123): switch to /feed endpoint once backend migration is live`).

backend/src/app.ts (1)
64-84: Scheduler initialization follows good practices with error handling.

The implementation properly:
- Uses environment variables with reasonable defaults
- Initializes the scheduler asynchronously to avoid blocking app startup
- Includes error handling to log failures without crashing the application
Consider adding a more descriptive error log that includes the actual error message:
```diff
- console.error("Failed to initialize scheduler service:", err);
+ console.error("Failed to initialize scheduler service:", err instanceof Error ? err.message : err);
```

frontend/src/components/RecapManager.tsx (4)
38-51: Consider refactoring mutation hooks setup.

The mutation hooks are correctly set up, but they're repeating the same pattern of index selection.

Consider refactoring to avoid repetition and potential bugs:

```diff
- const updateRecap = useUpdateRecap(
-   feedId,
-   editingRecapIndex !== null ? editingRecapIndex : 0,
- );
- const deleteRecap = useDeleteRecap(
-   feedId,
-   editingRecapIndex !== null ? editingRecapIndex : 0,
- );
- const runRecap = useRunRecap(
-   feedId,
-   editingRecapIndex !== null ? editingRecapIndex : 0,
- );
+ const activeIndex = editingRecapIndex !== null ? editingRecapIndex : 0;
+ const updateRecap = useUpdateRecap(feedId, activeIndex);
+ const deleteRecap = useDeleteRecap(feedId, activeIndex);
+ const runRecap = useRunRecap(feedId, activeIndex);
```
160-163: Enhance date formatting with timezone support.

The current date formatting doesn't account for timezones, which is important when working with scheduled tasks.

Improve the date formatting to include timezone information:

```diff
  // Format date for display
  const formatDate = (dateString?: string) => {
    if (!dateString) return "Never";
-   return new Date(dateString).toLocaleString();
+   return new Date(dateString).toLocaleString(undefined, {
+     dateStyle: 'medium',
+     timeStyle: 'medium',
+     timeZone: 'UTC' // Or use the user's timezone or the recap's timezone
+   });
  };
```
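The suggested options can be exercised outside React. This standalone sketch pins the locale to `en-US` and the zone to UTC so the output is deterministic; in the component, the locale would stay `undefined` to follow the user's settings.

```typescript
// Format an ISO timestamp in a fixed time zone instead of the runtime default.
const formatDate = (dateString?: string): string => {
  if (!dateString) return "Never";
  return new Date(dateString).toLocaleString("en-US", {
    dateStyle: "medium",
    timeStyle: "medium",
    timeZone: "UTC", // could instead be the user's or the recap's zone
  });
};

console.log(formatDate("2024-05-01T12:00:00Z")); // e.g. "May 1, 2024, 12:00:00 PM"
console.log(formatDate(undefined));              // "Never"
```

Without an explicit `timeZone`, the same recap timestamp renders differently for every viewer, which is confusing for cron-scheduled jobs.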
179-180: Add fallback when recapsData is undefined.

The code could handle the case where recapsData is undefined more explicitly.

Improve the fallback for when recapsData is undefined:

```diff
- const recaps = recapsData?.recaps || [];
+ const recaps = recapsData?.recaps ?? [];
```
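The switch from `||` to `??` matters whenever the left side can be a legitimate falsy value; a quick demonstration of the difference:

```typescript
// `||` falls back on ANY falsy value; `??` only on null/undefined.
function pick(value: number | null | undefined): { or: number; nullish: number } {
  return { or: value || 42, nullish: value ?? 42 };
}

console.log(pick(0));    // { or: 42, nullish: 0 }  <- `||` discards a valid 0
console.log(pick(null)); // { or: 42, nullish: 42 } <- both fall back

// For the recaps list both behave the same today, but `??` states the
// intent precisely: default only when the data is actually missing.
const recapsData: { recaps?: string[] } | undefined = undefined;
const recaps = recapsData?.recaps ?? [];
console.log(recaps.length); // 0
```

An array is always truthy, so the two operators agree here; using `??` simply makes the "default only when missing" intent explicit.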
317-412: Consider breaking down the recap list rendering into a sub-component.

The recap list rendering code is quite lengthy and could be extracted into a separate component to improve readability.

Extract the recap list into a separate component to improve maintainability:

```tsx
type RecapItemProps = {
  recap: RecapWithState;
  index: number;
  onEdit: (recap: RecapWithState, index: number) => void;
  onDelete: (index: number) => void;
  onRun: (index: number) => void;
  isRunning: boolean;
  isDeleting: boolean;
  editingIndex: number | null;
  formatDate: (dateString?: string) => string;
};

const RecapItem = ({
  recap, index, onEdit, onDelete, onRun,
  isRunning, isDeleting, editingIndex, formatDate,
}: RecapItemProps) => (
  <div className="card">
    {/* Item rendering logic here */}
  </div>
);

// Then in the main component:
{recaps.map((recap, index) => (
  <RecapItem
    key={index}
    recap={recap}
    index={index}
    onEdit={handleEditRecap}
    onDelete={handleDeleteRecap}
    onRun={handleRunRecap}
    isRunning={runRecap.isPending}
    isDeleting={deleteRecap.isPending}
    editingIndex={editingRecapIndex}
    formatDate={formatDate}
  />
))}
```

backend/test/unit/services/scheduler/scheduler.service.test.ts (1)
2-2: `beforeEach` is imported but never used

The explicit import increases the cognitive load and may trigger linter warnings.

Please remove the unused symbol to keep the test file clean.

```diff
- import { beforeEach, describe, expect, mock, spyOn, test } from "bun:test";
+ import { describe, expect, mock, spyOn, test } from "bun:test";
```

backend/src/services/db/schema.ts (3)
15-17: Unused imports increase bundle size and lint noise

`RecapState` and `boolean` are imported but never referenced in this file.

Remove them to keep the schema module lean.

```diff
- import { FeedConfig } from "../../types/config";
- import { RecapState } from "../../types/recap";
+ import { FeedConfig } from "../../types/config";
```
37-42: Duplicated source of truth between `config.name` and top-level columns

Storing `name` and `description` both inside the JSONB `config` and as separate columns invites data drift:

- Updates must touch two locations.
- Reads can observe inconsistent values if only one side is updated.

Unless you have a compelling querying need, consider dropping the redundant columns or creating generated columns that derive from `config` to enforce consistency.
65-71: Composite unique index name deviates from naming convention

`feedRecapIdIdx` is defined as a unique index but carries the `_idx` suffix, usually reserved for non-unique indices.

Renaming it to `feed_recap_id_uidx` (or similar) communicates its uniqueness at a glance.

backend/src/services/db/repositories/feed.repository.ts (1)
260-288: Raw SQL uses `submitted_at` string comparison

Comparing timestamps as text (`>= ${effectiveSince.toISOString()}`) can bypass indexes and mis-order dates.

Prefer parameterised `timestamp` values:

```typescript
.where(
  and(
    eq(submissionFeeds.feedId, feedId),
    eq(submissionFeeds.status, "approved"),
    gte(submissions.submittedAt, effectiveSince),
  ),
)
```

(or pass `effectiveSince` directly inside a `sql` template, which Drizzle will cast).

frontend/src/lib/recap.ts (2)
25-39: Missing `staleTime`/`retry` leads to UI flicker & unbounded retries

For frequently refreshed dashboards the default React Query behaviour refetches on every window focus and retries thrice on error. This can hammer `/api/feed/:id/recap`.

Add sensible defaults:

```diff
  return useQuery<GetRecapsResponse>({
    queryKey: ["recaps", feedId],
    queryFn: async () => { ... },
+   staleTime: 30_000,
+   retry: 1,
  });
```

Apply similar options to `useRecap`.
67-90: Error messages are swallowed when `response.json()` fails

When the backend returns non-JSON (e.g., 502 HTML), `await response.json()` throws, masking the real error.

Pattern:

```typescript
const errorText = await response.text();
let error;
try {
  error = JSON.parse(errorText).error;
} catch {
  error = errorText;
}
throw new Error(error || "…");
```

Extract to a shared helper to DRY the mutations.
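That pattern can be pulled into a pure helper and unit-tested without a network. The helper name `parseErrorBody` is my own suggestion, not code from this PR:

```typescript
// Extract a usable error message from a response body that may or may not be JSON.
function parseErrorBody(bodyText: string, fallback = "Request failed"): string {
  try {
    const parsed = JSON.parse(bodyText);
    return parsed?.error || fallback;
  } catch {
    // Non-JSON body (e.g. a 502 HTML page): surface the raw text instead.
    return bodyText || fallback;
  }
}

console.log(parseErrorBody('{"error":"Feed not found"}')); // Feed not found
console.log(parseErrorBody("<html>502 Bad Gateway</html>")); // <html>502 Bad Gateway</html>
console.log(parseErrorBody("")); // Request failed
```

Each mutation would then do `throw new Error(parseErrorBody(await response.text()))`, so a proxy error page no longer masks the real failure.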
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
- `backend/pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
- `pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (29)
- `.clinerules` (2 hunks)
- `backend/package.json` (1 hunks)
- `backend/src/app.ts` (4 hunks)
- `backend/src/routes/api/internal.ts` (1 hunks)
- `backend/src/routes/api/recap.ts` (1 hunks)
- `backend/src/services/db/migrations/0002_brown_satana.sql` (1 hunks)
- `backend/src/services/db/migrations/meta/0002_snapshot.json` (1 hunks)
- `backend/src/services/db/migrations/meta/_journal.json` (1 hunks)
- `backend/src/services/db/queries.ts` (2 hunks)
- `backend/src/services/db/repositories/feed.repository.ts` (2 hunks)
- `backend/src/services/db/schema.ts` (3 hunks)
- `backend/src/services/db/transaction.ts` (2 hunks)
- `backend/src/services/scheduler/scheduler.service.ts` (1 hunks)
- `backend/src/services/submissions/submission.service.ts` (1 hunks)
- `backend/src/services/twitter/queries.ts` (1 hunks)
- `backend/src/services/twitter/schema.ts` (0 hunks)
- `backend/src/types/app.ts` (2 hunks)
- `backend/src/types/config.ts` (2 hunks)
- `backend/src/types/recap.ts` (1 hunks)
- `backend/test/curate.config.test.json` (1 hunks)
- `backend/test/setup/seed-test.ts` (3 hunks)
- `backend/test/setup/seed.sql` (2 hunks)
- `backend/test/unit/services/scheduler/scheduler.service.test.ts` (1 hunks)
- `frontend/src/components/FeedList.tsx` (1 hunks)
- `frontend/src/components/RecapManager.tsx` (1 hunks)
- `frontend/src/components/Settings.tsx` (2 hunks)
- `frontend/src/lib/recap.ts` (1 hunks)
- `frontend/src/types/recap.ts` (1 hunks)
- `package.json` (1 hunks)
💤 Files with no reviewable changes (1)
- backend/src/services/twitter/schema.ts
🧰 Additional context used
🧬 Code Graph Analysis (10)
backend/src/types/app.ts (2)
- backend/src/services/scheduler/scheduler.service.ts (1)
  - `SchedulerService` (27-423)
- backend/src/services/db/repositories/feed.repository.ts (1)
  - `FeedRepository` (32-436)

frontend/src/components/Settings.tsx (1)
- frontend/src/components/RecapManager.tsx (1)
  - `RecapManager` (16-415)

backend/src/types/config.ts (1)
- backend/src/types/recap.ts (1)
  - `RecapConfig` (6-34)

backend/src/types/recap.ts (2)
- frontend/src/types/recap.ts (1)
  - `RecapConfig` (6-31)
- backend/src/types/config.ts (2)
  - `TransformConfig` (22-25)
  - `DistributorConfig` (27-31)

frontend/src/types/recap.ts (2)
- backend/src/types/recap.ts (1)
  - `RecapConfig` (6-34)
- backend/src/types/config.ts (2)
  - `TransformConfig` (22-25)
  - `DistributorConfig` (27-31)

backend/src/services/db/transaction.ts (2)
- backend/src/services/db/index.ts (2)
  - `executeTransaction` (5-5)
  - `dbConnection` (1-1)
- backend/src/services/db/connection.ts (1)
  - `dbConnection` (295-295)

backend/src/routes/api/recap.ts (3)
- backend/src/types/app.ts (1)
  - `AppContext` (13-21)
- backend/src/services/db/repositories/feed.repository.ts (1)
  - `feedRepository` (439-439)
- backend/src/utils/logger.ts (1)
  - `logger` (71-80)

frontend/src/lib/recap.ts (2)
- backend/src/types/recap.ts (1)
  - `RecapConfig` (6-34)
- frontend/src/types/recap.ts (1)
  - `RecapConfig` (6-31)

backend/src/services/db/schema.ts (2)
- backend/scripts/migrate-sqlite-to-postgres.js (3)
  - `feeds` (53-53)
  - `twitterCookies` (217-219)
  - `twitterCache` (230-230)
- backend/src/types/config.ts (1)
  - `FeedConfig` (43-52)

backend/src/services/db/repositories/feed.repository.ts (6)
- backend/src/types/config.ts (1)
  - `FeedConfig` (43-52)
- backend/src/services/db/transaction.ts (2)
  - `executeOperation` (14-22)
  - `executeTransaction` (29-54)
- backend/src/services/db/index.ts (2)
  - `executeOperation` (4-4)
  - `executeTransaction` (5-5)
- backend/src/services/db/schema.ts (2)
  - `feeds` (35-43)
  - `feedRecapsState` (47-71)
- backend/src/types/recap.ts (1)
  - `RecapState` (39-63)
- backend/src/utils/logger.ts (1)
  - `logger` (71-80)
🔇 Additional comments (47)
backend/src/services/twitter/queries.ts (1)
3-3: Import paths updated correctly.

Updating the import from the local `schema.ts` to the centralized `../db/schema` aligns with the refactored schema location. No further changes needed here.

backend/package.json (1)
46-46: Verify scheduler SDK peer compatibility.

Adding `@crosspost/scheduler-sdk@^0.1.1` is necessary for the new SchedulerService. Ensure this version is compatible with your Node.js version and peer dependencies in the SDK.

package.json (1)
10-10: Keep packageManager up-to-date.

The upgrade from `pnpm@10.8.0+sha512...` to `pnpm@10.10.0` simplifies the specification and aligns with recent pnpm releases. No issues detected.

backend/src/services/db/migrations/meta/_journal.json (1)
18-25: Journal entry appended correctly.

The new entry with `idx: 2`, tag `"0002_brown_satana"`, and timestamp aligns with the SQL migration file. Entries remain ordered and `breakpoints` are enabled.

backend/src/services/submissions/submission.service.ts (1)
83-89: Simplified feed initialization is cleaner and aligns with broader changes.

The `initializeFeeds` method has been simplified to directly pass the complete feed configuration objects to the repository layer, which aligns with the expanded feed configuration management that now supports multiple recap configurations per feed.

backend/src/types/app.ts (2)
7-8: Added necessary imports for the new scheduler and feed repository services.

These imports support the new recap scheduling functionality introduced in this PR.
19-20: Extended AppContext with scheduler and feed repository services.

Adding these services to the app context makes them available throughout the application, which is necessary for the new recap scheduling functionality. This aligns with the PR objective of integrating the scheduler service for recap capabilities.
backend/test/setup/seed.sql (2)
13-13: Added truncate statement for the new feed_recaps_state table.

Ensures the test environment starts with a clean state for the new recap functionality.
56-60: Added test data for the new feed_recaps_state table.

The seed data includes both a successful recap completion and a failed recap with an error message, providing good test coverage for different recap states.
backend/src/types/config.ts (2)
39-39: Imported RecapConfig from dedicated module.

Good modular design: moving the RecapConfig to a dedicated file improves code organization.
50-50: ✅ Verification successful

Updated FeedConfig to support multiple recap configurations per feed.

Changing from a single optional RecapConfig to an array of RecapConfigs enables feeds to have multiple recap schedules with different configurations, which is a key enhancement in this PR.

🏁 Script executed:

```shell
#!/bin/bash
# Verify that the code has been updated to handle arrays of recap configurations

# Check for array handling in scheduler service
rg -A 3 "feeds.*config.*recap" --type ts
```

Length of output: 42

I couldn't find any references to array handling of `recap` in the scheduler or service code. Let's broaden our search:

🏁 Script executed:

```shell
#!/bin/bash
# Locate and inspect the config definitions
fd config.ts -e ts | xargs sed -n '1,200p'

# Search for RecapConfig definition and usage
rg -n "RecapConfig" --type ts

# Search for FeedConfig to see if recap is now an array
rg -n "FeedConfig" --type ts

# Search all TypeScript for any 'recap' handling
rg -n "recap" --type ts
```

Length of output: 47554

Approve: Multiple Recap Configurations Handled

`FeedConfig.outputs.recap` is now defined as an array of `RecapConfig`, and the scheduler/service code correctly iterates over every recap configuration. Unit tests cover multiple-recaps scenarios, confirming the implementation works as intended.
frontend/src/components/Settings.tsx (2)
7-7: Clean import of the RecapManager component.

This import correctly adds the new RecapManager component that will handle all recap configuration management.
147-150: Good implementation of the RecapManager component.

The RecapManager component is properly integrated into the Settings UI for each feed, with a clear section divider above it. This approach correctly encapsulates all recap-related functionality in a dedicated component, improving code organization and maintainability.
.clinerules (3)
81-88: Well-documented Recap Flow process.

This section clearly outlines the step-by-step flow for recap job execution, providing a comprehensive overview of how the scheduler interacts with the system. The documentation helps developers understand the critical path of recap job execution.

100-101: Good documentation of database patterns.

These additions properly document the repository pattern and state tracking for external scheduled jobs, aligning with the implementation of the recap scheduling system.

105-112: Comprehensive documentation of the Recap Scheduling System focus area.

This section clearly outlines the implementation goals and approach for the recap scheduling system, covering all important aspects like reliability, job state tracking, schedule formats, UI management, and resilience against process restarts.
frontend/src/types/recap.ts (1)
1-31: Well-structured RecapConfig interface with comprehensive documentation.

The RecapConfig interface is well-designed with clear JSDoc comments that explain each field's purpose. The interface properly imports and reuses existing types (TransformConfig and DistributorConfig) and includes support for both cron expressions and interval-based schedules.

I notice that unlike the backend version, this interface doesn't include an `id` field. This appears intentional since the frontend likely uses array indices for new configurations before they're assigned server-side IDs.

backend/src/types/recap.ts (2)
1-34: Well-structured RecapConfig interface with comprehensive documentation.

The RecapConfig interface is well-designed with clear JSDoc comments that explain each field's purpose. The interface properly imports and reuses existing types and includes additional fields needed for backend operations.
36-63: Well-designed RecapState interface for tracking job execution state.

The RecapState interface thoroughly captures all necessary information for tracking recap job execution, including:
- Internal and external identifiers
- Timestamps for creation, updates, and successful completions
- Error tracking for failed executions
- Proper relationships to feeds and recap configurations
This design supports the stateful job tracking described in the documentation, ensuring robustness against process restarts.
backend/src/services/db/queries.ts (4)
11-11: Good addition of the FeedConfig import

The import of FeedConfig from "../../types/config" properly supports the enhanced feed configurations needed for the recap system.

22-23: Function signature properly updated to support complex feed configurations

Changing the parameter type to `FeedConfig[]` clearly conveys the expected input structure and aligns with the broader system changes.

25-33: Well-structured object mapping from FeedConfig

The implementation correctly extracts all necessary fields from the FeedConfig object while preserving the full configuration in the config column.

39-43: Complete conflict resolution logic

The conflict resolution has been properly updated to include all necessary fields that should be updated when a feed record already exists.
backend/test/setup/seed-test.ts (4)
32-32: Good test data cleanup

The addition of the truncation command for the new `feed_recaps_state` table ensures clean test data for each test run.

43-129: Comprehensive test feed configurations

The seed data now includes detailed config objects with moderation settings and multiple recap configurations, which is excellent for testing the new recap functionality. Each feed has appropriate settings for different scenarios.

259-273: Improved date handling

Using `toISOString().split("T")[0]` to get just the date portion for `lastResetDate` is a good approach for date-only fields.

279-300: Great test data for the new recap state table

The seed entries for `feed_recaps_state` cover both successful and failed recap scenarios, providing good test coverage for error handling in the recap system.

backend/src/services/db/transaction.ts (3)
2-2: Appropriate imports for ORM integration

Adding the Drizzle ORM imports supports the enhanced transaction handling throughout the system.

27-32: Improved function signature for ORM support

The function signature changes appropriately reflect the move to using Drizzle ORM within transactions, and the default value for `isWrite` makes sense for most transaction use cases.

39-44: Good ORM integration within transactions

Creating a Drizzle instance for each transaction and passing it to the operations function enables consistent ORM usage throughout the transaction, improving type safety and query composition.
backend/src/routes/api/internal.ts (2)
7-11: Well-defined validation schema

The Zod schema properly validates the required fields for recap job requests, ensuring input validation before processing.

36-55: Robust error handling in endpoint implementation

The endpoint implementation properly validates input, logs actions, and has comprehensive error handling that returns appropriate status codes and error messages.
backend/src/app.ts (4)
6-6: Integration of internal API router is well-structured.

The addition of the internal router import supports the new scheduler functionality.

15-17: Good inclusion of scheduler dependencies.

The imports for feedRepository, SchedulerService, and the external SchedulerClient are appropriately added.

91-92: Appropriate context extension.

Adding the schedulerService and feedRepository to the application context makes these services properly accessible throughout the app.

132-135: Secure internal API route integration.

The internal router is correctly mounted at `/api/internal` with the scheduler service provided as a dependency, establishing a clean architecture for internal endpoints.

frontend/src/components/RecapManager.tsx (3)
1-15: Clean component setup with appropriate imports and interface definition.

The component correctly imports necessary hooks and types, with a well-defined props interface.

16-37: Well-structured state management with appropriate defaults.

The component initializes state variables properly and provides a sensible default configuration for new recaps.

102-120: Form submission correctly handles add/update operations.

The form submission logic properly differentiates between adding and updating recaps, with appropriate success messages and error handling.
backend/test/curate.config.test.json (1)
194-291: Well-structured recap configuration array with differentiated schedules.

The configuration now properly supports multiple recap configurations per feed, with:
- A minute-by-minute recap for frequent updates (edge-recap-1)
- A daily AI-powered summary at midnight (edge-recap-ai)
Each has appropriate transform and distribute configurations. The structure aligns with the schema changes and supports the new scheduler functionality.
backend/src/services/db/migrations/0002_brown_satana.sql (3)
1-11: Well-structured feed_recaps_state table creation.

The table design appropriately captures:
- Identification (id, feed_id, recap_id)
- External job tracking (external_job_id)
- State tracking (last_successful_completion, last_run_error)
- Audit timestamps (created_at, updated_at)
And includes a unique constraint on external_job_id.
13-24: Safe migration pattern for adding the config column.

The migration correctly:
- Adds the column as nullable first
- Populates it with data from existing columns
- Sets it as NOT NULL only after data is populated
This ensures the migration can run without errors on existing data.
26-27: Appropriate constraints and indexes.

The foreign key constraint with cascade delete ensures referential integrity, and the unique index on (feed_id, recap_id) properly enforces the business rule of one state record per recap per feed.
backend/test/unit/services/scheduler/scheduler.service.test.ts (1)
120-137: Potentially unsupported `spyOn` helper in Bun's test runner

`bun:test` currently exposes `spy` but not `spyOn` in many releases. If your Bun version matches that behaviour, this call will throw at runtime and all tests will fail before execution begins.

Confirm the availability of `spyOn`; otherwise, switch to:

```typescript
const syncSpy = spy(schedulerService, "syncFeedSchedules");
```

or use the built-in `mock` facility.

backend/src/routes/api/recap.ts (1)
274-295: Surface scheduler failures to the caller

`await schedulerService.runRecapJob(...)` might reject (e.g., external scheduler 500). The current `try` block returns `{ success: true }` before confirming the external call succeeded.

Return the scheduler's response or propagate its error:

```diff
- await schedulerService.runRecapJob(feedId, recapId);
- return c.json({ success: true });
+ const ok = await schedulerService.runRecapJob(feedId, recapId);
+ if (!ok) {
+   return c.json({ error: "Scheduler failed to start job" }, 502);
+ }
+ return c.json({ success: true });
```

backend/src/services/db/repositories/feed.repository.ts (1)
206-226: ⚠️ Potential issue: Writes are going to the read replica – switch to write connection

`updateRecapCompletion` calls `executeOperation` without `isWrite = true`. The update therefore hits the read pool, which will silently fail on read-only replicas.

```diff
- await executeOperation(async (db) => {
+ await executeOperation(
+   async (db) => {
      await db
        .update(feedRecapsState)
      ...
- }, true); // isWrite = true
+   },
+   true, // isWrite
+ );
```

Apply the same fix to `updateRecapError` (lines 231-250).

Likely an incorrect or invalid review comment.
backend/src/services/db/migrations/meta/0002_snapshot.json (1)
98-198: 👍 Schema covers recap state with necessary constraints

Unique `(feed_id, recap_id)` and `external_job_id` indexes look good; they will prevent duplicate states and help scheduler look-ups.
```typescript
describe("when creating new jobs", () => {
  test("should create a new job when recap is enabled and no existing state", async () => {
    // Arrange
    const {
      schedulerService,
      mockFeedRepository,
      mockCreateJobIfNotExists,
    } = createTestContext();

    const feedConfig = createSampleFeedConfig();
    mockFeedRepository.getFeedConfig.mockImplementation(
      async () => feedConfig,
    );
    mockFeedRepository.getRecapState.mockImplementation(async () => null);
    mockFeedRepository.getAllRecapStatesForFeed.mockImplementation(
      async () => [],
    );

    // Act
    await schedulerService.syncFeedSchedules("test-feed");

    // Assert
    expect(mockCreateJobIfNotExists).toHaveBeenCalledTimes(1);
    expect(mockCreateJobIfNotExists).toHaveBeenCalledWith(
      expect.objectContaining({
        name: "curate-recap-test-feed-daily-recap-1",
        type: JobType.HTTP,
        target: "http://localhost:3000/api/internal/run-recap",
        payload: { feedId: "test-feed", recapId: "daily-recap-1" },
        schedule_type: ScheduleType.CRON,
        cron_expression: "0 0 * * *",
      }),
    );
    expect(mockFeedRepository.upsertRecapState).toHaveBeenCalledTimes(1);
    expect(mockFeedRepository.upsertRecapState).toHaveBeenCalledWith(
      expect.objectContaining({
        feedId: "test-feed",
        recapId: "daily-recap-1",
        externalJobId: "job-123",
      }),
    );
  });
```
Assertion targets a method that is never invoked by the implementation
The production code calls `schedulerClient.createJob`, but the test expects `createJobIfNotExists` to be used:

```typescript
expect(mockCreateJobIfNotExists).toHaveBeenCalledTimes(1);
```

Since `SchedulerService.syncRecapJob()` ultimately delegates to `createNewJob()` → `schedulerClient.createJob`, this expectation will always fail.
Either:

- Change the test to assert on `mockCreateJob`, or
- Update the service to use `createJobIfNotExists` if that is the intended contract.
Example fix for the test:

```diff
- expect(mockCreateJobIfNotExists).toHaveBeenCalledTimes(1);
- expect(mockCreateJobIfNotExists).toHaveBeenCalledWith(
+ expect(mockCreateJob).toHaveBeenCalledTimes(1);
+ expect(mockCreateJob).toHaveBeenCalledWith(
    expect.objectContaining({
      name: "curate-recap-test-feed-daily-recap-1",
      ...
    }),
  );
```

```typescript
const jobName = `curate-recap-${feedId}-${recapConfig.id}`;

// Payload for the job
const payload = { feedId, recapId: recapConfig.id };

// Target URL for the job
const target = `${this.backendUrl}/api/internal/run-recap`;
```
🛠️ Refactor suggestion
`jobName` may contain characters invalid for the scheduler

`feedId` or `recapConfig.id` can legally contain spaces or other special characters that the external scheduler might reject in a job name.

Consider slugifying / URL-encoding these components or replacing disallowed characters to guarantee cross-platform safety.
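One way to sanitize the components is a small slug helper. The function names and the allowed character set below are illustrative suggestions, not the service's actual code:

```typescript
// Reduce an arbitrary identifier to lowercase letters, digits, and hyphens
// so the composed job name is safe for external schedulers.
function slugify(part: string): string {
  return part
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse runs of disallowed chars into one hyphen
    .replace(/^-+|-+$/g, "");    // trim leading/trailing hyphens
}

function buildJobName(feedId: string, recapId: string): string {
  return `curate-recap-${slugify(feedId)}-${slugify(recapId)}`;
}

console.log(buildJobName("My Feed!", "Daily Recap #1")); // curate-recap-my-feed-daily-recap-1
```

Note that slugifying is lossy (two distinct ids can collapse to the same slug), so the slug should be used only for the display name while the raw ids stay in the payload.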
```typescript
} else if (existingState?.externalJobId) {
  // Recap is disabled but job exists, delete it
  try {
    logger.info(
      `Deleting job for disabled recap: ${feedId}/${recapConfig.id}`,
    );

    await this.schedulerClient.deleteJob(existingState.externalJobId);
    await this.feedRepository.deleteRecapState(feedId, recapConfig.id);

    logger.info(`Successfully deleted job: ${existingState.externalJobId}`);
  } catch (error) {
    if (error instanceof JobNotFoundError) {
      logger.warn(
        `Job not found, already deleted: ${existingState.externalJobId}`,
      );
      await this.feedRepository.deleteRecapState(feedId, recapConfig.id);
    } else {
      logger.error(
        `Failed to delete job: ${existingState.externalJobId}`,
        error,
      );
      throw error;
    }
  }
}
```
🛠️ Refactor suggestion
Disabled recap without `externalJobId` leaves stale state records

The branch only deletes state when `existingState?.externalJobId` is truthy. If a state row exists but `external_job_id` is NULL (e.g., job manually purged from the scheduler), the record survives and will be re-processed on every sync.

Add an additional clean-up path:

```diff
- } else if (existingState?.externalJobId) {
+ } else if (existingState) {
```

and guard internal deletion logic accordingly.
Committable suggestion skipped: line range outside the PR's diff.
```typescript
// Initialize recap array if it doesn't exist
if (!feedConfig.outputs.recap) {
  feedConfig.outputs.recap = [];
}

// Add the new recap config
feedConfig.outputs.recap.push(recapConfig);

// Update feed config
await feedRepository.updateFeedConfig(feedId, feedConfig);
```
🛠️ Refactor suggestion
Race condition & duplicate-ID risk when mutating `feedConfig.outputs.recap`

- Multiple parallel POST/PUT/DELETE requests mutate a stale in-memory `feedConfig` copy and can overwrite each other. Wrap the read-modify-write in a DB transaction or use an `UPDATE … JSONB_SET` statement to make the change atomic.
- The `id` field is client-supplied. Two recaps with the same `id` (or empty string) can be inserted, violating the unique index in `feed_recaps_state` later. Generate a UUID server-side when creating new configs and validate uniqueness when updating.

Also applies to: 203-209
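The second point can be sketched as follows, using Node's `crypto.randomUUID`; the helper names and simplified config type are hypothetical, not code from this PR:

```typescript
import { randomUUID } from "node:crypto";

type RecapConfigLike = { id?: string; name: string };

// Assign a server-generated id, ignoring any client-supplied value.
function withServerId(config: RecapConfigLike): RecapConfigLike & { id: string } {
  return { ...config, id: randomUUID() };
}

// Reject updates that would leave two recaps sharing an id (or an empty one).
function assertUniqueIds(recaps: { id: string }[]): void {
  const seen = new Set<string>();
  for (const recap of recaps) {
    if (!recap.id || seen.has(recap.id)) {
      throw new Error(`Duplicate or empty recap id: "${recap.id}"`);
    }
    seen.add(recap.id);
  }
}

const daily = withServerId({ name: "daily" });
const weekly = withServerId({ name: "weekly" });
assertUniqueIds([daily, weekly]); // passes: both ids are fresh UUIDs
```

Validating at write time turns what would later surface as a confusing unique-index violation in `feed_recaps_state` into an immediate 4xx with a clear message.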
```typescript
async upsertRecapState(stateData: {
  feedId: string;
  recapId: string;
  externalJobId: string;
  lastSuccessfulCompletion: Date | null;
  lastRunError: string | null;
}): Promise<RecapState> {
  return executeTransaction(async (db) => {
    // Check if state exists
    const existing = await db
      .select()
      .from(feedRecapsState)
      .where(
        and(
          eq(feedRecapsState.feedId, stateData.feedId),
          eq(feedRecapsState.recapId, stateData.recapId),
        ),
      )
      .limit(1);

    const now = new Date();

    if (existing.length > 0) {
      // Update existing state
      const updated = await db
        .update(feedRecapsState)
        .set({
          externalJobId: stateData.externalJobId,
          lastSuccessfulCompletion: stateData.lastSuccessfulCompletion,
          lastRunError: stateData.lastRunError,
          updatedAt: now,
        })
        .where(eq(feedRecapsState.id, existing[0].id))
        .returning();

      return updated[0];
    } else {
      // Create new state
      const inserted = await db
        .insert(feedRecapsState)
        .values({
          feedId: stateData.feedId,
          recapId: stateData.recapId,
          externalJobId: stateData.externalJobId,
          lastSuccessfulCompletion: stateData.lastSuccessfulCompletion,
          lastRunError: stateData.lastRunError,
          createdAt: now,
          updatedAt: now,
        })
        .returning();

      return inserted[0];
    }
  }, true); // isWrite = true
}
```
🛠️ Refactor suggestion
`externalJobId` should be nullable

`upsertRecapState` forces a non-null `externalJobId`, yet the column is nullable (unique but not NOT NULL). Failed creations without a scheduler job will crash.

```diff
- externalJobId: string;
+ externalJobId: string | null;
```

Then guard in the insert / update calls.
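Widening the type then forces a guard at every call site that talks to the scheduler. A minimal sketch of that guard, with types and helper names simplified from (and hypothetical relative to) the repository's:

```typescript
type RecapStateInput = {
  feedId: string;
  recapId: string;
  externalJobId: string | null; // null when no scheduler job was ever created
};

// Only talk to the external scheduler when a job actually exists;
// otherwise clean up just the local state row.
function describeCleanup(state: RecapStateInput): string {
  if (state.externalJobId === null) {
    return `delete state only for ${state.feedId}/${state.recapId}`;
  }
  return `delete job ${state.externalJobId}, then state`;
}

console.log(describeCleanup({ feedId: "f", recapId: "r", externalJobId: null }));
console.log(describeCleanup({ feedId: "f", recapId: "r", externalJobId: "job-9" }));
```

The compiler then flags any path that passes `externalJobId` to `deleteJob` without first narrowing it to `string`.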
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Uses scheduler service
Acceptance Criteria
Summary by CodeRabbit
New Features
Improvements
Bug Fixes
Tests
Chores