The full worker manifest stored in `BackgroundWorker.metadata` duplicates source files and per-task config that already live in dedicated columns and tables. On large deploys this inflated Prisma's client-side serialize step to several seconds, blocking the API event loop and adding tail latency to unrelated concurrent requests. Only the slice that is read back after storage (the schedule-bearing tasks consumed by `syncDeclarativeSchedules` at deploy promotion) is persisted now.
Large deploys (projects with many tasks or source files) blocked the webapp event loop for several seconds inside Prisma's client-side serializer on `BackgroundWorker.create`, inflating tail latency for every other in-flight request on the same Node process. The `metadata` JSON column was being written with the full deploy manifest (every task's config, every queue and prompt, and the full source of every file), all of which already lives on dedicated columns or in dedicated tables.

Fix: project the manifest down to `{ packageVersion, contentHash, tasks: [{ id, filePath, schedule }] }` on insert. The only post-write read site is `changeCurrentDeployment`, which feeds `tasks[].schedule` into `syncDeclarativeSchedules` at deploy promotion. The retained top-level keys and per-task `filePath` are kept solely so `BackgroundWorkerMetadata.safeParse` still succeeds on read.
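A minimal sketch of that projection in TypeScript, assuming a manifest shaped roughly as described above; the type and helper names are illustrative and are not the actual code in this PR:

```ts
// Illustrative types; the real BackgroundWorkerMetadata schema has more fields.
type TaskManifest = {
  id: string;
  filePath: string;
  schedule?: { cron?: string; timezone?: string };
  // queue, retry, machine config, etc. are dropped by the projection
};

type WorkerManifest = {
  packageVersion: string;
  contentHash: string;
  tasks: TaskManifest[];
  sourceFiles?: unknown; // full file contents; never read back, so not persisted
};

// Keep only what changeCurrentDeployment -> syncDeclarativeSchedules needs,
// plus the keys required for BackgroundWorkerMetadata.safeParse to pass on read.
function projectWorkerMetadata(manifest: WorkerManifest) {
  return {
    packageVersion: manifest.packageVersion,
    contentHash: manifest.contentHash,
    tasks: manifest.tasks.map((task) => ({
      id: task.id,
      filePath: task.filePath,
      schedule: task.schedule,
    })),
  };
}
```

At the `BackgroundWorker.create` call site the projected object would be written to `metadata` in place of the full manifest, so Prisma only serializes a few small fields per task.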
Test plan

- `changeCurrentDeployment` still re-syncs schedules at deploy promotion
- `BackgroundWorker.metadata` on a fresh deploy should be a small object, not the full manifest (see the sketch below)
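For the second check, a throwaway script along these lines could be run against a dev database after a fresh deploy; it assumes the webapp's Prisma client plus `projectId` and `createdAt` fields on the model, so treat it as a sketch rather than anything shipped with this change:

```ts
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

// Fetch the newest worker for a project and report how big its metadata is.
// A slim projection should be a handful of keys and well under a kilobyte per task.
async function checkLatestWorkerMetadata(projectId: string) {
  const worker = await prisma.backgroundWorker.findFirst({
    where: { projectId },
    orderBy: { createdAt: "desc" },
  });
  if (!worker) throw new Error("no BackgroundWorker rows for project");

  const metadata = (worker.metadata ?? {}) as Record<string, unknown>;
  const bytes = Buffer.byteLength(JSON.stringify(metadata));
  console.log(`metadata keys: ${Object.keys(metadata).join(", ")}`);
  console.log(`metadata size: ${bytes} bytes`);
}

checkLatestWorkerMetadata(process.env.PROJECT_ID ?? "").finally(() =>
  prisma.$disconnect()
);
```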