🔍 Duplicate Code Detected: sendPending batch swap/checkpoint loop in Routerlicious lambdas
Analysis of commit 19a716b
Assignee: @copilot
Summary
Multiple Routerlicious lambdas implement near-identical sendPending() logic: early return when work is already in flight or nothing is pending, swap the current/pending buffers, process all batches via Promise.all(...), checkpoint the batch offset, then recursively call sendPending(); errors trigger context.error(..., { restart: true }).
This is a classic copy/paste maintenance hotspot: small behavioral fixes (checkpoint conditions, retry/backoff, shutdown semantics, metric timing) must be duplicated across lambdas.
Duplication Details
Pattern: Pending-batch drain loop with buffer swap + Promise.all + checkpoint + recurse
Severity: Medium
Occurrences: 3 similar implementations
Locations:
- server/routerlicious/packages/lambdas/src/moira/lambda.ts (lines 73–102)
- server/routerlicious/packages/lambdas/src/copier/lambda.ts (lines 71–100)
- server/routerlicious/packages/lambdas/src/scriptorium/lambda.ts (lines 217–330): extended with telemetry, batching caps, and circuit-breaker pausing, but retains the same core structure
Code Sample (common structure, simplified):
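A simplified sketch of the shared skeleton follows; the IContext shape and the member names (current, pending, pendingOffset, processBatch, handler) are illustrative rather than copied from any single lambda:

```typescript
// Simplified sketch of the duplicated skeleton; names are illustrative.
interface IContext {
    checkpoint(offset: number): void;
    error(error: unknown, options: { restart: boolean }): void;
}

class PendingBatchLambda<K, V> {
    private current = new Map<K, V[]>();
    private pending = new Map<K, V[]>();
    private pendingOffset = 0;

    constructor(
        private readonly context: IContext,
        private readonly processBatch: (key: K, values: V[]) => Promise<void>,
    ) {}

    public handler(key: K, value: V, offset: number): void {
        // Queue the message and remember the offset to checkpoint once it is sent.
        if (!this.pending.has(key)) {
            this.pending.set(key, []);
        }
        this.pending.get(key)!.push(value);
        this.pendingOffset = offset;
        this.sendPending();
    }

    private sendPending(): void {
        // Early return when a send is already in flight or nothing is queued.
        if (this.current.size > 0 || this.pending.size === 0) {
            return;
        }

        // Swap buffers so new messages accumulate while this batch is sent.
        [this.current, this.pending] = [this.pending, this.current];
        const batchOffset = this.pendingOffset;

        // Process every batch, checkpoint the offset, then recurse to drain
        // anything that arrived while the send was in flight.
        const allProcessed = Array.from(this.current.entries()).map(
            async ([key, values]) => this.processBatch(key, values),
        );
        Promise.all(allProcessed).then(
            () => {
                this.current.clear();
                this.context.checkpoint(batchOffset);
                this.sendPending();
            },
            (error) => {
                this.context.error(error, { restart: true });
            },
        );
    }
}
```

The scriptorium variant layers telemetry, batching caps, and circuit-breaker pausing on top of this skeleton, but the core control flow is the same.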
Impact Analysis
- Maintainability: Changes to batching strategy, checkpoint timing, shutdown behavior, and error/retry policy likely need to be applied consistently across lambdas, but are currently spread across multiple implementations.
- Bug Risk: Divergence risk is high—future fixes may be applied to one lambda but missed in others, leading to inconsistent restart/checkpoint semantics.
- Code Bloat: The pattern is relatively verbose and repeated; extracting the shared skeleton would reduce repeated control-flow boilerplate.
Refactoring Recommendations
- Extract a shared “pending batch drain” helper
  - Extract the common control flow into a utility, e.g. server/routerlicious/packages/lambdas/src/utils/pendingBatchDrain.ts (see the sketch after this list).
  - Make it generic over:
    - data structure (Map<K, V> batches vs. other queues)
    - processing function (sync/async)
    - checkpoint behavior (when/how to checkpoint)
    - error handling (restart vs. pause)
  - Estimated effort: Medium (2–6 hours), depending on desired generality.
  - Benefits: centralized correctness and easier consistent instrumentation.
- Unify checkpoint + recursion semantics
  - Consider a single implementation that can optionally (also illustrated in the sketch below):
    - checkpoint only on success
    - checkpoint when pending becomes empty
    - schedule the next drain (immediate recursion vs. setImmediate/task scheduling)
  - Estimated effort: Medium.
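One possible shape for such a helper, covering both recommendations above. The module path comes from the recommendation; the class name, option names, and their semantics are illustrative suggestions, not existing code:

```typescript
// Illustrative sketch only: pendingBatchDrain.ts does not exist yet, and the
// option and class names below are suggestions, not an agreed-upon API.
export interface IPendingBatchDrainOptions<K, V> {
    /** Processes one swapped batch; may be sync or async. */
    processBatch: (key: K, values: V[]) => void | Promise<void>;
    /** Checkpoints the given offset (typically wraps context.checkpoint). */
    checkpoint: (offset: number) => void;
    /** Called on failure; the caller decides between restart and pause. */
    onError: (error: unknown) => void;
    /** Checkpoint after every successful drain, or only once pending is empty. */
    checkpointMode?: "onSuccess" | "onEmpty";
    /** Run the next drain via direct recursion or defer it with setImmediate. */
    scheduleNext?: "immediate" | "setImmediate";
}

export class PendingBatchDrain<K, V> {
    private current = new Map<K, V[]>();
    private pending = new Map<K, V[]>();
    private pendingOffset = 0;

    constructor(private readonly options: IPendingBatchDrainOptions<K, V>) {}

    public enqueue(key: K, value: V, offset: number): void {
        if (!this.pending.has(key)) {
            this.pending.set(key, []);
        }
        this.pending.get(key)!.push(value);
        this.pendingOffset = offset;
        this.drain();
    }

    private drain(): void {
        if (this.current.size > 0 || this.pending.size === 0) {
            return;
        }
        [this.current, this.pending] = [this.pending, this.current];
        const batchOffset = this.pendingOffset;

        const work = Array.from(this.current.entries()).map(
            async ([key, values]) => this.options.processBatch(key, values),
        );
        Promise.all(work).then(
            () => {
                this.current.clear();
                // Checkpoint either unconditionally or only once nothing is queued.
                if (this.options.checkpointMode !== "onEmpty" || this.pending.size === 0) {
                    this.options.checkpoint(batchOffset);
                }
                // Recurse immediately or yield to the event loop first.
                if (this.options.scheduleNext === "setImmediate") {
                    setImmediate(() => this.drain());
                } else {
                    this.drain();
                }
            },
            (error) => this.options.onError(error),
        );
    }
}
```

With a helper of this shape, each lambda would supply only its batch-processing function and error policy, e.g. onError: (error) => context.error(error, { restart: true }), so changes to checkpoint timing or scheduling land in one place.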
Implementation Checklist
Analysis Metadata
- Analyzed Files: 15 top-churn TS files + Routerlicious lambdas cross-check
- Detection Method: Serena semantic code analysis (symbol-level extraction + cross-file comparison)
- Commit: 19a716b
- Analysis Date: 2026-04-04T21:46:46.663Z
Generated by Duplicate Code Detector
To install this agentic workflow, run
gh aw add github/gh-aw/.github/workflows/duplicate-code-detector.md@94662b1dee8ce96c876ba9f33b3ab8be32de82a4