(SP: 3) [Shop][DB] Reduce Neon compute: throttle janitor + relax checkout polling + add sweep indexes #375
Conversation
…ebhook + janitor (ORIGIN_BLOCKED)
…wnership test and pass all pre-prod invariants
…ests; minor security/log/UI cleanups
…s) and remove duplicate sha256 hashing
…lCount for orders badge
…obank logging safety invariant, namespace localStorage cart by user and reset on auth change
Vercel for GitHub: 1 Skipped Deployment
📝 Walkthrough

Refactors client order-status polling to a bounded abortable backoff loop and adds a lite order-status view; introduces canonical dual-write event recording and deterministic dedupe keys; clamps internal shipping job intervals with telemetry; hardens webhook/shipment enqueue and monobank flows; adds test seeding helpers, throttled activity heartbeats, sweep batch claiming, and DB migration/indexes.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
  actor Browser as Client
  participant UI as MonobankRedirectStatus
  participant Poll as OrderStatusAutoRefresh
  participant API as /api/shop/orders/[id]/status
  participant DB as Database
  Browser->>UI: mount with orderId & paymentStatus
  UI->>Poll: start bounded poll loop
  activate Poll
  Poll->>API: GET /status?view=lite&statusToken
  API->>DB: query lite summary
  DB-->>API: return lite summary
  API-->>Poll: {ok:true, paymentStatus}
  alt paymentStatus terminal
    Poll->>UI: trigger router.refresh()
    Poll-->>Poll: stop polling
  else not terminal
    Poll->>Poll: wait backoff + jitter (3000–15000ms)
    Poll->>API: retry
  end
  deactivate Poll
```
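The bounded backoff with jitter shown in the diagram (3000–15000 ms) can be sketched as below; the function name, the exponential growth, and the 20% jitter factor are illustrative assumptions, not the PR's actual constants.

```typescript
// Illustrative sketch of a bounded backoff delay with jitter, clamped to the
// 3000-15000 ms window from the diagram. The names and the 20% jitter factor
// are assumptions, not taken from the PR.
const BASE_DELAY_MS = 3000;
const MAX_DELAY_MS = 15000;

function nextDelayMs(attempt: number, random: () => number = Math.random): number {
  // Exponential growth capped at the ceiling...
  const backoff = Math.min(BASE_DELAY_MS * 2 ** attempt, MAX_DELAY_MS);
  // ...plus up to 20% jitter, with the result clamped back into the window.
  const jitter = backoff * 0.2 * random();
  return Math.min(Math.max(backoff + jitter, BASE_DELAY_MS), MAX_DELAY_MS);
}
```

A loop using this delay stays within the stated window on every attempt, so the poller can never hammer the status endpoint faster than once per 3 seconds.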
```mermaid
sequenceDiagram
  participant Webhook as Stripe Webhook Route
  participant DB as Database
  participant Orders as orders
  participant Shipments as shipping_shipments
  Webhook->>DB: begin transactional dual-write
  DB->>Orders: update order (paid/refund) + compute eligibility
  alt canonicalDualWriteEnabled
    DB->>DB: insert payment_events (dedupe)
    DB->>Shipments: insert queued shipment (from eligible)
    DB->>Orders: update shipping_status = 'queued'
  else fallback
    DB->>Shipments: insert queued shipment (fallback)
    DB->>Orders: update shipping_status = 'queued'
  end
  DB-->>Webhook: commit & return updated_count / queued_shipment_count
```
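The deterministic dedupe keys mentioned in the walkthrough could, for example, be a hash over the fields that identify an event. The field choice and key format below are assumptions for illustration, not the PR's actual scheme.

```typescript
import { createHash } from "node:crypto";

// Hypothetical dedupe key: a sha256 over the fields assumed to identify a
// payment event. The same event always yields the same key, so a unique
// index on the key column makes the dual-write insert idempotent.
function paymentEventDedupeKey(
  provider: string,
  externalEventId: string,
  orderId: string
): string {
  return createHash("sha256")
    .update([provider, externalEventId, orderId].join("\u0000"))
    .digest("hex");
}
```

With a unique constraint on such a column, a replayed webhook reduces to `insert ... on conflict do nothing` instead of a duplicate event row.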
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
Suggested reviewers
Poem
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
Actionable comments posted: 5
🧹 Nitpick comments (11)
frontend/drizzle/0020_shop_orders_sweeps_partial_indexes.sql (2)
1-11: Verify deployment strategy for non-CONCURRENT index creation.

The comment notes that `CONCURRENTLY` is intentionally not used. However, plain `CREATE INDEX` takes a SHARE lock on the table, blocking all writes (reads still proceed) until the index is built. On a production `orders` table with significant data volume, this could cause request timeouts or failures during deployment. Consider:

- Running this migration during a low-traffic window, or
- Using `CREATE INDEX CONCURRENTLY` if zero-downtime is required (though this cannot run inside a transaction and requires separate handling in Drizzle migrations).

If the table is small or a maintenance window is planned, the current approach is acceptable.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/drizzle/0020_shop_orders_sweeps_partial_indexes.sql` around lines 1 - 11, The migration creates a plain CREATE INDEX "orders_sweep_stripe_created_claim_id_idx" on table "orders", which will take an exclusive lock and block writes; either schedule this migration for a low-traffic/maintenance window or change to CREATE INDEX CONCURRENTLY "orders_sweep_stripe_created_claim_id_idx" and adjust the Drizzle migration so it is executed outside a transaction (CONCURRENTLY cannot run inside a transaction), ensuring any migration runner/transaction wrapper is disabled for this statement and that you handle potential long-running index creation and retry/error handling accordingly.
21-21: Missing newline at end of file.

Add a trailing newline after the final statement-breakpoint for POSIX compliance.
Proposed fix

```diff
--> statement-breakpoint
+
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/drizzle/0020_shop_orders_sweeps_partial_indexes.sql` at line 21, The file ends with the token "statement-breakpoint" but lacks a trailing newline; open the file containing the "statement-breakpoint" token (frontend/drizzle/0020_shop_orders_sweeps_partial_indexes.sql) and add a single newline character after the final "statement-breakpoint" so the file terminates with a newline (POSIX-compliant EOF).

frontend/lib/tests/shop/monobank-payments-disabled.test.ts (1)
70-106: Consider extracting the shared seeding helper to reduce duplication.

The `getOrSeedActiveTemplateProduct` helper is duplicated across multiple test files (checkout-no-payments.test.ts, checkout-monobank-idempotency-contract.test.ts). While the current implementation is correct and functional, extracting this to a shared test utility (e.g., frontend/lib/tests/helpers/seed-product.ts) would reduce duplication and ensure consistent seeding behavior across tests.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/lib/tests/shop/monobank-payments-disabled.test.ts` around lines 70 - 106, Extract the duplicated getOrSeedActiveTemplateProduct function into a shared test helper module (e.g., create frontend/lib/tests/helpers/seed-product.ts) and export it; update tests that currently define the function (monobank-payments-disabled.test.ts, checkout-no-payments.test.ts, checkout-monobank-idempotency-contract.test.ts) to import { getOrSeedActiveTemplateProduct } from that helper instead of duplicating the implementation; ensure the helper preserves the same behavior and references (db, products, toDbMoney, crypto) and update any relative import paths and test fixtures accordingly.

frontend/app/api/shop/internal/orders/restock-stale/route.ts (1)
397-401: Consider documenting the backward compatibility intent.

The assignment `minIntervalSeconds = effectiveIntervalSeconds` maintains backward compatibility for API consumers while internally using the more descriptive `effectiveIntervalSeconds`. Consider adding a brief comment explaining this dual naming is intentional for response/log compatibility.

📝 Suggested clarification comment
```diff
 const effectiveIntervalSeconds = Math.max(
   envMinIntervalSeconds,
   requestedMinIntervalSeconds
 );
+// Alias for backward compatibility in API responses and logs
 const minIntervalSeconds = effectiveIntervalSeconds;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/app/api/shop/internal/orders/restock-stale/route.ts` around lines 397 - 401, Add a brief comment next to the assignment of minIntervalSeconds = effectiveIntervalSeconds explaining that minIntervalSeconds is preserved to maintain backward compatibility in responses/logs while effectiveIntervalSeconds is the clearer internal name; update the block around the variables effectiveIntervalSeconds and minIntervalSeconds so the intent is explicit for future readers.

frontend/app/api/sessions/activity/route.ts (1)
10-10: Consider making heartbeat throttle configurable.

Hardcoding `HEARTBEAT_THROTTLE_MS` limits runtime tuning. An env-backed value (with safe default/floor) would make ops adjustments easier without redeploys.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/app/api/sessions/activity/route.ts` at line 10, Replace the hardcoded HEARTBEAT_THROTTLE_MS constant with an env-backed value: read a new env var (e.g., HEARTBEAT_THROTTLE_MS) in the module, parse it as an integer with a safe default of 60000 and enforce a minimum floor (e.g., 1000) to avoid too-small values, and export or use that parsed value wherever HEARTBEAT_THROTTLE_MS is referenced in route.ts (or wrap the logic in a small getter like getHeartbeatThrottleMs to centralize parsing/validation).

frontend/lib/services/orders/monobank-webhook.ts (1)
978-987: Skip the ensure pass when atomic enqueue already succeeded.

At Line 978, the helper runs unconditionally, adding extra DB work on successful fast-path events. You can gate it behind `!atomicResult.shipmentQueued`.

Suggested refactor
```diff
-  const ensured = await ensureQueuedShipmentAndOrderShippingStatus({
-    now,
-    orderId: orderRow.id,
-  });
+  const ensured = atomicResult.shipmentQueued
+    ? { insertedShipment: false, updatedOrder: false }
+    : await ensureQueuedShipmentAndOrderShippingStatus({
+        now,
+        orderId: orderRow.id,
+      });
   const shipmentQueued =
     atomicResult.shipmentQueued ||
     ensured.insertedShipment ||
     ensured.updatedOrder;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/lib/services/orders/monobank-webhook.ts` around lines 978 - 987, The code always calls ensureQueuedShipmentAndOrderShippingStatus(...) even when the fast-path atomic enqueue succeeded (atomicResult.shipmentQueued), causing unnecessary DB work; change the logic to only call ensureQueuedShipmentAndOrderShippingStatus when !atomicResult.shipmentQueued, then compute ensured and shipmentQueued as before (shipmentQueued = atomicResult.shipmentQueued || ensured.insertedShipment || ensured.updatedOrder) so the helper is skipped on the successful atomic path; update references to ensured and shipmentQueued accordingly.

frontend/app/api/shop/internal/shipping/np/sync/route.ts (1)
164-182: Double clamping: schema transform already applies the floor.

The `internalNpSyncPayloadSchema` already applies `.transform(applyInternalShippingMinIntervalFloor)` (see shop-shipping.ts:91-92), so `parsed.data.minIntervalSeconds` is the already-clamped value. The route-level clamping here is redundant, and the `wasClamped` log will typically be `false` even when the original request value was below the floor.

This isn't a bug (the clamping is idempotent), but the logging may be misleading. Consider either:

- Removing the route-level clamping (trust the schema), or
- Accessing the raw request value before schema transform for accurate logging
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/app/api/shop/internal/shipping/np/sync/route.ts` around lines 164 - 182, The route currently re-applies the floor to parsed.data.minIntervalSeconds (using getInternalShippingMinIntervalFloorSeconds) causing redundant clamping and misleading wasClamped logs; fix by either (A) removing the route-level clamp and use effectiveIntervalSeconds = parsed.data.minIntervalSeconds and pass that to acquireJobSlot (keep wasClamped removed or set to false), or (B) if you need to log whether the original request was below the floor, obtain the raw incoming minIntervalSeconds before the schema transform (from the unvalidated request payload) and compute wasClamped by comparing that rawRequested value to getInternalShippingMinIntervalFloorSeconds(), while keeping effectiveIntervalSeconds = parsed.data.minIntervalSeconds; update logging and the call to acquireJobSlot accordingly (referencing internalNpSyncPayloadSchema, applyInternalShippingMinIntervalFloor, parsed.data.minIntervalSeconds, getInternalShippingMinIntervalFloorSeconds, wasClamped, acquireJobSlot).

frontend/lib/tests/shop/shipping-checkout-payload-phase6.test.ts (1)
74-93: Test name no longer reflects test behavior.

The test is named "returns shipping unavailable UX message for unsupported country" but no longer validates any message field. Consider renaming to better reflect the actual assertion:

Suggested rename
```diff
-  it('returns shipping unavailable UX message for unsupported country', () => {
+  it('returns SHIPPING_UNAVAILABLE code for unsupported country', () => {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/lib/tests/shop/shipping-checkout-payload-phase6.test.ts` around lines 74 - 93, The test title for the case exercising buildCheckoutShippingPayload no longer matches its assertions; update the it(...) description to reflect that it only asserts the result.ok is false and result.code === 'SHIPPING_UNAVAILABLE' (for example: "returns SHIPPING_UNAVAILABLE code when shipping is unavailable for unsupported country") so the test name accurately describes the behavior tested in buildCheckoutShippingPayload.

frontend/lib/services/shop/shipping/inventory-eligibility.ts (1)
16-25: Inconsistency between constant array and function implementations.

`INVENTORY_COMMITTED_FOR_SHIPPING` (lines 8-10) and `getInventoryCommittedForShippingStatuses()` still exist, but `isInventoryCommittedForShipping` and `inventoryCommittedForShippingSql` now hardcode `'reserved'` instead of referencing the constant. This creates a maintenance risk: if someone adds a status to the array expecting the functions to respect it, the change won't take effect.

Consider either:

- Having these functions derive from the constant, or
- Removing the constant/getter if `'reserved'` is the only intended value

Option 1: Derive from the constant
```diff
 export function isInventoryCommittedForShipping(
   status: InventoryStatusValue | string | null | undefined
 ): boolean {
-  return status === 'reserved';
+  return INVENTORY_COMMITTED_FOR_SHIPPING.includes(status as InventoryStatusValue);
 }

 export function inventoryCommittedForShippingSql(
   columnReference: SQLWrapper
 ): SQL {
-  return sql`${columnReference} = 'reserved'`;
+  const values = INVENTORY_COMMITTED_FOR_SHIPPING.map(s => `'${s}'`).join(', ');
+  return sql`${columnReference} IN (${sql.raw(values)})`;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/lib/services/shop/shipping/inventory-eligibility.ts` around lines 16 - 25, The isInventoryCommittedForShipping and inventoryCommittedForShippingSql functions are hardcoding 'reserved' instead of using the existing INVENTORY_COMMITTED_FOR_SHIPPING constant/getter; update these functions to derive their logic from INVENTORY_COMMITTED_FOR_SHIPPING (or getInventoryCommittedForShippingStatuses()) so they honor any future statuses added to that array—e.g., have isInventoryCommittedForShipping check membership in INVENTORY_COMMITTED_FOR_SHIPPING and have inventoryCommittedForShippingSql produce an SQL fragment that compares columnReference against the set of statuses (using IN or equivalent) rather than a single literal.

frontend/app/[locale]/shop/checkout/success/OrderStatusAutoRefresh.tsx (1)
148-151: Minor: Redundant abort after fetch completion.

When execution reaches line 148, the fetch has already completed (either successfully or via the `.catch()` handler). Calling `controller.abort()` at this point has no effect. The abort is harmless but adds unnecessary code.

♻️ Remove redundant abort
```diff
 if (cancelled) {
-  controller.abort();
   return;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/app/[locale]/shop/checkout/success/OrderStatusAutoRefresh.tsx` around lines 148 - 151, The controller.abort() call after the fetch has completed is redundant; remove the call to controller.abort() inside the block that checks cancelled (i.e., the cancelled check in OrderStatusAutoRefresh's effect/async flow) so you only abort the controller when cancelling an in-flight request (use controller.abort() in the effect cleanup or prior to starting a fetch), leaving the cancelled flag check to simply return without calling controller.abort().

frontend/lib/services/orders/sweeps.ts (1)
77-127: Extract the shared claim transaction flow into one helper.

Line 77, Line 211, and Line 351 repeat nearly identical logic; centralizing the claim/update routine will reduce drift risk and make future sweep changes safer.
♻️ Refactor sketch
```diff
+type ClaimBatchArgs = {
+  now: Date;
+  claimExpiresAt: Date;
+  runId: string;
+  workerId: string;
+  batchSize: number;
+  baseConditions: any[];
+  extraPatch?: Record<string, unknown>;
+};
+
+async function claimSweepBatch(tx: typeof db, args: ClaimBatchArgs) {
+  const claimable = tx
+    .select({ id: orders.id })
+    .from(orders)
+    .where(and(...args.baseConditions))
+    .orderBy(orders.createdAt)
+    .limit(args.batchSize)
+    .for('update', { skipLocked: true });
+
+  return tx
+    .update(orders)
+    .set({
+      sweepClaimedAt: args.now,
+      sweepClaimExpiresAt: args.claimExpiresAt,
+      sweepRunId: args.runId,
+      sweepClaimedBy: args.workerId,
+      updatedAt: args.now,
+      ...(args.extraPatch ?? {}),
+    })
+    .where(
+      and(
+        inArray(orders.id, claimable),
+        or(isNull(orders.sweepClaimExpiresAt), lt(orders.sweepClaimExpiresAt, args.now))
+      )
+    )
+    .returning({ id: orders.id });
+}
```

Also applies to: 211-265, 351-397
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/lib/services/orders/sweeps.ts` around lines 77 - 127, Extract the repeated claim/update transaction into a single helper like claimOrdersForSweep(tx, { batchSize, cutoff, hasExplicitIds, orderIds, now, claimExpiresAt, runId, workerId }) that encapsulates building baseConditions, selecting the claimable set (the claimable query using tx.select(...).from(orders).where(...).for('update', { skipLocked: true })), performing the tx.update(...) with the same .set({...}) and .where(and(inArray(orders.id, claimable), or(isNull(orders.sweepClaimExpiresAt), lt(orders.sweepClaimExpiresAt, now)))), and returning the updated ids; replace the three repeated blocks (the db.transaction(...) that assigns claimed and uses baseConditions/claimable/tx.update) with calls into this helper inside their transactions, passing the appropriate parameters (batchSize, cutoff, hasExplicitIds, options?.orderIds, now, claimExpiresAt, runId, workerId) so the logic in one place controls claim selection and update.
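The reclaim condition in the suggested helper, `or(isNull(sweepClaimExpiresAt), lt(sweepClaimExpiresAt, now))`, boils down to a small predicate. This standalone sketch is illustrative only; the helper name is not from the PR.

```typescript
// A row may be (re)claimed when it has no active claim or the previous claim
// has expired; mirrors or(isNull(sweepClaimExpiresAt), lt(sweepClaimExpiresAt, now)).
function isReclaimable(claimExpiresAt: Date | null, now: Date): boolean {
  return claimExpiresAt === null || claimExpiresAt.getTime() < now.getTime();
}
```

Combined with `for update skip locked`, this keeps concurrent sweep workers from double-claiming the same order batch.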
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@frontend/app/[locale]/shop/checkout/success/OrderStatusAutoRefresh.tsx`:
- Around line 15-21: TERMINAL_STATUSES currently includes an invalid value
'canceled'; update the TERMINAL_STATUSES Set in OrderStatusAutoRefresh.tsx to
remove the 'canceled' entry so it only contains valid payment_status enum values
('paid', 'failed', 'refunded', 'needs_review'), ensuring the terminal-check
logic uses only real API-returned statuses.
In `@frontend/app/api/shop/webhooks/stripe/route.ts`:
- Around line 1034-1066: The current non-atomic fallback inserts into
shipping_shipments then separately updates orders, which can leave inconsistent
state if the second statement fails; modify the logic around the shipment
queuing (the block that inserts into shipping_shipments and then updates orders)
to run atomically by wrapping both DB statements in a single transaction or
converting them into one CTE-based statement (mirror the approach used in
applyStripePaidAndQueueShipmentAtomic): ensure the insert into
shipping_shipments (status='queued') and the update of orders.shipping_status to
'queued' for the same order.id are performed in one transactional unit so either
both succeed or both roll back.
- Around line 442-453: The CTE eligible_for_enqueue currently uses
"(${shouldAttemptEnqueue} or true)" which always evaluates true; update the
eligible_for_enqueue CTE to remove "or true" so ${shouldAttemptEnqueue} actually
gates the clause (i.e., change the predicate to just ${shouldAttemptEnqueue}),
keeping the rest of the WHERE conditions (args.orderId,
payment_provider/payment_status, shipping_required, shipping_provider,
shipping_method_code, and
inventoryCommittedForShippingSql(sql`o.inventory_status`)) unchanged and
preserving safe SQL interpolation.
In `@frontend/components/shared/OnlineCounterPopup.tsx`:
- Around line 25-34: The lazy state initializer for online in OnlineCounterPopup
causes SSR hydration mismatch by reading sessionStorage; change the useState
call for online to initialize unconditionally to null, and move the
sessionStorage read into a client-only effect: inside the existing useEffect in
OnlineCounterPopup (or a new useEffect with no dependencies) read
sessionStorage.getItem(ACTIVITY_LAST_ONLINE_KEY), parse Number and call
setOnline(...) only on the client, keeping the try/catch from the original
initializer; ensure you still handle non-finite values by setting null.
In `@frontend/lib/services/orders/monobank-webhook.ts`:
- Around line 559-565: The update in the CTE shipping_status_update is
performing no-op writes when shipping_status is already 'queued', causing
updated_at to change and downstream logic (e.g., shipmentQueued) to trigger
incorrectly; modify the WHERE clause of the update in shipping_status_update
(which targets orders and references queued_order_ids) to only affect rows whose
shipping_status is not already 'queued' (e.g., add a condition like
shipping_status IS DISTINCT FROM 'queued'::shipping_status) so only real state
changes update updated_at and produce rows in the RETURNING id set.
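For the TERMINAL_STATUSES finding above, the corrected set and check might look like this; a sketch assuming these four values are the only terminal payment_status enum members, per the review comment.

```typescript
// Terminal payment statuses per the review: 'canceled' is not a valid
// payment_status enum value and is therefore excluded from the set.
const TERMINAL_STATUSES = new Set(["paid", "failed", "refunded", "needs_review"]);

function isTerminalPaymentStatus(status: string): boolean {
  return TERMINAL_STATUSES.has(status);
}
```

The poll loop can stop as soon as this predicate returns true for the API-reported status.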
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (25)
- .github/workflows/shop-janitor-restock-stale.yml
- frontend/app/[locale]/shop/checkout/success/MonobankRedirectStatus.tsx
- frontend/app/[locale]/shop/checkout/success/OrderStatusAutoRefresh.tsx
- frontend/app/api/sessions/activity/route.ts
- frontend/app/api/shop/internal/orders/restock-stale/route.ts
- frontend/app/api/shop/internal/shipping/np/sync/route.ts
- frontend/app/api/shop/internal/shipping/retention/run/route.ts
- frontend/app/api/shop/internal/shipping/shipments/run/route.ts
- frontend/app/api/shop/orders/[id]/status/route.ts
- frontend/app/api/shop/webhooks/stripe/route.ts
- frontend/components/shared/OnlineCounterPopup.tsx
- frontend/drizzle/0020_shop_orders_sweeps_partial_indexes.sql
- frontend/drizzle/meta/0020_snapshot.json
- frontend/drizzle/meta/_journal.json
- frontend/lib/services/orders/monobank-webhook.ts
- frontend/lib/services/orders/summary.ts
- frontend/lib/services/orders/sweeps.ts
- frontend/lib/services/shop/shipping/inventory-eligibility.ts
- frontend/lib/tests/shop/checkout-monobank-idempotency-contract.test.ts
- frontend/lib/tests/shop/checkout-no-payments.test.ts
- frontend/lib/tests/shop/monobank-events-claim.test.ts
- frontend/lib/tests/shop/monobank-payments-disabled.test.ts
- frontend/lib/tests/shop/monobank-webhook-apply.test.ts
- frontend/lib/tests/shop/shipping-checkout-payload-phase6.test.ts
- frontend/lib/validation/shop-shipping.ts
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 64ca6a6375
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
```sql
from inserted_shipment
where orders.id = inserted_shipment.order_id
returning orders.id
where id in (select id from eligible_for_enqueue)
```
Restrict mark_queued to orders with queued shipment rows
This update now runs for every eligible_for_enqueue order, even when inserted_shipment inserted nothing because on conflict (order_id) do nothing hit an existing non-queued shipment row (for example, failed/delivered). In that case a replayed payment_intent.succeeded can incorrectly rewrite orders.shipping_status back to queued without any queued shipment to process, desynchronizing order state from shipping_shipments.
```sql
update orders
set shipping_status = 'queued'::shipping_status,
    updated_at = ${now}
where id = ${order.id}::uuid
```
Guard fallback shipping_status update on successful enqueue
In this fallback path, shipping_status is updated to queued unconditionally after an insert that may have been a no-op due to on conflict (order_id) do nothing. If an order already has a shipment row in a terminal/non-queued state, this block can still force orders.shipping_status='queued', causing status regressions on subsequent webhook reprocessing.
There was a problem hiding this comment.
Actionable comments posted: 3
🧹 Nitpick comments (5)
frontend/components/shared/OnlineCounterPopup.tsx (1)
61-63: Unnecessary `setTimeout` wrapper.

The `setTimeout(..., 0)` is unnecessary here. This code already runs inside a `useEffect`, which executes on the client after mount. Calling `setOnline(cachedOnline)` directly is simpler and avoids extra task scheduling.

♻️ Suggested simplification

```diff
 if (Number.isFinite(n)) {
   const cachedOnline = n;
-  setTimeout(() => setOnline(cachedOnline), 0);
+  setOnline(cachedOnline);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/components/shared/OnlineCounterPopup.tsx` around lines 61 - 63, The setTimeout wrapper around updating state is unnecessary; inside the OnlineCounterPopup component (within the useEffect where const cachedOnline = n is set) remove setTimeout(() => setOnline(cachedOnline), 0) and replace it with a direct call to setOnline(cachedOnline) so the state is updated synchronously from the effect.

frontend/app/api/shop/internal/orders/restock-stale/route.ts (1)
237-255: Clarify `lastRunTs` semantics in logs/documentation.

In the success case (lines 238-241), `lastRunTs` is derived from `updated_at`, which is set to `now()` during the INSERT/UPDATE. This means `lastRunTs` represents when the current run's gate was acquired, not when the job previously ran.

In the rate-limited case (lines 253-255), `lastRunTs` reflects the time the gate was last successfully acquired (i.e., the previous run).

This asymmetry could cause confusion during log analysis. Consider either:

- Renaming to `gateUpdatedAt` to clarify semantics, or
- Adding a brief comment explaining the distinction
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/app/api/shop/internal/orders/restock-stale/route.ts` around lines 237 - 255, The returned field lastRunTs is ambiguous because in the success branch (where rows.length > 0) it reflects the current gate acquisition time (from updated_at set to now()), while in the rate-limited branch it reflects the previous gate acquisition; update the API to make this explicit by either renaming lastRunTs to gateUpdatedAt everywhere (adjusting callers that consume the response) or adding a concise comment next to the uses of lastRunTs/normalizeDate (in the success branch and the internal_job_state query handling) explaining the semantic difference (current run vs previous run) so logs and consumers understand the asymmetry; reference the variables lastRunTs, normalizeDate, rows, rows2, and the internal_job_state query/jobName when making the change.

frontend/lib/services/orders/sweeps.ts (2)
155-158: Unnecessary truthy check on `idCond`.

`inArray()` always returns an `SQLWrapper` instance, never a falsy value. The conditional check is redundant since line 118 already guards against empty `orderIds`.

🧹 Optional simplification

```diff
 if (hasExplicitIds && options?.orderIds?.length) {
-  const idCond = inArray(orders.id, options.orderIds);
-  if (idCond) baseConditions.push(idCond);
+  baseConditions.push(inArray(orders.id, options.orderIds));
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/lib/services/orders/sweeps.ts` around lines 155 - 158, The conditional guard around idCond is redundant because inArray(...) always returns an SQLWrapper and options.orderIds is already checked by hasExplicitIds; remove the if (idCond) branch and directly compute const idCond = inArray(orders.id, options.orderIds); then always do baseConditions.push(idCond) so the code simply creates the condition and appends it to baseConditions (referencing hasExplicitIds, options.orderIds, inArray, orders.id, idCond, and baseConditions).
68-116: Consider extracting shared constants and parameter validation logic.

The three sweep functions duplicate parameter validation logic and constants (`MIN_OLDER_MIN`, `MAX_OLDER_MIN`, `MIN_BATCH`, `MAX_BATCH`, `MIN_CLAIM_TTL`, `MAX_CLAIM_TTL`, time budget constants). Extracting these into shared utilities could reduce maintenance burden.

💡 Example shared utilities (sketch)

```typescript
const SWEEP_LIMITS = {
  olderMin: { min: 10, max: 60 * 24 * 7 },
  batch: { min: 25, max: 100 },
  claimTtl: { min: 1, max: 60 },
  timeBudget: { default: 20_000, min: 0, max: 25_000 },
} as const;

function clampValue(raw: number, limits: { min: number; max: number; default?: number }): number {
  return Math.max(limits.min, Math.min(limits.max, Math.floor(raw)));
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/lib/services/orders/sweeps.ts` around lines 68 - 116, The duplicated min/max constants and validation logic inside restockStalePendingOrders (and the other two sweep functions in this file) should be extracted into a small shared utility: create a SWEEP_LIMITS object and a clampValue (or clampSweepParam) helper that accepts a raw value and limits and returns the validated integer, then replace the inline constants (MIN_OLDER_MIN, MAX_OLDER_MIN, MIN_BATCH, MAX_BATCH, MIN_CLAIM_TTL, MAX_CLAIM_TTL, DEFAULT_TIME_BUDGET_MS, MIN_TIME_BUDGET_MS, MAX_TIME_BUDGET_MS) and the repeated Math.max/Math.min/Math.floor code in restockStalePendingOrders and the other sweep functions with calls to the new utilities (use the same parameter names olderThanMinutes, batchSize, claimTtlMinutes, timeBudgetMs and keep workerId handling unchanged).

frontend/lib/services/orders/checkout.ts (1)
943-944: Narrow the 23505 handler to the idempotency-key constraint.

Catching all unique violations as idempotency collisions can mask unrelated DB integrity errors.

Suggested change

```diff
-    if ((error as { code?: string }).code === '23505') {
+    const pgErr = error as { code?: string; constraint?: string };
+    if (
+      pgErr.code === '23505' &&
+      pgErr.constraint === 'orders_idempotency_key_key'
+    ) {
       const existingOrder = await getOrderByIdempotencyKey(db, idempotencyKey);
       if (existingOrder) { ... }
     }
```
Verify each finding against the current code and only fix it if needed. In `@frontend/lib/services/orders/checkout.ts` around lines 943 - 944, The current 23505 handler treats every unique-violation as an idempotency collision; change it to only handle the specific idempotency-key constraint by inspecting the Postgres error's constraint or detail before calling getOrderByIdempotencyKey. In the block where you check (error as { code?: string }).code === '23505', also check (error as any).constraint === '<YOUR_IDEMPOTENCY_CONSTRAINT_NAME>' (or as a fallback (error as any).detail.includes('idempotency_key')) and only then call getOrderByIdempotencyKey(db, idempotencyKey); otherwise rethrow the error so unrelated unique-violation errors are not misclassified. Ensure you replace '<YOUR_IDEMPOTENCY_CONSTRAINT_NAME>' with the actual DB constraint name for the idempotency_key.
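The constraint-narrowing pattern above can be factored into a small predicate. This is a sketch: `PgUniqueError` and `isIdempotencyCollision` are illustrative names, and `orders_idempotency_key_key` is an assumed constraint name that must be verified against the real schema before relying on it.

```typescript
// Shape of a Postgres unique-violation error as surfaced by node-postgres;
// only the fields used below are modeled.
interface PgUniqueError {
  code?: string;
  constraint?: string;
}

// Assumed constraint name; verify against the actual schema.
const IDEMPOTENCY_CONSTRAINT = 'orders_idempotency_key_key';

// Returns true only for unique violations on the idempotency-key constraint,
// so unrelated integrity errors can be rethrown by the caller.
function isIdempotencyCollision(error: unknown): boolean {
  const pgErr = error as PgUniqueError;
  return pgErr?.code === '23505' && pgErr?.constraint === IDEMPOTENCY_CONSTRAINT;
}

console.log(isIdempotencyCollision({ code: '23505', constraint: IDEMPOTENCY_CONSTRAINT })); // true
console.log(isIdempotencyCollision({ code: '23505', constraint: 'orders_pkey' }));          // false
console.log(isIdempotencyCollision({ code: '23503' }));                                     // false
```

Keeping the check in one predicate also makes it easy to unit-test the rethrow path without a live database.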
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@frontend/app/api/shop/internal/shipping/np/sync/route.ts`:
- Around line 162-170: Update the interval flooring logic in route.ts: import
getInternalShippingMinIntervalFloorSeconds from
"@/lib/validation/shop-shipping", then set effectiveIntervalSeconds =
Math.max(getInternalShippingMinIntervalFloorSeconds(),
parsed.data.minIntervalSeconds) instead of using parsed.data.minIntervalSeconds
directly, and set wasClamped to (effectiveIntervalSeconds !==
parsed.data.minIntervalSeconds) in the logInfo call; also change
requestedMinIntervalSeconds to use the original parsed.data.minIntervalSeconds
while keeping other meta (JOB_NAME, baseMeta) the same so logInfo reflects both
requested and effective values.
In `@frontend/app/api/shop/webhooks/stripe/route.ts`:
- Around line 475-476: The code updates orders.shipping_status to 'queued' based
solely on eligibility (the eligible_for_enqueue CTE) which can diverge from
actual shipping_shipments.status; change the update logic so
orders.shipping_status is only set to 'queued' when there is an associated
shipping_shipment whose status is actually 'queued' (e.g., join the orders
update to shipping_shipments and/or filter eligible_for_enqueue to only include
rows where shipping_shipments.status = 'queued', or use the INSERT/RETURNING
from shipping_shipments creation to drive the orders update), and apply the same
fix to the other occurrence handling lines ~1039-1073 so orders.shipping_status
remains consistent with shipping_shipments.status.
In `@frontend/lib/services/shop/shipping/inventory-eligibility.ts`:
- Around line 28-29: Replace the insecure manual quoting in the IN clause:
instead of building values with INVENTORY_COMMITTED_FOR_SHIPPING.map(v =>
`'${v}'`) and sql.raw, use Drizzle's inArray() to parameterize the list (e.g.,
sql`${columnReference} ${inArray(INVENTORY_COMMITTED_FOR_SHIPPING)}`) or, if
composing manually, map to sql`${v}` and join with sql.join(values, sql`, `);
update the function that returns sql`${columnReference} IN (...)` to use one of
these safe patterns referencing INVENTORY_COMMITTED_FOR_SHIPPING and
columnReference.
---
Nitpick comments:
In `@frontend/app/api/shop/internal/orders/restock-stale/route.ts`:
- Around line 237-255: The returned field lastRunTs is ambiguous because in the
success branch (where rows.length > 0) it reflects the current gate acquisition
time (from updated_at set to now()), while in the rate-limited branch it
reflects the previous gate acquisition; update the API to make this explicit by
either renaming lastRunTs to gateUpdatedAt everywhere (adjusting callers that
consume the response) or adding a concise comment next to the uses of
lastRunTs/normalizeDate (in the success branch and the internal_job_state query
handling) explaining the semantic difference (current run vs previous run) so
logs and consumers understand the asymmetry; reference the variables lastRunTs,
normalizeDate, rows, rows2, and the internal_job_state query/jobName when making
the change.
In `@frontend/components/shared/OnlineCounterPopup.tsx`:
- Around line 61-63: The setTimeout wrapper around updating state is
unnecessary; inside the OnlineCounterPopup component (within the useEffect where
const cachedOnline = n is set) remove setTimeout(() => setOnline(cachedOnline),
0) and replace it with a direct call to setOnline(cachedOnline) so the state is
updated synchronously from the effect.
In `@frontend/lib/services/orders/checkout.ts`:
- Around line 943-944: The current 23505 handler treats every unique-violation
as an idempotency collision; change it to only handle the specific
idempotency-key constraint by inspecting the Postgres error's constraint or
detail before calling getOrderByIdempotencyKey. In the block where you check
(error as { code?: string }).code === '23505', also check (error as
any).constraint === '<YOUR_IDEMPOTENCY_CONSTRAINT_NAME>' (or as a fallback
(error as any).detail.includes('idempotency_key')) and only then call
getOrderByIdempotencyKey(db, idempotencyKey); otherwise rethrow the error so
unrelated unique-violation errors are not misclassified. Ensure you replace
'<YOUR_IDEMPOTENCY_CONSTRAINT_NAME>' with the actual DB constraint name for the
idempotency_key.
In `@frontend/lib/services/orders/sweeps.ts`:
- Around line 155-158: The conditional guard around idCond is redundant because
inArray(...) always returns an SQLWrapper and options.orderIds is already
checked by hasExplicitIds; remove the if (idCond) branch and directly compute
const idCond = inArray(orders.id, options.orderIds); then always do
baseConditions.push(idCond) so the code simply creates the condition and appends
it to baseConditions (referencing hasExplicitIds, options.orderIds, inArray,
orders.id, idCond, and baseConditions).
- Around line 68-116: The duplicated min/max constants and validation logic
inside restockStalePendingOrders (and the other two sweep functions in this
file) should be extracted into a small shared utility: create a SWEEP_LIMITS
object and a clampValue (or clampSweepParam) helper that accepts a raw value and
limits and returns the validated integer, then replace the inline constants
(MIN_OLDER_MIN, MAX_OLDER_MIN, MIN_BATCH, MAX_BATCH, MIN_CLAIM_TTL,
MAX_CLAIM_TTL, DEFAULT_TIME_BUDGET_MS, MIN_TIME_BUDGET_MS, MAX_TIME_BUDGET_MS)
and the repeated Math.max/Math.min/Math.floor code in restockStalePendingOrders
and the other sweep functions with calls to the new utilities (use the same
parameter names olderThanMinutes, batchSize, claimTtlMinutes, timeBudgetMs and
keep workerId handling unchanged).
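The shared clamp utility this prompt describes could look like the following sketch; the `SWEEP_LIMITS` values mirror the limits quoted earlier in the review, and `clampValue` is an illustrative name, not code from the repository.

```typescript
// Hypothetical shared limits for the three sweep functions.
const SWEEP_LIMITS = {
  olderMin: { min: 10, max: 60 * 24 * 7 },
  batch: { min: 25, max: 100 },
  claimTtl: { min: 1, max: 60 },
} as const;

// Clamp a raw numeric parameter into [min, max] after flooring, falling back
// to a default (or the minimum) when the input is not a finite number.
function clampValue(
  raw: number,
  limits: { min: number; max: number },
  fallback?: number,
): number {
  if (!Number.isFinite(raw)) {
    return fallback ?? limits.min;
  }
  return Math.max(limits.min, Math.min(limits.max, Math.floor(raw)));
}

console.log(clampValue(5, SWEEP_LIMITS.olderMin));    // below min -> 10
console.log(clampValue(37.9, SWEEP_LIMITS.batch));    // floored -> 37
console.log(clampValue(999, SWEEP_LIMITS.claimTtl));  // above max -> 60
console.log(clampValue(NaN, SWEEP_LIMITS.batch, 50)); // fallback -> 50
```

Each sweep function would then call `clampValue(olderThanMinutes, SWEEP_LIMITS.olderMin)` and so on, instead of repeating the `Math.max`/`Math.min`/`Math.floor` chain inline.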
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (16)
- frontend/app/[locale]/shop/checkout/success/OrderStatusAutoRefresh.tsx
- frontend/app/api/sessions/activity/route.ts
- frontend/app/api/shop/internal/orders/restock-stale/route.ts
- frontend/app/api/shop/internal/shipping/np/sync/route.ts
- frontend/app/api/shop/webhooks/stripe/route.ts
- frontend/components/shared/OnlineCounterPopup.tsx
- frontend/drizzle/0020_shop_orders_sweeps_partial_indexes.sql
- frontend/lib/services/orders/checkout.ts
- frontend/lib/services/orders/monobank-webhook.ts
- frontend/lib/services/orders/sweeps.ts
- frontend/lib/services/shop/shipping/inventory-eligibility.ts
- frontend/lib/tests/helpers/seed-product.ts
- frontend/lib/tests/shop/checkout-monobank-idempotency-contract.test.ts
- frontend/lib/tests/shop/checkout-no-payments.test.ts
- frontend/lib/tests/shop/monobank-payments-disabled.test.ts
- frontend/lib/tests/shop/shipping-checkout-payload-phase6.test.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- frontend/drizzle/0020_shop_orders_sweeps_partial_indexes.sql
There was a problem hiding this comment.
Actionable comments posted: 6
🧹 Nitpick comments (5)
frontend/lib/services/orders/monobank-webhook.ts (1)
490-730: Reduce SQL branch duplication to avoid drift bugs.

The canonical and non-canonical branches duplicate most CTE logic. This is brittle for future fixes and easy to desync. Consider extracting the shared CTE body and conditionally injecting only the canonical `inserted_payment_event` CTE.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/lib/services/orders/monobank-webhook.ts` around lines 490 - 730, The two branches duplicate nearly identical CTEs (updated_order, updated_attempt, eligible_for_enqueue, inserted_shipment, queued_order_ids, shipping_status_update) with only the inserted_payment_event differing; refactor by extracting the shared CTE body into a single SQL template or helper (e.g., sharedOrderUpdateSql or buildSharedOrderCtes) and inject the inserted_payment_event CTE only when args.canonicalDualWriteEnabled (e.g., buildPaymentEventCte or paymentEventSql) before the eligible_for_enqueue CTE, then execute one db.execute(sql`...`) that conditionally includes the payment event CTE; ensure references like ${args.eventId}, ${args.attemptId}, ${args.invoiceId}, ${args.canonicalEventDedupeKey}, and ${args.mergedMetaSql} are passed into the conditional snippet so behavior and dedupe logic remain identical.

frontend/lib/tests/shop/monobank-webhook-apply.test.ts (1)
94-98: Narrow the cleanup catch to migration-missing cases only.

The blanket catch can hide real DB cleanup failures and create test cross-contamination. Prefer swallowing only "table does not exist" errors.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/lib/tests/shop/monobank-webhook-apply.test.ts` around lines 94 - 98, The catch is too broad around the cleanup call db.delete(paymentEvents).where(...) and should only swallow "table missing" errors; change the catch to capture the error (e) and rethrow unless it matches missing-table conditions (e.g. SQLite messages like /no such table/i, Postgres code '42P01' or messages like /does not exist/i, or other DB-specific error codes). Keep references to the same call (db.delete(paymentEvents)) and the paymentEvents identifier so only migration-missing cases are ignored and any other DB failures still fail the test.

frontend/lib/services/shop/shipping/admin-actions.ts (1)
467-478: Extract canonical audit arg construction into one helper.

The same `canonical` object assembly is repeated in multiple branches, which increases drift risk.

Also applies to: 519-530, 564-575, 617-627, 661-672
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/lib/services/shop/shipping/admin-actions.ts` around lines 467 - 478, The repeated construction of the `canonical` audit object should be extracted into a single helper (e.g., buildCanonicalAuditArgs) that accepts the shared inputs (canonicalDualWriteEnabled, args.orderId, args.requestId, args.action, state.shipping_status, state.shipment_status, now) and returns the canonical object with enabled, dedupeKey (using buildShippingAdminAuditDedupe), and occurredAt; replace the inline `canonical: { ... }` blocks in admin-actions.ts (the branches that currently build canonical) with calls to this helper to eliminate duplication and keep a single source of truth.

frontend/app/api/shop/webhooks/stripe/route.ts (1)
421-625: Consider consolidating canonical and non-canonical paid SQL paths.

These branches are nearly identical; extracting a shared SQL shape would reduce regression risk when logic changes later.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/app/api/shop/webhooks/stripe/route.ts` around lines 421 - 625, The two large SQL branches differ only by the inserted_payment_event CTE and the returning columns (total_amount_minor, currency included for canonical), so refactor by extracting a single SQL builder that constructs the shared WITH pipeline (updated_order, eligible_for_enqueue, inserted_shipment, queued_order_ids, mark_queued) and conditionally injects the inserted_payment_event CTE and altered RETURNING/select list when args.canonicalDualWriteEnabled is true; implement this as a helper (e.g., buildPaidSql or buildPaidQuery) used in route handler, referencing the existing symbols updated_order, inserted_payment_event, eligible_for_enqueue, inserted_shipment, queued_order_ids, mark_queued and the helper inventoryCommittedForShippingSql, and pass in args.canonicalEventPayload/args.canonicalEventDedupeKey when canonicalDualWriteEnabled to include the payment_event insert and select total_amount_minor/currency, otherwise omit them.

frontend/lib/services/shop/events/dedupe-key.ts (1)
50-52: Guard against empty namespace input.

An empty/whitespace namespace currently produces keys like `:v1:<hash>`, which weakens namespace isolation.

💡 Proposed fix

```diff
 export function buildDedupeKey(namespace: string, seed: unknown): string {
   const normalizedNamespace = namespace.trim().toLowerCase();
+  if (!normalizedNamespace) {
+    throw new Error('buildDedupeKey namespace must be non-empty');
+  }
   const canonical = stableSerialize(seed);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/lib/services/shop/events/dedupe-key.ts` around lines 50 - 52, The buildDedupeKey function currently allows empty/whitespace namespaces which yields keys like `:v1:<hash>`; fix by validating the normalizedNamespace (namespace.trim().toLowerCase()) inside buildDedupeKey and reject empty input—e.g., if normalizedNamespace is empty throw a clear Error (or return a rejected result) stating the namespace must be non-empty—so that normalizedNamespace (and subsequent canonical/stableSerialize usage) always produces a proper namespaced key.
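A self-contained sketch of the guarded key builder, assuming a SHA-256 hash over a stable serialization as described above. The serializer here is a minimal stand-in for the repository's `stableSerialize`, not a copy of it, and the `:v1:` key shape is taken from the example in the comment.

```typescript
import { createHash } from 'node:crypto';

// Minimal stable serializer: sorts object keys with a locale-agnostic
// comparator so the same seed always serializes (and hashes) identically.
function stableSerialize(value: unknown): string {
  if (value !== null && typeof value === 'object' && !Array.isArray(value)) {
    const source = value as Record<string, unknown>;
    const keys = Object.keys(source).sort((a, b) => (a < b ? -1 : a > b ? 1 : 0));
    return `{${keys.map((k) => `${JSON.stringify(k)}:${stableSerialize(source[k])}`).join(',')}}`;
  }
  if (Array.isArray(value)) {
    return `[${value.map(stableSerialize).join(',')}]`;
  }
  return JSON.stringify(value);
}

// Rejects empty namespaces so keys like ":v1:<hash>" can never be produced.
function buildDedupeKey(namespace: string, seed: unknown): string {
  const normalized = namespace.trim().toLowerCase();
  if (!normalized) {
    throw new Error('buildDedupeKey namespace must be non-empty');
  }
  const hash = createHash('sha256').update(stableSerialize(seed)).digest('hex');
  return `${normalized}:v1:${hash}`;
}

// Key order in the seed does not change the result:
const a = buildDedupeKey('Payment ', { orderId: '1', event: 'paid' });
const b = buildDedupeKey('payment', { event: 'paid', orderId: '1' });
console.log(a === b); // true
```

With the guard in place, a misconfigured caller fails loudly instead of silently collapsing all its writes into one shared namespace.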
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@frontend/app/api/shop/webhooks/stripe/route.ts`:
- Around line 1276-1324: The fallback path that inserts/updates
shipping_shipments can also transition a shipment/order to 'queued' but
currently doesn't call recordShippingMetric; after the db.execute(...) block
inside the if that checks order.shippingProvider === 'nova_poshta' add logic to
detect whether the operation actually resulted in a queued state (e.g., inspect
the execute result or re-query shipping_shipments for order.id and
status='queued') and if so call recordShippingMetric(order.id, 'nova_poshta')
(or the existing recordShippingMetric signature used in the main atomic path) to
emit the shipping metric when fallback enqueues.
In `@frontend/lib/services/shop/events/dedupe-key.ts`:
- Line 33: The current sort uses localeCompare which can vary across runtimes;
replace Object.keys(source).sort((a, b) => a.localeCompare(b)) with a
locale-agnostic comparator to ensure deterministic ordering (e.g.
Object.keys(source).sort((a,b) => a < b ? -1 : a > b ? 1 : 0)). Update the
sorting in dedupe-key.ts where the keys constant is built from
Object.keys(source) so the canonicalization fed into SHA256 is stable across
environments.
In `@frontend/lib/services/shop/events/write-payment-event.ts`:
- Around line 29-32: The code should validate an explicitly provided
args.dedupeKey and treat empty/whitespace-only strings as not provided so we
don't collide all writes; change the logic around dedupeKey to trim
args.dedupeKey and if the trimmed value is falsy fall back to
buildPaymentEventDedupeKey(args.dedupeSeed ?? {...}), ensuring the computed
dedupeKey passed into the insert (which uses onConflictDoNothing()) is never an
empty string; reference the dedupeKey variable, args.dedupeKey, args.dedupeSeed,
and buildPaymentEventDedupeKey when making this fix.
In `@frontend/lib/tests/shop/admin-shipping-canonical-audit.test.ts`:
- Around line 18-20: Move the modification of
process.env.SHOP_CANONICAL_EVENTS_DUAL_WRITE into the same try block that
restores it in finally so the environment is always reset even if the test
fails: capture prevDualWrite, then set
process.env.SHOP_CANONICAL_EVENTS_DUAL_WRITE = 'true' inside the try before
exercising the code that may throw, and in the finally restore
process.env.SHOP_CANONICAL_EVENTS_DUAL_WRITE = prevDualWrite (handling undefined
appropriately); apply the same change to the second setup/teardown pair
referenced around the 42-74 range so both test blocks always restore the
original env value.
In `@frontend/lib/tests/shop/monobank-webhook-apply.test.ts`:
- Around line 154-156: The test sets
process.env.SHOP_CANONICAL_EVENTS_DUAL_WRITE before entering the try/finally so
if setup throws the finally won't run and the env var leaks; move the assignment
of process.env.SHOP_CANONICAL_EVENTS_DUAL_WRITE = 'true' into the try block
(after capturing prevDualWrite) so the finally block that restores prevDualWrite
always executes, and apply the same change to the other occurrence around lines
227-231 (the same prevDualWrite/try/finally pattern).
In `@frontend/lib/tests/shop/stripe-webhook-psp-fields.test.ts`:
- Around line 130-132: The test sets
process.env.SHOP_CANONICAL_EVENTS_DUAL_WRITE directly to 'true' (prevDualWrite =
process.env.SHOP_CANONICAL_EVENTS_DUAL_WRITE;
process.env.SHOP_CANONICAL_EVENTS_DUAL_WRITE = 'true') before running
fixture/setup code, which can throw and skip the existing finally restoration;
fix by moving the env mutation inside a try block and ensuring restoration
occurs in an outer/finally that always runs (or immediately wrap the setup+test
in try { /* setup and test */ } finally {
process.env.SHOP_CANONICAL_EVENTS_DUAL_WRITE = prevDualWrite; }) and apply the
same pattern for the other occurrence referenced (around the 367–371 block) so
the original value is always restored even if setup fails.
---
Nitpick comments:
In `@frontend/app/api/shop/webhooks/stripe/route.ts`:
- Around line 421-625: The two large SQL branches differ only by the
inserted_payment_event CTE and the returning columns (total_amount_minor,
currency included for canonical), so refactor by extracting a single SQL builder
that constructs the shared WITH pipeline (updated_order, eligible_for_enqueue,
inserted_shipment, queued_order_ids, mark_queued) and conditionally injects the
inserted_payment_event CTE and altered RETURNING/select list when
args.canonicalDualWriteEnabled is true; implement this as a helper (e.g.,
buildPaidSql or buildPaidQuery) used in route handler, referencing the existing
symbols updated_order, inserted_payment_event, eligible_for_enqueue,
inserted_shipment, queued_order_ids, mark_queued and the helper
inventoryCommittedForShippingSql, and pass in
args.canonicalEventPayload/args.canonicalEventDedupeKey when
canonicalDualWriteEnabled to include the payment_event insert and select
total_amount_minor/currency, otherwise omit them.
In `@frontend/lib/services/orders/monobank-webhook.ts`:
- Around line 490-730: The two branches duplicate nearly identical CTEs
(updated_order, updated_attempt, eligible_for_enqueue, inserted_shipment,
queued_order_ids, shipping_status_update) with only the inserted_payment_event
differing; refactor by extracting the shared CTE body into a single SQL template
or helper (e.g., sharedOrderUpdateSql or buildSharedOrderCtes) and inject the
inserted_payment_event CTE only when args.canonicalDualWriteEnabled (e.g.,
buildPaymentEventCte or paymentEventSql) before the eligible_for_enqueue CTE,
then execute one db.execute(sql`...`) that conditionally includes the payment
event CTE; ensure references like ${args.eventId}, ${args.attemptId},
${args.invoiceId}, ${args.canonicalEventDedupeKey}, and ${args.mergedMetaSql}
are passed into the conditional snippet so behavior and dedupe logic remain
identical.
In `@frontend/lib/services/shop/events/dedupe-key.ts`:
- Around line 50-52: The buildDedupeKey function currently allows
empty/whitespace namespaces which yields keys like `:v1:<hash>`; fix by
validating the normalizedNamespace (namespace.trim().toLowerCase()) inside
buildDedupeKey and reject empty input—e.g., if normalizedNamespace is empty
throw a clear Error (or return a rejected result) stating the namespace must be
non-empty—so that normalizedNamespace (and subsequent canonical/stableSerialize
usage) always produces a proper namespaced key.
In `@frontend/lib/services/shop/shipping/admin-actions.ts`:
- Around line 467-478: The repeated construction of the `canonical` audit object
should be extracted into a single helper (e.g., buildCanonicalAuditArgs) that
accepts the shared inputs (canonicalDualWriteEnabled, args.orderId,
args.requestId, args.action, state.shipping_status, state.shipment_status, now)
and returns the canonical object with enabled, dedupeKey (using
buildShippingAdminAuditDedupe), and occurredAt; replace the inline `canonical: {
... }` blocks in admin-actions.ts (the branches that currently build canonical)
with calls to this helper to eliminate duplication and keep a single source of
truth.
In `@frontend/lib/tests/shop/monobank-webhook-apply.test.ts`:
- Around line 94-98: The catch is too broad around the cleanup call
db.delete(paymentEvents).where(...) and should only swallow "table missing"
errors; change the catch to capture the error (e) and rethrow unless it matches
missing-table conditions (e.g. SQLite messages like /no such table/i, Postgres
code '42P01' or messages like /does not exist/i, or other DB-specific error
codes). Keep references to the same call (db.delete(paymentEvents)) and the
paymentEvents identifier so only migration-missing cases are ignored and any
other DB failures still fail the test.
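The narrowed catch described in this prompt can be expressed as a reusable predicate; `isMissingTableError` is an illustrative name, and the error shapes are assumptions about Postgres (code `42P01`) and SQLite (`no such table`) message formats.

```typescript
// Heuristic for "relation does not exist" across Postgres (code 42P01) and
// SQLite ("no such table"); anything else should fail the test run.
function isMissingTableError(error: unknown): boolean {
  const e = error as { code?: string; message?: string };
  if (e?.code === '42P01') return true;
  const msg = e?.message ?? '';
  return /no such table/i.test(msg) || /does not exist/i.test(msg);
}

// Usage sketch inside a test cleanup block (the real call in the test file is
// db.delete(paymentEvents).where(...); here only the narrowed catch is shown):
async function cleanup(run: () => Promise<void>): Promise<void> {
  try {
    await run();
  } catch (e) {
    if (!isMissingTableError(e)) throw e; // rethrow real DB failures
  }
}

console.log(isMissingTableError({ code: '42P01' }));                            // true
console.log(isMissingTableError({ message: 'no such table: payment_events' })); // true
console.log(isMissingTableError({ message: 'connection refused' }));            // false
```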
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (15)
- frontend/app/api/shop/webhooks/stripe/route.ts
- frontend/db/schema/shop.ts
- frontend/drizzle/0021_solid_sage.sql
- frontend/drizzle/meta/0021_snapshot.json
- frontend/drizzle/meta/_journal.json
- frontend/lib/env/shop-canonical-events.ts
- frontend/lib/services/orders/monobank-webhook.ts
- frontend/lib/services/shop/events/dedupe-key.ts
- frontend/lib/services/shop/events/write-admin-audit.ts
- frontend/lib/services/shop/events/write-payment-event.ts
- frontend/lib/services/shop/events/write-shipping-event.ts
- frontend/lib/services/shop/shipping/admin-actions.ts
- frontend/lib/tests/shop/admin-shipping-canonical-audit.test.ts
- frontend/lib/tests/shop/monobank-webhook-apply.test.ts
- frontend/lib/tests/shop/stripe-webhook-psp-fields.test.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- frontend/drizzle/meta/_journal.json
```typescript
if (
  !applyResult.shipmentQueued &&
  order.shippingRequired === true &&
  order.shippingProvider === 'nova_poshta' &&
  Boolean(order.shippingMethodCode) &&
  order.inventoryStatus === 'reserved'
) {
  await db.execute(sql`
    with ensured_shipment as (
      insert into shipping_shipments (
        order_id,
        provider,
        status,
        attempt_count,
        created_at,
        updated_at
      ) values (
        ${order.id}::uuid,
        'nova_poshta',
        'queued',
        0,
        ${now},
        ${now}
      )
      on conflict (order_id) do update
        set status = 'queued',
            updated_at = ${now}
        where shipping_shipments.provider = 'nova_poshta'
          and shipping_shipments.status is distinct from 'queued'
      returning order_id
    ),
    existing_shipment as (
      select order_id
      from shipping_shipments
      where order_id = ${order.id}::uuid
        and status = 'queued'
    ),
    shipment_order_ids as (
      select order_id from ensured_shipment
      union
      select order_id from existing_shipment
    )
    update orders
    set shipping_status = 'queued'::shipping_status,
        updated_at = ${now}
    where id in (select order_id from shipment_order_ids)
      and shipping_status is distinct from 'queued'::shipping_status
  `);
}
```
There was a problem hiding this comment.
Emit shipping metric when fallback enqueue actually queues.
This branch can transition shipment/order to queued, but recordShippingMetric is only called from the main atomic path (Line 1267), so queued events from fallback are currently untracked.
💡 Proposed fix
- await db.execute(sql`
+ const fallbackRes = await db.execute(sql`
with ensured_shipment as (
insert into shipping_shipments (
order_id,
@@
shipment_order_ids as (
select order_id from ensured_shipment
union
select order_id from existing_shipment
- )
- update orders
- set shipping_status = 'queued'::shipping_status,
- updated_at = ${now}
- where id in (select order_id from shipment_order_ids)
- and shipping_status is distinct from 'queued'::shipping_status
+ ),
+ updated_orders as (
+ update orders
+ set shipping_status = 'queued'::shipping_status,
+ updated_at = ${now}
+ where id in (select order_id from shipment_order_ids)
+ and shipping_status is distinct from 'queued'::shipping_status
+ returning id
+ )
+ select count(*)::int as queued_count from updated_orders
`);
+ const fallbackRow = readDbRows<{ queued_count?: number }>(fallbackRes)[0];
+ if (Number(fallbackRow?.queued_count ?? 0) > 0) {
+ recordShippingMetric({
+ name: 'queued',
+ source: 'stripe_webhook',
+ orderId: order.id,
+ requestId,
+ });
+ }
}🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/app/api/shop/webhooks/stripe/route.ts` around lines 1276 - 1324, The
fallback path that inserts/updates shipping_shipments can also transition a
shipment/order to 'queued' but currently doesn't call recordShippingMetric;
after the db.execute(...) block inside the if that checks order.shippingProvider
=== 'nova_poshta' add logic to detect whether the operation actually resulted in
a queued state (e.g., inspect the execute result or re-query shipping_shipments
for order.id and status='queued') and if so call recordShippingMetric(order.id,
'nova_poshta') (or the existing recordShippingMetric signature used in the main
atomic path) to emit the shipping metric when fallback enqueues.
```typescript
if (valueType === 'object') {
  const source = value as Record<string, unknown>;
  const keys = Object.keys(source).sort((a, b) => a.localeCompare(b));
```
🧩 Analysis chain
🏁 Script executed:
fd "dedupe-key.ts" --type f
Repository: DevLoversTeam/devlovers.net
Length of output: 118
🏁 Script executed:
cat -n frontend/lib/services/shop/events/dedupe-key.ts
Repository: DevLoversTeam/devlovers.net
Length of output: 2773
🌐 Web query:
JavaScript String.localeCompare Node.js environment consistency identical ordering across locales
💡 Result:
String.prototype.localeCompare() is not guaranteed to produce identical ordering across locales or across Node.js environments, unless you tightly control both the locale inputs and the Intl/ICU data being used.
What’s consistent vs. what isn’t
- Locale matters: collation rules differ by language/region (e.g., "ä" sorts differently in German vs Swedish), so ordering can legitimately change when you change locales. `localeCompare()` delegates to `Intl.Collator` when available. [3], [4]
- Defaults are environment-dependent: if you don't pass locales/options, results depend on the runtime's default locale and collation data. [3]
- Even with the same locale, results can vary across builds/OS because Node relies on ICU data, and Node can be built/linked with different ICU configurations (`full-icu`, `small-icu`, `system-icu`, etc.). `system-icu` in particular depends on whatever ICU data is installed on that machine. [1]
- The spec explicitly allows implementation- and locale-dependent behavior for internationalization features (including collation), and the underlying locale data is expected to vary and change over time. [2]
How to get the most consistent ordering in Node
- Always pass an explicit locale and options (or create a single `Intl.Collator` and reuse it): `const collator = new Intl.Collator('en', { sensitivity: 'variant', numeric: false }); arr.sort(collator.compare);` This avoids relying on the host default locale. [3], [4]
- Ensure the same ICU data everywhere: prefer official Node binaries (typically `full-icu`), or otherwise make sure every environment uses the same ICU mode/data. [1]
If you need strict determinism (byte-for-byte identical everywhere)
Don’t use locale collation—use a locale-free comparison (e.g., code-point / UTF-16 unit order, or a custom normalized+codepoint scheme). Locale-aware sorting is inherently allowed to vary across locales and implementations. [2], [3]
Sources: Node.js Intl/ICU build modes and their effects [1]; ECMA-402 “implementation dependent” behavior and data variability [2]; localeCompare() delegation/locale dependence [3]; Intl.Collator locale-dependent ordering examples [4].
Use locale-agnostic key sorting for deterministic canonicalization.
localeCompare() without an explicit locale is not guaranteed to produce identical ordering across environments. Ordering depends on the runtime's default locale and ICU configuration (full-icu, small-icu, system-icu), which can vary. Since this dedupe key is fed into SHA256 for deduplication across environments, any variance in key ordering will produce different hashes, breaking deduplication.
💡 Proposed fix
- const keys = Object.keys(source).sort((a, b) => a.localeCompare(b));
+ const keys = Object.keys(source).sort();
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- const keys = Object.keys(source).sort((a, b) => a.localeCompare(b));
+ const keys = Object.keys(source).sort();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/lib/services/shop/events/dedupe-key.ts` at line 33, The current sort
uses localeCompare which can vary across runtimes; replace
Object.keys(source).sort((a, b) => a.localeCompare(b)) with a locale-agnostic
comparator to ensure deterministic ordering (e.g. Object.keys(source).sort((a,b)
=> a < b ? -1 : a > b ? 1 : 0)). Update the sorting in dedupe-key.ts where the
keys constant is built from Object.keys(source) so the canonicalization fed into
SHA256 is stable across environments.
const dedupeKey =
  args.dedupeKey ??
  buildPaymentEventDedupeKey(
    args.dedupeSeed ?? {
Validate explicit dedupeKey before using it.
If args.dedupeKey is ''/whitespace, all writes collide on the same unique key and later events are silently dropped by onConflictDoNothing().
💡 Proposed fix
- const dedupeKey =
- args.dedupeKey ??
+ const explicitDedupeKey = args.dedupeKey?.trim();
+ const dedupeKey =
+ (explicitDedupeKey && explicitDedupeKey.length > 0
+ ? explicitDedupeKey
+ : undefined) ??
buildPaymentEventDedupeKey(
args.dedupeSeed ?? {
orderId: args.orderId,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/lib/services/shop/events/write-payment-event.ts` around lines 29 -
32, The code should validate an explicitly provided args.dedupeKey and treat
empty/whitespace-only strings as not provided so we don't collide all writes;
change the logic around dedupeKey to trim args.dedupeKey and if the trimmed
value is falsy fall back to buildPaymentEventDedupeKey(args.dedupeSeed ??
{...}), ensuring the computed dedupeKey passed into the insert (which uses
onConflictDoNothing()) is never an empty string; reference the dedupeKey
variable, args.dedupeKey, args.dedupeSeed, and buildPaymentEventDedupeKey when
making this fix.
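The guard the review proposes can be factored into a small normalizer. A sketch with a hypothetical helper name, not the repo's actual `write-payment-event.ts`:

```typescript
// Treat empty/whitespace-only explicit keys as "not provided" so callers fall
// back to a computed key instead of colliding on '' in the unique column
// (where onConflictDoNothing() would silently drop later events).
function normalizeDedupeKey(
  explicit: string | undefined,
  compute: () => string,
): string {
  const trimmed = explicit?.trim();
  return trimmed && trimmed.length > 0 ? trimmed : compute();
}

const fallback = () => 'computed:order-123';
console.log(normalizeDedupeKey('evt_1', fallback));     // "evt_1"
console.log(normalizeDedupeKey('   ', fallback));       // "computed:order-123"
console.log(normalizeDedupeKey(undefined, fallback));   // "computed:order-123"
```

Passing the fallback as a thunk keeps the (hash-based) key computation lazy: it only runs when the explicit key is actually unusable.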
const prevDualWrite = process.env.SHOP_CANONICAL_EVENTS_DUAL_WRITE;
process.env.SHOP_CANONICAL_EVENTS_DUAL_WRITE = 'true';
Wrap setup in the same try/finally that restores env.
On Line 19 the env var is mutated before the try block (Line 42). If order insertion fails first, the env var is left dirty for subsequent tests.
Also applies to: 42-74
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/lib/tests/shop/admin-shipping-canonical-audit.test.ts` around lines
18 - 20, Move the modification of process.env.SHOP_CANONICAL_EVENTS_DUAL_WRITE
into the same try block that restores it in finally so the environment is always
reset even if the test fails: capture prevDualWrite, then set
process.env.SHOP_CANONICAL_EVENTS_DUAL_WRITE = 'true' inside the try before
exercising the code that may throw, and in the finally restore
process.env.SHOP_CANONICAL_EVENTS_DUAL_WRITE = prevDualWrite (handling undefined
appropriately); apply the same change to the second setup/teardown pair
referenced around the 42-74 range so both test blocks always restore the
original env value.
const prevDualWrite = process.env.SHOP_CANONICAL_EVENTS_DUAL_WRITE;
process.env.SHOP_CANONICAL_EVENTS_DUAL_WRITE = 'true';
Prevent SHOP_CANONICAL_EVENTS_DUAL_WRITE leakage on setup failures.
On Line 155 the env var is set before the try starts (Line 172). If setup throws before Line 172, restoration in finally never runs and can affect later tests.
Suggested fix
- const prevDualWrite = process.env.SHOP_CANONICAL_EVENTS_DUAL_WRITE;
- process.env.SHOP_CANONICAL_EVENTS_DUAL_WRITE = 'true';
-
- const invoiceId = `inv_${crypto.randomUUID()}`;
- const { orderId } = await insertOrderAndAttempt({
- invoiceId,
- amountMinor: 1000,
- withShippingNp: true,
- inventoryStatus: 'reserved',
- });
-
- const rawBody = JSON.stringify({
- invoiceId,
- status: 'success',
- amount: 1000,
- ccy: 980,
- });
-
- try {
+ const prevDualWrite = process.env.SHOP_CANONICAL_EVENTS_DUAL_WRITE;
+ process.env.SHOP_CANONICAL_EVENTS_DUAL_WRITE = 'true';
+ let orderId: string | null = null;
+ let invoiceId: string | null = null;
+ try {
+ invoiceId = `inv_${crypto.randomUUID()}`;
+ ({ orderId } = await insertOrderAndAttempt({
+ invoiceId,
+ amountMinor: 1000,
+ withShippingNp: true,
+ inventoryStatus: 'reserved',
+ }));
+ const rawBody = JSON.stringify({
+ invoiceId,
+ status: 'success',
+ amount: 1000,
+ ccy: 980,
+ });
// ...test body...
} finally {
if (prevDualWrite === undefined) {
delete process.env.SHOP_CANONICAL_EVENTS_DUAL_WRITE;
} else {
process.env.SHOP_CANONICAL_EVENTS_DUAL_WRITE = prevDualWrite;
}
- await cleanup(orderId, invoiceId);
+ if (orderId && invoiceId) await cleanup(orderId, invoiceId);
}
Also applies to: 227-231
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/lib/tests/shop/monobank-webhook-apply.test.ts` around lines 154 -
156, The test sets process.env.SHOP_CANONICAL_EVENTS_DUAL_WRITE before entering
the try/finally so if setup throws the finally won't run and the env var leaks;
move the assignment of process.env.SHOP_CANONICAL_EVENTS_DUAL_WRITE = 'true'
into the try block (after capturing prevDualWrite) so the finally block that
restores prevDualWrite always executes, and apply the same change to the other
occurrence around lines 227-231 (the same prevDualWrite/try/finally pattern).
const prevDualWrite = process.env.SHOP_CANONICAL_EVENTS_DUAL_WRITE;
process.env.SHOP_CANONICAL_EVENTS_DUAL_WRITE = 'true';
Guard env restoration even when fixture setup fails.
SHOP_CANONICAL_EVENTS_DUAL_WRITE is set on Line 131 before setup inserts; if setup throws before Line 249, restoration in finally is skipped.
Also applies to: 367-371
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/lib/tests/shop/stripe-webhook-psp-fields.test.ts` around lines 130 -
132, The test sets process.env.SHOP_CANONICAL_EVENTS_DUAL_WRITE directly to
'true' (prevDualWrite = process.env.SHOP_CANONICAL_EVENTS_DUAL_WRITE;
process.env.SHOP_CANONICAL_EVENTS_DUAL_WRITE = 'true') before running
fixture/setup code, which can throw and skip the existing finally restoration;
fix by moving the env mutation inside a try block and ensuring restoration
occurs in an outer/finally that always runs (or immediately wrap the setup+test
in try { /* setup and test */ } finally {
process.env.SHOP_CANONICAL_EVENTS_DUAL_WRITE = prevDualWrite; }) and apply the
same pattern for the other occurrence referenced (around the 367–371 block) so
the original value is always restored even if setup fails.
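The same save/set/restore pattern is flagged in all three test files, so it can be captured once in a helper. This is a sketch (the `withEnv` name is made up, not part of the repo):

```typescript
// Hypothetical test utility: set an env var for the duration of fn and always
// restore the previous value, even when fixture setup inside fn throws.
async function withEnv<T>(
  name: string,
  value: string,
  fn: () => Promise<T>,
): Promise<T> {
  const prev = process.env[name];
  process.env[name] = value; // mutate only inside the guarded scope
  try {
    return await fn();
  } finally {
    if (prev === undefined) {
      delete process.env[name]; // var did not exist before: remove it again
    } else {
      process.env[name] = prev;
    }
  }
}

void (async () => {
  await withEnv('SHOP_CANONICAL_EVENTS_DUAL_WRITE', 'true', async () => {
    // ...seed fixtures, invoke the webhook handler, assert on payment_events...
  });
  // the variable is restored here even if the body above had thrown
})();
```

Because the mutation happens after entering the `try`-equivalent scope (the helper body), a throw during order insertion can no longer leak `SHOP_CANONICAL_EVENTS_DUAL_WRITE` into later tests.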
43f1e89 to 513bfbe (Compare)
* (SP: 3) [Backend] add Nova Poshta shipping foundation + checkout persistence + async label workflow (#364)
* (SP: 2) [Frontend] Reduce Vercel variable costs via caching and analytics cleanup (#367)
* perf(vercel): cut runtime costs via notification, blog cache, and analytics changes
* perf(blog): remove server searchParams usage to preserve ISR
* fix(build): align Netlify Node version and remove SpeedInsights import
* chore(release): bump version to 1.0.4
* (SP: 2) [Frontend] Remove [locale] layout force-dynamic and move auth to client-side (#370)
* refactor(frontend): remove locale layout dynamic auth and move header auth client-side
* fix(frontend): prevent stale auth responses in useAuth and remove redundant dashboard dynamic layout
* (SP: 2) [Frontend] Reduce auth overhead and sync auth state across tabs (#372)
* refactor(frontend): remove locale layout dynamic auth and move header auth client-side
* fix(frontend): prevent stale auth responses in useAuth and remove redundant dashboard dynamic layout
* feat(frontend): sync auth state across tabs via BroadcastChannel
* (SP: 2) [Frontend] Quizzes page ISR + client-side progress + GitHub stars cache (#371)
* perf(quiz-flow): move quiz progress to client-side fetch, enable ISR for quizzes page
  - Move user progress fetch from SSR to client-side API (/api/quiz/progress)
  - Remove force-dynamic and getCurrentUser() from quizzes page
  - Add revalidate=300 for ISR caching
  - Use window.history.replaceState for tab URL sync (avoid Next.js navigation)
  - Add forceMount to TabsContent to prevent layout shift on tab switch
  - Fix nested <main> — use <section> inside DynamicGridBackground
  - Cache GitHub stars count in sessionStorage to avoid refetch + re-animation
* perf: replace useRef with useState lazy initializer in GitHubStarButton
  - Fixes React 19 react-hooks/refs ESLint error — useRef.current cannot be read during render. Uses useState(getStoredStars) to capture the sessionStorage value once on mount instead.
* fix: stop star icon trembling on hover in GitHubStarButton
* (SP: 1) [Frontend] Fix quiz timer flash and card layout shift on quizzes page (#373)
* perf(quiz-flow): move quiz progress to client-side fetch, enable ISR for quizzes page
  - Move user progress fetch from SSR to client-side API (/api/quiz/progress)
  - Remove force-dynamic and getCurrentUser() from quizzes page
  - Add revalidate=300 for ISR caching
  - Use window.history.replaceState for tab URL sync (avoid Next.js navigation)
  - Add forceMount to TabsContent to prevent layout shift on tab switch
  - Fix nested <main> — use <section> inside DynamicGridBackground
  - Cache GitHub stars count in sessionStorage to avoid refetch + re-animation
* perf: replace useRef with useState lazy initializer in GitHubStarButton
  - Fixes React 19 react-hooks/refs ESLint error — useRef.current cannot be read during render. Uses useState(getStoredStars) to capture the sessionStorage value once on mount instead.
* fix: stop star icon trembling on hover in GitHubStarButton
* fix: eliminate quiz timer flash on language switch
  - Remove Suspense boundary (loading.tsx) that unmounted QuizContainer during locale navigation. Synchronous session restore via useReducer lazy initializer and correct timer initialization via useState lazy initializer prevent any visible state reset on language switch.
* fix: replace quiz card layout shift with skeleton grid during progress load
* chore(release): v1.0.5
* (SP: 3)[Shop][DB] Reduce Neon compute: throttle janitor + relax checkout polling + add sweep indexes (#375)
* (SP: 3) [Backend] add internal janitor (jobs 1-4), claim/lease + runbook (G0-G6)
* (SP: 3) [Backend] add provider selector, fix payments gating, i18n checkout errors
* Add shop category images to public
* (SP: 3) [Shop][Monobank] I1 structured logging: codes + logging safety checks
* (SP: 3) [Shop][Monobank] Fail-closed non-browser origin posture for webhook + janitor (ORIGIN_BLOCKED)
* (SP: 3) [Shop][Monobank] J gate: add orders status ownership test and pass all pre-prod invariants
* (SP: 3) [Shop][Monobank] review fixes (tests, logging, success UI)
* (SP: 1) [Shop][Monobank] Tighten webhook log-code typing; harden DB tests; minor security/log/UI cleanups
* (SP: 1) [Shop][Monobank] harden Monobank webhook (origin/PII-safe logs) and remove duplicate sha256 hashing
* (SP: 1) [Cart] adding route for user orders to cart page
* (SP: 1) [Cart] fix after review cart page and adding index for orders
* (SP: 1) [Cart] Fix cart orders summary auth rendering and return totalCount for orders badge
* (SP: 1) [Cart] remove console.warn from CartPageClient to satisfy monobank logging safety invariant, namespace localStorage cart by user and reset on auth change
* (SP: 1) [Cart] rehydrate per cartOwnerId (remove didHydrate coupling)
* (SP: 2)[Backend] shop/shipping schema migrations foundation
* (SP: 2)[Backend] shop/shipping public routes + np cache + sync
* (SP: 2)[Backend] shop/shipping: shipping persistence + currency policy
* (SP: 2)[Backend] shop/shipping: webhook apply + psp fields + enqueue shipping
* (SP: 2)[Backend] shop/shipping: shipments worker + internal run + np mock
* (SP: 2)[Backend] shop/shipping: admin+ui shipping actions
* (SP: 2)[Backend] shop/shipping: retention + log sanitizer + metrics
* (SP: 1)[Backend] stabilize Monobank janitor (job1/job3) and fix failing apply-outcomes tests
* (SP: 1) [db]: add shop shipping core migration
* (SP: 1) [FIX] resolve merge artifacts in order details page
* (SP: 1) [FIX] apply post-review fixes for shipping and admin flows
* (SP: 1) [FIX] align cart shipping imports (localeToCountry + availability reason code)
* (SP: 1) [FIX] hard-block checkout when shipping disabled + i18n reason mapping
* (SP: 1) [FIX] harden webhook enqueue + shipping worker + NP catalog + cart fail-closed
* (SP: 1) [FIX] Initialize shippingMethodsLoading to true to avoid premature checkout.
* (SP: 1) [FIX] migration 17
* (SP: 1) [DB] migration to testing DB and adjusting tests
* (SP: 1)[DB] slow down restock janitor + enforce prod interval floor
* (SP: 1) [DB] add order status lite view (opt-in) + instrumentation
* (SP: 1) [DB] replace checkout success router.refresh polling with backoff API polling
* (SP: 1) [DB] throttle sessions activity heartbeat + use count(*) (PK invariant)
* (SP: 1)[DB] enforce production min intervals for internal shipping jobs
* (SP: 1) [DB] add minimal partial indexes for orders sweeps + rollout notes
* (SP: 1) [DB] refactor sweep claim step to FOR UPDATE SKIP LOCKED batching
* (SP: 1)[DB]: slow janitor schedule to every 30 minutes
* (SP: 1)[DB] increase polling delays for MonobankRedirectStatus
* (SP: 1)[FIX] harden webhooks + fix SSR hydration + janitor/np gates + sweeps refactor
* (SP: 1)[FIX] harden shipping enqueue gating + apply NP interval floor

---------
Co-authored-by: Liudmyla Sovetovs <milkaegik@gmail.com>
Co-authored-by: Lesia Soloviova <106915140+LesiaUKR@users.noreply.github.com>
Description
This PR reduces unnecessary Neon DB compute wakeups by throttling background jobs and client-side status polling, and by making the janitor/order sweeps cheaper when they do run. The goal is to minimize frequent/expensive queries while keeping correctness (idempotency + safety gates) intact.
Related Issue
Issue: #<issue_number>
Changes
- Slowed the `shop-janitor-restock-stale` GitHub Actions schedule (now every 30 minutes) to reduce guaranteed DB wakeups.
- Added minimal partial indexes for `orders` sweep selectors to avoid expensive scans under janitor sweeps.
- Refactored the sweep claim step to `FOR UPDATE SKIP LOCKED` batching (reduces contention and repeat work).
- Replaced checkout success `router.refresh()` polling with backoff API polling.
- Increased polling delays in `MonobankRedirectStatus` (base delay + busy retry delay), keeping stop conditions and budget limits.
- Enforced production minimum intervals for internal shipping jobs via the `internal_job_state` gate + env floor.

Database Changes (if applicable)
How Has This Been Tested?
Notes
DATABASE_URL.Screenshots (if applicable)
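The relaxed `MonobankRedirectStatus` polling described in the Changes section (base delay plus busy-retry, bounded by stop conditions and a budget) can be sketched as a pure delay schedule. The 3s/15s bounds and status names below are illustrative, not the component's actual values:

```typescript
// Exponential backoff from a base delay, clamped to a max. Pure and
// deterministic so the schedule itself is trivially testable.
function nextDelayMs(attempt: number, baseMs = 3000, maxMs = 15000): number {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

// Bounded poll loop: stop on a terminal status or when the budget is spent.
async function pollUntilTerminal(
  fetchStatus: () => Promise<string>,
  budgetMs: number,
  sleep: (ms: number) => Promise<void>,
): Promise<string> {
  const terminal = new Set(['paid', 'failed', 'expired']);
  let elapsed = 0;
  for (let attempt = 0; ; attempt++) {
    const status = await fetchStatus();
    if (terminal.has(status)) return status;
    const delay = nextDelayMs(attempt);
    if (elapsed + delay > budgetMs) return status; // budget exhausted, give up
    await sleep(delay);
    elapsed += delay;
  }
}

// In a real component the sleep would be a jittered setTimeout and fetchStatus
// would hit /api/shop/orders/[id]/status with an AbortSignal:
// pollUntilTerminal(fetchPaymentStatus, 60_000, (ms) => new Promise((r) => setTimeout(r, ms)));
```

Growing delays with a hard budget is what keeps the DB quiet: each extra client wakes Neon at most a bounded, decaying number of times instead of on a fixed fast interval.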
N/A
Checklist
Before submitting
Reviewers
Summary by CodeRabbit
Performance
New Features
Bug Fixes
Chores