[Fix]: Assortment of Bugs with Timefold Table and Payments #1348
Conversation
Previously, we only expired the when-purchase-expires items and left it to the item-grant-repeat events to expire the rest. But when a subscription ends, all of them should be expired immediately.
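A minimal sketch of the behavioral change (hypothetical types and names; the real logic lives in the SQL timefold reducers, not in TypeScript):

```typescript
// Hypothetical sketch, not the repo's code: which grants get expired when a
// subscription ends. "legacy" models the old behavior described above.
type ExpiryPolicy = "when-purchase-expires" | "when-repeated";
type Grant = { id: string; expiresWhen: ExpiryPolicy };

function grantsToExpireAtSubscriptionEnd(grants: Grant[], legacy: boolean): string[] {
  if (legacy) {
    // Old: only when-purchase-expires grants; when-repeated grants were left
    // for the next item-grant-repeat event, which never fires after the end.
    return grants
      .filter((g) => g.expiresWhen === "when-purchase-expires")
      .map((g) => g.id);
  }
  // New: a subscription end expires every outstanding grant immediately.
  return grants.map((g) => g.id);
}

const grants: Grant[] = [
  { id: "g1", expiresWhen: "when-purchase-expires" },
  { id: "g2", expiresWhen: "when-repeated" },
];

console.log(grantsToExpireAtSubscriptionEnd(grants, true)); // expires g1 only
console.log(grantsToExpireAtSubscriptionEnd(grants, false)); // expires g1 and g2
```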
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
📝 Walkthrough
Adds a per-TimeFold downstream cascade registry and rewrites `bulldozer_timefold_process_queue()` to execute stored downstream cascade templates for queue-drained emissions.
Sequence Diagram(s)
sequenceDiagram
participant Q as Queue<br/>(BulldozerTimeFoldQueue)
participant P as Processor<br/>(bulldozer_timefold_process_queue)
participant SE as StorageEngine<br/>(BulldozerStorageEngine)
participant Reg as Registry<br/>(BulldozerTimeFoldDownstreamCascade)
participant Seq as TempSeq<br/>(__bulldozer_seq)
Note over P: For each distinct TimeFold with scheduledAt <= now()
P->>Reg: SELECT registry row for TimeFold
alt registry row missing
Note over P: skip draining this TimeFold (defer)
else registry row present
P->>Q: SELECT due rows for this TimeFold
loop drain TimeFold queue
Q-->>P: next queued reducer row
P->>SE: apply reducer → update state
SE-->>P: stateAfter + emitted rows
P->>Seq: INSERT emitted rows tagged by cascadeInputName
P->>Q: mark queue row consumed / upsert next scheduledAt
end
P->>Seq: SELECT accumulated rows for cascadeInputName
Seq-->>P: input rows
alt cascadeTemplate IS NOT NULL
P->>P: EXECUTE DO $tag$ ...cascadeTemplate... $tag$
P->>SE: downstream writes (via cascadeTemplate)
end
P->>Seq: DELETE accumulated rows for cascadeInputName
end
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ❌ 3 failed checks (1 warning, 2 inconclusive)
Greptile Summary
This PR addresses three production bugs: (1) …
Confidence Score: 5/5 — Safe to merge. All three bugs are correctly fixed, the logic is sound, and the changes are backed by comprehensive new tests. No P0 or P1 issues found. The deploy-window guard (CONTINUE when the cascade registry row is missing) is a thoughtful correctness safeguard. The previous_emitted_row_count offset logic is correct across multiple queue-row iterations. The ROUND(EXTRACT(EPOCH …) * 1000)::bigint fix addresses the described precision issue cleanly. All remaining notes (the pre-existing redundant ::bigint cast flagged in a previous review) are P2 style suggestions that do not block merge. No files require special attention.
Flowchart
%%{init: {'theme': 'neutral'}}%%
flowchart TD
A[pg_cron calls\nbulldozer_timefold_process_queue] --> B[Acquire advisory lock\nUpdate lastProcessedAt]
B --> C[CREATE TEMP TABLE __bulldozer_seq\nON COMMIT DROP]
C --> D{Any timefold with\ndue queue rows?}
D -- No --> Z[Done]
D -- Yes --> E[Outer loop: next distinct\ntableStoragePath]
E --> F{Registry row in\nBulldozerTimeFoldDownstreamCascade?}
F -- Missing: deploy-window gap --> G[CONTINUE — defer\ntimefold for this tick]
G --> D
F -- Found --> H[DELETE FROM __bulldozer_seq\nclean slate]
H --> I{Next due queue row\nfor this timefold?\nFOR UPDATE SKIP LOCKED}
I -- None left --> J{cascadeTemplate\nIS NOT NULL?}
I -- Found --> K[Delete queue row\nRun reducer loop\naccumulate newly_emitted_rows]
K --> L[Upsert state & row records\ninto BulldozerStorageEngine]
L --> M[INSERT newly_emitted_rows\ninto __bulldozer_seq\nunder cascadeInputName]
M --> N{next_timestamp > cutoff?}
N -- Yes --> O[Re-enqueue into\nBulldozerTimeFoldQueue]
O --> I
N -- No --> I
J -- No cascade --> P[DELETE FROM __bulldozer_seq]
P --> D
J -- Has cascade --> Q[EXECUTE cascadeTemplate\nreads __bulldozer_seq,\nwrites filter/map/LFold\nmaterialized outputs]
Q --> R[DELETE FROM __bulldozer_seq]
R --> D
Reviews (2) | Last reviewed commit: "chore: schema model mismatch via externa…"
Actionable comments posted: 1
🧹 Nitpick comments (3)
apps/backend/src/lib/payments/schema/phase-1/subscription-timefold-algo.ts (1)
265-294: Bigint cast fix looks correct and well-documented. The root cause (NUMERIC scale-6 serialization leaking into downstream JSONB `effectiveAtMillis`) and the fix (casting to `bigint` at the root) are consistent with how `currentMillis` is consumed downstream: numeric comparisons (Lines 284, 290, 386) still coerce cleanly, and `to_jsonb(${currentMillis})` on Lines 337–338 will now serialize without the trailing `.000000`, matching the `igr:<sub>:<millis>` ID format built at Line 294.

One minor nit: with `currentMillis` now already a `bigint`, the `::bigint::text` double-cast in `igrTxnId` at Line 294 is redundant; `${currentMillis}::text` would suffice. Not a correctness issue, just cleanup if you're touching this area.

♻️ Optional simplification
- const igrTxnId = `('igr:' || (${S}->>'subscriptionId') || ':' || ${currentMillis}::bigint::text)`;
+ const igrTxnId = `('igr:' || (${S}->>'subscriptionId') || ':' || ${currentMillis}::text)`;

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/lib/payments/schema/phase-1/subscription-timefold-algo.ts` around lines 265 - 294: the bigint cast is already applied in currentMillis, so clean up the redundant double-cast in igrTxnId by replacing the explicit ::bigint::text with just ::text (i.e., use ${currentMillis}::text) inside the igrTxnId expression; update the igrTxnId template string accordingly so IDs remain byte-identical but avoid the unnecessary ::bigint re-cast.

apps/backend/src/lib/bulldozer/db/tables/time-fold-table.ts (1)
484-534: Cascade template captures `triggersMap` contents at `init()` call time. This works under the current declare-all-then-init-all convention (all downstream tables register their triggers during declaration, before their upstream's `init()` runs), but the contract is implicit. If anyone ever calls `timeFoldTable.init()` and then declares a new downstream consumer, the upstream's stored `cascadeTemplate` won't reflect it, and queue-drained emissions will silently skip that consumer (while the inline `setRow` path continues to work). The null-template comment only covers the "no triggers at all" case.

Consider a short doc-comment on `registerRowChangeTrigger` and/or `init()` pinning this ordering requirement, or a dev-time assertion if triggers are added after `init()` has been observed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/lib/bulldozer/db/tables/time-fold-table.ts` around lines 484 - 534: the stored cascadeTemplate is generated during timeFoldTable.init() from the current triggers map (collected via collectRowChangeTriggerStatements), so any registerRowChangeTrigger calls after init() won't be included; update the code to either (A) add a clear doc comment on registerRowChangeTrigger and init() stating the required declare-before-init ordering, and (B) add a runtime dev-mode assertion in registerRowChangeTrigger that throws/logs if called after init() (or conversely in init() set a flag like this.initialized and have registerRowChangeTrigger check it), referencing registerRowChangeTrigger, init, collectRowChangeTriggerStatements, toCascadeSqlBlock and the cascadeTemplate/BulldozerTimeFoldDownstreamCascade upsert so reviewers can find the relevant spots to change.

apps/backend/prisma/migrations/20260417000000_bulldozer_timefold_downstream_cascade/migration.sql (1)
230-263: Consider consolidating the two passes over `newly_emitted_rows`. The block iterates `jsonb_array_elements(newly_emitted_rows) WITH ORDINALITY` twice — once at lines 230-244 to write into `BulldozerStorageEngine`, then again at 250-263 to populate `__bulldozer_seq` with the same `previous_emitted_row_count + rowIndex` identifier scheme. The two passes can be fused into a single `FOR ... IN SELECT` that performs both inserts, avoiding re-parsing/re-expanding the JSON array and keeping the identifier construction in one place.

♻️ Example consolidation
- FOR new_row_record IN
-   SELECT
-     "rows"."rowData" AS "rowData",
-     "rows"."rowIndex" AS "rowIndex"
-   FROM jsonb_array_elements(newly_emitted_rows) WITH ORDINALITY AS "rows"("rowData", "rowIndex")
- LOOP
-   INSERT INTO "BulldozerStorageEngine" ("id", "keyPath", "value")
-   VALUES (
-     gen_random_uuid(),
-     rows_path || ARRAY[to_jsonb((queued_row."rowIdentifier" || ':' || (previous_emitted_row_count + new_row_record."rowIndex")::text)::text)]::jsonb[],
-     jsonb_build_object('rowData', new_row_record."rowData")
-   )
-   ON CONFLICT ("keyPath") DO UPDATE
-     SET "value" = EXCLUDED."value";
- END LOOP;
-
- -- Accumulate the newly-emitted rows into __bulldozer_seq under this
- -- timefold's cascade input name, shaped like timeFoldChangesTableName
- -- (oldRowData/newRowData diff). Queue-drained emissions are always
- -- new rows, so oldRowData/oldRowSortKey are null.
- IF current_cascade_input_name IS NOT NULL THEN
-   INSERT INTO "__bulldozer_seq" ("__output_name", "__output_row")
-   SELECT
-     current_cascade_input_name,
-     jsonb_build_object(
-       'groupKey', queued_row."groupKey",
-       'rowIdentifier', queued_row."rowIdentifier" || ':' || (previous_emitted_row_count + "emitted"."rowIndex")::text,
-       'oldRowSortKey', 'null'::jsonb,
-       'newRowSortKey', 'null'::jsonb,
-       'oldRowData', 'null'::jsonb,
-       'newRowData', "emitted"."rowData"
-     )
-   FROM jsonb_array_elements(newly_emitted_rows) WITH ORDINALITY AS "emitted"("rowData", "rowIndex");
- END IF;
+ FOR new_row_record IN
+   SELECT
+     "rows"."rowData" AS "rowData",
+     "rows"."rowIndex" AS "rowIndex",
+     queued_row."rowIdentifier" || ':' || (previous_emitted_row_count + "rows"."rowIndex")::text AS "emittedRowId"
+   FROM jsonb_array_elements(newly_emitted_rows) WITH ORDINALITY AS "rows"("rowData", "rowIndex")
+ LOOP
+   INSERT INTO "BulldozerStorageEngine" ("id", "keyPath", "value")
+   VALUES (
+     gen_random_uuid(),
+     rows_path || ARRAY[to_jsonb(new_row_record."emittedRowId")]::jsonb[],
+     jsonb_build_object('rowData', new_row_record."rowData")
+   )
+   ON CONFLICT ("keyPath") DO UPDATE
+     SET "value" = EXCLUDED."value";
+
+   IF current_cascade_input_name IS NOT NULL THEN
+     INSERT INTO "__bulldozer_seq" ("__output_name", "__output_row")
+     VALUES (
+       current_cascade_input_name,
+       jsonb_build_object(
+         'groupKey', queued_row."groupKey",
+         'rowIdentifier', new_row_record."emittedRowId",
+         'oldRowSortKey', 'null'::jsonb,
+         'newRowSortKey', 'null'::jsonb,
+         'oldRowData', 'null'::jsonb,
+         'newRowData', new_row_record."rowData"
+       )
+     );
+   END IF;
+ END LOOP;

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/prisma/migrations/20260417000000_bulldozer_timefold_downstream_cascade/migration.sql` around lines 230 - 263, The code currently iterates jsonb_array_elements(newly_emitted_rows) twice—once in the FOR new_row_record loop to INSERT into "BulldozerStorageEngine" and again with jsonb_array_elements(...) AS "emitted" to INSERT into "__bulldozer_seq"; consolidate by using the single FOR new_row_record IN SELECT ... WITH ORDINALITY over newly_emitted_rows (same as the existing FOR new_row_record) and inside that loop perform both the INSERT into "BulldozerStorageEngine" (using rows_path || ARRAY[to_jsonb(... previous_emitted_row_count + new_row_record."rowIndex" ...)]) and, guarded by IF current_cascade_input_name IS NOT NULL, the INSERT into "__bulldozer_seq" (constructing the same rowIdentifier and newRowData from new_row_record."rowData"), then remove the second jsonb_array_elements pass to avoid double-parsing.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/backend/src/lib/payments/schema/__tests__/integration-1-3-queue-drained.test.ts`:
- Around line 241-249: The test currently guards the assertion with "if
(midPeriodRow != null)" so a missing midPeriodRow silently passes; change this
to assert the row exists and then assert its quota value: remove the conditional
branch around the midPeriodRow check, add an unconditional expectation that
midPeriodRow is defined (e.g., expect(midPeriodRow).toBeDefined()) and then
assert expect(midPeriodRow.itemQuantities.quota).toBe(200). Update references to
the variables used in the snippet (rows, midPeriodRow, itemQuantities.quota) so
the test fails if no mid-period repeat row is emitted.
---
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 7853034d-28e5-4106-ac90-660b3558b45c
📒 Files selected for processing (14)
- apps/backend/prisma/migrations/20260417000000_bulldozer_timefold_downstream_cascade/migration.sql
- apps/backend/prisma/migrations/20260417000000_bulldozer_timefold_downstream_cascade/tests/downstream-cascade.ts
- apps/backend/src/lib/bulldozer/db/index.fuzz.test.ts
- apps/backend/src/lib/bulldozer/db/index.perf.test.ts
- apps/backend/src/lib/bulldozer/db/index.test.ts
- apps/backend/src/lib/bulldozer/db/index.ts
- apps/backend/src/lib/bulldozer/db/tables/time-fold-table.ts
- apps/backend/src/lib/bulldozer/db/test-sql-loaders.ts
- apps/backend/src/lib/bulldozer/db/timefold-queue-downstream.test.ts
- apps/backend/src/lib/payments/schema/__tests__/integration-1-3-queue-drained.test.ts
- apps/backend/src/lib/payments/schema/__tests__/integration-1-3.test.ts
- apps/backend/src/lib/payments/schema/__tests__/phase-1.test.ts
- apps/backend/src/lib/payments/schema/__tests__/test-helpers.ts
- apps/backend/src/lib/payments/schema/phase-1/subscription-timefold-algo.ts
Force-pushed 946789f to f72258e (Compare)
Pull request overview
Fixes several payment/timefold correctness issues by (1) ensuring queued (pg_cron) timefold ticks propagate through downstream cascades, (2) expiring when-repeated grants at subscription end, and (3) stabilizing IGR transaction ID generation to avoid numeric formatting mismatches.
Changes:
- Add `BulldozerTimeFoldDownstreamCascade` registry + update `bulldozer_timefold_process_queue()` to execute stored downstream cascade templates for queue-drained emissions.
- Update subscription + one-time-purchase timefold reducers to prevent decimal-tailed millis values from leaking into txn IDs; expire `when-repeated` grants on subscription-end.
- Add extensive integration/regression tests for cascade propagation, idempotency, and payments lifecycle across inline vs queue-drained paths.
Reviewed changes
Copilot reviewed 18 out of 18 changed files in this pull request and generated 3 comments.
Summary per file:
| File | Description |
|---|---|
| apps/backend/src/lib/payments/schema/phase-1/subscription-timefold-algo.ts | Cast millis to integer form for stable IGR txn IDs; expire when-repeated grants on subscription end. |
| apps/backend/src/lib/payments/schema/phase-1/otp-timefold-algo.ts | Same millis/txn-id stabilization for one-time-purchase repeat logic. |
| apps/backend/src/lib/payments/schema/__tests__/test-helpers.ts | Add test DB options to exercise queued path + optionally install real process_queue() function. |
| apps/backend/src/lib/payments/schema/__tests__/phase-1.test.ts | Add unit-style regression coverage for subscription-end expiring when-repeated grants. |
| apps/backend/src/lib/payments/schema/__tests__/integration-1-3.test.ts | Add end-to-end regression tests for IGR txnId formatting and grant expiration behavior. |
| apps/backend/src/lib/payments/schema/__tests__/integration-1-3-queue-drained.test.ts | New: payments integration tests specifically for pg_cron/queue-drained propagation. |
| apps/backend/src/lib/bulldozer/db/timefold-queue-downstream.test.ts | New: asserts process_queue() triggers downstream cascades (filter/map/etc.), including edge cases. |
| apps/backend/src/lib/bulldozer/db/test-sql-loaders.ts | New: test-only loader to extract process_queue() body from migration SQL. |
| apps/backend/src/lib/bulldozer/db/tables/time-fold-table.ts | Compile/store downstream cascade templates during init; clean up on delete. |
| apps/backend/src/lib/bulldozer/db/index.ts | Add toCascadeSqlBlock() + safe dollar-quote tag selection to prevent delimiter collisions. |
| apps/backend/src/lib/bulldozer/db/index.test.ts | Update test DB setup to include new cascade registry table. |
| apps/backend/src/lib/bulldozer/db/index.perf.test.ts | Update perf test DB setup to include cascade registry table. |
| apps/backend/src/lib/bulldozer/db/index.fuzz.test.ts | Update fuzz test DB setup to include cascade registry table. |
| apps/backend/scripts/verify-data-integrity/index.ts | Re-enable payments verifier invocation when payments config is present. |
| apps/backend/prisma/schema.prisma | Document external management of BulldozerTimeFoldDownstreamCascade (no Prisma model). |
| apps/backend/prisma/migrations/20260417000000_bulldozer_timefold_downstream_cascade/tests/downstream-cascade.ts | New: migration-level test validating cascade registry + queue-drain execution + idempotency. |
| apps/backend/prisma/migrations/20260417000000_bulldozer_timefold_downstream_cascade/migration.sql | New: create cascade registry table + rewrite bulldozer_timefold_process_queue() to execute templates. |
| apps/backend/prisma.config.ts | Mark BulldozerTimeFoldDownstreamCascade as an external table for Prisma tooling. |
Caution: Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
apps/backend/src/lib/bulldozer/db/tables/time-fold-table.ts (1)
479-534: ⚠️ Potential issue | 🟡 Minor: Cascade template is captured at `init()` time; no enforcement that triggers are registered before initialization.

`collectRowChangeTriggerStatements` is called once inside `init()`, and the result persists to `BulldozerTimeFoldDownstreamCascade`. While all downstream tables currently register their triggers during `declare*Table()` calls (before any `init()` runs), there is no code-level guard preventing `registerRowChangeTrigger` from being called after `init()` has completed. If that occurs, the persisted cascade becomes stale, and queue-drained emissions will bypass new downstream nodes.

The schema construction pattern (all declarations complete, then batch init) naturally maintains this invariant, but it would be safer to encode it explicitly: either with an assertion in `registerRowChangeTrigger` that fails post-init, or by re-emitting the cascade upsert whenever triggers change.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/lib/bulldozer/db/tables/time-fold-table.ts` around lines 479 - 534, The init() call computes cascadeTemplate via collectRowChangeTriggerStatements and persists it to BulldozerTimeFoldDownstreamCascade, but registerRowChangeTrigger can still be called later, leaving the persisted cascade stale; fix by either (A) adding a runtime guard in registerRowChangeTrigger that throws if the timefold is already initialized (introduce/check an "initialized" flag set at the end of init()), or (B) recompute and upsert the cascade when triggers change (call collectRowChangeTriggerStatements + toCascadeSqlBlock and run the same INSERT ... ON CONFLICT upsert used in init() to update "cascadeTemplate"/"cascadeInputName"/"updatedAt"); reference collectRowChangeTriggerStatements, toCascadeSqlBlock, init(), registerRowChangeTrigger, cascadeTemplate and BulldozerTimeFoldDownstreamCascade when implementing.
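The dev-time assertion suggested above could be sketched like this; the class and method names come from the review, but the implementation is a hypothetical illustration, not the repo's code:

```typescript
// Hypothetical guard: once init() has persisted the cascade template, a
// late registerRowChangeTrigger call would be silently skipped by
// queue-drained emissions, so fail fast instead.
class TimeFoldTable {
  private initialized = false;
  private triggers: string[] = [];

  registerRowChangeTrigger(statement: string): void {
    if (this.initialized) {
      throw new Error(
        "registerRowChangeTrigger called after init(); the persisted " +
          "cascadeTemplate would not include this trigger",
      );
    }
    this.triggers.push(statement);
  }

  init(): void {
    // ...compile this.triggers into the cascade template and upsert it...
    this.initialized = true;
  }
}

const table = new TimeFoldTable();
table.registerRowChangeTrigger("-- downstream filter");
table.init();

let threw = false;
try {
  table.registerRowChangeTrigger("-- late consumer");
} catch {
  threw = true;
}
console.log(threw); // true
```

The alternative fix (re-upserting the cascade whenever triggers change) avoids the ordering constraint entirely, at the cost of extra writes during declaration.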
🧹 Nitpick comments (2)
apps/backend/src/lib/bulldozer/db/tables/time-fold-table.ts (1)
22-29: Keep `TIMEFOLD_OUTPUT_CHANGE_COLUMNS` in lockstep with `timeFoldChangesTableName`. This constant and the schema passed to `.toStatement(timeFoldChangesTableName, ...)` at line 426 are now two sources of truth for the same column shape. If one changes and the other doesn't, the cascade will either fail at `INSERT INTO __bulldozer_seq` or silently mis-type downstream columns. Consider referencing the same literal at both call sites (e.g. use `TIMEFOLD_OUTPUT_CHANGE_COLUMNS` on line 426 as well).

♻️ Proposed deduplication
- `.toStatement(timeFoldChangesTableName, '"groupKey" jsonb, "rowIdentifier" text, "oldRowSortKey" jsonb, "newRowSortKey" jsonb, "oldRowData" jsonb, "newRowData" jsonb'),
+ `.toStatement(timeFoldChangesTableName, TIMEFOLD_OUTPUT_CHANGE_COLUMNS),

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/lib/bulldozer/db/tables/time-fold-table.ts` around lines 22 - 29, The TIMEFOLD_OUTPUT_CHANGE_COLUMNS string and the schema passed to .toStatement(timeFoldChangesTableName, ...) are duplicated sources of truth; replace the hard-coded column schema used in the .toStatement call with the TIMEFOLD_OUTPUT_CHANGE_COLUMNS constant so timeFoldChangesTableName, TIMEFOLD_OUTPUT_CHANGE_COLUMNS, and the .toStatement(...) invocation all reference the same literal (or import the constant into the scope used by createApplyChangesStatements) to ensure the column shape stays in lockstep.apps/backend/prisma/migrations/20260417000000_bulldozer_timefold_downstream_cascade/migration.sql (1)
295-308: Cleanup branch is effectively dead on the normal drain path. Lines 212–228 unconditionally upsert a `state_row_path` whose `keyPathParent` is `states_path`, so the `NOT EXISTS ... keyPathParent = states_path` predicate on lines 300–304 is false for every row the drain actually processed. If this block exists to handle an exotic reducer-deleted-its-own-state scenario, please drop a comment explaining it; otherwise consider removing it so readers don't assume it's doing work.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/prisma/migrations/20260417000000_bulldozer_timefold_downstream_cascade/migration.sql` around lines 295 - 308, The cleanup IF block that checks NOT EXISTS for "keyPathParent" = states_path is effectively dead because earlier code upserts a state_row_path with keyPathParent = states_path (see rows_path, states_path, group_path and "BulldozerStorageEngine"), so either remove this obsolete conditional DELETE block entirely or add an explicit comment above it explaining the intended exotic edge-case (e.g., reducer deleted its own state after drain) and why the NOT EXISTS check remains necessary; update the code around the IF...DELETE that references rows_path, states_path, and group_path accordingly to reflect the chosen action.
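The single-source-of-truth fix for the column shape (the `TIMEFOLD_OUTPUT_CHANGE_COLUMNS` nitpick above) can be sketched like this; the constant name comes from the review, everything else is hypothetical:

```typescript
// Hypothetical sketch: one shared column-shape constant referenced by every
// call site, so the temp-table DDL and the cascade statement cannot drift.
const TIMEFOLD_OUTPUT_CHANGE_COLUMNS =
  '"groupKey" jsonb, "rowIdentifier" text, "oldRowSortKey" jsonb, ' +
  '"newRowSortKey" jsonb, "oldRowData" jsonb, "newRowData" jsonb';

function tempTableDdl(tableName: string): string {
  return `CREATE TEMP TABLE ${tableName} (${TIMEFOLD_OUTPUT_CHANGE_COLUMNS}) ON COMMIT DROP`;
}

// Stand-in for the real .toStatement(timeFoldChangesTableName, ...) call.
function changesStatement(tableName: string, columns: string): string {
  return `SELECT * FROM jsonb_to_recordset('[]'::jsonb) AS ${tableName}(${columns})`;
}

const ddl = tempTableDdl("__timefold_changes");
const stmt = changesStatement("__timefold_changes", TIMEFOLD_OUTPUT_CHANGE_COLUMNS);

// Both strings embed the identical column list, so a schema change in the
// constant propagates to every consumer at once.
console.log(ddl.includes(TIMEFOLD_OUTPUT_CHANGE_COLUMNS)); // true
console.log(stmt.includes(TIMEFOLD_OUTPUT_CHANGE_COLUMNS)); // true
```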
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 053a56c0-5e7d-4ae3-a722-bd650f09a331
📒 Files selected for processing (17)
- apps/backend/prisma.config.ts
- apps/backend/prisma/migrations/20260417000000_bulldozer_timefold_downstream_cascade/migration.sql
- apps/backend/prisma/migrations/20260417000000_bulldozer_timefold_downstream_cascade/tests/downstream-cascade.ts
- apps/backend/prisma/schema.prisma
- apps/backend/scripts/verify-data-integrity/index.ts
- apps/backend/src/lib/bulldozer/db/index.fuzz.test.ts
- apps/backend/src/lib/bulldozer/db/index.perf.test.ts
- apps/backend/src/lib/bulldozer/db/index.test.ts
- apps/backend/src/lib/bulldozer/db/index.ts
- apps/backend/src/lib/bulldozer/db/tables/time-fold-table.ts
- apps/backend/src/lib/bulldozer/db/test-sql-loaders.ts
- apps/backend/src/lib/bulldozer/db/timefold-queue-downstream.test.ts
- apps/backend/src/lib/payments/schema/__tests__/integration-1-3-queue-drained.test.ts
- apps/backend/src/lib/payments/schema/__tests__/integration-1-3.test.ts
- apps/backend/src/lib/payments/schema/__tests__/test-helpers.ts
- apps/backend/src/lib/payments/schema/phase-1/otp-timefold-algo.ts
- apps/backend/src/lib/payments/schema/phase-1/subscription-timefold-algo.ts
✅ Files skipped from review due to trivial changes (8)
- apps/backend/prisma.config.ts
- apps/backend/prisma/schema.prisma
- apps/backend/src/lib/bulldozer/db/index.test.ts
- apps/backend/src/lib/bulldozer/db/index.perf.test.ts
- apps/backend/src/lib/bulldozer/db/index.fuzz.test.ts
- apps/backend/prisma/migrations/20260417000000_bulldozer_timefold_downstream_cascade/tests/downstream-cascade.ts
- apps/backend/src/lib/bulldozer/db/timefold-queue-downstream.test.ts
- apps/backend/src/lib/bulldozer/db/index.ts
🚧 Files skipped from review as they are similar to previous changes (4)
- apps/backend/src/lib/bulldozer/db/test-sql-loaders.ts
- apps/backend/src/lib/payments/schema/__tests__/integration-1-3-queue-drained.test.ts
- apps/backend/src/lib/payments/schema/__tests__/integration-1-3.test.ts
- apps/backend/src/lib/payments/schema/__tests__/test-helpers.ts
…s and downstream transactions

PG 12+ returns EXTRACT(EPOCH ...) as NUMERIC with scale 6, so the reducer's currentMillis serialized into JSONB as "604800000.000000". transactions.ts built igr txn IDs via `->>'effectiveAtMillis'` (decimal-tailed) while the same reducer emitted item-quantity-expire.adjustedTransactionId via `::bigint::text` (decimal-free). The mismatch meant a subscription-end following an item-grant-repeat silently failed to expire the igr's grant, leaving when-repeated balances stuck at the last-granted quantity.
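The mismatch described above can be reproduced in a few lines (hypothetical sketch; the real IDs are built in SQL, not TypeScript):

```typescript
// Hypothetical sketch of the txn-ID mismatch. PG 12+ returns
// EXTRACT(EPOCH FROM ...) as NUMERIC with scale 6, so the value that lands
// in JSONB (and is later read back with ->>) carries a decimal tail, while
// a ::bigint::text cast of the same value does not.
const effectiveAtMillisJson = "604800000.000000"; // what ->>'effectiveAtMillis' yields
// ::bigint rounds the numeric before stringifying, so no decimal tail survives:
const bigintCastMillis = String(Math.round(Number(effectiveAtMillisJson)));

const idFromJson = `igr:sub_123:${effectiveAtMillisJson}`; // decimal-tailed
const idFromCast = `igr:sub_123:${bigintCastMillis}`; // decimal-free

// The two IDs refer to the same transaction but never compare equal, so the
// expire adjustment targets an ID that does not exist and the grant survives.
console.log(idFromJson === idFromCast); // false
console.log(idFromCast); // igr:sub_123:604800000
```

Casting to bigint once, at the point where currentMillis is produced, makes every downstream string build agree by construction.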
Force-pushed f72258e to 88e681b (Compare)
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/backend/prisma/migrations/20260417000000_bulldozer_timefold_downstream_cascade/migration.sql`:
- Around line 107-113: Replace the targeted cleanup that only deletes rows
matching current_cascade_input_name with an unconditional purge of the temp
sequence table to avoid leftover intermediate outputs: inside the conditional
block around current_cascade_input_name (the IF ... THEN ... END IF; that runs
before each cascade execution), change the logic so the function executes DELETE
FROM "__bulldozer_seq" unconditionally (remove the WHERE "__output_name" =
current_cascade_input_name filter and the NULL check) to ensure all
__bulldozer_seq rows are cleared between cascade executions.
In `@apps/backend/src/lib/bulldozer/db/index.ts`:
- Around line 216-224: buildExecutableStatementsBlock currently always
normalizes "$$" which corrupts literal "$$" strings when used in the cascade
path; modify buildExecutableStatementsBlock to accept a boolean flag (e.g.,
normalizeDollarQuotes = true) and only perform the $$ -> $__bulldozer_do_inline$
rewrite when that flag is true, update toExecutableSqlTransaction to call
buildExecutableStatementsBlock with normalizeDollarQuotes=true, and change the
cascade caller (the code around toCascadeSqlBlock / the call site using
buildExecutableStatementsBlock in this diff) to pass normalizeDollarQuotes=false
so the outer randomized DO tag (chosen by chooseSafeDollarQuoteTag) can safely
contain any user "$$" literals.
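A small sketch of the dollar-quote-tag idea this comment relies on (the real `chooseSafeDollarQuoteTag` signature is assumed from the comment, not verified against the codebase):

```typescript
// Pick a dollar-quote tag that never occurs in the body, so any literal
// "$$" (or other dollar-quoted strings) inside user SQL survive untouched.
function chooseSafeDollarQuoteTag(body: string): string {
  let tag = "$bulldozer$";
  for (let i = 0; body.includes(tag); i += 1) {
    tag = `$bulldozer_${i}$`;
  }
  return tag;
}

// Wrap statements in an outer DO block using the safe tag.
function wrapInDoBlock(body: string): string {
  const tag = chooseSafeDollarQuoteTag(body);
  return `DO ${tag}\nBEGIN\n${body}\nEND;\n${tag};`;
}
```

Because the tag is chosen against the body, the `$$ -> $__bulldozer_do_inline$` rewrite becomes unnecessary on the cascade path, which is exactly why the comment asks for the normalization flag.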
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: e0c8483d-4d1d-4c94-a673-7abd3c05813e
📒 Files selected for processing (17)
- apps/backend/prisma.config.ts
- apps/backend/prisma/migrations/20260417000000_bulldozer_timefold_downstream_cascade/migration.sql
- apps/backend/prisma/migrations/20260417000000_bulldozer_timefold_downstream_cascade/tests/downstream-cascade.ts
- apps/backend/prisma/schema.prisma
- apps/backend/src/lib/bulldozer/db/index.fuzz.test.ts
- apps/backend/src/lib/bulldozer/db/index.perf.test.ts
- apps/backend/src/lib/bulldozer/db/index.test.ts
- apps/backend/src/lib/bulldozer/db/index.ts
- apps/backend/src/lib/bulldozer/db/row-change-trigger-dispatch.ts
- apps/backend/src/lib/bulldozer/db/tables/time-fold-table.ts
- apps/backend/src/lib/bulldozer/db/test-sql-loaders.ts
- apps/backend/src/lib/bulldozer/db/timefold-queue-downstream.test.ts
- apps/backend/src/lib/payments/schema/__tests__/integration-1-3-queue-drained.test.ts
- apps/backend/src/lib/payments/schema/__tests__/integration-1-3.test.ts
- apps/backend/src/lib/payments/schema/__tests__/test-helpers.ts
- apps/backend/src/lib/payments/schema/phase-1/otp-timefold-algo.ts
- apps/backend/src/lib/payments/schema/phase-1/subscription-timefold-algo.ts
✅ Files skipped from review due to trivial changes (3)
- apps/backend/src/lib/bulldozer/db/row-change-trigger-dispatch.ts
- apps/backend/src/lib/bulldozer/db/index.test.ts
- apps/backend/prisma.config.ts
🚧 Files skipped from review as they are similar to previous changes (6)
- apps/backend/prisma/schema.prisma
- apps/backend/src/lib/bulldozer/db/index.perf.test.ts
- apps/backend/src/lib/bulldozer/db/test-sql-loaders.ts
- apps/backend/src/lib/bulldozer/db/index.fuzz.test.ts
- apps/backend/src/lib/payments/schema/phase-1/subscription-timefold-algo.ts
- apps/backend/src/lib/bulldozer/db/timefold-queue-downstream.test.ts
88e681b to a98283e
Actionable comments posted: 2
🧹 Nitpick comments (2)
apps/backend/src/lib/bulldozer/db/timefold-queue-downstream.test.ts (1)
357-380: Avoid casts in the row-data assertions. The `as object` / `as Record<string, unknown>` casts bypass the runtime checks these tests are otherwise careful about. A small helper that validates object shape and numeric fields keeps the assertions defensive without weakening the type system.

Example helper direction:

```diff
+ function numberField(value: unknown, field: string): number {
+   if (value == null || typeof value !== "object") {
+     throw new Error(`Expected object rowdata, got ${typeof value}`);
+   }
+   const fieldValue = Reflect.get(value, field);
+   if (typeof fieldValue !== "number") {
+     throw new Error(`Expected numeric '${field}', got ${typeof fieldValue}`);
+   }
+   return fieldValue;
+ }
+
  expect(filtered.map((r) => r.rowdata).sort((a, b) =>
-   Number(Reflect.get(a as object, "value")) - Number(Reflect.get(b as object, "value"))
+   numberField(a, "value") - numberField(b, "value")
  )).toEqual([
```

As per coding guidelines, "Do NOT use `as`/`any`/type casts or anything else like that to bypass the type system."

Also applies to: 437-448
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/src/lib/bulldozer/db/timefold-queue-downstream.test.ts` around lines 357-380: the tests use casts like `as object` / `as Record<string, unknown>` around `r.rowdata`, which bypass runtime checks; replace those casts by adding a small helper (e.g., `assertRowDataIsObjectWithNumber(fieldName: string, row: unknown): number`) that validates `r.rowdata` is an object and that the specified numeric field exists and is a number, returning the numeric value; then update the sort and equality assertions for `filtered`, `mapped`, and `reMapped` to call this helper to obtain numeric values for sorting/comparison instead of using `Reflect.get(... as object)`, so the tests remain defensive without type casts.

apps/backend/prisma/migrations/20260417000000_bulldozer_timefold_downstream_cascade/tests/downstream-cascade.ts (1)
23-36: Assert the registry schema contract, not just column order. This guard would still pass if `tableStoragePath` stopped being `JSONB[]`, `cascadeInputName` became nullable, or the primary key on `tableStoragePath` was dropped. Please include type/nullability and PK assertions so the migration test protects the actual contract from `migration.sql`.

Test hardening direction:

```diff
- const registryColumnRows = await sql<Array<{ column_name: string }>>`
-   SELECT column_name
+ const registryColumnRows = await sql<Array<{
+   column_name: string,
+   data_type: string,
+   udt_name: string,
+   is_nullable: string,
+ }>>`
+   SELECT column_name, data_type, udt_name, is_nullable
    FROM information_schema.columns
    WHERE table_schema = 'public'
      AND table_name = 'BulldozerTimeFoldDownstreamCascade'
    ORDER BY ordinal_position
  `;
- expect(registryColumnRows.map((r) => r.column_name)).toEqual([
-   "tableStoragePath",
-   "cascadeInputName",
-   "cascadeTemplate",
-   "createdAt",
-   "updatedAt",
+ expect(registryColumnRows).toMatchObject([
+   { column_name: "tableStoragePath", data_type: "ARRAY", udt_name: "_jsonb", is_nullable: "NO" },
+   { column_name: "cascadeInputName", data_type: "text", is_nullable: "NO" },
+   { column_name: "cascadeTemplate", data_type: "text", is_nullable: "YES" },
+   { column_name: "createdAt", data_type: "timestamp without time zone", is_nullable: "NO" },
+   { column_name: "updatedAt", data_type: "timestamp without time zone", is_nullable: "NO" },
  ]);
+
+ const registryPkRows = await sql<Array<{ column_name: string }>>`
+   SELECT kcu.column_name
+   FROM information_schema.table_constraints tc
+   JOIN information_schema.key_column_usage kcu
+     ON tc.constraint_name = kcu.constraint_name
+     AND tc.table_schema = kcu.table_schema
+   WHERE tc.table_schema = 'public'
+     AND tc.table_name = 'BulldozerTimeFoldDownstreamCascade'
+     AND tc.constraint_type = 'PRIMARY KEY'
+   ORDER BY kcu.ordinal_position
+ `;
+ expect(registryPkRows.map((r) => r.column_name)).toEqual(["tableStoragePath"]);
```

Based on learnings, "When writing database migration files, ALWAYS ALWAYS add tests for all the potential edge cases!"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/prisma/migrations/20260417000000_bulldozer_timefold_downstream_cascade/tests/downstream-cascade.ts` around lines 23-36, the test currently only asserts column order for table BulldozerTimeFoldDownstreamCascade using registryColumnRows; enhance it to assert the full schema contract by querying information_schema.columns (or pg_catalog) to verify each column's data_type and is_nullable for "tableStoragePath", "cascadeInputName", "cascadeTemplate", "createdAt", and "updatedAt", and add an assertion that "tableStoragePath" is of type JSONB[] (or the correct pg type) and non-null if required; additionally query pg_constraint/pg_index to assert the primary key exists on "tableStoragePath" (or the expected PK column(s)) and include explicit expectations for nullability of "cascadeInputName" and any other constraints, to ensure the migration preserves types, nullability, and PKs rather than only column order.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@apps/backend/prisma/migrations/20260417000000_bulldozer_timefold_downstream_cascade/migration.sql`:
- Around line 95-133: The migration currently treats a missing
BulldozerTimeFoldDownstreamCascade row as “no cascade” and proceeds to drain
BulldozerTimeFoldQueue, which loses downstream work; change the lookup so you
detect whether a registry row was found (e.g. capture FOUND or set a boolean
like current_cascade_exists after the SELECT INTO current_cascade_input_name,
current_cascade_template FROM "BulldozerTimeFoldDownstreamCascade" ...), and if
no row was found then skip the inner drain loop (leave queued rows alone)
instead of proceeding to delete queued_row from "BulldozerTimeFoldQueue";
preserve the existing semantics where a present row with
current_cascade_template IS NULL still allows draining (initialized but no
triggers).
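The three registry states described above can be modeled explicitly. This TypeScript sketch only illustrates the intended semantics; the type and field names mirror the migration, but the bindings are assumptions:

```typescript
// Per-TimeFold registry row, as described in the review comment.
type CascadeRegistryRow = {
  cascadeInputName: string;
  cascadeTemplate: string | null; // NULL: initialized, but no downstream triggers
};

type DrainDecision =
  | { action: "defer" } // no registry row: leave queued rows alone
  | { action: "drain" } // row present, template NULL: drain without cascading
  | { action: "drain-and-cascade"; template: string };

function decideDrain(row: CascadeRegistryRow | undefined): DrainDecision {
  if (row === undefined) {
    // A missing registry row is NOT "no cascade": skip the drain entirely
    // so the queued downstream work is not silently lost.
    return { action: "defer" };
  }
  if (row.cascadeTemplate === null) {
    return { action: "drain" };
  }
  return { action: "drain-and-cascade", template: row.cascadeTemplate };
}
```

In the migration itself the same distinction falls out of checking `FOUND` after the `SELECT INTO`, which is exactly what the comment asks for.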
In `@apps/backend/src/lib/bulldozer/db/timefold-queue-downstream.test.ts`:
- Around line 43-55: getTestDbUrls currently constructs a random dbName then
later tests re-derive it by parsing the URL which can fail when the connection
string query contains slashes; update getTestDbUrls to include the generated
dbName in its return object (e.g., return { full, base, dbName }) and change
callers (tests that use the parsed name) to use the returned dbName instead of
extracting it from full or base so CREATE DATABASE uses the exact generated
name.
---
Nitpick comments:
In
`@apps/backend/prisma/migrations/20260417000000_bulldozer_timefold_downstream_cascade/tests/downstream-cascade.ts`:
- Around line 23-36: The test currently only asserts column order for table
BulldozerTimeFoldDownstreamCascade using registryColumnRows; enhance it to assert
the full schema contract by querying information_schema.columns (or pg_catalog)
to verify each column's data_type and is_nullable for "tableStoragePath",
"cascadeInputName", "cascadeTemplate", "createdAt", and "updatedAt", and add an
assertion that "tableStoragePath" is of type JSONB[] (or correct pg type) and
non-null if required; additionally query pg_constraint/pg_index to assert the
primary key exists on "tableStoragePath" (or the expected PK column(s)) and
include explicit expectations for nullability of "cascadeInputName" and any
other constraints to ensure the migration preserves types, nullability, and PKs
rather than only column order.
In `@apps/backend/src/lib/bulldozer/db/timefold-queue-downstream.test.ts`:
- Around line 357-380: The tests are using casts like `as object`/`as
Record<string, unknown>` around `r.rowdata` which bypass runtime checks; replace
those casts by adding a small helper (e.g.,
`assertRowDataIsObjectWithNumber(fieldName: string, row: unknown): number`) that
validates `r.rowdata` is an object and that the specified numeric field exists
and is a number, returning the numeric value; then update the sort and equality
assertions for `filtered`, `mapped`, and `reMapped` to call this helper to
obtain numeric values for sorting/comparison instead of using `Reflect.get(...
as object)` so the tests remain defensive without using type casts.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: b198467f-5e5f-4fad-8ba4-ceb8cb881e50
📒 Files selected for processing (16)
- apps/backend/prisma.config.ts
- apps/backend/prisma/migrations/20260417000000_bulldozer_timefold_downstream_cascade/migration.sql
- apps/backend/prisma/migrations/20260417000000_bulldozer_timefold_downstream_cascade/tests/downstream-cascade.ts
- apps/backend/prisma/schema.prisma
- apps/backend/src/lib/bulldozer/db/index.fuzz.test.ts
- apps/backend/src/lib/bulldozer/db/index.perf.test.ts
- apps/backend/src/lib/bulldozer/db/index.test.ts
- apps/backend/src/lib/bulldozer/db/index.ts
- apps/backend/src/lib/bulldozer/db/row-change-trigger-dispatch.ts
- apps/backend/src/lib/bulldozer/db/tables/time-fold-table.ts
- apps/backend/src/lib/bulldozer/db/test-sql-loaders.ts
- apps/backend/src/lib/bulldozer/db/timefold-queue-downstream.test.ts
- apps/backend/src/lib/payments/schema/__tests__/integration-1-3-queue-drained.test.ts
- apps/backend/src/lib/payments/schema/__tests__/integration-1-3.test.ts
- apps/backend/src/lib/payments/schema/__tests__/test-helpers.ts
- apps/backend/src/lib/payments/schema/phase-1/otp-timefold-algo.ts
✅ Files skipped from review due to trivial changes (5)
- apps/backend/prisma.config.ts
- apps/backend/src/lib/bulldozer/db/index.perf.test.ts
- apps/backend/src/lib/bulldozer/db/test-sql-loaders.ts
- apps/backend/prisma/schema.prisma
- apps/backend/src/lib/bulldozer/db/index.test.ts
This meant downstream tables of the timefold weren't updating with the new rows. This wasn't caught in our old tests since they just advanced the timestamp.
PK on JSONB[] (tableStoragePath) is not expressible via Prisma's @id (list types are treated as non-required), so it is managed entirely by bulldozer code via raw SQL. See the schema.prisma note next to the BulldozerTimeFoldMetadata model.
a98283e to 70a261a
@greptileai refresh summary
Documents fix for subscription item expiration and scheduled payment event processing issues from PR #1348.
Context
When process_queue was run by pg_cron, it only updated its timestamps. It didn't trigger the tables that took the timefold as an input to recompute.
We fixed the bugs.
Summary by CodeRabbit
New Features
Improvements
Tests