
[Fix]: Assortment of Bugs with Timefold Table and Payments #1348

Merged
N2D4 merged 6 commits into dev from bs-timefold-fixes
Apr 18, 2026

Conversation

nams1570 (Collaborator) commented Apr 18, 2026

Context

  1. Subscription-end events/transactions did not previously expire item-quantity changes with expire = "when-repeated". A user could therefore keep extra items granted by an old subscription.
  2. When the process_queue pg_cron job ran, the timefold only updated its own timestamps; it did not trigger the tables that take the timefold as an input to recompute.
  3. We used the timestamp as part of txn ID generation but did not cast it, so floating-point precision issues could lead to mismatches / false negatives.
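
The third bug can be illustrated outside of SQL. This is a hedged sketch (the function and ID names are invented for illustration; the real fix is a `ROUND(... * 1000)::bigint` cast in the Postgres reducers): an epoch-seconds value that still carries sub-millisecond precision leaks a decimal tail into a text-built ID, while rounding to whole milliseconds first yields a stable ID.

```typescript
// Hypothetical sketch of bug 3: building a txn ID from an epoch-seconds
// value that still carries sub-millisecond precision.
function txnIdNaive(subscriptionId: string, epochSeconds: number): string {
  // Multiplying fractional seconds by 1000 leaves a decimal tail in the
  // stringified value, so the same instant can render two different IDs.
  return `igr:${subscriptionId}:${epochSeconds * 1000}`;
}

function txnIdRounded(subscriptionId: string, epochSeconds: number): string {
  // Mirrors the ROUND(... * 1000)::bigint idea: round to whole
  // milliseconds before stringifying.
  return `igr:${subscriptionId}:${Math.round(epochSeconds * 1000)}`;
}

const seconds = 1713427200.123456; // epoch seconds with microsecond precision
console.log(txnIdNaive("sub_1", seconds));   // decimal tail leaks into the ID
console.log(txnIdRounded("sub_1", seconds)); // stable integer-millis ID
```

Comparing IDs built on two code paths (one of which truncates, one of which keeps the tail) then produces the false negatives described above.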

Summary of Changes

We fixed the bugs.

Summary by CodeRabbit

  • New Features

    • Added a downstream cascade registry and runtime cascade execution so queued timefold emissions can propagate into downstream materialized outputs.
  • Improvements

    • Queue processing now drains per-table and executes cascades once per timefold, defers draining when a cascade is missing, and uses a temporary per-run input buffer.
    • Improved millisecond timestamp handling and expanded expiry logic to include repeat-tied expirations.
  • Tests

    • Added extensive migration-level and integration tests plus test helpers validating cascade behavior, multi-hop propagation, idempotency, and payments lifecycle scenarios.

Before, we only expired the when-purchase-expires items and left it to the item-grant-repeat events to expire the rest.
But when a subscription ends, all of them should be expired immediately.
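
The behavioral change above can be modeled in a few lines. This is an illustrative TypeScript sketch, not the repo's code (the real change lives in the SQL reducers; the type and function names here are invented):

```typescript
// Illustrative model of bug 1. The "expires" values mirror the PR's
// when-purchase-expires / when-repeated semantics.
type ExpiryMode = "when-purchase-expires" | "when-repeated" | "never";

interface ItemQuantityChange {
  itemId: string;
  quantity: number;
  expires: ExpiryMode;
}

// Old behavior: only when-purchase-expires grants were expired at
// subscription end; when-repeated grants lingered until the next
// item-grant-repeat event, which never arrives once the sub is gone.
function grantsToExpireOld(changes: ItemQuantityChange[]): ItemQuantityChange[] {
  return changes.filter((c) => c.expires === "when-purchase-expires");
}

// New behavior: subscription end expires both modes immediately.
function grantsToExpireNew(changes: ItemQuantityChange[]): ItemQuantityChange[] {
  return changes.filter(
    (c) => c.expires === "when-purchase-expires" || c.expires === "when-repeated"
  );
}

const changes: ItemQuantityChange[] = [
  { itemId: "a", quantity: 100, expires: "when-purchase-expires" },
  { itemId: "b", quantity: 200, expires: "when-repeated" },
  { itemId: "c", quantity: 5, expires: "never" },
];
console.log(grantsToExpireOld(changes).length); // leaves the when-repeated grant behind
console.log(grantsToExpireNew(changes).length); // expires both a and b
```

Under the old filter, the stale when-repeated grant ("extra items from an old subscription") survives the subscription's end.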
@nams1570 nams1570 requested review from N2D4 and Copilot and removed request for Copilot April 18, 2026 09:20
vercel Bot commented Apr 18, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

Project Deployment Actions Updated (UTC)
stack-auth-hosted-components Ready Ready Preview, Comment Apr 18, 2026 8:18pm
stack-backend Ready Ready Preview, Comment Apr 18, 2026 8:18pm
stack-dashboard Ready Ready Preview, Comment Apr 18, 2026 8:18pm
stack-demo Ready Ready Preview, Comment Apr 18, 2026 8:18pm
stack-docs Ready Ready Preview, Comment Apr 18, 2026 8:18pm
stack-preview-backend Ready Ready Preview, Comment Apr 18, 2026 8:18pm
stack-preview-dashboard Ready Ready Preview, Comment Apr 18, 2026 8:18pm

coderabbitai Bot commented Apr 18, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Adds a per-TimeFold downstream cascade registry and rewrites public.bulldozer_timefold_process_queue() to drain queues grouped by TimeFold, seed a temp __bulldozer_seq with emitted rows, and conditionally execute a stored cascade SQL template per TimeFold. Several tests and SQL-generation helpers are updated, and TimeFold reducers now use bigint timestamps.

Changes

  • Core migration & process function — apps/backend/prisma/migrations/20260417000000_bulldozer_timefold_downstream_cascade/migration.sql: Adds the BulldozerTimeFoldDownstreamCascade table; fully rewrites public.bulldozer_timefold_process_queue() to loop per tableStoragePath, use a temporary __bulldozer_seq, collect emitted rows, and execute the per-TimeFold cascadeTemplate.
  • Migration test — apps/backend/prisma/migrations/.../tests/downstream-cascade.ts: New post-migration Vitest that seeds the registry, enqueues reducer work, invokes process_queue(), and asserts queue draining, state updates, downstream writes, and idempotency; also tests missing-registry deferred draining.
  • SQL generation & helpers — apps/backend/src/lib/bulldozer/db/index.ts, apps/backend/src/lib/bulldozer/db/test-sql-loaders.ts: Introduces buildExecutableStatementsBlock, chooseSafeDollarQuoteTag, randomized DO-dollar quoting, and the toCascadeSqlBlock export; refactors toExecutableSqlTransaction; adds a test loader to extract the process_queue SQL from the migration.
  • TimeFold integration — apps/backend/src/lib/bulldozer/db/tables/time-fold-table.ts, apps/backend/src/lib/bulldozer/db/row-change-trigger-dispatch.ts: init() now compiles row-change trigger statements into a cascadeTemplate and upserts registry rows into BulldozerTimeFoldDownstreamCascade; delete() removes registry rows. CHANGE_OUTPUT_COLUMNS is now exported.
  • Tests (DB setup changes) — apps/backend/src/lib/bulldozer/db/index.test.ts, .../index.fuzz.test.ts, .../index.perf.test.ts: Test fixtures now drop/create BulldozerTimeFoldDownstreamCascade in beforeEach so the table exists for the integration/perf/fuzz suites.
  • Integration tests (cascade behavior) — apps/backend/src/lib/bulldozer/db/timefold-queue-downstream.test.ts: New extensive sequential integration suite validating queued draining, multi-hop cascades, idempotency, deleted-downstream behavior, a literal-dollar-tag edge case, and deploy-window/missing-registry deferral semantics.
  • Payments: test helpers & integration tests — apps/backend/src/lib/payments/schema/__tests__/test-helpers.ts, .../integration-1-3-queue-drained.test.ts, .../integration-1-3.test.ts, .../phase-1.test.ts: Adds createTestDb options (controlling lastProcessedAt and optional process_queue install), the helpers setLastProcessedAt, processQueue, and countQueueRows, and multiple new integration tests exercising queue-drained payments flows and expiry/repeat semantics.
  • Payments: reducer SQL tweaks — apps/backend/src/lib/payments/schema/phase-1/subscription-timefold-algo.ts, .../otp-timefold-algo.ts: Switches currentMillis to a rounded bigint, constructs txn IDs as bigint-text, and expands the subscription-end expiry filter to include when-repeated.
  • Utilities & exports — apps/backend/src/lib/bulldozer/db/test-sql-loaders.ts, apps/backend/src/lib/bulldozer/db/index.ts: New test SQL loader export plus utilities for safe dollar quoting and seeded cascade SQL generation.
  • Prisma config & docs — apps/backend/prisma.config.ts, apps/backend/prisma/schema.prisma: Registers public.BulldozerTimeFoldDownstreamCascade as an external table and documents why it is not modeled in Prisma.

Sequence Diagram(s)

sequenceDiagram
    participant Q as Queue<br/>(BulldozerTimeFoldQueue)
    participant P as Processor<br/>(bulldozer_timefold_process_queue)
    participant SE as StorageEngine<br/>(BulldozerStorageEngine)
    participant Reg as Registry<br/>(BulldozerTimeFoldDownstreamCascade)
    participant Seq as TempSeq<br/>(__bulldozer_seq)

    Note over P: For each distinct TimeFold with scheduledAt <= now()
    P->>Reg: SELECT registry row for TimeFold
    alt registry row missing
        Note over P: skip draining this TimeFold (defer)
    else registry row present
        P->>Q: SELECT due rows for this TimeFold
        loop drain TimeFold queue
            Q-->>P: next queued reducer row
            P->>SE: apply reducer → update state
            SE-->>P: stateAfter + emitted rows
            P->>Seq: INSERT emitted rows tagged by cascadeInputName
            P->>Q: mark queue row consumed / upsert next scheduledAt
        end
        P->>Seq: SELECT accumulated rows for cascadeInputName
        Seq-->>P: input rows
        alt cascadeTemplate IS NOT NULL
            P->>P: EXECUTE DO $tag$ ...cascadeTemplate... $tag$
            P->>SE: downstream writes (via cascadeTemplate)
        end
        P->>Seq: DELETE accumulated rows for cascadeInputName
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Suggested reviewers

  • N2D4

Poem

🐇

I hopped through queues at dawn's first light,
Seeded a stream of rows in flight,
A cascade template took a leap,
Downstream ponds woke from their sleep,
I thumped my foot — all tests say "right!"

🚥 Pre-merge checks | ❌ 3

❌ Failed checks (1 warning, 2 inconclusive)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 40.91%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.
  • Title check — ❓ Inconclusive: the title is vague and overly generic, using the phrase 'Assortment of Bugs' without indicating the primary changes or affected systems. Resolution: refine the title to be more specific, e.g. 'Add downstream cascade support and fix timefold emissions' or 'Fix subscription-end expiry and timefold queue-drain propagation'.
  • Description check — ❓ Inconclusive: the description gives clear context on three bugs but lacks detail on implementation changes, affected files, and testing approach. Resolution: explain how each bug was fixed — (1) which SQL changes enable expire='when-repeated' expiry, (2) how the downstream cascade mechanism works, (3) how timestamp casting in txn ID generation prevents precision issues — and include affected files and testing strategy.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



greptile-apps Bot commented Apr 18, 2026

Greptile Summary

This PR addresses three production bugs: (1) subscription-end events now correctly expire when-repeated item-quantity grants (previously only when-purchase-expires was included, leaving stale quotas after cancel-at-period-end); (2) bulldozer_timefold_process_queue now runs the downstream cascade (filter/map/LFold tables) after draining each timefold's queue rows — previously only the timefold's own state was updated while all downstream materialized outputs silently missed the update; (3) the currentMillis timestamp used in txn-ID generation is now cast to bigint (with explicit ROUND) before ::text, eliminating floating-point serialization tails that caused igr txnId mismatches. All three fixes are backed by new integration and migration-level tests.

Confidence Score: 5/5

Safe to merge — all three bugs are correctly fixed, logic is sound, and the changes are backed by comprehensive new tests.

No P0 or P1 issues found. The deploy-window guard (CONTINUE when the cascade registry row is missing) is a thoughtful correctness safeguard. The previous_emitted_row_count offset logic is correct across multiple queue-row iterations. The ROUND(EXTRACT(EPOCH …) * 1000)::bigint fix addresses the described precision issue cleanly. All remaining notes (the pre-existing redundant ::bigint cast flagged in a previous review) are P2 style suggestions that do not block merge.

No files require special attention.

Important Files Changed

  • apps/backend/prisma/migrations/20260417000000_bulldozer_timefold_downstream_cascade/migration.sql — Adds the BulldozerTimeFoldDownstreamCascade table and rewrites process_queue to drain per-timefold, populate __bulldozer_seq, and EXECUTE the stored cascade template; the deploy-window guard (CONTINUE when no registry row) prevents desync.
  • apps/backend/src/lib/bulldozer/db/tables/time-fold-table.ts — init() now calls toCascadeSqlBlock on the registered triggers map and upserts the template into BulldozerTimeFoldDownstreamCascade; delete() also cleans up the registry row. Logic is correct assuming init() is called after all downstream tables are declared.
  • apps/backend/src/lib/bulldozer/db/index.ts — New toCascadeSqlBlock helper and refactored buildExecutableStatementsBlock with seededSeqOutputs support; dollar-quote tag randomisation is well handled via chooseSafeDollarQuoteTag.
  • apps/backend/src/lib/payments/schema/phase-1/subscription-timefold-algo.ts — Bug 1 fix: endItemQuantityChangesToExpire now includes 'when-repeated' grants. Bug 3 fix: currentMillis uses ROUND(EXTRACT(EPOCH …) * 1000)::bigint for txn ID consistency. Both changes are sound.
  • apps/backend/src/lib/payments/schema/phase-1/otp-timefold-algo.ts — Same bigint/ROUND fix applied to OTP's currentMillis; previousGrantsToExpire added for when-repeated grants on item-grant-repeat events; logic mirrors the subscription algo correctly.
  • apps/backend/src/lib/bulldozer/db/timefold-queue-downstream.test.ts — 784-line integration test covering cascade propagation, multi-hop, idempotency, and deploy-window defer; a comprehensive regression guard for the queue-drain cascade bug.
  • apps/backend/src/lib/payments/schema/__tests__/integration-1-3-queue-drained.test.ts — New queue-drained variant of the payments integration tests; exercises cancel-at-period-end and monthly repeat tick propagation through the cascade on the pg_cron path.
  • apps/backend/src/lib/bulldozer/db/test-sql-loaders.ts — New test-only utility that loads the process_queue function body from the migration file; avoids drift between test and prod function bodies.
  • apps/backend/src/lib/payments/schema/__tests__/test-helpers.ts — createTestDb gains lastProcessedAt and installProcessQueueFn options and sets up BulldozerTimeFoldDownstreamCascade in the test schema; processQueue helper added.
  • apps/backend/prisma.config.ts — BulldozerTimeFoldDownstreamCascade added to externalTables (a JSONB[] primary key is not expressible in the Prisma schema); correct approach.
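
The test SQL loader mentioned above (test-sql-loaders.ts extracting the process_queue body from the migration) can be sketched roughly as follows. This is a hedged illustration — the real loader's name, signature, and robustness are not shown on this page; the sketch assumes a plain $$-quoted function body:

```typescript
// Sketch: pull the body of a CREATE OR REPLACE FUNCTION statement out of
// migration SQL, so tests can install and exercise the exact function
// that ships, avoiding drift between test and prod copies.
function extractFunctionBody(migrationSql: string, functionName: string): string | null {
  const header = migrationSql.indexOf(`CREATE OR REPLACE FUNCTION ${functionName}`);
  if (header === -1) return null;
  const open = migrationSql.indexOf("$$", header); // opening dollar quote
  if (open === -1) return null;
  const close = migrationSql.indexOf("$$", open + 2); // matching close
  if (close === -1) return null;
  return migrationSql.slice(open + 2, close);
}

// Minimal stand-in for a migration file's contents.
const migration = [
  'CREATE TABLE "BulldozerTimeFoldDownstreamCascade" (...);',
  "CREATE OR REPLACE FUNCTION public.bulldozer_timefold_process_queue()",
  "RETURNS void AS $$",
  "BEGIN",
  "  -- drain queues per timefold",
  "END;",
  "$$ LANGUAGE plpgsql;",
].join("\n");

const body = extractFunctionBody(migration, "public.bulldozer_timefold_process_queue");
console.log(body !== null && body.includes("drain queues"));
```

A real loader would also need to handle tagged dollar quotes ($tag$ ... $tag$), which is exactly why the PR adds safe-tag selection.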

Flowchart

%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[pg_cron calls\nbulldozer_timefold_process_queue] --> B[Acquire advisory lock\nUpdate lastProcessedAt]
    B --> C[CREATE TEMP TABLE __bulldozer_seq\nON COMMIT DROP]
    C --> D{Any timefold with\ndue queue rows?}
    D -- No --> Z[Done]
    D -- Yes --> E[Outer loop: next distinct\ntableStoragePath]
    E --> F{Registry row in\nBulldozerTimeFoldDownstreamCascade?}
    F -- Missing: deploy-window gap --> G[CONTINUE — defer\ntimefold for this tick]
    G --> D
    F -- Found --> H[DELETE FROM __bulldozer_seq\nclean slate]
    H --> I{Next due queue row\nfor this timefold?\nFOR UPDATE SKIP LOCKED}
    I -- None left --> J{cascadeTemplate\nIS NOT NULL?}
    I -- Found --> K[Delete queue row\nRun reducer loop\naccumulate newly_emitted_rows]
    K --> L[Upsert state & row records\ninto BulldozerStorageEngine]
    L --> M[INSERT newly_emitted_rows\ninto __bulldozer_seq\nunder cascadeInputName]
    M --> N{next_timestamp > cutoff?}
    N -- Yes --> O[Re-enqueue into\nBulldozerTimeFoldQueue]
    O --> I
    N -- No --> I
    J -- No cascade --> P[DELETE FROM __bulldozer_seq]
    P --> D
    J -- Has cascade --> Q[EXECUTE cascadeTemplate\nreads __bulldozer_seq,\nwrites filter/map/LFold\nmaterialized outputs]
    Q --> R[DELETE FROM __bulldozer_seq]
    R --> D
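
The deploy-window guard in the flow above ("Missing: deploy-window gap → CONTINUE") can be sketched in TypeScript. This is a hedged, simplified model — the real logic is PL/pgSQL inside process_queue, and the row shapes and helper names here are invented:

```typescript
// Simplified model of the per-timefold drain loop: a timefold whose
// cascade registry row is missing is deferred (its queue rows stay
// put for a later tick), never drained without its cascade.
interface QueueRow { tableStoragePath: string; payload: unknown }

function drainDueQueues(
  queuesByTimefold: Map<string, QueueRow[]>,
  cascadeRegistry: Map<string, string | null>, // path -> cascadeTemplate (null = no downstream)
  runCascade: (template: string, emitted: unknown[]) => void,
): { drained: string[]; deferred: string[] } {
  const drained: string[] = [];
  const deferred: string[] = [];
  for (const [path, rows] of queuesByTimefold) {
    if (!cascadeRegistry.has(path)) {
      // Deploy-window gap: new process_queue is live but this timefold's
      // registry row isn't written yet. Skip this timefold (CONTINUE).
      deferred.push(path);
      continue;
    }
    const emitted = rows.map((r) => r.payload); // stand-in for reducer output
    const template = cascadeRegistry.get(path);
    if (template !== null && template !== undefined) {
      runCascade(template, emitted); // propagate into downstream outputs
    }
    rows.length = 0; // queue drained
    drained.push(path);
  }
  return { drained, deferred };
}
```

Deferring instead of draining is what keeps state and downstream outputs from desyncing while a deploy is only partially rolled out.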

Reviews (2): Last reviewed commit: "chore: schema model mismatch via externa..."

coderabbitai Bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (3)
apps/backend/src/lib/payments/schema/phase-1/subscription-timefold-algo.ts (1)

265-294: Bigint cast fix looks correct and well-documented.

The root cause (NUMERIC scale-6 serialization leaking into downstream JSONB effectiveAtMillis) and the fix (casting to bigint at the root) are consistent with how currentMillis is consumed downstream: numeric comparisons (Lines 284, 290, 386) still coerce cleanly, and to_jsonb(${currentMillis}) on Lines 337–338 will now serialize without the trailing .000000, matching the igr:<sub>:<millis> ID format built at Line 294.

One minor nit: with currentMillis now already a bigint, the ::bigint::text double-cast in igrTxnId at Line 294 is redundant — ${currentMillis}::text would suffice. Not a correctness issue, just cleanup if you're touching this area.

♻️ Optional simplification
-  const igrTxnId = `('igr:' || (${S}->>'subscriptionId') || ':' || ${currentMillis}::bigint::text)`;
+  const igrTxnId = `('igr:' || (${S}->>'subscriptionId') || ':' || ${currentMillis}::text)`;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/lib/payments/schema/phase-1/subscription-timefold-algo.ts`
around lines 265 - 294, The bigint cast is already applied in currentMillis, so
clean up the redundant double-cast in igrTxnId by replacing the explicit
::bigint::text with just ::text (i.e., use ${currentMillis}::text) inside the
igrTxnId expression; update the igrTxnId template string accordingly so IDs
remain byte-identical but avoid the unnecessary ::bigint re-cast.
apps/backend/src/lib/bulldozer/db/tables/time-fold-table.ts (1)

484-534: Cascade template captures triggers Map contents at init() call time.

This works under the current declare-all-then-init-all convention (all downstream tables register their triggers during declaration, before their upstream's init() runs), but the contract is implicit. If anyone ever calls timeFoldTable.init() and then declares a new downstream consumer, the upstream's stored cascadeTemplate won't reflect it, and queue-drained emissions will silently skip that consumer (while the inline setRow path continues to work). The null-template comment only covers the "no triggers at all" case.

Consider a short doc-comment on registerRowChangeTrigger and/or init() pinning this ordering requirement, or a dev-time assertion if triggers are added after init() has been observed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/lib/bulldozer/db/tables/time-fold-table.ts` around lines 484
- 534, The stored cascadeTemplate is generated during timeFoldTable.init() from
the current triggers map (collected via collectRowChangeTriggerStatements) so
any registerRowChangeTrigger calls after init() won't be included; update the
code to either (A) add a clear doc comment on registerRowChangeTrigger and
init() stating the required declare-before-init ordering, and (B) add a runtime
dev-mode assertion in registerRowChangeTrigger that throws/logs if called after
init() (or conversely in init() set a flag like this.initialized and have
registerRowChangeTrigger check it), referencing registerRowChangeTrigger, init,
collectRowChangeTriggerStatements, toCascadeSqlBlock and the
cascadeTemplate/BulldozerTimeFoldDownstreamCascade upsert so reviewers can find
the relevant spots to change.
apps/backend/prisma/migrations/20260417000000_bulldozer_timefold_downstream_cascade/migration.sql (1)

230-263: Consider consolidating the two passes over newly_emitted_rows.

The block iterates jsonb_array_elements(newly_emitted_rows) WITH ORDINALITY twice — once at lines 230-244 to write into BulldozerStorageEngine, then again at 250-263 to populate __bulldozer_seq with the same previous_emitted_row_count + rowIndex identifier scheme. The two passes can be fused into a single FOR ... IN SELECT that performs both inserts, avoiding re-parsing/re-expanding the JSON array and keeping the identifier construction in one place.

♻️ Example consolidation
-      FOR new_row_record IN
-        SELECT
-          "rows"."rowData" AS "rowData",
-          "rows"."rowIndex" AS "rowIndex"
-        FROM jsonb_array_elements(newly_emitted_rows) WITH ORDINALITY AS "rows"("rowData", "rowIndex")
-      LOOP
-        INSERT INTO "BulldozerStorageEngine" ("id", "keyPath", "value")
-        VALUES (
-          gen_random_uuid(),
-          rows_path || ARRAY[to_jsonb((queued_row."rowIdentifier" || ':' || (previous_emitted_row_count + new_row_record."rowIndex")::text)::text)]::jsonb[],
-          jsonb_build_object('rowData', new_row_record."rowData")
-        )
-        ON CONFLICT ("keyPath") DO UPDATE
-        SET "value" = EXCLUDED."value";
-      END LOOP;
-
-      -- Accumulate the newly-emitted rows into __bulldozer_seq under this
-      -- timefold's cascade input name, shaped like timeFoldChangesTableName
-      -- (oldRowData/newRowData diff). Queue-drained emissions are always
-      -- new rows, so oldRowData/oldRowSortKey are null.
-      IF current_cascade_input_name IS NOT NULL THEN
-        INSERT INTO "__bulldozer_seq" ("__output_name", "__output_row")
-        SELECT
-          current_cascade_input_name,
-          jsonb_build_object(
-            'groupKey', queued_row."groupKey",
-            'rowIdentifier', queued_row."rowIdentifier" || ':' || (previous_emitted_row_count + "emitted"."rowIndex")::text,
-            'oldRowSortKey', 'null'::jsonb,
-            'newRowSortKey', 'null'::jsonb,
-            'oldRowData', 'null'::jsonb,
-            'newRowData', "emitted"."rowData"
-          )
-        FROM jsonb_array_elements(newly_emitted_rows) WITH ORDINALITY AS "emitted"("rowData", "rowIndex");
-      END IF;
+      FOR new_row_record IN
+        SELECT
+          "rows"."rowData" AS "rowData",
+          "rows"."rowIndex" AS "rowIndex",
+          queued_row."rowIdentifier" || ':' || (previous_emitted_row_count + "rows"."rowIndex")::text AS "emittedRowId"
+        FROM jsonb_array_elements(newly_emitted_rows) WITH ORDINALITY AS "rows"("rowData", "rowIndex")
+      LOOP
+        INSERT INTO "BulldozerStorageEngine" ("id", "keyPath", "value")
+        VALUES (
+          gen_random_uuid(),
+          rows_path || ARRAY[to_jsonb(new_row_record."emittedRowId")]::jsonb[],
+          jsonb_build_object('rowData', new_row_record."rowData")
+        )
+        ON CONFLICT ("keyPath") DO UPDATE
+        SET "value" = EXCLUDED."value";
+
+        IF current_cascade_input_name IS NOT NULL THEN
+          INSERT INTO "__bulldozer_seq" ("__output_name", "__output_row")
+          VALUES (
+            current_cascade_input_name,
+            jsonb_build_object(
+              'groupKey', queued_row."groupKey",
+              'rowIdentifier', new_row_record."emittedRowId",
+              'oldRowSortKey', 'null'::jsonb,
+              'newRowSortKey', 'null'::jsonb,
+              'oldRowData', 'null'::jsonb,
+              'newRowData', new_row_record."rowData"
+            )
+          );
+        END IF;
+      END LOOP;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/backend/prisma/migrations/20260417000000_bulldozer_timefold_downstream_cascade/migration.sql`
around lines 230 - 263, The code currently iterates
jsonb_array_elements(newly_emitted_rows) twice—once in the FOR new_row_record
loop to INSERT into "BulldozerStorageEngine" and again with
jsonb_array_elements(...) AS "emitted" to INSERT into "__bulldozer_seq";
consolidate by using the single FOR new_row_record IN SELECT ... WITH ORDINALITY
over newly_emitted_rows (same as the existing FOR new_row_record) and inside
that loop perform both the INSERT into "BulldozerStorageEngine" (using rows_path
|| ARRAY[to_jsonb(... previous_emitted_row_count + new_row_record."rowIndex"
...)]) and, guarded by IF current_cascade_input_name IS NOT NULL, the INSERT
into "__bulldozer_seq" (constructing the same rowIdentifier and newRowData from
new_row_record."rowData"), then remove the second jsonb_array_elements pass to
avoid double-parsing.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@apps/backend/src/lib/payments/schema/__tests__/integration-1-3-queue-drained.test.ts`:
- Around line 241-249: The test currently guards the assertion with "if
(midPeriodRow != null)" so a missing midPeriodRow silently passes; change this
to assert the row exists and then assert its quota value: remove the conditional
branch around the midPeriodRow check, add an unconditional expectation that
midPeriodRow is defined (e.g., expect(midPeriodRow).toBeDefined()) and then
assert expect(midPeriodRow.itemQuantities.quota).toBe(200). Update references to
the variables used in the snippet (rows, midPeriodRow, itemQuantities.quota) so
the test fails if no mid-period repeat row is emitted.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 7853034d-28e5-4106-ac90-660b3558b45c

📥 Commits

Reviewing files that changed from the base of the PR and between f4ca6cb and 946789f.

📒 Files selected for processing (14)
  • apps/backend/prisma/migrations/20260417000000_bulldozer_timefold_downstream_cascade/migration.sql
  • apps/backend/prisma/migrations/20260417000000_bulldozer_timefold_downstream_cascade/tests/downstream-cascade.ts
  • apps/backend/src/lib/bulldozer/db/index.fuzz.test.ts
  • apps/backend/src/lib/bulldozer/db/index.perf.test.ts
  • apps/backend/src/lib/bulldozer/db/index.test.ts
  • apps/backend/src/lib/bulldozer/db/index.ts
  • apps/backend/src/lib/bulldozer/db/tables/time-fold-table.ts
  • apps/backend/src/lib/bulldozer/db/test-sql-loaders.ts
  • apps/backend/src/lib/bulldozer/db/timefold-queue-downstream.test.ts
  • apps/backend/src/lib/payments/schema/__tests__/integration-1-3-queue-drained.test.ts
  • apps/backend/src/lib/payments/schema/__tests__/integration-1-3.test.ts
  • apps/backend/src/lib/payments/schema/__tests__/phase-1.test.ts
  • apps/backend/src/lib/payments/schema/__tests__/test-helpers.ts
  • apps/backend/src/lib/payments/schema/phase-1/subscription-timefold-algo.ts

Copilot AI left a comment

Pull request overview

Fixes several payment/timefold correctness issues by (1) ensuring queued (pg_cron) timefold ticks propagate through downstream cascades, (2) expiring when-repeated grants at subscription end, and (3) stabilizing IGR transaction ID generation to avoid numeric formatting mismatches.

Changes:

  • Add BulldozerTimeFoldDownstreamCascade registry + update bulldozer_timefold_process_queue() to execute stored downstream cascade templates for queue-drained emissions.
  • Update subscription + one-time-purchase timefold reducers to prevent decimal-tailed millis values from leaking into txn IDs; expire when-repeated grants on subscription-end.
  • Add extensive integration/regression tests for cascade propagation, idempotency, and payments lifecycle across inline vs queue-drained paths.

Reviewed changes

Copilot reviewed 18 out of 18 changed files in this pull request and generated 3 comments.

Summary per file:

  • apps/backend/src/lib/payments/schema/phase-1/subscription-timefold-algo.ts — Cast millis to integer form for stable IGR txn IDs; expire when-repeated grants on subscription end.
  • apps/backend/src/lib/payments/schema/phase-1/otp-timefold-algo.ts — Same millis/txn-ID stabilization for the one-time-purchase repeat logic.
  • apps/backend/src/lib/payments/schema/__tests__/test-helpers.ts — Add test DB options to exercise the queued path and optionally install the real process_queue() function.
  • apps/backend/src/lib/payments/schema/__tests__/phase-1.test.ts — Add unit-style regression coverage for subscription-end expiring when-repeated grants.
  • apps/backend/src/lib/payments/schema/__tests__/integration-1-3.test.ts — Add end-to-end regression tests for IGR txnId formatting and grant-expiration behavior.
  • apps/backend/src/lib/payments/schema/__tests__/integration-1-3-queue-drained.test.ts — New: payments integration tests specifically for pg_cron/queue-drained propagation.
  • apps/backend/src/lib/bulldozer/db/timefold-queue-downstream.test.ts — New: asserts that process_queue() triggers downstream cascades (filter/map/etc.), including edge cases.
  • apps/backend/src/lib/bulldozer/db/test-sql-loaders.ts — New: test-only loader to extract the process_queue() body from migration SQL.
  • apps/backend/src/lib/bulldozer/db/tables/time-fold-table.ts — Compile/store downstream cascade templates during init; clean up on delete.
  • apps/backend/src/lib/bulldozer/db/index.ts — Add toCascadeSqlBlock() plus safe dollar-quote tag selection to prevent delimiter collisions.
  • apps/backend/src/lib/bulldozer/db/index.test.ts — Update test DB setup to include the new cascade registry table.
  • apps/backend/src/lib/bulldozer/db/index.perf.test.ts — Update perf test DB setup to include the cascade registry table.
  • apps/backend/src/lib/bulldozer/db/index.fuzz.test.ts — Update fuzz test DB setup to include the cascade registry table.
  • apps/backend/scripts/verify-data-integrity/index.ts — Re-enable the payments verifier invocation when payments config is present.
  • apps/backend/prisma/schema.prisma — Document external management of BulldozerTimeFoldDownstreamCascade (no Prisma model).
  • apps/backend/prisma/migrations/20260417000000_bulldozer_timefold_downstream_cascade/tests/downstream-cascade.ts — New: migration-level test validating the cascade registry, queue-drain execution, and idempotency.
  • apps/backend/prisma/migrations/20260417000000_bulldozer_timefold_downstream_cascade/migration.sql — New: create the cascade registry table and rewrite bulldozer_timefold_process_queue() to execute templates.
  • apps/backend/prisma.config.ts — Mark BulldozerTimeFoldDownstreamCascade as an external table for Prisma tooling.


Comment thread apps/backend/src/lib/payments/schema/phase-1/otp-timefold-algo.ts Outdated
Comment thread apps/backend/src/lib/bulldozer/db/tables/time-fold-table.ts Outdated
Comment thread apps/backend/src/lib/payments/schema/phase-1/subscription-timefold-algo.ts Outdated
Contributor

@coderabbitai coderabbitai Bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
apps/backend/src/lib/bulldozer/db/tables/time-fold-table.ts (1)

479-534: ⚠️ Potential issue | 🟡 Minor

Cascade template is captured at init() time—no enforcement that triggers registered before initialization.

collectRowChangeTriggerStatements is called once inside init(), and the result persists to BulldozerTimeFoldDownstreamCascade. While all downstream tables currently register their triggers during declare*Table() calls (before any init() runs), there is no code-level guard preventing registerRowChangeTrigger from being called after init() has completed. If that occurs, the persisted cascade becomes stale, and queue-drained emissions will bypass new downstream nodes.

The schema construction pattern (all declarations complete, then batch init) naturally maintains this invariant, but it would be safer to encode it explicitly—either with an assertion in registerRowChangeTrigger that fails post-init, or by re-emitting the cascade upsert whenever triggers change.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/lib/bulldozer/db/tables/time-fold-table.ts` around lines 479
- 534, The init() call computes cascadeTemplate via
collectRowChangeTriggerStatements and persists it to
BulldozerTimeFoldDownstreamCascade, but registerRowChangeTrigger can still be
called later, leaving the persisted cascade stale; fix by either (A) adding a
runtime guard in registerRowChangeTrigger that throws if the timefold is already
initialized (introduce/check an "initialized" flag set at the end of init()), or
(B) recompute and upsert the cascade when triggers change (call
collectRowChangeTriggerStatements + toCascadeSqlBlock and run the same INSERT
... ON CONFLICT upsert used in init() to update
"cascadeTemplate"/"cascadeInputName"/"updatedAt"); reference
collectRowChangeTriggerStatements, toCascadeSqlBlock, init(),
registerRowChangeTrigger, cascadeTemplate and BulldozerTimeFoldDownstreamCascade
when implementing.
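Option (A) from the prompt above can be sketched as follows. This is an illustrative shape only, assuming a class-based table with an `initialized` flag; the actual `TimeFoldTable` implementation in this repo is more involved and may differ:

```typescript
// Hypothetical sketch of the post-init registration guard (option A).
// Names mirror the review comment; the bodies are illustrative.
class TimeFoldTable {
  private initialized = false;
  private triggers: string[] = [];

  registerRowChangeTrigger(statement: string): void {
    if (this.initialized) {
      throw new Error(
        "registerRowChangeTrigger called after init(); " +
        "the persisted cascade template would become stale",
      );
    }
    this.triggers.push(statement);
  }

  init(): void {
    // ... compute and persist the cascade template from this.triggers ...
    this.initialized = true; // set last, so the guard above holds afterward
  }
}
```

With this guard, a late `registerRowChangeTrigger` fails loudly instead of silently leaving the persisted cascade stale.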
🧹 Nitpick comments (2)
apps/backend/src/lib/bulldozer/db/tables/time-fold-table.ts (1)

22-29: Keep TIMEFOLD_OUTPUT_CHANGE_COLUMNS in lockstep with timeFoldChangesTableName.

This constant and the schema passed to .toStatement(timeFoldChangesTableName, ...) at line 426 are now two sources of truth for the same column shape. If one changes and the other doesn't, the cascade will either fail at INSERT INTO __bulldozer_seq or silently mis-type downstream columns. Consider referencing the same literal at both call sites (e.g. use TIMEFOLD_OUTPUT_CHANGE_COLUMNS on line 426 as well).

♻️ Proposed deduplication
-      `.toStatement(timeFoldChangesTableName, '"groupKey" jsonb, "rowIdentifier" text, "oldRowSortKey" jsonb, "newRowSortKey" jsonb, "oldRowData" jsonb, "newRowData" jsonb'),
+      `.toStatement(timeFoldChangesTableName, TIMEFOLD_OUTPUT_CHANGE_COLUMNS),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/lib/bulldozer/db/tables/time-fold-table.ts` around lines 22
- 29, The TIMEFOLD_OUTPUT_CHANGE_COLUMNS string and the schema passed to
.toStatement(timeFoldChangesTableName, ...) are duplicated sources of truth;
replace the hard-coded column schema used in the .toStatement call with the
TIMEFOLD_OUTPUT_CHANGE_COLUMNS constant so timeFoldChangesTableName,
TIMEFOLD_OUTPUT_CHANGE_COLUMNS, and the .toStatement(...) invocation all
reference the same literal (or import the constant into the scope used by
createApplyChangesStatements) to ensure the column shape stays in lockstep.
apps/backend/prisma/migrations/20260417000000_bulldozer_timefold_downstream_cascade/migration.sql (1)

295-308: Cleanup branch is effectively dead on the normal drain path.

Lines 212–228 unconditionally upsert a state_row_path whose keyPathParent is states_path, so the NOT EXISTS ... keyPathParent = states_path predicate on line 300–304 is false for every row the drain actually processed. If this block exists to handle an exotic reducer-deleted-its-own-state scenario, please drop a comment explaining it; otherwise consider removing it so readers don't assume it's doing work.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/backend/prisma/migrations/20260417000000_bulldozer_timefold_downstream_cascade/migration.sql`
around lines 295 - 308, The cleanup IF block that checks NOT EXISTS for
"keyPathParent" = states_path is effectively dead because earlier code upserts a
state_row_path with keyPathParent = states_path (see rows_path, states_path,
group_path and "BulldozerStorageEngine"), so either remove this obsolete
conditional DELETE block entirely or add an explicit comment above it explaining
the intended exotic edge-case (e.g., reducer deleted its own state after drain)
and why the NOT EXISTS check remains necessary; update the code around the
IF...DELETE that references rows_path, states_path, and group_path accordingly
to reflect the chosen action.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 053a56c0-5e7d-4ae3-a722-bd650f09a331

📥 Commits

Reviewing files that changed from the base of the PR and between 946789f and f72258e.

📒 Files selected for processing (17)
  • apps/backend/prisma.config.ts
  • apps/backend/prisma/migrations/20260417000000_bulldozer_timefold_downstream_cascade/migration.sql
  • apps/backend/prisma/migrations/20260417000000_bulldozer_timefold_downstream_cascade/tests/downstream-cascade.ts
  • apps/backend/prisma/schema.prisma
  • apps/backend/scripts/verify-data-integrity/index.ts
  • apps/backend/src/lib/bulldozer/db/index.fuzz.test.ts
  • apps/backend/src/lib/bulldozer/db/index.perf.test.ts
  • apps/backend/src/lib/bulldozer/db/index.test.ts
  • apps/backend/src/lib/bulldozer/db/index.ts
  • apps/backend/src/lib/bulldozer/db/tables/time-fold-table.ts
  • apps/backend/src/lib/bulldozer/db/test-sql-loaders.ts
  • apps/backend/src/lib/bulldozer/db/timefold-queue-downstream.test.ts
  • apps/backend/src/lib/payments/schema/__tests__/integration-1-3-queue-drained.test.ts
  • apps/backend/src/lib/payments/schema/__tests__/integration-1-3.test.ts
  • apps/backend/src/lib/payments/schema/__tests__/test-helpers.ts
  • apps/backend/src/lib/payments/schema/phase-1/otp-timefold-algo.ts
  • apps/backend/src/lib/payments/schema/phase-1/subscription-timefold-algo.ts
✅ Files skipped from review due to trivial changes (8)
  • apps/backend/prisma.config.ts
  • apps/backend/prisma/schema.prisma
  • apps/backend/src/lib/bulldozer/db/index.test.ts
  • apps/backend/src/lib/bulldozer/db/index.perf.test.ts
  • apps/backend/src/lib/bulldozer/db/index.fuzz.test.ts
  • apps/backend/prisma/migrations/20260417000000_bulldozer_timefold_downstream_cascade/tests/downstream-cascade.ts
  • apps/backend/src/lib/bulldozer/db/timefold-queue-downstream.test.ts
  • apps/backend/src/lib/bulldozer/db/index.ts
🚧 Files skipped from review as they are similar to previous changes (4)
  • apps/backend/src/lib/bulldozer/db/test-sql-loaders.ts
  • apps/backend/src/lib/payments/schema/__tests__/integration-1-3-queue-drained.test.ts
  • apps/backend/src/lib/payments/schema/__tests__/integration-1-3.test.ts
  • apps/backend/src/lib/payments/schema/__tests__/test-helpers.ts

…s and downstream transactions

PG 12+ returns EXTRACT(EPOCH ...) as NUMERIC with scale 6, so the reducer's currentMillis serialized into JSONB as "604800000.000000". transactions.ts built igr txn IDs via ->>effectiveAtMillis (decimal-tailed) while the same reducer emitted item-quantity-expire.adjustedTransactionId via ::bigint::text (decimal-free). The mismatch meant a subscription-end following an item-grant-repeat silently failed to expire the igr's grant, leaving when-repeated balances stuck at the last-granted quantity.
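The mismatch above can be reproduced outside Postgres. In this sketch the `igr:` id format is hypothetical (the real txn-id construction lives in transactions.ts); it only shows why the decimal-tailed and cast-to-bigint renderings of the same millisecond value produce different ids:

```typescript
// What JSONB ->> 'effectiveAtMillis' returned (NUMERIC with scale 6):
const fromJsonb = "604800000.000000";
// What the reducer emitted via ::bigint::text:
const fromBigintCast = "604800000";

// Hypothetical id shape, for illustration only.
const igrTxnId = (millis: string) => `igr:${millis}`;

// The two constructions disagree, so the expire lookup misses:
console.log(igrTxnId(fromJsonb) === igrTxnId(fromBigintCast)); // false

// The fix normalizes to an integer string before building the id,
// mirroring the ::bigint::text cast on the SQL side.
const normalizeMillis = (raw: string): string => String(Math.trunc(Number(raw)));

console.log(igrTxnId(normalizeMillis(fromJsonb)) === igrTxnId(fromBigintCast)); // true
```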
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@apps/backend/prisma/migrations/20260417000000_bulldozer_timefold_downstream_cascade/migration.sql`:
- Around line 107-113: Replace the targeted cleanup that only deletes rows
matching current_cascade_input_name with an unconditional purge of the temp
sequence table to avoid leftover intermediate outputs: inside the conditional
block around current_cascade_input_name (the IF ... THEN ... END IF; that runs
before each cascade execution), change the logic so the function executes DELETE
FROM "__bulldozer_seq" unconditionally (remove the WHERE "__output_name" =
current_cascade_input_name filter and the NULL check) to ensure all
__bulldozer_seq rows are cleared between cascade executions.

In `@apps/backend/src/lib/bulldozer/db/index.ts`:
- Around line 216-224: buildExecutableStatementsBlock currently always
normalizes "$$" which corrupts literal "$$" strings when used in the cascade
path; modify buildExecutableStatementsBlock to accept a boolean flag (e.g.,
normalizeDollarQuotes = true) and only perform the $$ -> $__bulldozer_do_inline$
rewrite when that flag is true, update toExecutableSqlTransaction to call
buildExecutableStatementsBlock with normalizeDollarQuotes=true, and change the
cascade caller (the code around toCascadeSqlBlock / the call site using
buildExecutableStatementsBlock in this diff) to pass normalizeDollarQuotes=false
so the outer randomized DO tag (chosen by chooseSafeDollarQuoteTag) can safely
contain any user "$$" literals.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: e0c8483d-4d1d-4c94-a673-7abd3c05813e

📥 Commits

Reviewing files that changed from the base of the PR and between f72258e and 88e681b.

📒 Files selected for processing (17)
  • apps/backend/prisma.config.ts
  • apps/backend/prisma/migrations/20260417000000_bulldozer_timefold_downstream_cascade/migration.sql
  • apps/backend/prisma/migrations/20260417000000_bulldozer_timefold_downstream_cascade/tests/downstream-cascade.ts
  • apps/backend/prisma/schema.prisma
  • apps/backend/src/lib/bulldozer/db/index.fuzz.test.ts
  • apps/backend/src/lib/bulldozer/db/index.perf.test.ts
  • apps/backend/src/lib/bulldozer/db/index.test.ts
  • apps/backend/src/lib/bulldozer/db/index.ts
  • apps/backend/src/lib/bulldozer/db/row-change-trigger-dispatch.ts
  • apps/backend/src/lib/bulldozer/db/tables/time-fold-table.ts
  • apps/backend/src/lib/bulldozer/db/test-sql-loaders.ts
  • apps/backend/src/lib/bulldozer/db/timefold-queue-downstream.test.ts
  • apps/backend/src/lib/payments/schema/__tests__/integration-1-3-queue-drained.test.ts
  • apps/backend/src/lib/payments/schema/__tests__/integration-1-3.test.ts
  • apps/backend/src/lib/payments/schema/__tests__/test-helpers.ts
  • apps/backend/src/lib/payments/schema/phase-1/otp-timefold-algo.ts
  • apps/backend/src/lib/payments/schema/phase-1/subscription-timefold-algo.ts
✅ Files skipped from review due to trivial changes (3)
  • apps/backend/src/lib/bulldozer/db/row-change-trigger-dispatch.ts
  • apps/backend/src/lib/bulldozer/db/index.test.ts
  • apps/backend/prisma.config.ts
🚧 Files skipped from review as they are similar to previous changes (6)
  • apps/backend/prisma/schema.prisma
  • apps/backend/src/lib/bulldozer/db/index.perf.test.ts
  • apps/backend/src/lib/bulldozer/db/test-sql-loaders.ts
  • apps/backend/src/lib/bulldozer/db/index.fuzz.test.ts
  • apps/backend/src/lib/payments/schema/phase-1/subscription-timefold-algo.ts
  • apps/backend/src/lib/bulldozer/db/timefold-queue-downstream.test.ts

Comment thread apps/backend/src/lib/bulldozer/db/index.ts
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (2)
apps/backend/src/lib/bulldozer/db/timefold-queue-downstream.test.ts (1)

357-380: Avoid casts in the row-data assertions.

The as object / as Record<string, unknown> casts bypass the runtime checks these tests are otherwise careful about. A small helper that validates object shape and numeric fields keeps the assertions defensive without weakening the type system.

Example helper direction
+  function numberField(value: unknown, field: string): number {
+    if (value == null || typeof value !== "object") {
+      throw new Error(`Expected object rowdata, got ${typeof value}`);
+    }
+    const fieldValue = Reflect.get(value, field);
+    if (typeof fieldValue !== "number") {
+      throw new Error(`Expected numeric '${field}', got ${typeof fieldValue}`);
+    }
+    return fieldValue;
+  }
+
     expect(filtered.map((r) => r.rowdata).sort((a, b) =>
-      Number(Reflect.get(a as object, "value")) - Number(Reflect.get(b as object, "value"))
+      numberField(a, "value") - numberField(b, "value")
     )).toEqual([

As per coding guidelines, “Do NOT use as/any/type casts or anything else like that to bypass the type system.”

Also applies to: 437-448

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/backend/src/lib/bulldozer/db/timefold-queue-downstream.test.ts` around
lines 357 - 380, The tests are using casts like `as object`/`as Record<string,
unknown>` around `r.rowdata` which bypass runtime checks; replace those casts by
adding a small helper (e.g., `assertRowDataIsObjectWithNumber(fieldName: string,
row: unknown): number`) that validates `r.rowdata` is an object and that the
specified numeric field exists and is a number, returning the numeric value;
then update the sort and equality assertions for `filtered`, `mapped`, and
`reMapped` to call this helper to obtain numeric values for sorting/comparison
instead of using `Reflect.get(... as object)` so the tests remain defensive
without using type casts.
apps/backend/prisma/migrations/20260417000000_bulldozer_timefold_downstream_cascade/tests/downstream-cascade.ts (1)

23-36: Assert the registry schema contract, not just column order.

This guard would still pass if tableStoragePath stopped being JSONB[], cascadeInputName became nullable, or the primary key on tableStoragePath was dropped. Please include type/nullability and PK assertions so the migration test protects the actual contract from migration.sql.

Test hardening direction
-  const registryColumnRows = await sql<Array<{ column_name: string }>>`
-    SELECT column_name
+  const registryColumnRows = await sql<Array<{
+    column_name: string,
+    data_type: string,
+    udt_name: string,
+    is_nullable: string,
+  }>>`
+    SELECT column_name, data_type, udt_name, is_nullable
     FROM information_schema.columns
     WHERE table_schema = 'public'
       AND table_name = 'BulldozerTimeFoldDownstreamCascade'
     ORDER BY ordinal_position
   `;
-  expect(registryColumnRows.map((r) => r.column_name)).toEqual([
-    "tableStoragePath",
-    "cascadeInputName",
-    "cascadeTemplate",
-    "createdAt",
-    "updatedAt",
+  expect(registryColumnRows).toMatchObject([
+    { column_name: "tableStoragePath", data_type: "ARRAY", udt_name: "_jsonb", is_nullable: "NO" },
+    { column_name: "cascadeInputName", data_type: "text", is_nullable: "NO" },
+    { column_name: "cascadeTemplate", data_type: "text", is_nullable: "YES" },
+    { column_name: "createdAt", data_type: "timestamp without time zone", is_nullable: "NO" },
+    { column_name: "updatedAt", data_type: "timestamp without time zone", is_nullable: "NO" },
   ]);
+
+  const registryPkRows = await sql<Array<{ column_name: string }>>`
+    SELECT kcu.column_name
+    FROM information_schema.table_constraints tc
+    JOIN information_schema.key_column_usage kcu
+      ON tc.constraint_name = kcu.constraint_name
+     AND tc.table_schema = kcu.table_schema
+    WHERE tc.table_schema = 'public'
+      AND tc.table_name = 'BulldozerTimeFoldDownstreamCascade'
+      AND tc.constraint_type = 'PRIMARY KEY'
+    ORDER BY kcu.ordinal_position
+  `;
+  expect(registryPkRows.map((r) => r.column_name)).toEqual(["tableStoragePath"]);

Based on learnings, “When writing database migration files, ALWAYS ALWAYS add tests for all the potential edge cases!”

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/backend/prisma/migrations/20260417000000_bulldozer_timefold_downstream_cascade/tests/downstream-cascade.ts`
around lines 23 - 36, The test currently only asserts column order for table
BulldozerTimeFoldDownstreamCascade using registryColumnRows; enhance it to assert
the full schema contract by querying information_schema.columns (or pg_catalog)
to verify each column's data_type and is_nullable for "tableStoragePath",
"cascadeInputName", "cascadeTemplate", "createdAt", and "updatedAt", and add an
assertion that "tableStoragePath" is of type JSONB[] (or correct pg type) and
non-null if required; additionally query pg_constraint/pg_index to assert the
primary key exists on "tableStoragePath" (or the expected PK column(s)) and
include explicit expectations for nullability of "cascadeInputName" and any
other constraints to ensure the migration preserves types, nullability, and PKs
rather than only column order.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@apps/backend/prisma/migrations/20260417000000_bulldozer_timefold_downstream_cascade/migration.sql`:
- Around line 95-133: The migration currently treats a missing
BulldozerTimeFoldDownstreamCascade row as “no cascade” and proceeds to drain
BulldozerTimeFoldQueue, which loses downstream work; change the lookup so you
detect whether a registry row was found (e.g. capture FOUND or set a boolean
like current_cascade_exists after the SELECT INTO current_cascade_input_name,
current_cascade_template FROM "BulldozerTimeFoldDownstreamCascade" ...), and if
no row was found then skip the inner drain loop (leave queued rows alone)
instead of proceeding to delete queued_row from "BulldozerTimeFoldQueue";
preserve the existing semantics where a present row with
current_cascade_template IS NULL still allows draining (initialized but no
triggers).

In `@apps/backend/src/lib/bulldozer/db/timefold-queue-downstream.test.ts`:
- Around line 43-55: getTestDbUrls currently constructs a random dbName then
later tests re-derive it by parsing the URL which can fail when the connection
string query contains slashes; update getTestDbUrls to include the generated
dbName in its return object (e.g., return { full, base, dbName }) and change
callers (tests that use the parsed name) to use the returned dbName instead of
extracting it from full or base so CREATE DATABASE uses the exact generated
name.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: b198467f-5e5f-4fad-8ba4-ceb8cb881e50

📥 Commits

Reviewing files that changed from the base of the PR and between 88e681b and a98283e.

📒 Files selected for processing (16)
  • apps/backend/prisma.config.ts
  • apps/backend/prisma/migrations/20260417000000_bulldozer_timefold_downstream_cascade/migration.sql
  • apps/backend/prisma/migrations/20260417000000_bulldozer_timefold_downstream_cascade/tests/downstream-cascade.ts
  • apps/backend/prisma/schema.prisma
  • apps/backend/src/lib/bulldozer/db/index.fuzz.test.ts
  • apps/backend/src/lib/bulldozer/db/index.perf.test.ts
  • apps/backend/src/lib/bulldozer/db/index.test.ts
  • apps/backend/src/lib/bulldozer/db/index.ts
  • apps/backend/src/lib/bulldozer/db/row-change-trigger-dispatch.ts
  • apps/backend/src/lib/bulldozer/db/tables/time-fold-table.ts
  • apps/backend/src/lib/bulldozer/db/test-sql-loaders.ts
  • apps/backend/src/lib/bulldozer/db/timefold-queue-downstream.test.ts
  • apps/backend/src/lib/payments/schema/__tests__/integration-1-3-queue-drained.test.ts
  • apps/backend/src/lib/payments/schema/__tests__/integration-1-3.test.ts
  • apps/backend/src/lib/payments/schema/__tests__/test-helpers.ts
  • apps/backend/src/lib/payments/schema/phase-1/otp-timefold-algo.ts
✅ Files skipped from review due to trivial changes (5)
  • apps/backend/prisma.config.ts
  • apps/backend/src/lib/bulldozer/db/index.perf.test.ts
  • apps/backend/src/lib/bulldozer/db/test-sql-loaders.ts
  • apps/backend/prisma/schema.prisma
  • apps/backend/src/lib/bulldozer/db/index.test.ts

Comment thread apps/backend/src/lib/bulldozer/db/timefold-queue-downstream.test.ts
This meant downstream tables of the timefold weren't updating with the new rows. This wasn't caught in our old tests, since they just advanced the timestamp.
PK on JSONB[] (tableStoragePath) — not expressible via Prisma's @id (list types are treated as non-required). Managed entirely by bulldozer code via raw SQL. See the schema.prisma note next to the BulldozerTimeFoldMetadata model.
@nams1570
Collaborator Author

@greptileai refresh summary

@N2D4 N2D4 merged commit 847d14d into dev Apr 18, 2026
33 of 38 checks passed
@N2D4 N2D4 deleted the bs-timefold-fixes branch April 18, 2026 21:17
promptless Bot added a commit that referenced this pull request Apr 18, 2026
Documents fix for subscription item expiration and scheduled payment
event processing issues from PR #1348.