Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in your review settings.
📝 Walkthrough

Adds a middleware interception hook (the first non-undefined intercept result wins) and a content-hashing flow.

Changes

Interception + Content-Hashing Flow
```mermaid
sequenceDiagram
    participant Runtime as RuntimeCore
    participant Runner as runWithMiddleware
    participant M1 as Middleware A
    participant M2 as Middleware B
    participant Driver
    participant Consumer
    rect rgba(200,150,255,0.5)
        Note over Runtime,Consumer: Intercept Hit Path
        Runtime->>Runner: execute(plan)
        Runner->>M1: intercept(plan, ctx)
        M1-->>Runner: InterceptResult { rows }
        Runner->>M1: afterExecute({ source: 'middleware', rowCount, completed })
        Runner->>Consumer: yield rows
        Runner-->>Runtime: complete
    end
    rect rgba(150,200,255,0.5)
        Note over Runtime,Consumer: Passthrough (Miss) Path
        Runtime->>Runner: execute(plan)
        Runner->>M1: intercept(plan, ctx)
        M1-->>Runner: undefined
        Runner->>M2: intercept(plan, ctx)
        M2-->>Runner: undefined
        Runner->>M1: beforeExecute(plan, ctx)
        Runner->>Driver: runDriver(plan)
        Driver-->>Runner: AsyncIterable<row>
        Runner->>M1: onRow(row, ctx)
        Runner->>Consumer: yield row
        Runner->>M1: afterExecute({ source: 'driver', rowCount, completed })
        Runner-->>Runtime: complete
    end
```
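The two paths in the diagram can be sketched as a small runner loop. This is an illustrative reconstruction, not the actual `runWithMiddleware` implementation; the `Row`, `InterceptResult`, and `Middleware` shapes are assumptions inferred from the diagram (the middleware `ctx` argument is omitted for brevity):

```typescript
// Illustrative reconstruction of the intercept-or-passthrough flow shown in
// the diagram above. Names and shapes are assumptions, not the real API.
type Row = Record<string, unknown>;

interface InterceptResult {
  rows: Row[];
}

interface Middleware {
  name: string;
  intercept?(plan: unknown): Promise<InterceptResult | undefined>;
  beforeExecute?(plan: unknown): void;
  onRow?(row: Row): void;
  afterExecute?(result: {
    source: 'middleware' | 'driver';
    rowCount: number;
    completed: boolean;
  }): void;
}

async function* runWithMiddleware(
  plan: unknown,
  middlewares: Middleware[],
  runDriver: (plan: unknown) => AsyncIterable<Row>,
): AsyncGenerator<Row> {
  // Intercept hit path: the first middleware returning a result wins,
  // and the driver is never invoked.
  for (const mw of middlewares) {
    const hit = await mw.intercept?.(plan);
    if (hit !== undefined) {
      yield* hit.rows;
      for (const m of middlewares) {
        m.afterExecute?.({ source: 'middleware', rowCount: hit.rows.length, completed: true });
      }
      return;
    }
  }
  // Passthrough (miss) path: every intercept returned undefined, so run the
  // driver and stream its rows through onRow before yielding to the consumer.
  for (const mw of middlewares) mw.beforeExecute?.(plan);
  let rowCount = 0;
  for await (const row of runDriver(plan)) {
    for (const mw of middlewares) mw.onRow?.(row);
    rowCount += 1;
    yield row;
  }
  for (const mw of middlewares) {
    mw.afterExecute?.({ source: 'driver', rowCount, completed: true });
  }
}
```

Note that `afterExecute` fires in both paths with a `source` discriminator, which is what lets telemetry middleware distinguish cache hits from driver executions.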
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed
❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
@prisma-next/mongo-runtime
@prisma-next/family-mongo
@prisma-next/sql-runtime
@prisma-next/family-sql
@prisma-next/extension-arktype-json
@prisma-next/middleware-telemetry
@prisma-next/mongo
@prisma-next/extension-paradedb
@prisma-next/extension-pgvector
@prisma-next/postgres
@prisma-next/sql-orm-client
@prisma-next/sqlite
@prisma-next/target-mongo
@prisma-next/adapter-mongo
@prisma-next/driver-mongo
@prisma-next/contract
@prisma-next/utils
@prisma-next/config
@prisma-next/errors
@prisma-next/framework-components
@prisma-next/operations
@prisma-next/ts-render
@prisma-next/contract-authoring
@prisma-next/ids
@prisma-next/psl-parser
@prisma-next/psl-printer
@prisma-next/cli
@prisma-next/emitter
@prisma-next/migration-tools
prisma-next
@prisma-next/vite-plugin-contract-emit
@prisma-next/mongo-codec
@prisma-next/mongo-contract
@prisma-next/mongo-value
@prisma-next/mongo-contract-psl
@prisma-next/mongo-contract-ts
@prisma-next/mongo-emitter
@prisma-next/mongo-schema-ir
@prisma-next/mongo-query-ast
@prisma-next/mongo-orm
@prisma-next/mongo-query-builder
@prisma-next/mongo-lowering
@prisma-next/mongo-wire
@prisma-next/sql-contract
@prisma-next/sql-errors
@prisma-next/sql-operations
@prisma-next/sql-schema-ir
@prisma-next/sql-contract-psl
@prisma-next/sql-contract-ts
@prisma-next/sql-contract-emitter
@prisma-next/sql-lane-query-builder
@prisma-next/sql-relational-core
@prisma-next/sql-builder
@prisma-next/target-postgres
@prisma-next/target-sqlite
@prisma-next/adapter-postgres
@prisma-next/adapter-sqlite
@prisma-next/driver-postgres
@prisma-next/driver-sqlite
Force-pushed from 0bd6ea9 to 2525dbe.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/1-framework/0-foundation/utils/src/canonical-stringify.ts`:
- Around line 79-100: The current serializer accepts any object and uses
Object.keys in writePlainObject, which hides symbol-keyed properties and
collapses non-plain objects (Map, Set, RegExp, class instances) into identical
representations; update the logic so writePlainObject only accepts true plain
objects (check Object.getPrototypeOf(obj) === Object.prototype ||
Object.getPrototypeOf(obj) === null) and reject (throw) any other object types
(Map, Set, RegExp, Date, class instances) upstream in the write function, and
also detect symbol-keyed properties via Object.getOwnPropertySymbols(obj) and
throw if any exist so callers are forced to handle those cases explicitly;
reference writePlainObject and write to locate where to add these guards and
where to surface the errors.
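The guards this comment asks for can be sketched as follows. This is an illustrative sketch only; the real `canonical-stringify.ts` internals (`write`, `writePlainObject`) may differ, and only the rejection rules are taken from the review:

```typescript
// Illustrative guards for the hardening described above.
// Only accept objects whose prototype is Object.prototype or null,
// so Map, Set, RegExp, boxed primitives, and class instances are rejected
// (the serializer is assumed to special-case types like Date upstream).
function isPlainObject(value: object): value is Record<string, unknown> {
  const proto = Object.getPrototypeOf(value);
  return proto === Object.prototype || proto === null;
}

function assertHashable(obj: object): asserts obj is Record<string, unknown> {
  if (!isPlainObject(obj)) {
    throw new TypeError('canonicalStringify: unsupported object type');
  }
  // Object.keys() silently drops symbol keys, so force callers to handle them.
  if (Object.getOwnPropertySymbols(obj).length > 0) {
    throw new TypeError('canonicalStringify: symbol-keyed properties are not supported');
  }
}
```

Throwing (rather than silently skipping) keeps distinct inputs from collapsing onto the same canonical string and hence the same content hash.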
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: d7ab81c9-cbce-4630-a095-9a33d4419846
⛔ Files ignored due to path filters (4)
- pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
- projects/middleware-intercept-and-cache/follow-ups.md is excluded by !projects/**
- projects/middleware-intercept-and-cache/plan.md is excluded by !projects/**
- projects/middleware-intercept-and-cache/spec.md is excluded by !projects/**
📒 Files selected for processing (33)
- packages/1-framework/0-foundation/utils/package.json
- packages/1-framework/0-foundation/utils/src/canonical-stringify.ts
- packages/1-framework/0-foundation/utils/src/exports/canonical-stringify.ts
- packages/1-framework/0-foundation/utils/src/exports/hash-content.ts
- packages/1-framework/0-foundation/utils/src/hash-content.ts
- packages/1-framework/0-foundation/utils/test/canonical-stringify.test.ts
- packages/1-framework/0-foundation/utils/test/hash-content.test.ts
- packages/1-framework/0-foundation/utils/tsdown.config.ts
- packages/1-framework/1-core/framework-components/src/exports/runtime.ts
- packages/1-framework/1-core/framework-components/src/run-with-middleware.ts
- packages/1-framework/1-core/framework-components/src/runtime-middleware.ts
- packages/1-framework/1-core/framework-components/test/mock-family.test.ts
- packages/1-framework/1-core/framework-components/test/run-with-middleware.intercept.test.ts
- packages/1-framework/1-core/framework-components/test/run-with-middleware.test.ts
- packages/1-framework/1-core/framework-components/test/runtime-core.test.ts
- packages/1-framework/1-core/framework-components/test/runtime-core.types.test-d.ts
- packages/1-framework/1-core/framework-components/test/runtime-middleware.types.test-d.ts
- packages/2-mongo-family/7-runtime/package.json
- packages/2-mongo-family/7-runtime/src/content-hash.ts
- packages/2-mongo-family/7-runtime/src/mongo-runtime.ts
- packages/2-mongo-family/7-runtime/test/content-hash.test.ts
- packages/2-mongo-family/7-runtime/test/mongo-middleware.test.ts
- packages/2-mongo-family/7-runtime/test/mongo-runtime.types.test-d.ts
- packages/2-sql/5-runtime/src/content-hash.ts
- packages/2-sql/5-runtime/src/sql-runtime.ts
- packages/2-sql/5-runtime/test/before-compile-chain.test.ts
- packages/2-sql/5-runtime/test/budgets.test.ts
- packages/2-sql/5-runtime/test/content-hash.test.ts
- packages/2-sql/5-runtime/test/intercept-decoding.test.ts
- packages/2-sql/5-runtime/test/lints.test.ts
- packages/3-extensions/middleware-telemetry/src/telemetry-middleware.ts
- packages/3-extensions/middleware-telemetry/test/telemetry-middleware.test.ts
- test/integration/test/cross-package/cross-family-middleware.test.ts
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/1-framework/1-core/framework-components/src/runtime-middleware.ts (1)
89-115: ⚠️ Potential issue | 🟠 Major

Align RuntimeMiddleware's default plan type to match the runtime's actual type.

A middleware authored as plain RuntimeMiddleware has plan: QueryPlan, but ctx.contentHash() accepts only ExecutionPlan. Since ExecutionPlan extends QueryPlan, you cannot pass a QueryPlan where an ExecutionPlan is expected, requiring an unnecessary cast. The docstring confirms these hooks run on the lowered plan (which is an ExecutionPlan). Changing the default from QueryPlan to ExecutionPlan aligns the type with runtime reality and enables contentHash usage without casting.

Suggested direction:

```diff
-export interface RuntimeMiddleware<TPlan extends QueryPlan = QueryPlan> {
+export interface RuntimeMiddleware<TPlan extends ExecutionPlan = ExecutionPlan> {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/1-framework/1-core/framework-components/src/runtime-middleware.ts` around lines 89 - 115, The RuntimeMiddleware generic default should be ExecutionPlan not QueryPlan: update the RuntimeMiddleware<TPlan extends QueryPlan = QueryPlan> declaration so the default type parameter is ExecutionPlan (e.g., RuntimeMiddleware<TPlan extends QueryPlan = ExecutionPlan>) so hooks like intercept and beforeExecute receive the lowered ExecutionPlan and can call ctx.contentHash() without casting; adjust any imports/type names if necessary and ensure references to intercept and beforeExecute in this file remain consistent.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@projects/middleware-intercept-and-cache/plan.md`:
- Around line 42-47: The plan references a stale hashing API and sync behavior:
update all mentions (including the checklist items for hashContent,
SqlRuntimeImpl/MongoRuntimeImpl, tests and acceptance criteria) to match the
landed API where contentHash(exec) returns a Promise<string> and the digest
prefix is "sha512:<hex>" (not "blake2b512:"); adjust descriptions to require
async contentHash signatures (Promise) and change expected regexes in tests to
^sha512:[0-9a-f]{128}$, and ensure references to computeSqlFingerprint are kept
separate from contentHash behavior in the SqlRuntimeImpl and MongoRuntimeImpl
items (use hashContent + canonicalStringify as described but with the
async/sha512 contract).
In `@projects/middleware-intercept-and-cache/spec.md`:
- Line 145: Reword the ambiguous phrase about raw vs decoded rows to clarify
that interceptors/cache middleware store raw (undecoded) rows while the SQL
runtime performs the decode before returning rows to callers; update the text
referencing InterceptResult.rows, runWithMiddleware, and onRow to something like
“cache middleware stores raw rows; the SQL runtime decodes them the same way as
driver rows before yielding to consumers,” and apply the same clarification to
the other occurrence that mentions returned/decoded rows.
- Line 359: The file fails markdownlint MD047 for trailing newlines; open the
spec.md and ensure the end of the file after the "ORM terminal enumeration."
paragraph contains exactly one newline character (no extra blank lines or
missing newline). Remove any extra trailing blank lines and add a single
trailing '\n' so the file ends with one newline.
- Line 133: The spec incorrectly documents
RuntimeMiddlewareContext.contentHash(exec: ExecutionPlan) as returning string
while the runtime API is async (Promise<string>); update the spec text (all
mentions of contentHash, including the later occurrences that duplicate this
contract) to state that contentHash returns a Promise<string> and explain that
implementations must return a bounded, opaque digest produced asynchronously
(e.g., by calling hashContent) so typings and runtime behavior match the
framework API and tests referencing RuntimeMiddlewareContext.contentHash and
ExecutionPlan remain correct.
- Line 133: The spec's `contentHash(exec: ExecutionPlan)` description must be
updated to match the implementation: change the signature to `contentHash(exec:
ExecutionPlan): Promise<string>`, and replace BLAKE2b-512 references with
"SHA-512 via WebCrypto"; specify the output format as the literal prefix
`sha512:` followed by a 128-character hex digest (fixed 135-character string
total). Update every occurrence of the old algorithm/format (including the main
`contentHash` requirement and the "Open Questions" hashing discussion) so they
reference the async return type, SHA-512/WebCrypto, and the `sha512:HEX128`
output format consistently.
- Around line 182-191: The fenced TypeScript snippet defining CacheStore and
CachedEntry lacks blank lines before and after the ```typescript block; add a
single blank line immediately above the opening ```typescript fence and a single
blank line immediately below the closing ``` fence so the CacheStore and
CachedEntry code block is surrounded by blank lines to satisfy markdownlint
MD031.
---
Outside diff comments:
In `@packages/1-framework/1-core/framework-components/src/runtime-middleware.ts`:
- Around line 89-115: The RuntimeMiddleware generic default should be
ExecutionPlan not QueryPlan: update the RuntimeMiddleware<TPlan extends
QueryPlan = QueryPlan> declaration so the default type parameter is
ExecutionPlan (e.g., RuntimeMiddleware<TPlan extends QueryPlan = ExecutionPlan>)
so hooks like intercept and beforeExecute receive the lowered ExecutionPlan and
can call ctx.contentHash() without casting; adjust any imports/type names if
necessary and ensure references to intercept and beforeExecute in this file
remain consistent.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 55571a78-cfc7-4476-8fe2-30426b8a5b4f
📒 Files selected for processing (27)
- packages/1-framework/0-foundation/utils/package.json
- packages/1-framework/0-foundation/utils/src/exports/hash-content.ts
- packages/1-framework/0-foundation/utils/src/hash-content.ts
- packages/1-framework/0-foundation/utils/test/hash-content.test.ts
- packages/1-framework/0-foundation/utils/tsdown.config.ts
- packages/1-framework/1-core/framework-components/src/runtime-middleware.ts
- packages/1-framework/1-core/framework-components/test/mock-family.test.ts
- packages/1-framework/1-core/framework-components/test/run-with-middleware.intercept.test.ts
- packages/1-framework/1-core/framework-components/test/run-with-middleware.test.ts
- packages/1-framework/1-core/framework-components/test/runtime-core.test.ts
- packages/1-framework/1-core/framework-components/test/runtime-core.types.test-d.ts
- packages/1-framework/1-core/framework-components/test/runtime-middleware.types.test-d.ts
- packages/2-mongo-family/7-runtime/src/content-hash.ts
- packages/2-mongo-family/7-runtime/src/mongo-runtime.ts
- packages/2-mongo-family/7-runtime/test/content-hash.test.ts
- packages/2-mongo-family/7-runtime/test/mongo-middleware.test.ts
- packages/2-mongo-family/7-runtime/test/mongo-runtime.types.test-d.ts
- packages/2-sql/5-runtime/src/content-hash.ts
- packages/2-sql/5-runtime/src/sql-runtime.ts
- packages/2-sql/5-runtime/test/before-compile-chain.test.ts
- packages/2-sql/5-runtime/test/budgets.test.ts
- packages/2-sql/5-runtime/test/content-hash.test.ts
- packages/2-sql/5-runtime/test/lints.test.ts
- packages/3-extensions/middleware-telemetry/test/telemetry-middleware.test.ts
- projects/middleware-intercept-and-cache/plan.md
- projects/middleware-intercept-and-cache/spec.md
- test/integration/test/cross-package/cross-family-middleware.test.ts
Force-pushed from 2218fae to 6dea48a.
🧹 Nitpick comments (1)
packages/1-framework/1-core/framework-components/test/run-with-middleware.intercept.test.ts (1)
580-587: ⚡ Quick win

Replace inline conditional object spread with the ifDefined() helper.

The conditional spread in mw() breaks the repo's TS object-spread convention. Prefer ifDefined() for this pattern.

♻️ Suggested refactor

```diff
+import { ifDefined } from '@prisma-next/utils/defined';
 ...
 function mw(label: string, doesIntercept: boolean): RuntimeMiddleware<MockExec> {
   return {
     name: label,
-    ...(doesIntercept
-      ? {
-          async intercept(): Promise<InterceptResult | undefined> {
-            events.push(`${label}:intercept`);
-            throw interceptError;
-          },
-        }
-      : {}),
+    ...ifDefined(doesIntercept, {
+      async intercept(): Promise<InterceptResult | undefined> {
+        events.push(`${label}:intercept`);
+        throw interceptError;
+      },
+    }),
     async afterExecute(_plan, result) {
       observed.push({ label, source: result.source, completed: result.completed,
```

As per coding guidelines, "Use ifDefined() from @prisma-next/utils/defined for conditional object spreads instead of inline conditional spread patterns."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/1-framework/1-core/framework-components/test/run-with-middleware.intercept.test.ts` around lines 580 - 587, Replace the inline conditional object spread inside the mw(...) call with the ifDefined helper: import ifDefined from '@prisma-next/utils/defined' and replace the ...(doesIntercept ? { async intercept(): Promise<InterceptResult | undefined> { events.push(`${label}:intercept`); throw interceptError; }, } : {}) pattern with ...ifDefined(doesIntercept, { async intercept(): Promise<InterceptResult | undefined> { events.push(`${label}:intercept`); throw interceptError; } }); keep the same intercept function, doesIntercept flag, interceptError, events and label identifiers unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In
`@packages/1-framework/1-core/framework-components/test/run-with-middleware.intercept.test.ts`:
- Around line 580-587: Replace the inline conditional object spread inside the
mw(...) call with the ifDefined helper: import ifDefined from
'@prisma-next/utils/defined' and replace the ...(doesIntercept ? { async
intercept(): Promise<InterceptResult | undefined> {
events.push(`${label}:intercept`); throw interceptError; }, } : {}) pattern with
...ifDefined(doesIntercept, { async intercept(): Promise<InterceptResult |
undefined> { events.push(`${label}:intercept`); throw interceptError; } }); keep
the same intercept function, doesIntercept flag, interceptError, events and
label identifiers unchanged.
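The `ifDefined()` helper lives in `@prisma-next/utils/defined` and its source is not shown in this review; the following is a guessed minimal shape consistent with the suggested `...ifDefined(doesIntercept, { ... })` usage above, for readers unfamiliar with the convention:

```typescript
// Hypothetical stand-in for @prisma-next/utils/defined's ifDefined();
// the real helper's signature and semantics may differ.
function ifDefined<T extends object>(include: boolean, partial: T): T | {} {
  // Spread-friendly: contributes partial's keys only when include is true.
  return include ? partial : {};
}

// Usage mirroring the review's suggested refactor (names are illustrative):
const middleware = {
  name: 'cache',
  ...ifDefined(true, {
    intercept: async (): Promise<string | undefined> => 'hit',
  }),
};
```

The payoff is that the object literal stays flat: readers scan one spread call instead of a nested ternary producing an empty object on the false branch.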
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: d57da623-bd3b-4867-96d1-4b39a8d007ed
⛔ Files ignored due to path filters (2)
- projects/middleware-intercept-and-cache/plan.md is excluded by !projects/**
- projects/middleware-intercept-and-cache/spec.md is excluded by !projects/**
📒 Files selected for processing (26)
- packages/1-framework/0-foundation/utils/package.json
- packages/1-framework/0-foundation/utils/src/exports/hash-content.ts
- packages/1-framework/0-foundation/utils/src/hash-content.ts
- packages/1-framework/0-foundation/utils/test/hash-content.test.ts
- packages/1-framework/0-foundation/utils/tsdown.config.ts
- packages/1-framework/1-core/framework-components/src/runtime-middleware.ts
- packages/1-framework/1-core/framework-components/test/mock-family.test.ts
- packages/1-framework/1-core/framework-components/test/run-with-middleware.intercept.test.ts
- packages/1-framework/1-core/framework-components/test/run-with-middleware.test.ts
- packages/1-framework/1-core/framework-components/test/runtime-core.test.ts
- packages/1-framework/1-core/framework-components/test/runtime-core.types.test-d.ts
- packages/1-framework/1-core/framework-components/test/runtime-middleware.types.test-d.ts
- packages/2-mongo-family/7-runtime/src/content-hash.ts
- packages/2-mongo-family/7-runtime/src/mongo-runtime.ts
- packages/2-mongo-family/7-runtime/test/content-hash.test.ts
- packages/2-mongo-family/7-runtime/test/mongo-middleware.test.ts
- packages/2-mongo-family/7-runtime/test/mongo-runtime.types.test-d.ts
- packages/2-sql/5-runtime/src/content-hash.ts
- packages/2-sql/5-runtime/src/sql-runtime.ts
- packages/2-sql/5-runtime/test/before-compile-chain.test.ts
- packages/2-sql/5-runtime/test/budgets.test.ts
- packages/2-sql/5-runtime/test/content-hash.test.ts
- packages/2-sql/5-runtime/test/intercept-decoding.test.ts
- packages/2-sql/5-runtime/test/lints.test.ts
- packages/3-extensions/middleware-telemetry/test/telemetry-middleware.test.ts
- test/integration/test/cross-package/cross-family-middleware.test.ts
✅ Files skipped from review due to trivial changes (8)
- packages/1-framework/0-foundation/utils/tsdown.config.ts
- packages/2-sql/5-runtime/test/lints.test.ts
- packages/1-framework/0-foundation/utils/src/exports/hash-content.ts
- packages/1-framework/1-core/framework-components/test/run-with-middleware.test.ts
- packages/2-mongo-family/7-runtime/test/mongo-runtime.types.test-d.ts
- packages/2-sql/5-runtime/test/before-compile-chain.test.ts
- packages/1-framework/1-core/framework-components/test/runtime-core.test.ts
- packages/1-framework/0-foundation/utils/test/hash-content.test.ts
🚧 Files skipped from review as they are similar to previous changes (9)
- packages/2-sql/5-runtime/src/sql-runtime.ts
- packages/1-framework/0-foundation/utils/src/hash-content.ts
- packages/1-framework/1-core/framework-components/test/runtime-core.types.test-d.ts
- packages/2-mongo-family/7-runtime/test/mongo-middleware.test.ts
- packages/1-framework/0-foundation/utils/package.json
- packages/2-mongo-family/7-runtime/src/mongo-runtime.ts
- packages/2-sql/5-runtime/src/content-hash.ts
- test/integration/test/cross-package/cross-family-middleware.test.ts
- packages/2-sql/5-runtime/test/intercept-decoding.test.ts
Force-pushed from c67c412 to 583b581.
♻️ Duplicate comments (1)
packages/1-framework/0-foundation/utils/src/canonical-stringify.ts (1)
57-100: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Reject non-plain objects and symbol-keyed properties before hashing.

Object.keys() drops symbol fields, and the current fallback serializes Map, Set, boxed primitives, RegExp, and class instances as if they were plain objects. That can collapse distinct inputs onto the same content hash and break execution identity.

💡 Suggested hardening

```diff
 function write(value: unknown, seen: Set<object>): string {
 @@
-    return writePlainObject(obj as Record<string, unknown>, seen);
+    if (!isPlainObject(obj)) {
+      throw new TypeError('canonicalStringify: unsupported object type');
+    }
+    return writePlainObject(obj, seen);
   } finally {
     seen.delete(obj);
   }
 }
+
+function isPlainObject(value: object): value is Record<string, unknown> {
+  const proto = Object.getPrototypeOf(value);
+  return proto === Object.prototype || proto === null;
+}

 function writePlainObject(obj: Record<string, unknown>, seen: Set<object>): string {
+  if (Object.getOwnPropertySymbols(obj).length > 0) {
+    throw new TypeError('canonicalStringify: symbol-keyed properties are not supported');
+  }
   const keys = Object.keys(obj).sort();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/1-framework/0-foundation/utils/src/canonical-stringify.ts` around lines 57 - 100, Reject non-plain objects and symbol-keyed properties before hashing by adding validation in the object path (the branch that currently does "const obj = value as object" and the writePlainObject function). Specifically, before calling writePlainObject or adding to seen, throw a TypeError if Object.getOwnPropertySymbols(obj).length > 0, if Object.getPrototypeOf(obj) is not Object.prototype or null (to reject class instances, Map, Set, RegExp, boxed primitives, etc.), or if obj is a boxed primitive (obj instanceof String/Number/Boolean) or any builtin container (Map/Set) — keep Date and Uint8Array special-cases as they are. Implement these checks either immediately after "const obj = value as object" or at the top of writePlainObject so callers (write and writePlainObject) will reject unsafe objects instead of serializing them as plain objects.
🧹 Nitpick comments (2)
packages/2-sql/5-runtime/test/intercept-decoding.test.ts (1)
208-223: ⚡ Quick win

Drive the intercept-decoding check through a query plan.

createJsonProjectionPlan() returns a pre-lowered SqlExecutionPlan, so this suite never exercises runBeforeCompile or the SQL lowering path. If the cache-interceptor behavior is meant to be validated end-to-end, please switch this helper to a SqlQueryPlan so the runtime still performs lowering before decoding.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/2-sql/5-runtime/test/intercept-decoding.test.ts` around lines 208 - 223, The test helper createJsonProjectionPlan currently returns a pre-lowered SqlExecutionPlan which bypasses runBeforeCompile and the SQL lowering path; change this helper to return a SqlQueryPlan (or construct the equivalent query-plan object that the runtime expects) so the runtime will perform lowering before decoding; update the helper's return type and construction to produce a SqlQueryPlan (referencing createJsonProjectionPlan, SqlExecutionPlan -> SqlQueryPlan, and ensuring runBeforeCompile is exercised) so the cache-interceptor behavior is validated end-to-end.

packages/1-framework/1-core/framework-components/test/run-with-middleware.intercept.test.ts (1)
577-616: 💤 Low value

Replace the conditional spread in mw with ifDefined().

This is the one spot that diverges from the repo's test-file convention and makes the helper harder to scan. As per coding guidelines, use ifDefined() from @prisma-next/utils/defined for conditional object spreads instead of inline conditional spread patterns.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/1-framework/1-core/framework-components/test/run-with-middleware.intercept.test.ts` around lines 577 - 616, The mw helper currently uses an inline conditional spread to add the intercept method; replace that pattern by importing and using ifDefined from '@prisma-next/utils/defined' to conditionally include the intercept property on the returned object in mw(label: string, doesIntercept: boolean): RuntimeMiddleware<MockExec>, e.g. compute the intercept object when doesIntercept is true and wrap it with ifDefined(...) when building the returned object so the shape matches other tests; ensure the ifDefined import is added and that the returned object still exposes name, afterExecute and the optional intercept (throwing interceptError) exactly as before so runWithMiddleware and the test assertions remain unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@packages/1-framework/0-foundation/utils/src/canonical-stringify.ts`:
- Around line 57-100: Reject non-plain objects and symbol-keyed properties
before hashing by adding validation in the object path (the branch that
currently does "const obj = value as object" and the writePlainObject function).
Specifically, before calling writePlainObject or adding to seen, throw a
TypeError if Object.getOwnPropertySymbols(obj).length > 0, if
Object.getPrototypeOf(obj) is not Object.prototype or null (to reject class
instances, Map, Set, RegExp, boxed primitives, etc.), or if obj is a boxed
primitive (obj instanceof String/Number/Boolean) or any builtin container
(Map/Set) — keep Date and Uint8Array special-cases as they are. Implement these
checks either immediately after "const obj = value as object" or at the top of
writePlainObject so callers (write and writePlainObject) will reject unsafe
objects instead of serializing them as plain objects.
---
Nitpick comments:
In
`@packages/1-framework/1-core/framework-components/test/run-with-middleware.intercept.test.ts`:
- Around line 577-616: The mw helper currently uses an inline conditional spread
to add the intercept method; replace that pattern by importing and using
ifDefined from '@prisma-next/utils/defined' to conditionally include the
intercept property on the returned object in mw(label: string, doesIntercept:
boolean): RuntimeMiddleware<MockExec>, e.g. compute the intercept object when
doesIntercept is true and wrap it with ifDefined(...) when building the returned
object so the shape matches other tests; ensure the ifDefined import is added
and that the returned object still exposes name, afterExecute and the optional
intercept (throwing interceptError) exactly as before so runWithMiddleware and
the test assertions remain unchanged.
In `@packages/2-sql/5-runtime/test/intercept-decoding.test.ts`:
- Around line 208-223: The test helper createJsonProjectionPlan currently
returns a pre-lowered SqlExecutionPlan which bypasses runBeforeCompile and the
SQL lowering path; change this helper to return a SqlQueryPlan (or construct the
equivalent query-plan object that the runtime expects) so the runtime will
perform lowering before decoding; update the helper's return type and
construction to produce a SqlQueryPlan (referencing createJsonProjectionPlan,
SqlExecutionPlan -> SqlQueryPlan, and ensuring runBeforeCompile is exercised) so
the cache-interceptor behavior is validated end-to-end.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: 14272dfc-1040-4e52-8002-82ec30d2a918
⛔ Files ignored due to path filters (4)
- pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
- projects/middleware-intercept-and-cache/follow-ups.md is excluded by !projects/**
- projects/middleware-intercept-and-cache/plan.md is excluded by !projects/**
- projects/middleware-intercept-and-cache/spec.md is excluded by !projects/**
📒 Files selected for processing (33)
- packages/1-framework/0-foundation/utils/package.json
- packages/1-framework/0-foundation/utils/src/canonical-stringify.ts
- packages/1-framework/0-foundation/utils/src/exports/canonical-stringify.ts
- packages/1-framework/0-foundation/utils/src/exports/hash-content.ts
- packages/1-framework/0-foundation/utils/src/hash-content.ts
- packages/1-framework/0-foundation/utils/test/canonical-stringify.test.ts
- packages/1-framework/0-foundation/utils/test/hash-content.test.ts
- packages/1-framework/0-foundation/utils/tsdown.config.ts
- packages/1-framework/1-core/framework-components/src/exports/runtime.ts
- packages/1-framework/1-core/framework-components/src/run-with-middleware.ts
- packages/1-framework/1-core/framework-components/src/runtime-middleware.ts
- packages/1-framework/1-core/framework-components/test/mock-family.test.ts
- packages/1-framework/1-core/framework-components/test/run-with-middleware.intercept.test.ts
- packages/1-framework/1-core/framework-components/test/run-with-middleware.test.ts
- packages/1-framework/1-core/framework-components/test/runtime-core.test.ts
- packages/1-framework/1-core/framework-components/test/runtime-core.types.test-d.ts
- packages/1-framework/1-core/framework-components/test/runtime-middleware.types.test-d.ts
- packages/2-mongo-family/7-runtime/package.json
- packages/2-mongo-family/7-runtime/src/content-hash.ts
- packages/2-mongo-family/7-runtime/src/mongo-runtime.ts
- packages/2-mongo-family/7-runtime/test/content-hash.test.ts
- packages/2-mongo-family/7-runtime/test/mongo-middleware.test.ts
- packages/2-mongo-family/7-runtime/test/mongo-runtime.types.test-d.ts
- packages/2-sql/5-runtime/src/content-hash.ts
- packages/2-sql/5-runtime/src/sql-runtime.ts
- packages/2-sql/5-runtime/test/before-compile-chain.test.ts
- packages/2-sql/5-runtime/test/budgets.test.ts
- packages/2-sql/5-runtime/test/content-hash.test.ts
- packages/2-sql/5-runtime/test/intercept-decoding.test.ts
- packages/2-sql/5-runtime/test/lints.test.ts
- packages/3-extensions/middleware-telemetry/src/telemetry-middleware.ts
- packages/3-extensions/middleware-telemetry/test/telemetry-middleware.test.ts
- test/integration/test/cross-package/cross-family-middleware.test.ts
✅ Files skipped from review due to trivial changes (15)
- packages/2-mongo-family/7-runtime/package.json
- packages/2-mongo-family/7-runtime/test/mongo-runtime.types.test-d.ts
- packages/1-framework/0-foundation/utils/tsdown.config.ts
- packages/1-framework/0-foundation/utils/src/exports/hash-content.ts
- packages/1-framework/1-core/framework-components/src/exports/runtime.ts
- packages/2-mongo-family/7-runtime/test/mongo-middleware.test.ts
- packages/1-framework/1-core/framework-components/test/runtime-core.types.test-d.ts
- packages/1-framework/0-foundation/utils/package.json
- packages/1-framework/0-foundation/utils/src/exports/canonical-stringify.ts
- packages/2-sql/5-runtime/test/before-compile-chain.test.ts
- packages/1-framework/1-core/framework-components/test/mock-family.test.ts
- packages/1-framework/0-foundation/utils/test/hash-content.test.ts
- packages/1-framework/0-foundation/utils/test/canonical-stringify.test.ts
- packages/3-extensions/middleware-telemetry/src/telemetry-middleware.ts
- packages/1-framework/1-core/framework-components/test/runtime-middleware.types.test-d.ts
🚧 Files skipped from review as they are similar to previous changes (7)
- packages/2-sql/5-runtime/test/lints.test.ts
- packages/1-framework/1-core/framework-components/test/runtime-core.test.ts
- packages/2-mongo-family/7-runtime/src/content-hash.ts
- packages/2-mongo-family/7-runtime/src/mongo-runtime.ts
- packages/2-sql/5-runtime/src/content-hash.ts
- packages/2-mongo-family/7-runtime/test/content-hash.test.ts
- packages/2-sql/5-runtime/test/content-hash.test.ts
9f41286 to cf4af51
🧹 Nitpick comments (1)
packages/2-sql/5-runtime/src/sql-runtime.ts (1)
164-164: Avoid blind-casting `exec` in `contentHash` (line 164).

The cast `as SqlExecutionPlan` doesn't verify that `exec` actually contains the required `sql` and `params` properties. Add a shape guard to ensure only properly lowered plans reach `computeSqlContentHash()`.

Suggested change:

```diff
- contentHash: (exec) => computeSqlContentHash(exec as SqlExecutionPlan),
+ contentHash: (exec) => {
+   if (!('sql' in exec) || !('params' in exec)) {
+     throw runtimeError(
+       'RUNTIME.INVALID_PLAN',
+       'contentHash requires a lowered SQL execution plan',
+     );
+   }
+   return computeSqlContentHash(exec as SqlExecutionPlan);
+ },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/2-sql/5-runtime/src/sql-runtime.ts` at line 164, The contentHash callback blindly casts exec to SqlExecutionPlan before calling computeSqlContentHash; add a shape/type guard that verifies exec has the required properties (e.g., sql and params) and that they are the expected types before invoking computeSqlContentHash, and handle non-matching shapes (return undefined/null or skip hashing) so only valid lowered plans reach computeSqlContentHash; update the contentHash implementation to perform this runtime check referencing contentHash, computeSqlContentHash, SqlExecutionPlan and exec.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@packages/2-sql/5-runtime/src/sql-runtime.ts`:
- Line 164: The contentHash callback blindly casts exec to SqlExecutionPlan
before calling computeSqlContentHash; add a shape/type guard that verifies exec
has the required properties (e.g., sql and params) and that they are the
expected types before invoking computeSqlContentHash, and handle non-matching
shapes (return undefined/null or skip hashing) so only valid lowered plans reach
computeSqlContentHash; update the contentHash implementation to perform this
runtime check referencing contentHash, computeSqlContentHash, SqlExecutionPlan
and exec.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
Run ID: b3df5951-1c22-496f-911f-5a0a9ebed2af
⛔ Files ignored due to path filters (3)
- `pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
- `projects/middleware-intercept-and-cache/plan.md` is excluded by `!projects/**`
- `projects/middleware-intercept-and-cache/spec.md` is excluded by `!projects/**`
📒 Files selected for processing (34)
- docs/architecture docs/subsystems/4. Runtime & Middleware Framework.md
- packages/1-framework/0-foundation/utils/package.json
- packages/1-framework/0-foundation/utils/src/canonical-stringify.ts
- packages/1-framework/0-foundation/utils/src/exports/canonical-stringify.ts
- packages/1-framework/0-foundation/utils/src/exports/hash-content.ts
- packages/1-framework/0-foundation/utils/src/hash-content.ts
- packages/1-framework/0-foundation/utils/test/canonical-stringify.test.ts
- packages/1-framework/0-foundation/utils/test/hash-content.test.ts
- packages/1-framework/0-foundation/utils/tsdown.config.ts
- packages/1-framework/1-core/framework-components/src/exports/runtime.ts
- packages/1-framework/1-core/framework-components/src/run-with-middleware.ts
- packages/1-framework/1-core/framework-components/src/runtime-middleware.ts
- packages/1-framework/1-core/framework-components/test/mock-family.test.ts
- packages/1-framework/1-core/framework-components/test/run-with-middleware.intercept.test.ts
- packages/1-framework/1-core/framework-components/test/run-with-middleware.test.ts
- packages/1-framework/1-core/framework-components/test/runtime-core.test.ts
- packages/1-framework/1-core/framework-components/test/runtime-core.types.test-d.ts
- packages/1-framework/1-core/framework-components/test/runtime-middleware.types.test-d.ts
- packages/2-mongo-family/7-runtime/package.json
- packages/2-mongo-family/7-runtime/src/content-hash.ts
- packages/2-mongo-family/7-runtime/src/mongo-runtime.ts
- packages/2-mongo-family/7-runtime/test/content-hash.test.ts
- packages/2-mongo-family/7-runtime/test/mongo-middleware.test.ts
- packages/2-mongo-family/7-runtime/test/mongo-runtime.types.test-d.ts
- packages/2-sql/5-runtime/src/content-hash.ts
- packages/2-sql/5-runtime/src/sql-runtime.ts
- packages/2-sql/5-runtime/test/before-compile-chain.test.ts
- packages/2-sql/5-runtime/test/budgets.test.ts
- packages/2-sql/5-runtime/test/content-hash.test.ts
- packages/2-sql/5-runtime/test/intercept-decoding.test.ts
- packages/2-sql/5-runtime/test/lints.test.ts
- packages/3-extensions/middleware-telemetry/src/telemetry-middleware.ts
- packages/3-extensions/middleware-telemetry/test/telemetry-middleware.test.ts
- test/integration/test/cross-package/cross-family-middleware.test.ts
✅ Files skipped from review due to trivial changes (11)
- packages/2-sql/5-runtime/test/before-compile-chain.test.ts
- packages/2-mongo-family/7-runtime/package.json
- packages/1-framework/0-foundation/utils/src/exports/hash-content.ts
- packages/1-framework/0-foundation/utils/src/exports/canonical-stringify.ts
- packages/1-framework/0-foundation/utils/tsdown.config.ts
- packages/2-mongo-family/7-runtime/src/content-hash.ts
- packages/1-framework/0-foundation/utils/package.json
- packages/1-framework/1-core/framework-components/test/runtime-core.types.test-d.ts
- packages/2-sql/5-runtime/test/lints.test.ts
- packages/1-framework/1-core/framework-components/src/runtime-middleware.ts
- packages/1-framework/0-foundation/utils/test/hash-content.test.ts
🚧 Files skipped from review as they are similar to previous changes (10)
- packages/2-mongo-family/7-runtime/test/mongo-runtime.types.test-d.ts
- packages/3-extensions/middleware-telemetry/src/telemetry-middleware.ts
- packages/1-framework/1-core/framework-components/src/exports/runtime.ts
- packages/1-framework/0-foundation/utils/src/hash-content.ts
- packages/3-extensions/middleware-telemetry/test/telemetry-middleware.test.ts
- packages/2-sql/5-runtime/src/content-hash.ts
- packages/2-mongo-family/7-runtime/src/mongo-runtime.ts
- packages/1-framework/1-core/framework-components/test/mock-family.test.ts
- packages/1-framework/1-core/framework-components/test/runtime-core.test.ts
- packages/1-framework/1-core/framework-components/src/run-with-middleware.ts
… with high-level plan

Restructure the cipherstash-integration project from a single-spec layout into a three-component umbrella:

- project-1/: searchable-encryption MVP (the previous umbrella spec rescoped as Project 1, plus its 5 task specs moved under specs/)
- project-2/: planner-driven DDL + expanded surface (new stub)
- sql-raw-factory/: public raw`...` template-literal factory (moved from projects/sql-raw-factory/)

New artifacts at the umbrella level:

- spec.md: scope of the umbrella, why three components, cross-component design decisions (RawSqlExpr ownership, DataTransformOperation choice, end-to-end-tested-only scope), in-flight framework dependency status (PRs #400/#402 merged 2026-05-01; #404/#409 still open).
- plan.md: component-level sequencing. Phase A is Project 1 (critical path, gated externally on #404 + #409); phase B is sql-raw-factory and Project 2 in parallel afterwards, each with its own gating story (sql-raw-factory blocks on Project 1's RawSqlExpr AST node landing; Project 2 blocks on Project 1 + TML-2338 + TML-2339).

Project 1's spec is reframed: drop "this is the umbrella" language, add a header pointing to the new umbrella, and repath all relative references for the new directory depth (3-up from project-1/, 4-up from project-1/specs/, 3-up from sql-raw-factory/). Resolves the previously open question about Project 2's on-disk slug. No content changes to the per-task specs beyond reference repaths.
…e-slice milestones

project-1/plan.md sequences Project 1 as five end-to-end-demoable milestones rather than one milestone per task spec:

- M1: framework SPI (raw-sql-ast-node + middleware-param-transform; no user-facing surface yet, but the seams unblock other extensions)
- M2: store-only round-trip (psl + envelope-codec storage path; encrypted column type works for storage; no operators yet)
- M3: eq operator + manual addSearchConfig migration (first searchable round-trip end-to-end against live EQL)
- M4: ilike + activatePendingSearches + decryptAll (completes Project 1's user-facing surface; all UMB ACs green)
- M5: close-out per projects/README.md lifecycle

Records that Project 1 is independent of both open framework PRs:

- #404 (invariant-aware ref routing): migration factories carry invariantId fields regardless; the routing benefit is retroactive when #404 lands.
- #409 (middleware intercept + contentHash): edits the same RuntimeMiddleware types but adds non-overlapping fields; whichever lands first, the other rebases mechanically.

Updates the umbrella plan and spec to reflect the new posture: PRs #404 and #409 are demoted from "hard gating" to "coordinate-only" / "not a dependency"; the status table marks Project 1's plan as drafted.
cd84937 to e41a83a
…tion

Adds a deterministic, JSON-like string serializer designed for use as a stable identity / cache key. Two values that are structurally equivalent — regardless of object key insertion order — produce the same string, while values that differ in any meaningful way (including types JSON would conflate, like BigInt(1) vs 1) produce different strings.

Supported inputs:

- null / undefined (distinguishable)
- boolean, string, number (incl. NaN, Infinity, -Infinity, +0/-0)
- bigint (suffixed with 'n' to disambiguate from number)
- Date (tagged + ISO string)
- Buffer / Uint8Array (tagged + hex-encoded)
- arrays (order-preserving)
- plain objects (key-sorted, recursive)

Throws on functions, symbols, and circular references.

This is the first addition for the cache middleware project (TML-2143 M1.0a). The next tasks consume canonicalStringify from RuntimeMiddlewareContext.identityKey implementations in the SQL and Mongo runtimes.

Refs: TML-2143
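The behavior described above can be sketched as a small recursive serializer. This is a hypothetical reconstruction from the commit message, not the actual utils-package source; the `Date(...)`/`Bytes(...)` tag formats and the function body are invented for illustration.

```typescript
// Sketch of a canonicalStringify-style serializer (assumed shape, not the real source).
function canonicalStringify(value: unknown): string {
  const seen = new Set<object>(); // tracks the current path for circular detection
  const walk = (v: unknown): string => {
    if (v === null) return 'null';
    if (v === undefined) return 'undefined'; // distinguishable from null
    switch (typeof v) {
      case 'boolean':
        return String(v);
      case 'number':
        return Object.is(v, -0) ? '-0' : String(v); // String() covers NaN/±Infinity; -0 kept distinct
      case 'string':
        return JSON.stringify(v);
      case 'bigint':
        return `${v}n`; // 'n' suffix disambiguates BigInt(1) from 1
      case 'function':
      case 'symbol':
        throw new TypeError(`cannot canonicalize ${typeof v}`);
    }
    const obj = v as object;
    if (seen.has(obj)) throw new TypeError('circular reference');
    seen.add(obj);
    try {
      if (obj instanceof Date) return `Date(${obj.toISOString()})`; // tagged + ISO string
      if (obj instanceof Uint8Array) {
        // tagged + hex-encoded; covers Buffer too (a Uint8Array subclass)
        const hex = Array.from(obj, (b) => b.toString(16).padStart(2, '0')).join('');
        return `Bytes(${hex})`;
      }
      if (Array.isArray(obj)) return `[${obj.map(walk).join(',')}]`; // order-preserving
      // plain objects: key-sorted so insertion order does not matter
      const rec = obj as Record<string, unknown>;
      const keys = Object.keys(rec).sort();
      return `{${keys.map((k) => `${JSON.stringify(k)}:${walk(rec[k])}`).join(',')}}`;
    } finally {
      seen.delete(obj); // allow shared (non-circular) references
    }
  };
  return walk(value);
}
```

Key-sorting only plain objects while preserving array order is what makes `{a:1,b:2}` and `{b:2,a:1}` collide (intended) without conflating `[1,2]` and `[2,1]` (not intended).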
…reContext

Adds a required identityKey(exec: ExecutionPlan) => string method on RuntimeMiddlewareContext. Family runtimes own the implementation: SQL will compose meta.storageHash + sql + canonicalStringify(params); Mongo will compose meta.storageHash + canonicalStringify(command). Two semantically equivalent executions return the same string.

The returned string is intended to be consumed directly as a Map key — no additional hashing layer. Empirical evidence from prior Prisma query-plan caching work shows V8's internal string interning and hashing dominates any user-space hash function, so adding SHA-256/FNV/xxhash on top would make the keying slower, not faster.

This unblocks the cache middleware (TML-2143) — middleware can then compute cache keys via ctx.identityKey(exec) without depending on any family-specific package or having to inspect plan internals.

The field is required, not optional: half-populated contexts undermine the point. Existing in-repo fixtures (test contexts in framework-components, sql-runtime, mongo-runtime, middleware-telemetry, and the cross-family integration test) gain a stub identityKey: () => 'mock-key'. Production identityKey implementations in SqlRuntimeImpl and MongoRuntimeImpl are stubbed with 'identity-key-not-implemented' in this commit; real implementations land in TML-2143 M1.0b and M1.0c respectively. The stubs let the framework-level change land cleanly without breaking typecheck across the workspace.

Refs: TML-2143
Replaces the stub from M1.0 with the real SQL identity-key composition: storageHash + raw SQL + canonicalStringify(params), separated by '|', then hashed via hashIdentity into a bounded BLAKE2b-512 digest.

Three-component canonical composition:

1. meta.storageHash discriminates by schema. A migration changes the storage hash, which invalidates cached entries automatically (no per-app invalidation logic needed for schema changes).
2. exec.sql is the raw lowered SQL text. Two queries with different structure produce different keys. Note that we deliberately do NOT reuse computeSqlFingerprint here: that helper strips literals to group executions by statement shape (used by telemetry), which is the opposite of what an identity key needs — we want per-execution discrimination, not per-statement-shape grouping.
3. canonicalStringify(exec.params) produces a deterministic serialization that is stable across object key insertion order and distinguishes types JSON would conflate (BigInt(1) vs 1, +0 vs -0, null vs undefined, Date vs ISO string, Buffer vs number array).

The composed canonical string is piped through hashIdentity (BLAKE2b-512). Two reasons for hashing instead of using the canonical string directly: (a) bounded memory — a query bound to a 10 MB JSON column would otherwise produce a 10 MB cache key, scaling to gigabytes at maxEntries=1000; (b) sensitive-data isolation — parameter values appear verbatim in the canonical string and would otherwise leak into debug logs, Redis KEYS / MONITOR output, persistence dumps, and any user-supplied CacheStore implementation.

Lives in a new sql-runtime/src/identity-key.ts module so the cache middleware can be tested without it (cache middleware tests use mock RuntimeMiddlewareContext instances), but the SQL runtime owns the production implementation.

Coverage:

- Stability: identical plans, repeated invocations, params object key order (top-level + nested) all yield the same key.
- Discrimination: differing storageHash, sql, param values, param position, BigInt vs number, null vs undefined, Date instants, Buffer bytes — all yield distinct keys.
- Shape: blake2b512:HEXDIGEST format, fixed length regardless of payload size, opaque (no raw SQL or params embedded).

Refs: TML-2143
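Under the composition described above, the SQL identity key might look roughly like the following. Everything here is a stand-in: the plan interface is reduced to the fields the key needs, canonicalStringify is stubbed with plain JSON.stringify (the real helper is key-order stable and type-precise), and hashIdentity is approximated with Node's built-in blake2b512 digest.

```typescript
import { createHash } from 'node:crypto';

// Reduced plan shape — the real SqlExecutionPlan carries more fields (assumption).
interface SqlExecutionPlanLike {
  meta: { storageHash: string };
  sql: string;
  params: unknown[];
}

// Stand-in for hashIdentity: tagged, bounded BLAKE2b-512 digest.
function hashIdentity(canonical: string): string {
  return `blake2b512:${createHash('blake2b512').update(canonical).digest('hex')}`;
}

function sqlIdentityKey(exec: SqlExecutionPlanLike): string {
  // storageHash | raw lowered SQL | serialized params, joined by '|'
  const canonical = [
    exec.meta.storageHash,
    exec.sql,
    JSON.stringify(exec.params), // stand-in for canonicalStringify(exec.params)
  ].join('|');
  return hashIdentity(canonical);
}
```

The digest keeps every key at a fixed length (512 bits → 128 hex characters) no matter how large the bound parameters are, and keeps parameter values out of the key itself.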
Replaces the stub from M1.0 with the real Mongo identity-key
composition: storageHash + canonicalStringify(command), separated by
'|', then hashed via hashIdentity into a bounded BLAKE2b-512 digest.
Two-component canonical composition:
1. meta.storageHash discriminates by schema. A migration changes the
storage hash, which invalidates cached entries automatically.
2. canonicalStringify(exec.command) produces a deterministic
serialization of the wire command. Mongo wire commands are class
instances (InsertOneWireCommand, AggregateWireCommand, etc.) with
own enumerable properties — Object.keys(cmd) picks up
{collection, kind, document/filter/update/pipeline/...}, which
canonicalStringify then sorts and stringifies. The result is
stable across object key insertion order in the underlying
document/filter/update payload, and discriminates on collection,
kind, and any payload field.
Unlike SQL, there is no separate 'rendered statement' component
because MongoExecutionPlan.command is the wire command itself —
canonicalizing it captures both structure and parameters in one
pass.
The composed canonical string is piped through hashIdentity (BLAKE2b-
512). Two reasons for hashing instead of using the canonical string
directly: (a) bounded memory — a command embedding a large document
(binary blob, large nested payload) would otherwise produce a
proportionally large cache key; (b) sensitive-data isolation —
document and filter values appear verbatim in the canonical string
and would otherwise leak into debug logs, Redis KEYS / MONITOR
output, persistence dumps, and any user-supplied CacheStore
implementation.
Adds @prisma-next/utils as a runtime dependency so the canonical-
stringify and hash-identity helpers can be shared with the SQL
runtime's identity-key implementation. The cache middleware (TML-2143
M3) will be able to key Mongo plans without touching mongo-runtime
internals.
Coverage:
- Stability: equivalent commands, repeated invocations, document key
order (top-level + nested in filter) all yield the same key.
- Discrimination: differing storageHash, collection name, command
kind, document values, filter values, aggregate pipelines, and
pipeline stage order — all yield distinct keys.
- Shape: blake2b512:HEXDIGEST format, fixed length regardless of
payload size, opaque (no command payload embedded).
Refs: TML-2143
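The two-component Mongo composition described above can be sketched like this. The plan/command shapes are assumptions, and plain JSON.stringify stands in for the shared canonicalStringify helper (the real one is key-order stable; JSON.stringify is not).

```typescript
import { createHash } from 'node:crypto';

// Reduced shapes — the real MongoExecutionPlan / wire-command classes carry more (assumption).
interface MongoExecutionPlanLike {
  meta: { storageHash: string };
  command: { collection: string; kind: string; [field: string]: unknown };
}

function mongoIdentityKey(exec: MongoExecutionPlanLike): string {
  // storageHash | serialized wire command — no separate 'rendered statement' part,
  // because the command already captures structure and parameters together
  const canonical = `${exec.meta.storageHash}|${JSON.stringify(exec.command)}`;
  return `blake2b512:${createHash('blake2b512').update(canonical).digest('hex')}`;
}
```

Because collection and kind are own enumerable properties of the wire command, they participate in the serialization automatically and discriminate the key alongside the payload fields.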
Adds a 'source: "driver" | "middleware"' field to AfterExecuteResult. It indicates where the rows observed during this execution came from:

- 'driver' — the default. Rows came from the underlying driver via runDriver / runWithMiddleware's normal path.
- 'middleware' — a RuntimeMiddleware.intercept hook (lands in M1.3) short-circuited execution and supplied the rows directly. The driver was not invoked.

The runWithMiddleware orchestrator populates source: 'driver' on both the success and error paths today; the value flips to 'middleware' once the intercept chain is wired in.

Updates the telemetry middleware to round-trip the field on its afterExecute events so observability sinks can distinguish driver-served from middleware-served executions. Test fixtures across the SQL runtime and framework-components test suites that construct AfterExecuteResult directly are updated to include the new field. Tests that rely on toMatchObject continue to pass without changes.

Refs: TML-2143
Adds an optional intercept method on RuntimeMiddleware<TPlan> and an InterceptResult type alongside it. Type-only addition in this commit; the orchestrator wiring lands in M1.3.

Signature:

intercept?(plan: TPlan, ctx: RuntimeMiddlewareContext): Promise<InterceptResult | undefined>

Returning undefined (or omitting the hook entirely) signals passthrough — execution proceeds through the normal driver path. Returning an InterceptResult short-circuits execution: the runtime yields the supplied rows directly to the consumer.

InterceptResult.rows accepts both Iterable (arrays, sync generators) and AsyncIterable (async generators). 'for await' natively handles both via Symbol.asyncIterator / Symbol.iterator fallback, so the orchestrator can iterate without branching on the variant. Cached arrays in the cache middleware are the common case; streaming variants support future use cases like mock layers replaying recordings.

Row shape is Record<string, unknown> — the same untyped shape onRow receives. The SQL runtime decodes intercepted rows through its normal codec pass, so interceptors cache and return raw (undecoded) rows.

Type tests cover:

- intercept is optional (observer middleware compile without it)
- intercept receives TPlan and RuntimeMiddlewareContext
- return type is exactly Promise<InterceptResult | undefined>
- intercept narrows alongside other hooks when TPlan is narrowed (e.g. SqlMiddleware sees plan.sql / plan.params)
- InterceptResult.rows accepts arrays, sync generators, async generators
- InterceptResult rejects rows whose elements are not Record<string, unknown> (negative test)

Refs: TML-2143
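The hook shape described above can be expressed roughly as follows. These are reduced typings for illustration: RuntimeMiddlewareContext is trimmed to the identityKey member mentioned earlier, and the toy cache middleware at the end is an invented usage example, not repo code.

```typescript
type Row = Record<string, unknown>;

interface InterceptResult {
  // arrays, sync generators, and async generators all satisfy this union;
  // `for await` iterates either variant without branching
  rows: Iterable<Row> | AsyncIterable<Row>;
}

// Trimmed to the one member relevant here (the real context carries more).
interface RuntimeMiddlewareContext {
  identityKey(exec: unknown): string;
}

interface RuntimeMiddleware<TPlan> {
  name: string;
  // optional — observer-only middleware simply omit it
  intercept?(plan: TPlan, ctx: RuntimeMiddlewareContext): Promise<InterceptResult | undefined>;
}

// Toy cache middleware: hit returns stored rows, miss returns undefined (passthrough).
const store = new Map<string, Row[]>([['k1', [{ id: 1 }]]]);
const cache: RuntimeMiddleware<{ sql: string }> = {
  name: 'cache',
  intercept: async (plan, ctx) => {
    const hit = store.get(ctx.identityKey(plan));
    return hit ? { rows: hit } : undefined; // undefined = fall through to the driver
  },
};
```

Note that the cached value is a plain array: for the common case no streaming machinery is needed, and `for await` in the orchestrator consumes it via the Symbol.iterator fallback.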
Replaces the previous static 'source: driver' with a dynamic
intercept-aware lifecycle that lets a RuntimeMiddleware short-circuit
execution and supply rows directly.
Lifecycle, in order:
1. Run intercept chain in registration order. First non-undefined
result wins; subsequent middleware's intercept does not fire.
On hit: emit ctx.log.debug 'middleware.intercept' event with the
winning middleware's name; row source becomes the
intercepted rows; source = 'middleware'.
On all-passthrough: source = 'driver'; row source is runDriver().
2. If source === 'driver': run beforeExecute chain.
If source === 'middleware': skip beforeExecute (it semantically
means 'about to hit the driver').
3. Iterate row source. On the driver path, fire onRow per row. On
the intercepted hit path, skip onRow (intercepted rows did not
originate from a driver row stream). Either way: yield each row
to the consumer in order.
4. Fire afterExecute on success with { rowCount, latencyMs,
completed: true, source }.
5. Fire afterExecute on error with completed: false and source set
accordingly. Errors thrown by afterExecute on the error path
remain swallowed; the original error is rethrown. (Existing
semantics, unchanged.)
runDriver() is invoked lazily — only after the intercept chain has
resolved with a passthrough. This matters for factories that do
eager work (acquiring a connection, sending a query) on invocation:
they must not run on the intercepted hit path.
The 'for await' loop in step 3 iterates either AsyncIterable or
Iterable transparently — for await checks Symbol.asyncIterator first
and falls back to Symbol.iterator, so the orchestrator does not need
to branch on the row-source variant. Cached arrays (the cache
middleware's common case) and async generators (mock layers replaying
recordings, future streaming use cases) both work without further
plumbing.
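The lifecycle above can be sketched as a simplified orchestrator (illustrative only: hook signatures are trimmed and latencyMs is omitted, so this is not the shipped implementation):

```typescript
type Row = Record<string, unknown>;

interface Middleware {
  name?: string;
  intercept?(plan: unknown): Promise<{ rows: Iterable<Row> | AsyncIterable<Row> } | undefined>;
  beforeExecute?(plan: unknown): Promise<void>;
  onRow?(row: Row): void;
  afterExecute?(result: {
    source: 'driver' | 'middleware';
    rowCount: number;
    completed: boolean;
  }): Promise<void>;
}

async function* runWithMiddleware(
  plan: unknown,
  middleware: Middleware[],
  runDriver: () => AsyncIterable<Row> | Iterable<Row>,
): AsyncIterable<Row> {
  let source: 'driver' | 'middleware' = 'driver';
  let rowCount = 0;
  try {
    let rowSource: Iterable<Row> | AsyncIterable<Row> | undefined;
    // 1. Intercept chain in registration order; first non-undefined wins.
    for (const mw of middleware) {
      if (!mw.intercept) continue;
      source = 'middleware'; // set BEFORE awaiting: a throwing hook reports as middleware-sourced
      const hit = await mw.intercept(plan);
      if (hit) { rowSource = hit.rows; break; }
      source = 'driver'; // revert on passthrough
    }
    if (rowSource === undefined) {
      // 2. beforeExecute runs only on the driver path.
      for (const mw of middleware) await mw.beforeExecute?.(plan);
      rowSource = runDriver(); // lazy: never invoked on the hit path
    }
    // 3. for await iterates sync and async row sources transparently.
    for await (const row of rowSource) {
      if (source === 'driver') for (const mw of middleware) mw.onRow?.(row);
      rowCount++;
      yield row;
    }
    // 4. afterExecute on success.
    for (const mw of middleware) await mw.afterExecute?.({ source, rowCount, completed: true });
  } catch (err) {
    // 5. afterExecute on error; its own errors are swallowed, the original rethrown.
    for (const mw of middleware) {
      try { await mw.afterExecute?.({ source, rowCount, completed: false }); } catch {}
    }
    throw err;
  }
}
```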
Comprehensive intercept unit tests land in M1.4. The existing 6
runWithMiddleware tests (zero middleware, single observer, multi-
observer ordering, error paths, swallow semantics, partial-hook
middleware) all continue to pass without modification — the existing
test that asserts on observedResult now sees source: 'driver'
explicitly.
Refs: TML-2143
…nWithMiddleware

Adds run-with-middleware.intercept.test.ts covering 17 cases across
chain semantics, hit path, miss path, and error path:

Chain semantics (3 tests):
- First interceptor returning a non-undefined result wins; subsequent
  intercept does not fire.
- Passthrough chains correctly when one interceptor returns undefined
  before another wins.
- Mixed chain (observer + interceptor): when a downstream interceptor
  wins, the upstream observer's beforeExecute is NOT called either —
  beforeExecute is suppressed for the entire chain on the hit path.

Hit path (7 tests):
- Skips beforeExecute and onRow; afterExecute fires with
  source: 'middleware' and correct rowCount.
- Emits a 'middleware.intercept' debug log event naming the winning
  middleware.
- Does not require ctx.log.debug to be defined.
- Accepts arrays as the row source.
- Accepts sync Iterable (generator function) as the row source.
- Accepts AsyncIterable (async generator) as the row source.
- rowCount in afterExecute matches the number of intercepted rows
  yielded.

Miss path (3 tests):
- All-undefined intercepts → driver path with source: 'driver'; full
  beforeExecute / onRow / afterExecute lifecycle fires.
- Middleware without intercept hooks behave as observers (zero-change
  baseline; verifies the wiring did not break the pre-existing
  behavior).
- runDriver factory is invoked lazily — only after the intercept
  chain has resolved to passthrough. Important for factories that do
  eager work on invocation (acquiring a connection, sending a query):
  they must not run on the intercepted hit path.

Error path (4 tests):
- An interceptor that throws → afterExecute fires with
  completed: false, source: 'middleware', error rethrown.
- An error thrown while iterating intercepted rows → afterExecute
  fires with completed: false, source: 'middleware', and rowCount
  reflects rows yielded before the throw.
- Errors thrown by afterExecute on the intercepted error path are
  swallowed; the original error is rethrown. (Mirrors the existing
  driver-path swallow semantics.)
- afterExecute on the intercept error path runs in registration order
  across multiple middleware, all observing the same
  source: 'middleware' result.

Source-tracking subtlety: source is set to 'middleware' BEFORE
awaiting an intercept hook, then reverted to 'driver' if the hook
returns undefined. This ensures that an intercept hook that throws is
reported as a middleware-source failure rather than a driver-source
failure. The earlier draft of the wiring set source only after the
hook returned successfully, which mis-attributed errors thrown inside
intercept; that fix is folded into the preceding wiring commit.

Refs: TML-2143
…coding

Adds a focused integration test asserting that when a SqlMiddleware
intercepts execution and returns raw rows, those rows go through the
SQL runtime's normal codec decode pass — exactly as if they had come
from the driver.

The test registers a JSON codec, builds a plan with
meta.projectionTypes mapping an alias to that codec, registers an
SqlMiddleware whose intercept returns rows containing JSON-encoded
wire values, and asserts the consumer sees the parsed objects.

Coverage:
- Single intercepted row decoded through the JSON codec.
- Multiple intercepted rows decoded independently.
- Intercepted rows yielded from an AsyncIterable decoded the same
  way.
- Driver-served rows and intercepted rows produce byte-identical
  decoded output for the same wire value (round-trip equivalence —
  the cache middleware can store raw rows and serve them later
  without affecting consumer output).

This contract is what makes the cache middleware's storage cheap:
cache the raw, decode on the way out. The cache never needs to know
about codecs.

No production code change in this commit — executeAgainstQueryable
already wraps runWithMiddleware's row stream with decodeRow, which
applies uniformly regardless of whether the rows came from runDriver
or an intercept hit. The test pins this invariant.

Refs: TML-2143
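The decode-uniformity invariant pinned here can be sketched as follows (decodeRow and the alias-to-codec map are hypothetical stand-ins, not the runtime's actual implementation):

```typescript
type Row = Record<string, unknown>;
type Codec = { decode: (wire: unknown) => unknown };

// Hypothetical: map column alias -> codec, in the spirit of
// meta.projectionTypes described above.
function decodeRow(row: Row, codecs: Record<string, Codec>): Row {
  const out: Row = {};
  for (const [key, value] of Object.entries(row)) {
    out[key] = codecs[key] ? codecs[key].decode(value) : value;
  }
  return out;
}

// Wrapping the row stream once means driver-served and intercepted rows go
// through the same pass, so interceptors can cache raw (undecoded) rows.
async function* decodeAll(
  rows: AsyncIterable<Row> | Iterable<Row>,
  codecs: Record<string, Codec>,
): AsyncGenerator<Row> {
  for await (const row of rows) yield decodeRow(row, codecs);
}
```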
… Mongo runtimes
Extends the existing TML-2255 cross-family middleware proof with a
test that registers a single generic interceptor on both an SQL
runtime (MockSqlRuntime extending RuntimeCore) and a real
MongoRuntimeImpl, executes a query through each, and asserts that
the interceptor short-circuits execution in both families with the
same single-source-of-truth runWithMiddleware orchestrator.
Coverage:
- Generic intercept middleware (no familyId) registered on both runtimes.
- Interceptor invoked once per family with the correct plan metadata.
- Intercepted rows ([{ intercepted: true }]) reach the consumer in
place of the driver's rows in both families.
- Mongo driver was never invoked.
- Telemetry observed only afterExecute (not beforeExecute) on the hit
path, with source: 'middleware' for both runs.
This test passes without any production code change in
@prisma-next/mongo-runtime: Mongo inherits the intercept lifecycle
from runWithMiddleware via RuntimeCore. The test pins that
inheritance and prevents future regressions where Mongo's runtime
might diverge from the canonical orchestrator.
Refs: TML-2143
…e; reframe as type/operator catalog expansion
Option A from the design discussion: keep the Project 1 / Project 2
split, but drop Project 2's planner-integration mandate now that
TML-2397 (contract spaces) ships the codec lifecycle hook
framework-wide. Project 2 becomes purely surface expansion (more types,
more operators); each new type instantiates Project 1's pattern.
Umbrella spec.md:
- Add new section: Foundation (TML-2397 contract spaces). Covers the
three concerns dissolved by TML-2397: EQL bundle install, per-column
search-config DDL, strict dbInit preserved per-space.
- Update component overview table: Project 2 description rewritten
('Expanded type/operator surface') without planner-integration claim.
- Update Components section descriptions for both Project 1 and
Project 2.
- Update Cross-component design decisions: replace 'Migration factories
produce DataTransformOperations' (now obsolete; codec hook owns the
surface) with 'Per-column search-config DDL is emitted by the codec
lifecycle hook'. Affects both Project 1 and Project 2.
- Update In-flight framework dependencies table: TML-2397 listed as
satisfied foundation; PRs #404 / #409 promoted to merged status (they
landed on the contract-spaces base before our rebase). Drop the
obsolete framework-prerequisites paragraph for Project 2.
- Update 'Why an umbrella' wording: Project 1's RawSqlExpr AST node
still motivates sql-raw-factory, just no longer for migration-factory
reasons.
Umbrella plan.md:
- Update component diagram: replace 'in-flight framework PRs' arrows
with TML-2397 + PR-bundle as foundation arrows; drop separate
'framework prerequisites' arrow into Project 2.
- Update dependency-edges table: TML-2397 row added; PRs row
consolidated to satisfied; framework-prerequisites row removed.
- Update Sequencing section text: drop 'Project 2 blocked on framework
prereqs' note; Project 2 now gates on Project 1 only.
- Update Phase A — Project 1: reflect M0 + M1 done, M2 next; remove
'hand-authored migration factories' from goal.
- Update Phase B — Project 2: drop framework-prereqs gating; add
per-type sequencing within Project 2.
- Update Status table: TML-2397 row noted; remove obsolete framework-
prereqs gating from Project 2 row.
Project 2 spec.md:
- Retitle: 'Expanded type/operator surface' (was 'Planner-driven DDL +
expanded type/operator surface').
- Rewrite Summary + Description to drop the planner-integration half
entirely; reframe as instantiating Project 1's pattern per type.
- Replace Dependencies table: drop framework prerequisites; add TML-2397
as satisfied foundation; clarify Project 1 hard dependency.
- Update Open Questions: keep type-specific ones (mode-flag downgrade,
re-encryption migration, column-key-id, searchableJson semantics);
add 'order of type rollout' note (customer-demand-driven).
Project 1 spec.md:
- Update header summary line: 'expanded type/operator surface' (drop
'planner-driven DDL' from Project 2's description).
- Update Non-goals: replace 'planner-driven per-column DDL beyond the
lifecycle hook' with 'edge cases of the codec hook deferred to
Project 2' (cross-type transitions, mode-flag downgrade policy).
Project 1 is now rebased onto tml-2397-cipherstash-contract-space,
which shipped the cipherstash control plane (descriptor, codec
lifecycle hook, contract-space artefacts, EQL bundle install via the
contract-space migration runner). Project 1 now delivers only the
runtime layer on top: envelope, SDK interface, codec encode/decode,
bulk-encrypt middleware, PSL constructor, TS factory, operator
lowering, end-to-end tests.

spec.md changes:
- New § Foundation section pinning TML-2397 as the architectural
  base.
- Replace § EQL bundle installation as an extension dependency
  (databaseDependencies.init) with § EQL bundle installation as a
  contract-space migration (the runner applies the bundle
  byte-for-byte inside the cipherstash space).
- Replace § Migration factories with § Per-column search config:
  emitted automatically by the codec lifecycle hook. The user no
  longer writes addSearchConfig / activatePendingSearches calls; the
  codec hook on TML-2397 emits add_search_config /
  remove_search_config / rotate ops per field-delta.
- Add AC-UMB8 (strict dbInit preserved) and AC-UMB9 (tree-shakable
  control vs runtime planes).
- Drop the migration-factories row from the task-spec status table.
- Update § In-flight dependencies: TML-2397 listed as foundation;
  PR #404 routing benefit realized via TML-2397; PR #409 already on
  the contract-spaces base.

plan.md changes:
- Restructured milestones: M0 (rebase) and M1 (framework SPI
  cherry-picks) marked DONE; M2 (cipherstash runtime layer) is the
  bulk of remaining work; M3 (operators + decryptAll + full e2e);
  M4 (close-out). Original M2.a/M2.b/M2.c/M3/M4 collapse into the
  new M2/M3.
- New § What survives, what dies, what's already done section gives
  a per-commit traceability matrix from PR #416 / part-2 → this
  branch.
- M2 task list replaces the old M2.c task list; T2.9 (codec hook
  flag-name alignment) is the only Project-1-side adjustment to
  TML-2397's cipherstash codec hook.
- M3 collapses the old M3+M4 (operator lowering + migration
  factories + decryptAll). Migration-factory tasks are removed; the
  codec hook on TML-2397 owns that surface.

specs/migration-factories.spec.md:
- Replaced with a redirect noting it is obsolete and pointing at the
  TML-2397 codec lifecycle hook. The original spec is preserved in
  git history on origin/tml-2373-project-1-part-2 for archaeology.
  The redirect file is deleted in M4 close-out.
Middleware `intercept` hook + framework SPI for content-addressed middleware

Refs: TML-2143
Summary
Lands the framework-level SPI from the cache-middleware project — everything a
cross-family interceptor needs to short-circuit query execution, but none of
the user-facing surface that consumes it. Specifically:

- An `intercept` hook on `RuntimeMiddleware` that lets middleware substitute
  rows for an execution without invoking the driver.
- A `source: 'driver' | 'middleware'` field on `AfterExecuteResult` so
  observers can tell driver-served executions from middleware-served ones.
- A `contentHash(exec)` method on `RuntimeMiddlewareContext` that returns a
  bounded, opaque digest identifying the `(storage, statement, params)` tuple
  of a lowered execution. Implemented for both SQL and Mongo runtimes.
- Helpers in `@prisma-next/utils` — `canonicalStringify` and `hashContent` —
  that the runtime `contentHash` implementations compose.

The cache middleware package and the user-facing `.annotate(...)` surface
(SQL DSL + ORM `Collection`) are deliberately not in this PR — they have
unresolved API questions and will land in a follow-up. After this PR a
hand-written `RuntimeMiddleware` can already intercept executions and key off
`ctx.contentHash(exec)`; we just don't expose any product feature that does
so.

Motivation
Today `RuntimeMiddleware` is observer-only: `beforeExecute`, `onRow`, and
`afterExecute` can inspect execution and raise errors, but they cannot
substitute a result. That blocks the whole class of interception use cases
listed in TML-2143 — caching, mocking, rate limiting, circuit breaking — and
forces those features to live outside the middleware pipeline as bespoke
wrappers around lanes or runtimes.
Two prerequisites already landed on main:

- The `beforeCompile` rewrite hook (TML-2306) — SQL middleware can rewrite
  the AST between lane `.build()` and `adapter.lower()`.
- `runtime-executor` was collapsed into `@prisma-next/framework-components`.
  Both `SqlRuntimeImpl` and `MongoRuntimeImpl` now extend
  `RuntimeCore<TPlan, TExec, TMiddleware>`, whose template is
  `runBeforeCompile → lower → runWithMiddleware`. That means
  `runWithMiddleware` is the single canonical implementation of the
  middleware lifecycle and both families inherit any hook added there.

This PR adds the second piece of the TML-2143 vision — short-circuiting with
static results — by introducing `intercept` in exactly one place
(`runWithMiddleware`). There is no per-family wiring; Mongo picks it up by
inheritance, and the cross-family integration test pins that.
What's in this PR

Framework-components — middleware SPI

- `RuntimeMiddleware.intercept?(plan, ctx)` — new optional hook. Returning
  `undefined` (or omitting the hook) is passthrough; returning an
  `InterceptResult` short-circuits execution. `rows` accepts arrays, sync
  generators, and async generators — `for await` natively handles all three
  via the `Symbol.asyncIterator` / `Symbol.iterator` fallback, so the
  orchestrator iterates without branching on the variant. Cached arrays are
  the common case; the streaming variants cover future use cases like mock
  layers replaying recordings.
- `AfterExecuteResult.source: 'driver' | 'middleware'` — required field.
  Telemetry middleware round-trips it on its `afterExecute` events. Existing
  in-repo `AfterExecuteResult` fixtures across `framework-components`,
  `sql-runtime`, and `middleware-telemetry` test suites are updated.
- `RuntimeMiddlewareContext.contentHash(exec)` — required, returns
  `Promise<string>`. Family runtimes own the implementation; middleware is
  authored against the framework type without family-specific imports or
  runtime probing. (Cross-family middleware that need a per-execution
  identity — caching, request coalescing — cannot synthesize one from a
  content-free `ExecutionPlan` marker, so the family runtime supplies the
  canonical key.) Async because the underlying digest helper uses
  WebCrypto's `crypto.subtle.digest`.

runWithMiddleware orchestrator wiring

The lifecycle becomes:

1. Run the `intercept` chain in registration order. First non-undefined
   result wins; subsequent middleware's `intercept` does not fire. On hit:
   emit `ctx.log.debug?({ event: 'middleware.intercept', middleware: mw.name })`,
   set `source = 'middleware'`, and use the intercepted rows as the row
   source. On all-passthrough: `source = 'driver'`, and the row source is
   `runDriver()`.
2. If `source === 'driver'`, run the `beforeExecute` chain. On the hit path
   it is skipped — `beforeExecute` semantically means "about to hit the
   driver."
3. Iterate the row source. On the driver path, fire `onRow` per row. On the
   hit path, skip `onRow` (intercepted rows did not originate from a driver
   row stream). Either way, yield each row to the consumer in order.
4. Fire `afterExecute` on success with
   `{ rowCount, latencyMs, completed: true, source }`.
5. Fire `afterExecute` on error with `completed: false` and the appropriate
   `source`. Errors thrown by `afterExecute` on the error path remain
   swallowed — original error rethrown. (Existing semantics, unchanged.)

`runDriver()` is invoked lazily — only after the intercept chain resolves to
passthrough. This matters for factories that do eager work on invocation
(acquiring a connection, sending a query): they must not run on the
intercepted hit path.
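The row-source flexibility the wiring relies on can be checked with a small standalone demo (nothing here is PR code; it only exercises the language behavior the text describes):

```typescript
// One `for await` loop consumes arrays, sync generators, and async
// generators without branching: it checks Symbol.asyncIterator first and
// falls back to Symbol.iterator.
type Row = Record<string, unknown>;

async function collect(rows: Iterable<Row> | AsyncIterable<Row>): Promise<Row[]> {
  const out: Row[] = [];
  for await (const row of rows) out.push(row);
  return out;
}

function* syncRows(): Generator<Row> {
  yield { n: 1 };
  yield { n: 2 };
}

async function* asyncRows(): AsyncGenerator<Row> {
  yield { n: 1 };
  yield { n: 2 };
}
```

All three variants produce the same collected result, which is why the orchestrator needs no per-variant plumbing.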
One source-tracking subtlety worth flagging for review: `source` is set to
`'middleware'` before awaiting an `intercept` hook, then reverted to
`'driver'` if the hook returns `undefined`. This makes a hook that throws
report as a middleware-source failure rather than a driver-source failure.
An earlier draft of the wiring set `source` only after a successful return,
which mis-attributed errors thrown inside `intercept`.

Family runtimes — contentHash implementations

SQL (`packages/2-sql/5-runtime/src/content-hash.ts`): lives in its own
module so the cache middleware (when it lands) can be tested with mock
contexts, but the SQL runtime owns the production implementation.
Deliberately does not reuse `computeSqlFingerprint` — that helper strips
literals to group executions by statement shape (used by telemetry), which
is the opposite of what `contentHash` needs: per-execution discrimination,
not per-statement-shape grouping.
Mongo (`packages/2-mongo-family/7-runtime/src/content-hash.ts`): two
components instead of three. `MongoExecutionPlan.command` is the wire
command itself, so canonicalizing it captures both structure (collection,
kind) and parameters (document/filter/update/pipeline) in one pass. No
separate "rendered statement" component is needed.
@prisma-next/utils — new modules

`canonical-stringify.ts` — deterministic JSON-like serializer. Two values
that are structurally equivalent (regardless of object key insertion order)
produce the same string; values that differ in any meaningful way —
including types JSON would conflate (`BigInt(1)` vs `1`, `+0` vs `-0`,
`null` vs `undefined`, `Date` vs ISO string, `Buffer` vs number array) —
produce different strings. Throws on functions, symbols, and circular
references.
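A serializer with these discrimination properties could be sketched as follows (the tag prefixes like "bigint:" and "date:" are invented for illustration and are not the shipped wire format):

```typescript
// Sketch: deterministic, type-discriminating serializer. Tag prefixes are
// illustrative assumptions, not the real module's format.
function canonicalStringify(value: unknown, seen = new Set<object>()): string {
  if (value === null) return 'null';
  if (value === undefined) return 'undefined'; // distinct from null
  switch (typeof value) {
    case 'string': return JSON.stringify(value);
    case 'number': return Object.is(value, -0) ? '-0' : String(value); // keeps +0 vs -0 apart
    case 'boolean': return String(value);
    case 'bigint': return `bigint:${value}`; // BigInt(1) differs from number 1
    case 'function':
    case 'symbol': throw new TypeError(`cannot canonicalize a ${typeof value}`);
  }
  const obj = value as object;
  if (seen.has(obj)) throw new TypeError('circular reference');
  seen.add(obj);
  try {
    if (obj instanceof Date) return `date:${obj.toISOString()}`; // Date vs plain ISO string
    if (obj instanceof Uint8Array) {
      const hex = Array.from(obj).map(b => b.toString(16).padStart(2, '0')).join('');
      return `bytes:${hex}`; // Buffer/Uint8Array vs a plain number array
    }
    if (Array.isArray(obj)) return `[${obj.map(v => canonicalStringify(v, seen)).join(',')}]`;
    const rec = obj as Record<string, unknown>;
    // Key-sort so structurally equivalent objects serialize identically.
    const entries = Object.keys(rec).sort().map(
      k => `${JSON.stringify(k)}:${canonicalStringify(rec[k], seen)}`,
    );
    return `{${entries.join(',')}}`;
  } finally {
    seen.delete(obj); // allow shared (non-circular) references
  }
}
```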
`hash-content.ts` — `hashContent(value: string): Promise<string>`, returning
strings of the form `sha512:<hex>`. Wraps WebCrypto's
`crypto.subtle.digest('SHA-512', …)`. Output is fixed-length (135 chars)
regardless of input size and self-describing (the `sha512:` prefix lets a
future migration to a different hash produce visibly distinct keys, and is
consistent with the `sha256:HEXDIGEST` shape already used by
`meta.storageHash`).

Two reasons the canonical string is hashed rather than used directly as a
`Map` key: (a) bounded memory — a query bound to a 10 MB JSON column would
otherwise produce a 10 MB cache key, scaling to gigabytes at
`maxEntries=1000`; (b) sensitive-data isolation — parameter values appear
verbatim in the canonical string and would otherwise leak into debug logs,
Redis `KEYS` / `MONITOR` output, persistence dumps, and any user-supplied
`CacheStore` implementation.

WebCrypto over `node:crypto` for portability — Deno, Bun, browsers, and edge
workers all expose `globalThis.crypto.subtle` without a Node-specific
import.
What's not in this PR (follow-ups)

These are explicitly out of scope here. They have unresolved API questions
that I'd rather not resolve under the same review.

- `defineAnnotation` + `ValidAnnotations` + `assertAnnotationsApplicable` —
  the typed annotation surface with operation-kind applicability
  (`'read'` / `'write'`). This is the gate that makes "caching a mutation"
  structurally impossible, but the precise shape of the handle, the brand
  mechanism, and the namespace-reservation policy are still in flux.
- `.annotate(...)` on the SQL DSL builders and the ORM `Collection`
  terminals. Blocked on the previous bullet.
- The `@prisma-next/middleware-cache` package — `cacheAnnotation`,
  `CacheStore` interface, in-memory LRU default, transaction-scope guard.
  This is the eventual proof point and the April stop condition for VP4.

The full plan is in `projects/middleware-intercept-and-cache/plan.md`
(included as docs in this PR). Follow-up refactorings surfaced during this
work are recorded in `projects/middleware-intercept-and-cache/follow-ups.md`
— most notably the suggestion to drop the `Row` generic from
`runWithMiddleware` and consolidate the `as Row` cast to a single site in
`RuntimeCore.execute`. Deferred to keep this PR's review surface focused.

Test coverage
Unit — @prisma-next/utils

- `canonical-stringify.test.ts` — primitives (incl. `NaN`, `±Infinity`,
  `±0`), `BigInt` (`n` suffix vs `number`), `Date` (tagged + ISO),
  `Buffer` / `Uint8Array` (tagged + hex), arrays (order-preserving), plain
  objects (key-sorted, recursive), `null` vs `undefined`, throws on
  functions / symbols / circular refs.
- `hash-content.test.ts` — fixed-length `sha512:` output, deterministic
  across repeated calls, discriminates on trailing characters and separator
  placement, output does not embed input (opacity), UTF-8 multi-byte
  handling, Unicode normalization is byte-significant (NFC vs NFD).
Unit — framework-components

- `runtime-middleware.types.test-d.ts` — `intercept` is optional;
  `InterceptResult.rows` accepts arrays, sync generators, async generators;
  rejects rows whose elements are not `Record<string, unknown>` (negative
  test); `intercept` narrows alongside other hooks when `TPlan` is narrowed
  (e.g. `SqlMiddleware` sees `plan.sql` / `plan.params`).
- `run-with-middleware.intercept.test.ts` — 17 cases across chain semantics
  (3), hit path (7), miss path (3), and error path (4). Highlights:
  - First non-undefined result wins; subsequent `intercept` does not fire.
  - When a downstream interceptor wins, the upstream observer's
    `beforeExecute` is not called either.
  - The hit path skips `beforeExecute` and `onRow`; `afterExecute` fires
    once with `source: 'middleware'` and correct `rowCount`; the `runDriver`
    factory is not invoked.
  - Row-source variants (arrays, sync `Iterable`, `AsyncIterable`) work
    without orchestrator branching.
  - An interceptor that throws → `afterExecute` fires with
    `completed: false`, `source: 'middleware'`, error rethrown;
    `afterExecute`-during-error swallow semantics preserved.
  - An error while iterating intercepted rows → `rowCount` reflects rows
    yielded before the throw.
- The existing 6 `runWithMiddleware` tests (zero / single / multi
  middleware, error paths, swallow, partial-hook) all continue to pass; the
  one assertion against `observedResult` now sees `source: 'driver'`
  explicitly.
Unit — family runtimes

- `sql-runtime/test/content-hash.test.ts` — stability (same plan / repeated
  invocations / object-key order in params), discrimination (`storageHash`,
  `sql`, param values, param position, `BigInt` vs `number`, `null` vs
  `undefined`, `Date` instants, `Buffer` bytes), shape
  (`sha512:[0-9a-f]{128}`), opacity (no raw SQL or params embedded).
- `mongo-runtime/test/content-hash.test.ts` — analogous: stability over
  shuffled document/filter key order, discrimination on collection name,
  command kind, document/filter values, aggregate pipeline content and
  stage order, shape, opacity.
Integration

- `sql-runtime/test/intercept-decoding.test.ts` — when an `SqlMiddleware`
  intercepts and returns raw rows, those rows go through the SQL runtime's
  normal codec decode pass (single row, multiple rows, async-iterable rows,
  byte-identical decoded output vs driver-served rows for the same wire
  value). This is the contract that lets the eventual cache middleware
  store raw rows cheaply: cache the raw, decode on the way out.
- `test/integration/test/cross-package/cross-family-middleware.test.ts` —
  registers a single generic interceptor on both an SQL runtime
  (`MockSqlRuntime` extending `RuntimeCore`) and a real `MongoRuntimeImpl`,
  and asserts that the interceptor short-circuits execution in both
  families via the same `runWithMiddleware` orchestrator. The Mongo runtime
  needs no production change to pick this up; the test pins that.
Risk and migration

- The new `contentHash` field on `RuntimeMiddlewareContext` is required, but
  the type is internal to runtime authors — every concrete construction
  site in this repo is updated, and the typechecker enforces completeness
  for out-of-tree authors. The new `source` field on `AfterExecuteResult`
  is also required; tests using `toMatchObject` continue to pass without
  changes, and the few sites constructing `AfterExecuteResult` directly are
  updated.
- Existing unit tests pass without modification. The only observable change
  for middleware that does not implement `intercept` is the new
  `source: 'driver'` value in `AfterExecuteResult`.
- The runtimes depend on `@prisma-next/utils` for the shared serializer /
  hasher (it was already a SQL-runtime dep); no third-party additions.
Sequencing

This is M1 of `projects/middleware-intercept-and-cache`. Once it lands, the
follow-up PR will add the `defineAnnotation` surface (M1.7+), the lane
annotation builders (M2), and the `@prisma-next/middleware-cache` package
(M3) — at which point the April stop condition for TML-2143 / WS3 VP4 is
met (a repeated query served from cache without hitting the database).