Conversation


@ymc9 ymc9 commented Oct 22, 2025

Summary by CodeRabbit

  • Documentation
    • Added an ORM performance benchmark page comparing ZenStack, Prisma, Drizzle, and TypeORM on local and remote PostgreSQL. Includes methodology, configuration notes, iteration/dataset details, millisecond and relative-percent results, reading guidance, and observations on cold-start overhead and transactional behavior impacting mutation performance.


vercel bot commented Oct 22, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

| Project | Deployment | Preview | Comments | Updated (UTC) |
|---|---|---|---|---|
| zenstack-new-site | ✅ Ready | Preview | Comment | Oct 22, 2025 8:30pm |


coderabbitai bot commented Oct 22, 2025

Walkthrough

Adds a new versioned documentation page at version-3.x describing an ORM performance benchmark comparing ZenStack, Prisma, Drizzle, and TypeORM, including setup/config notes, measured results for local and remote PostgreSQL, and observations about benchmarking caveats and performance characteristics.

Changes

| Cohort / File(s) | Summary |
|---|---|
| Documentation - ORM Benchmark: `versioned_docs/version-3.x/orm/benchmark.md` | New documentation page containing benchmark metadata, overview, repository/configuration notes, result tables for Local and Remote PostgreSQL (iterations, dataset sizes, per-operation timings for zenstack/prisma/drizzle/typeorm), and an Observations section covering cold-start overhead and transactional behavior. |

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~2 minutes

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)

| Check name | Status | Explanation |
|---|---|---|
| Description Check | ✅ Passed | Check skipped; CodeRabbit's high-level summary is enabled. |
| Title Check | ✅ Passed | The title follows conventional commit conventions with the "doc:" prefix and succinctly describes the primary change: adding a performance benchmark documentation page. |
| Docstring Coverage | ✅ Passed | No functions found in the changes; docstring coverage check skipped. |


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (7)
versioned_docs/version-3.x/orm/benchmark.md (7)

10-10: Tense agreement.

“maintains … and included” mixes present/past. Use present.

-ZenStack maintains a fork of [prisma/orm-benchmarks](https://github.com/prisma/orm-benchmarks) and included ZenStack v3 into the test matrix.
+ZenStack maintains a fork of [prisma/orm-benchmarks](https://github.com/prisma/orm-benchmarks) and includes ZenStack v3 in the test matrix.

20-21: Document exact versions and flags for reproducibility.

Please add Prisma/Node/PG versions and how relationJoins was enabled.

 The Prisma tests are run with the new (Rust-free) [prisma-client] ... preview feature.
 We believe this aligns better with how the majority of users will use Prisma going forward.
+  
+Note the exact setup used for these results:
+- Prisma version: <x.y.z>, prisma-client generator: <version>, relationJoins: enabled via `previewFeatures = ["relationJoins"]`
+- Node.js: <version>; OS: <name/version>
+- PostgreSQL server: <version>; client driver: <version>

30-34: Add “Test Environment” details for Local runs.

Hardware/OS and tool versions materially affect numbers; include them here.

 > Tests are run against a PostgreSQL database in a local Docker container.
 
 Iteration count: 100  
 Dataset size: 500
+
+#### Test Environment (Local)
+- Machine: <CPU model>, <cores/threads>, RAM <GB>
+- OS: <name/version>
+- Docker: <version>; Image: postgres:<tag>
+- Node.js: <version>
+- ZenStack: <version>; Prisma/Drizzle/TypeORM: <versions>

44-54: Clarify client location and network for Remote runs.

Add client region and measured RTT to Neon; network dominates here.

 > Tests are run from a local machine against a PostgreSQL database from [Neon](https://neon.com/) in the us-east-1 region.
 
 Iteration count: 20  
 Dataset size: 500
+
+Notes:
+- Client location: <city/region>
+- Average ping/RTT to us-east-1 during tests: <ms> (±<ms>)
+- Connection pooling: <enabled/disabled>, pool size: <n>

58-61: Qualify the “cold start” claim.

Please note Zod version and whether the 20ms is p50/p95 and per-model or per-process. Helps readers contextualize.


64-66: Tighten wording; address LanguageTool hint.

Use “Currently” and avoid contractions; clarify the DB behavior.

-Right now, ZenStack wraps every mutation in an explicit transaction. Although it doesn't make a real difference on the database side because PostgreSQL will do it anyway behind the scenes, it adds communication overhead between the client and the database server.
+Currently, ZenStack wraps every mutation in an explicit transaction. Although this typically makes no difference on the database side (PostgreSQL already wraps statements in a transaction implicitly), it adds client–server round‑trip overhead.
 
-There'll be optimizations for this in the future.
+There will be optimizations for this in the future.

26-26: Add “Last updated” and benchmark commit SHA.

Time-bound benchmarks age quickly. Record the date and benchmark repo commit for traceability.

 ## Results
+
+_Last updated: 2025-10-22; Benchmarks repo commit: `<short-sha>`_
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2decd00 and bffce94.

📒 Files selected for processing (1)
  • versioned_docs/version-3.x/orm/benchmark.md (1 hunks)
🧰 Additional context used
🪛 LanguageTool
versioned_docs/version-3.x/orm/benchmark.md

[style] ~63-~63: For conciseness, consider replacing this expression with an adverb.
Context: ... ZenStack Slower for Simple Mutations? Right now, ZenStack wraps every mutation in an ex...

(AT_THE_MOMENT)

🪛 markdownlint-cli2 (0.18.1)
versioned_docs/version-3.x/orm/benchmark.md

16-16: Bare URL used

(MD034, no-bare-urls)

@ymc9 ymc9 merged commit 1f444c8 into main Oct 22, 2025
3 of 4 checks passed
@ymc9 ymc9 deleted the doc/perf-page branch October 22, 2025 20:31

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (6)
versioned_docs/version-3.x/orm/benchmark.md (6)

8-13: Add methodology and environment for reproducibility.

Briefly capture run date, commit SHAs, versions, hardware, OS, Node, and Postgres to make results comparable.

 ## Overview
 
 ZenStack maintains a fork of [prisma/orm-benchmarks](https://github.com/prisma/orm-benchmarks) and included ZenStack v3 into the test matrix. This page will be periodically updated with the latest benchmark results.
 
 Please understand ORM performance is a complex topic because different applications may have very different database configurations, data patterns and query profiles. The benchmark results should be used to understand if things are **in the right ballpark**, rather than exact performance numbers you will get in your application.
+
+### Methodology & Environment
+
+- Run date: 2025-10-22
+- Benchmark repo commit: <commit-hash>
+- Tool versions: Node <x.y.z>, Prisma <x.y.z>, Drizzle <x.y.z>, TypeORM <x.y.z>, ZenStack <x.y.z>
+- Database: PostgreSQL <x.y> (local Docker / Neon), schema per orm-benchmarks
+- Hardware/OS: <CPU>, <RAM>, <OS/version>
+- Procedure: N warmup runs discarded; results are mean ms/op over M iterations (see sections below)

24-25: Clarify percent sign convention to avoid ± confusion.

Spell out “x% faster/slower vs ZenStack” instead of relying on sign.

-The numbers shown are in milliseconds per operation; lower is better. ZenStack's numbers are used as a baseline and compared against other ORMs. The percentage numbers in parentheses show how much faster (negative) or slower (positive) the other ORMs are compared to ZenStack.
+The numbers are milliseconds per operation (lower is better). ZenStack is the baseline; percentages in parentheses state relative change, e.g., “15% faster vs ZenStack” or “27% slower vs ZenStack”.
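For illustration only (this helper is hypothetical, not part of the benchmark repo), the stated convention amounts to a signed relative change against the ZenStack baseline:

```typescript
// Relative change vs. the ZenStack baseline, as used in the result tables:
// negative => faster than ZenStack, positive => slower.
function relativePercent(baselineMs: number, otherMs: number): number {
  return ((otherMs - baselineMs) / baselineMs) * 100;
}

// Example: a 0.85 ms operation vs. a 1.00 ms baseline is 15% faster.
console.log(relativePercent(1.0, 0.85).toFixed(0) + "%"); // prints "-15%"
```

Note that recomputing the percentages from the published (rounded) millisecond figures may differ slightly from the table, which is presumably derived from unrounded timings.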

58-61: Tighten wording on Zod cold-start; avoid “JIT compilation”.

Zod initializes/caches parsing logic on first use; “JIT compilation” can be misleading.

-While not readily observable in the numbers, ZenStack has a higher cold start overhead due to the usage of [Zod](https://zod.dev/) for input validation. Zod does JIT compilation of schemas on the first run. This overhead is amortized over multiple operations. The worst-case cold start overhead observed is around 20ms in the test environment.
+While not readily observable in the numbers, ZenStack has higher cold‑start overhead due to [Zod](https://zod.dev/) input validation. Zod initializes and caches schema parsing logic on first use, so the cost is amortized over subsequent operations. In our tests, the worst‑case cold‑start overhead was ~20 ms.
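The amortization pattern described here can be sketched in isolation (a hypothetical cache, not ZenStack's or Zod's actual implementation): the expensive work runs once per model on first use, and subsequent operations hit the cache.

```typescript
// Hypothetical sketch of first-use amortization: building a validator is
// expensive once per model; later calls reuse the cached instance.
type Validator = (input: unknown) => boolean;

const validatorCache = new Map<string, Validator>();
let buildCount = 0; // tracks how often the expensive path runs

function getValidator(model: string, build: () => Validator): Validator {
  let v = validatorCache.get(model);
  if (v === undefined) {
    buildCount++; // cold-start cost is paid here, once per model
    v = build();
    validatorCache.set(model, v);
  }
  return v;
}

// Simulate 100 operations against the same model: the build runs only once.
for (let i = 0; i < 100; i++) {
  const validate = getValidator(
    "User",
    () => (input: unknown) => input !== null && typeof input === "object",
  );
  validate({ name: "test" });
}
console.log(buildCount); // prints 1
```

This is why a one-time ~20 ms cost disappears from per-operation averages taken over many iterations.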

62-66: Minor style nits: use “Currently”; avoid contractions; tighten phrasing.

Also aligns with the style hint. As per static analysis hints.

-Right now, ZenStack wraps every mutation in an explicit transaction. Although it doesn't make a real difference on the database side because PostgreSQL will do it anyway behind the scenes, it adds communication overhead between the client and the database server.
-
-There'll be optimizations for this in the future.
+Currently, ZenStack wraps every mutation in an explicit transaction. Although PostgreSQL would do this implicitly, the explicit transaction adds client–server round‑trip overhead.
+
+We plan to optimize this behavior.
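The round-trip cost can be illustrated with a mock client (hypothetical; real drivers may batch or pipeline statements differently):

```typescript
// Hypothetical mock showing why an explicit transaction adds round trips:
// the same INSERT costs three client-server exchanges instead of one.
class MockDbClient {
  roundTrips = 0;
  query(_sql: string): void {
    this.roundTrips++; // each statement counts as one client-server exchange
  }
}

const implicit = new MockDbClient();
implicit.query("INSERT INTO post (title) VALUES ('hi')"); // autocommit

const explicit = new MockDbClient();
explicit.query("BEGIN");
explicit.query("INSERT INTO post (title) VALUES ('hi')");
explicit.query("COMMIT");

console.log(implicit.roundTrips, explicit.roundTrips); // prints 1 3
```

Against a remote database each extra exchange costs roughly one network RTT, which is consistent with the mutation gaps being more visible in the Neon table than in the local one.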

1-6: Optional frontmatter hardening.

Consider adding a stable slug for durable links.

 ---
 sidebar_position: 18
 description: ORM performance benchmark
+slug: /orm/benchmark
 ---

20-21: Consider pinning explicit versions referenced in config notes.

Readers benefit from knowing exact Prisma generator and preview feature versions used.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between bffce94 and 913110b.

📒 Files selected for processing (1)
  • versioned_docs/version-3.x/orm/benchmark.md (1 hunks)
🧰 Additional context used
🪛 LanguageTool
versioned_docs/version-3.x/orm/benchmark.md

[style] ~63-~63: For conciseness, consider replacing this expression with an adverb.
Context: ... ZenStack Slower for Simple Mutations? Right now, ZenStack wraps every mutation in an ex...

(AT_THE_MOMENT)

🔇 Additional comments (1)
versioned_docs/version-3.x/orm/benchmark.md (1)

28-41: Add median and p95 metrics to benchmark results
Include median and p95 values to show distribution, not just means. The example script failed to locate results.json—please verify the correct path to your raw timings file or adjust the command accordingly.

Comment on lines +1 to +66
---
sidebar_position: 18
description: ORM performance benchmark
---

# Performance Benchmark

## Overview

ZenStack maintains a fork of [prisma/orm-benchmarks](https://github.com/prisma/orm-benchmarks) and included ZenStack v3 into the test matrix. This page will be periodically updated with the latest benchmark results.

Please understand ORM performance is a complex topic because different applications may have very different database configurations, data patterns and query profiles. The benchmark results should be used to understand if things are **in the right ballpark**, rather than exact performance numbers you will get in your application.

### Repository

[zenstackhq/orm-benchmarks](https://github.com/zenstackhq/orm-benchmarks)

### Configuration Notes

The Prisma tests are run with the new (Rust-free) [prisma-client](https://www.prisma.io/docs/orm/prisma-schema/overview/generators) generator and with the [relationJoins](https://www.prisma.io/docs/orm/prisma-client/queries/relation-queries#relation-load-strategies-preview) preview feature. We believe this aligns better with how the majority of users will use Prisma going forward.

### How to Read the Results

The numbers shown are in milliseconds per operation; lower is better. ZenStack's numbers are used as a baseline and compared against other ORMs. The percentage numbers in parentheses show how much faster (negative) or slower (positive) the other ORMs are compared to ZenStack.

## Results

### Local PostgreSQL

> Tests are run against a PostgreSQL database in a local Docker container.

Iteration count: 100
Dataset size: 500

|ORM|findMany|findMany-filter-paginate-order|findMany-1-level-nesting|findFirst|findFirst-1-level-nesting|findUnique|findUnique-1-level-nesting|create|nested-create|update|nested-update|upsert|nested-upsert|delete|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
|zenstack|3.62|1.23|118.28|1.08|1.10|0.64|0.95|2.10|4.48|1.70|2.64|1.42|2.53|2.02|
|prisma|3.04 (-15.95%)|1.35 (+9.86%)|134.43 (+13.66%)|1.37 (+26.98%)|1.53 (+39.37%)|0.97 (+52.42%)|1.53 (+61.20%)|1.84 (-12.67%)|4.98 (+11.30%)|1.22 (-27.88%)|3.10 (+17.41%)|2.70 (+90.26%)|2.74 (+8.18%)|1.53 (-24.27%)|
|drizzle|8.42 (+132.62%)|0.97 (-21.18%)|94.88 (-19.78%)|1.09 (+1.39%)|1.15 (+4.82%)|0.74 (+15.88%)|1.15 (+21.70%)|1.61 (-23.57%)|3.72 (-16.84%)|0.88 (-47.98%)|2.25 (-14.80%)|0.77 (-45.54%)|2.08 (-17.96%)|1.28 (-36.74%)|
|typeorm|1.73 (-52.10%)|0.73 (-40.98%)|23.24 (-80.35%)|0.87 (-19.29%)|1.30 (+18.26%)|0.37 (-42.66%)|1.06 (+11.64%)|1.80 (-14.13%)|2.80 (-37.41%)|0.51 (-69.89%)|1.41 (-46.69%)|1.60 (+12.60%)|2.02 (-20.07%)|0.91 (-54.96%)|

### Remote PostgreSQL

> Tests are run from a local machine against a PostgreSQL database from [Neon](https://neon.com/) in the us-east-1 region.

Iteration count: 20
Dataset size: 500

|ORM|findMany|findMany-filter-paginate-order|findMany-1-level-nesting|findFirst|findFirst-1-level-nesting|findUnique|findUnique-1-level-nesting|create|nested-create|update|nested-update|upsert|nested-upsert|delete|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
|zenstack|266.88|89.09|379.77|83.93|89.51|87.56|89.48|261.96|719.56|257.24|438.89|264.01|430.05|256.50|
|prisma|349.80 (+31.07%)|173.51 (+94.76%)|485.73 (+27.90%)|171.33 (+104.15%)|179.38 (+100.41%)|173.32 (+97.95%)|180.36 (+101.57%)|175.17 (-33.13%)|1043.86 (+45.07%)|174.61 (-32.12%)|681.50 (+55.28%)|613.88 (+132.52%)|603.02 (+40.22%)|178.72 (-30.32%)|
|drizzle|1067.94 (+300.16%)|177.13 (+98.83%)|284.89 (-24.98%)|174.92 (+108.42%)|178.58 (+99.51%)|176.94 (+102.08%)|180.89 (+102.16%)|183.68 (-29.88%)|716.85 (-0.38%)|179.00 (-30.42%)|532.09 (+21.24%)|176.24 (-33.25%)|540.86 (+25.77%)|180.24 (-29.73%)|
|typeorm|268.19 (+0.49%)|87.74 (-1.51%)|473.78 (+24.76%)|86.04 (+2.52%)|182.50 (+103.90%)|88.06 (+0.57%)|175.96 (+96.66%)|265.91 (+1.51%)|436.68 (-39.31%)|88.39 (-65.64%)|348.48 (-20.60%)|355.76 (+34.75%)|443.17 (+3.05%)|88.53 (-65.48%)|

## Observations

### Cold Start Overhead

While not readily observable in the numbers, ZenStack has a higher cold start overhead due to the usage of [Zod](https://zod.dev/) for input validation. Zod does JIT compilation of schemas on the first run. This overhead is amortized over multiple operations. The worst-case cold start overhead observed is around 20ms in the test environment.

### Why Is ZenStack Slower for Simple Mutations?

Right now, ZenStack wraps every mutation in an explicit transaction. Although it doesn't make a real difference on the database side because PostgreSQL will do it anyway behind the scenes, it adds communication overhead between the client and the database server.

There'll be optimizations for this in the future.


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

Quick lint sweep script (optional).

Run to catch bare URLs, trailing spaces, and repeated headings before merge.


🏁 Script executed:

#!/bin/bash
# Trailing spaces (hard breaks)
rg -nP '  $' versioned_docs/version-3.x/orm/benchmark.md
# Bare URLs (MD034)
rg -nP 'https?://\S+' versioned_docs/version-3.x/orm/benchmark.md | rg -v '\]\('
# Heading style sanity
rg -nP '^\s*#' versioned_docs/version-3.x/orm/benchmark.md

Length of output: 333


Remove trailing spaces on lines 32 and 46. Each instance introduces unintended hard breaks in the rendered Markdown.

🧰 Tools
🪛 LanguageTool

[style] ~63-~63: For conciseness, consider replacing this expression with an adverb.
Context: ... ZenStack Slower for Simple Mutations? Right now, ZenStack wraps every mutation in an ex...

(AT_THE_MOMENT)

🤖 Prompt for AI Agents
versioned_docs/version-3.x/orm/benchmark.md around lines 32 and 46: those lines
contain trailing spaces that force unintended hard line breaks in the rendered
Markdown; remove the trailing whitespace at the end of line 32 and line 46 (and
optionally run a trim-whitespace or pre-commit hook across the file to ensure no
other trailing spaces remain).

Comment on lines +32 to +34
Iteration count: 100
Dataset size: 500



⚠️ Potential issue | 🟡 Minor

Replace hard line-break spaces to satisfy markdownlint (MD009).

Use list items (or an explicit `<br/>`) instead of trailing double spaces.

-Iteration count: 100  
-Dataset size: 500
+- Iteration count: 100
+- Dataset size: 500
🤖 Prompt for AI Agents
In versioned_docs/version-3.x/orm/benchmark.md around lines 32 to 34, the two
trailing spaces at the ends of the "Iteration count: 100  " and "Dataset size:
500" lines trigger markdownlint MD009; remove the hard line-break spaces and
rewrite the two lines as proper Markdown (e.g., convert to a bulleted list, or
join into a single paragraph, or add an explicit <br/> tag) so that line breaks
are represented without trailing double spaces.

Comment on lines +46 to +47
Iteration count: 20
Dataset size: 500


⚠️ Potential issue | 🟡 Minor

Same hard line-break issue here.

Switch to list form for consistency and linting.

-Iteration count: 20  
-Dataset size: 500
+- Iteration count: 20
+- Dataset size: 500
🤖 Prompt for AI Agents
versioned_docs/version-3.x/orm/benchmark.md around lines 46 to 47: the document
currently uses two hard line-breaks ("Iteration count: 20  " and "Dataset size:
500") which breaks consistency and fails linting; replace these two hard
line-break lines with a proper markdown list (e.g., a bulleted or numbered list)
containing "Iteration count: 20" and "Dataset size: 500" so the content is
consistent with surrounding docs and passes lint checks.
