Zero is amazing, but getting started takes real setup: running Postgres, approving native SQLite build scripts, and then configuring the two to work together.
oreZ is an experiment in making Zero work on PGlite and SQLite-wasm, packaged together so running them is as simple as possible. It's intended as a dev-mode tool, with a CLI, programmatic API, and Vite plugin.
```
bunx orez
```
What oreZ handles automatically:
- Memory management — auto-sizes Node heap based on system RAM, purges consumed WAL, batches restores with CHECKPOINTs to prevent WASM OOM
- Real-time replication — changes sync instantly via pg_notify triggers, with adaptive polling as fallback; tracks configured/public tables and shard `clients` tables
- Auto-recovery — finds available ports if configured ones are busy and provides reset/restart controls for zero-cache state
- PGlite compatibility — rewrites unsupported queries, fakes wire protocol responses, filters unsupported column types, cleans session state between connections
- Admin dashboard — live zero-cache logs, restart/reset controls, connection info (`--admin`)
- Production restores — `pg_dump`/`pg_restore` with COPY→INSERT conversion, skips unsupported extensions, handles oversized rows, auto-restarts zero-cache
- Zero-cache workarounds — fixes concurrent COPY bug, disables query planner (WASM infinite loop), respects publication filtering
- Extensions — pgvector and pg_trgm enabled by default
```
bunx orez
  --pg-port=6434          postgresql proxy port
  --zero-port=5849        zero-cache port
  --data-dir=.orez        data directory
  --migrations=DIR        migrations directory (skipped if not set)
  --seed=FILE             seed file path
  --pg-user=user          postgresql user
  --pg-password=password  postgresql password
  --skip-zero-cache       run pglite + proxy only, skip zero-cache
  --log-level=warn        error, warn, info, debug
  --s3                    also start a local s3-compatible server
  --s3-port=9200          s3 server port
  --disable-wasm-sqlite   use native @rocicorp/zero-sqlite3 instead of wasm bedrock-sqlite
  --on-db-ready=CMD       command to run after db+proxy are ready, before zero-cache starts
  --on-healthy=CMD        command to run once all services are healthy
  --admin                 start admin dashboard (logs, env, restart/reset zero-cache)
  --admin-port=6477       admin dashboard port
```
Ports auto-increment if already in use.
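A typical dev invocation might look like this (the paths are illustrative):

```
bunx orez --migrations=src/database/migrations --seed=src/database/seed.sql --admin
```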
Start the admin dashboard with `--admin`:

```
bunx orez --admin
```
Open http://localhost:6477 for a real-time dashboard with:
- Logs — live-streaming logs from zero-cache, filterable by source and level
- Env — environment variables passed to zero-cache
- Actions — restart zero-cache, reset (wipe replica + resync), clear logs
The dashboard polls every second for new logs and updates uptime/port status every 5 seconds.
```
bun install orez
```

```ts
import { startZeroLite } from 'orez'

const { config, stop, db, instances } = await startZeroLite({
  pgPort: 6434,
  zeroPort: 5849,
  migrationsDir: 'src/database/migrations',
  seedFile: 'src/database/seed.sql',
})

// your app connects to zero-cache at localhost:5849
// database is at postgresql://user:password@localhost:6434/postgres
// db is the postgres PGlite instance (for direct queries)
// instances has all three: { postgres, cvr, cdb }

// when done
await stop()
```

All options are optional with sensible defaults. Ports auto-find if in use.
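Since `db` is the upstream PGlite instance, you can query it directly. A minimal sketch assuming PGlite's standard `query` API:

```ts
// direct query against the upstream PGlite instance
const { rows } = await db.query(`SELECT count(*)::int AS tables FROM pg_tables`)
console.log(rows[0]) // e.g. { tables: 68 }
```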
| Hook | CLI | Programmatic | When |
|---|---|---|---|
| on-db-ready | `--on-db-ready=CMD` | `onDbReady: 'CMD'` or `onDbReady: fn` | after db + proxy ready, before zero-cache |
| on-healthy | `--on-healthy=CMD` | `onHealthy: 'CMD'` or `onHealthy: fn` | after all services ready |
Hooks can be shell command strings (CLI) or callback functions (programmatic). Shell commands receive env vars: DATABASE_URL, OREZ_PG_PORT, OREZ_ZERO_PORT. Change tracking triggers are re-installed after onDbReady, so tables created by hooks are tracked.
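For example, a shell hook can run a script against the fresh database using those env vars. A minimal sketch, assuming the `postgres` driver and a hypothetical script path (any client that speaks the wire protocol works):

```ts
// scripts/extra-setup.ts (hypothetical), run via:
//   bunx orez --on-db-ready='bun scripts/extra-setup.ts'
import postgres from 'postgres'

// oreZ exports DATABASE_URL (plus OREZ_PG_PORT / OREZ_ZERO_PORT) to hook commands
const sql = postgres(process.env.DATABASE_URL!)
await sql`CREATE TABLE IF NOT EXISTS app_flags (name text PRIMARY KEY, value boolean)`
await sql.end()
```

A table created this way is still tracked, because the change-tracking triggers are re-installed after onDbReady completes.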
```ts
import { orezPlugin } from 'orez/vite'

export default {
  plugins: [
    orezPlugin({
      pgPort: 6434,
      zeroPort: 5849,
      migrationsDir: 'src/database/migrations',
      // lifecycle hooks (optional)
      onDbReady: () => console.log('db ready'),
      onHealthy: () => console.log('all services healthy'),
    }),
  ],
}
```

Starts oreZ when the Vite dev server starts and stops it on close. Supports all `startZeroLite` options plus `s3` and `s3Port` for local S3.
oreZ starts three things:
- Three PGlite instances (full PostgreSQL 16 running in-process via WASM) — one for each database zero-cache expects (upstream, CVR, change)
- A TCP proxy that speaks the PostgreSQL wire protocol, routing connections to the correct PGlite instance and handling logical replication
- A zero-cache child process that connects to the proxy thinking it's a real Postgres server
zero-cache expects three separate databases: postgres (app data), zero_cvr (client view records), and zero_cdb (change-streamer state). In real PostgreSQL these are independent databases with separate connection pools and transaction contexts.
oreZ creates a separate PGlite instance for each database, each with its own data directory and mutex. This is critical because PGlite is single-session — all proxy connections to the same instance share one session. Without isolation, transactions on the CVR database get corrupted by queries on the postgres database (zero-cache's view-syncer detects this as ConcurrentModificationException and crashes). Separate instances eliminate cross-database interference entirely.
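In outline, the isolation looks like this (a sketch using PGlite's standard constructor; the paths are illustrative):

```ts
import { PGlite } from '@electric-sql/pglite'

// one isolated instance per database zero-cache expects,
// each with its own data directory and its own mutex
const instances = {
  postgres: new PGlite('.orez/pgdata-postgres'), // app data
  cvr: new PGlite('.orez/pgdata-cvr'),           // client view records
  cdb: new PGlite('.orez/pgdata-cdb'),           // change-streamer state
}
```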
The proxy routes connections based on the database name in the startup message:
| Connection database | PGlite instance | Data directory |
|---|---|---|
| `postgres` | postgres | `pgdata-postgres` |
| `zero_cvr` | cvr | `pgdata-cvr` |
| `zero_cdb` | cdb | `pgdata-cdb` |
Each instance has its own mutex for serializing queries. Extensions (pgvector, pg_trgm) and app migrations only run on the postgres instance.
zero-cache needs logical replication to stay in sync with the upstream database. PGlite doesn't support logical replication natively, so oreZ fakes it. Every mutation is captured by triggers into a changes table, then encoded into the pgoutput binary protocol and streamed to zero-cache through the replication connection. zero-cache can't tell the difference.
Change notifications are real-time via pg_notify — triggers fire a notification on every write, waking the replication handler immediately. Polling is only a fallback for edge cases (e.g., bulk restores that bypass triggers). Fallback polling is adaptive: 20ms when catching up, 500ms when idle. Batch size is 2000 changes per poll. Consumed changes are purged every 10 cycles to prevent the _zero_changes table from growing unbounded.
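The wakeup path looks roughly like this (a sketch assuming the `db` PGlite instance from the programmatic API; `drainChanges` and the channel name are illustrative):

```ts
// hypothetical stand-in for the handler's read-and-encode loop
async function drainChanges(): Promise<void> {
  // SELECT new rows from _zero_changes, encode as pgoutput, stream to zero-cache
}

// triggers call pg_notify on every write; one subscription wakes the handler
await db.listen('orez_changes', () => void drainChanges())
```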
Shard schemas (e.g., `chat_0`) are re-scanned periodically and change tracking is installed for shard `clients` tables.
The replication handler also tracks shard schema tables so that `.server` promises on zero mutations resolve correctly.
The whole point of oreZ is that bunx orez works everywhere with no native compilation step. Postgres runs in-process as WASM via PGlite. zero-cache also needs SQLite, and @rocicorp/zero-sqlite3 ships as a compiled C addon — so orez ships bedrock-sqlite, SQLite's bedrock branch recompiled to WASM with BEGIN CONCURRENT and WAL2 support. At startup, oreZ patches @rocicorp/zero-sqlite3 to load bedrock-sqlite instead of the native C addon. Both databases run as WASM — nothing to compile, nothing platform-specific. Just bun install and go.
The CLI detects system memory on startup and re-spawns the process with --max-old-space-size set to ~50% of available RAM (minimum 4GB). PGlite WASM needs substantial heap for large datasets and restores — this prevents cryptic V8 OOM crashes without requiring manual tuning.
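The mechanism is roughly the following (a sketch; the respawn guard variable is illustrative):

```ts
import { totalmem } from 'node:os'
import { spawnSync } from 'node:child_process'

// ~50% of system RAM, minimum 4GB
const heapMb = Math.max(Math.floor(totalmem() / 1024 / 1024 / 2), 4096)

if (!process.env.OREZ_RESPAWNED) {
  const { status } = spawnSync(
    process.execPath,
    [`--max-old-space-size=${heapMb}`, ...process.argv.slice(1)],
    { stdio: 'inherit', env: { ...process.env, OREZ_RESPAWNED: '1' } },
  )
  process.exit(status ?? 0)
}
```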
Your entire environment is forwarded to the zero-cache child process. This means any ZERO_* env vars you set are passed through automatically.
oreZ provides sensible defaults for a few variables:
| Variable | Default | Overridable |
|---|---|---|
| `NODE_ENV` | `development` | yes |
| `ZERO_LOG_LEVEL` | from `--log-level` | yes |
| `ZERO_NUM_SYNC_WORKERS` | `1` | yes |
| `ZERO_ENABLE_QUERY_PLANNER` | `false` | yes |
| `ZERO_UPSTREAM_DB` | (managed by oreZ) | no |
| `ZERO_CVR_DB` | (managed by oreZ) | no |
| `ZERO_CHANGE_DB` | (managed by oreZ) | no |
| `ZERO_REPLICA_FILE` | (managed by oreZ) | no |
| `ZERO_PORT` | (managed by oreZ) | no |
The `--log-level` flag controls both zero-cache (`ZERO_LOG_LEVEL`) and oreZ console output. The default is `warn` to keep output quiet; set it to `info` or `debug` for troubleshooting. `ZERO_ENABLE_QUERY_PLANNER` is disabled by default because it can freeze with WASM SQLite.
The layering is: oreZ defaults → your env → oreZ-managed connection vars. So setting ZERO_LOG_LEVEL=debug in your shell overrides the --log-level default, but you can't override the database connection strings (oreZ needs to point zero-cache at its own proxy).
Common vars you might want to set:
```
ZERO_MUTATE_URL=http://localhost:3000/api/zero/push
ZERO_QUERY_URL=http://localhost:3000/api/zero/pull
```

The proxy intercepts several things to convince zero-cache it's talking to a real PostgreSQL server with logical replication enabled:
- `IDENTIFY_SYSTEM` returns a fake system ID and timeline
- `CREATE_REPLICATION_SLOT` persists slot info in a local table and returns a valid LSN
- `START_REPLICATION` enters streaming mode, encoding changes as pgoutput binary messages
- `version()` returns a standard PostgreSQL 16.4 version string (PGlite's Emscripten string breaks `pg_restore` and other tools)
- `current_setting('wal_level')` always returns `logical`
- `pg_replication_slots` queries are redirected to a local tracking table
- `SET TRANSACTION SNAPSHOT` is silently accepted (PGlite doesn't support imported snapshots)
- `ALTER ROLE ... REPLICATION` returns success
- `READ ONLY` is stripped from transaction starts (PGlite is single-session)
- `ISOLATION LEVEL` is stripped from all queries (meaningless with a single-session database)
- `SET TRANSACTION`/`SET SESSION` return synthetic success without hitting PGlite
The pgoutput encoder produces spec-compliant binary messages: Begin, Relation, Insert, Update, Delete, Commit, and Keepalive. Column values are encoded as text (typeOid 25) except booleans which use typeOid 16 with t/f encoding, matching PostgreSQL's native boolean wire format.
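For instance, the Begin message is a fixed 21-byte layout. A sketch that follows the protocol documentation (not oreZ's actual encoder):

```ts
// pgoutput Begin: 'B' + final LSN (u64) + commit timestamp (i64, microseconds
// since 2000-01-01) + xid (u32), all big-endian
function encodeBegin(finalLsn: bigint, commitMicros: bigint, xid: number): Buffer {
  const buf = Buffer.alloc(21)
  buf.write('B', 0)
  buf.writeBigUInt64BE(finalLsn, 1)
  buf.writeBigInt64BE(commitMicros, 9)
  buf.writeUInt32BE(xid, 17)
  return buf
}
```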
A lot of things don't "just work" when you replace Postgres with PGlite and native SQLite with WASM. Here's what oreZ does to make it seamless.
The proxy runs on raw net.Socket and uses pg-gateway for connection/auth protocol handling, with oreZ intercepting and rewriting messages where needed (logical replication commands, query rewrites, replication slot views).
PGlite is single-session — all proxy connections share one session. If pg_restore sets search_path = '', every subsequent connection inherits that. On disconnect, oreZ resets search_path, statement_timeout, lock_timeout, and idle_in_transaction_session_timeout, and rolls back any open transaction. Without this, the next connection gets a corrupted session.
The mutex uses setImmediate/setTimeout between releases instead of resolving the next waiter as a microtask. Without this, releasing the mutex triggers a chain of synchronous PGlite executions that blocks all socket I/O — connections stall because reads and writes can't be processed between queries.
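The shape of the fix, as a simplified sketch (not oreZ's exact mutex):

```ts
class Mutex {
  private locked = false
  private waiters: Array<() => void> = []

  async acquire(): Promise<() => void> {
    if (this.locked) {
      await new Promise<void>((resolve) => this.waiters.push(resolve))
    }
    this.locked = true
    return () => this.release()
  }

  private release(): void {
    const next = this.waiters.shift()
    if (next) {
      // hand off on a macrotask so pending socket reads/writes run between
      // queries; a microtask handoff would chain PGlite calls synchronously
      setImmediate(next)
    } else {
      this.locked = false
    }
  }
}
```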
When execProtocolRaw throws (PGlite internal error), the proxy sends a proper ErrorResponse + ReadyForQuery over the wire instead of destroying the socket. The client sees an error message and continues working.
zero-cache imports @rocicorp/zero-sqlite3 (a native C addon) via ESM import. oreZ uses Node's module.register() API with --import to intercept resolution — ESM resolve and load hooks redirect @rocicorp/zero-sqlite3 to bedrock-sqlite WASM at runtime. The hook templates live in src/shim/ and are written to tmpdir with the resolved bedrock-sqlite path substituted.
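A stripped-down version of the resolve hook (using Node's `module.register` API; the real shim substitutes the resolved bedrock-sqlite path):

```ts
// hooks.mjs (sketch): redirect the native addon specifier to the WASM build
export async function resolve(specifier: string, context: any, nextResolve: any) {
  if (specifier === '@rocicorp/zero-sqlite3') {
    return nextResolve('bedrock-sqlite', context)
  }
  return nextResolve(specifier, context)
}

// registered before zero-cache spawns, e.g. in a file loaded via --import:
//   import { register } from 'node:module'
//   register('./hooks.mjs', import.meta.url)
```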
The shim also polyfills the better-sqlite3 API surface zero-cache expects: unsafeMode(), defaultSafeIntegers(), serialize(), backup(), and scanStatus/scanStatusV2/scanStatusReset on Statement prototypes (zero-cache's query planner calls these for scan statistics, which WASM doesn't support).
ZERO_ENABLE_QUERY_PLANNER is set to false because it relies on SQLite scan statistics that trigger infinite loops in WASM sqlite (and have caused freezes with native sqlite too). The planner is an optimization, not required for correctness.
Replication RELATION messages carry correct PostgreSQL type OIDs (not just text/25) so zero-cache selects the right value parsers. For example, timestamp with time zone gets OID 1184, which triggers timestampToFpMillis conversion. Without this, zero-cache misinterprets column types.
Columns with types zero-cache can't handle (tsvector, tsquery, USER-DEFINED) are filtered out of replication messages. Without exclusion, zero-cache crashes on the unknown types. The columns are removed from both new and old row data.
If ZERO_APP_PUBLICATIONS is set, only tables in that publication get change-tracking triggers. This prevents streaming changes for private tables (user sessions, accounts) that zero-cache doesn't know about. Stale triggers from previous installs (before the publication existed) are cleaned up automatically.
oreZ deletes the SQLite replica file (zero-replica.db) and related files (-wal, -shm, -wal2) on startup/reset so zero-cache performs a fresh sync.
Existing installs that used a single PGlite instance (pgdata/) are auto-migrated to the multi-instance layout (pgdata-postgres/) on first run. No manual intervention needed.
The restore parser tracks $$ and $tag$ blocks to correctly identify statement boundaries in function bodies. Without this, semicolons inside CREATE FUNCTION bodies are misinterpreted as statement terminators.
After restore, oreZ drops triggers whose backing functions don't exist. This happens when a filtered pg_dump includes triggers on public-schema tables that reference functions from excluded schemas. The triggers survive TOC filtering because they're associated with public tables, but the functions they reference weren't included.
pg_restore tries connecting via wire protocol first (for restoring into a running oreZ instance). If the connection fails, it falls back to direct PGlite access. But if the connection succeeds and the restore itself fails, it does not fall back — the error is real and should be reported, not masked by a retry.
The proxy uses callback-based socket.on('data') events instead of async iterators for the message loop. Async iterators have unreliable behavior across runtimes (Node.js vs Bun). The callback approach with manual pause/resume works everywhere.
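The pattern, in outline (`handleMessage` is a hypothetical stand-in for the proxy's wire-protocol logic):

```ts
import net from 'node:net'

async function handleMessage(chunk: Buffer): Promise<void> {
  // parse the message, route it to the right PGlite instance, write the reply
}

const server = net.createServer((socket) => {
  socket.on('data', (chunk) => {
    socket.pause() // no more reads until this message is fully processed
    handleMessage(chunk)
      .catch((err) => socket.destroy(err))
      .finally(() => socket.resume())
  })
})
server.listen(6434)
```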
Tests cover the full stack from binary encoding to TCP-level integration, including pg_restore end-to-end tests and bedrock-sqlite WASM engine tests:
```
bun run test                        # orez tests
cd sqlite-wasm && bunx vitest run   # bedrock-sqlite tests
```
The oreZ test suite includes a zero-cache compatibility layer that decodes pgoutput messages into the same typed format that zero-cache's PgoutputParser produces, validating end-to-end compatibility.
The bedrock-sqlite tests cover Database/Statement API, transactions, WAL/WAL2 modes, BEGIN CONCURRENT, FTS5, JSON functions, custom functions, aggregates, bigint handling, and file persistence.
This is a development tool. It is not suitable for production use.
- PGlite is single-session per instance. All queries to the same database are serialized through a mutex. Cross-database queries are independent (each database has its own PGlite instance and mutex). Fine for development but would bottleneck under real load.
- Triggers add overhead to every write. Again, fine for development.
- PGlite stores data on the local filesystem. No replication, no high availability. Use `orez pg_dump`/`orez pg_restore` for backups.
```
src/
  cli-entry.ts           thin wrapper for auto heap sizing
  cli.ts                 cli with citty
  index.ts               main entry, orchestrates startup + sqlite wasm patching
  config.ts              configuration with defaults
  log.ts                 colored log prefixes
  mutex.ts               simple mutex for serializing pglite access
  port.ts                auto port finding
  pg-proxy.ts            raw tcp proxy implementing postgresql wire protocol
  pglite-manager.ts      multi-instance pglite creation and migration runner
  s3-local.ts            local s3-compatible server (orez/s3)
  vite-plugin.ts         vite dev server plugin (orez/vite)
  replication/
    handler.ts           replication protocol state machine + adaptive polling
    pgoutput-encoder.ts  binary pgoutput message encoder
    change-tracker.ts    trigger installation, shard tracking, change purging
  integration/
    integration.test.ts  end-to-end zero-cache sync test
    restore.test.ts      pg_dump/restore integration test
sqlite-wasm/
  Makefile               emscripten build for bedrock-sqlite wasm binary
  bedrock-sqlite.d.ts    typescript declarations
  native/
    api.js               better-sqlite3 compatible database/statement API
    vfs.c                custom VFS with SHM support for WAL/WAL2
    vfs.js               javascript VFS bridge
  test/
    database.test.ts     wasm sqlite engine tests
```
Dump and restore your local PGlite database using WASM-compiled pg_dump — no native Postgres install needed.
```
bunx orez pg_dump > backup.sql
bunx orez pg_dump --output backup.sql
bunx orez pg_restore backup.sql
bunx orez pg_restore backup.sql --clean
```
pg_dump options:

```
--data-dir=.orez  data directory
-o, --output      output file path (default: stdout)
```

pg_restore options:

```
--data-dir=.orez  data directory
--clean           drop and recreate public schema before restoring
```
pg_restore also supports connecting to a running oreZ instance via wire protocol — just pass `--pg-port`:
```
bunx orez pg_restore backup.sql --pg-port 6434
bunx orez pg_restore backup.sql --pg-port 6434 --pg-user user --pg-password password
bunx orez pg_restore backup.sql --direct     # force direct PGlite access, skip wire protocol
```
Restore streams the dump file line-by-line so it can handle large dumps without loading everything into memory. SQL is parsed using pgsql-parser (the real PostgreSQL C parser compiled to WASM) for accurate statement classification and rewriting.
- COPY FROM stdin → INSERT: PGlite WASM doesn't support the COPY protocol, so COPY blocks are converted to batched multi-row INSERTs (50 rows per statement, flushed at 1MB); see the sketch after this list
- Unsupported extensions: `pg_stat_statements`, `pg_buffercache`, `pg_cron`, etc. — CREATE, DROP, and COMMENT ON EXTENSION statements are skipped
- Idempotent DDL: `CREATE SCHEMA` → `IF NOT EXISTS`, `CREATE FUNCTION`/`VIEW` → `OR REPLACE`
- Oversized rows: rows larger than 16MB are skipped with a warning (PGlite WASM crashes around 24MB per value)
- Missing table references: DDL errors from filtered dumps (e.g. ALTER TABLE on excluded tables) log a warning and continue
- Transaction batching: data statements are grouped 200 per transaction with CHECKPOINT every 3 batches to manage WASM memory
- PostgreSQL 18+ artifacts: `SET transaction_timeout` is silently skipped
- psql meta-commands: `\restrict` and similar are silently skipped
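A sketch of the COPY conversion mentioned in the first bullet (simplified: it handles `\N` but ignores other COPY escape sequences, and quotes every field as a text literal):

```ts
// COPY text format is tab-separated with \N for NULL
function copyLineToTuple(line: string): string {
  const fields = line.split('\t').map((f) =>
    f === '\\N' ? 'NULL' : `'${f.replace(/'/g, "''")}'`,
  )
  return `(${fields.join(', ')})`
}

// batch 50 rows per INSERT, matching the documented batching
function buildInserts(table: string, lines: string[], batch = 50): string[] {
  const out: string[] = []
  for (let i = 0; i < lines.length; i += batch) {
    const tuples = lines.slice(i, i + batch).map(copyLineToTuple)
    out.push(`INSERT INTO ${table} VALUES ${tuples.join(', ')}`)
  }
  return out
}
```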
This means you can take a pg_dump from a production Postgres database and restore it directly into oreZ — incompatible statements are handled automatically.
When oreZ is not running, pg_restore opens PGlite directly. When oreZ is running, pass --pg-port to restore through the wire protocol. Standard Postgres tools (pg_dump, pg_restore, psql) also work against the running proxy since oreZ presents a standard PostgreSQL 16.4 version string over the wire.
Since we often use this stack with a file-uploading service like MinIO, which also requires Docker, oreZ includes a tiny S3-compatible endpoint too: `bunx orez --s3`, or standalone as `bunx orez s3`.
```ts
import { startS3Local } from 'orez/s3'

const server = await startS3Local({
  port: 9200,
  dataDir: '.orez',
})
```

Handles GET, PUT, DELETE, HEAD with CORS. Files stored on disk. No multipart, no ACLs, no versioning.
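Any S3 client can then point at it. A sketch with `@aws-sdk/client-s3` (path-style addressing and placeholder credentials are assumptions about the local server):

```ts
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3'

const s3 = new S3Client({
  endpoint: 'http://localhost:9200',
  region: 'us-east-1',
  forcePathStyle: true, // no virtual-host bucket DNS locally
  credentials: { accessKeyId: 'local', secretAccessKey: 'local' },
})

await s3.send(new PutObjectCommand({ Bucket: 'uploads', Key: 'hello.txt', Body: 'hi' }))
```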
MIT