feat: Improve File Sync DX — hardening, factory, Convex adapter, docs (#57)
✅ Deploy Preview for agent-native-fw ready!
Builder has reviewed your changes and found 6 potential issues.
Review Details
PR #57 — File Sync DX Improvement: Code Review
This PR adds substantial security hardening to the file sync system: path traversal protection (branded SafePath type), a hardcoded sync denylist, identifier validation, Supabase compound filter for cross-tenant isolation, content-hash dedup replacing TTL-only echo suppression, parallel startup sync with p-limit, LCS backtrack fix, retry queue with dead-letter logging, SSE crash protection, and a createFileSync() factory for env-driven setup. The architectural approach is sound and the test coverage (168 tests, 37 new) is thorough.
Risk Assessment: 🔴 HIGH — Fixes a confirmed cross-tenant data leak, touches security-sensitive I/O paths, and changes API contracts.
Critical Issues Found
🔴 Supabase compound filter breaks Realtime subscriptions — The filter was changed from app=eq.${appId} (single clause + client-side owner_id check) to app=eq.${appId}&owner_id=eq.${ownerId}. Supabase Realtime postgres_changes subscriptions only support a single equality expression — the old code even had a comment documenting this: "Realtime only supports one filter." This either causes CHANNEL_ERROR / no events at all, or silently falls back to filtering only by app, reopening the cross-tenant leak this PR is trying to close. Found by 5/5 agents.
🔴 p-limit missing from package.json dependencies — Imported in file-sync.ts and added to pnpm-lock.yaml, but not added to packages/core/package.json. Works locally via pnpm hoisting; published consumers of @agent-native/core will get a module-not-found crash when file sync is enabled. Found by 3/5 agents.
🔴 createAppServer made async — callers not updated — The default template's createAppServer now returns Promise<Express>. The production bootstrap (node-build.ts) still calls createProductionServer(createAppServer()), passing a Promise where an Express instance is expected. app.use() is called on the Promise immediately, crashing both the production build and dev server. Found by 3/5 agents.
🔴 Scoped npm package names crash createFileSync before error handling — readPackageName() returns the raw package.json name. Scoped names like @company/my-app contain @ and /, which fail validateIdentifier()'s regex. This throw happens before the try/catch in createFileSync, so it propagates as an unhandled rejection, crashing the server on startup instead of returning { status: "error" }. Found by 2/5 agents.
🔴 stop() can lose in-flight file pushes — stop() flushes retryQueue but never awaits pushInFlight promises. A push that started via adapter.set() but hasn't completed yet is not in retryQueue. After stop() disposes the adapter, the in-flight .set() rejects, the .catch() tries to enqueue a retry — but shutdown has already completed. The last local edit before SIGTERM can be silently dropped.
🟡 emitSyncEvent drops rich conflict fields (regression) — Previously this.syncEvents.emit("sync", event) forwarded the full SyncEvent. Now emitSyncEvent strips it to { source, type, path }, losing localSnippet/remoteSnippet (needed for LLM-assisted resolution) and strategy. The server template writes application-state/sync-conflict.json from this event — the agent loses the context it needs. Also, the path conditional (event.type === "conflict-saved" ? event.path : event.path) is dead code (both branches identical).
🟡 createFileSync reports "ready" when init fails — initFileSync() now catches its own startup errors and doesn't rethrow. createFileSync() only detects failures when initFileSync() throws, so a transient backend outage during startup returns { status: "ready" } with no watchers or listeners installed. Found by 2/5 agents.
🟡 fs.rm fire-and-forget on remote "removed" events — Empty callback () => {} silently swallows I/O errors. State maps (lastSyncedHash, mergeBaseCache, retryQueue) are cleared synchronously before the deletion completes. On failure, the engine believes the file is gone while it still exists, preventing future conflict detection.
🟡 useFileSyncStatus opens a duplicate SSE connection and misses batched events — Creates a new EventSource("/api/events") independent of the existing one from useFileWatcher, consuming an extra browser connection slot. Also: the SSE handler sends startup sync events in batches ({ type: "batch", events: [...] }), but the hook only checks data.source === "sync" — batch messages are silently ignored, leaving conflict counts stale after startup.
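The Supabase filter finding above implies a specific shape for the fix: keep the Realtime filter to the single supported equality clause and re-apply the ownership check client-side. A minimal sketch (row shape and handler name are illustrative, not the PR's actual types):

```typescript
// Illustrative sketch: Realtime postgres_changes supports only one equality
// filter, so scope the channel to `app=eq.<appId>` and drop cross-tenant
// rows on the client before they reach the sync engine.
interface SyncRow {
  app: string;
  owner_id: string;
  path: string;
}

function makeRowHandler(
  appId: string,
  ownerId: string,
  onRow: (row: SyncRow) => void,
): (row: SyncRow) => void {
  return (row) => {
    if (row.app !== appId) return; // defense in depth; the filter already scopes this
    if (row.owner_id !== ownerId) return; // client-side tenant isolation
    onRow(row);
  };
}
```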
Found by running 5 parallel code-review agents with randomized file ordering to combat position bias.
Code review by Builder.io
Builder has reviewed your changes and found 6 potential issues.
Review Details
PR #57 — Update Review (Convex Adapter + Docs)
This update adds a Convex backend adapter, wires it into the createFileSync factory, and adds a new /docs/file-sync documentation page. The Convex adapter uses a content-hash diffing approach over onUpdate query subscriptions. Risk level remains 🔴 HIGH — new adapter code handles file system writes and the docs page is the primary user-facing guide for configuring sync.
The previously reported p-limit missing dependency was fixed ✅. The remaining 5 open issues (Supabase compound filter, async createAppServer, scoped package names, stop() loses pushInFlight, emitSyncEvent drops fields) are still unresolved.
New Findings
🔴 /docs/file-sync route missing from routeTree.gen.ts — TanStack Router requires every route to be registered in routeTree.gen.ts. The file routes/docs/file-sync.tsx was added but routeTree.gen.ts was never regenerated (tsr generate). The file currently has no DocsFileSyncRoute entry. Every user clicking the "File Sync" link in the docs sidebar hits a 404.
🔴 Convex get()/delete() pass only {id} but documented schema requires {id, app, ownerId} — The docs in file-sync.tsx show files:get and files:remove as Convex functions with required app and ownerId args. The adapter only passes { id }. Any user following the documented setup gets Convex argument-validation errors on file deletions and orphan cleanup (the delete() path runs during every startup sync). Found by 2/5 agents.
🔴 useFileSyncStatus silently drops all startup sync events — SSE handler batches events during the startup burst as { type: "batch", events: [...] }. The hook only checks data.source === "sync" at the top level; batch wrappers don't have source, so all events during the most critical window are discarded. Found by 2/5 agents.
🟡 Convex processingChain fires onChange after unsubscribe() — The serialized promise chain is not guarded by an isUnsubscribed flag. Queued updates continue to call onChange (writing files to disk) after FileSync.stop() has disposed the adapter. Found by 3/5 agents.
🟡 SSEHandlerOptions.extraEmitters typed as EventEmitter but receives TypedEventEmitter — TypedEventEmitter doesn't extend Node's EventEmitter (it only implements on/off/emit/removeAllListeners). Assigning syncResult.sseEmitter to extraEmitters in the template server is a TypeScript compile error.
🟡 Status endpoint hardcodes connected: true — FileSync.hasError is private with no public getter, so the /api/file-sync/status endpoint always returns connected: true when sync is enabled, even after adapter errors. The correct value is already written to .sync-status.json by writeSyncStatus() but never exposed over HTTP.
🟡 sync-conflict.json never deleted after conflict resolution — The template server writes application-state/sync-conflict.json on conflict-needs-llm but never removes it on conflict-resolved. In the agent-native architecture, file existence = state is active, so resolved conflicts remain visible to the agent indefinitely.
🟡 useFileSyncStatus sets connected: data.enabled instead of data.connected — Initial fetch maps the wrong field; when sync is enabled but disconnected, the hook always shows connected: true. Found by 3/5 agents.
Found by running 5 parallel code-review agents with randomized file ordering.
Code review by Builder.io
```ts
import DocsLayout from "../../components/DocsLayout";
import CodeBlock from "../../components/CodeBlock";

export const Route = createFileRoute("/docs/file-sync")({
```
🔴 routeTree.gen.ts not regenerated — /docs/file-sync route is a 404
The file-sync.tsx route file was added but routeTree.gen.ts was never regenerated (tsr generate). The file currently has no DocsFileSyncRoute entry, so TanStack Router won't serve this page. Run pnpm tsr generate and commit the updated routeTree.gen.ts.
```ts
    const row = (await this.client.query("files:get", {
      id,
    })) as ConvexFileRow | null;

    if (!row) return null;
    return { id: row.id, data: toRecord(row) };
  }

  async set(id: string, record: FileWritePayload): Promise<void> {
    await this.client.mutation("files:upsert", {
      id,
      ...(record.path !== undefined && { path: record.path }),
      ...(record.content !== undefined && { content: record.content }),
      ...(record.app !== undefined && { app: record.app }),
      ...(record.ownerId !== undefined && { ownerId: record.ownerId }),
      ...(record.lastUpdated !== undefined && {
        lastUpdated: record.lastUpdated,
      }),
      ...(record.createdAt !== undefined && { createdAt: record.createdAt }),
    });
  }

  async delete(id: string): Promise<void> {
    await this.client.mutation("files:remove", { id });
  }

  subscribe(
    appId: string,
    ownerId: string,
    onChange: (changes: FileChange[]) => void,
```
🔴 Convex get()/delete() pass only {id} but documented schema requires {id, app, ownerId}
The docs show files:get and files:remove with required app and ownerId args, but the adapter only calls client.query("files:get", { id }) and client.mutation("files:remove", { id }). Users following the documented Convex schema will get argument-validation errors on every file delete and orphan-cleanup call. Either simplify the documented schema to accept only id, or store appId/ownerId on the adapter and pass them through.
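The second option can be sketched as follows. The client interface below is a minimal stand-in for the Convex client, not its real API; only the tenant-argument plumbing is the point:

```typescript
// Sketch: store tenant identifiers on the adapter and pass them to every
// call, matching the documented files:remove signature. MinimalConvexClient
// is an assumption for illustration, not convex/browser's actual type.
interface MinimalConvexClient {
  mutation(name: string, args: Record<string, unknown>): Promise<unknown>;
}

class ConvexAdapterSketch {
  constructor(
    private client: MinimalConvexClient,
    private app: string,
    private ownerId: string,
  ) {}

  async delete(id: string): Promise<void> {
    // include app/ownerId so Convex argument validation passes
    await this.client.mutation("files:remove", {
      id,
      app: this.app,
      ownerId: this.ownerId,
    });
  }
}
```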
```ts
const es = new EventSource("/api/events");
const handler = (event: MessageEvent) => {
  try {
```
🔴 Hook drops all sync events during startup burst — batch SSE envelope not unwrapped
The SSE handler sends startup sync events as { type: "batch", events: [...] }. The handler checks data.source === "sync" but batch wrappers have no source field, so every batched event is silently discarded. Unwrap batch messages: const evts = data.type === "batch" ? data.events : [data]; for (const e of evts) { if (e.source === "sync") { ... } }.
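Sketched as a self-contained helper (the message and field names are assumptions about the SSE payload shape described above):

```typescript
// Sketch: unwrap the batch envelope before filtering by source, so events
// batched during the startup burst are not silently discarded.
interface SseMsg {
  type: string;
  source?: string;
  events?: SseMsg[];
}

function extractSyncEvents(data: SseMsg): SseMsg[] {
  const events = data.type === "batch" && data.events ? data.events : [data];
  return events.filter((e) => e.source === "sync");
}
```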
```ts
    // Serialize callback processing to prevent race conditions
    let processingChain = Promise.resolve();

    const unsubscribe = this.client.onUpdate(
      "files:list",
      { app: appId, ownerId },
      (result) => {
        processingChain = processingChain.then(() => {
```
🟡 processingChain fires onChange/onError after unsubscribe — can write files post-shutdown
The returned unsubscribe() detaches the Convex listener but does not cancel the queued promise chain. Pending chain entries will still call onChange (triggering disk writes) after FileSync.stop() disposes the adapter. Add let stopped = false; and flip it in unsubscribe(), then short-circuit each .then() with if (stopped) return;.
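The guard can be sketched in isolation (names are illustrative, not the adapter's actual internals): a serialized promise chain that stops delivering once the flag flips, so queued updates cannot write files after shutdown.

```typescript
// Sketch: serialized delivery with a stopped flag, so entries already
// queued on the chain are short-circuited after unsubscribe().
function createSerializedSubscription<T>(onChange: (value: T) => void): {
  push: (value: T) => void;
  unsubscribe: () => void;
  done: () => Promise<void>;
} {
  let chain = Promise.resolve();
  let stopped = false;
  return {
    push(value) {
      chain = chain.then(() => {
        if (stopped) return; // short-circuit queued work after shutdown
        onChange(value);
      });
    },
    unsubscribe() {
      stopped = true;
    },
    done: () => chain, // exposed here only so the sketch is easy to exercise
  };
}
```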
```diff
@@ -28,6 +29,8 @@ export function createFileWatcher(
 export interface SSEHandlerOptions {
```
🟡 SSEHandlerOptions.extraEmitters typed as EventEmitter but receives TypedEventEmitter — TypeScript compile error
extraEmitters uses EventEmitter from Node's "events" module, but createFileSync() returns a TypedEventEmitter<FileSyncEvents> which doesn't extend it. The template server assignment is a TypeScript compile error. Fix: replace EventEmitter with a minimal interface { on(e: string, l: (...args: unknown[]) => void): this; off(...): this }.
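A sketch of that structural interface (TypedEventEmitter's exact surface is assumed from the finding; the point is that Node's EventEmitter satisfies it without inheritance):

```typescript
import { EventEmitter } from "node:events";

// Sketch: a minimal on/off surface that both Node's EventEmitter and a
// hand-rolled TypedEventEmitter can satisfy structurally.
interface MinimalEmitter {
  on(event: string, listener: (...args: unknown[]) => void): this;
  off(event: string, listener: (...args: unknown[]) => void): this;
}

interface SSEHandlerOptionsSketch {
  extraEmitters?: MinimalEmitter[];
}

// A Node EventEmitter is structurally assignable, so existing callers keep working:
const opts: SSEHandlerOptionsSketch = { extraEmitters: [new EventEmitter()] };
```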
```ts
      return res.json({ enabled: false, conflicts: 0 });
    }
    res.json({
      enabled: true,
      connected: true,
```
🟡 Status endpoint hardcodes connected: true — FileSync.hasError is private and never surfaced
FileSync.hasError is private with no public getter, so the /api/file-sync/status endpoint always returns connected: true when sync is ready. After adapter errors, clients will show a false healthy state. Add a public getter get isConnected() { return !this.hasError; } to FileSync and use it in the status endpoint.
Builder has reviewed your changes and found 6 potential issues.
Review Details
PR #57 — Update Review (Fixes Applied + New Issues)
This update fixed createAppServer/node-build.ts async handling (both the production bootstrap and Vite plugin now properly await the Promise), and the scoped npm package name crash (@scope/name is now stripped before validateIdentifier). Both of those previously reported issues are resolved ✅.
Risk: 🔴 HIGH — security-sensitive filesystem writes, data mutation paths, graceful shutdown data loss.
Still-open issues from prior reviews: Supabase compound filter, stop() not awaiting pushInFlight, emitSyncEvent drops fields, routeTree.gen.ts not regenerated, Convex get()/delete() args mismatch, useFileSyncStatus drops batched SSE, Convex processingChain after unsubscribe, SSEHandlerOptions TypeScript type mismatch, status endpoint hardcodes connected: true.
New Findings
🔴 stop() calls abort() before flushRetryQueue() — shutdown flush is always a no-op — this.abortController.abort() fires on line ~190, before the "final flush attempt" call to flushRetryQueue(). But flushRetryQueue immediately breaks when signal.aborted is true. Every pending retry write is guaranteed to be logged as lost on shutdown, even when the adapter is healthy. Found by 3/5 agents.
🔴 sync-conflict.json never deleted after conflict resolves — stale agent state — Template server writes application-state/sync-conflict.json on conflict-needs-llm but has no handler for conflict-resolved. Per agent-native architecture, file existence = active state. The agent will permanently see the conflict as unresolved. Found by 5/5 agents.
🔴 Manual conflict resolution silently ignored — .conflict file deletions never detected — When a user deletes a .conflict sidecar file to manually resolve a conflict, startFileWatcher ignores this because .conflict is in the sync denylist. conflictPaths is never cleared, conflict-resolved is never emitted, and the UI permanently shows an unresolved conflict. Found by 2/5 agents.
🔴 assertSafePath crashes for projects in symlinked directories (macOS /tmp) — resolvedRoot = path.resolve(root) doesn't resolve symlinks, but realParent = fs.realpathSync(parentDir) does. On macOS, /tmp → /private/tmp. The comparison always fails, throwing "Symlink escape blocked" for any legitimate file write. The test files themselves use fs.realpathSync() to work around this bug, confirming it's real.
🔴 Startup sync populates expectedWrites before the watcher is running — first edit post-boot silently dropped — initStartupSync() calls writeSyncedFile() which adds all pulled files to expectedWrites. But the file watcher starts afterwards with ignoreInitial: true, so these entries are never consumed. The first real local edit to every file pulled at startup hits wasSyncPulled(), returns true, and is silently skipped — never pushed to the backend.
🟡 useFileSyncStatus sets connected: data.enabled instead of data.connected — When sync is enabled but the adapter has errored (hasError=true), the hook always shows connected: true. Found by 5/5 agents (was dropped by cap in previous round).
🟡 Convex subscribe() emits all existing files as "added" on first call — previousHashes starts empty, so the first onUpdate callback treats every file as new. Since initStartupSync() just ran, FileSync redundantly reads and hashes the entire file tree again. Found by 2/5 agents.
Found by running 5 parallel code-review agents with randomized file ordering.
Code review by Builder.io
```ts
  async stop(): Promise<void> {
    this.stopped = true;
    this.abortController.abort();

    // Clear timers
    if (this.retryTimer) {
      clearInterval(this.retryTimer);
      this.retryTimer = null;
```
🔴 stop() aborts signal before flushRetryQueue — shutdown flush is always a guaranteed no-op
this.abortController.abort() is called on line ~190, before flushRetryQueue() on line ~195. But flushRetryQueue immediately breaks when signal.aborted is true, so every entry in the retry queue is written to dead letter even when the adapter is healthy. Move abort() to after the final flush.
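A simplified sketch of the corrected ordering (field and method names mirror the finding; the queue and write path are reduced to the essentials):

```typescript
// Sketch: flush the retry queue while the abort signal is still live,
// THEN abort. flushRetryQueue bails out as soon as the signal is aborted,
// matching the behavior described above.
class MiniSync {
  private retryQueue = new Map<string, string>();
  private abortController = new AbortController();
  readonly flushed: string[] = [];

  enqueue(id: string, content: string): void {
    this.retryQueue.set(id, content);
  }

  get pending(): number {
    return this.retryQueue.size;
  }

  private async flushRetryQueue(): Promise<void> {
    for (const [id] of this.retryQueue) {
      if (this.abortController.signal.aborted) break; // rest goes to dead letter
      // (real code would await adapter.set(id, content) here)
      this.flushed.push(id);
      this.retryQueue.delete(id);
    }
  }

  async stop(): Promise<void> {
    await this.flushRetryQueue(); // final flush FIRST, while the signal is live
    this.abortController.abort(); // THEN abort in-flight work and timers
  }
}
```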
```ts
  syncResult.fileSync.syncEvents.on("sync", (event) => {
    if (event.type === "conflict-needs-llm") {
      try {
        fs.mkdirSync("application-state", { recursive: true });
        fs.writeFileSync(
          "application-state/sync-conflict.json",
          JSON.stringify(event, null, 2),
        );
      } catch {
        /* best-effort */
```
🔴 sync-conflict.json never deleted on conflict-resolved — permanent stale agent state
The server writes application-state/sync-conflict.json on conflict-needs-llm but never deletes it when conflict-resolved fires. Per agent-native architecture, file existence = active state, so the agent will permanently see the conflict as unresolved. Add an else if (event.type === "conflict-resolved") handler that calls fs.rmSync("application-state/sync-conflict.json", { force: true }).
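The suggested handler pair, sketched with the state directory as a parameter for clarity (the event shape is assumed from the snippet above):

```typescript
import fs from "node:fs";
import path from "node:path";

// Sketch: write the conflict marker on conflict-needs-llm, remove it on
// conflict-resolved. File existence = active state.
function onSyncEvent(stateDir: string, event: { type: string }): void {
  const conflictFile = path.join(stateDir, "sync-conflict.json");
  if (event.type === "conflict-needs-llm") {
    fs.mkdirSync(stateDir, { recursive: true });
    fs.writeFileSync(conflictFile, JSON.stringify(event, null, 2));
  } else if (event.type === "conflict-resolved") {
    // clear the marker so the agent stops seeing a resolved conflict
    fs.rmSync(conflictFile, { force: true });
  }
}
```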
```ts
        content: result.merged!,
        app: this.options.appId,
        ownerId,
        lastUpdated: now,
```
🔴 Manual .conflict file deletion never detected — conflictPaths stuck forever
.conflict files are in the sync denylist, so handleDelete in startFileWatcher ignores sidecar deletions. When a user manually resolves a conflict by deleting the .conflict file, conflictPaths is never cleared and conflict-resolved is never emitted. The UI permanently shows an unresolved conflict. Watch for .conflict deletions specifically before the shouldSyncFile guard.
```ts
 * Assert that a file path is not a symlink (prevents write-through-symlink attacks).
 */
export function assertNotSymlink(filePath: string): void {
  try {
    const stat = fs.lstatSync(filePath);
    if (stat.isSymbolicLink()) {
      throw new Error(
        `[file-sync] Refusing to write through symlink: ${filePath}`,
      );
```
🔴 assertSafePath crashes on symlinked project roots (macOS /tmp → /private/tmp)
resolvedRoot = path.resolve(root) doesn't follow symlinks, but realParent = fs.realpathSync(parentDir) does. On macOS /tmp is a symlink to /private/tmp, so the comparison always fails and throws "Symlink escape blocked" for every legitimate write. The test files themselves call fs.realpathSync() as a workaround, confirming the bug. Fix: use fs.realpathSync(resolvedRoot) as the comparison baseline when the path exists.
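A simplified sketch of that fix (reduced from the PR's assertSafePath; the parent-directory realpath comparison is collapsed into a single root-prefix check for illustration):

```typescript
import fs from "node:fs";
import path from "node:path";

// Sketch: resolve the ROOT through realpath too, so a symlinked project
// root (macOS /tmp -> /private/tmp) compares equal to realpath'd paths.
function assertSafePath(root: string, relPath: string): string {
  const realRoot = fs.existsSync(root)
    ? fs.realpathSync(root)
    : path.resolve(root);
  const resolved = path.resolve(realRoot, relPath);
  if (resolved !== realRoot && !resolved.startsWith(realRoot + path.sep)) {
    throw new Error(`[file-sync] Path escape blocked: ${relPath}`);
  }
  return resolved;
}
```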
```ts
  private writeSyncedFile(filePath: string, absPath: string, content: string) {
    this.markRecent(this.recentlyPulled, filePath);
    const projectRoot = path.resolve(this.options.contentRoot, "..");
    assertSafePath(projectRoot, filePath);
    assertNotSymlink(absPath);
```
🔴 initStartupSync populates expectedWrites before watcher starts — first local edit silently dropped
initStartupSync() calls writeSyncedFile() which adds pulled files to expectedWrites, but the watcher starts afterwards with ignoreInitial: true. Those entries are never consumed, so the first real local edit to any file pulled at startup hits wasSyncPulled() → true and is silently skipped and never pushed to the backend.
```ts
        ...prev,
        enabled: data.enabled,
        connected: data.enabled,
      }));
```
🟡 Initial fetch sets connected: data.enabled instead of data.connected
Sets connected: data.enabled on mount. When sync is enabled but the adapter errored (hasError=true), the server returns { enabled: true, connected: false } but the hook always shows connected: true. Fix: connected: data.connected. Found by 5/5 review agents (previously dropped by review cap).
Builder has reviewed your changes and found 4 potential issues.
Review Details
Code Review Summary (Review Cycle 5)
PR #57 is a major security, resilience, and DX overhaul of the file sync system in agent-native. This cycle found the PR has significantly matured: three previously-reported bugs were fixed (Supabase compound filter now uses client-side owner_id check; stop() now properly awaits pushInFlight; emitSyncEvent now spreads all fields with { source: "sync", ...event }). Risk: 🔴 HIGH — security-sensitive path validation, data mutation logic, new adapters.
New Issues Found (4 new, from 5 parallel agents)
🔴 HIGH — assertSafePath false-negative on non-existent parent (4/5 agents)
The symlink escape check is guarded by fs.existsSync(parentDir). If the immediate parent doesn't yet exist (new nested path), the check is skipped. An attacker-controlled data/ symlink pointing to /etc passes the guard when the path is data/newdir/file.json — newdir doesn't exist, so existsSync returns false, symlink check is skipped, and mkdirSync(..., { recursive: true }) writes outside contentRoot.
🟡 MEDIUM — createFileSync returns ready despite init failure (2/5 agents)
initFileSync() now swallows errors internally (to allow retry), so createFileSync()'s try/catch never sees the exception. If the adapter query fails at boot, createFileSync still returns { status: "ready" } — wiring up SSE and status endpoints for a sync engine that never actually initialized.
🟡 MEDIUM — Supabase dispose() is a no-op (1/5 agents)
The comment in dispose() names removeAllChannels as the correct teardown, but the call is absent — the method body is empty. Supabase Realtime WebSocket channels are never closed on stop(), leaking connections and potentially preventing clean process exit.
🟡 MEDIUM — handleDelete races with pushInFlight, can resurrect deleted files (1/5 agents)
handleDelete() calls adapter.delete() immediately without waiting for pushInFlight. If a file is edited and then quickly deleted, the in-flight set() can resolve after the delete and recreate the remote row even though the file no longer exists locally.
Found by running 5 parallel code-review agents with randomized file ordering.
Code review by Builder.io
```ts
  const parentDir = path.dirname(resolved);
  if (fs.existsSync(parentDir)) {
    const realParent = fs.realpathSync(parentDir);
    if (
```
🔴 assertSafePath skips symlink check when parent dir doesn't exist yet
The symlink escape check is gated on fs.existsSync(parentDir). If an attacker sends data/newdir/file.json where data/ is a symlink outside the project, newdir doesn't exist yet so existsSync returns false — the entire symlink check is skipped. Subsequent mkdirSync(..., { recursive: true }) then follows the symlink and writes outside contentRoot. Walk up to the nearest existing ancestor to check it instead.
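The ancestor walk can be sketched as a small helper (the name is illustrative); the existing realpath comparison can then run against this directory instead of being skipped when the immediate parent is missing:

```typescript
import fs from "node:fs";
import path from "node:path";

// Sketch: walk up to the nearest ancestor that actually exists, so the
// symlink check can never be bypassed via a not-yet-created nested path.
function nearestExistingAncestor(p: string): string {
  let dir = path.resolve(p);
  while (!fs.existsSync(dir)) {
    const parent = path.dirname(dir);
    if (parent === dir) break; // reached the filesystem root
    dir = parent;
  }
  return dir;
}
```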
```ts
  const sync = new FileSync(syncOptions);

  try {
    await sync.initFileSync();
```
🟡 createFileSync returns ready even when initFileSync silently failed
initFileSync() now catches all startup errors internally (to support retry) and returns normally. This means createFileSync()'s surrounding try/catch never sees the exception and still returns { status: "ready" }. A DB outage at boot leaves the server wiring up SSE and status endpoints for a sync engine that never started.
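One way to close the gap is to have init report success explicitly. A sketch (names mirror the factory; the boolean-returning init is the suggested change, not current behavior):

```typescript
// Sketch: surface init failure so the factory's discriminated union is
// accurate even when init swallows its own exception to allow retry.
type FileSyncResult = { status: "ready" } | { status: "error"; error: string };

async function createFileSyncSketch(
  init: () => Promise<boolean>, // resolves false when startup sync failed
): Promise<FileSyncResult> {
  try {
    const ok = await init();
    return ok ? { status: "ready" } : { status: "error", error: "init failed" };
  } catch (err) {
    return { status: "error", error: String(err) };
  }
}
```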
```ts
  }

  async dispose(): Promise<void> {
    // removeAllChannels is the cleanest teardown for Supabase realtime
```
🟡 dispose() body is empty — Supabase channels never closed
The comment names removeAllChannels as the correct teardown but the actual call is absent; the method body is empty. Supabase Realtime WebSocket channels are never closed on stop(), leaking connections and potentially preventing clean Node.js process exit.
```diff
@@ -432,7 +827,9 @@ export class FileSync {
     this.options.adapter
       .delete(this.docId(relPath))
       .then(() => {
```
🟡 handleDelete races with pushInFlight, can resurrect deleted files
handleDelete() sends adapter.delete() without waiting for pushInFlight. If a file is edited and then quickly deleted, an in-flight set() can resolve after the delete, recreating the remote row for a file that no longer exists locally.
Builder has reviewed your changes and found 1 potential issue.
Review Details
Code Review Summary (Review Cycle 6)
PR #57 continues incremental improvements: the latest update fixed two Convex schema documentation bugs (renamed reserved by_id index to by_sync_id; corrected schema import from convex/server instead of convex/values). Risk: 🔴 HIGH — security-sensitive path validation, data mutation logic. Reviewed via 3/5 parallel agents (2 aborted mid-review due to concurrent PR update).
New Issues Found
🟡 MEDIUM — initFileSync() leaks purge timer handles on retry (1/3 agents)
startPurgeTimer() unconditionally overwrites this.purgeTimer = setInterval(...) without clearing the previous handle. Because initFileSync() now allows retries after startup failure (the flag stays false on error), each retry call starts a new purge interval. The old setInterval handle is lost — its reference is overwritten before clearInterval() is ever called. On repeated retries (e.g., during adapter outages), the number of leaked timers grows unboundedly and only the last one is cleared on stop().
Found by running 3 parallel code-review agents with randomized file ordering (2 agents aborted due to concurrent PR update).
Code review by Builder.io
```ts
    this.startRemoteListener(patterns, this.options.ownerId, "shared");
    this.startFileWatcher(patterns, this.options.ownerId, "shared");
    try {
      this.startPurgeTimer();
```
🟡 startPurgeTimer() overwrites purgeTimer handle without clearing previous, leaking intervals on retry
Each failed initFileSync() retry calls startPurgeTimer() which sets this.purgeTimer = setInterval(...) without first checking or clearing an existing timer. On repeated retries, old interval handles are leaked — only the most recent is tracked and cleared on stop().
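The guard can be sketched as follows. The timer functions are injectable here purely so the sketch is easy to exercise; the real class can simply call `clearInterval` on the existing handle before reassigning:

```typescript
// Sketch: clear any stale handle before starting a new interval, so a
// retried initFileSync() cannot leak timers.
type Handle = ReturnType<typeof setInterval>;

class PurgeTimerOwner {
  private purgeTimer: Handle | null = null;

  constructor(
    private setTimer: (fn: () => void, ms: number) => Handle = setInterval,
    private clearTimer: (h: Handle) => void = clearInterval,
  ) {}

  startPurgeTimer(onPurge: () => void, intervalMs = 60_000): void {
    if (this.purgeTimer !== null) this.clearTimer(this.purgeTimer); // no leak on retry
    this.purgeTimer = this.setTimer(onPurge, intervalMs);
  }

  stop(): void {
    if (this.purgeTimer !== null) {
      this.clearTimer(this.purgeTimer);
      this.purgeTimer = null;
    }
  }
}
```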
Builder has reviewed your changes and found 1 potential issue.
Review Details
Code Review Summary (Review Cycle 7)
The latest PR update adds Firestore doc ID encoding (encodeDocId/decodeDocId) to handle the new : separator format — Firestore rejects IDs containing /, so / is encoded as %2F. It also fixes the firebase-admin ESM dynamic import unwrapping. Risk: 🔴 HIGH — data mutation logic, security-sensitive sync engine. Reviewed via 4/5 parallel agents (1 aborted due to concurrent PR update).
New Issue Found
🔴 HIGH — decodeDocId corrupts IDs that legitimately contain %2F (2/4 agents)
encodeDocId replaces / with %2F, and decodeDocId replaces ALL %2F back to /. But on Linux/macOS, file paths can legitimately contain %2F as literal characters (e.g. data/file%2Fname.json). Round-trip: appId:data/file%2Fname.json → encoded: appId:data%2Ffile%2Fname.json → decoded: appId:data/file/name.json. The decoded ID no longer matches getDocId(appId, filePath), so on next startup initStartupSync treats the document as an orphan and deletes it. Any file with %2F in its name triggers an infinite orphan-cleanup loop (delete → re-upload → delete → …).
Found by running 4 parallel code-review agents with randomized file ordering.
Code review by Builder.io
```ts
  return id.replace(/\//g, "%2F");
}

function decodeDocId(id: string): string {
```
🔴 decodeDocId is not the inverse of encodeDocId — corrupts paths containing literal %2F
encodeDocId encodes / as %2F but doesn't first escape existing %. On decode, decodeDocId converts ALL %2F back to /, corrupting paths that legitimately contain %2F (valid on Linux/macOS). The mismatched round-trip ID triggers infinite orphan-cleanup cycles for any such file. Fix: encodeDocId should first replace % with %25 before encoding / as %2F, then decode in reverse.
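The suggested round-trip-safe version, sketched in full (escape `%` before encoding `/`, and decode in the reverse order):

```typescript
// Sketch of the suggested fix: escape "%" first so a literal "%2F" in a
// file name survives the round trip.
function encodeDocId(id: string): string {
  return id.replace(/%/g, "%25").replace(/\//g, "%2F");
}

function decodeDocId(id: string): string {
  // reverse order: "/" first, then "%"
  return id.replace(/%2F/g, "/").replace(/%25/g, "%");
}
```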
Builder has reviewed your changes and found 1 potential issue.
Review Details
Code Review Summary (Review Cycle 8)
This update refactors Supabase key detection in createFileSync() to support both the new (SUPABASE_PUBLISHABLE_KEY/SUPABASE_SECRET_KEY) and legacy (SUPABASE_ANON_KEY/SUPABASE_SERVICE_ROLE_KEY) key formats. Risk: 🔴 HIGH — credential selection logic directly impacts RLS enforcement. Reviewed via 5 parallel agents.
New Issue Found
🟡 MEDIUM — Factory silently prefers secret/service-role key when both keys are configured (2/5 agents)
The selection logic const key = secretKey || publishableKey always picks the secret key if SUPABASE_SECRET_KEY or SUPABASE_SERVICE_ROLE_KEY is present in the environment — even if SUPABASE_PUBLISHABLE_KEY is also set and intended to be used. In typical server environments both variables are present (different services use different keys). The warning fires but the behavior is still a footgun: the file-sync subsystem silently escalates to full-table RLS-bypassing access without explicit opt-in. The factory should default to the publishable/anon key and require an explicit FILE_SYNC_SUPABASE_KEY_TYPE=secret signal (or similar) to opt into the secret key.
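The suggested selection policy, sketched as a pure function (env var names come from the finding; `FILE_SYNC_SUPABASE_KEY_TYPE` is the reviewer's proposed knob, not an existing variable):

```typescript
// Sketch: default to the RLS-enforcing publishable/anon key; require an
// explicit opt-in before escalating to the secret/service-role key.
type Env = Record<string, string | undefined>;

function selectSupabaseKey(env: Env): string | undefined {
  const publishable = env.SUPABASE_PUBLISHABLE_KEY ?? env.SUPABASE_ANON_KEY;
  const secret = env.SUPABASE_SECRET_KEY ?? env.SUPABASE_SERVICE_ROLE_KEY;
  if (env.FILE_SYNC_SUPABASE_KEY_TYPE === "secret") {
    return secret ?? publishable; // explicit opt-in to RLS-bypassing access
  }
  return publishable ?? secret; // safe default
}
```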
Found by running 5 parallel code-review agents with randomized file ordering.
Code review by Builder.io
force-pushed from 99b9fb6 to 73c179d
Builder has reviewed your changes and found 6 potential issues.
Review Details
Code Review Summary — PR #57 (File Sync DX)
This PR comprehensively hardens and improves the file sync engine: security fixes (path traversal, denylist, input validation, tenant isolation), resilience improvements (retry queue, graceful shutdown, init flag fix, AbortController), performance gains (parallel startup sync, content hashing, LCS fix), a new Convex adapter, a createFileSync() factory, and a useFileSyncStatus() hook — alongside full documentation. Risk: High (security-sensitive code, data deletion, cross-tenant isolation).
The overall architecture is sound. The factory pattern, discriminated union result, typed EventEmitter, and per-file push serialization are all clean improvements. However, 3 parallel code-review agents surfaced several confirmed bugs spanning correctness, resilience, and UI state.
Key Findings
🔴 HIGH
- Convex `delete()` missing required `app`/`ownerId` args — the `files:remove` mutation requires all 3 args per the docs; only `{ id }` is passed, causing runtime Convex validation errors for all deletes.
🟡 MEDIUM (confirmed by 2+ agents or clearly valid)
- `useFileSyncStatus` sets `connected: data.enabled` — ignores `data.connected`; the UI can never show a disconnected state.
- `/api/file-sync/status` hardcodes `connected: true` — ignores `hasError` set after init; inconsistent with `.sync-status.json`.
- `sync-burst-start` without guaranteed `sync-burst-end` — if `adapter.query()` throws, SSE clients stay in `batchMode` permanently.
- Convex `processingChain` poisoned when `onError` throws — the chain permanently rejects; all future remote changes are silently dropped.
- Legacy `__` orphans deleted but not migrated — remote DB records are deleted but never re-inserted under the new `:` format; `ignoreInitial: true` prevents re-push.
- Merge base cache is FIFO, not LRU — `Map.set()` on an existing key doesn't update insertion order; actively-edited files are evicted first.
- Unhandled rejection in production entry — `createAppServer().then(...)` has no `.catch()`; Node 15+ terminates the process silently.
Found by running 3 parallel code-review agents with randomized file ordering.
```ts
async delete(id: string): Promise<void> {
  await this.client.mutation("files:remove", { id });
```
🔴 Convex delete() missing required app/ownerId args — all deletes fail at runtime
Calls "files:remove" with only { id } but the documented Convex function requires { id, app, ownerId } as mandatory v.string() args. Convex validates mutation args at runtime, so every delete (orphan cleanup and local file deletion) throws a validation error and the remote record is never removed.
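A sketch of the suggested fix, assuming a duck-typed client and that the tenant values are threaded in when the adapter is constructed (names here are illustrative, not the adapter's actual internals):

```typescript
// Minimal stand-in for the Convex client surface the adapter uses.
interface ConvexClientLike {
  mutation(name: string, args: Record<string, unknown>): Promise<unknown>;
}

// Build a delete function that sends all three args the documented
// "files:remove" mutation declares as required v.string() validators.
export function makeDelete(
  client: ConvexClientLike,
  app: string,
  ownerId: string,
) {
  return async function del(id: string): Promise<void> {
    // Convex validates required args at the edge, so omitting app/ownerId
    // would fail every call before the mutation body even runs.
    await client.mutation("files:remove", { id, app, ownerId });
  };
}
```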
```ts
setStatus((prev) => ({
  ...prev,
  enabled: data.enabled,
  connected: data.enabled,
```
🟡 useFileSyncStatus sets connected: data.enabled — should be data.connected
Sets connected: data.enabled instead of data.connected, so the hook can never surface a disconnected/error state — connected will always equal enabled regardless of adapter errors detected after init.
```ts
}
res.json({
  enabled: true,
  connected: true,
```
🟡 /api/file-sync/status hardcodes connected: true after init
Always returns connected: true when sync is ready, ignoring FileSync.hasError set on post-init adapter failures. The .sync-status.json file correctly reflects the error state but this endpoint does not, so the hook above can never show a real disconnect.
```ts
console.log(`[file-sync:${label}] Running full startup sync...`);

// Emit burst start for SSE batching
this.syncEvents.emit("sync-burst-start");
```
🟡 sync-burst-start emitted without guaranteed sync-burst-end on failure
If adapter.query() throws after sync-burst-start is emitted, the function exits before emitting sync-burst-end. Every connected SSE client stays in batchMode = true permanently, degrading all real-time file-change events to batched 150 ms delivery for the server lifetime.
```ts
"files:list",
{ app: appId, ownerId },
(result) => {
  processingChain = processingChain.then(() => {
```
🟡 Convex processingChain permanently poisoned if onError callback throws
If the caller's onError throws, the .then() rejects with no .catch() recovery path — all subsequent remote updates are silently dropped. A .catch(() => {}) guard after each .then() slot prevents the chain from poisoning.
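The suggested `.catch()` guard can be sketched as a small poison-proof chain helper (illustrative, not the adapter's actual internals):

```typescript
// Serialize tasks on a single promise chain without letting one
// rejection wedge the chain for all subsequent tasks.
export function createChain() {
  let chain: Promise<void> = Promise.resolve();
  return function enqueue(task: () => Promise<void> | void): Promise<void> {
    chain = chain.then(task).catch(() => {
      // Swallow (or log) so the next enqueued task still runs.
    });
    return chain;
  };
}
```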
```ts
await Promise.all(
  orphanedDocIds.map((id) =>
    limit(() =>
      this.options.adapter.delete(id).catch((err) => {
```
🟡 Legacy __-separator orphans deleted but never re-inserted under new : format
Orphan docs are deleted from the remote DB but their content is never re-pushed under the new canonical ID. Since the file watcher uses ignoreInitial: true, pre-existing local files won't trigger a push, leaving the remote DB empty for all previously-synced files after an upgrade.
Builder has reviewed your changes and found 6 potential issues.
Review Details
Code Review Summary — PR #57: Improve File Sync DX
This PR is a major overhaul of the file sync subsystem: a createFileSync() factory for zero-config setup, a Convex adapter (third backend alongside Firestore/Supabase), extensive security hardening (path traversal, denylist, input validation, client-side owner filter), resilience improvements (AbortController shutdown, retry queue, per-file serialization), and performance work (p-limit parallel startup, SHA-256 content hashing, LCS fix). Risk level: 🔴 HIGH — security-sensitive code, data deletion logic, API contract changes, and multi-tenant isolation.
The approach is architecturally sound and the breadth of hardening is impressive. However, 5 parallel code-review agent passes (with randomized file ordering) uncovered several confirmed bugs, some of which cause silent data loss or security issues.
Key Findings
🔴 HIGH — Confirmed by 3/5 agents:
- Legacy doc ID migration deletes all existing remote data — upgrading users lose all synced files when startup sees every `__`-format ID as an orphan and deletes it without re-uploading under the new `:` format.
🔴 HIGH — Confirmed by 2/5 agents:
- SIGTERM race: async shutdown bypassed by `createProductionServer`'s synchronous `process.exit(0)` — retry queue flush, in-flight pushes, and adapter disposal are all skipped on container shutdown.
- `createFileSync()` returns `"ready"` even when `initFileSync()` failed — internal error swallowing means factory consumers see a false-positive ready state.
- Convex `files:remove`/`files:get` args mismatch with docs — the adapter passes only `{ id }` but documented sample mutations require `app` + `ownerId`; every delete fails with a Convex validation error.
🔴 HIGH — Single-agent, confirmed critical:
- Path traversal can bypass the sync denylist via unnormalized paths (`foo/../.env`)
- `docId` omits `ownerId`, allowing cross-tenant overwrite of private files
- Supabase DELETE events silently dropped (missing `owner_id` in `payload.old` without `REPLICA IDENTITY FULL`)
- Echo suppression stale entries from startup block first user edits
Found by running 5 parallel code-review agents with randomized file ordering.
```ts
const legacyDocs = rows.filter((r) => r.id.includes("__"));
if (legacyDocs.length > 0) {
  console.warn(
    `[file-sync] Found ${legacyDocs.length} document(s) with legacy '__' separator. ` +
      `These will be treated as orphans. See: https://agent-native.dev/docs/file-sync#migration`,
  );
}
docsByPath.set(filePath, row);
}

if (orphanedDocIds.length > 0) {
  console.log(
    `[file-sync:${label}] Cleaning up ${orphanedDocIds.length} orphaned doc(s)...`,
  );
  for (const id of orphanedDocIds) {
    await this.options.adapter.delete(id).catch(() => {});
for (const row of rows) {
  const filePath = row.data.path;
  const canonicalId = this.docId(filePath);
  if (row.id !== canonicalId) {
    orphanedDocIds.push(row.id);
    continue;
  }
  docsByPath.set(filePath, row);
}
}

const projectRoot = path.resolve(this.options.contentRoot, "..");
let syncedCount = 0;

for (const [filePath, row] of docsByPath) {
  const data = row.data;
  if (!shouldSyncFile(filePath, patterns)) continue;

  const absPath = path.resolve(projectRoot, filePath);
  const remoteContent: string = data.content ?? "";
  const localContent = this.readLocalFile(absPath);

  if (localContent === null) {
    this.writeSyncedFile(filePath, absPath, remoteContent);
    syncedCount++;
  } else if (localContent !== remoteContent) {
    const remoteMs: number = data.lastUpdated ?? 0;
    let localMs = 0;
    try {
      localMs = fs.statSync(absPath).mtimeMs;
    } catch {}
// Parallelize orphan cleanup with p-limit (1g)
const limit = pLimit(this.options.startupConcurrency ?? 10);

    if (remoteMs > localMs) {
if (orphanedDocIds.length > 0) {
  console.log(
    `[file-sync:${label}] Cleaning up ${orphanedDocIds.length} orphaned doc(s)...`,
  );
```
🔴 Legacy __ doc ID migration deletes all existing remote data on upgrade
Every document stored with the old __ separator fails the canonical ID match and lands in orphanedDocIds, which are then deleted. Because they're skipped via continue before being added to docsByPath, the startup sync loop never re-uploads them, and the file watcher uses ignoreInitial: true so unchanged local files are never re-pushed. Other connected clients receive removed events and delete their local files. Impact: upgrading users silently lose all previously synced remote data. Fix: migrate before deleting — await adapter.set(newId, row.data) then await adapter.delete(row.id) — and populate docsByPath with the migrated record.
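The migrate-before-deleting fix can be sketched as follows; the adapter and row shapes are simplified from the snippets quoted above:

```typescript
// Simplified shapes; real records also carry content, timestamps, etc.
interface Row {
  id: string;
  data: { path: string; [k: string]: unknown };
}
interface AdapterLike {
  set(id: string, data: Row["data"]): Promise<void>;
  delete(id: string): Promise<void>;
}

export async function migrateLegacyDoc(
  adapter: AdapterLike,
  row: Row,
  canonicalId: string,
  docsByPath: Map<string, Row>,
): Promise<void> {
  // 1. Re-insert under the new canonical ID first, so a crash between the
  //    two steps leaves a duplicate instead of losing the data.
  await adapter.set(canonicalId, row.data);
  // 2. Only then remove the legacy record.
  await adapter.delete(row.id);
  // 3. Keep the migrated record in the startup index so the sync loop
  //    still processes the file instead of treating it as gone.
  docsByPath.set(row.data.path, { id: canonicalId, data: row.data });
}
```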
```ts
process.on("SIGTERM", async () => {
  if (syncResult.status === "ready") await syncResult.shutdown();
  process.exit(0);
```
🔴 SIGTERM race: createProductionServer's synchronous process.exit(0) kills the process before async sync shutdown completes
The template registers an async SIGTERM handler that awaits syncResult.shutdown(), but createProductionServer() registers a subsequent synchronous SIGTERM handler that calls process.exit(0) immediately. Node does not await async listeners, so the production handler fires while the template's handler is suspended at await, killing the process before the retry-queue flush, in-flight push drain, or adapter disposal finish. Fix: remove the template-level SIGTERM handler and have createProductionServer accept an async onShutdown callback, or ensure only one shutdown path owns process.exit(0) after async cleanup.
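One way to give the shutdown path a single owner, as this comment suggests, is sketched below (hypothetical helper, not the template's actual code):

```typescript
// Register exactly one SIGTERM handler that awaits all async cleanup
// before calling exit, so no competing synchronous process.exit(0) can
// preempt the drain.
type Cleanup = () => Promise<void>;

export function installShutdown(
  cleanups: Cleanup[],
  exit: (code: number) => void = process.exit,
): () => Promise<void> {
  let started = false;
  const run = async () => {
    if (started) return; // idempotent: a repeated signal won't re-run cleanup
    started = true;
    for (const fn of cleanups) {
      // Best-effort: one failed cleanup must not block the rest.
      await fn().catch(() => {});
    }
    exit(0);
  };
  process.once("SIGTERM", run);
  return run;
}
```

The injectable `exit` parameter exists so the ordering (cleanup first, exit last) is testable without killing the test process.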
```ts
try {
  await sync.initFileSync();
} catch (err) {
  const safeMsg =
    err instanceof Error ? err.message.slice(0, 200) : "Unknown error";
  console.error(`[file-sync] Init failed: ${safeMsg}`);
  return { status: "error", reason: `Init failed: ${safeMsg}` };
}
```
🔴 Factory returns status:"ready" even when initFileSync failed internally
initFileSync() catches all initialization errors internally (file-sync.ts:134-164), sets hasError = true, and returns normally without rethrowing. The try/catch in createFileSync() is therefore dead code for startup failures — adapter errors, missing tables, and transient network failures all fall through to { status: "ready" }. Impact: callers wire SSE and routes as if sync is functional while no remote listener was ever started. Fix: check sync.hasError after initFileSync() returns, or rethrow from initFileSync() so createFileSync() can return { status: "error" } as documented.
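The suggested guard can be sketched as follows; the result shape mirrors the PR's discriminated union but is a simplified assumption:

```typescript
// Simplified stand-ins for the engine and factory result.
interface FileSyncLike {
  initFileSync(): Promise<void>;
  hasError: boolean;
  errorMessage?: string;
}
type SyncResult =
  | { status: "ready"; fileSync: FileSyncLike }
  | { status: "error"; reason: string };

export async function initWithErrorCheck(
  sync: FileSyncLike,
): Promise<SyncResult> {
  try {
    await sync.initFileSync();
  } catch (err) {
    const msg =
      err instanceof Error ? err.message.slice(0, 200) : "Unknown error";
    return { status: "error", reason: `Init failed: ${msg}` };
  }
  // initFileSync() swallows startup errors internally, so the flag must
  // also be inspected; the try/catch alone never sees those failures.
  if (sync.hasError) {
    return {
      status: "error",
      reason: sync.errorMessage ?? "Init failed internally",
    };
  }
  return { status: "ready", fileSync: sync };
}
```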
```ts
assertSafePath(projectRoot, filePath);
assertNotSymlink(absPath);

this.expectedWrites.add(filePath);
```
🔴 Echo suppression stale entries from startup sync cause first user edit to be silently dropped
writeSyncedFile() during initStartupSync adds paths to expectedWrites for echo suppression, but the file watcher starts afterward with ignoreInitial: true and never fires for those writes — leaving stale entries in expectedWrites. When the user next modifies one of those files, handleChange calls wasSyncPulled(), finds the stale entry, consumes it, and returns without syncing. Impact: the first manual edit to every file pulled during startup is silently lost. Fix: call this.expectedWrites.clear() at the end of initStartupSync, before the watcher starts.
```ts
if (lastHash === undefined || localHash === lastHash) {
  // No local changes since last sync — safe to overwrite
  this.writeSyncedFile(filePath, absPath, incoming);
} else {
```
🔴 Path traversal bypasses sync denylist via unnormalized remote paths
The denylist (isDenylisted) and shouldSyncFile checks run against the unnormalized remote path before assertSafePath canonicalizes it. A path like foo/../.env passes the **/.env* minimatch check because minimatch evaluates the literal string, but assertSafePath then resolves it to the real .env inside the project root and allows the write. Impact: a remote database write with a traversal path can overwrite .env or other protected files despite the denylist. Fix: normalize the path first with assertSafePath, then run shouldSyncFile/isDenylisted against the canonical relative path.
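The canonicalize-then-verify ordering can be sketched as follows; `isDenylisted` here is a toy prefix check standing in for the PR's minimatch-based denylist:

```typescript
import path from "node:path";

// Toy denylist stand-in; the real implementation uses glob patterns.
const DENYLIST_PREFIXES = [".env", ".git/", "node_modules/"];

function isDenylisted(relPath: string): boolean {
  return DENYLIST_PREFIXES.some((p) => relPath === p || relPath.startsWith(p));
}

export function canonicalizeAndCheck(
  projectRoot: string,
  remotePath: string,
): string {
  // Normalize first: "foo/../.env" resolves to "<root>/.env".
  const abs = path.resolve(projectRoot, remotePath);
  const rel = path.relative(projectRoot, abs);
  // Reject anything that escapes the root after normalization.
  if (rel.startsWith("..") || path.isAbsolute(rel)) {
    throw new Error(`Path escapes project root: ${remotePath}`);
  }
  // Run the denylist against the canonical relative path, so the
  // traversal form is checked as ".env" and correctly rejected.
  if (isDenylisted(rel.split(path.sep).join("/"))) {
    throw new Error(`Path is denylisted: ${rel}`);
  }
  return abs;
}
```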
```ts
// Client-side filter by owner_id (Realtime only supports one filter)
// Client-side filter — Supabase Realtime only supports one server filter
if (row.owner_id !== ownerId) return;
```
🔴 Supabase client-side owner filter silently drops all DELETE events
On Supabase DELETE events, payload.old only contains the primary key unless REPLICA IDENTITY FULL is set (which the provided schema does not do). row.owner_id is therefore undefined, and the check if (row.owner_id !== ownerId) return evaluates undefined !== "shared" → true, so every remote deletion is silently ignored. Impact: files deleted by remote peers are never removed locally, permanently breaking sync for deletes. Fix: either add ALTER TABLE files REPLICA IDENTITY FULL to the setup schema, or skip the owner filter for DELETE events and instead verify ownership via the doc ID prefix.
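The doc-ID fallback suggested for DELETE events can be sketched as below; note the `app:ownerId:path` ID layout is the reviewer's proposal from elsewhere in this thread, not the current format:

```typescript
// Without REPLICA IDENTITY FULL, a DELETE payload carries only the
// primary key, so owner_id is undefined and cannot be filtered on.
interface RealtimeRow {
  id: string;
  owner_id?: string;
}

export function shouldProcessEvent(
  eventType: "INSERT" | "UPDATE" | "DELETE",
  row: RealtimeRow,
  appId: string,
  ownerId: string,
): boolean {
  if (eventType === "DELETE") {
    // Fall back to the ownerId embedded in the document ID.
    return row.id.startsWith(`${appId}:${ownerId}:`);
  }
  // INSERT/UPDATE payloads carry the full row, so the column check works.
  return row.owner_id === ownerId;
}
```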
@builderio-bot look at latest PR feedback and fix anything you agree with. Be skeptical. Reply to every comment (directly on each comment's thread) saying whether you fixed it or not and why
oops this isn't a Builder-created PR, @gopinav can you peek at latest review feedback and have agent fix whatever it agrees with |
…ce, race conditions

Phase 1 of file sync DX improvement:
- Path traversal protection with branded SafePath type and symlink checks
- Hardcoded sync denylist (secrets, infra, meta-files, editor temps)
- Input validation for appId/ownerId/userUid (filter injection prevention)
- Supabase compound filter (fixes cross-tenant data leak)
- Init flag fix (set after success, not before)
- stop() with AbortController for graceful shutdown
- Per-file push serialization (prevents hash corruption from rapid writes)
- Content-hash dedup replacing TTL-only echo suppression
- Parallel startup sync with p-limit(10)
- Remove read-before-write (content hash comparison in-memory)
- Content hashes + merge base LRU cache (bounded memory)
- LCS backtrack fix (O(N) vs O(N^2)) and lower bailout threshold
- Hardened retry queue with concurrent flush guard, dead letter log
- SSE crash protection (safeWrite wrapper)
- expectedWrites Set for echo suppression
- Doc ID separator changed from __ to : (no collision)
- dispose() on FileSyncAdapter interface
- Typed EventEmitter, branded types, eliminate any
- Sync status file (data/.sync-status.json) for agent visibility
Phase 2 of file sync DX improvement:
- createFileSync({ contentRoot }) returns discriminated union FileSyncResult
- Three states: disabled, error (with reason), ready (with fileSync + sseEmitter + shutdown)
- Env var contract: FILE_SYNC_ENABLED, FILE_SYNC_BACKEND, FILE_SYNC_SUPABASE_KEY_TYPE
- Dynamic imports for adapter SDKs (firebase-admin, @supabase/supabase-js)
- Actionable error messages for missing deps and env vars
- appId defaults to package.json name, ownerId defaults to "shared"
- Exported from @agent-native/core/adapters/sync barrel
…rity

Phase 3 of file sync DX improvement:
- useFileSyncStatus() hook with SSE subscription + initial REST fetch
- Default template wired with createFileSync(), diagnostic endpoint, shutdown
- SSE events tagged with source: "file" and relative paths
- Adaptive SSE batching during startup sync bursts
- Agent conflict notification via application-state/sync-conflict.json
- AGENTS.md updated with sync documentation (env vars, status, conflicts)
- Post-creation CLI hint pointing to file sync docs
- FileSync integration tests: constructor validation, startup sync, remote pull, local push, legacy doc ID detection, init retry, stop/dispose, echo suppression, sync status, path traversal rejection
- Config security tests: assertSafePath (traversal, null bytes, empty, symlinks), assertNotSymlink, validateIdentifier, hashContent
- Factory tests: disabled, missing backend, invalid backend, missing credentials
- 37 new tests (168 total, all passing)
- ConvexFileSyncAdapter implementing all 6 FileSyncAdapter methods
- Duck-typed ConvexClient interface (query, mutation, onUpdate, close)
- Subscribe uses content-hash snapshot diffing with processingChain serialization
- No skip-first-callback (sync engine dedup handles initial state)
- Factory case with CONVEX_URL HTTPS validation
- Optional peer dep convex>=1, exports map ./adapters/convex
- 13 tests for ConvexFileSyncAdapter (CRUD + subscribe diffing)
- Factory test for missing CONVEX_URL
- Fix isValidBackend to include "convex" (was unreachable through factory)
- 10-section docs page: overview, quick start, config, 3 backend guides (Firestore, Supabase, Convex), factory pattern, diagnostics, agent parity, custom adapter guide
- Convex setup with full copyable schema and function code
- Redirect from /docs/database-adapters to /docs/file-sync
- Sidebar navigation updated
- Revert Supabase compound filter (Realtime only supports one filter)
- Fix scoped package name crash in createFileSync (strip @scope/ prefix)
- Fix stop() to drain pushInFlight before disposing adapter
- Fix emitSyncEvent to preserve localSnippet/remoteSnippet/strategy fields
- Fix createAppServer async callers (node-build.ts, vite express plugin)
- Fix path traversal test for Linux CI (don't check /etc/passwd existence)
- Run prettier on all changed files
- key-concepts.tsx: rename section to "File Sync", add Convex card, update language from "database" to "remote store"
- cli-adapters.tsx: update link to /docs/file-sync
- searchIndex.ts: rename sections, add Convex references
- file-sync.tsx: rename MyDatabaseAdapter to MyCustomAdapter
- file-sync.ts: update comment from "Database adapter" to "Adapter"
…ync for section names

- MyCustomAdapter → MyDatabaseAdapter (it IS a database adapter)
- "remote store" → "database" in descriptive text (the word is accurate)
- Section/page/nav names stay as "File Sync" (feature name, not backend name)
…rver not convex/values
…fault

Two issues found during manual integration testing:
- firebase-admin ESM dynamic import wraps module under .default — unwrap it
- Firestore doc IDs cannot contain / (path separator) — encode as %2F
Supabase now uses Publishable/Secret keys instead of anon/service_role.

- Primary env vars: SUPABASE_PUBLISHABLE_KEY, SUPABASE_SECRET_KEY
- Legacy env vars still supported: SUPABASE_ANON_KEY, SUPABASE_SERVICE_ROLE_KEY
- Removed FILE_SYNC_SUPABASE_KEY_TYPE toggle (auto-detects from which key is set)
- Updated docs and AGENTS.md template
…p secret key path
1. stop() aborts before flushRetryQueue — moved abort() after graceful drain
2. sync-conflict.json never deleted — now removed on conflict-resolved
3. useFileSyncStatus sets connected: data.enabled — fixed to data.connected
4. startPurgeTimer leaks intervals on retry — guard against duplicate timer
5. sync-burst-start without guaranteed end — wrapped in try/finally
6. Convex processingChain poisoned if onError throws — added .catch() guard
4571f95 to 4daf542
Builder has reviewed your changes and found 6 potential issues.
Review Details
Code Review Summary — PR #57 (Round 3): Improve File Sync DX
This PR adds a createFileSync() factory, Convex adapter, extensive security hardening (path traversal, denylist, input validation), resilience improvements (AbortController shutdown, retry queue, per-file serialization), and performance optimizations. Risk level: 🔴 HIGH — security-sensitive code, multi-tenant isolation, data deletion logic, and API contract changes.
The six issues from the previous review round remain open. This round of 5 parallel agents (with randomized file ordering) uncovered additional new bugs not previously posted.
New Key Findings
🔴 HIGH — 2 agents:
- `docId` omits `ownerId` — private-sync documents from different users sharing the same file path get the same remote ID, causing cross-user overwrites
🔴 HIGH — 1 agent (confirmed logic bug):
- `pushInFlight` serialization broken — multiple rapid changes all `await` the same prior promise and wake simultaneously, defeating per-file serialization
- Convex diffing ignores `lastUpdated` — metadata-only updates (conflict resolution, echo suppression touches) are invisible to the Convex subscriber
🟡 MEDIUM — 2 agents:
- Convex `get`/`remove` arg mismatch — the adapter sends only `{ id }` but documented sample functions require `app` + `ownerId`; deletes fail for anyone following the docs
- `dispose()` now required on `FileSyncAdapter` — breaks existing custom adapters at compile time and crashes at shutdown without a guard
- Startup `pushOps` not tracked in `pushInFlight` — `stop()` can dispose the adapter while startup writes are still running
🟡 MEDIUM — 1 agent:
- `Promise.all` over startup pushOps fails entire init on one file error
- SSE `safePush` swallows stream errors without cleanup, leaking watcher listeners
- Convex `processingChain` continues executing after unsubscribe, causing concurrent mutation on reconnect
- `stop()` race when background retry flush is already running
Found by running 5 parallel code-review agents with randomized file ordering.
```ts
if (this.wasSyncPulled(relPath)) return;

// Per-file push serialization (1h) — wait for in-flight push
const prior = this.pushInFlight.get(relPath);
```
🔴 pushInFlight serialization fails: all concurrent awaiters wake simultaneously
When multiple rapid file-change events fire for the same path, they all fetch the same prior promise and await it together. When prior resolves, every waiting execution resumes at the same time and races to read the file, compute hashes, and call adapter.set() concurrently — defeating the intended serialization. Fix: chain promises instead of just awaiting them: const next = (prior || Promise.resolve()).then(async () => { /* push logic */ }); this.pushInFlight.set(relPath, next);
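The chaining fix can be sketched as a standalone helper (illustrative names, not the engine's actual code):

```typescript
// One promise chain per file path: each new push is appended as the
// next link, so concurrent change events run strictly one at a time
// instead of all awaiting (and then racing past) the same prior promise.
const pushInFlight = new Map<string, Promise<void>>();

export function serializedPush(
  relPath: string,
  push: () => Promise<void>,
): Promise<void> {
  const prior = pushInFlight.get(relPath) ?? Promise.resolve();
  // Chain, don't just await: the returned promise IS the next link,
  // so a third caller waits on this one rather than racing with it.
  const next = prior.catch(() => {}).then(push);
  pushInFlight.set(relPath, next);
  // Clean up once settled so the map doesn't grow unboundedly.
  void next.catch(() => {}).finally(() => {
    if (pushInFlight.get(relPath) === next) pushInFlight.delete(relPath);
  });
  return next;
}
```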
```ts
 */
export function getDocId(appId: string, filePath: string): string {
  return `${appId}__${filePath.replace(/\//g, "__")}`;
  return `${appId}:${filePath}`;
```
🔴 docId omits ownerId — private sync channels collide across different users
getDocId() builds keys as ${appId}:${filePath} with no ownerId. Two users syncing the same relative path under initPrivateSync produce identical remote IDs. All adapters use this ID as the upsert key, so user B's write overwrites user A's row and changes its owner_id, making user A's document invisible. Fix: include ownerId in the canonical ID: ${appId}:${ownerId}:${filePath}.
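The proposed canonical ID is a one-line change, sketched here with the reviewer's suggested layout (not the shipped format):

```typescript
// Include ownerId so two users syncing the same relative path under
// private sync get distinct remote document IDs.
export function getDocId(
  appId: string,
  ownerId: string,
  filePath: string,
): string {
  // ":" cannot appear in a validated appId/ownerId (identifier
  // validation elsewhere in this PR), so the segments are unambiguous.
  return `${appId}:${ownerId}:${filePath}`;
}
```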
```ts
const prevHash = previousHashes.get(id);
if (prevHash === undefined) {
  changes.push({
    type: "added",
```
🔴 Convex snapshot diffing ignores lastUpdated — metadata-only updates are silently dropped
The subscriber detects modifications by comparing only content hashes (prevHash !== hash). If a file's content doesn't change but its lastUpdated timestamp does (e.g., conflict resolution or echo-suppression touch), the hash stays identical and no "modified" event is emitted. Other clients never learn about the update, leaving them in stale state. Fix: also compare lastUpdated: if (prevHash !== hash || prevRecord?.lastUpdated !== record.lastUpdated).
```ts
async get(id: string): Promise<{ id: string; data: FileRecord } | null> {
  const row = (await this.client.query("files:get", {
    id,
  })) as ConvexFileRow | null;

  if (!row) return null;
  return { id: row.id, data: toRecord(row) };
}

async set(id: string, record: FileWritePayload): Promise<void> {
  await this.client.mutation("files:upsert", {
    id,
    ...(record.path !== undefined && { path: record.path }),
    ...(record.content !== undefined && { content: record.content }),
    ...(record.app !== undefined && { app: record.app }),
    ...(record.ownerId !== undefined && { ownerId: record.ownerId }),
    ...(record.lastUpdated !== undefined && {
      lastUpdated: record.lastUpdated,
    }),
    ...(record.createdAt !== undefined && { createdAt: record.createdAt }),
  });
}

async delete(id: string): Promise<void> {
  await this.client.mutation("files:remove", { id });
```
🟡 Convex adapter sends only { id } but documented sample functions require app+ownerId — deletes always fail
adapter.get() calls files:get with { id } and adapter.delete() calls files:remove with { id }, but the Convex setup guide in the docs defines those mutations with required app: v.string() and ownerId: v.string() args. Convex validates all required args at the edge, so every delete() call fails with an argument-validation error for anyone following the documented setup. Fix: either pass appId/ownerId from the adapter, or update the docs to make those args optional.
```ts
onError: (error: any) => void,
onError: (error: unknown) => void,
): Unsubscribe;
dispose(): Promise<void>;
```
🟡 dispose() is now required on FileSyncAdapter — breaks all existing custom adapters
dispose(): Promise<void> was added as a mandatory interface member, and FileSync.stop() calls it unconditionally. Any existing custom adapter built against the prior 5-method contract fails TypeScript compilation after upgrade, and JavaScript adapters throw TypeError: adapter.dispose is not a function at shutdown. The docs custom-adapter section still shows the old 5-method signature. Fix: make it optional (dispose?(): Promise<void>) and guard the call with await this.options.adapter.dispose?.().
```ts
// Collect push operations for parallel execution
const pushOps: Array<() => Promise<void>> = [];

for (const [filePath, row] of docsByPath) {
  if (this.stopped) return;

  const data = row.data;
  if (!shouldSyncFile(filePath, patterns)) continue;

  const absPath = assertSafePath(projectRoot, filePath) as string;
  const remoteContent: string = data.content ?? "";
  const localContent = this.readLocalFile(absPath);

  if (localContent === null) {
    this.writeSyncedFile(filePath, absPath, remoteContent);
    syncedCount++;
  } else if (localContent !== remoteContent) {
    const remoteMs: number = data.lastUpdated ?? 0;
    let localMs = 0;
    try {
      localMs = fs.statSync(absPath).mtimeMs;
    } catch {
      /* file may have been deleted */
    }

    if (remoteMs > localMs) {
      this.writeSyncedFile(filePath, absPath, remoteContent);
      syncedCount++;
    } else {
      // Queue push for parallel execution
      const capturedFilePath = filePath;
      const capturedLocalContent = localContent;
      const capturedCreatedAt = data.createdAt;
      pushOps.push(async () => {
        if (this.abortController.signal.aborted) return;
        const now = Date.now();
        await this.options.adapter.set(this.docId(capturedFilePath), {
          path: capturedFilePath,
          content: capturedLocalContent,
          app: this.options.appId,
          ownerId,
          lastUpdated: now,
          createdAt: capturedCreatedAt ?? now,
        });
        if (this.abortController.signal.aborted) return;
        this.lastSyncedHash.set(
          capturedFilePath,
          hashContent(capturedLocalContent),
        );
        this.updateMergeBase(capturedFilePath, capturedLocalContent);
        this.markRecent(this.recentlyPushed, capturedFilePath);
      });
      syncedCount++;
    }
  } else {
    const now = Date.now();
    await this.options.adapter.set(this.docId(filePath), {
      path: filePath,
      content: localContent,
      app: this.options.appId,
      ownerId,
      lastUpdated: now,
      createdAt: data.createdAt ?? now,
    });
    this.lastSyncedContent.set(filePath, localContent);
    this.markRecent(this.recentlyPushed, filePath);
    syncedCount++;
    this.lastSyncedHash.set(filePath, hashContent(localContent));
    this.updateMergeBase(filePath, localContent);
  }
  } else {
    this.lastSyncedContent.set(filePath, localContent);
  }
}

console.log(
  `[file-sync:${label}] Startup sync complete - ${syncedCount} file(s) synced`,
);
// Execute pushes in parallel with concurrency limit (1g)
if (pushOps.length > 0) {
  await Promise.all(pushOps.map((fn) => limit(fn)));
}
```
🟡 Startup pushOps run outside pushInFlight — stop() disposes adapter while writes are in-flight
The startup sync push batch (Promise.all(pushOps.map(fn => limit(fn)))) is never registered in pushInFlight. stop() only awaits pushInFlight before calling abortController.abort() and adapter.dispose(). If stop() fires mid-startup (e.g., container restart), those orphaned push promises continue writing to an already-disposed adapter, causing errors and corrupted sync state. Fix: store the startup batch promise and await it in stop() before the abort signal.
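Tracking the startup batch so `stop()` can drain it first can be sketched as below (illustrative names, not the engine's actual fields):

```typescript
// Record the startup push batch so shutdown can wait for in-flight
// writes before disposing the adapter.
export class StartupTracker {
  private startupBatch: Promise<void> | null = null;

  runStartup(ops: Array<() => Promise<void>>): Promise<void> {
    // Errors are contained per-op so one failed file does not reject
    // the whole batch.
    this.startupBatch = Promise.all(
      ops.map((fn) => fn().catch(() => {})),
    ).then(() => {});
    return this.startupBatch;
  }

  async stop(dispose: () => Promise<void>): Promise<void> {
    // Drain in-flight startup writes first, then dispose the adapter.
    if (this.startupBatch) await this.startupBatch.catch(() => {});
    await dispose();
  }
}
```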
Wire createFileSync() into analytics, brand, calendar, content, imagegen, mail, slides, and videos templates — matching the pattern established in the default template (PR #57). Per template:

- Make createAppServer() async
- Update node-build.ts to the .then() pattern
- Add createFileSync() wiring with diagnostic endpoint, SSE extraEmitters, graceful shutdown, and conflict notification
- Add/update AGENTS.md with the standard File Sync section
- Create sync-config.json where missing (brand, mail, videos)

Templates that had no SSE (analytics, calendar, slides, videos) now get createFileWatcher + createSSEHandler. The content template correctly uses contentRoot: "./content". The mail template syncs ./data only (not ./application-state). Slides keeps the existing /api/decks/events alongside the new /api/events. Dead Firestore/initFileSync docs replaced in calendar and slides AGENTS.md. Sync remains opt-in — apps work identically without FILE_SYNC_ENABLED.
What & Why
Security
- `assertSafePath()` with OWASP canonicalize-then-verify + symlink escape detection.
- `.env`, credentials, `.git/`, `node_modules/`, and sync meta-files blocked from syncing regardless of user config.
- `appId`, `ownerId`, `userUid` validated to prevent filter injection in Supabase Realtime.
- `owner_id` check to prevent cross-tenant data leaks.

Resilience

- `stop()` with AbortController — graceful shutdown: cancels ops, drains retry queue + in-flight pushes, closes watchers, disposes adapter.
- `dispose()` on adapter interface — prevents leaked gRPC channels and WebSocket connections.

Performance

- `p-limit(10)` drops 100-file startup from ~10s to ~1s.
- Content-hash dedup instead of `adapter.get()` on every push.
- `push()` + `reverse()` instead of `unshift()` (O(N) vs O(N^2)).

Race Conditions

- `pushInFlight` map prevents hash corruption from rapid writes.
- `safeWrite` wrapper prevents `res.write()` crash on closed connections.

DX — Factory & Wiring

- `createFileSync()` factory — zero-config from env vars, returns discriminated union (disabled/error/ready).
- `useFileSyncStatus()` hook — SSE subscription + initial REST fetch.

Convex Adapter

- `ConvexFileSyncAdapter` — third backend alongside Firestore/Supabase. Duck-typed `ConvexClient` interface.
- `processingChain` serialization for safe reconnection handling.
- `app`/`ownerId` scoping on get/remove. Partial update support for `set()`.
- `FILE_SYNC_BACKEND=convex` + `CONVEX_URL` (HTTPS validated).

Documentation

- `/docs/database-adapters` renamed to `/docs/file-sync`.

Other

- `__` → `:` separator to fix collisions. Runtime warning for legacy format.
- `any` removed from sync engine.

Testing
Post-Deploy Monitoring & Validation
No additional operational monitoring required: this is a framework library change. Sync is opt-in via `FILE_SYNC_ENABLED=true` and disabled by default — zero behavior change for existing apps.

Before & After
Enabling file sync (before)
Enabling file sync (after)
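A hedged sketch of env-driven setup in the spirit of `createFileSync()` — a self-contained stub whose option and field names are assumptions, not the package's actual API:

```typescript
// Stub mirroring the described discriminated union: disabled | error | ready.
type FileSyncResult =
  | { status: "disabled" }
  | { status: "error"; error: Error }
  | { status: "ready"; stop: () => Promise<void> };

// Assumed env contract from the description: FILE_SYNC_ENABLED gates the
// feature; the convex backend requires an HTTPS CONVEX_URL.
async function createFileSyncStub(
  env: Record<string, string | undefined>
): Promise<FileSyncResult> {
  if (env.FILE_SYNC_ENABLED !== "true") return { status: "disabled" };
  if (
    env.FILE_SYNC_BACKEND === "convex" &&
    !env.CONVEX_URL?.startsWith("https://")
  ) {
    return { status: "error", error: new Error("CONVEX_URL must be HTTPS") };
  }
  return { status: "ready", stop: async () => {} };
}
```

The discriminated union forces callers to handle the disabled and error branches explicitly instead of crashing at startup.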
Handling remote changes (before)
Handling remote changes (after)
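The content-hash echo suppression this PR introduces can be sketched as follows (helper names here are illustrative, not the engine's actual internals): a remote event whose content hashes to what we last wrote locally is our own write echoing back, so it is skipped rather than re-applied.

```typescript
import { createHash } from "node:crypto";

// Last hash we pushed for each relative path (illustrative stand-in for the
// engine's internal state).
const lastWrittenHash = new Map<string, string>();

function sha256(content: string): string {
  return createHash("sha256").update(content).digest("hex");
}

function recordLocalWrite(relPath: string, content: string): void {
  lastWrittenHash.set(relPath, sha256(content));
}

function shouldApplyRemote(relPath: string, remoteContent: string): boolean {
  // Identical hash means this remote event is an echo of our own push.
  return lastWrittenHash.get(relPath) !== sha256(remoteContent);
}
```

Unlike the old TTL-only suppression, this never drops a genuine remote change just because it arrived shortly after a local write — only byte-identical content is treated as an echo.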
Path security (before)
Path security (after)
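A minimal sketch of the canonicalize-then-verify check (the real `assertSafePath()` also resolves symlinks via `realpath` and returns a branded `SafePath` type; this sync-only version keeps the sketch short):

```typescript
import * as path from "node:path";

// Canonicalize-then-verify: resolve the path first, THEN check containment,
// so "../", "a/../../b", and absolute inputs are all normalized before the
// verdict.
function assertSafePath(root: string, userPath: string): string {
  const resolved = path.resolve(root, userPath); // canonicalize first
  const rel = path.relative(path.resolve(root), resolved);
  // Verify: the canonical path must stay inside root.
  if (rel.startsWith("..") || path.isAbsolute(rel)) {
    throw new Error(`path escapes sync root: ${userPath}`);
  }
  return resolved; // the branded SafePath in the real code
}
```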
Startup sync (before)
Startup sync (after)
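Bounded-parallel startup can be sketched with an inline limiter that mirrors what `p-limit(10)` provides (inline here only to keep the sketch dependency-free — the PR itself uses the `p-limit` package):

```typescript
// Minimal stand-in for p-limit: at most `concurrency` tasks run at once;
// the rest queue and start as slots free up.
function pLimitLike(concurrency: number) {
  const queue: Array<() => void> = [];
  let active = 0;
  const pump = () => {
    while (active < concurrency && queue.length > 0) {
      active++;
      queue.shift()!();
    }
  };
  return function limit<T>(task: () => Promise<T>): Promise<T> {
    return new Promise<T>((resolve, reject) => {
      queue.push(() => {
        task()
          .then(resolve, reject)
          .finally(() => {
            active--;
            pump(); // a slot freed: start the next queued task
          });
      });
      pump();
    });
  };
}
```

Startup would then be `await Promise.all(pushOps.map((fn) => limit(fn)))` — 100 files complete in roughly ceil(100/10) round-trips instead of 100 sequential ones.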
Documentation (before)
- `FileSyncAdapter` interface

Documentation (after)
Naming: "File Sync" vs "Database Adapters"
The docs page was renamed from "Database Adapters" to "File Sync". The code still uses `adapters/` in the directory structure. Here's why: the word "database" is still used in prose where it's accurate (e.g., "syncs files to a database"), just not as the feature name. Users think "I want file sync", not "I want to configure a database adapter".