Conversation
- Implement storage provider selection (S3, R2, Backblaze, MinIO)
- Add credential prompts for access key, secret key, and bucket
- Generate storage route with presigned URL upload/download endpoints
- Refactor env.ts schema generation for better maintainability

Additionally:
- Update auth.ts to check existing auth setup before running (idempotent)
- Switch from db:push to drizzle-kit push in auth command
- Update login.ts with device code registration and new API endpoints
- Update turbo to v2.8.12 in dependencies
Add ServerManager class to handle dev server lifecycle with hot reload, graceful shutdown using SIGTERM, and automatic restart on crash with debounce. Also improve error handling in init command for dependency installation failures.
- Updated CODEBASE_MAP.md with new technology stack table, revised architecture diagram showing apps instead of templates, and expanded package structure documentation
- Simplified README.md with badges, quick start guide, and condensed feature/architecture sections
- Added apps/test-project directory and cli-auth-page to project structure
…e RLS SQL generation
- Add singularize() function to convert plural table names to singular form
- Update GraphQL type names, input types, and mutations to use singular names (e.g., "users" -> "User")
- Change policyToSQL() to return a joined string instead of an array
- Fix schema generation to not include empty mutation/subscription types
- Update related tests to match new behavior
- Update biome config to warn on non-template literal strings
Caution: Review failed. The pull request is closed.

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (1)
📝 Walkthrough

Adds a ServerManager-driven Bun dev workflow with hot reload and debounced context regeneration; scaffolds a full example app (apps/test-project) including auth, DB, routes, realtime, and storage; rewrites large parts of the documentation and README; and adds extensive tests plus multiple security and env-validation hardenings across CLI, core, and app code.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant CLI as Dev CLI (runDevCommand)
    participant Context as ContextGenerator
    participant Watcher as FileWatcher
    participant ServerMgr as ServerManager
    participant Bun as Bun (hot server)
    CLI->>Context: generate initial context
    Context-->>CLI: context (ok / partial)
    CLI->>ServerMgr: instantiate & start()
    ServerMgr->>Bun: Bun.spawn --hot (cwd=projectRoot)
    Bun-->>ServerMgr: PID, stdout/stderr
    Watcher->>Watcher: file change detected (debounced)
    Watcher->>Context: regenerate context
    Context-->>CLI: context updated
    CLI->>ServerMgr: request restart (on crash or config change)
    ServerMgr->>Bun: stop (SIGTERM) -> respawn after delay
    CLI->>ServerMgr: cleanup() -> stop, clear timers, close watchers
```
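The debounced restart path in the diagram reduces to a small trailing-edge debounce. A minimal sketch of that pattern (a hypothetical helper, not the actual ServerManager code):

```typescript
// Trailing-edge debounce: repeated triggers within `waitMs` collapse into
// a single call, which is how a burst of file-change events becomes one
// context regeneration / restart request.
function debounce<T extends unknown[]>(
  fn: (...args: T) => void,
  waitMs: number,
): (...args: T) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: T) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => {
      timer = undefined;
      fn(...args);
    }, waitMs);
  };
}

// Example: three rapid change events yield a single restart request.
let restarts = 0;
const requestRestart = debounce(() => {
  restarts += 1;
}, 20);
requestRestart();
requestRestart();
requestRestart();
```

The same shape works for the SIGTERM-then-respawn delay: the debounce window absorbs crash loops so the server is respawned at most once per window.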
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 15
Note
Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
packages/core/src/rls/generator.ts (1)
104-120: ⚠️ Potential issue | 🟠 Major

Keep `policyToSQL` statement-structured to avoid brittle downstream parsing.

Line 104 and Line 120 collapse multiple statements into one SQL string. This forces consumers to recover statement boundaries via string splitting, which is error-prone and can mis-split when policy expressions include semicolons or quoted text.
Proposed fix (preserve statement boundaries)

```diff
-export function policyToSQL(policy: PolicyDefinition): string {
+export function policyToSQL(policy: PolicyDefinition): string[] {
   const statements: string[] = [];
@@
-  return statements.join(" ");
+  return statements;
 }
+// Optional helper for logging/display only
+export function policyToSQLText(policy: PolicyDefinition): string {
+  return policyToSQL(policy).join(" ");
+}
```

🤖 Prompt for AI Agents
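To see why the joined form is brittle, here is a small sketch (with made-up policy SQL) of a naive `split(";")` consumer mis-splitting a quoted semicolon, which a `string[]` return avoids:

```typescript
// Two statements joined into one string; the second statement's USING
// expression contains a semicolon inside a quoted literal.
const joined =
  "ALTER TABLE notes ENABLE ROW LEVEL SECURITY; " +
  "CREATE POLICY notes_select ON notes USING (body <> 'a;b');";

// Naive boundary recovery splits inside the quoted text:
const recovered = joined
  .split(";")
  .map((s) => s.trim())
  .filter((s) => s.length > 0);
// recovered now holds 3 fragments instead of 2 statements.

// Returning string[] from policyToSQL sidesteps the problem:
const statements = [
  "ALTER TABLE notes ENABLE ROW LEVEL SECURITY",
  "CREATE POLICY notes_select ON notes USING (body <> 'a;b')",
];
```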
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/rls/generator.ts` around lines 104-120, policyToSQL currently concatenates all SQL pieces into one string, which breaks downstream parsing; modify policyToSQL to return an array of statement strings (preserving boundaries) instead of a single joined string: collect enableRLS(policy.table) and each generatePolicyStatement(policy, operation) into a string[] and return that array, then update any callers to accept the string[] (or map/join at the call site if needed); reference functions: policyToSQL, enableRLS, generatePolicyStatement, and the PolicyOperation loop so you can locate and adjust the collection/return behavior.

packages/cli/src/commands/dev.ts (1)
156-157: ⚠️ Potential issue | 🟠 Major

Use conditional `recursive` watching to avoid silently skipping schema watchers.

Applying `{ recursive: true }` to a file path can fail on some runtimes/platforms, which means context regeneration for schema changes may never be wired. The `recursive` option is only supported on macOS and Windows; on Linux it is ignored. Additionally, `recursive` should only be used with directory paths, not file paths.

💡 Proposed fix
```diff
-import { type FSWatcher, existsSync, watch } from "node:fs";
+import { type FSWatcher, existsSync, statSync, watch } from "node:fs";
 ...
-  const watcher = watch(watchPath, { recursive: true }, (_eventType, filename) => {
+  const supportsRecursive = process.platform === "darwin" || process.platform === "win32";
+  const recursive = supportsRecursive && statSync(watchPath).isDirectory();
+  const watcher = watch(watchPath, { recursive }, (_eventType, filename) => {
```

🤖 Prompt for AI Agents
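The same idea as a standalone helper (hypothetical name; `fs.watch` only honors `recursive` natively on macOS and Windows):

```typescript
import { statSync } from "node:fs";

// Build fs.watch options that request recursive watching only when the
// target is a directory and the platform supports it; a plain file or an
// unsupported platform falls back to { recursive: false } instead of
// failing or being silently ignored.
function watchOptionsFor(watchPath: string): { recursive: boolean } {
  const supportsRecursive =
    process.platform === "darwin" || process.platform === "win32";
  return {
    recursive: supportsRecursive && statSync(watchPath).isDirectory(),
  };
}
```

Passing the result to `fs.watch(watchPath, watchOptionsFor(watchPath), listener)` keeps the watcher alive on every platform; only the recursion depth differs.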
Verify each finding against the current code and only fix it if needed. In `@packages/cli/src/commands/dev.ts` around lines 156-157, the watcher call uses { recursive: true } unconditionally, which can be ignored or invalid for file paths and on Linux; update the code around the watch(watchPath, { recursive: true }, ...) invocation to only pass the recursive option when watchPath is a directory and the platform supports recursive watching (process.platform === 'darwin' or 'win32'). Detect directory-ness via fs.statSync (check stat.isDirectory()) on the watchPath before creating the watcher, build the options object conditionally, and then call watch(watchPath, opts, ...) so the logger and watcher variable remain unchanged but recursive is applied safely.

packages/cli/src/commands/login.ts (2)
99-102: ⚠️ Potential issue | 🟠 Major

Use cryptographically secure randomness for device code generation.

`Math.random()` is not suitable for authentication codes and makes guessing attacks easier. For Node.js, use `crypto.randomBytes()` to generate unpredictable device codes.

💡 Proposed fix
```diff
+import { randomBytes } from "node:crypto"
 function generateDeviceCode(): string {
   const chars = "ABCDEFGHJKLMNPQRSTUVWXYZ23456789"
-  const part1 = Array.from({ length: 4 }, () => chars[Math.floor(Math.random() * chars.length)]).join("")
-  const part2 = Array.from({ length: 4 }, () => chars[Math.floor(Math.random() * chars.length)]).join("")
+  const pick = () => {
+    const bytes = randomBytes(1)
+    return chars[bytes[0] % chars.length]
+  }
+  const part1 = Array.from({ length: 4 }, pick).join("")
+  const part2 = Array.from({ length: 4 }, pick).join("")
   return `${part1}-${part2}`
 }
```

🤖 Prompt for AI Agents
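A self-contained version of that fix (names assumed from the diff). One detail worth noting: the alphabet has 32 characters and 256 % 32 === 0, so the modulo mapping is unbiased here; other alphabet sizes would need rejection sampling.

```typescript
import { randomBytes } from "node:crypto";

// 32-character alphabet without easily confused glyphs (no 0/O or 1/I).
const CHARS = "ABCDEFGHJKLMNPQRSTUVWXYZ23456789";

// CSPRNG-backed device code in XXXX-XXXX form. Because 256 is an exact
// multiple of CHARS.length (32), `byte % 32` is uniform - no modulo bias.
function generateDeviceCode(): string {
  const bytes = randomBytes(8);
  const pick = (i: number) => CHARS[bytes[i] % CHARS.length];
  const part1 = [0, 1, 2, 3].map(pick).join("");
  const part2 = [4, 5, 6, 7].map(pick).join("");
  return `${part1}-${part2}`;
}
```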
Verify each finding against the current code and only fix it if needed. In `@packages/cli/src/commands/login.ts` around lines 99 - 102, The device code generation uses Math.random() (chars, part1, part2) which is not cryptographically secure; replace the random selection with crypto.randomBytes-based randomness: create sufficient random bytes, map each byte to an index into the chars string (e.g., use modulo with rejection or mask to avoid bias) to build part1 and part2 securely, then return `${part1}-${part2}`; ensure you import Node's crypto and remove Math.random() usage in this generation logic.
137-137: ⚠️ Potential issue | 🟠 Major

Write credential files with restrictive permissions.

Auth tokens are written with the default file mode `0o644` (group- and world-readable), allowing other users on the same system to read authentication credentials. Use owner-only permissions (`0o600`) to ensure only the process owner can access stored tokens.

💡 Proposed fix
```diff
 async function saveCredentials(creds: Credentials): Promise<void> {
   const dir = path.dirname(CREDENTIALS_PATH)
   await fs.mkdir(dir, { recursive: true })
-  await fs.writeFile(CREDENTIALS_PATH, JSON.stringify(creds, null, 2), "utf-8")
+  await fs.writeFile(CREDENTIALS_PATH, JSON.stringify(creds, null, 2), {
+    encoding: "utf-8",
+    mode: 0o600,
+  })
 }
```

🤖 Prompt for AI Agents
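A runnable sketch of the owner-only write using the `fs/promises` API (hypothetical paths and names; on Windows the `mode` bits are largely ignored since ACLs are a separate mechanism):

```typescript
import { mkdtempSync } from "node:fs";
import { mkdir, readFile, stat, writeFile } from "node:fs/promises";
import { tmpdir } from "node:os";
import path from "node:path";

// Persist a secrets file with owner-only permissions (0600) so that, on
// POSIX systems, other local users cannot read the stored token.
async function writeSecretFile(filePath: string, contents: string): Promise<void> {
  await mkdir(path.dirname(filePath), { recursive: true });
  await writeFile(filePath, contents, { encoding: "utf-8", mode: 0o600 });
}

// Example: write a fake token under a throwaway temp directory.
const base = mkdtempSync(path.join(tmpdir(), "creds-"));
const credsPath = path.join(base, "config", "credentials.json");
await writeSecretFile(credsPath, JSON.stringify({ token: "not-a-real-token" }));
```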
Verify each finding against the current code and only fix it if needed. In `@packages/cli/src/commands/login.ts` at line 137, The credentials are being written with default permissive permissions; update the fs.writeFile call that writes CREDENTIALS_PATH with JSON.stringify(creds...) to set owner-only permissions by passing options { encoding: "utf-8", mode: 0o600 } (or alternatively set fs.chmod(CREDENTIALS_PATH, 0o600) immediately after write) so the file created for creds is readable/writable only by the owner.
🟡 Minor comments (18)
packages/shared/test/constants.test.ts-83-85 (1)
83-85: ⚠️ Potential issue | 🟡 Minor

Directory-path validation currently accepts malformed values.

Checking only for `"/"` will pass values like `"/"` or `"src//functions"`. Consider asserting a simple non-empty-segment pattern.

Suggested fix
```diff
-  it("should be a valid directory path", () => {
-    expect(FUNCTIONS_DIR).toContain("/")
-  })
+  it("should be a valid directory path", () => {
+    expect(FUNCTIONS_DIR).toMatch(/^[^/]+(?:\/[^/]+)+$/)
+  })
 ...
-  it("should be a valid directory path", () => {
-    expect(POLICIES_DIR).toContain("/")
-  })
+  it("should be a valid directory path", () => {
+    expect(POLICIES_DIR).toMatch(/^[^/]+(?:\/[^/]+)+$/)
+  })
```

Also applies to: 93-95
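How the stricter pattern behaves on a few inputs (a quick sketch using the same regex):

```typescript
// Non-empty segments separated by single slashes, with no leading or
// trailing slash. Note it also rejects single-segment values such as
// "functions", which is fine for constants like "src/functions".
const DIR_PATH = /^[^/]+(?:\/[^/]+)+$/;

const accepted = ["src/functions", "src/db/policies"].every((p) => DIR_PATH.test(p));
const rejected = ["/", "src//functions", "src/functions/"].every((p) => !DIR_PATH.test(p));
```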
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/shared/test/constants.test.ts` around lines 83-85, replace the brittle check expect(FUNCTIONS_DIR).toContain("/") with an assertion that FUNCTIONS_DIR matches a non-empty-segment path pattern: at least one slash separating segments, no empty segments (i.e., no '//' anywhere), and no trailing slash; do the same replacement for BUILT_FUNCTIONS_DIR (and the tests at the corresponding lines) so both values are validated as real directory paths composed of non-empty path segments separated by single slashes.

packages/shared/test/constants.test.ts-52-54 (1)
52-54: ⚠️ Potential issue | 🟡 Minor

`CONTEXT_FILE_NAME` extension check is too permissive.

`toContain(".json")` can pass invalid names like `foo.json.tmp`. Prefer an end-of-string check.

Suggested fix
```diff
-  it("should be a valid file name with json extension", () => {
-    expect(CONTEXT_FILE_NAME).toContain(".json")
-  })
+  it("should be a valid file name with json extension", () => {
+    expect(CONTEXT_FILE_NAME).toMatch(/\.json$/)
+  })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/shared/test/constants.test.ts` around lines 52-54, the test using CONTEXT_FILE_NAME currently uses toContain(".json"), which allows suffixes like "foo.json.tmp"; change the assertion in the it block referencing CONTEXT_FILE_NAME to assert the filename ends with ".json" (e.g., use a string endsWith check or a regex match for /\.json$/) so only true .json filenames pass.

packages/client/test/auth.test.ts-369-389 (1)
369-389: ⚠️ Potential issue | 🟡 Minor

Signout error-path tests don't verify the returned error contract.

The tests say they validate AuthError/network handling, but only check token removal. They should also assert the `result.error` type and the `result.data` contract.

Suggested assertions
```diff
 const result = await client.signOut()
 // Even on error, token should be removed from storage
 expect(mockStorage.getItem("betterbase_session")).toBeNull()
+expect(result.error).toBeInstanceOf(AuthError)
+expect(result.data).toBeNull()
```

```diff
 const result = await client.signOut()
 // Should still remove token even on network error
 expect(mockStorage.getItem("betterbase_session")).toBeNull()
+expect(result.error).toBeInstanceOf(NetworkError)
+expect(result.data).toBeNull()
```

Also applies to: 391-410
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/client/test/auth.test.ts` around lines 369-389, the signOut error-path test currently only asserts token removal but must also verify the returned result follows the AuthError contract; in the test for AuthClient.signOut (and the similar test at lines 391-410) assert that the returned value has result.error populated with the expected shape/message (e.g., error.message === "Sign out failed" and/or instanceof or error.type if applicable) and that result.data is null (or matches the expected empty-data contract), so update the test assertions to check result.error and result.data in addition to clearing the mockStorage token.

packages/client/test/auth.test.ts-1-1 (1)
1-1: ⚠️ Potential issue | 🟡 Minor

Fix CI lint failure by sorting import specifiers.

The pipeline already reports an import-sorting failure; adjust the Line 1 specifier order to match lint expectations.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/client/test/auth.test.ts` at line 1, the import specifiers on Line 1 are not sorted per lint rules; reorder the named imports in the test file so they are alphabetically sorted (afterAll, afterEach, beforeAll, describe, expect, it, mock) in the import statement that currently lists describe, it, expect, beforeAll, afterAll, mock, afterEach to satisfy the linter.

packages/core/test/migration.test.ts-10-17 (1)
10-17: ⚠️ Potential issue | 🟡 Minor

Remove unused imports of migrator functions.

Lines 10–17 import `applyPolicies`, `applyAuthFunction`, `applyRLSMigration`, `dropPolicies`, `dropTableRLS`, and `getAppliedPolicies`, but these are never referenced before line 162. The `describe("migration/rls-migrator")` block intentionally re-imports and assigns these functions dynamically (lines 164–169, 173–178) to avoid test pollution. Remove the unused top-level imports.

Proposed fix
```diff
 import {
   runMigration,
   isRLSSupported,
 } from "../src/migration/index"
-import {
-  applyPolicies,
-  applyAuthFunction,
-  applyRLSMigration,
-  dropPolicies,
-  dropTableRLS,
-  getAppliedPolicies,
-} from "../src/migration/rls-migrator"
 import type { ProviderAdapter, DatabaseConnection } from "../src/providers/types"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/test/migration.test.ts` around lines 10-17, remove the unused top-level imports of applyPolicies, applyAuthFunction, applyRLSMigration, dropPolicies, dropTableRLS, and getAppliedPolicies from the test file; these functions are re-imported dynamically later in the describe("migration/rls-migrator") block (the dynamic import/assignment around the lines where the tests set those symbols), so delete the initial import statement that lists these six symbols to avoid test pollution and unused-import warnings.

apps/test-project/betterbase.config.ts-48-62 (1)
48-62: ⚠️ Potential issue | 🟡 Minor

Consider validating the `DATABASE_URL` environment variable.

`process.env.DATABASE_URL` can be `undefined` at runtime. If `BetterBaseConfig` expects a non-optional `connectionString`, this could cause runtime errors or silent failures.

💡 Suggested validation approach
```diff
 provider: {
   /** The database provider type */
   type: "postgres" as const,
-  connectionString: process.env.DATABASE_URL,
+  connectionString: process.env.DATABASE_URL ?? (() => {
+    throw new Error("DATABASE_URL environment variable is required");
+  })(),
```

Or use a validation library like Zod for environment variables:
```typescript
import { z } from 'zod';

const env = z.object({
  DATABASE_URL: z.string().url(),
}).parse(process.env);
```

🤖 Prompt for AI Agents
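If pulling in Zod just for this one value feels heavy, a dependency-free fail-fast helper achieves the same goal (a sketch with assumed names):

```typescript
// Read a required environment variable, failing at startup with a clear
// error instead of letting `undefined` flow into the provider config.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (value === undefined || value.trim() === "") {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Hypothetical usage in betterbase.config.ts:
//   connectionString: requireEnv("DATABASE_URL"),
```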
Verify each finding against the current code and only fix it if needed. In `@apps/test-project/betterbase.config.ts` around lines 48-62, the provider.connectionString currently assigns process.env.DATABASE_URL, which may be undefined; update the BetterBaseConfig/provider initialization to validate and fail fast: check that process.env.DATABASE_URL is a non-empty string (or use a schema validator like Zod) before assigning to provider.connectionString, and throw a clear error or log and exit if it is missing, so the runtime configuration cannot be undefined.

packages/core/test/graphql.test.ts-330-342 (1)
330-342: ⚠️ Potential issue | 🟡 Minor

Test assertion may be incorrect for disabled subscriptions.

The test passes `{ subscriptions: false }` but then asserts `resolvers.Subscription` is defined. This seems inconsistent with line 139's test, where disabled subscriptions result in `undefined`. If `generateResolvers` returns an empty or default `Subscription` object even when disabled, this should be documented; otherwise, the assertion may need adjustment.

💡 Consider verifying expected behavior
```diff
 it("respects subscriptions config", () => {
   // ...
   const resolvers = generateResolvers(tables, mockDb as any, { subscriptions: false })
-  expect(resolvers.Subscription).toBeDefined()
+  // If subscriptions are disabled, should Subscription be undefined or empty?
+  expect(resolvers.Subscription).toEqual({}) // or toBeUndefined()
 })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/test/graphql.test.ts` around lines 330-342, the test passes subscriptions: false to generateResolvers but then asserts resolvers.Subscription is defined, which conflicts with the other test expecting undefined when subscriptions are disabled; either update the assertion to expect(resolvers.Subscription).toBeUndefined() to match the intended behavior, or, if the desired behavior is to return a default/empty Subscription object even when disabled, modify generateResolvers to return that default shape when called with { subscriptions: false } and update documentation/comments accordingly; pick the approach consistent with the existing test at line 139 and adjust the assertion or implementation to match.

packages/core/src/graphql/schema-generator.ts-217-235 (1)
217-235: ⚠️ Potential issue | 🟡 Minor

Singularize function has edge cases with words ending in "ses".

The current logic at lines 227-229 prevents singularizing words ending in "ses" to avoid "statuses" → "statuse", but this also affects valid cases:
- "addresses" stays "addresses" instead of "address"
- "buses" stays "buses" instead of "bus"
- "classes" stays "classes" instead of "class"
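The cases above can be exercised with a small heuristic (illustrative only; irregular nouns such as "houses" or "menus" still misbehave, which is why a library like `pluralize` is safer when table names are user-controlled):

```typescript
// Heuristic singularizer for common regular English plurals.
// Known gaps remain (e.g. "houses" -> "hous"), so treat this as a sketch.
function singularize(str: string): string {
  if (str.endsWith("ies") && str.length > 3) return str.slice(0, -3) + "y"; // companies -> company
  if (/(?:ss|x|ch|sh|z)es$/.test(str)) return str.slice(0, -2); // addresses -> address, boxes -> box
  if (str.endsWith("ses")) return str.slice(0, -2); // statuses -> status, buses -> bus
  if (str.endsWith("us")) return str; // status, campus: already singular
  if (str.endsWith("s") && !str.endsWith("ss")) return str.slice(0, -1); // users -> user
  return str; // class, address: already singular
}
```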
Consider using a more robust singularization library like `pluralize` if these edge cases matter for your schema, or document this as a known limitation.

💡 Suggested improvement
```diff
 function singularize(str: string): string {
   // Handle common English plural forms
   if (str.endsWith("ies")) {
     return str.slice(0, -3) + "y";
   }
+  if (str.endsWith("sses") || str.endsWith("xes") || str.endsWith("ches") || str.endsWith("shes")) {
+    return str.slice(0, -2);
+  }
   if (str.endsWith("es") && str.length > 2) {
-    // Don't singularize words like "status", "statuses" -> "statuse"
-    if (!str.endsWith("ses")) {
-      return str.slice(0, -2);
-    }
+    return str.slice(0, -1); // "statuses" -> "statuse" won't match, falls through to -s rule
   }
   if (str.endsWith("s") && str.length > 1) {
     return str.slice(0, -1);
   }
   return str;
 }
```

Or use a well-tested library:
```typescript
import pluralize from 'pluralize';

const singular = pluralize.singular(str);
```

packages/client/test/storage.test.ts-1-2 (1)
1-2: ⚠️ Potential issue | 🟡 Minor

Sort imports to unblock CI lint checks.

The current pipeline flags import ordering; please run Biome organize-imports/format on this file.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/client/test/storage.test.ts` around lines 1-2, the import statements at the top (the Bun test helpers: describe, it, expect, beforeAll, afterAll, mock, afterEach and the node:fs functions mkdtempSync, writeFileSync, rmSync, readFileSync) are not sorted; run Biome organize-imports/format on this test file or manually reorder the two import lines to satisfy the project's import ordering (e.g., group and alphabetize imports consistently), then save so CI lint passes.

issues.md-9-12 (1)
9-12: ⚠️ Potential issue | 🟡 Minor

Update this quality report to match current CI status.
This file states all checks are passing, but the current PR pipeline includes failures. Keeping this as-is can mislead reviewers and release decisions.
Also applies to: 57-59, 83-85, 102-104
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@issues.md` around lines 9-12, the quality report still contains hardcoded "Status: ✅ PASSED" lines that no longer reflect the current pipeline; locate each occurrence of the status header (e.g., the literal line "Status: ✅ PASSED" and the similar status blocks later in the document) and update them to accurately reflect the current CI results (replace the emoji/text with the real status and a short note or failing-check list), and ensure the repeated status blocks are consistent with the latest pipeline output.

packages/core/test/storage.test.ts-1-3 (1)
1-3: ⚠️ Potential issue | 🟡 Minor

Consolidate and sort the `node:fs` imports to satisfy lint rules.

This file still triggers the import-order lint check.
💡 Proposed fix
```diff
-import { mkdtempSync, writeFileSync, mkdirSync, rmSync } from "node:fs"
-import { existsSync } from "node:fs"
+import { existsSync, mkdirSync, mkdtempSync, rmSync, writeFileSync } from "node:fs"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/test/storage.test.ts` around lines 1-3, the file has multiple separate imports from "node:fs", which breaks the import-order/lint rule; consolidate the two imports into a single import statement that pulls mkdtempSync, writeFileSync, mkdirSync, rmSync, and existsSync from "node:fs", and ensure the import line is placed and sorted correctly among the other imports at the top of storage.test.ts (look for the existing import lines at the top to replace both occurrences).

02_better_error_messages.md-5-56 (1)
5-56: ⚠️ Potential issue | 🟡 Minor

Fix markdown code block formatting.

The code examples have malformed markdown syntax. The language identifier (e.g., `typescript`) appears inline with the code instead of in proper fenced code blocks, which will render incorrectly in markdown viewers.

📝 Example fix for one block
````diff
 bb init — when dependency installation fails:

-typescriptlogger.error(
+```typescript
+logger.error(
   `Failed to install dependencies.\n` +
   `Try running manually: cd ${projectName} && bun install\n` +
   `Error: ${message}`
 )
+```
````

Apply similar fixes to all code examples in this document.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@02_better_error_messages.md` around lines 5 - 56, The markdown code examples are missing proper fenced code blocks and instead have inline language identifiers (e.g., "typescript") mixed into the text; update each example to use fenced code blocks with the language tag (```typescript) before the snippet and a closing ``` after it — specifically fix blocks containing logger.error calls such as the ones starting with `logger.error(error.message)`, the `Database connection failed.` example, the "Failed to install dependencies." (bb init) snippet, the "Schema file not found: src/db/schema.ts" (bb migrate) snippet, the "Migration failed." snippet, the `Table "${tableName}" not found` (bb generate crud) snippet, the `better-auth is not installed.` (bb auth setup) snippet, the `Authentication timed out after 5 minutes.` (bb login) snippet, and the `Port 3000 is already in use.` (bb dev) snippet — wrap each in ```typescript ... ``` so they render correctly.apps/test-project/src/auth/index.ts-20-22 (1)
20-22: ⚠️ Potential issue | 🟡 Minor

Missing validation for the `AUTH_SECRET` environment variable.

`AUTH_SECRET` is passed directly from `process.env` without validation. If undefined, this could cause authentication failures or security issues. Consider adding `AUTH_SECRET` and `AUTH_URL` to the Zod schema in `env.ts` so they are validated at startup, similar to how `DB_PATH` is validated.

🛡️ Suggested addition to env.ts
```typescript
const envSchema = z.object({
  NODE_ENV: z.enum(["development", "test", "production"]).default("development"),
  PORT: z.coerce.number().int().positive().default(3000),
  DB_PATH: z.string().min(1).default(DEFAULT_DB_PATH),
  AUTH_SECRET: z.string().min(32).optional(), // Required in production
  AUTH_URL: z.string().url().default("http://localhost:3000"),
});
```

Then use `env.AUTH_SECRET` and `env.AUTH_URL` in the auth config.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/test-project/src/auth/index.ts` around lines 20-22, add validation for AUTH_SECRET and AUTH_URL in the env schema and use the validated values when constructing the auth config: update env.ts to include AUTH_SECRET (e.g., z.string().min(32).optional() or required in prod) and AUTH_URL (z.string().url().default("http://localhost:3000")), then replace direct uses of process.env.AUTH_SECRET and process.env.AUTH_URL in the auth config (see secret, baseURL, trustedOrigins in the auth setup) with env.AUTH_SECRET and env.AUTH_URL so missing or invalid values are caught at startup.

packages/cli/test/dev.test.ts-55-76 (1)
55-76: ⚠️ Potential issue | 🟡 Minor

Test only validates setup, not `runDevCommand` functionality.

This test creates a project structure and verifies the files exist, but doesn't actually call `runDevCommand`. It essentially tests `mkdirSync` and `writeFileSync` rather than the dev server functionality. Consider either removing this test or expanding it to actually test `runDevCommand`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli/test/dev.test.ts` around lines 55-76, the test only creates files and asserts they exist but never invokes the function under test (runDevCommand); update the "creates project structure for dev server" test to actually exercise runDevCommand: call runDevCommand (or the exported CLI entrypoint that starts the dev server) with the temporary testDir as the project root, await its result or mock/stub any long-running behavior, then assert expected side effects (e.g., server started flag, created config files, returned port, or that specific helper functions were invoked), and finally clean up the temp dir; alternatively, remove this test if you decide not to test runDevCommand here. Ensure you reference runDevCommand (or the CLI start function) and the temp directory setup/teardown code so the test both prepares and exercises the real behavior instead of only validating filesystem setup.

03_test_suite.md-6-6 (1)
6-6: ⚠️ Potential issue | 🟡 Minor

Version inconsistency: Bun 1.3.9 vs Bun v1.2+.
This document references "Bun 1.3.9" but the README.md badge indicates "Bun v1.2+". Please verify the correct minimum Bun version and update both documents for consistency.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@03_test_suite.md` at line 6, the docs show conflicting Bun versions: "Bun 1.3.9" in 03_test_suite.md vs the README badge "Bun v1.2+"; confirm the true minimum supported Bun version and make them consistent by updating the literal string in 03_test_suite.md and the README badge text to the agreed version, and also scan related metadata (e.g., any engines field, CI config, or setup scripts) to ensure they match the chosen minimum version.

apps/test-project/src/lib/env.ts-3-4 (1)
3-4: ⚠️ Potential issue | 🟡 Minor

Consolidate to shared constants instead of manual sync.

`DEFAULT_DB_PATH` is defined in at least 5 locations across the codebase (apps/test-project/src/lib/env.ts, packages/cli/src/constants.ts, templates/base/src/lib/env.ts, packages/shared/src/constants.ts, and packages/cli/src/commands/init.ts). Rather than relying on sync comments, import from a single source of truth (packages/shared/src/constants.ts exports this value) to eliminate duplication and prevent future drift.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/test-project/src/lib/env.ts` around lines 3-4, replace the local export DEFAULT_DB_PATH with the shared constant: remove the hardcoded export and import DEFAULT_DB_PATH from the shared constants module (packages/shared/src/constants.ts) so the file uses the single source of truth; update any references in this file to use the imported DEFAULT_DB_PATH and delete the local definition to avoid duplication.

01_bb_dev_hot_reload.md-5-7 (1)
5-7: ⚠️ Potential issue | 🟡 Minor

Code snippets need proper fenced blocks and language tags.

Line 6 (`typescriptimport ...`) indicates broken formatting; the TS snippets are hard to copy reliably. Wrap these sections in fenced ```ts blocks.

Also applies to: 133-137
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@01_bb_dev_hot_reload.md` around lines 5-7, the markdown contains broken TypeScript snippets (e.g., the token "typescriptimport path from \"node:path\";" and the import of existsSync) that must be wrapped in proper fenced code blocks; locate the occurrences (including the other affected snippet around lines 133-137) and replace the inline/malformed text with fenced ```ts blocks containing the valid TypeScript imports and code so the snippets render and copy correctly.

CODEBASE_MAP.md-58-58 (1)
58-58: ⚠️ Potential issue | 🟡 Minor

Add a language hint for the fenced code block.

Line 58 starts a fenced block without a language identifier (MD040). Use something like ```text for the tree block.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@CODEBASE_MAP.md` at line 58, The fenced code block starting at the tree block marker currently has no language hint; update the opening fence to include a language identifier (for example change ``` to ```text) so the markdown linter rule MD040 is satisfied and the tree block renders consistently.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 8bac19d8-7a5c-4b99-8c23-637f056a591c
⛔ Files ignored due to path filters (3)
- `apps/test-project/bun.lock` is excluded by `!**/*.lock`
- `bun.lock` is excluded by `!**/*.lock`
- `packages/cli/bun.lockb` is excluded by `!**/bun.lockb`
📒 Files selected for processing (60)
- 01_bb_dev_hot_reload.md
- 02_better_error_messages.md
- 03_test_suite.md
- CODEBASE_MAP.md
- README.md
- apps/test-project/README.md
- apps/test-project/betterbase.config.ts
- apps/test-project/drizzle.config.ts
- apps/test-project/package.json
- apps/test-project/src/auth/index.ts
- apps/test-project/src/auth/types.ts
- apps/test-project/src/db/index.ts
- apps/test-project/src/db/migrate.ts
- apps/test-project/src/db/policies/.gitkeep
- apps/test-project/src/db/schema.ts
- apps/test-project/src/functions/.gitkeep
- apps/test-project/src/index.ts
- apps/test-project/src/lib/env.ts
- apps/test-project/src/lib/realtime.ts
- apps/test-project/src/middleware/auth.ts
- apps/test-project/src/middleware/validation.ts
- apps/test-project/src/routes/graphql.d.ts
- apps/test-project/src/routes/health.ts
- apps/test-project/src/routes/index.ts
- apps/test-project/src/routes/storage.ts
- apps/test-project/src/routes/users.ts
- apps/test-project/test/crud.test.ts
- apps/test-project/test/health.test.ts
- apps/test-project/tsconfig.json
- betterbase_backend_rebuild.md
- betterbase_real_world_project_creation.md
- betterbase_test_suite_v3.md
- biome.json
- cli-auth-page/.gitignore
- cli-auth-page/index.html
- issues.md
- package.json
- packages/cli/src/commands/auth.ts
- packages/cli/src/commands/dev.ts
- packages/cli/src/commands/init.ts
- packages/cli/src/commands/login.ts
- packages/cli/test/dev.test.ts
- packages/cli/test/error-messages.test.ts
- packages/cli/test/logger.test.ts
- packages/cli/test/prompts.test.ts
- packages/cli/test/provider-prompts.test.ts
- packages/client/test/auth.test.ts
- packages/client/test/storage.test.ts
- packages/core/src/graphql/schema-generator.ts
- packages/core/src/migration/rls-migrator.ts
- packages/core/src/rls/generator.ts
- packages/core/test/graphql.test.ts
- packages/core/test/migration.test.ts
- packages/core/test/providers.test.ts
- packages/core/test/rls.test.ts
- packages/core/test/storage.test.ts
- packages/shared/test/constants.test.ts
- packages/shared/test/errors.test.ts
- packages/shared/test/types.test.ts
- packages/shared/test/utils.test.ts
💤 Files with no reviewable changes (3)
- betterbase_real_world_project_creation.md
- betterbase_test_suite_v3.md
- betterbase_backend_rebuild.md
| it("logs an error and exits when src/index.ts is missing", async () => { | ||
| const testDir = mkdtempSync(path.join(os.tmpdir(), "bb-dev-missing-")) | ||
|
|
||
| // Don't create src/index.ts - this should cause an error | ||
| // The runDevCommand should handle this gracefully | ||
| // Check that the file doesn't exist | ||
| expect(existsSync(path.join(testDir, "src/index.ts"))).toBe(false) | ||
|
|
||
| // Clean up | ||
| rmSync(testDir, { recursive: true, force: true }) | ||
| }) |
Test does not verify the behavior it describes.
This test is named "logs an error and exits when src/index.ts is missing" but it only checks that the file doesn't exist—it never calls runDevCommand to verify that the error is logged or that the process exits. This incomplete test likely contributes to the pipeline failure.
🐛 Suggested implementation

```diff
 it("logs an error and exits when src/index.ts is missing", async () => {
   const testDir = mkdtempSync(path.join(os.tmpdir(), "bb-dev-missing-"))
-  // Don't create src/index.ts - this should cause an error
-  // The runDevCommand should handle this gracefully
-  // Check that the file doesn't exist
-  expect(existsSync(path.join(testDir, "src/index.ts"))).toBe(false)
+  // Create minimal structure WITHOUT src/index.ts
+  mkdirSync(path.join(testDir, "src/db"), { recursive: true })
+  writeFileSync(path.join(testDir, "src/db/schema.ts"), "export const schema = {}")
+
+  const { runDevCommand } = await import("../src/commands/dev")
+
+  // runDevCommand should handle the missing entry point gracefully
+  // Option 1: Expect it to throw
+  // await expect(runDevCommand(testDir)).rejects.toThrow()
+
+  // Option 2: If it returns early without throwing, verify that behavior
+  // const result = await runDevCommand(testDir)
+  // expect(result).toBeUndefined() // or whatever the expected return is
-  // Clean up
   rmSync(testDir, { recursive: true, force: true })
 })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/cli/test/dev.test.ts` around lines 43 - 53, The test currently only
checks that src/index.ts is absent but never invokes runDevCommand; update the
"logs an error and exits when src/index.ts is missing" test to call
runDevCommand(testDir) (await it if async), spy/mock process.exit and the logger
used by runDevCommand (e.g. processLogger or whatever logger is injected) to
capture calls, then assert that the error logger was called with a message about
the missing file and that process.exit was called with a non-zero code; ensure
you restore/clear the spies and still remove the temporary testDir in the test
teardown.
- Update betterbase.config.ts with new settings
- Modify main index.ts entry point
- Update env.ts environment configuration
- Update realtime.ts for real-time functionality changes

- Update auth module with new authentication logic
- Modify auth middleware for improved request handling
- Update GraphQL type definitions

- Modify storage route handlers in test project

- Update dev command with improved hot reload functionality
- Modify init command for better project initialization
- Update login command for authentication improvements

- Update auth command tests
- Modify dev command tests
- Update prompts tests for better coverage

- Update authentication tests in client package
- Modify storage functionality tests

- Update RLS migrator for improved policy handling
- Modify RLS generator with enhanced security rules

- Update GraphQL tests
- Modify migration tests
- Update RLS tests
- Update storage tests

- Update constants tests in shared package

- Add Betterbase31PR Errors.md with known issues
- Add PR31_CHANGES_DOCUMENTATION.md with comprehensive change log
Actionable comments posted: 8
♻️ Duplicate comments (3)
packages/client/test/auth.test.ts (1)

112-114: ⚠️ Potential issue | 🟠 Major

Re-check `mock.restore()` usage here; it may be resetting shared mocks too aggressively.

This can make later tests flaky if default `mockSignUp`/`mockSignIn`/... implementations or module mock wiring are reset between cases.

In Bun's test runner (`bun:test`), what exactly does `mock.restore()` reset? Specifically: does it reset `mock.module(...)` module mocks and standalone `mock(...)` implementations, and what is the recommended way to only clear call history between tests?

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/client/test/auth.test.ts` around lines 112 - 114, Replace the aggressive mock.restore() in the afterEach block with a call that only clears call history so shared module mocks and default implementations remain intact: change the afterEach to call mock.reset() (or the equivalent Bun API that clears call history) instead of mock.restore(), and if you need to fully restore mocks once per suite, call mock.restore() in an afterAll block; update references to afterEach, mock.restore, mock.reset, and afterAll accordingly.

packages/cli/test/dev.test.ts (1)
43-58: ⚠️ Potential issue | 🟠 Major

Error-path test can pass even when no error happens.

On Line 53, if `runDevCommand(testDir)` resolves, the test still passes and does not actually verify the behavior described by the test name.

💡 Suggested fix

```diff
-  // Call runDevCommand and expect it to throw or handle the error
-  try {
-    await runDevCommand(testDir)
-  } catch (error) {
-    // Expected to throw due to missing src/index.ts
-    expect(error).toBeDefined()
-  }
+  let cleanup: (() => void) | undefined
+  let threw = false
+  try {
+    cleanup = await runDevCommand(testDir)
+  } catch (error) {
+    threw = true
+    expect(error).toBeDefined()
+  } finally {
+    cleanup?.()
+  }
+  expect(threw).toBe(true)
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/cli/test/dev.test.ts` around lines 43 - 58, The test for runDevCommand allows success to be treated as pass because the try block doesn't fail if no error is thrown; change the assertion to explicitly assert that runDevCommand(testDir) rejects (or, alternatively, after awaiting runDevCommand in the try, call a failing assertion) so the test fails when no error occurs — update the test around the runDevCommand invocation in packages/cli/test/dev.test.ts to use an explicit reject assertion for runDevCommand or add a fail() / expect(false) after the awaited call inside the try; reference symbol: runDevCommand.

packages/cli/src/commands/init.ts (1)
738-752: ⚠️ Potential issue | 🔴 Critical

Generated storage auth trusts caller-controlled identity (`x-user-id`).

`getAuthenticatedUserId` accepts `x-user-id` from the request after only checking cookie presence. That makes the ownership checks bypassable by header spoofing.

💡 Suggested direction

```diff
-  const userId = c.req.header('x-user-id');
-  if (!userId) {
-    return null;
-  }
-  return { id: userId };
+  // Derive user identity from a verified server-side session/token only.
+  // Do not trust client-provided identity headers for authorization.
```

Also applies to: 773-780, 792-793, 814-815, 836-837

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/cli/src/commands/init.ts` around lines 738 - 752, The getAuthenticatedUserId function currently trusts the x-user-id header after only checking for the presence of a session cookie; replace this unsafe logic by validating the session token and deriving the user id from a trusted source (e.g., verify the BetterAuth session token or lookup the session in the server-side session store) rather than reading x-user-id from headers. Locate getAuthenticatedUserId and any similar uses (places that read c.req.header('x-user-id')) and change them to call the session validation/lookup routine (or verify a signed cookie) and return the authenticated id from that trusted result; remove direct reliance on caller-controlled headers so ownership checks cannot be bypassed.
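The fix the comment asks for can be sketched framework-agnostically. Everything below — the `sessionStore` map, the record shape, the expiry field — is a hypothetical illustration, not BetterAuth's real API; the point is only that the user id comes from a server-side lookup keyed by the session token, never from a client-supplied header.

```typescript
// Hypothetical session record; real stores would live in the database.
interface SessionRecord {
  userId: string;
  expiresAt: number; // epoch milliseconds
}

// Stand-in for a server-side session store keyed by session token.
const sessionStore = new Map<string, SessionRecord>();

// Resolve identity from the trusted store only; an invalid, expired, or
// missing token yields null, and no request header is ever consulted.
function getAuthenticatedUserId(
  sessionToken: string | undefined,
): { id: string } | null {
  if (!sessionToken) return null;
  const session = sessionStore.get(sessionToken);
  if (!session || session.expiresAt <= Date.now()) return null;
  return { id: session.userId };
}
```

With this shape, ownership checks compare against the looked-up `id`, so spoofing an `x-user-id` header accomplishes nothing.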
🧹 Nitpick comments (3)
packages/client/test/storage.test.ts (2)

23-30: `createMockClient` exposes an unused `download` response contract.

`download?: string` is never consumed by the mock logic, so the helper API is misleading. Either wire it into a download branch or remove it for now.

♻️ Proposed cleanup

```diff
 function createMockClient(responses: {
   upload?: { path: string; url: string };
-  download?: string;
   publicUrl?: string;
   signedUrl?: string;
   remove?: string;
   list?: { files: Array<{ name: string; size: number; lastModified: string }> };
 }): { getUrl: () => string; fetch: typeof fetch } {
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/client/test/storage.test.ts` around lines 23 - 30, The declared download?: string in the createMockClient response contract is unused and misleading; fix it by either removing the download field from the responses type in createMockClient (and updating any test callers that pass download) or by implementing the download behavior inside createMockClient's fetch handler so it returns the provided string when a download endpoint is requested; locate createMockClient in storage.test.ts and update the responses type and corresponding mock logic (or callers) accordingly, then run the tests to verify no unused-field errors remain.
168-185: Tighten negative-path assertions to lock error contracts.

These tests currently only verify that an error exists. Since production returns operation-specific `name`/`message`, assert those fields so regressions in error mapping are caught early.

✅ Example assertion hardening

```diff
 it("returns error when upload fails with non-ok response", async () => {
   // ...
   expect(result.error).toBeDefined();
+  expect(result.error?.name).toBe("UploadError");
+  expect(result.error?.message).toBe("Upload failed - file too large");
   expect(result.data).toBeNull();
 });
```

Also applies to: 227-242, 275-290, 349-364, 431-446, 545-560

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/client/test/storage.test.ts` around lines 168 - 185, The test should assert the operation-specific error contract rather than just existence: update the negative-path in the test for StorageBucketClient.upload to check that result.error.message equals the server JSON message ("Upload failed - file too large") and that result.error.name equals the expected operation error name (e.g., "UploadError") so regressions in error mapping are caught; locate the failing test around the StorageBucketClient.upload invocation and add assertions for result.error.name and result.error.message alongside the existing checks.

apps/test-project/betterbase.config.ts (1)
20-27: Prefer throwing over `process.exit()` in config evaluation.

Terminating the process inside config parsing makes this harder to reuse in tests/tools and bypasses caller cleanup/error handling. Throw a descriptive error and let the CLI entrypoint decide when to exit.

Proposed change

```diff
 if (!dbUrl || typeof dbUrl !== "string" || dbUrl.trim() === "") {
-  console.error(
-    "[BetterBase Config Error] DATABASE_URL is required but not set or is empty. " +
-      "Please set the DATABASE_URL environment variable.\n" +
-      "Example: DATABASE_URL=\"postgresql://user:pass@localhost:5432/mydb\""
-  );
-  process.exit(1);
+  throw new Error(
+    "[BetterBase Config Error] DATABASE_URL is required but not set or is empty. " +
+      "Please set the DATABASE_URL environment variable.\n" +
+      "Example: DATABASE_URL=\"postgresql://user:pass@localhost:5432/mydb\""
+  );
 }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@apps/test-project/betterbase.config.ts` around lines 20 - 27, Replace the process.exit path in the DATABASE_URL validation with a thrown Error so callers can handle termination; specifically update the validation block that checks dbUrl (the if branch checking !dbUrl || typeof dbUrl !== "string" || dbUrl.trim() === "") in betterbase.config.ts to throw a descriptive Error (including the same message currently sent to console.error) instead of calling console.error and process.exit(1), leaving CLI or tests to decide whether to exit.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/test-project/src/lib/realtime.ts`:
- Around line 69-75: The authenticate function currently short-circuits to null
for non-development by checking allowDevAuth; replace that hard-disable with a
real verification path: when allowDevAuth is false call an injected or
configurable token verifier (e.g., a function provided via closure/config or
passed into the module) to validate the token and map its result to the expected
{ userId, claims: string[] } shape (return null on failure), while keeping the
dev shortcut when NODE_ENV === "development"; update the authenticate(token:
string | undefined) implementation to use this verifier instead of always
returning null and add a clear fallback for missing verifier.
In `@apps/test-project/src/routes/storage.ts`:
- Around line 239-247: The code is using the wrong Hono API: replace
c.req.body({ all: true }) with the raw request stream c.req.raw.body (which
returns ReadableStream<Uint8Array> | null), check for null and return the same
400 error if absent, then call getReader() on that stream; update variables like
bodyStream and reader usage in the same block (e.g., where bodyStream is defined
and reader = bodyStream.getReader()) so the streaming/size-enforcement logic
reads from c.req.raw.body instead of c.req.body.
In `@Betterbase31PR` Errors.md:
- Around line 5-421: The Betterbase31PR Errors.md document contains transient
review triage and stale CI output and should not be committed; remove this file
from the repository (or move its contents into the PR discussion or a tracked
issue) so it is not part of the codebase, ensuring no references/imports remain
that expect it to exist and then commit the deletion; if preserving information,
create a new issue or draft in the issue tracker with the content and replace
the file deletion with a short README pointing to that issue.
In `@packages/cli/src/commands/dev.ts`:
- Around line 155-161: The file restricts the fs.watch recursive option by
checking isSupportedPlatform (process.platform in dev command) before passing
opts to watch(watchPath, opts...), which prevents recursive watching on Linux;
to fix, either raise the project's Node.js engine constraint to >=19.1.0 (update
package.json engines and any CI/tooling) so the existing recursive option can be
used on Linux, and annotate the isSupportedPlatform logic (or remove it) to
allow recursive={ true } when process.version >= v19.1.0, or replace the native
watch call with a cross-platform watcher (e.g., replace watch(watchPath, opts,
...) usage with chokidar.watch(watchPath, { persistent: true, ignoreInitial:
true, depth: undefined }) and wire the same callback) so nested src/routes/**
changes are detected on Linux; update any related tests/Docs accordingly and
keep references to watchPath, opts, isDir, isSupportedPlatform, and watch when
making changes.
In `@packages/cli/src/commands/init.ts`:
- Around line 1333-1336: The prompt is collecting sensitive data using
prompts.text which reveals the secret; change the input to a masked/secret
prompt so STORAGE_SECRET_KEY is not echoed—replace the prompts.text call that
sets storageCredentials.STORAGE_SECRET_KEY with a masked input (e.g.,
prompts.secret or your project's masked-input helper) and ensure the new helper
behaves like prompts.text but hides input and does not log the value anywhere;
update any imports or helpers to provide prompts.secret if missing and use that
symbol in place of prompts.text for STORAGE_SECRET_KEY.
In `@packages/core/test/migration.test.ts`:
- Around line 73-77: The vi.mock calls for the scanner are being registered at
runtime inside the test and therefore do not replace the already-cached scanner
used by runMigration; move all vi.mock("../src/rls/scanner", ...) calls to the
module/top level in migration.test.ts (before any import or require of the
module that exports runMigration) so the mock is hoisted before runMigration is
imported, or alternatively register the mock then dynamically import the module
that exports runMigration (e.g., await import(...)) after mocking; apply the
same change for the other vi.mock occurrences mentioned so the scanner mock
actually affects runMigration (or run tests with --preload to register mocks
prior to imports).
In `@PR31_CHANGES_DOCUMENTATION.md`:
- Around line 26-27: The file links in PR31_CHANGES_DOCUMENTATION.md use a
non-GitHub format like "(.../file.ts:20-31)" which won't render as clickable
line anchors; update those references to standard GitHub file links pointing to
apps/test-project/src/index.ts with the proper `#L20-L31` line-range anchor
syntax so they become clickable (e.g., use the repository-relative path and
append `#L20-L31`); locate the occurrences referencing
apps/test-project/src/index.ts and replace them with the GitHub-style link
format.
- Line 401: The phrase "argument-array style process spawns" should be
hyphenated as a compound modifier; update the sentence containing "Replaced with
argument-array style process spawns using `Bun.spawn` with separate arguments,
preventing shell interpretation." to "Replaced with argument-array-style process
spawns using `Bun.spawn` with separate arguments, preventing shell
interpretation." to fix the grammar.
---
Duplicate comments:
In `@packages/cli/src/commands/init.ts`:
- Around line 738-752: The getAuthenticatedUserId function currently trusts the
x-user-id header after only checking for the presence of a session cookie;
replace this unsafe logic by validating the session token and deriving the user
id from a trusted source (e.g., verify the BetterAuth session token or lookup
the session in the server-side session store) rather than reading x-user-id from
headers. Locate getAuthenticatedUserId and any similar uses (places that read
c.req.header('x-user-id')) and change them to call the session validation/lookup
routine (or verify a signed cookie) and return the authenticated id from that
trusted result; remove direct reliance on caller-controlled headers so ownership
checks cannot be bypassed.
In `@packages/cli/test/dev.test.ts`:
- Around line 43-58: The test for runDevCommand allows success to be treated as
pass because the try block doesn't fail if no error is thrown; change the
assertion to explicitly assert that runDevCommand(testDir) rejects (or,
alternatively, after awaiting runDevCommand in the try, call a failing
assertion) so the test fails when no error occurs—update the test around the
runDevCommand invocation in packages/cli/test/dev.test.ts to use an explicit
reject assertion for runDevCommand or add a fail() / expect(false) after the
awaited call inside the try; reference symbol: runDevCommand.
In `@packages/client/test/auth.test.ts`:
- Around line 112-114: Replace the aggressive mock.restore() in the afterEach
block with a call that only clears call history so shared module mocks and
default implementations remain intact: change the afterEach to call mock.reset()
(or the equivalent Bun API that clears call history) instead of mock.restore(),
and if you need to fully restore mocks once per suite, call mock.restore() in an
afterAll block; update references to afterEach, mock.restore, mock.reset, and
afterAll accordingly.
---
Nitpick comments:
In `@apps/test-project/betterbase.config.ts`:
- Around line 20-27: Replace the process.exit path in the DATABASE_URL
validation with a thrown Error so callers can handle termination; specifically
update the validation block that checks dbUrl (the if branch checking !dbUrl ||
typeof dbUrl !== "string" || dbUrl.trim() === "") in betterbase.config.ts to
throw a descriptive Error (including the same message currently sent to
console.error) instead of calling console.error and process.exit(1), leaving CLI
or tests to decide whether to exit.
In `@packages/client/test/storage.test.ts`:
- Around line 23-30: The declared download?: string in the createMockClient
response contract is unused and misleading; fix it by either removing the
download field from the responses type in createMockClient (and updating any
test callers that pass download) or by implementing the download behavior inside
createMockClient's fetch handler so it returns the provided string when a
download endpoint is requested; locate createMockClient in storage.test.ts and
update the responses type and corresponding mock logic (or callers) accordingly,
then run the tests to verify no unused-field errors remain.
- Around line 168-185: The test should assert the operation-specific error
contract rather than just existence: update the negative-path in the test for
StorageBucketClient.upload to check that result.error.message equals the server
JSON message ("Upload failed - file too large") and that result.error.name
equals the expected operation error name (e.g., "UploadError") so regressions in
error mapping are caught; locate the failing test around the
StorageBucketClient.upload invocation and add assertions for result.error.name
and result.error.message alongside the existing checks.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 1450f618-1fdf-4395-8a1e-4b066513a94e
📒 Files selected for processing (24)
- Betterbase31PR Errors.md
- PR31_CHANGES_DOCUMENTATION.md
- apps/test-project/betterbase.config.ts
- apps/test-project/src/auth/index.ts
- apps/test-project/src/index.ts
- apps/test-project/src/lib/env.ts
- apps/test-project/src/lib/realtime.ts
- apps/test-project/src/middleware/auth.ts
- apps/test-project/src/routes/graphql.d.ts
- apps/test-project/src/routes/storage.ts
- packages/cli/src/commands/dev.ts
- packages/cli/src/commands/init.ts
- packages/cli/src/commands/login.ts
- packages/cli/test/auth-command.test.ts
- packages/cli/test/dev.test.ts
- packages/cli/test/prompts.test.ts
- packages/client/test/auth.test.ts
- packages/client/test/storage.test.ts
- packages/core/src/rls/generator.ts
- packages/core/test/graphql.test.ts
- packages/core/test/migration.test.ts
- packages/core/test/rls.test.ts
- packages/core/test/storage.test.ts
- packages/shared/test/constants.test.ts
🚧 Files skipped from review as they are similar to previous changes (8)
- packages/cli/test/prompts.test.ts
- packages/shared/test/constants.test.ts
- apps/test-project/src/lib/env.ts
- apps/test-project/src/auth/index.ts
- apps/test-project/src/middleware/auth.ts
- apps/test-project/src/routes/graphql.d.ts
- packages/core/test/rls.test.ts
- packages/core/src/rls/generator.ts
```ts
authenticate(token: string | undefined): { userId: string; claims: string[] } | null {
  if (!token || !token.trim()) return null;

  const allowDevAuth = process.env.NODE_ENV === "development";
  if (!allowDevAuth) {
    return null;
  }
```
Production auth path is currently hard-disabled.
authenticate() always returns null outside development, so realtime connections cannot authenticate in production/staging. Add a real token verifier path (config-injected or middleware-provided) for non-dev environments.
Proposed direction
```diff
 interface RealtimeConfig {
   maxClients: number;
   maxSubscriptionsPerClient: number;
   maxSubscribersPerTable: number;
+  tokenVerifier?: (token: string) => { userId: string; claims: string[] } | null;
 }
 ...
 authenticate(token: string | undefined): { userId: string; claims: string[] } | null {
   if (!token || !token.trim()) return null;
   const allowDevAuth = process.env.NODE_ENV === "development";
-  if (!allowDevAuth) {
-    return null;
-  }
+  if (!allowDevAuth) return this.config.tokenVerifier?.(token.trim()) ?? null;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/test-project/src/lib/realtime.ts` around lines 69 - 75, The authenticate
function currently short-circuits to null for non-development by checking
allowDevAuth; replace that hard-disable with a real verification path: when
allowDevAuth is false call an injected or configurable token verifier (e.g., a
function provided via closure/config or passed into the module) to validate the
token and map its result to the expected { userId, claims: string[] } shape
(return null on failure), while keeping the dev shortcut when NODE_ENV ===
"development"; update the authenticate(token: string | undefined) implementation
to use this verifier instead of always returning null and add a clear fallback
for missing verifier.
```ts
// Stream the body and enforce maxSize during streaming to prevent DoS attacks
// Content-Length can be spoofed, so we must enforce the limit during read
const bodyStream = c.req.body({ all: true });
if (!bodyStream) {
  return c.json({ error: "No body provided" }, 400);
}

const chunks: Uint8Array[] = [];
const reader = bodyStream.getReader();
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, check if the file exists and read the relevant section
cat -n apps/test-project/src/routes/storage.ts | sed -n '230,260p'
```

Repository: weroperking/Betterbase

🏁 Script executed:

```shell
# Check for Hono imports and version in package.json or similar
find . -name "package.json" -type f | xargs grep -l "hono" | head -5
```

🏁 Script executed:

```shell
# Search for other uses of c.req.body in the codebase to see if it's a common pattern
rg "c\.req\.body\(" --type ts --type tsx -B 2 -A 2
```

🏁 Script executed:

```shell
# Also search for c.req.raw to see how it's used
rg "c\.req\.raw" --type ts --type tsx -B 2 -A 2
```

🏁 Script executed:

```shell
# Check Hono version in root package.json
cat package.json | grep -A 5 -B 5 '"hono"'
```

🏁 Script executed:

```shell
# Also check apps/test-project package.json
cat apps/test-project/package.json | grep -A 5 -B 5 '"hono"'
```

🏁 Script executed:

```shell
# Search for other body or stream usage patterns
rg "req\.body" --type ts -B 2 -A 2 | head -50
```

🏁 Script executed:

```shell
# Search for req.raw patterns
rg "req\.raw" --type ts -B 2 -A 2 | head -50
```

🏁 Script executed:

```shell
# Check the imports at the top of storage.ts to see Hono version context
head -30 apps/test-project/src/routes/storage.ts
```
🌐 Web query:
Hono v4.6.10 request body streaming API c.req.body c.req.raw.body
💡 Result:
In Hono v4.6.10, c.req is a HonoRequest wrapper around the standard Web Request object, exposed as c.req.raw. Hono does not provide an Express-like parsed c.req.body property. Instead, you either:
- Parse the body via HonoRequest methods like `await c.req.json()`, `await c.req.text()`, `await c.req.arrayBuffer()`, `await c.req.parseBody()`, etc. (these consume the body stream), or
- Stream the raw body via the underlying Web Request stream: `c.req.raw.body` (a `ReadableStream<Uint8Array> | null`). `c.req.raw` is the documented way to access the raw `Request`. [1][2]
Request-body streaming (raw)
```ts
app.post('/upload', async (c) => {
  const stream = c.req.raw.body
  if (!stream) return c.text('No body', 400)
  const reader = stream.getReader()
  for (;;) {
    const { value, done } = await reader.read()
    if (done) break
    // value is Uint8Array chunk
  }
  return c.text('ok')
})
```

Important: body is one-shot
If anything already called c.req.json(), c.req.text(), validation middleware, etc., the stream is consumed. If you need to read it again, use cloneRawRequest(c.req), which is designed to work even after consumption by Hono/validators. [1][2]
Also note the naming confusion: c.body() is a Context response helper (response body), not a request-body API. [3]
Sources: HonoRequest API (raw, cloneRawRequest) [1], JSR API listing for HonoRequest.raw / cloneRawRequest [2], Context body() (response) [3].
Use the correct Hono request stream API for body access.
Line 241 calls `c.req.body({ all: true })`, which does not exist in Hono v4.6.10; the similarly named `c.body()` is a response helper, not a request API. To access the raw request body stream for chunked reading, use `c.req.raw.body`, which returns a `ReadableStream<Uint8Array> | null`.
Suggested fix
```diff
-    const bodyStream = c.req.body({ all: true });
+    const bodyStream = c.req.raw.body;
     if (!bodyStream) {
       return c.json({ error: "No body provided" }, 400);
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/test-project/src/routes/storage.ts` around lines 239 - 247, The code is
using the wrong Hono API: replace c.req.body({ all: true }) with the raw request
stream c.req.raw.body (which returns ReadableStream<Uint8Array> | null), check
for null and return the same 400 error if absent, then call getReader() on that
stream; update variables like bodyStream and reader usage in the same block
(e.g., where bodyStream is defined and reader = bodyStream.getReader()) so the
streaming/size-enforcement logic reads from c.req.raw.body instead of
c.req.body.
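The limit-while-reading idea from this fix can be sketched independently of Hono. `readBodyWithLimit` below works on any Web `ReadableStream` (so it applies equally to `c.req.raw.body`); the function name and error message are placeholders, not code from the repo:

```typescript
// Read a request body stream while enforcing a byte cap as chunks arrive,
// since a spoofed Content-Length header cannot be trusted.
async function readBodyWithLimit(
  stream: ReadableStream<Uint8Array>,
  maxSize: number,
): Promise<Uint8Array> {
  const reader = stream.getReader();
  const chunks: Uint8Array[] = [];
  let total = 0;
  while (true) {
    const { value, done } = await reader.read();
    if (done || !value) break;
    total += value.byteLength;
    if (total > maxSize) {
      await reader.cancel(); // stop pulling further chunks from the sender
      throw new Error("Payload too large");
    }
    chunks.push(value);
  }
  // Concatenate the collected chunks into one contiguous buffer.
  const body = new Uint8Array(total);
  let offset = 0;
  for (const chunk of chunks) {
    body.set(chunk, offset);
    offset += chunk.byteLength;
  }
  return body;
}
```

Cancelling the reader on overflow matters: it signals backpressure upstream instead of silently draining the rest of an oversized upload.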
Verify each finding against the current code and only fix it if needed.

In `@packages/core/src/rls/generator.ts` around lines 104 - 120, policyToSQL currently concatenates all SQL pieces into one string which breaks downstream parsing; modify policyToSQL to return an array of statement strings (preserve boundaries) instead of a single joined string: collect enableRLS(policy.table) and each generatePolicyStatement(policy, operation) into a string[] and return that array, and then update any callers to accept the string[] (or map/join at the callsite if needed); reference functions: policyToSQL, enableRLS, generatePolicyStatement, and the PolicyOperation loop so you locate and adjust the collection/return behavior.
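The statement-array shape this prompt asks for looks roughly like the sketch below. The `Policy` fields and both helper bodies are simplified stand-ins for the real generator code, kept only to show the boundary-preserving return type:

```typescript
type PolicyOperation = "select" | "insert" | "update" | "delete";

interface Policy {
  table: string;
  operations: PolicyOperation[];
}

// Simplified stand-in for the real enableRLS helper.
function enableRLS(table: string): string {
  return `ALTER TABLE "${table}" ENABLE ROW LEVEL SECURITY;`;
}

// Simplified stand-in for the real per-operation statement builder.
function generatePolicyStatement(policy: Policy, operation: PolicyOperation): string {
  return `CREATE POLICY "${policy.table}_${operation}" ON "${policy.table}" FOR ${operation.toUpperCase()};`;
}

// Return one string per statement so callers can execute, diff, or join them
// individually instead of re-splitting a concatenated blob.
function policyToSQL(policy: Policy): string[] {
  const statements: string[] = [enableRLS(policy.table)];
  for (const operation of policy.operations) {
    statements.push(generatePolicyStatement(policy, operation));
  }
  return statements;
}
```

Callers that still want a single script can `statements.join("\n")` at the callsite without losing statement boundaries for everyone else.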
Verify each finding against the current code and only fix it if needed.

In `@packages/cli/src/commands/login.ts` around lines 99 - 102, The device code generation uses Math.random() (chars, part1, part2) which is not cryptographically secure; replace the random selection with crypto.randomBytes-based randomness: create sufficient random bytes, map each byte to an index into the chars string (e.g., use modulo with rejection or mask to avoid bias) to build part1 and part2 securely, then return `${part1}-${part2}`; ensure you import Node's crypto and remove Math.random() usage in this generation logic.
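The modulo-with-rejection approach this prompt suggests looks roughly like the following. The alphabet and the 4-4 grouping are assumptions for illustration, since the real `chars` string lives in login.ts:

```typescript
import { randomBytes } from "node:crypto";

// Assumed alphabet: ambiguity-free letters and digits (28 symbols).
const CHARS = "BCDFGHJKLMNPQRSTVWXZ23456789";

// Pick one unbiased character via rejection sampling: discard bytes that fall
// into the partial bucket above the largest multiple of CHARS.length.
function randomChar(): string {
  const limit = 256 - (256 % CHARS.length);
  for (;;) {
    const byte = randomBytes(1)[0];
    if (byte < limit) return CHARS[byte % CHARS.length];
  }
}

// Assumed "XXXX-XXXX" grouping; adjust lengths to match the real format.
function generateDeviceCode(): string {
  const part = (len: number) => Array.from({ length: len }, randomChar).join("");
  return `${part(4)}-${part(4)}`;
}
```

Without the rejection step, `byte % CHARS.length` would slightly favor the first `256 % CHARS.length` characters; rejection keeps every symbol equally likely.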
Verify each finding against the current code and only fix it if needed.

In `@packages/cli/src/commands/dev.ts` around lines 156 - 157, The watcher call uses { recursive: true } unconditionally which can be ignored or invalid for file paths and on Linux; update the code around the watch(watchPath, { recursive: true }, ...) invocation to only pass the recursive option when watchPath is a directory and the platform supports recursive watching (process.platform === 'darwin' or 'win32'). Detect directory-ness via fs.statSync or fs.promises.stat (check stat.isDirectory()) on the watchPath before creating the watcher, build the options object conditionally (e.g., opts = isDir && isSupportedPlatform ? { recursive: true } : undefined), and then call watch(watchPath, opts, ...) so logger.info and the watcher variable remain unchanged but recursive is applied safely.
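The conditional options object described above can be factored into a small helper. `buildWatchOptions` and the platform allowlist below are a sketch of that guard, not the dev command's actual code:

```typescript
import { statSync } from "node:fs";

// Only request recursive watching when it is safe to do so: the target is a
// directory and the platform's native watcher supports recursion (macOS
// FSEvents, Windows ReadDirectoryChangesW). On Linux, callers would fall back
// to non-recursive watching or a userland watcher such as chokidar.
function buildWatchOptions(watchPath: string): { recursive: true } | undefined {
  const isDir = statSync(watchPath).isDirectory();
  const isSupportedPlatform =
    process.platform === "darwin" || process.platform === "win32";
  return isDir && isSupportedPlatform ? { recursive: true } : undefined;
}

// Usage sketch: fs.watch(watchPath, buildWatchOptions(watchPath) ?? {}, onChange)
```

Keeping the decision in one function makes it easy to later swap the platform check for a Node-version check once the project requires Node >= 19.1, where Linux gains native recursive support.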
Verify each finding against the current code and only fix it if needed.

In `@packages/shared/test/constants.test.ts` around lines 83 - 85: replace the
brittle check expect(FUNCTIONS_DIR).toContain("/") with an assertion that
FUNCTIONS_DIR matches a non-empty-segment path pattern: at least one slash
separating segments, no empty segments (i.e., no "//" anywhere), and no
trailing slash. Apply the same replacement for BUILT_FUNCTIONS_DIR (and the
tests at the corresponding lines) so both values are validated as real
directory paths composed of non-empty segments separated by single slashes.
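One way to express that path-shape assertion; the regex and helper name are my own, not taken from the repo's tests:

```typescript
// Matches "a/b", "a/b/c", ...: two or more non-empty segments separated
// by single slashes, with no leading or trailing slash and no "//".
const DIR_PATH_PATTERN = /^[^/]+(?:\/[^/]+)+$/;

export function isValidDirPath(p: string): boolean {
  return DIR_PATH_PATTERN.test(p);
}

// In a bun:test file this would become:
//   expect(FUNCTIONS_DIR).toMatch(DIR_PATH_PATTERN);
```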
Verify each finding against the current code and only fix it if needed.

In `@packages/shared/test/constants.test.ts` around lines 52 - 54: the test
using CONTEXT_FILE_NAME currently uses toContain(".json"), which allows
suffixes like "foo.json.tmp". Change the assertion in the it block referencing
CONTEXT_FILE_NAME to assert that the filename ends with ".json" (use a string
endsWith check or a regex match for /\.json$/) so only true .json filenames
pass.
Verify each finding against the current code and only fix it if needed.

In `@packages/client/test/auth.test.ts` around lines 369 - 389: the signOut
error-path test currently only asserts token removal, but it must also verify
that the returned result follows the AuthError contract. In the test for
AuthClient.signOut (and the similar test at lines 391 - 410), assert that the
returned value has result.error populated with the expected shape/message
(e.g., error.message === "Sign out failed" and/or instanceof or error.type if
applicable) and that result.data is null (or matches the expected empty-data
contract). Update the test assertions to check result.error and result.data in
addition to the cleared mockStorage token.
Verify each finding against the current code and only fix it if needed.

In `@packages/client/test/auth.test.ts` at line 1: the import specifiers are
not sorted per the lint rules. Reorder the named imports so they are
alphabetically sorted (afterAll, afterEach, beforeAll, describe, expect, it,
mock) in the import statement that currently lists describe, it, expect,
beforeAll, afterAll, mock, afterEach.
Verify each finding against the current code and only fix it if needed.

In `@packages/core/test/migration.test.ts` around lines 10 - 17: remove the
unused top-level imports of applyPolicies, applyAuthFunction,
applyRLSMigration, dropPolicies, dropTableRLS, and getAppliedPolicies. These
functions are re-imported dynamically later in the
describe("migration/rls-migrator") block, so delete the initial import
statement that lists these six symbols to avoid test pollution and
unused-import warnings.
Verify each finding against the current code and only fix it if needed.

In `@apps/test-project/betterbase.config.ts` around lines 48 - 62:
provider.connectionString currently assigns process.env.DATABASE_URL, which may
be undefined. Update the BetterBaseConfig/provider initialization to validate
and fail fast: check that process.env.DATABASE_URL is a non-empty string (or
use a schema validator like Zod) before assigning it to
provider.connectionString, and throw a clear error or log and exit if it is
missing, so the runtime configuration can never be undefined.
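A minimal fail-fast check along these lines, shown with plain validation instead of Zod; requireEnv is a hypothetical helper name:

```typescript
// Throw at startup if a required env var is missing or empty,
// so provider.connectionString can never end up undefined.
export function requireEnv(
  name: string,
  env: Record<string, string | undefined> = process.env,
): string {
  const value = env[name];
  if (typeof value !== "string" || value.trim() === "") {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage sketch inside betterbase.config.ts:
//   provider: { connectionString: requireEnv("DATABASE_URL") }
```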
Verify each finding against the current code and only fix it if needed.

In `@packages/core/test/graphql.test.ts` around lines 330 - 342: the test
passes subscriptions: false to generateResolvers but then asserts that
resolvers.Subscription is defined, which conflicts with the other test
expecting undefined when subscriptions are disabled. Either update the test to
assert expect(resolvers.Subscription).toBeUndefined() to match the intended
behavior, or, if the desired behavior is to return a default/empty Subscription
object even when disabled, modify generateResolvers to return that default
Subscription shape when called with { subscriptions: false } and update the
documentation/comments accordingly. Pick the approach consistent with the
existing test at line 139 and adjust the assertion or implementation to match.
Verify each finding against the current code and only fix it if needed.

In `@packages/client/test/storage.test.ts` around lines 1 - 2: the import
statements at the top (the Bun test helpers describe, it, expect, beforeAll,
afterAll, mock, afterEach, and the node:fs functions mkdtempSync,
writeFileSync, rmSync, readFileSync) are not sorted. Run Biome
organize-imports/format on this test file, or manually reorder the two import
lines to satisfy the project's import ordering (group and alphabetize imports
consistently), then save so CI lint passes.
Verify each finding against the current code and only fix it if needed.

In `@issues.md` around lines 9 - 12: the quality report still contains
hardcoded "Status: ✅ PASSED" lines that no longer reflect the current
pipeline. Locate each occurrence of the status header (the literal line
"Status: ✅ PASSED" and the similar status blocks later in the document) and
update them to accurately reflect the current CI results (replace the
emoji/text with the real status and a short note or a list of failing checks),
and ensure the repeated summary/status blocks are consistent with the latest
pipeline output.
Verify each finding against the current code and only fix it if needed.

In `@packages/core/test/storage.test.ts` around lines 1 - 3: the file has
multiple separate imports from "node:fs", which breaks the import-order lint
rule. Consolidate the two imports into a single import statement that pulls
mkdtempSync, writeFileSync, mkdirSync, rmSync, and existsSync from "node:fs",
and ensure the import line is placed and sorted correctly among the other
imports at the top of storage.test.ts (replace both occurrences).
Verify each finding against the current code and only fix it if needed.

In `@apps/test-project/src/auth/index.ts` around lines 20 - 22: add validation
for AUTH_SECRET and AUTH_URL to the env schema and use the validated values
when constructing the auth config. Update env.ts to include AUTH_SECRET (e.g.,
z.string().min(32).optional(), or required in prod) and AUTH_URL
(z.string().url().default("http://localhost:3000")), then replace the direct
uses of process.env.AUTH_SECRET and process.env.AUTH_URL in the auth config
(see secret, baseURL, trustedOrigins in the auth setup) with env.AUTH_SECRET
and env.AUTH_URL so missing or invalid values are caught at startup.

Suggested addition to env.ts:

```ts
const envSchema = z.object({
  NODE_ENV: z.enum(["development", "test", "production"]).default("development"),
  PORT: z.coerce.number().int().positive().default(3000),
  DB_PATH: z.string().min(1).default(DEFAULT_DB_PATH),
  AUTH_SECRET: z.string().min(32).optional(), // Required in production
  AUTH_URL: z.string().url().default("http://localhost:3000"),
});
```
Verify each finding against the current code and only fix it if needed.

In `@packages/cli/test/dev.test.ts` around lines 55 - 76: the "creates project
structure for dev server" test only creates files and asserts they exist but
never invokes the function under test (runDevCommand). Update the test to
actually exercise runDevCommand: call runDevCommand (or the exported CLI
entrypoint that starts the dev server) with the temporary testDir as the
project root, await its result or mock/stub any long-running behavior, then
assert the expected side effects (e.g., a server-started flag, created config
files, the returned port, or that specific helper functions were invoked), and
finally clean up the temp dir. Alternatively, remove this test if you decide
not to test runDevCommand here. Reference runDevCommand (or the CLI start
function) and the temp-directory setup/teardown code so the test both prepares
and exercises the real behavior instead of only validating filesystem setup.
Verify each finding against the current code and only fix it if needed.

In `@apps/test-project/src/lib/env.ts` around lines 3 - 4: replace the local
export DEFAULT_DB_PATH with the shared constant. Remove the hardcoded export
and import DEFAULT_DB_PATH from the shared constants module
(packages/shared/src/constants.ts) so the file uses the single source of truth;
update any references in this file to use the imported DEFAULT_DB_PATH and
delete the local definition to avoid duplication.

# Major and Critical

Verify each finding against the current code and only fix it if needed.
In `@apps/test-project/src/index.ts` around lines 24 - 27: the current
WebSocket auth accepts a queryToken fallback (the authHeaderToken && queryToken
branch), which is unsafe in production. Modify the logic so that queryToken is
only accepted in non-production (e.g., when process.env.NODE_ENV !==
'production', or behind an explicit isDev flag); otherwise reject or ignore
queryToken and require header/cookie/subprotocol auth. Update the console.warn
to run only in the dev branch and ensure the auth flow (the authHeaderToken and
queryToken checks) enforces this policy.
Verify each finding against the current code and only fix it if needed.

In `@apps/test-project/src/index.ts` around lines 55 - 69: replace the
require-based blind catch with an async dynamic import, and only treat a
missing-module error as "not generated". Use await import("./routes/graphql")
to load the module, extract graphqlRoute (the graphqlRoute symbol and its
ReturnType cast remain the same), and call app.route("/", graphqlRoute). In the
catch, check err.code === 'ERR_MODULE_NOT_FOUND' || err.code ===
'MODULE_NOT_FOUND' || /Cannot find module|Cannot find
package/.test(String(err.message)); if that matches, keep the dev-only
console.log gated on env.NODE_ENV, otherwise rethrow or log the error so real
syntax/runtime errors in the module are not swallowed.
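The missing-module detection can be factored into a small predicate like this; the helper name is illustrative:

```typescript
// True only for "module does not exist" errors, so real syntax or
// runtime errors inside the generated route are rethrown, not swallowed.
export function isModuleNotFound(err: unknown): boolean {
  const e = err as { code?: string; message?: unknown };
  return (
    e?.code === "ERR_MODULE_NOT_FOUND" ||
    e?.code === "MODULE_NOT_FOUND" ||
    /Cannot find module|Cannot find package/.test(String(e?.message ?? ""))
  );
}

// Usage sketch:
// try {
//   const { graphqlRoute } = await import("./routes/graphql");
//   app.route("/", graphqlRoute);
// } catch (err) {
//   if (!isModuleNotFound(err)) throw err;
//   if (env.NODE_ENV === "development") console.log("GraphQL route not generated yet");
// }
```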
Verify each finding against the current code and only fix it if needed.

In `@apps/test-project/src/lib/realtime.ts` around lines 72 - 76: the current
dev auth gate uses process.env.ENABLE_DEV_AUTH, which allows dev-token parsing
outside development. Change the check so the dev-token parser is enabled only
when process.env.NODE_ENV === "development" (remove the ENABLE_DEV_AUTH OR
branch), and ensure the code paths that rely on the dev parser (the
allowDevAuth variable and the branch that returns null) call the real verifier
in non-development environments. In short, keep allowDevAuth true only in
development and use the production verifier elsewhere, so unsigned token
parsing is never permitted when NODE_ENV !== "development".
Verify each finding against the current code and only fix it if needed.

In `@apps/test-project/src/middleware/auth.ts` around lines 4 - 24: wrap the
calls to auth.api.getSession in try/catch inside both requireAuth and
optionalAuth. On error in requireAuth, return c.json({ data: null, error:
"Unauthorized" }, 401) so failures are treated as unauthenticated; in
optionalAuth, swallow or log the error and continue without setting
user/session so the request degrades to unauthenticated. Locate the auth call
by the symbol auth.api.getSession and update requireAuth and optionalAuth
accordingly; apply the same pattern to the similar auth call in the storage
route mentioned above.
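The try/catch pattern can live in one helper that both middlewares share; the Session shape and helper name here are simplifications of my own, not the project's actual auth.api.getSession types:

```typescript
type Session = { userId: string } | null;

// Treat any session-lookup failure as "not authenticated" instead of
// letting the error propagate and surface as a 500.
export async function safeGetSession(
  getSession: () => Promise<Session>,
): Promise<Session> {
  try {
    return await getSession();
  } catch {
    return null;
  }
}

// requireAuth would then do roughly:
//   const session = await safeGetSession(() => auth.api.getSession({ headers }));
//   if (!session) return c.json({ data: null, error: "Unauthorized" }, 401);
// optionalAuth simply continues when the result is null.
```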
Verify each finding against the current code and only fix it if needed.

In `@apps/test-project/src/routes/graphql.d.ts` around lines 7 - 8: the module
augmentation currently declares module "./routes/graphql", which resolves
incorrectly. Update the declaration to declare module "./graphql" so it targets
the actual module, and preserve the exported symbol by keeping export const
graphqlRoute: Hono; (ensure Hono is in scope or imported). Locate the existing
declaration string "./routes/graphql" and change it to "./graphql" while
leaving the exported identifier graphqlRoute and its type untouched.
Verify each finding against the current code and only fix it if needed.

In `@apps/test-project/src/routes/storage.ts` around lines 228 - 237: the
current check trusts Content-Length and then calls c.req.arrayBuffer(), which
can be bypassed. Stream the incoming request and enforce maxSize while reading
so you never allocate more than the limit. Replace the c.req.arrayBuffer() call
with a streaming read (using the request body stream/reader available on c.req,
or the Node request stream) that accumulates into a Buffer (or a temp file) and
checks a running byteCount against maxSize on each chunk; immediately return a
413/400 JSON error if byteCount > maxSize, and only construct `body` after the
stream completes within the limit. Keep the existing maxSize/contentLength
check as a best-effort early abort, but enforce the hard limit during the
streaming read.
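The running-byte-count enforcement can be sketched as a small accumulator; ChunkLimiter and BodyTooLargeError are names I have made up for illustration:

```typescript
export class BodyTooLargeError extends Error {
  constructor(limit: number) {
    super(`Request body exceeds ${limit} bytes`);
  }
}

// Accumulates request-body chunks while enforcing a hard byte limit,
// so we never buffer more than maxSize regardless of Content-Length.
export class ChunkLimiter {
  private chunks: Uint8Array[] = [];
  private byteCount = 0;

  constructor(private readonly maxSize: number) {}

  push(chunk: Uint8Array): void {
    this.byteCount += chunk.byteLength;
    if (this.byteCount > this.maxSize) throw new BodyTooLargeError(this.maxSize);
    this.chunks.push(chunk);
  }

  // Concatenate only after the stream completed within the limit.
  concat(): Uint8Array {
    const out = new Uint8Array(this.byteCount);
    let offset = 0;
    for (const c of this.chunks) {
      out.set(c, offset);
      offset += c.byteLength;
    }
    return out;
  }
}
```

In the route handler, each chunk read from the request body stream would be fed to push(), and a thrown BodyTooLargeError translated into the 413 JSON response.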
Verify each finding against the current code and only fix it if needed.

In `@apps/test-project/src/routes/storage.ts` around lines 269 - 274: the route
parameter for nested object keys currently uses :key, which stops at slashes.
Update the Hono route patterns in the storageRouter handlers to use the
regex-constrained parameter :key{.+} so keys like "uploads/2026/03/file.txt"
are captured. Specifically, replace the path strings used in
storageRouter.get("/:bucket/:key", ...), the GET route that ends with "/public"
(currently "/:bucket/:key/public"), and the route that ends with "/sign"
(currently "/:bucket/:key/sign") with "/:bucket/:key{.+}",
"/:bucket/:key{.+}/public", and "/:bucket/:key{.+}/sign" respectively, so
downstream code (e.g., validatePath) receives the full key.
Verify each finding against the current code and only fix it if needed.

In `@packages/cli/src/commands/init.ts` around lines 717 - 732: the S3Client
config only sets region for provider === "s3", but getSignedUrl requires a
region for SigV4 even when using a custom endpoint. Update the endpointLine
logic so both branches include a region entry (e.g., region:
process.env.STORAGE_REGION ?? "us-east-1") and keep the endpoint line for
non-s3 providers, so the S3Client instantiation in init.ts always has a region,
plus an endpoint when needed; adjust the constant used in the returned template
(endpointLine) accordingly.
Verify each finding against the current code and only fix it if needed.

In `@packages/cli/src/commands/init.ts` around lines 739 - 765: the storage
endpoints (storageRoute.post('/presign'), storageRoute.get('/presign/:key{.+}'),
storageRoute.delete('/:key{.+}')) are currently unauthenticated. Add
auth/authorization checks to each handler so only signed-in and authorized
users can presign or delete objects. Implement this by invoking your existing
auth middleware or helper (e.g., ensureAuthenticated(c) or verifyJwtToken(c))
at the start of each route handler, or by attaching an auth middleware to
storageRoute, then enforce any owner/role checks (e.g., confirm the user owns
the resource or has admin/storage permissions) before calling getSignedUrl or
DeleteObjectCommand, and return 401/403 on failure. Ensure the authorization
decision uses unique identifiers from the request (the key param or the request
body key) so deletions are permitted only for allowed users.
Verify each finding against the current code and only fix it if needed.

In `@packages/cli/src/commands/login.ts` around lines 107 - 110: the code
builds shell commands via string interpolation with execSync and url, creating
a command-injection risk. Replace these with argument-array process spawns (as
used in graphql.ts) so the URL is passed as a separate argument. Specifically,
stop using execSync(`open "${url}"`) / execSync(`start "" "${url}"`) /
execSync(`xdg-open "${url}"`) and instead call a spawn API (e.g., Bun.spawn or
child_process.spawn) with the program name and url as distinct arguments
(["open", url], ["start", url], or ["xdg-open", url]), preserving the
equivalent stdio handling (ignore) and the platform branching on
process.platform. Do not enable shell: true, so the URL is never interpreted by
a shell.
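The platform branching can be captured in a pure helper that returns the argv array to spawn. One caveat the prompt above glosses over: on Windows, `start` is a cmd.exe built-in, so spawning it without a shell means going through `cmd /c`. The helper name is illustrative:

```typescript
// Build the argv array for opening a URL in the default browser
// without ever passing the URL through a shell.
export function openUrlArgs(
  url: string,
  platform: NodeJS.Platform = process.platform,
): string[] {
  switch (platform) {
    case "darwin":
      return ["open", url];
    case "win32":
      // "start" is a cmd built-in; the empty "" is the window title slot.
      return ["cmd", "/c", "start", "", url];
    default:
      return ["xdg-open", url];
  }
}

// Usage sketch with child_process.spawn (shell stays disabled by default):
//   const [cmd, ...args] = openUrlArgs(url);
//   spawn(cmd, args, { stdio: "ignore", detached: true }).unref();
```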
Verify each finding against the current code and only fix it if needed.

In `@packages/cli/test/dev.test.ts` around lines 43 - 53: the test currently
only checks that src/index.ts is absent but never invokes runDevCommand. Update
the "logs an error and exits when src/index.ts is missing" test to call
runDevCommand(testDir) (awaiting it if async), spy on or mock process.exit and
the logger used by runDevCommand to capture calls, then assert that the error
logger was called with a message about the missing file and that process.exit
was called with a non-zero code. Restore/clear the spies and still remove the
temporary testDir in the test teardown.
Verify each finding against the current code and only fix it if needed.

In `@packages/cli/test/prompts.test.ts` around lines 11 - 21: the tests are
tautological because they assert local literals instead of exercising the
exported prompt builders. Replace those literal checks with calls to the actual
functions (prompts.text, prompts.confirm, prompts.select) from the module under
test and assert their returned prompt config (or a snapshot) so regressions are
caught. Specifically, import the prompts module, call prompts.text({ message,
initial? }), prompts.confirm({ message, initial? }), and prompts.select({
message, choices? }), and assert that the returned object contains the expected
keys/values (message, initial, choices, type), or use snapshots; if the
functions are interactive, mock the underlying inquirer/interactive layer so
the tests remain deterministic. Ensure each test uses prompts.text,
prompts.confirm, and prompts.select instead of checking plain object literals.
Verify each finding against the current code and only fix it if needed.

In `@packages/client/test/auth.test.ts` at line 2: the tests import AuthClient,
which causes src/auth.ts to eagerly import createAuthClient from
"better-auth/client" before your mock.module(...) is registered. Move the
mock.module("better-auth/client", ...) call to the very top of the test file
(before the import { AuthClient } from "../src/auth") so the module-level
dependency is mocked when src/auth.ts loads. Then, in afterEach, either verify
the mock.restore() semantics or replace it with mock.clearAll() (or the
equivalent provided by Bun) to avoid clearing mocks unexpectedly between tests
and to ensure subsequent tests get a clean mocked module.
Verify each finding against the current code and only fix it if needed.

In `@packages/client/test/auth.test.ts` around lines 105 - 111: the shared
fixtures mockStorage and authStateChanges are initialized in beforeAll, causing
state leakage across tests. Change the setup to run in beforeEach so
MockStorage and the authStateChanges array are re-created before every test
(replace the beforeAll block that initializes mockStorage and authStateChanges
with a beforeEach that assigns new MockStorage() to mockStorage and sets
authStateChanges = []), ensuring that tests referencing MockStorage or
authStateChanges (e.g., assertions using toContain) operate on fresh state.
Verify each finding against the current code and only fix it if needed.

In `@packages/core/test/rls.test.ts` around lines 35 - 43: the tests share a
single tmpDir created in beforeAll and removed in afterAll, which allows
cross-test filesystem state leakage. Create and clean a unique temp directory
per test (or per describe) by moving mkdtempSync into a beforeEach (or each
describe's beforeEach) and rmSync into afterEach (or the corresponding
describe's afterEach), update references to the tmpDir variable accordingly,
and apply the same change to the other test block around lines 365 - 395 so
each test gets an isolated tmpDir.
# CI/CD failures

57 │ - })
Error: @betterbase/client#lint: command (/home/runner/work/Betterbase/Betterbase/packages/client) /home/runner/.bun/bin/bun run lint exited (1)
ERROR run failed: command exited (1)

error: script "lint" exited with code 1
Error: Process completed with exit code 1.
error: script "lint" exited with code 1
Error: command finished with error: command (/home/runner/work/Betterbase/Betterbase/packages/client) /home/runner/.bun/bin/bun run lint exited (1)

1 tests failed:
(fail) runAuthSetupCommand > is idempotent — running twice does not duplicate auth handler mount [5032.94ms]
^ this test timed out after 5000ms.

119 pass
1 fail
207 expect() calls
Ran 120 tests across 14 files. [9.65s]
error: script "test" exited with code 1
Error: command finished with error: command (/home/runner/work/Betterbase/Betterbase/packages/cli) /home/runner/.bun/bin/bun run test exited (1)

Error: Process completed with exit code 1.
This looks like transient review triage content, not maintainable repo docs.
The file is mostly fix prompts and stale CI output. It should be removed from the codebase (or moved to PR discussion/issue tracking) before merge.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.

In `@Betterbase31PR` Errors.md around lines 5 - 421: the document contains
transient review triage and stale CI output and should not be committed. Remove
the file from the repository (or move its contents into the PR discussion or a
tracked issue) so it is not part of the codebase, ensure no references or
imports remain that expect it to exist, and commit the deletion. If you want to
preserve the information, create an issue (or draft) in the issue tracker with
the content and replace the file deletion with a short README pointing to that
issue.
Suggested replacement for the watcher setup (the `{ recursive: true }` literal becomes a conditionally built `opts` object):

```ts
try {
  // Only use the recursive option for directories on supported platforms (darwin/win32)
  const isDir = statSync(watchPath).isDirectory();
  const isSupportedPlatform = process.platform === "darwin" || process.platform === "win32";
  const opts = isDir && isSupportedPlatform ? { recursive: true } : undefined;

  const watcher = watch(watchPath, opts, (_eventType, filename) => {
```
🧩 Analysis chain

🌐 Web query: Node.js fs.watch recursive option — platform support and behavior for nested files on Linux

💡 Result:

- Current Node.js (≥ v19.1.0): `fs.watch(path, { recursive: true })` does support recursive directory watching on Linux (and AIX; IBM i is noted in the doc history) and will watch subdirectories when you pass a directory path. On Linux it is implemented via inotify.
- Older Node.js (pre-v19.1.0): the `recursive` option was only supported on macOS and Windows; on Linux you generally had to create a watcher per subdirectory (either manually or via a library).
- Nested-files behavior and caveats on Linux: with `recursive: false` (the default), you should assume you will only reliably get events for the watched directory itself, not deep descendants. Linux watches are inode-based: if a watched path is deleted and recreated (new inode), the watcher continues watching the old inode and will not emit events for the new file/dir.

Sources:
- Node.js latest docs, fs.watch section (options and caveats; notes v19.1.0 recursive support on Linux; inotify; inode behavior): https://nodejs.org/docs/latest/api/fs.html
- Node.js older docs noting recursive support only on macOS/Windows: https://nodejs.org/download/release/v9.9.0/docs/api/fs.html

Consider updating the minimum Node.js version to enable recursive watching on Linux.
The code correctly restricts { recursive: true } to darwin/win32 because the project requires Node.js ≥18.0.0, which does not support the recursive option on Linux. Node.js v19.1.0+ added recursive watching support on Linux via inotify. Raising the minimum required Node.js version to ≥19.1.0 would allow nested file changes in src/routes/** to be detected on Linux without a third-party library. Alternatively, use chokidar or a similar library for better cross-platform file watching.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.

In `@packages/cli/src/commands/dev.ts` around lines 155 - 161: the file
restricts the fs.watch recursive option by checking isSupportedPlatform before
passing opts to watch(watchPath, opts, ...), which prevents recursive watching
on Linux. Either raise the project's Node.js engine constraint to >=19.1.0
(update package.json engines and any CI/tooling) so the existing recursive
option can be used on Linux, adjusting the isSupportedPlatform logic to allow
{ recursive: true } when the runtime supports it, or replace the native watch
call with a cross-platform watcher (e.g., chokidar.watch(watchPath, {
persistent: true, ignoreInitial: true, depth: undefined }) wired to the same
callback) so nested src/routes/** changes are detected on Linux. Update any
related tests/docs accordingly and keep the references to watchPath, opts,
isDir, isSupportedPlatform, and watch when making changes.
```ts
storageCredentials.STORAGE_SECRET_KEY = await prompts.text({
  message: "Secret Access Key:",
  initial: "",
});
```

Storage secret is collected with a plain-text prompt.
Line 1333 captures STORAGE_SECRET_KEY using prompts.text, which exposes the secret on-screen and in terminal recordings.

💡 Suggested fix

```diff
- storageCredentials.STORAGE_SECRET_KEY = await prompts.text({
+ storageCredentials.STORAGE_SECRET_KEY = await prompts.secret({
    message: "Secret Access Key:",
    initial: "",
  });
```

(If prompts.secret does not exist yet, add a masked-input helper and use it here.)

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.

In `@packages/cli/src/commands/init.ts` around lines 1333 - 1336: the prompt
collects sensitive data using prompts.text, which reveals the secret. Change
the input to a masked/secret prompt so STORAGE_SECRET_KEY is not echoed:
replace the prompts.text call that sets storageCredentials.STORAGE_SECRET_KEY
with a masked input (e.g., prompts.secret or your project's masked-input
helper) that behaves like prompts.text but hides input and never logs the
value; add prompts.secret if it is missing and use it in place of prompts.text
for STORAGE_SECRET_KEY.
| vi.mock("../src/rls/scanner", () => ({ | ||
| scanPolicies: vi.fn().mockResolvedValue({ policies: [], errors: [] }), | ||
| })) | ||
|
|
||
| await runMigration(tmpDir, db, provider) |
There was a problem hiding this comment.
🧩 Analysis chain
🏁 Script executed:
# Locate and examine the test file
find . -name "migration.test.ts" -type fRepository: weroperking/Betterbase
Length of output: 104
🏁 Script executed:
# Read the test file to see imports and mock usage
cat -n packages/core/test/migration.test.ts | head -130Repository: weroperking/Betterbase
Length of output: 4918
🌐 Web query:
Vitest vi.mock hoisting behavior imported modules
💡 Result:
- `vi.mock()` is hoisted above all imports (in the test file). Vitest statically analyzes your test file, moves `vi.mock()` calls to the very top, and rewrites static `import` statements into dynamic imports so the mock is registered before the module is loaded. This means the mock factory runs before your imports evaluate. [1] [2]
- Consequence: you can’t safely reference “later” things inside the `vi.mock` factory, such as variables declared below, or values that come from (static) imports in the same file—because the factory executes before those are initialized. [2]
- `vi.mock`/`vi.hoisted` hoisting only happens in test files, not in modules you import. If you put `vi.mock()` inside a helper module and import it, Vitest generally won’t hoist that call, which often leads to “why didn’t my mock apply?” confusion. [3]
- If you need data/functions available to the mock factory, use `vi.hoisted`. It’s designed to run code before imports so you can prepare values used by `vi.mock`. [4]
- If you want a mock that is not hoisted, use `vi.doMock()` (it runs at runtime, after imports, like an imperative mock). [2]
Minimal example (why imported modules/variables “don’t work” in the factory):
```ts
import { something } from './dep' // you might expect this to exist...

vi.mock('./mod', () => {
  // ...but this factory is hoisted and runs before imports,
  // so referencing `something` (or later `const x = ...`) can break.
  return { fn: () => something }
})
```

Sources: Vitest “Mocking Modules” guide (hoists vi.mock, rewrites imports) [1], Vitest vi API docs (hoisting vs non-hoisting APIs) [2], Vitest config/notes on hoisting only being detected in test files [3], Vitest discussion/issue describing vi.hoisted motivation (imports are hoisted; vi.mock is hoisted even before imports) [4].
🌐 Web query:
Bun test vi.mock inside it block hoisting behavior
💡 Result:
In Bun’s test runner (bun:test), vi is just a compatibility alias for Bun’s mocking APIs—it does not implement Vitest-style “compile-time hoisting” of vi.mock() calls. So a vi.mock() placed inside an it/test block runs only when that test executes, and won’t affect imports that already happened earlier in the file. (bun.com)
By contrast, Vitest statically analyzes test files, moves vi.mock() to the top of the file, and (effectively) ensures mocks are registered before imports. (vitest.dev)
What this means in Bun
- `vi.mock()` inside `it()` is not hoisted, so it’s often “too late” if the module under test (or its dependencies) was imported at the top of the file.
- If you must ensure a module is mocked before any import, Bun recommends using `--preload` to register module mocks before tests run. (bun.com)
- If you want per-test mocking without preload, mock first, then `await import()` the module under test after the mock is set up (so the import happens after the mock registration). This aligns with Bun’s “import timing effects” note (mocking after import can’t prevent earlier side effects). (bun.com)
Practical patterns (Bun)
A) Global (file/preload) mock (best when you need “before import”)
- Put `mock.module()`/`vi.mock()` in a preload file and run `bun test --preload ./my-preload.ts`. (bun.com)

B) Per-test mock (mock first, then dynamic import)

- Inside `test()`: call `vi.mock(...)`, then `const mod = await import(...)`.
If you share a minimal repro (your imports + where the mock is declared), I can point out exactly which import happens “too early” and show the smallest change to make the mock take effect in Bun.
Citations:
- 1: https://bun.com/docs/test/mocks
- 2: https://vitest.dev/guide/mocking/modules.html?utm_source=openai
- 3: https://bun.com/docs/test/mocks
- 4: https://bun.com/docs/test/mocks
- 5: https://bun.com/docs/test/mocks
🌐 Web query:
Bun Vitest compatibility vi.mock runtime vs hoisted
💡 Result:
In Vitest, vi.mock() is hoisted (moved/executed before imports) because Vitest transforms the test file: it rewrites static imports to dynamic ones and lifts vi.mock to the top so the mock is registered before the module loads. This is why the mock “works” even if you write it below an import. [1], [2]
In Bun’s test runner, vi is only a compatibility alias and Bun does not implement Vitest’s hoisting transform. So vi.mock() behaves like a runtime call: it only affects modules that are imported after the mock is registered. (Same practical behavior as Vitest’s non-hoisted vi.doMock().) [3], [4]
What this means for compatibility (Bun vs Vitest)
- Code that relies on Vitest hoisting (or `vi.hoisted`) will often break under Bun unless you change import timing. Vitest itself documents that these transform-based features “do not work in Deno or Bun.” [4]
- To make tests portable to Bun, ensure mocks are set before the import happens, typically by using dynamic import.
Portable pattern (works in Bun; also valid in Vitest)
```ts
import { test, expect, vi } from "bun:test"; // or from "vitest"

test("mock example", async () => {
  vi.mock("./dep", () => ({ fn: () => "mocked" }));
  const { fn } = await import("./dep"); // import AFTER vi.mock
  expect(fn()).toBe("mocked");
});
```

Rule of thumb

- Vitest: `vi.mock` is hoisted; use `vi.hoisted` when you must share values with the hoisted factory. [1]
- Bun: treat `vi.mock` as not hoisted; use “mock first, then `await import()`”. Bun’s module mocking is evaluated when the module is actually imported. [3]
Sources:
[1] Vitest API: vi.mock hoisting and vi.hoisted / vi.doMock behavior — https://vitest.dev/api/vi
[2] Vitest “How it works”: transforms imports + moves vi.mock to top — https://vitest.dev/guide/mocking/modules.html
[3] Bun mocks + Vitest-compat vi and module mocking behavior — https://bun.com/docs/test/mocks
[4] Vitest experimental config: vi.mock/vi.hoisted transform features “do not work in Deno or Bun” — https://cn.vitest.dev/config/experimental
vi.mock() calls inside test blocks won't affect runMigration's already-cached dependencies in Bun.
Bun's test runner does not implement Vitest's compile-time hoisting of vi.mock() calls. When vi.mock("../src/rls/scanner", ...) is invoked inside an it() block, it runs at runtime—after runMigration was imported at the module level. The scanner module is already cached by that point, so the mock has no effect.
Move vi.mock() calls to the module level (before any imports), use `await import()` to load the module that exports runMigration after the mock is registered, or run tests with --preload to register mocks before imports execute.
Affects: lines 73–75, 92–102, 122–127.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/core/test/migration.test.ts` around lines 73 - 77, The vi.mock calls
for the scanner are being registered at runtime inside the test and therefore do
not replace the already-cached scanner used by runMigration; move all
vi.mock("../src/rls/scanner", ...) calls to the module/top level in
migration.test.ts (before any import or require of the module that exports
runMigration) so the mock is hoisted before runMigration is imported, or
alternatively register the mock then dynamically import the module that exports
runMigration (e.g., await import(...)) after mocking; apply the same change for
the other vi.mock occurrences mentioned so the scanner mock actually affects
runMigration (or run tests with --preload to register mocks prior to imports).
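The cache-timing failure described above can be illustrated with a toy registry (a hand-rolled stand-in for the runtime's module cache, not Bun's actual loader):

```typescript
// Toy module "cache": a stand-in for the runtime's module registry.
type Scanner = { scanPolicies: () => string };

const moduleCache = new Map<string, Scanner>();

function loadScanner(): Scanner {
  // The first load resolves the "real" module; later loads hit the cache.
  if (!moduleCache.has("scanner")) {
    moduleCache.set("scanner", { scanPolicies: () => "real" });
  }
  return moduleCache.get("scanner")!;
}

// A consumer "imported" at the top of the file captures the real module.
const earlyConsumer = loadScanner();

// A runtime mock registered afterwards (like vi.mock inside it() under Bun)
// replaces the cache entry...
moduleCache.set("scanner", { scanPolicies: () => "mocked" });

// ...so only loads that happen after the mock observe it.
const lateConsumer = loadScanner();
```

This is why "mock first, then `await import()`" works: the dynamic import plays the role of `lateConsumer` above.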
**File:** [`apps/test-project/src/index.ts`](apps/test-project/src/index.ts:20-31)
**Lines:** 20-31
Several file links appear malformed for GitHub rendering.
Links formatted as (.../file.ts:20-31) typically won’t resolve as clickable line references. Use standard file links (and #Lx-Ly anchors when needed).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@PR31_CHANGES_DOCUMENTATION.md` around lines 26 - 27, The file links in
PR31_CHANGES_DOCUMENTATION.md use a non-GitHub format like "(.../file.ts:20-31)"
which won't render as clickable line anchors; update those references to
standard GitHub file links pointing to apps/test-project/src/index.ts with the
proper `#L20-L31` (or `#L20-L31`) anchor syntax so they become clickable (e.g., use
the repository-relative path and append `#L20-L31`); locate the occurrences
referencing apps/test-project/src/index.ts and replace them with the
GitHub-style link format.
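For example, taking the path from the comment above, a GitHub-clickable line reference looks like:

```markdown
[`apps/test-project/src/index.ts`](apps/test-project/src/index.ts#L20-L31)
```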
**Problem:** The code built shell commands with string interpolation using `execSync` and `url`, creating a command injection vulnerability.

**Solution:** Replaced with argument-array style process spawns using `Bun.spawn` with separate arguments, preventing shell interpretation.
Minor wording fix: hyphenate compound modifier.
Use “argument-array-style process spawns” for cleaner grammar.
🧰 Tools
🪛 LanguageTool
[grammar] ~401-~401: Use a hyphen to join words.
Context: ...Solution:** Replaced with argument-array style process spawns using Bun.spawn w...
(QB_NEW_EN_HYPHEN)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@PR31_CHANGES_DOCUMENTATION.md` at line 401, The phrase "argument-array style
process spawns" should be hyphenated as a compound modifier; update the sentence
containing "Replaced with argument-array style process spawns using `Bun.spawn`
with separate arguments, preventing shell interpretation." to "Replaced with
argument-array-style process spawns using `Bun.spawn` with separate arguments,
preventing shell interpretation." to fix the grammar.
Enable Subscription resolvers only when subscriptions are explicitly enabled in the merged configuration, rather than always including an empty Subscription object. This allows users to disable GraphQL subscriptions if not needed.

test(shared): fix directory path regex to allow nested paths

Update the directory path validation regex to accept nested directory structures (e.g., `a/b/c`) in addition to single-level paths (e.g., `a/b`).
Actionable comments posted: 1
♻️ Duplicate comments (2)
packages/cli/src/commands/init.ts (2)
1334-1337: ⚠️ Potential issue | 🟠 Major

Use masked input for `STORAGE_SECRET_KEY`.

The secret key is still collected with `prompts.text`, which echoes sensitive input in terminal history/recordings.

Suggested fix

```diff
- storageCredentials.STORAGE_SECRET_KEY = await prompts.text({
+ storageCredentials.STORAGE_SECRET_KEY = await prompts.secret({
    message: "Secret Access Key:",
    initial: "",
  });
```

```bash
#!/bin/bash
# Verify whether the prompts utility already exposes a masked secret input helper.
fd prompts.ts
rg -n "export .*secret|function secret|const secret" -- packages/cli/src/utils
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli/src/commands/init.ts` around lines 1334 - 1337, Replace the echoing text prompt with a masked/password prompt: change the call that sets storageCredentials.STORAGE_SECRET_KEY from prompts.text(...) to the prompts password/masked helper (e.g., prompts.password(...) or prompts({ type: "password", message: "Secret Access Key:" }) depending on your prompts wrapper) so the secret is not echoed; keep the same message and initial/validation behavior but use the password/masked input API used by the prompts utility instead of prompts.text.
739-754: ⚠️ Potential issue | 🔴 Critical

Auth check is bypassable and relies on unverified client input.

The generated middleware treats any `x-user-id` header + cookie presence as authenticated. That allows trivial impersonation and unauthorized presign/delete operations. Also, `c.req.cookie(...)` should be verified against Hono’s actual request API before shipping this template.

Suggested fix (use real auth middleware/user context instead of header trust)

```diff
-function buildStorageRoute(provider: StorageProvider): string {
+function buildStorageRoute(provider: StorageProvider, useAuth: boolean): string {
 ...
-// Helper to check if user is authenticated and get user ID
-async function getAuthenticatedUserId(c: any): Promise<{ id: string } | null> {
-  const sessionCookie = c.req.cookie('better-auth.session_token');
-  if (!sessionCookie) {
-    return null;
-  }
-  const userId = c.req.header('x-user-id');
-  if (!userId) {
-    return null;
-  }
-  return { id: userId };
-}
+${useAuth ? "import { getAuthUser, requireAuth } from '../middleware/auth';" : ""}
 ...
 export const storageRoute = new Hono();
-
-// Auth middleware for all storage routes
-storageRoute.use('*', async (c, next) => {
-  const user = await getAuthenticatedUserId(c);
-  if (!user) {
-    return c.json({ error: 'Unauthorized' }, 401);
-  }
-  c.set('userId', user.id);
-  await next();
-});
+${useAuth ? "storageRoute.use('*', requireAuth);" : ""}
```

In Hono v4, is `c.req.cookie('name')` a supported API on `Context.req`, or should cookie access use helpers from `hono/cookie` (e.g., `getCookie`)?

Also applies to: 774-781
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/cli/src/commands/init.ts` around lines 739 - 754, The getAuthenticatedUserId function improperly trusts client-supplied headers and an unverified cookie (uses c.req.cookie and x-user-id) allowing impersonation; replace this with real authentication lookup by integrating your app's auth middleware/session store: validate the session cookie/token against the auth provider or session DB (do not trust x-user-id), obtain the canonical user id from that verified session object, and return it; also verify correct Hono cookie access (use hono/cookie helpers like getCookie or the framework's supported API instead of c.req.cookie) and apply the same fix for the duplicated logic around lines 774-781 so all auth checks use the verified session/context rather than raw headers.
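As a sketch of the verified-session lookup the comment asks for (the in-memory `Map` store, names, and TTL handling are illustrative; a real app would consult the auth provider's session table):

```typescript
// Illustrative server-side session store: opaque token -> verified identity.
const sessions = new Map<string, { userId: string; expiresAt: number }>();

function createSession(userId: string, token: string, ttlMs: number): void {
  sessions.set(token, { userId, expiresAt: Date.now() + ttlMs });
}

// Resolve the user from the verified session only; never trust an
// x-user-id header or the mere presence of a cookie.
function getAuthenticatedUserId(sessionToken: string | undefined): string | null {
  if (!sessionToken) return null;
  const session = sessions.get(sessionToken);
  if (!session || session.expiresAt < Date.now()) return null;
  return session.userId;
}
```

In a Hono handler, the token would come from `getCookie(c, 'better-auth.session_token')` via `hono/cookie` and be passed to this lookup.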
🧹 Nitpick comments (1)
packages/cli/src/commands/init.ts (1)
1268-1277: Prefer deterministic route generation over post-write string replacement.

This replacement-based wiring is fragile to template text changes. Consider making `buildRoutesIndex` accept a storage flag and emit the final content directly.

Refactor sketch

```diff
-function buildRoutesIndex(): string {
+function buildRoutesIndex(storageEnabled: boolean): string {
   return `...
 import { usersRoute } from './users';
+${storageEnabled ? "import { storageRoute } from './storage';" : ""}
 ...
   app.route('/api/users', usersRoute);
+${storageEnabled ? "  app.route('/api/storage', storageRoute);" : ""}
 }`;
 }
 ...
-await writeFile(path.join(projectPath, "src/routes/index.ts"), buildRoutesIndex());
+await writeFile(path.join(projectPath, "src/routes/index.ts"), buildRoutesIndex(!!storageProvider));
```
Verify each finding against the current code and only fix it if needed. In `@packages/cli/src/commands/init.ts` around lines 1268 - 1277, The current approach mutates routesIndex via string .replace calls after writing and is fragile; instead update buildRoutesIndex to accept a flag (e.g., includeStorage: boolean) and emit the final routes index content with or without the storage imports/routes directly (generate import for storageRoute and the app.route('/api/storage', storageRoute) line when includeStorage is true). Replace the post-write manipulation around variables routesIndex and routesIndexPath—call buildRoutesIndex(includeStorage) to get the completed file content and then writeFile(routesIndexPath, content); this removes reliance on matching template strings like "import { usersRoute } from './users';" and "app.route('/api/users', usersRoute);" and makes generation deterministic.
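A runnable sketch of that flag-driven generator (the emitted file body is simplified relative to the real template):

```typescript
// Sketch: emit the routes index from a flag instead of patching the file after writing.
function buildRoutesIndex(includeStorage: boolean): string {
  const imports = [
    "import { Hono } from 'hono';",
    "import { usersRoute } from './users';",
  ];
  const mounts = ["app.route('/api/users', usersRoute);"];
  if (includeStorage) {
    imports.push("import { storageRoute } from './storage';");
    mounts.push("app.route('/api/storage', storageRoute);");
  }
  return [...imports, "", "export const app = new Hono();", ...mounts, ""].join("\n");
}
```

The caller then writes `buildRoutesIndex(!!storageProvider)` once, with no reliance on matching template strings.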
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/cli/src/commands/init.ts`:
- Around line 940-947: The generated env shape in init.ts uses a single
storageEnvFields string so STORAGE_ENDPOINT is optional for all providers;
change the generation to emit provider-specific schema based on storageProvider:
if storageProvider === 's3' keep STORAGE_ENDPOINT optional (and REGION optional
as before), but for 'r2', 'backblaze', and 'minio' make STORAGE_ENDPOINT
required (z.string().min(1)) and include any provider-specific required fields;
update the logic that constructs storageEnvFields (the variable named
storageEnvFields and the branching that reads storageProvider) so the produced
env.ts contains the correct zod validations per provider.
---
Duplicate comments:
In `@packages/cli/src/commands/init.ts`:
- Around line 1334-1337: Replace the echoing text prompt with a masked/password
prompt: change the call that sets storageCredentials.STORAGE_SECRET_KEY from
prompts.text(...) to the prompts password/masked helper (e.g.,
prompts.password(...) or prompts({ type: "password", message: "Secret Access
Key:" }) depending on your prompts wrapper) so the secret is not echoed; keep
the same message and initial/validation behavior but use the password/masked
input API used by the prompts utility instead of prompts.text.
- Around line 739-754: The getAuthenticatedUserId function improperly trusts
client-supplied headers and an unverified cookie (uses c.req.cookie and
x-user-id) allowing impersonation; replace this with real authentication lookup
by integrating your app's auth middleware/session store: validate the session
cookie/token against the auth provider or session DB (do not trust x-user-id),
obtain the canonical user id from that verified session object, and return it;
also verify correct Hono cookie access (use hono/cookie helpers like getCookie
or the framework's supported API instead of c.req.cookie) and apply the same fix
for the duplicated logic around lines 774-781 so all auth checks use the
verified session/context rather than raw headers.
---
Nitpick comments:
In `@packages/cli/src/commands/init.ts`:
- Around line 1268-1277: The current approach mutates routesIndex via string
.replace calls after writing and is fragile; instead update buildRoutesIndex to
accept a flag (e.g., includeStorage: boolean) and emit the final routes index
content with or without the storage imports/routes directly (generate import for
storageRoute and the app.route('/api/storage', storageRoute) line when
includeStorage is true). Replace the post-write manipulation around variables
routesIndex and routesIndexPath—call buildRoutesIndex(includeStorage) to get the
completed file content and then writeFile(routesIndexPath, content); this
removes reliance on matching template strings like "import { usersRoute } from
'./users';" and "app.route('/api/users', usersRoute);" and makes generation
deterministic.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: a1fcd9ca-a1d2-4ec4-9d6c-9a8eef0e62ea
📒 Files selected for processing (3)
- packages/cli/src/commands/init.ts
- packages/core/src/graphql/resolvers.ts
- packages/shared/test/constants.test.ts
```ts
const storageEnvFields = storageProvider
  ? ` STORAGE_PROVIDER: z.enum(['s3', 'r2', 'backblaze', 'minio']),
  STORAGE_ACCESS_KEY: z.string().min(1),
  STORAGE_SECRET_KEY: z.string().min(1),
  STORAGE_BUCKET: z.string().min(1),
  STORAGE_REGION: z.string().optional(),
  STORAGE_ENDPOINT: z.string().optional(),`
  : "";
```
Non-S3 provider config is under-validated in generated env.ts.
STORAGE_ENDPOINT is optional for all storage providers, but r2/backblaze/minio require it. This permits invalid configs that fail only at runtime.
Suggested fix (provider-specific env shape at generation time)
```diff
- const storageEnvFields = storageProvider
-   ? ` STORAGE_PROVIDER: z.enum(['s3', 'r2', 'backblaze', 'minio']),
-   STORAGE_ACCESS_KEY: z.string().min(1),
-   STORAGE_SECRET_KEY: z.string().min(1),
-   STORAGE_BUCKET: z.string().min(1),
-   STORAGE_REGION: z.string().optional(),
-   STORAGE_ENDPOINT: z.string().optional(),`
-   : "";
+ const storageEnvFields = storageProvider === "s3"
+   ? ` STORAGE_PROVIDER: z.literal('s3'),
+   STORAGE_ACCESS_KEY: z.string().min(1),
+   STORAGE_SECRET_KEY: z.string().min(1),
+   STORAGE_BUCKET: z.string().min(1),
+   STORAGE_REGION: z.string().min(1).default('us-east-1'),`
+   : storageProvider
+     ? ` STORAGE_PROVIDER: z.enum(['r2', 'backblaze', 'minio']),
+   STORAGE_ACCESS_KEY: z.string().min(1),
+   STORAGE_SECRET_KEY: z.string().min(1),
+   STORAGE_BUCKET: z.string().min(1),
+   STORAGE_ENDPOINT: z.string().url(),`
+     : "";
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/cli/src/commands/init.ts` around lines 940 - 947, The generated env
shape in init.ts uses a single storageEnvFields string so STORAGE_ENDPOINT is
optional for all providers; change the generation to emit provider-specific
schema based on storageProvider: if storageProvider === 's3' keep
STORAGE_ENDPOINT optional (and REGION optional as before), but for 'r2',
'backblaze', and 'minio' make STORAGE_ENDPOINT required (z.string().min(1)) and
include any provider-specific required fields; update the logic that constructs
storageEnvFields (the variable named storageEnvFields and the branching that
reads storageProvider) so the produced env.ts contains the correct zod
validations per provider.
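Put differently, the provider-specific rule reduces to a small check like this sketch (plain TypeScript rather than zod; field names follow the generated env shape):

```typescript
type StorageProvider = "s3" | "r2" | "backblaze" | "minio";

interface StorageEnv {
  STORAGE_PROVIDER: StorageProvider;
  STORAGE_ACCESS_KEY?: string;
  STORAGE_SECRET_KEY?: string;
  STORAGE_BUCKET?: string;
  STORAGE_REGION?: string;
  STORAGE_ENDPOINT?: string;
}

// Returns a list of config errors; the endpoint is only optional for plain S3.
function validateStorageEnv(env: StorageEnv): string[] {
  const errors: string[] = [];
  const required = ["STORAGE_ACCESS_KEY", "STORAGE_SECRET_KEY", "STORAGE_BUCKET"] as const;
  for (const key of required) {
    if (!env[key]) errors.push(`${key} is required`);
  }
  if (env.STORAGE_PROVIDER !== "s3" && !env.STORAGE_ENDPOINT) {
    errors.push("STORAGE_ENDPOINT is required for r2/backblaze/minio");
  }
  return errors;
}
```

Encoding the same split at env-schema generation time keeps bad configs from surviving until the first presign call.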
Delete outdated task tracking and PR documentation files that are no longer needed:

- bb dev hot reload documentation
- Error message improvement documentation
- PR #31 changes documentation
- Core task issues documentation
- March 2026 update documentation
Summary by CodeRabbit
New Features
Bug Fixes & Improvements
Documentation