
Move from in-memory store to persistent DB#28

Merged
fuzziecoder merged 1 commit into main from codex/fix-issue-#26-by-migrating-to-db
Feb 19, 2026

Conversation

fuzziecoder (Owner) commented Feb 19, 2026

Title
feat: migrate backend data layer to persistent storage

Body
Data resets on server restart because dataStore is in-memory. Integrate persistence (Supabase/Postgres) with repository abstraction.

Acceptance Criteria

Data survives process restarts.
Orders/payments/users read/write through DB layer.
Existing API contracts remain compatible.
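The "repository abstraction" the body asks for can be sketched roughly as below. This is an illustrative sketch, not code from the PR: the class and method names (`InMemoryOrderRepository`, `create`, `findBySpotId`) are hypothetical stand-ins for whatever contract the backend settles on.

```javascript
// Hypothetical repository contract: API handlers depend only on these
// methods, so an in-memory and a DB-backed implementation are interchangeable.
class InMemoryOrderRepository {
  constructor() {
    this.orders = new Map(); // id -> order
  }
  create(order) {
    this.orders.set(order.id, order);
    return order;
  }
  findBySpotId(spotId) {
    return [...this.orders.values()].filter((o) => o.spotId === spotId);
  }
}

// A SQLite- or Postgres-backed class exposing the same create/findBySpotId
// methods could be swapped in without touching the API handlers, which is
// what keeps the existing API contracts compatible.
const repo = new InMemoryOrderRepository();
repo.create({ id: 'ord-1', spotId: 'spot-2025-07-26', total: 360 });
console.log(repo.findBySpotId('spot-2025-07-26').length); // 1
```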

vercel Bot commented Feb 19, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

| Project | Deployment | Actions | Updated (UTC) |
| --- | --- | --- | --- |
| brocode-spot-update-app | Ready | Preview, Comment | Feb 19, 2026 1:29am |

coderabbitai Bot commented Feb 19, 2026

📝 Walkthrough

The backend transitions from in-memory data storage to a persistent SQLite database. A new database abstraction layer exposes methods for authentication, catalog retrieval, spots management, and order operations. The API server is updated to use database methods instead of the in-memory store, with documentation revised to reflect the persistent database setup.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| Documentation — backend/README.md | Title updated to "Backend API"; added database section describing SQLite persistence at backend/data/brocode.sqlite with BROCODE_DB_PATH override capability; removed in-memory storage deprecation note. |
| Database Layer — backend/db.js | New SQLite data access module initializing the database with WAL journaling and foreign key constraints. Defines schema for users, spots, catalog_items, orders, and order_items tables. Seeds initial data on first run. Exports a public API with methods: getUserByCredentials, getCatalog, getCatalogCategory, getSpots, getOrders, createOrder, and getBillBySpotId. Implements transactional integrity for order creation. |
| API Server — backend/server.js | Replaced in-memory dataStore references with database method calls across authentication, catalog, spots, orders, and billing endpoints. Updated imports from ./store.js to ./db.js. Removed password stripping from login response. Added startup logging of database path. |
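The "transactional integrity for order creation" noted above typically follows a BEGIN/COMMIT/ROLLBACK pattern. A generic sketch of that pattern is below; it uses a stand-in `db` object with an `exec(sql)` method so it runs without `node:sqlite` (which needs Node 22.5+), and is not the actual `createOrder` implementation:

```javascript
// Generic transaction wrapper: commit if fn() succeeds, roll back if it throws.
function withTransaction(db, fn) {
  db.exec('BEGIN');
  try {
    const result = fn();
    db.exec('COMMIT');
    return result;
  } catch (error) {
    db.exec('ROLLBACK');
    throw error; // surface the failure to the caller after rolling back
  }
}

// Demonstration with a fake db that simply records the statements it runs.
const log = [];
const fakeDb = { exec: (sql) => log.push(sql) };

withTransaction(fakeDb, () => log.push('INSERT orders'));
console.log(log.join(' -> ')); // BEGIN -> INSERT orders -> COMMIT

try {
  withTransaction(fakeDb, () => { throw new Error('bad item'); });
} catch {}
console.log(log[log.length - 1]); // ROLLBACK
```

The point is that a crash or bad row between the order INSERT and its order_items INSERTs never leaves a half-written order behind.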

Sequence Diagram(s)

sequenceDiagram
    participant Client as Client
    participant Server as Server
    participant DB as SQLite Database
    
    Note over Client,DB: Order Creation Flow
    Client->>Server: POST /orders (spotId, userId, items)
    activate Server
    Server->>DB: BEGIN TRANSACTION
    activate DB
    DB-->>Server: Transaction started
    deactivate DB
    
    Server->>DB: INSERT INTO orders
    activate DB
    DB-->>Server: Order created
    deactivate DB
    
    Server->>DB: INSERT INTO order_items (multiple rows)
    activate DB
    DB-->>Server: Items inserted
    deactivate DB
    
    Server->>DB: COMMIT
    activate DB
    DB-->>Server: Committed
    deactivate DB
    Server-->>Client: 201 Created (order with items)
    deactivate Server
sequenceDiagram
    participant Client as Client
    participant Server as Server
    participant DB as SQLite Database
    
    Note over Client,DB: Login & Auth Flow
    Client->>Server: POST /login (username, password)
    activate Server
    Server->>DB: SELECT * FROM users WHERE username = ?
    activate DB
    DB-->>Server: User record (with hashed password)
    deactivate DB
    
    alt Credentials Match
        Server-->>Client: 200 OK (user object)
    else Invalid Credentials
        Server-->>Client: 400 Bad Request
    end
    deactivate Server

Estimated Code Review Effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Poem

🐰 Hops of joy! No more data fade,
SQLite persists what we've made,
Transactions keep orders in line,
Database seeded, everything fine!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 inconclusive)

- Title check — ❓ Inconclusive. Explanation: The title 'Codex-generated pull request' is vague and generic, using a non-descriptive term that doesn't convey meaningful information about the changeset. Resolution: Update the title to be more descriptive, such as 'Migrate backend API from in-memory store to SQLite persistence' to clearly summarize the main change.
✅ Passed checks (2 passed)
- Docstring Coverage — ✅ Passed. No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.
- Description Check — ✅ Passed. Check skipped: CodeRabbit's high-level summary is enabled.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
  • 📝 Generate docstrings

🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch codex/fix-issue-#26-by-migrating-to-db

Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (4)
backend/db.js (3)

160-163: getCatalogCategory method appears unused.

This method is defined, but the server at lines 87-88 uses database.getCatalog()[category] instead. Consider either removing this dead code or updating the server to use this more efficient method, which queries only the specific category.

If you intend to use this method, update backend/server.js:

   if (method === 'GET' && path.startsWith('/api/catalog/')) {
     const category = path.replace('/api/catalog/', '');
-    const catalog = database.getCatalog();
-    const items = catalog[category];
+    const items = database.getCatalogCategory(category);
 
-    if (!items) {
+    if (!items || items.length === 0) {
       sendJson(res, 404, { error: `Unknown category: ${category}` });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/db.js` around lines 160 - 163, getCatalogCategory is defined but
unused; either remove it or update server code to call it instead of indexing
database.getCatalog(). Specifically, in server.js replace the use of
database.getCatalog()[category] with a call to
database.getCatalogCategory(category) and adapt response handling to the rows
array returned by getCatalogCategory (id, name, price) so the endpoint returns
the same shape clients expect; alternatively, delete getCatalogCategory from
backend/db.js if you prefer to keep the existing getCatalog() implementation.

6-7: Consider using import.meta.url for more reliable path resolution.

Using process.cwd() makes the default database path dependent on the current working directory when the process starts. If the server is started from a different directory, the database file could be created in an unexpected location.

Suggested fix using import.meta.url
+import { fileURLToPath } from 'node:url';
+
+const __dirname = dirname(fileURLToPath(import.meta.url));
-const defaultDbPath = resolve(process.cwd(), 'backend', 'data', 'brocode.sqlite');
+const defaultDbPath = resolve(__dirname, 'data', 'brocode.sqlite');
 const dbPath = process.env.BROCODE_DB_PATH ? resolve(process.env.BROCODE_DB_PATH) : defaultDbPath;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/db.js` around lines 6 - 7, The defaultDbPath currently uses
process.cwd() which makes the DB location depend on the process working dir;
update the resolution logic for defaultDbPath to compute a path relative to the
current module using import.meta.url (e.g., base directory derived from new
URL('.', import.meta.url)) instead of process.cwd(), and keep the dbPath
fallback behavior (use process.env.BROCODE_DB_PATH if present, otherwise use the
new defaultDbPath). Ensure you still use resolve/path utilities and update any
references to defaultDbPath and dbPath accordingly.

200-235: Consider validating spotId and userId exist before creating order.

The createOrder method doesn't verify that the provided spotId and userId reference existing records. While the foreign key constraints will cause the INSERT to fail, the error message from SQLite won't be user-friendly. Consider checking existence first and returning a descriptive error.

Suggested validation
 createOrder({ spotId, userId, items }) {
+  const spotExists = db.prepare('SELECT 1 FROM spots WHERE id = ?').get(spotId);
+  if (!spotExists) {
+    throw new Error(`Spot not found: ${spotId}`);
+  }
+  const userExists = db.prepare('SELECT 1 FROM users WHERE id = ?').get(userId);
+  if (!userExists) {
+    throw new Error(`User not found: ${userId}`);
+  }
+
   const parsedItems = items.map((item) => {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/db.js` around lines 200 - 235, The createOrder function must check
that the provided spotId and userId exist before attempting inserts; add
preflight existence queries (e.g., SELECT COUNT(*) or SELECT 1) against the
spots and users tables and if either check fails throw a descriptive error like
"Spot not found" or "User not found" before starting the transaction/BEGIN. Keep
using the same prepared statements (insertOrder, insertOrderItem) and
db.exec('BEGIN')/COMMIT/ROLLBACK flow, but perform the existence checks prior to
calling db.exec('BEGIN') so you can return clear, user-friendly errors instead
of relying on SQLite constraint failures.
backend/server.js (1)

131-135: Consider returning 404 for non-existent spots.

The bills endpoint returns a response even when the spotId doesn't exist in the database (with total: 0 and orderCount: 0). This might be intentional, but consider returning a 404 to distinguish between "spot exists with no orders" vs "spot doesn't exist".

Suggested validation
   if (method === 'GET' && path.startsWith('/api/bills/')) {
     const spotId = path.replace('/api/bills/', '');
+    const spots = database.getSpots();
+    if (!spots.some(s => s.id === spotId)) {
+      sendJson(res, 404, { error: `Spot not found: ${spotId}` });
+      return;
+    }
     const bill = database.getBillBySpotId(spotId);
     sendJson(res, 200, bill);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/server.js` around lines 131 - 135, The bills handler currently always
returns whatever database.getBillBySpotId(spotId) yields, which masks missing
spots; update the GET /api/bills/:spotId logic to check the result of
database.getBillBySpotId (or call a database.hasSpot/getSpot method) and if it
returns null/undefined, respond with a 404 (e.g., sendJson(res, 404, { error:
'Spot not found' })); otherwise return the bill as before using sendJson(res,
200, bill).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@backend/db.js`:
- Around line 19-25: The users table and seed inserts store plaintext passwords
in the password column; replace this by hashing passwords with a secure
algorithm (bcrypt or argon2) before persisting and verifying via the
corresponding compare method at login. Update any seed data inserts (the seed
insert statements around lines 67-69) to use hashed passwords, and update
user-create and auth paths that write or check the password to call bcrypt.hash
(or argon2.hash) when creating/updating and bcrypt.compare (or argon2.verify)
when authenticating; keep the users table password column as TEXT but ensure no
plaintext is ever written to it. Ensure you import and configure the chosen
library (salt rounds for bcrypt) and update tests/fixtures accordingly.
- Around line 1-4: Add a Node.js engine requirement to package.json to ensure
the code importing node:sqlite and using DatabaseSync runs on a supported
runtime: add an "engines" field with "node": ">=22.5.0" (e.g., "engines": {
"node": ">=22.5.0" }) and keep a comment or package.json note that node:sqlite
is still experimental; this ensures the imports in backend/db.js (specifically
the use of DatabaseSync from 'node:sqlite') are only run on Node versions that
support it.


Comment thread backend/db.js
Comment on lines +1 to +4
import { existsSync, mkdirSync } from 'node:fs';
import { dirname, resolve } from 'node:path';
import { randomUUID } from 'node:crypto';
import { DatabaseSync } from 'node:sqlite';

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🌐 Web query:

Node.js node:sqlite DatabaseSync minimum version requirement

💡 Result:

DatabaseSync in Node.js’s built-in node:sqlite module requires Node.js v22.5.0 or later (that’s when DatabaseSync was added). [1]

Note: node:sqlite was added in v22.5.0, and later became usable without --experimental-sqlite starting in v22.13.0 / v23.4.0 (it’s still marked experimental). [1]

Sources:
[1] Node.js SQLite documentation (API history for node:sqlite and DatabaseSync) (nodejs.org)

🏁 Script executed:

find . -maxdepth 2 -name "package.json" -o -name ".nvmrc" -o -name ".node-version" | head -20

Repository: fuzziecoder/Brocode-Party-Update-App

Length of output: 94


🏁 Script executed:

cat package.json 2>/dev/null | head -50

Repository: fuzziecoder/Brocode-Party-Update-App

Length of output: 912


Add Node.js version requirement to package.json.

The node:sqlite module with DatabaseSync requires Node.js 22.5.0 or later. Add an engines field to package.json to enforce this: "engines": { "node": ">=22.5.0" }. Note that node:sqlite remains an experimental module.
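Concretely, the suggested entry would look something like the fragment below in package.json (the version floor comes from the Node.js documentation quoted above; the rest of the file is omitted):

```json
{
  "engines": {
    "node": ">=22.5.0"
  }
}
```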



@devin-ai-integration devin-ai-integration Bot left a comment


Devin Review found 1 potential issue.

View 5 additional findings in Devin Review.


Comment thread backend/db.js
Comment on lines +66 to +90
if (!hasUsers) {
const insertUser = db.prepare('INSERT INTO users (id, username, password, name, role) VALUES (?, ?, ?, ?, ?)');
insertUser.run('u-1', 'brocode', 'changeme', 'Ram', 'admin');
insertUser.run('u-2', 'dhanush', 'changeme', 'Dhanush', 'user');

const insertSpot = db.prepare('INSERT INTO spots (id, location, date, host_user_id) VALUES (?, ?, ?, ?)');
insertSpot.run('spot-2025-07-26', 'Attibele Toll Plaza', '2025-07-26T10:00:00.000Z', 'u-1');

const insertCatalogItem = db.prepare('INSERT INTO catalog_items (id, category, name, price) VALUES (?, ?, ?, ?)');
insertCatalogItem.run('d-1', 'drinks', 'Brocode Beer', 180);
insertCatalogItem.run('d-2', 'drinks', 'Kingfisher Beer', 170);
insertCatalogItem.run('f-1', 'food', 'Beef Biriyani', 220);
insertCatalogItem.run('f-2', 'food', 'Parotta', 30);
insertCatalogItem.run('c-1', 'cigarettes', 'Marlboro', 25);
insertCatalogItem.run('c-2', 'cigarettes', 'Classic', 20);

db.prepare(
'INSERT INTO orders (id, spot_id, user_id, total_amount, created_at) VALUES (?, ?, ?, ?, ?)'
).run('ord-1', 'spot-2025-07-26', 'u-2', 360, '2025-07-26T10:30:00.000Z');

db.prepare(
`INSERT INTO order_items (order_id, product_id, name, quantity, unit_price, total)
VALUES (?, ?, ?, ?, ?, ?)`
).run('ord-1', 'd-1', 'Brocode Beer', 2, 180, 360);
}

🟡 Seed data not wrapped in a transaction, causing irrecoverable partial seeding on crash

The seed data inserts at backend/db.js:66-90 are not wrapped in a transaction, but the guard condition at line 64 only checks whether users exist (SELECT COUNT(*) AS count FROM users). If the process crashes after users are inserted but before spots, catalog items, or orders are inserted, subsequent restarts will see hasUsers as true and skip the entire seed block. The database will be permanently missing spots, catalog items, and/or orders with no automatic recovery path.

Root Cause and Impact

The createOrder method at backend/db.js:225-235 correctly uses BEGIN/COMMIT/ROLLBACK for atomicity, but the seed logic does not. The guard:

const hasUsers = db.prepare('SELECT COUNT(*) AS count FROM users').get().count > 0;

only checks for users. If users are committed but the process dies before catalog items or spots are inserted, the next startup skips seeding entirely.

Impact: The application would start with an empty catalog, no spots, and no sample orders, breaking the expected initial state. The only fix would be manual database intervention or deleting the SQLite file.

Prompt for agents
In backend/db.js, wrap the seed data block (lines 66-90) in a transaction to ensure atomicity. Replace the block with:

if (!hasUsers) {
  db.exec('BEGIN');
  try {
    const insertUser = db.prepare(...);
    // ... all existing insert statements ...
    db.exec('COMMIT');
  } catch (error) {
    db.exec('ROLLBACK');
    throw error;
  }
}

This ensures that if any seed insert fails or the process crashes mid-seed, the entire seed is rolled back and will be retried on next startup.


fuzziecoder changed the title from "Codex-generated pull request" to "Move from in-memory store to persistent DB" on Feb 19, 2026
fuzziecoder merged commit 2cbec1d into main on Feb 19, 2026
4 checks passed
fuzziecoder added a commit that referenced this pull request Feb 20, 2026
…ing-to-db-aljyuy

Secure backend: scrypt password hashing and legacy migration (Issue #28)
