
Docker fixes, dual databases, and Drizzle Studio #3

Open
simantak-dabhade wants to merge 3 commits into main from fix/dockerfile-permissions

Conversation

@simantak-dabhade
Contributor

@simantak-dabhade simantak-dabhade commented May 15, 2026

Summary

  • Fix frontend Dockerfile permissions: WORKDIR /app is created as root, but USER node switches before bun install. Added chown node:node /app so it can create node_modules.

  • Fix Docker networking for auth proxy: Inside Docker, localhost means the container itself. Added BACKEND_URL=http://backend:3501 so the Next.js rewrite proxies auth requests to the backend container correctly.

  • Dual Postgres databases: Split into bigset_internal (auth/app tables) and bigset_data (user-created dataset tables) on the same Postgres instance. db/init.sql creates the second database on first startup. Backend gets DATA_DATABASE_URL env var and a second Drizzle instance (data-db.ts).

  • Drizzle Studio x2: Two studio services for browsing each database during dev:

    • :3600 → bigset_internal (auth tables)
    • :3601 → bigset_data (dataset tables)
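The permissions fix in the first bullet can be sketched as a Dockerfile fragment. This is a hedged reconstruction, not the actual file: the base image, copied file names, and exact layer ordering are assumptions; only the WORKDIR/chown/USER interaction comes from the PR description.

```dockerfile
# frontend/Dockerfile.dev (sketch; base image and file names are placeholders)
FROM oven/bun:1

# WORKDIR creates /app owned by root
WORKDIR /app

# Still root at this point, so hand the directory to the unprivileged user
# before switching; otherwise `bun install` cannot create node_modules
RUN chown node:node /app
USER node

COPY --chown=node:node package.json bun.lockb ./
RUN bun install

COPY --chown=node:node . .
CMD ["bun", "run", "dev"]
```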

Port map

| Port | Service |
|------|---------|
| 3500 | Frontend (Next.js) |
| 3501 | Backend (Fastify) |
| 3600 | Drizzle Studio — internal |
| 3601 | Drizzle Studio — data |
| 5432 | Postgres |

Test plan

  • make dev — all five services start (db, backend, frontend, studio-internal, studio-data)
  • Sign up at localhost:3500/auth/sign-up — works (no ECONNREFUSED)
  • Open https://local.drizzle.studio?port=3600 — shows auth tables with new user
  • Open https://local.drizzle.studio?port=3601 — connects to empty bigset_data
  • make down then make dev — data persists
  • make clean then make dev — fresh start, both databases recreated
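The make targets above presumably wrap docker compose. A hedged sketch of what the Makefile might contain follows; only the target names and their observed effects come from the test plan, the recipes and compose file name are guesses:

```makefile
# Assumed recipes; only the target names appear in the test plan
dev:
	docker compose -f docker-compose.dev.yml up --build

down:
	docker compose -f docker-compose.dev.yml down

clean:
	docker compose -f docker-compose.dev.yml down -v
```

The `-v` on `clean` drops named volumes, which would explain why both databases are recreated on the next `make dev`.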

🤖 Generated with Claude Code

WORKDIR /app is created as root, but USER node switches before
bun install runs. chown /app to node so it can create node_modules.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@coderabbitai

coderabbitai Bot commented May 15, 2026


📝 Walkthrough

Walkthrough

Adds a second Postgres database (bigset_data) via an init SQL mount and changes POSTGRES_DB to bigset_internal. Splits backend DB URLs into DATABASE_URL (bigset_internal) and DATA_DATABASE_URL (bigset_data), exports DATA_DATABASE_URL, and adds a drizzle dataDb client. Updates docker-compose: backend env/health/wait, two Drizzle Studio services (ports 3600/3601) connected to each DB, and frontend BACKEND_URL. Frontend Dockerfile.dev now combines Bun install and chown /app to node:node in a single RUN.

```mermaid
sequenceDiagram
  participant Developer
  participant DockerCompose
  participant Postgres
  participant Backend
  participant DrizzleStudio_Internal as Studio_internal
  participant DrizzleStudio_Data as Studio_data
  participant Frontend

  Developer->>DockerCompose: docker-compose up (dev)
  DockerCompose->>Postgres: start with POSTGRES_DB=bigset_internal and mount init.sql
  Postgres->>Postgres: run init.sql (CREATE DATABASE bigset_data)
  DockerCompose->>Backend: wait for db healthy, start Backend with DATABASE_URL & DATA_DATABASE_URL
  Backend->>Postgres: connect to bigset_internal (DATABASE_URL)
  Backend->>Postgres: connect to bigset_data (DATA_DATABASE_URL) via dataDb
  DockerCompose->>DrizzleStudio_Internal: start studio bound to bigset_internal (port 3600)
  DockerCompose->>DrizzleStudio_Data: start studio bound to bigset_data (port 3601)
  Developer->>Frontend: access frontend (BACKEND_URL -> http://backend:3501)
```
🚥 Pre-merge checks | ✅ 4
✅ Passed checks (4 passed)
| Check name | Status | Explanation |
|---|---|---|
| Title check | ✅ Passed | The title accurately captures the three main changes: Docker permissions fix, dual database setup, and Drizzle Studio additions, matching the changeset content. |
| Description check | ✅ Passed | The description is comprehensive and directly related to the changeset, explaining each fix and addition with clear rationale and test plan. |
| Linked Issues check | ✅ Passed | Check skipped because no linked issues were found for this pull request. |
| Out of Scope Changes check | ✅ Passed | Check skipped because no linked issues were found for this pull request. |


@simantak-dabhade simantak-dabhade requested a review from manav-tf May 15, 2026 22:01
- Add `studio` service to docker-compose — runs Drizzle Studio on :4983
  so you can browse the database in the browser during dev
- Set `BACKEND_URL=http://backend:3501` on the frontend container — inside
  Docker, `localhost` means the container itself, not the host. The Next.js
  rewrite was proxying auth requests to localhost:3501 which doesn't exist
  inside the frontend container. Using the Docker service name `backend`
  resolves to the correct container on the Docker network.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
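The networking fix in this commit can be illustrated with the destination logic the Next.js rewrite presumably uses. Only `BACKEND_URL=http://backend:3501` comes from the commit message; the rewrite path `/api/auth/:path*` and the localhost fallback are assumptions for the sketch:

```typescript
// Sketch of the proxy-destination logic behind the BACKEND_URL fix.
// Inside Docker, "localhost" resolves to the frontend container itself,
// so compose injects the service-name URL; outside Docker we fall back.
function backendUrl(env: Record<string, string | undefined>): string {
  return env.BACKEND_URL ?? "http://localhost:3501";
}

// Shape of a Next.js rewrites() entry proxying auth calls to the backend.
// The /api/auth/:path* source is a hypothetical path, not from the PR.
function authRewrite(env: Record<string, string | undefined>) {
  return {
    source: "/api/auth/:path*",
    destination: `${backendUrl(env)}/api/auth/:path*`,
  };
}

console.log(authRewrite({ BACKEND_URL: "http://backend:3501" }).destination);
// → http://backend:3501/api/auth/:path*
```

With `BACKEND_URL` unset (local dev outside Docker), the same rewrite falls back to `http://localhost:3501`.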

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (1)
docker-compose.dev.yml (1)

36-47: ⚡ Quick win

Consider mounting backend source volumes for live schema updates.

The studio service is built from the same context as backend but doesn't mount any source volumes. Drizzle Studio reads schema definitions from the codebase, so without volumes you'll need to rebuild the container to see schema changes during development.

Consider adding the same volume mount as the backend service for better developer ergonomics.

🔄 Suggested volume configuration

```diff
   studio:
     build:
       context: ./backend
       dockerfile: Dockerfile.dev
     ports:
       - "4983:4983"
+    volumes:
+      - ./backend/src:/app/src
     environment:
       DATABASE_URL: postgres://bigset:bigset@db:5432/bigset
     depends_on:
       db:
         condition: service_healthy
     command: ["npx", "drizzle-kit", "studio", "--host", "0.0.0.0"]
```

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 46ae32da-5f5c-45e9-823a-605f2e20f4cd

📥 Commits

Reviewing files that changed from the base of the PR and between de08cca and 4bd7917.

📒 Files selected for processing (1)
  • docker-compose.dev.yml

- Split into two databases: `bigset_internal` (auth/app) and `bigset_data`
  (user-created datasets) on the same Postgres instance
- Add `db/init.sql` to create `bigset_data` on first startup
- Add `DATA_DATABASE_URL` env var and `data-db.ts` Drizzle instance
- Replace single studio with two: studio-internal (:3600) and studio-data (:3601)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
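The init script mentioned above is probably a one-liner. A sketch of what `db/init.sql` likely contains (the actual file was not shown in this thread):

```sql
-- db/init.sql (sketch): executed from /docker-entrypoint-initdb.d only on
-- first cluster initialization. Note that Postgres has no
-- CREATE DATABASE IF NOT EXISTS, and CREATE DATABASE cannot run inside
-- a transaction block.
CREATE DATABASE bigset_data;
```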
@simantak-dabhade simantak-dabhade changed the title Fix frontend Dockerfile permissions for bun install Docker fixes, dual databases, and Drizzle Studio May 15, 2026

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 918a0e7d-855f-4be4-b1c3-8cdb942aa6dc

📥 Commits

Reviewing files that changed from the base of the PR and between 4bd7917 and 07438a4.

📒 Files selected for processing (5)
  • backend/.env.example
  • backend/src/data-db.ts
  • backend/src/env.ts
  • db/init.sql
  • docker-compose.dev.yml
✅ Files skipped from review due to trivial changes (2)
  • db/init.sql
  • backend/.env.example

Comment thread: docker-compose.dev.yml

```yaml
      retries: 10
    volumes:
      - pgdata:/var/lib/postgresql/data
      - ./db/init.sql:/docker-entrypoint-initdb.d/init.sql
```


⚠️ Potential issue | 🟠 Major | ⚡ Quick win

bigset_data creation is not guaranteed for existing dev volumes.

Line 17 relies on /docker-entrypoint-initdb.d/init.sql, which only runs on first cluster initialization. With an existing pgdata volume, bigset_data may never be created, and the new connections can fail at runtime.

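One way to implement the check-and-create step the bot describes is a one-off compose service. This is an illustrative sketch, not code from the PR: the service name, image tag, and credentials are assumptions:

```yaml
# Hypothetical db-init service: creates bigset_data when a pre-existing
# pgdata volume caused init.sql to be skipped. The grep succeeds only when
# the database already exists, so CREATE DATABASE runs at most once.
db-init:
  image: postgres:16
  depends_on:
    db:
      condition: service_healthy
  environment:
    PGPASSWORD: bigset
  command:
    - sh
    - -c
    - |
      psql -h db -U bigset -d bigset_internal -tAc \
        "SELECT 1 FROM pg_database WHERE datname = 'bigset_data'" \
        | grep -q 1 \
        || psql -h db -U bigset -d bigset_internal -c 'CREATE DATABASE bigset_data'
```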

Comment thread: docker-compose.dev.yml
Comment on lines +31 to +32

```yaml
      DATABASE_URL: postgres://bigset:bigset@db:5432/bigset_internal
      DATA_DATABASE_URL: postgres://bigset:bigset@db:5432/bigset_data
```


⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Remove inline DB credentials from compose URLs.

These URLs embed username/password directly in committed config. Move credentials to environment-variable interpolation so secrets are not hardcoded in repo history.

🔧 Suggested compose change

```diff
   backend:
     environment:
+      DB_USER: ${DB_USER}
+      DB_PASSWORD: ${DB_PASSWORD}
-      DATABASE_URL: postgres://bigset:bigset@db:5432/bigset_internal
-      DATA_DATABASE_URL: postgres://bigset:bigset@db:5432/bigset_data
+      DATABASE_URL: postgres://${DB_USER}:${DB_PASSWORD}@db:5432/bigset_internal
+      DATA_DATABASE_URL: postgres://${DB_USER}:${DB_PASSWORD}@db:5432/bigset_data

   studio-internal:
     environment:
-      DATABASE_URL: postgres://bigset:bigset@db:5432/bigset_internal
+      DATABASE_URL: postgres://${DB_USER}:${DB_PASSWORD}@db:5432/bigset_internal

   studio-data:
     environment:
-      DATABASE_URL: postgres://bigset:bigset@db:5432/bigset_data
+      DATABASE_URL: postgres://${DB_USER}:${DB_PASSWORD}@db:5432/bigset_data
```

As per coding guidelines, Do not commit secrets, API keys, or internal documentation to the repository. Use environment variables for sensitive data.

Also applies to: 45-45, 58-58

🧰 Tools
🪛 Checkov (3.2.528)

[medium] 31-32: Basic Auth Credentials

(CKV_SECRET_4)

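Following that suggestion, the interpolated variables would live in an untracked `.env` file next to the compose file. The variable names match the bot's example; the values here are illustrative only:

```shell
# .env (gitignored): read by docker compose for ${...} interpolation
DB_USER=bigset
DB_PASSWORD=local-dev-only
```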
