Docker fixes, dual databases, and Drizzle Studio#3
Conversation
WORKDIR /app is created as root, but USER node switches before bun install runs. chown /app to node so it can create node_modules. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
📝 Walkthrough

Adds a second Postgres database (bigset_data) via an init SQL mount and changes POSTGRES_DB to bigset_internal. Splits backend DB URLs into DATABASE_URL (bigset_internal) and DATA_DATABASE_URL (bigset_data), exports DATA_DATABASE_URL, and adds a drizzle dataDb client. Updates docker-compose: backend env/health/wait, two Drizzle Studio services (ports 3600/3601) connected to each DB, and frontend BACKEND_URL. Frontend Dockerfile.dev now combines Bun install and chown /app to node:node in a single RUN.

```mermaid
sequenceDiagram
    participant Developer
    participant DockerCompose
    participant Postgres
    participant Backend
    participant DrizzleStudio_Internal as Studio_internal
    participant DrizzleStudio_Data as Studio_data
    participant Frontend
    Developer->>DockerCompose: docker-compose up (dev)
    DockerCompose->>Postgres: start with POSTGRES_DB=bigset_internal and mount init.sql
    Postgres->>Postgres: run init.sql (CREATE DATABASE bigset_data)
    DockerCompose->>Backend: wait for db healthy, start Backend with DATABASE_URL & DATA_DATABASE_URL
    Backend->>Postgres: connect to bigset_internal (DATABASE_URL)
    Backend->>Postgres: connect to bigset_data (DATA_DATABASE_URL) via dataDb
    DockerCompose->>DrizzleStudio_Internal: start studio bound to bigset_internal (port 3600)
    DockerCompose->>DrizzleStudio_Data: start studio bound to bigset_data (port 3601)
    Developer->>Frontend: access frontend (BACKEND_URL -> http://backend:3501)
```
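The init.sql step in the walkthrough above can be sketched as a minimal `db/init.sql`; only the CREATE DATABASE statement is stated in this PR, anything further would be an assumption:

```sql
-- db/init.sql
-- Runs only once, when the Postgres entrypoint initializes a fresh
-- cluster (mounted at /docker-entrypoint-initdb.d/init.sql).
CREATE DATABASE bigset_data;
```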
🚥 Pre-merge checks: ✅ Passed checks (4 passed)
- Add `studio` service to docker-compose — runs Drizzle Studio on :4983 so you can browse the database in the browser during dev
- Set `BACKEND_URL=http://backend:3501` on the frontend container — inside Docker, `localhost` means the container itself, not the host. The Next.js rewrite was proxying auth requests to localhost:3501, which doesn't exist inside the frontend container. Using the Docker service name `backend` resolves to the correct container on the Docker network.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
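The rewrite described in this commit can be sketched as follows. This is a hypothetical helper, not the project's actual next.config; the `/api/auth/:path*` route pattern and helper name are assumptions:

```typescript
// Sketch of the Next.js rewrite target resolution. Inside Docker,
// BACKEND_URL=http://backend:3501 points the proxy at the backend
// service; on the host it falls back to localhost.
const backendUrl = process.env.BACKEND_URL ?? "http://localhost:3501";

// In next.config this array would be returned from `rewrites()`.
function authRewrites(base: string) {
  return [
    { source: "/api/auth/:path*", destination: `${base}/api/auth/:path*` },
  ];
}

const rules = authRewrites(backendUrl);
```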
🧹 Nitpick comments (1)
docker-compose.dev.yml (1)
36-47: ⚡ Quick win — Consider mounting backend source volumes for live schema updates.

The `studio` service is built from the same context as `backend` but doesn't mount any source volumes. Drizzle Studio reads schema definitions from the codebase, so without volumes you'll need to rebuild the container to see schema changes during development. Consider adding the same volume mount as the backend service for better developer ergonomics.

🔄 Suggested volume configuration

```diff
 studio:
   build:
     context: ./backend
     dockerfile: Dockerfile.dev
   ports:
     - "4983:4983"
+  volumes:
+    - ./backend/src:/app/src
   environment:
     DATABASE_URL: postgres://bigset:bigset@db:5432/bigset
   depends_on:
     db:
       condition: service_healthy
   command: ["npx", "drizzle-kit", "studio", "--host", "0.0.0.0"]
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@docker-compose.dev.yml` around lines 36 - 47, The studio service is missing a source volume mount so schema changes aren't picked up live; update the studio service definition (service name "studio", build context "./backend", command ["npx","drizzle-kit","studio","--host","0.0.0.0"]) to include the same backend source volume used by the backend service (mount the project/backend source directory into the container) so Drizzle Studio reads schema file updates without rebuilding; ensure the volume path and any node_modules or build artifact mounts mirror the backend service to avoid file ownership or dev/server conflicts.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 46ae32da-5f5c-45e9-823a-605f2e20f4cd
📒 Files selected for processing (1)
docker-compose.dev.yml
- Split into two databases: `bigset_internal` (auth/app) and `bigset_data` (user-created datasets) on the same Postgres instance
- Add `db/init.sql` to create `bigset_data` on first startup
- Add `DATA_DATABASE_URL` env var and `data-db.ts` Drizzle instance
- Replace single studio with two: studio-internal (:3600) and studio-data (:3601)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@docker-compose.dev.yml`:
- Around line 31-32: Replace hardcoded credentials in the DATABASE_URL and
DATA_DATABASE_URL values with environment-variable interpolation (e.g., use
${DB_USER}, ${DB_PASSWORD}, ${DB_HOST}, ${DB_PORT}, ${DB_NAME}) and ensure the
same change is applied to the other occurrences noted (the entries referenced at
the other two locations). Update the compose service environment to build the
URLs from those interpolated vars and document required env var names in your
.env or deployment config so secrets are not committed in the repo.
- Line 17: The compose setup depends on ./db/init.sql mounted to
/docker-entrypoint-initdb.d/init.sql which only runs on first initialization, so
ensure creation of the bigset_data object even when an existing pgdata volume
exists by adding a startup migration step: modify the postgres service to run an
init script (or a small entrypoint wrapper) that checks for and creates
bigset_data if missing (using psql CMD against the database), or add a separate
one-off service/task that runs ./db/init.sql against the running DB on each
start; reference the existing mount
./db/init.sql:/docker-entrypoint-initdb.d/init.sql, the pgdata volume, and the
bigset_data object when implementing this check-and-create migration.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 918a0e7d-855f-4be4-b1c3-8cdb942aa6dc
📒 Files selected for processing (5)
- backend/.env.example
- backend/src/data-db.ts
- backend/src/env.ts
- db/init.sql
- docker-compose.dev.yml
✅ Files skipped from review due to trivial changes (2)
- db/init.sql
- backend/.env.example
```yaml
      retries: 10
    volumes:
      - pgdata:/var/lib/postgresql/data
      - ./db/init.sql:/docker-entrypoint-initdb.d/init.sql
```
bigset_data creation is not guaranteed for existing dev volumes.
Line 17 relies on /docker-entrypoint-initdb.d/init.sql, which only runs on first cluster initialization. With an existing pgdata volume, bigset_data may never be created, and the new connections can fail at runtime.
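One way to address this is an idempotent check-and-create step. The sketch below uses a hypothetical one-off compose service; the service name, image tag, and inline credentials are assumptions, not part of this PR:

```yaml
# Runs on every `docker-compose up`, after db is healthy, and creates
# bigset_data only if it is missing — covering pre-existing pgdata volumes.
db-init:
  image: postgres:16
  depends_on:
    db:
      condition: service_healthy
  entrypoint: >
    sh -c "psql postgres://bigset:bigset@db:5432/bigset_internal -tAc
    \"SELECT 1 FROM pg_database WHERE datname = 'bigset_data'\" | grep -q 1
    || psql postgres://bigset:bigset@db:5432/bigset_internal
    -c 'CREATE DATABASE bigset_data'"
  restart: "no"
```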
```yaml
      DATABASE_URL: postgres://bigset:bigset@db:5432/bigset_internal
      DATA_DATABASE_URL: postgres://bigset:bigset@db:5432/bigset_data
```
Remove inline DB credentials from compose URLs.
These URLs embed username/password directly in committed config. Move credentials to environment-variable interpolation so secrets are not hardcoded in repo history.
🔧 Suggested compose change

```diff
 backend:
 @@
   environment:
+    DB_USER: ${DB_USER}
+    DB_PASSWORD: ${DB_PASSWORD}
 @@
-    DATABASE_URL: postgres://bigset:bigset@db:5432/bigset_internal
-    DATA_DATABASE_URL: postgres://bigset:bigset@db:5432/bigset_data
+    DATABASE_URL: postgres://${DB_USER}:${DB_PASSWORD}@db:5432/bigset_internal
+    DATA_DATABASE_URL: postgres://${DB_USER}:${DB_PASSWORD}@db:5432/bigset_data
 studio-internal:
 @@
   environment:
-    DATABASE_URL: postgres://bigset:bigset@db:5432/bigset_internal
+    DATABASE_URL: postgres://${DB_USER}:${DB_PASSWORD}@db:5432/bigset_internal
 studio-data:
 @@
   environment:
-    DATABASE_URL: postgres://bigset:bigset@db:5432/bigset_data
+    DATABASE_URL: postgres://${DB_USER}:${DB_PASSWORD}@db:5432/bigset_data
```

As per coding guidelines: Do not commit secrets, API keys, or internal documentation to the repository. Use environment variables for sensitive data.

Also applies to: 45-45, 58-58
🧰 Tools
🪛 Checkov (3.2.528)
[medium] 31-32: Basic Auth Credentials
(CKV_SECRET_4)
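With compose interpolation in place, the values would live in an untracked `.env` file next to docker-compose.dev.yml; a minimal sketch, with placeholder values:

```
# .env (git-ignored; values are examples only)
DB_USER=bigset
DB_PASSWORD=change-me
```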
Summary

Fix frontend Dockerfile permissions: `WORKDIR /app` is created as root, but `USER node` switches before `bun install`. Added `chown node:node /app` so it can create `node_modules`.

Fix Docker networking for auth proxy: Inside Docker, `localhost` means the container itself. Added `BACKEND_URL=http://backend:3501` so the Next.js rewrite proxies auth requests to the backend container correctly.

Dual Postgres databases: Split into `bigset_internal` (auth/app tables) and `bigset_data` (user-created dataset tables) on the same Postgres instance. `db/init.sql` creates the second database on first startup. Backend gets a `DATA_DATABASE_URL` env var and a second Drizzle instance (`data-db.ts`).

Drizzle Studio x2: Two studio services for browsing each database during dev:
- :3600 — `bigset_internal` (auth tables)
- :3601 — `bigset_data` (dataset tables)

Port map

Test plan
- `make dev` — all five services start (db, backend, frontend, studio-internal, studio-data)
- `make down` then `make dev` — data persists
- `make clean` then `make dev` — fresh start, both databases recreated

🤖 Generated with Claude Code
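The two-URL split above can be guarded at backend startup. This is a hypothetical helper, not the actual `data-db.ts` (which wires a Drizzle client instead); the function name is an assumption:

```typescript
// Sketch: verify DATABASE_URL and DATA_DATABASE_URL actually target
// different databases before constructing the two Drizzle clients.
// Node's WHATWG URL parses postgres:// URLs, exposing the db name
// as the pathname.
function assertDistinctDatabases(internalUrl: string, dataUrl: string): void {
  const internal = new URL(internalUrl);
  const data = new URL(dataUrl);
  if (internal.pathname === data.pathname) {
    throw new Error(
      "DATABASE_URL and DATA_DATABASE_URL must point at different databases",
    );
  }
}

assertDistinctDatabases(
  "postgres://bigset:bigset@db:5432/bigset_internal",
  "postgres://bigset:bigset@db:5432/bigset_data",
);
```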