Merged
32 changes: 23 additions & 9 deletions README.md
@@ -62,6 +62,14 @@ INNGEST_DEV=1
INNGEST_BASE_URL=http://localhost:8288
# If your Inngest handler lives at a custom route, set:
INNGEST_SERVE_PATH=/api/inngest

# --- Object Storage (Cloudflare R2 in prod; MinIO locally) ---
# Local S3 emulator (MinIO) — the start script will auto-create the bucket when this endpoint is set:
R2_ENDPOINT=http://localhost:9000
R2_BUCKET=development
R2_ACCESS_KEY_ID=minioadmin
R2_SECRET_ACCESS_KEY=minioadmin
R2_PUBLIC_BASE_URL=http://localhost:9000/development
```

### 3. Start local dev services (Docker)
@@ -73,19 +81,26 @@ We provide a single [`docker-compose.yml`](docker-compose.yml) and a helper scri
- **Redis** on `localhost:6379`
- **Serverless Redis HTTP (SRH)** on `http://localhost:8079` (Upstash-compatible REST proxy)
- **Inngest Dev Server** on `http://localhost:8288`
- **MinIO (S3 API)** on `http://localhost:9000` (console at `http://localhost:9001`)

Run:

```bash
pnpm dev:start-docker
pnpm docker:up
```

> On Linux, if `host.docker.internal` isn’t available, add `extra_hosts` to the Inngest service in `docker-compose.yml`:
> On Linux, if `host.docker.internal` isn’t available, add `extra_hosts` to the Inngest and MinIO services in `docker-compose.yml`:
>
> ```yaml
> extra_hosts: ["host.docker.internal:host-gateway"]
> ```

To stop everything cleanly:

```bash
pnpm docker:down
```
Comment on lines 88 to +102 (Contributor):

🧹 Nitpick | 🔵 Trivial

Docs align; consider a short safety tip.

Looks great. Optional: note that MinIO ports can be bound to 127.0.0.1 in docker-compose to avoid LAN exposure.



### 4. Run Drizzle database migrations & seeds

```bash
@@ -104,18 +119,17 @@ pnpm dev

Open [http://localhost:3000](http://localhost:3000)

The Inngest Dev UI will be available at [http://localhost:8288](http://localhost:8288) and is already configured to call the local Next.js server at `http://localhost:3000/api/inngest`.

---

## 🧰 Useful Commands

```bash
pnpm dev # start dev server (uses .env.development.local)
pnpm dev:start-docker # start Dockerized local services and wait until ready
pnpm lint # Biome lint/format checks
pnpm typecheck # tsc --noEmit
pnpm test:run # Vitest
pnpm dev # start Next.js dev server
pnpm docker:up # start Dockerized local services and wait until ready
pnpm docker:down # stop all Dockerized local services (docker compose down)
pnpm lint # Biome lint/format checks
pnpm typecheck # tsc --noEmit
pnpm test:run # Vitest

# Drizzle
pnpm db:generate # generate SQL migrations from schema
17 changes: 16 additions & 1 deletion docker-compose.yml
@@ -25,7 +25,8 @@ services:
environment:
APPEND_PORT: "postgres:5432"
ALLOW_ADDR_REGEX: ".*"
LOG_TRAFFIC: "true"
LOG_TRAFFIC: "false"
LOG_CONN_INFO: "true"
ports:
- "5433:80"
depends_on:
@@ -59,5 +60,19 @@ services:
ports:
- "8288:8288"

# Local MinIO S3 (TCP on 9000)
minio:
image: minio/minio:latest
command: server /data --console-address ":9001"
environment:
MINIO_ROOT_USER: minioadmin
MINIO_ROOT_PASSWORD: minioadmin
ports:
- "9000:9000" # S3 API endpoint
- "9001:9001" # Web console
Comment on lines +71 to +72 (Contributor):

⚠️ Potential issue | 🟡 Minor

Limit MinIO exposure to loopback.

Binding to all interfaces exposes the S3 API and console to your LAN. Prefer loopback in dev.

-      - "9000:9000" # S3 API endpoint
-      - "9001:9001" # Web console
+      - "127.0.0.1:9000:9000" # S3 API endpoint
+      - "127.0.0.1:9001:9001" # Web console

volumes:
- minio_data:/data
Comment on lines +63 to +74 (Contributor):

🧹 Nitpick | 🔵 Trivial

Add a healthcheck for MinIO.

Lets up --wait (or external waits) detect readiness without custom logic.

   minio:
     image: minio/minio:RELEASE.2025-09-07T15-34-20Z
     command: server /data --console-address ":9001"
+    healthcheck:
+      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/ready"]
+      interval: 10s
+      timeout: 5s
+      retries: 5
     environment:
       MINIO_ROOT_USER: minioadmin
       MINIO_ROOT_PASSWORD: minioadmin


volumes:
pg_data:
minio_data:
142 changes: 101 additions & 41 deletions lib/r2.ts
@@ -7,47 +7,101 @@ import {
S3Client,
} from "@aws-sdk/client-s3";

function requireEnv(name: string): string {
const value = process.env[name];
if (!value) throw new Error(`${name} is not set`);
return value;
function getEnvOrThrow(name: string): string {
const v = process.env[name];
if (!v) throw new Error(`${name} is not set`);
return v;
}

function normalizeEndpoint(raw?: string): string | undefined {
if (!raw) return undefined;
const trimmed = raw.trim();
// Ensure a scheme so URL() doesn’t throw if someone sets "localhost:9000"
if (!/^https?:\/\//i.test(trimmed)) {
return `http://${trimmed}`;
}
return trimmed;
}

function isR2Host(u: string): boolean {
try {
const host = new URL(u).host;
return /\.r2\.cloudflarestorage\.com$/i.test(host);
} catch {
return false;
}
}
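A quick standalone sanity check of the two helpers above. This sketch copies their logic inline for illustration; nothing is imported from `lib/r2.ts`:

```typescript
// Inline copies of the helpers above, for illustration only.
function normalizeEndpoint(raw?: string): string | undefined {
  if (!raw) return undefined;
  const trimmed = raw.trim();
  // Prepend a scheme so new URL() does not throw on "localhost:9000"
  if (!/^https?:\/\//i.test(trimmed)) return `http://${trimmed}`;
  return trimmed;
}

function isR2Host(u: string): boolean {
  try {
    return /\.r2\.cloudflarestorage\.com$/i.test(new URL(u).host);
  } catch {
    return false;
  }
}

console.log(normalizeEndpoint("localhost:9000")); // http://localhost:9000
console.log(isR2Host("https://abc123.r2.cloudflarestorage.com")); // true
console.log(isR2Host("http://localhost:9000")); // false
```

Note that `isR2Host` returns `false` for anything that fails URL parsing, so a malformed `R2_ENDPOINT` would quietly fall through to the local/MinIO branch of `getS3`; worth keeping in mind when debugging credential errors.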

let s3Singleton: S3Client | null = null;

export function getS3(): S3Client {
if (s3Singleton) return s3Singleton;
const accountId = requireEnv("R2_ACCOUNT_ID");
const accessKeyId = requireEnv("R2_ACCESS_KEY_ID");
const secretAccessKey = requireEnv("R2_SECRET_ACCESS_KEY");

s3Singleton = new S3Client({
region: "auto",
endpoint: `https://${accountId}.r2.cloudflarestorage.com`,
credentials: {
accessKeyId,
secretAccessKey,
},
});

// creds are required in both local and R2 cases
const accessKeyId = getEnvOrThrow("R2_ACCESS_KEY_ID");
const secretAccessKey = getEnvOrThrow("R2_SECRET_ACCESS_KEY");

const endpoint = normalizeEndpoint(process.env.R2_ENDPOINT);
const usingLocal = !!endpoint && !isR2Host(endpoint);

if (usingLocal) {
// ---- Local/Non-R2 S3 endpoint (e.g., MinIO/LocalStack) ----
s3Singleton = new S3Client({
region: process.env.R2_REGION || "us-east-1",
endpoint, // e.g., http://localhost:9000
credentials: { accessKeyId, secretAccessKey },
forcePathStyle: true,
});
} else {
// ---- Cloudflare R2 (S3 API) ----
const accountId =
process.env.R2_ACCOUNT_ID ||
(() => {
if (!endpoint) {
throw new Error(
"R2_ACCOUNT_ID is required for Cloudflare R2 but not set",
);
}
return ""; // unused when endpoint is provided explicitly
})();

s3Singleton = new S3Client({
region: "auto",
endpoint: endpoint || `https://${accountId}.r2.cloudflarestorage.com`,
credentials: { accessKeyId, secretAccessKey },
// path-style off for R2
});
}

return s3Singleton;
}

export function getBucket(): string {
return requireEnv("R2_BUCKET");
function getBucket(): string {
return getEnvOrThrow("R2_BUCKET");
}

export function makePublicUrl(key: string): string {
const accountId = requireEnv("R2_ACCOUNT_ID");
const bucket = getBucket();
const rawBase =
process.env.R2_PUBLIC_BASE_URL ||
`https://${bucket}.${accountId}.r2.cloudflarestorage.com`;
const base = rawBase.replace(/\/+$/, "");
const encodedKey = key
.split("/")
.map((p) => encodeURIComponent(p))
.join("/");
return `${base}/${encodedKey}`;

const explicit = process.env.R2_PUBLIC_BASE_URL?.replace(/\/+$/, "");
if (explicit) {
const encoded = key.split("/").map(encodeURIComponent).join("/");
return `${explicit}/${encoded}`;
}

const endpoint = normalizeEndpoint(process.env.R2_ENDPOINT);

// Local path-style when endpoint is non-R2
if (endpoint && !isR2Host(endpoint)) {
const base = `${endpoint.replace(/\/+$/, "")}/${bucket}`;
const encoded = key.split("/").map(encodeURIComponent).join("/");
return `${base}/${encoded}`;
}

// R2 virtual-hosted default (requires ACCOUNT_ID)
const accountId = getEnvOrThrow("R2_ACCOUNT_ID");
const encoded = key.split("/").map(encodeURIComponent).join("/");
return `https://${bucket}.${accountId}.r2.cloudflarestorage.com/${encoded}`;
}
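The key encoding used by `makePublicUrl` is easy to illustrate on its own. In this sketch the bucket and endpoint are just the README's local MinIO defaults, not values read from the environment:

```typescript
// Mirrors makePublicUrl's per-segment encoding: each path segment is
// percent-encoded, but the "/" separators are kept literal.
function encodeKey(key: string): string {
  return key.split("/").map(encodeURIComponent).join("/");
}

// Local path-style URL, assuming the README's MinIO defaults.
const base = "http://localhost:9000/development";
const url = `${base}/${encodeKey("avatars/user 1.png")}`;
console.log(url); // http://localhost:9000/development/avatars/user%201.png
```

Encoding per segment (rather than the whole key) keeps the object's "directory" structure visible in the URL while still escaping spaces and reserved characters inside each segment.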

export async function putObject(options: {
@@ -58,14 +112,15 @@ export async function putObject(options: {
}): Promise<void> {
const s3 = getS3();
const bucket = getBucket();
const cmd = new PutObjectCommand({
Bucket: bucket,
Key: options.key,
Body: options.body,
ContentType: options.contentType,
CacheControl: options.cacheControl,
});
await s3.send(cmd);
await s3.send(
new PutObjectCommand({
Bucket: bucket,
Key: options.key,
Body: options.body,
ContentType: options.contentType,
CacheControl: options.cacheControl,
}),
);
}

export type DeleteResult = Array<{
@@ -86,11 +141,12 @@ export async function deleteObjects(keys: string[]): Promise<DeleteResult> {
const slice = keys.slice(i, i + MAX_PER_BATCH);
const objects: ObjectIdentifier[] = slice.map((k) => ({ Key: k }));
try {
const cmd = new DeleteObjectsCommand({
Bucket: bucket,
Delete: { Objects: objects, Quiet: false },
});
const resp = await s3.send(cmd);
const resp = await s3.send(
new DeleteObjectsCommand({
Bucket: bucket,
Delete: { Objects: objects, Quiet: false },
}),
);

const deletedSet = new Set<string>(
(resp.Deleted || []).map((d) => d.Key || ""),
@@ -112,6 +168,10 @@ export async function deleteObjects(keys: string[]): Promise<DeleteResult> {
}
} catch (err) {
const message = (err as Error)?.message || "unknown";
console.error("[r2] deleteObjects failed", {
keys: slice,
error: message,
});
for (const k of slice) {
results.push({ key: k, deleted: false, error: message });
}
Expand Down
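The batching in `deleteObjects` exists because S3's `DeleteObjects` API accepts at most 1000 keys per request. Assuming `MAX_PER_BATCH` carries that same cap, the slicing pattern can be sketched standalone:

```typescript
// Illustrative only: the batch-slicing pattern deleteObjects uses.
// S3's DeleteObjects call is capped at 1000 keys per request.
const MAX_PER_BATCH = 1000;

function toBatches<T>(items: T[], size: number = MAX_PER_BATCH): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

const keys = Array.from({ length: 2500 }, (_, i) => `k${i}`);
console.log(toBatches(keys).map((b) => b.length)); // [ 1000, 1000, 500 ]
```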
3 changes: 2 additions & 1 deletion package.json
@@ -13,7 +13,8 @@
"type": "module",
"scripts": {
"dev": "next dev --turbo",
"dev:start-docker": "scripts/start-dev-infra.sh",
"docker:up": "scripts/start-dev-infra.sh",
"docker:down": "scripts/stop-dev-infra.sh",
"build": "next build",
"start": "next start",
"lint": "biome check",
87 changes: 80 additions & 7 deletions scripts/start-dev-infra.sh
@@ -5,6 +5,15 @@ set -euo pipefail
ROOT_DIR="$(cd -- "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
cd "$ROOT_DIR"

# Load environment vars from .env.local at the repo root
ENV_FILE="$ROOT_DIR/.env.local"
if [ -f "$ENV_FILE" ]; then
echo "💉 Loading $ENV_FILE"
set -a
. "$ENV_FILE"
set +a
fi
Comment on lines +10 to +15 (Contributor):

🧹 Nitpick | 🔵 Trivial

Silence ShellCheck for dynamic env sourcing.

Avoid false positives on . "$ENV_FILE" by annotating.

 if [ -f "$ENV_FILE" ]; then
   echo "💉 Loading $ENV_FILE"
   set -a
+  # shellcheck disable=SC1090
   . "$ENV_FILE"
   set +a
 fi
🧰 Tools: 🪛 Shellcheck (0.11.0): [warning] SC1090 on line 13: ShellCheck can't follow non-constant source. Use a directive to specify location.


# Allow overriding the compose command (e.g., DOCKER_COMPOSE="docker-compose")
DOCKER_COMPOSE="${DOCKER_COMPOSE:-docker compose}"

@@ -60,15 +69,79 @@ wait_for_port "127.0.0.1" 6379 "Redis"
wait_for_port "127.0.0.1" 8079 "SRH (Upstash-compatible HTTP)"
# Inngest Dev Server
wait_for_port "127.0.0.1" 8288 "Inngest Dev Server"
# MinIO S3 API
wait_for_port "127.0.0.1" 9000 "MinIO S3 API"

# --- MinIO bucket setup ------------------------------------------------------
# Defaults for local emulator if not provided in .env.local
: "${R2_ACCESS_KEY_ID:=minioadmin}"
: "${R2_SECRET_ACCESS_KEY:=minioadmin}"
: "${R2_BUCKET:=development}"

echo "🪣 Ensuring MinIO bucket exists: ${R2_BUCKET}"

# Cross-platform networking for the disposable mc container
OS="$(uname -s || echo unknown)"
if [[ "$OS" == "Linux" ]]; then
MC_NET_FLAG="--network=host"
MINIO_ENDPOINT="http://localhost:9000"
else
MC_NET_FLAG=""
MINIO_ENDPOINT="http://host.docker.internal:9000"
fi

# Reuse a persistent config volume so the 'local' alias persists across runs
docker volume create mc-config >/dev/null

# 1) define/update the alias
if ! docker run --rm $MC_NET_FLAG \
-v mc-config:/root/.mc \
minio/mc alias set local "$MINIO_ENDPOINT" "$R2_ACCESS_KEY_ID" "$R2_SECRET_ACCESS_KEY" >/dev/null; then
echo "⚠️ Warning: Could not set MinIO alias (may already exist)"
fi

# 2) create bucket if missing
if ! docker run --rm $MC_NET_FLAG -v mc-config:/root/.mc minio/mc ls "local/${R2_BUCKET}" >/dev/null 2>&1; then
docker run --rm $MC_NET_FLAG -v mc-config:/root/.mc minio/mc mb -p "local/${R2_BUCKET}"
fi

# 3) 🔓 allow anonymous GET (public-read) so the browser can load images
docker run --rm $MC_NET_FLAG -v mc-config:/root/.mc minio/mc anonymous set download "local/${R2_BUCKET}" >/dev/null

# 4) quick listing
docker run --rm $MC_NET_FLAG -v mc-config:/root/.mc minio/mc ls local | sed -n '1,5p' || true

# --- Done! (hopefully) ------------------------------------------------------
echo
echo "🎉 Local infra is ready!"
echo " Postgres: postgres://postgres:postgres@localhost:5432/main"
echo " wsproxy: ws://localhost:5433/v1 (driver uses this automatically)"
echo " Redis: redis://localhost:6379"
echo " SRH: http://localhost:8079"
echo " Inngest: http://localhost:8288"
echo " * Postgres: postgres://postgres:postgres@localhost:5432/main"
echo " * wsproxy: ws://localhost:5433/v1 (driver uses this automatically)"
echo " * Redis: redis://localhost:6379"
echo " * SRH: http://localhost:8079"
echo " * Inngest: http://localhost:8288"
echo " * MinIO: http://localhost:9000 (console: http://localhost:9001)"
echo

echo "📜 Following logs (Ctrl+C to stop log tail; services keep running)…"
exec $DOCKER_COMPOSE logs -f --tail=100
# graceful shutdown on Ctrl+C / SIGTERM
cleanup() {
echo
echo "🛑 Ctrl+C detected — shutting down Docker stack…"
# stop the log tail first (if running)
if [[ -n "${LOG_PID:-}" ]]; then
kill "$LOG_PID" 2>/dev/null || true
fi
# bring the stack down
if $DOCKER_COMPOSE down; then
exit 130 # standard code for user-initiated Ctrl+C
else
exit $? # propagate failure from `down`
fi
}
trap cleanup INT TERM

echo "📜 Following logs (Ctrl+C to stop AND shut down services)…"
$DOCKER_COMPOSE logs -f --tail=100 &
LOG_PID=$!

# wait on the log tail; if it exits (or you press Ctrl+C), the trap runs
wait "$LOG_PID"