fix(storage): proxy uploads through server when S3_PROXY=true #150
Merged
Conversation
Fixes self-hosted Docker Compose deployments where the browser cannot reach
the MinIO endpoint directly (internal hostname like http://minio:9000).
When S3_PROXY=true, generatePresignedUploadUrl now returns a server-side
proxy URL instead of a presigned S3 URL. The browser PUTs the file to
/api/storage/{key}, the server verifies an HMAC-signed token, then streams
the body to MinIO using the existing uploadObject() path.
The token is signed with S3_SECRET_ACCESS_KEY and encodes key, content-type,
and expiry — stateless, no Redis required. The PUT handler enforces the same
5 MB MAX_FILE_SIZE limit used by other upload paths.
S3_PROXY=false (default) is completely unaffected.
Closes #137
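The stateless token scheme described above could look roughly like the sketch below. The function names, the newline-joined payload layout, and the full-length hex digest are illustrative assumptions, not the PR's actual code (the PR apparently truncates the HMAC to 32 hex chars):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Illustrative sketch of a stateless HMAC upload token: the signature
// covers the object key, content type, and expiry, so no server-side
// session store (e.g. Redis) is needed to validate it later.
function signUploadToken(
  secret: string,
  key: string,
  contentType: string,
  expiresAt: number, // unix seconds
): string {
  return createHmac("sha256", secret)
    .update(`${key}\n${contentType}\n${expiresAt}`)
    .digest("hex");
}

function verifyUploadToken(
  secret: string,
  key: string,
  contentType: string,
  expiresAt: number,
  sig: string,
): boolean {
  if (Date.now() / 1000 > expiresAt) return false; // expired
  const expected = signUploadToken(secret, key, contentType, expiresAt);
  const a = Buffer.from(sig);
  const b = Buffer.from(expected);
  // timingSafeEqual throws on length mismatch, so reject wrong-length
  // signatures up front instead of letting the comparison throw.
  if (a.length !== b.length) return false;
  return timingSafeEqual(a, b); // constant-time comparison
}
```

Because the key, content type, and expiry all feed the HMAC input, any mismatch in those parameters invalidates the signature, which lines up with the mismatch and wrong-length test cases listed below.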
verifyProxyUploadToken: 10 unit tests covering valid/invalid tokens,
expiry, key and content-type mismatches, wrong-length signatures, and
wrong secrets. Config mock trimmed to {} since the function is pure.
PUT /api/storage/* handler: 8 tests covering both 403 branches (S3 not
configured, S3_PROXY disabled), Content-Length pre-check, path-traversal
rejection, token verification, body size hard gate, successful upload, and
secretAccessKey threading. Config mock made mutable so s3Proxy can be
flipped per-test. Handler extracted to handleProxyUpload for testability.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 5042248f0c
- Reject missing content-type explicitly (400) instead of forwarding an empty string to S3
- Invalidate the proxy GET cache on successful PUT to prevent stale reads
- Add test coverage for the missing ct param
- Throw a clear error if BASE_URL is unset when S3_PROXY=true, rather than crashing with "Cannot read property 'replace' of undefined"
- Add a comment explaining the intentional 32-char (128-bit) HMAC truncation
… cleaner URL builder

- Pass the pre-built publicUrl into buildProxyUploadUrl instead of recomputing the config.baseUrl + /api/storage/ path segment (was duplicated with buildPublicUrl)
- Guard the Content-Length early-reject only when the header is present; absence is not a signal — the post-read hard gate is the real enforcement
- Use new Uint8Array(body) instead of Buffer.from(body) to avoid copying the upload body before passing it to S3 (zero-copy view)
- Actively evict stale proxyCache entries on cache miss instead of silently skipping them, preventing unbounded Map growth under high key churn
- Trim the JSDoc paragraph and arithmetic comment to keep only the non-obvious why
…ll buffering

Replaces the Content-Length pre-check + post-read byteLength check with a streaming reader that accumulates chunks and cancels the stream as soon as the running total exceeds MAX_FILE_SIZE. Clients that omit or underreport Content-Length (chunked transfer) are now rejected without buffering more than MAX_FILE_SIZE bytes, preventing memory exhaustion from concurrent oversized uploads.
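The streaming gate that commit describes could look roughly like this sketch (readBodyWithLimit is an illustrative name, not necessarily the PR's):

```typescript
// Illustrative streaming size gate: accumulate chunks and abort the
// moment the running total exceeds the limit, so an oversized or
// chunked upload never buffers much more than `limit` bytes.
async function readBodyWithLimit(
  stream: ReadableStream<Uint8Array>,
  limit: number,
): Promise<Uint8Array | null> {
  const reader = stream.getReader();
  const chunks: Uint8Array[] = [];
  let total = 0;
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    total += value.byteLength;
    if (total > limit) {
      await reader.cancel(); // stop pulling bytes from the client
      return null;           // caller maps this to a 413-style rejection
    }
    chunks.push(value);
  }
  // Assemble the in-limit body into one contiguous buffer.
  const body = new Uint8Array(total);
  let offset = 0;
  for (const chunk of chunks) {
    body.set(chunk, offset);
    offset += chunk.byteLength;
  }
  return body;
}
```

Note the check is `total > limit`, so an exactly-at-limit body is accepted and a one-byte-over body is rejected, matching the test cases in the next commit.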
…am chunks

Exports the function and adds 6 focused tests using real multi-chunk ReadableStream instances. Verifies: correct assembly of in-limit bodies, stream.cancel() fires when the limit is exceeded mid-stream, empty body, exact-limit acceptance, one-byte-over rejection, and many-small-chunks assembly.
…URL is also set

The simplify refactor passed publicUrl (which uses S3_PUBLIC_URL when configured) into buildProxyUploadUrl. This caused Case D (S3_PROXY=true + S3_PUBLIC_URL set) to produce an upload URL pointing at the CDN instead of the server's /api/storage route.

Fix: buildProxyUploadUrl always uses config.baseUrl + /api/storage regardless of S3_PUBLIC_URL. The publicUrl (download URL) is independent of the upload route.

Also adds a 21-test configuration matrix covering all four deployment variations:
A. S3_PROXY=false, no S3_PUBLIC_URL — MinIO / Railway / AWS private bucket
B. S3_PROXY=false, S3_PUBLIC_URL set — AWS public / Cloudflare R2 + CDN
C. S3_PROXY=true, no S3_PUBLIC_URL — Docker self-hosted / ngrok
D. S3_PROXY=true, S3_PUBLIC_URL set — proxy uploads, CDN downloads
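The corrected routing rule (upload URLs always derive from the server's base URL; download URLs may prefer the CDN) could be sketched as below. The config shape and the simplified download-side fallback are assumptions for illustration; Cases A/B really use presigned S3 URLs, which this sketch omits:

```typescript
// Illustrative config shape covering the four deployment cases (A–D).
interface StorageConfig {
  baseUrl: string;      // the app server's own URL (BASE_URL)
  s3Proxy: boolean;     // S3_PROXY
  s3PublicUrl?: string; // S3_PUBLIC_URL (CDN), optional
}

// Upload URL: when proxying, always route through the server's own
// /api/storage path. S3_PUBLIC_URL must not influence uploads, or
// Case D would PUT to the CDN instead of the server.
function buildProxyUploadUrl(config: StorageConfig, key: string): string {
  return `${config.baseUrl.replace(/\/$/, "")}/api/storage/${key}`;
}

// Download URL: independent of the upload route; prefers the CDN when
// configured, otherwise falls back to the server's proxy route.
function buildPublicUrl(config: StorageConfig, key: string): string {
  const base = config.s3PublicUrl ?? `${config.baseUrl}/api/storage`;
  return `${base.replace(/\/$/, "")}/${key}`;
}
```

Keeping the two builders separate is the point of the fix: the bug came from threading one pre-built URL through both paths.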
Summary
Fixes #137 — self-hosted Docker Compose deployments where the browser can't reach the MinIO endpoint (e.g. http://minio:9000) fail silently on all file uploads (avatars, logos, images). S3_PROXY=true already fixed downloads; this extends it to uploads.

With S3_PROXY=true, generatePresignedUploadUrl returns a server-side proxy URL (/api/storage/{key}?ct=...&exp=...&sig=...) instead of a presigned S3 URL. The browser PUTs to the Quackback server, which streams the body to MinIO via the existing uploadObject() path — no MinIO exposure needed.

The token is signed with S3_SECRET_ACCESS_KEY, encoding key + content-type + expiry. Stateless; no Redis required.

The PUT /api/storage/* handler validates the token (timing-safe comparison, expiry check) and enforces the existing 5 MB MAX_FILE_SIZE limit with both a Content-Length pre-check and a post-read hard gate.

S3_PROXY=false (default) is completely unaffected — all existing setups (AWS S3, R2, Railway, local dev) continue to use presigned PUT URLs as before.

Test plan
- verifyProxyUploadToken — valid tokens, expiry, key/content-type/secret mismatches, null params, wrong-length signatures
- handleProxyUpload — both 403 branches (isS3Configured and s3Proxy), Content-Length pre-check, path-traversal rejection, token verification failure, body size hard gate, successful upload, secretAccessKey threading
- Manual: with S3_PROXY=true and S3_ENDPOINT=http://minio:9000, confirm avatar/logo uploads complete successfully