feat(comments): R2 media uploads and composer improvements #143
Conversation
Add Cloudflare R2-backed image/video upload for issue/PR comments (multipart route, finalize HTML embeds). Integrate react-dropzone with inline uploading placeholders, scroll-to-actions when needed, and remote R2 binding for local dev. Extend MarkdownEditor with field-sizing, non-blocking uploads, replaceUploadPlaceholder, and conditional scroll behavior. Document optional R2_PUBLIC_BASE_URL in .dev.vars.example.
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro Plus
Run ID:
📒 Files selected for processing (1)
📝 Walkthrough

Adds a comment media upload system: client hooks and editor integrations for paste/drop uploads, a worker POST upload handler, R2 bucket configuration and public URL handling, a server finalization endpoint with verification, textarea auto-sizing, and markdown/video rendering support.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Editor as Markdown Editor
    participant Hook as useCommentMediaUpload
    participant UploadAPI as /api/comment-media/upload
    participant R2 as R2 Bucket
    participant Finalizer as finalizeCommentMediaUpload
    User->>Editor: Paste or drop file
    Editor->>Hook: onPaste/onDrop
    Hook->>Editor: insertAtCaret(placeholderId)
    Hook->>UploadAPI: POST multipart file
    UploadAPI->>UploadAPI: auth, validate, sanitize
    UploadAPI->>R2: upload stream
    R2-->>UploadAPI: key/publicUrl
    UploadAPI-->>Hook: { key, publicUrl, kind, contentType }
    Hook->>Hook: probe dimensions (img/video)
    Hook->>Finalizer: POST { key, width, height, kind, fileName }
    Finalizer->>R2: verify object, check size/type
    Finalizer-->>Hook: { ok: true, html } or { ok: false, error }
    Hook->>Editor: replaceUploadPlaceholder(placeholderId, html or error text)
    Editor-->>User: Rendered media or failure notice
```
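The final editor step in the diagram swaps an inline placeholder token for the finalized HTML. A minimal sketch of that swap is below; the token format `[[DIFFKIT_UPLOAD:...]]` appears in the review comments, but the function names here are assumptions, not the repo's actual API:

```typescript
// Hypothetical sketch of the placeholder swap: the hook inserts a token at the
// caret, then replaces it with the final embed (or an error note) once the
// finalize round-trip resolves.
function makeUploadPlaceholder(id: string): string {
  return `[[DIFFKIT_UPLOAD:${id}]]`;
}

function replaceUploadPlaceholder(
  value: string,
  placeholderId: string,
  replacement: string,
): string {
  const token = makeUploadPlaceholder(placeholderId);
  // Replace only the first occurrence; leave the text untouched if the
  // user already deleted the token manually.
  return value.includes(token) ? value.replace(token, replacement) : value;
}

const draft = `Before ${makeUploadPlaceholder("a1")} after`;
console.log(replaceUploadPlaceholder(draft, "a1", '<img src="https://example.com/x.png">'));
```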
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes

🚥 Pre-merge checks: ✅ 1 | ❌ 2
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Deploying with Cloudflare

| Status | Name | Latest Commit | Updated (UTC) |
|---|---|---|---|
| ✅ Deployment successful! View logs | diffkit | a45c437 | Apr 17 2026, 01:23 AM |
Wrangler remote mode (R2 `remote: true`) requires `wrangler login`; GitHub Actions has no session, so Vitest failed at Vite startup. Set `@cloudflare/vite-plugin` `remoteBindings` to `false` when `CI` or `VITEST` is set so tests use local Miniflare only.
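The CI gate described above can be sketched as a small helper whose result is wired into the Cloudflare Vite plugin's `remoteBindings` option in `vite.config.ts`; the exact option path within the plugin config is an assumption based on the note above, not verified against the plugin docs:

```typescript
// Decide whether the Cloudflare Vite plugin should use remote bindings.
// Any CI/VITEST marker forces local Miniflare bindings so tests never
// require a `wrangler login` session.
function shouldUseRemoteBindings(
  env: Record<string, string | undefined>,
): boolean {
  return !env.CI && !env.VITEST;
}

console.log(shouldUseRemoteBindings({}));             // true locally
console.log(shouldUseRemoteBindings({ CI: "true" })); // false in CI
```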
Actionable comments posted: 7
🧹 Nitpick comments (2)
apps/dashboard/wrangler.jsonc (1)
48-57: Isolate dev uploads from production data paths.

With `remote: true` (line 56), local dev writes hit a real bucket. Please ensure this binding points to a dedicated non-prod bucket per environment to avoid accidental data mixing and cost surprises.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/dashboard/wrangler.jsonc` around lines 48 - 57, the r2_buckets entry for binding "COMMENT_MEDIA" currently has "remote": true and hardcoded "bucket_name"/"preview_bucket_name" pointing at the real production bucket; change this to point to a dedicated non-prod bucket per environment by parameterizing the bucket names (e.g., use env vars like COMMENT_MEDIA_BUCKET / COMMENT_MEDIA_PREVIEW_BUCKET or append an env suffix such as "-dev" / "-staging") in the r2_buckets block so local/dev deployments do not write to the production bucket; update the "bucket_name" and "preview_bucket_name" values for the "COMMENT_MEDIA" binding accordingly and ensure the corresponding environment variables are set in your dev/staging configs.

apps/dashboard/src/hooks/use-comment-media-upload.ts (1)
142-149: Multi-file uploads are still serialized.

This loop waits for each upload + finalize cycle before starting the next one, so later files do not get a placeholder or start uploading until earlier ones finish. That makes large drops/pastes feel blocked. Consider inserting placeholders for all files first and then running uploads concurrently, or with a small bounded pool.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/dashboard/src/hooks/use-comment-media-upload.ts` around lines 142 - 149, processFiles currently serializes uploads by awaiting each uploadAndInsert(file) inside the loop; change it to first create placeholders for every File (call the function that inserts placeholders—e.g., uploadAndInsertPlaceholder or a new helper) without awaiting the full upload, then kick off actual uploadAndInsert jobs concurrently (e.g., Promise.all or a bounded concurrency pool) so placeholders appear immediately and uploads run in parallel; update processFiles to reference uploadAndInsert for the actual upload tasks and the placeholder insertion helper for fast feedback.
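The bounded pool suggested in this nitpick can be sketched as a generic helper: insert all placeholders synchronously first, then run uploads at most `limit` at a time. `runPool` is illustrative; the upload wiring shown in the usage note is hypothetical:

```typescript
// Run `worker` over `items` with at most `limit` tasks in flight.
// Results are returned in input order regardless of completion order.
async function runPool<T, R>(
  items: T[],
  limit: number,
  worker: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  async function lane(): Promise<void> {
    while (next < items.length) {
      const i = next++; // synchronous claim; safe in single-threaded JS
      results[i] = await worker(items[i]);
    }
  }
  const laneCount = Math.max(1, Math.min(limit, items.length));
  await Promise.all(Array.from({ length: laneCount }, () => lane()));
  return results;
}
```

Usage in `processFiles` might look like `await runPool(files, 3, (f) => uploadAndInsert(f))`, after a first synchronous pass that inserts a placeholder per file.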
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/dashboard/src/components/details/detail-activity.tsx`:
- Around line 67-68: useCommentMediaUpload currently injects temporary upload
tokens into the editor value but the component still allows sends while uploads
are in progress; update the hook (useCommentMediaUpload) to return a
pendingUploads count or boolean (e.g., pendingCount or hasPendingUploads) and
then in detail-activity.tsx gate all actions that submit or change PR state (the
send/comment submit handler and the PR-state change handlers at the other
mentioned block) by checking that pendingUploads is zero/false; disable the
submit UI and early-return from the action handlers when pending uploads exist
so the raw [[DIFFKIT_UPLOAD:...]] placeholder cannot be persisted.
In `@apps/dashboard/src/entry-worker.ts`:
- Around line 56-64: The early return for the upload branch bypasses the global
header-setting, so change the branch around handleCommentMediaUpload to call the
handler, capture its Response, and then merge/apply the SECURITY_HEADERS before
returning; specifically, replace the direct return of
handleCommentMediaUpload(request) with awaiting
handleCommentMediaUpload(request) into a response variable and copying/appending
the SECURITY_HEADERS into that response (matching how the global header-setting
block applies SECURITY_HEADERS) and then return the modified response.
In `@apps/dashboard/src/lib/comment-media-upload.handler.ts`:
- Around line 48-66: The current upload trusts client-provided file.type and
writes it to R2; instead, read the initial bytes from file.stream() (or buffer
the upload up to maxBytesForCommentMediaKind(kind)) and validate the file
signature using a trusted detector (e.g., file-type/fromBuffer or image/video
decoders like sharp/ffprobe) to derive the real MIME and kind, reject if the
detected kind/mime doesn't match classifyCommentMedia(file.type) or expected
image/video types, then use the detected MIME when calling bucket.put (update
httpMetadata.contentType) and only store after signature validation so the later
finalize step that re-reads metadata cannot be spoofed; keep using
sanitizeCommentMediaFilename and buildCommentMediaObjectKey and enforce size
limits based on the validated stream.
In `@apps/dashboard/src/lib/comment-media.server.ts`:
- Line 41: The current object key generation uses
`comment-media/${userId}/${Date.now()}-${safe}` which can collide if the same
user uploads identical filenames within the same millisecond; change the key to
append a high-entropy suffix (e.g., `crypto.randomUUID()` or a short
`nanoid()`/random hex) to ensure uniqueness, e.g., replace the
`${Date.now()}-${safe}` part with `${Date.now()}-${safe}-${randomSuffix}` and
add the appropriate import/utility to generate `randomSuffix` (use
`crypto.randomUUID()` where available or a tiny nanoid helper).
- Around line 122-133: The verifier currently calls bucket.head(key) and assumes
it only returns null on misses; wrap the bucket.head(key) call in a try-catch
inside the function (the block that currently defines head and returns object
info) so any thrown errors are caught and the function returns { ok: false,
reason: "Failed to verify object" }; keep the existing null and size checks
(head === null -> { ok: false, reason: "Object not found" } and size <= 0 -> {
ok: false, reason: "Object is empty" }) and on success return { ok: true, size:
head.size, contentType: head.httpMetadata?.contentType } as before.
In `@apps/dashboard/src/lib/media.functions.ts`:
- Around line 15-17: The identityValidator<TInput> is allowing any payload
through; replace it with a runtime validator that enforces finite numeric width
and height and constrained string values for kind and key before calling
clampDisplayDimensions or HTML generation. Implement a validation function (or
small schema) used in the same spots as identityValidator that: 1) checks width
and height are numbers and isFinite, 2) checks kind and key are non-empty
strings (and optionally match allowed enums/regex), and 3) throws/returns a
clear validation error for invalid input so the server function rejects bad
payloads at the boundary (update usages of identityValidator in
media.functions.ts to call the new validate function and handle/propagate
validation errors).
---
Nitpick comments:
In `@apps/dashboard/src/hooks/use-comment-media-upload.ts`:
- Around line 142-149: processFiles currently serializes uploads by awaiting
each uploadAndInsert(file) inside the loop; change it to first create
placeholders for every File (call the function that inserts placeholders—e.g.,
uploadAndInsertPlaceholder or a new helper) without awaiting the full upload,
then kick off actual uploadAndInsert jobs concurrently (e.g., Promise.all or a
bounded concurrency pool) so placeholders appear immediately and uploads run in
parallel; update processFiles to reference uploadAndInsert for the actual upload
tasks and the placeholder insertion helper for fast feedback.
In `@apps/dashboard/wrangler.jsonc`:
- Around line 48-57: The r2_buckets entry for binding "COMMENT_MEDIA" currently
has "remote": true and hardcoded "bucket_name"/"preview_bucket_name" pointing at
the real production bucket; change this to point to a dedicated non‑prod bucket
per environment by parameterizing the bucket names (e.g., use env vars like
COMMENT_MEDIA_BUCKET / COMMENT_MEDIA_PREVIEW_BUCKET or append an env suffix such
as "-dev" / "-staging") in the r2_buckets block so local/dev deployments do not
write to the production bucket; update the "bucket_name" and
"preview_bucket_name" values for the "COMMENT_MEDIA" binding accordingly and
ensure the corresponding environment variables are set in your dev/staging
configs.
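One way to address the wrangler finding above, sketched as a `wrangler.jsonc` fragment; the bucket names and the `-dev` suffix are assumptions for illustration, not the repo's actual configuration:

```jsonc
{
  "r2_buckets": [
    {
      "binding": "COMMENT_MEDIA",
      // Production bucket, used only by deployed workers.
      "bucket_name": "diffkit-comment-media",
      // Dedicated non-prod bucket for `wrangler dev` / preview writes.
      "preview_bucket_name": "diffkit-comment-media-dev",
      // Keep local dev on Miniflare unless remote data is explicitly needed.
      "remote": false
    }
  ]
}
```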
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro Plus
Run ID: ce633fed-b50a-49cc-96c6-106d281adf0a
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (15)
- apps/dashboard/.dev.vars.example
- apps/dashboard/package.json
- apps/dashboard/src/components/details/comment-reply-form.tsx
- apps/dashboard/src/components/details/detail-activity.tsx
- apps/dashboard/src/entry-worker.ts
- apps/dashboard/src/env.d.ts
- apps/dashboard/src/hooks/use-comment-media-upload.ts
- apps/dashboard/src/lib/comment-media-upload.handler.ts
- apps/dashboard/src/lib/comment-media.server.ts
- apps/dashboard/src/lib/media.functions.ts
- apps/dashboard/worker-configuration.d.ts
- apps/dashboard/wrangler.jsonc
- packages/ui/src/components/markdown-editor.tsx
- packages/ui/src/components/markdown.tsx
- packages/ui/src/hooks/use-textarea-content-field-sizing.ts
```tsx
const { media: mediaUpload, onPaste: onMediaPaste } =
  useCommentMediaUpload(editorRef);
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Verify whether useCommentMediaUpload exposes pending-state and whether forms gate submit on it.
set -euo pipefail
echo "== Hook definition =="
fd "use-comment-media-upload.ts" apps/dashboard/src --exec sed -n '1,260p' {}
echo
echo "== Reply/Comment form usages =="
rg -n --type=tsx -C3 'useCommentMediaUpload\(|media=\{mediaUpload\}|onPaste=\{onMediaPaste\}|disabled=\{!value\.trim\(\) \|\| isSending\}'
```

Repository: stylessh/diffkit
Length of output: 5112
🏁 Script executed:

```bash
fd "comment-reply-form.tsx" apps/dashboard/src --exec cat -n {}
```

Repository: stylessh/diffkit
Length of output: 4221
Gate submit button on pending media uploads to prevent submitting with unresolved placeholders.
The `disabled={!value.trim() || isSending}` condition on line 116 does not account for in-flight uploads. Users can currently submit comments containing unresolved upload placeholders. The `useCommentMediaUpload` hook does not expose a pending upload state, so either:
- The hook should expose a pending uploads count/flag, and the form should gate submit on it, or
- The form should validate that no upload placeholders remain before submission
Affected lines: 37-38, 96-97, 116-117
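A framework-free sketch of the pending-upload gate this comment asks for: track in-flight uploads with a counter and derive the submit condition from it. The names (`PendingUploads`, `canSubmit`) are hypothetical; the React wiring into the form is omitted:

```typescript
// Counter for in-flight uploads; begin() on upload start, end() on
// finalize success or failure.
class PendingUploads {
  private count = 0;
  begin(): void {
    this.count += 1;
  }
  end(): void {
    this.count = Math.max(0, this.count - 1);
  }
  get pending(): number {
    return this.count;
  }
}

// Mirrors the existing `!value.trim() || isSending` gate, plus the new
// pending-upload check so placeholder tokens cannot be persisted.
function canSubmit(
  value: string,
  isSending: boolean,
  uploads: PendingUploads,
): boolean {
  return value.trim().length > 0 && !isSending && uploads.pending === 0;
}
```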
```tsx
const { media: mediaUpload, onPaste: onMediaPaste } =
  useCommentMediaUpload(editorRef);
```
Disable comment actions while uploads are still pending.
`useCommentMediaUpload` inserts a temporary upload token into `value` before the upload/finalize round-trip completes, and this component still treats any non-empty body as sendable. That lets users submit a comment or PR-state change with the raw `[[DIFFKIT_UPLOAD:...]]` placeholder still in the persisted text. Please expose a pending-upload flag/count from the hook and gate both actions on it.
Also applies to: 176-183
```ts
if (
  url.pathname === "/api/comment-media/upload" &&
  request.method === "POST"
) {
  const { handleCommentMediaUpload } = await import(
    "#/lib/comment-media-upload.handler"
  );
  return handleCommentMediaUpload(request);
}
```
Apply security headers on early API return path too.
The new upload branch (lines 56-64) returns before the global header-setting block, so this endpoint misses `SECURITY_HEADERS`.
💡 Proposed fix

```diff
 if (
   url.pathname === "/api/comment-media/upload" &&
   request.method === "POST"
 ) {
   const { handleCommentMediaUpload } = await import(
     "#/lib/comment-media-upload.handler"
   );
-  return handleCommentMediaUpload(request);
+  const response = await handleCommentMediaUpload(request);
+  for (const [key, value] of Object.entries(SECURITY_HEADERS)) {
+    response.headers.set(key, value);
+  }
+  return response;
 }
```
```ts
const kind = classifyCommentMedia(file.type);
if (!kind) {
  return Response.json({ error: "Unsupported file type" }, { status: 415 });
}

const maxBytes = maxBytesForCommentMediaKind(kind);
if (file.size > maxBytes) {
  return Response.json({ error: "File is too large" }, { status: 413 });
}

const filename = sanitizeCommentMediaFilename(file.name);
const key = buildCommentMediaObjectKey(session.user.id, filename);

await bucket.put(key, file.stream(), {
  httpMetadata: {
    contentType: file.type,
    cacheControl: "public, max-age=31536000, immutable",
  },
});
```
Don’t trust the multipart MIME as your server-side media check.
The validation here is driven by file.type, and that same client-declared value is then copied into R2 httpMetadata.contentType. The finalize step later re-reads that metadata, so a crafted request can upload arbitrary bytes and still pass the image/video gate. Please validate the file signature server-side before storing/finalizing it.
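A minimal magic-byte sniff illustrating the server-side check this comment asks for. It covers only a few formats; the suggested `file-type` package handles many more, so treat this as an illustrative subset rather than a complete detector:

```typescript
// Detect a MIME type from leading file bytes. Returns null when no known
// signature matches, which the handler should treat as a rejection.
function sniffContentType(bytes: Uint8Array): string | null {
  const startsWith = (sig: number[], offset = 0): boolean =>
    sig.every((b, i) => bytes[offset + i] === b);
  if (startsWith([0x89, 0x50, 0x4e, 0x47])) return "image/png"; // \x89PNG
  if (startsWith([0xff, 0xd8, 0xff])) return "image/jpeg";
  if (startsWith([0x47, 0x49, 0x46, 0x38])) return "image/gif"; // GIF8
  // WebP: "RIFF" <size> "WEBP"
  if (
    startsWith([0x52, 0x49, 0x46, 0x46]) &&
    startsWith([0x57, 0x45, 0x42, 0x50], 8)
  )
    return "image/webp";
  // MP4 family: "ftyp" box at offset 4
  if (startsWith([0x66, 0x74, 0x79, 0x70], 4)) return "video/mp4";
  return null;
}
```

In the handler, one would read the first ~16 bytes of `file.stream()`, reject when `sniffContentType` disagrees with `classifyCommentMedia(file.type)`, and pass the detected MIME to `bucket.put` instead of the client-declared one.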
```ts
  filename: string,
): string {
  const safe = sanitizeCommentMediaFilename(filename);
  return `comment-media/${userId}/${Date.now()}-${safe}`;
```
Prevent key collisions in object key generation.
Line 41 can generate duplicate keys when the same user uploads identical filenames within the same millisecond, causing overwrite risk.
💡 Proposed fix

```diff
 export function buildCommentMediaObjectKey(
   userId: string,
   filename: string,
 ): string {
   const safe = sanitizeCommentMediaFilename(filename);
-  return `comment-media/${userId}/${Date.now()}-${safe}`;
+  const unique = crypto.randomUUID();
+  return `comment-media/${userId}/${Date.now()}-${unique}-${safe}`;
 }
```
```ts
const head = await bucket.head(key);
if (!head) {
  return { ok: false, reason: "Object not found" };
}
if (!head.size || head.size <= 0) {
  return { ok: false, reason: "Object is empty" };
}
return {
  ok: true,
  size: head.size,
  contentType: head.httpMetadata?.contentType,
};
```
🧩 Analysis chain
🌐 Web query:
In Cloudflare Workers, can R2Bucket.head(key) throw runtime exceptions (for transient service/network issues), or does it only return null when the object does not exist?
💡 Result:
In Cloudflare Workers, R2Bucket.head(key) returns null when the object does not exist. It can throw runtime exceptions for transient service or network issues, such as internal errors (10001) or service unavailable (10043). Official docs state that R2 operations generally throw exceptions with error codes, but specifically for get and head, NoSuchKey (10007) returns null instead of throwing. Other errors, including transient ones, throw exceptions that should be caught.
Citations:
- 1: https://developers.cloudflare.com/r2/api/workers/workers-api-reference/
- 2: https://developers.cloudflare.com/r2/api/error-codes/
- 3: Error: R2 HeadObject failed after 3 retries: Error: head: We encountered an internal error. Please try a... nodejs/release-cloudflare-worker#95
Handle transient R2 failures inside verifier.
Line 122 currently assumes bucket.head only returns null for misses. However, R2Bucket.head(key) throws exceptions for transient service or network issues (such as internal errors or service unavailable conditions), while only returning null for NoSuchKey errors. This causes the function to throw despite its return type being a discriminated union that never includes exceptions.
Wrap the bucket.head call in a try-catch block to return { ok: false, reason: "Failed to verify object" } on any transient errors:
Proposed fix

```diff
 export async function verifyCommentMediaObject(
   bucket: R2Bucket, // global from worker types
   key: string,
 ): Promise<
   | { ok: true; size: number; contentType?: string }
   | { ok: false; reason: string }
 > {
-  const head = await bucket.head(key);
+  let head: R2Object | null;
+  try {
+    head = await bucket.head(key);
+  } catch {
+    return { ok: false, reason: "Failed to verify object" };
+  }
   if (!head) {
     return { ok: false, reason: "Object not found" };
   }
```
```ts
function identityValidator<TInput>(data: TInput) {
  return data;
}
```
Replace the identity validator with real runtime validation.
This server fn currently accepts any payload shape, so malformed width/height/kind/key values can reach clampDisplayDimensions, key verification, and HTML generation unchecked. Since this endpoint is client-callable, it should reject non-finite dimensions and invalid strings at the boundary instead of relying on TypeScript types.
Also applies to: 31-33
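A sketch of the runtime validation this comment requests. The payload shape (`{ key, kind, width, height }`) is inferred from the review text; the real finalize input may carry more fields, so adjust accordingly:

```typescript
type FinalizeInput = {
  key: string;
  kind: "image" | "video";
  width: number;
  height: number;
};

// Validate an untrusted payload at the server-function boundary, rejecting
// non-finite dimensions and invalid strings before any downstream use.
function validateFinalizeInput(data: unknown): FinalizeInput {
  const d = data as Partial<FinalizeInput> | null;
  if (!d || typeof d !== "object") throw new Error("Payload must be an object");
  if (typeof d.key !== "string" || d.key.length === 0)
    throw new Error("key must be a non-empty string");
  if (d.kind !== "image" && d.kind !== "video")
    throw new Error("kind must be 'image' or 'video'");
  if (typeof d.width !== "number" || !Number.isFinite(d.width) || d.width <= 0)
    throw new Error("width must be a finite positive number");
  if (typeof d.height !== "number" || !Number.isFinite(d.height) || d.height <= 0)
    throw new Error("height must be a finite positive number");
  return { key: d.key, kind: d.kind, width: d.width, height: d.height };
}
```

A schema library already in the project (e.g. zod, if present) would express the same checks more compactly; the hand-rolled version just keeps the sketch dependency-free.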
Summary

- `POST /api/comment-media/upload` route; finalize with server-built GitHub-style `<img>`/`<video>` HTML.
- Inline uploading placeholders (`⏳ Uploading asset…` → embed) and paste support.
- `field-sizing` + fallback autosize, vertical resize, scroll guards for textarea + action row, `replaceUploadPlaceholder`, `scrollAnchorRef` for Send/Close PR row.
- `remote: true` for local writes to the real bucket; optional `R2_PUBLIC_BASE_URL` in `.dev.vars.example`.