Loom importer backend #1034
Conversation
Walkthrough

The PR introduces workflow infrastructure (Loom import), refactors database and S3 services, brands IDs across the web app, updates the schema, removes admin/screenshot modules, adjusts dev ports/MinIO, and wires tracing/HTTP client layers. It also adds two new apps (workflow-runner, workflow-manager), updates dependencies to ESM-compatible imports, and tightens API/auth flows.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Client as Client (Web)
    participant API as Workflows HTTP API
    participant Runner as Workflow Runner
    participant DB as Database
    participant S3 as S3 Provider
    participant Loom as Loom HTTP
    Note over Client,API: Request to import a Loom video
    Client->>API: POST /workflows/LoomImportVideo (payload)
    API-->>Runner: Proxy workflow start (RPC/HTTP)
    activate Runner
    Runner->>DB: create video record (tx)
    Runner->>Loom: GET downloadUrl
    Loom-->>Runner: 200 Stream + Content-Length
    Runner->>S3: PUT object (stream)
    S3-->>Runner: 200 OK
    Runner-->>API: result { videoId, fileKey }
    deactivate Runner
    API-->>Client: 200 OK (json)
    Note over Runner: Tracing spans exported via OTLP
```
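Concretely, the download-and-upload hand-off in the diagram above can be sketched as a plain function. This is illustrative only: `fetchFn` and `putObject` are stand-ins for the real Loom HTTP call and S3 client, and the actual implementation streams the body rather than buffering it.

```typescript
// Sketch of the runner's download → upload step from the diagram.
// fetchFn and putObject are placeholder dependencies, not real APIs.
async function importLoomVideo(
  downloadUrl: string,
  fileKey: string,
  fetchFn: (url: string) => Promise<{ contentLength: number; body: Uint8Array }>,
  putObject: (key: string, body: Uint8Array) => Promise<void>,
): Promise<{ fileKey: string; bytes: number }> {
  // GET downloadUrl → 200 + Content-Length
  const res = await fetchFn(downloadUrl);
  // PUT object (a stream in the real flow)
  await putObject(fileKey, res.body);
  return { fileKey, bytes: res.contentLength };
}
```

Injecting the two I/O dependencies keeps the hand-off itself trivially testable.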
```mermaid
sequenceDiagram
    autonumber
    participant Client as Client
    participant Server as Backend API
    participant DB as Database
    Note over Client,Server: Auth middleware resolution
    Client->>Server: Request with Authorization header?
    alt API key (length 36)
        Server->>DB: Lookup user by authApiKeys join
        DB-->>Server: Option<User>
        alt Found
            Server-->>Client: Continue as user
        else Not found
            Server-->>Client: 401 Unauthorized
        end
    else Session-based
        Server->>Server: getCurrentUser()
        alt Has user
            Server-->>Client: Continue as user
        else
            Server-->>Client: 401 Unauthorized
        end
    end
    Server-->>Client: Errors mapped (BadRequest/Unauthorized/500)
```
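The middleware branching in the auth diagram can be sketched as a plain function. This is a hedged sketch: the length-36 API-key heuristic and both lookup helpers come from the diagram, while the real middleware is asynchronous and Effect-based.

```typescript
// Plain-function sketch of the auth resolution shown in the diagram.
// lookupUserByApiKey and getCurrentUser are assumed helper shapes.
type User = { id: string; email: string };
type AuthResult = { user: User } | { status: 401 };

function resolveUser(
  authHeader: string | undefined,
  lookupUserByApiKey: (key: string) => User | null, // authApiKeys join
  getCurrentUser: () => User | null, // session lookup
): AuthResult {
  if (authHeader !== undefined && authHeader.length === 36) {
    // API-key branch
    const user = lookupUserByApiKey(authHeader);
    return user ? { user } : { status: 401 };
  }
  // Session branch
  const user = getCurrentUser();
  return user ? { user } : { status: 401 };
}
```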
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120+ minutes
Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
Actionable comments posted: 12
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (6)
apps/web/app/(org)/verify-otp/form.tsx (5)
Line 76: Improve error handling with proper Error types. Throwing strings is not a best practice in TypeScript. Consider using Error objects for better error handling.

```diff
-if (otpCode.length !== 6) throw "Please enter a complete 6-digit code";
+if (otpCode.length !== 6) throw new Error("Please enter a complete 6-digit code");
```
Lines 84-88: Improve error handling consistency. For consistency with the previous fix, use Error objects here as well.

```diff
 if (!res.url.includes("/login-success")) {
   setCode(["", "", "", "", "", ""]);
   inputRefs.current[0]?.focus();
-  throw "Invalid code. Please try again.";
+  throw new Error("Invalid code. Please try again.");
 }
```
Line 114: Use an Error object for consistency.

```diff
-throw `Please wait ${remainingSeconds} seconds before requesting a new code`;
+throw new Error(`Please wait ${remainingSeconds} seconds before requesting a new code`);
```
Line 125: Use an Error object for consistency.

```diff
-throw "Please wait 30 seconds before requesting a new code";
+throw new Error("Please wait 30 seconds before requesting a new code");
```
Lines 94-99: Update error handling to work with Error objects. After changing to Error objects, update the error handlers to extract the message properly.

```diff
 onError: (e) => {
-  if (typeof e === "string") {
-    toast.error(e);
+  if (e instanceof Error) {
+    toast.error(e.message);
   } else {
     toast.error("An error occurred. Please try again.");
   }
 },
```

Apply the same change to the `handleResend` mutation's `onError` handler on lines 135-139.
packages/local-docker/docker-compose.yml (1)
Line 55: Remove unused volume definition. The `minio-certs` volume is no longer used after removing the certificate mount and should be cleaned up.

```diff
 volumes:
   ps-mysql:
   minio-data:
-  minio-certs:
```
♻️ Duplicate comments (3)
packages/web-backend/src/Spaces/SpacesRepo.ts (1)
Line 6: Consider the TypeScript file extension in the import path. Same as in OrganisationsRepo.ts, the import explicitly includes the `.ts` extension, which is unconventional for TypeScript imports.
packages/web-domain/src/Video.ts (1)
Lines 4-8: Consider the TypeScript file extension pattern. Similar to other files in this PR, explicit `.ts` extensions are used in imports. Ensure this pattern is consistently applied and compatible with your build configuration.
packages/web-backend/src/Videos/VideosRpcs.ts (1)
Line 4: Consider the TypeScript file extension pattern. Consistent with other files in this PR, the import uses an explicit `.ts` extension.
🧹 Nitpick comments (21)
apps/web/app/(org)/dashboard/admin/page.tsx (1)
Lines 8-11: Avoid hardcoding identities; centralize admin policy and normalize the comparison. Move this to a single isAdmin(currentUser) helper (e.g., in a shared auth module) that checks role/claims or an allowlist from env. Also normalize case before domain checks.

Example helper usage here:

```diff
-  if (
-    !currentUser ||
-    (currentUser.email !== "richie@mcilroy.co" &&
-      !currentUser.email.endsWith("@cap.so"))
-  ) {
+  if (!currentUser || !isAdmin(currentUser)) {
     redirect("/dashboard");
   }
```

apps/workflow-manager/package.json (1)
Line 5: Verify Deno availability and security considerations. The dev script uses `--allow-all`, which grants unrestricted access to all resources; consider more granular permissions for better security. Deno's `--allow-all` flag disables the security sandbox entirely and should be used with caution: it has the same security properties as running a script in Node.js (i.e., none).

For a workflow manager in a development environment, you could use more granular permissions instead of `--allow-all`:

```diff
-"dev": "dotenv -e ../../.env -- deno --allow-all --watch src/index.ts"
+"dev": "dotenv -e ../../.env -- deno --allow-env --allow-net --allow-read --allow-write --watch src/index.ts"
```

That said, since this is a development-only script, the current approach may be acceptable for local development provided the security implications are understood; prefer explicit permissions when building real applications.
packages/database/index.ts (1)
Line 7: Potential runtime error with non-null assertion. Using the non-null assertion operator on `process.env.DATABASE_URL!` could cause a runtime failure if the environment variable is not set, whereas the previous `serverEnv().DATABASE_URL` provided validation.

Consider adding validation instead of the non-null assertion:

```diff
 function createDrizzle() {
-  const URL = process.env.DATABASE_URL!;
+  const URL = process.env.DATABASE_URL;
+  if (!URL) {
+    throw new Error("DATABASE_URL environment variable is required");
+  }
```

packages/web-domain/src/S3Bucket.ts (1)
Line 16: Consider renaming `decodeSync` for clarity. While the current export works, a more descriptive name would indicate what is being decoded.

```diff
-export const decodeSync = Schema.decodeSync(S3Bucket);
+export const decodeS3BucketSync = Schema.decodeSync(S3Bucket);
```

apps/web/next.config.mjs (1)
Lines 113-131: Remove commented-out webpack configuration. Large blocks of commented-out code should be removed to maintain code cleanliness. If this configuration might be needed later, consider documenting it elsewhere or creating a GitHub issue to track it.

```diff
-  // webpack: (config) => {
-  //   config.module.rules.push({
-  //     test: /\.(?:js|ts)$/,
-  //     use: [
-  //       {
-  //         loader: "babel-loader",
-  //         options: {
-  //           presets: ["next/babel"],
-  //           plugins: [
-  //             "@babel/plugin-transform-private-property-in-object",
-  //             "@babel/plugin-transform-private-methods",
-  //           ],
-  //         },
-  //       },
-  //     ],
-  //   });
-
-  //   return config;
-  // },
```

apps/workflow-manager/src/index.ts (1)
Lines 16-23: Consider adding graceful shutdown handling. For production readiness, consider implementing graceful shutdown to handle SIGTERM/SIGINT signals properly.

Would you like me to generate a graceful shutdown implementation that properly closes database connections and completes in-flight workflows?
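For reference, a minimal sketch of what such handling could look like. Names like `drainWorkflows` and `closeDb` are placeholders, not the app's actual APIs:

```typescript
// Sketch: run registered cleanup tasks once, in order, on shutdown.
type Cleanup = () => void | Promise<void>;

function makeShutdown(tasks: Cleanup[]) {
  let called = false;
  return async function shutdown(): Promise<string[]> {
    if (called) return []; // run cleanups at most once
    called = true;
    const completed: string[] = [];
    for (let i = 0; i < tasks.length; i++) {
      await tasks[i](); // e.g. finish in-flight workflows, then close the DB pool
      completed.push(`task-${i}`);
    }
    return completed;
  };
}

// Wiring sketch (placeholder cleanups):
// const shutdown = makeShutdown([drainWorkflows, closeDb]);
// process.on("SIGTERM", () => shutdown().then(() => process.exit(0)));
// process.on("SIGINT", () => shutdown().then(() => process.exit(0)));
```

Ordering matters here: in-flight work should drain before connections close.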
packages/web-backend/src/Organisations/OrganisationsRepo.ts (1)
Line 6: Consider the TypeScript file extension in the import path. The import explicitly includes the `.ts` extension. While this works in some TypeScript configurations (such as with specific `moduleResolution` settings), it's unconventional and may cause issues with certain build tools or configurations. TypeScript typically resolves modules without the extension.

Consider removing the `.ts` extension for better compatibility:

```diff
-import { Database } from "../Database.ts";
+import { Database } from "../Database";
```

If explicit extensions are required by your build configuration (e.g., for ESM compatibility), ensure this pattern is consistently applied across the entire codebase and documented in your coding guidelines.
packages/local-docker/docker-compose.yml (1)
Line 35: Consider the data persistence implications of the volume change. The volume mount has changed from a named volume (`minio-data:/bitnami/minio/data`) to a host path (`~/minio/data:/data`). This has implications:

- Data will now be stored in the user's home directory
- Different developers may have different data states
- The `minio-data` volume defined at line 54 is now unused

Consider either using a named volume for consistency across environments:

```diff
-      - ~/minio/data:/data
+      - minio-data:/data
```

Or remove the unused volume definition at line 54:

```diff
 volumes:
   ps-mysql:
-  minio-data:
   minio-certs:
```
packages/web-domain/src/Video.ts (2)
Lines 41-58: Add exhaustive type checking for source types. The `getSource` method handles all current source types but doesn't have compile-time exhaustiveness checking. Consider adding a default case or using TypeScript's exhaustiveness checking to ensure all source types are handled as new ones are added.

Add exhaustiveness checking:

```diff
 static getSource(self: Video) {
   if (self.source.type === "MediaConvert")
     return new M3U8Source({
       videoId: self.id,
       ownerId: self.ownerId,
       subpath: "output/video_recording_000.m3u8",
     });
   if (self.source.type === "local")
     return new M3U8Source({
       videoId: self.id,
       ownerId: self.ownerId,
       subpath: "combined-source/stream.m3u8",
     });
   if (self.source.type === "desktopMP4")
     return new Mp4Source({ videoId: self.id, ownerId: self.ownerId });
+
+  // Exhaustiveness check
+  const _exhaustiveCheck: never = self.source.type;
+  throw new Error(`Unhandled source type: ${_exhaustiveCheck}`);
 }
```

Alternatively, use a switch statement with exhaustiveness:

```ts
static getSource(self: Video) {
  switch (self.source.type) {
    case "MediaConvert":
      return new M3U8Source({
        videoId: self.id,
        ownerId: self.ownerId,
        subpath: "output/video_recording_000.m3u8",
      });
    case "local":
      return new M3U8Source({
        videoId: self.id,
        ownerId: self.ownerId,
        subpath: "combined-source/stream.m3u8",
      });
    case "desktopMP4":
      return new Mp4Source({ videoId: self.id, ownerId: self.ownerId });
    default: {
      const _exhaustiveCheck: never = self.source.type;
      throw new Error(`Unhandled source type: ${_exhaustiveCheck}`);
    }
  }
}
```
Lines 70-72: Consider validating file path construction. The `getFileKey` method constructs S3 keys using string concatenation. While functional, consider adding validation to ensure the IDs don't contain characters that could create invalid S3 keys (e.g., leading/trailing slashes, double slashes).

Consider adding a utility function for safe path construction:

```ts
private static sanitizePathSegment(segment: string): string {
  return segment.replace(/^\/+|\/+$/g, "").replace(/\/+/g, "/");
}

getFileKey() {
  return [this.ownerId, this.videoId, "result.mp4"]
    .map(Mp4Source.sanitizePathSegment)
    .join("/");
}
```

packages/database/auth/auth-options.ts (1)
Line 137: Consider error handling for the dynamic import. While the dynamic import of `next/headers` avoids loading it unnecessarily, consider wrapping it in a try-catch to handle potential import failures gracefully.

```diff
 if (needsOrganizationSetup) {
-  const { cookies } = await import("next/headers");
+  let cookies;
+  try {
+    const headerModule = await import("next/headers");
+    cookies = headerModule.cookies;
+  } catch (error) {
+    console.error("Failed to import next/headers:", error);
+    // Continue without Dub tracking
+  }
+
+  if (cookies) {
     const dubId = cookies().get("dub_id")?.value;
     const dubPartnerData = cookies().get("dub_partner_data")?.value;
     // ... rest of the Dub tracking logic
+  }
```

packages/web-backend/src/Workflows.ts (1)
Line 7: Consider more descriptive naming for the workflows layer. The current `WorkflowsLayer` only contains `LoomImportVideoLive`. As more workflows are added, consider a more descriptive pattern like `AllWorkflowsLayer`, or organizing by feature.

```diff
-export const WorkflowsLayer = Layer.mergeAll(LoomImportVideoLive);
+// TODO: As more workflows are added, consider organizing by feature domain
+export const WorkflowsLayer = Layer.mergeAll(LoomImportVideoLive);
```

packages/web-backend/src/S3Buckets/S3BucketsRepo.ts (1)
Line 58: Consider adding an index on the ownerId column. Since `getForUser` queries by `ownerId`, ensure there's an appropriate database index to optimize this lookup, especially as the number of users and buckets grows.

```bash
#!/bin/bash
# Check if an index exists on s3_buckets.ownerId
rg -A10 "s3Buckets.*mysqlTable" packages/database/schema.ts | rg -i "index.*owner"

# Also check migration files for index creation
fd -e sql -e ts . packages/database/migrations | xargs rg -l "s3_buckets.*ownerId.*INDEX|INDEX.*s3_buckets.*ownerId" 2>/dev/null || echo "No index found on ownerId"
```

packages/database/schema.ts (1)
Lines 244-245: Consider making `orgId` non-null soon to avoid future migration complexity. The TODO comment indicates this should be non-null. Leaving nullable columns that should be required can lead to:

- Data integrity issues
- Complex migration scripts later
- Inconsistent application logic

Consider prioritizing this schema change to avoid accumulating technical debt.

Would you like me to help create a migration plan or open an issue to track making `orgId` non-null?
apps/workflow-runner/src/index.ts (1)
Line 59: Consider making the HTTP port configurable. Similar to the runner address, the HTTP port is hard-coded to `42169`. This should be configurable for deployment flexibility.

```diff
-  Layer.provide(NodeHttpServer.layer(createServer, { port: 42169 })),
+  Layer.provide(Layer.unwrapEffect(
+    Effect.gen(function* () {
+      const port = yield* Config.number("HTTP_PORT").pipe(Config.withDefault(42169));
+      return NodeHttpServer.layer(createServer, { port });
+    })
+  )),
```

apps/web/app/api/loom/[...route]/route.ts (2)
Line 58: Video name is hard-coded instead of using actual metadata. The video name is hard-coded as `"loom video name"`; it should come from the Loom video metadata.

```diff
-      name: "loom video name",
+      name: payload.loom.name ?? "Untitled Loom Video",
```

Update the schema accordingly:

```diff
 loom: Schema.Struct({
   downloadUrl: Schema.String,
   videoId: Schema.String,
+  name: Schema.optional(Schema.String),
```
Lines 82-84: Consider implementing proper HTTP method handling. All HTTP methods (GET, HEAD, POST) are handled by the same handler, but the API only defines a POST endpoint, so GET and HEAD requests will likely fail.

Consider either only exporting POST if that's the only supported method, or implementing proper method-specific handling:

```diff
-export const GET = handler;
-export const HEAD = handler;
 export const POST = handler;
```

packages/web-backend/src/S3Buckets/index.ts (1)
Line 82: Remove the debug console.log statement before merging. This appears to be a debugging artifact that should not reach production code.

```diff
-console.log({ config });
```
packages/web-backend/src/Loom/ImportVideo.ts (1)
Lines 139-149: Consider improving upload progress reporting. The current implementation uses a separate Effect that reads `downloadedBytes` asynchronously, which could lead to race conditions or stale values. Consider using Stream operators for more accurate progress tracking.

```diff
-.pipe(
-  Effect.race(
-    // TODO: Connect this with upload progress
-    Effect.repeat(
-      Effect.gen(function* () {
-        const bytes = yield* Effect.succeed(downloadedBytes);
-        yield* Effect.log(`Downloaded ${bytes} bytes`);
-      }),
-      Schedule.forever.pipe(Schedule.delayed(() => "2 seconds")),
-    ).pipe(Effect.delay("100 millis")),
-  ),
-)
+.pipe(
+  Effect.tap(() =>
+    Effect.log(`Upload complete: ${downloadedBytes} bytes`)
+  )
+)
```

Alternatively, you could emit progress events through a Queue or Hub for real-time monitoring if needed.
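As a rough illustration of the event-driven alternative, here is a plain-TypeScript sketch; the real code would use Effect's Queue or Hub, so the shapes below are assumptions for illustration only:

```typescript
// Sketch: push byte-count events to subscribers as chunks arrive,
// instead of polling a shared counter on a timer.
type ProgressListener = (totalBytes: number) => void;

class ProgressTracker {
  private total = 0;
  private listeners: ProgressListener[] = [];

  subscribe(listener: ProgressListener): void {
    this.listeners.push(listener);
  }

  // Call this from the download loop for each received chunk.
  onChunk(byteLength: number): void {
    this.total += byteLength;
    for (const l of this.listeners) l(this.total);
  }
}
```

Because events are emitted at the moment the counter changes, subscribers never observe a stale value.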
apps/web/lib/server.ts (2)
Lines 47-78: Make the local workflow URL configurable (avoid hardcoded localhost). In apps/web/lib/server.ts, WorkflowRpcClient falls back to "http://localhost:42169/rpc" and WorkflowHttpClient uses baseUrl "http://localhost:42169".

Action: Read LOCAL_WORKFLOW_URL (default "http://localhost:42169") and use it for both clients (appending "/rpc" for the RPC client if missing). Add LOCAL_WORKFLOW_URL to serverEnv (imported from @cap/env) so configuration is consistent with REMOTE_WORKFLOW_URL/REMOTE_WORKFLOW_SECRET.
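A sketch of the suggested resolution logic; `LOCAL_WORKFLOW_URL` is the proposed variable name, not one that exists in the codebase yet:

```typescript
// Sketch: derive both client URLs from one optional env value,
// defaulting to localhost and appending /rpc when missing.
function resolveWorkflowUrls(localWorkflowUrl?: string): {
  httpBaseUrl: string;
  rpcUrl: string;
} {
  const base = (localWorkflowUrl ?? "http://localhost:42169").replace(/\/+$/, "");
  return {
    httpBaseUrl: base,
    rpcUrl: base.endsWith("/rpc") ? base : `${base}/rpc`,
  };
}
```

Centralizing this keeps the RPC and HTTP clients from drifting apart when the runner moves off localhost.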
Lines 65-73: Harden remote workflow authentication; avoid a static `Token` header. apps/web/lib/server.ts:71 sets `Authorization: Token ${secret}`. For production, replace or augment this with stronger controls:

- Rotate credentials and avoid long-lived static tokens.
- Use mTLS or OAuth2 client_credentials (short-lived JWTs with audience and expiry).
- Store/manage secrets in a secrets manager (Vault / AWS Secrets Manager / Azure Key Vault) and support automated rotation.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (65)
- apps/desktop/package.json (2 hunks)
- apps/desktop/src-tauri/tauri.conf.json (1 hunks)
- apps/discord-bot/package.json (1 hunks)
- apps/tasks/package.json (0 hunks)
- apps/web/app/(org)/dashboard/admin/page.tsx (1 hunks)
- apps/web/app/(org)/verify-otp/form.tsx (1 hunks)
- apps/web/app/api/desktop/[...route]/session.ts (0 hunks)
- apps/web/app/api/loom/[...route]/route.ts (1 hunks)
- apps/web/app/api/playlist/route.ts (1 hunks)
- apps/web/app/api/upload/[...route]/signed.ts (1 hunks)
- apps/web/lib/server.ts (2 hunks)
- apps/web/lib/tracing-server.ts (1 hunks)
- apps/web/lib/tracing.ts (1 hunks)
- apps/web/next.config.mjs (2 hunks)
- apps/web/package.json (4 hunks)
- apps/workflow-manager/package.json (1 hunks)
- apps/workflow-manager/src/index.ts (1 hunks)
- apps/workflow-runner/package.json (1 hunks)
- apps/workflow-runner/src/index.ts (1 hunks)
- biome.json (1 hunks)
- package.json (1 hunks)
- packages/database/auth/auth-options.ts (2 hunks)
- packages/database/auth/drizzle-adapter.ts (1 hunks)
- packages/database/index.ts (1 hunks)
- packages/database/package.json (4 hunks)
- packages/database/schema.ts (3 hunks)
- packages/env/index.ts (1 hunks)
- packages/env/package.json (1 hunks)
- packages/env/server.ts (1 hunks)
- packages/local-docker/docker-compose.yml (2 hunks)
- packages/utils/package.json (1 hunks)
- packages/utils/src/index.ts (1 hunks)
- packages/utils/src/index.tsx (0 hunks)
- packages/web-api-contract-effect/package.json (1 hunks)
- packages/web-backend/package.json (1 hunks)
- packages/web-backend/src/Auth.ts (2 hunks)
- packages/web-backend/src/Database.ts (1 hunks)
- packages/web-backend/src/Folders/FoldersPolicy.ts (2 hunks)
- packages/web-backend/src/Folders/FoldersRpcs.ts (1 hunks)
- packages/web-backend/src/Folders/index.ts (2 hunks)
- packages/web-backend/src/Loom/ImportVideo.ts (1 hunks)
- packages/web-backend/src/Loom/index.ts (1 hunks)
- packages/web-backend/src/Organisations/OrganisationsRepo.ts (2 hunks)
- packages/web-backend/src/Rpcs.ts (1 hunks)
- packages/web-backend/src/S3Buckets/S3BucketAccess.ts (3 hunks)
- packages/web-backend/src/S3Buckets/S3BucketsRepo.ts (2 hunks)
- packages/web-backend/src/S3Buckets/index.ts (3 hunks)
- packages/web-backend/src/Spaces/SpacesRepo.ts (2 hunks)
- packages/web-backend/src/Videos/VideosPolicy.ts (2 hunks)
- packages/web-backend/src/Videos/VideosRepo.ts (1 hunks)
- packages/web-backend/src/Videos/VideosRpcs.ts (1 hunks)
- packages/web-backend/src/Videos/index.ts (4 hunks)
- packages/web-backend/src/Workflows.ts (1 hunks)
- packages/web-backend/src/index.ts (1 hunks)
- packages/web-domain/package.json (1 hunks)
- packages/web-domain/src/Authentication.ts (2 hunks)
- packages/web-domain/src/Folder.ts (1 hunks)
- packages/web-domain/src/Policy.ts (1 hunks)
- packages/web-domain/src/Rpcs.ts (1 hunks)
- packages/web-domain/src/S3Bucket.ts (1 hunks)
- packages/web-domain/src/Video.ts (2 hunks)
- packages/web-domain/src/Workflows.ts (1 hunks)
- packages/web-domain/src/index.ts (1 hunks)
- packages/web-domain/src/utils.ts (1 hunks)
- scripts/env-cli.js (1 hunks)
💤 Files with no reviewable changes (3)
- apps/web/app/api/desktop/[...route]/session.ts
- apps/tasks/package.json
- packages/utils/src/index.tsx
🧰 Additional context used
📓 Path-based instructions (5)
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Use strict TypeScript and avoid any; prefer shared types from packages
**/*.{ts,tsx}
: Use Biome to format/lint TypeScript with a 2-space indent
TypeScript file names should be kebab-case (e.g., user-menu.tsx)
Files:
packages/web-domain/src/Policy.ts
packages/web-backend/src/Videos/VideosRpcs.ts
packages/web-domain/src/Folder.ts
packages/utils/src/index.ts
packages/web-backend/src/Loom/index.ts
packages/web-backend/src/Folders/FoldersRpcs.ts
packages/web-domain/src/utils.ts
packages/web-backend/src/Rpcs.ts
packages/web-backend/src/Organisations/OrganisationsRepo.ts
packages/web-backend/src/Workflows.ts
packages/web-backend/src/Folders/FoldersPolicy.ts
apps/workflow-runner/src/index.ts
apps/web/lib/tracing-server.ts
packages/web-backend/src/Videos/VideosPolicy.ts
packages/database/index.ts
apps/web/lib/tracing.ts
apps/workflow-manager/src/index.ts
apps/web/app/api/playlist/route.ts
apps/web/app/api/upload/[...route]/signed.ts
packages/database/auth/auth-options.ts
packages/web-backend/src/Spaces/SpacesRepo.ts
packages/env/index.ts
packages/web-domain/src/Authentication.ts
apps/web/app/api/loom/[...route]/route.ts
packages/database/auth/drizzle-adapter.ts
packages/web-domain/src/S3Bucket.ts
apps/web/app/(org)/dashboard/admin/page.tsx
packages/web-backend/src/Loom/ImportVideo.ts
packages/web-domain/src/Rpcs.ts
packages/web-backend/src/S3Buckets/S3BucketAccess.ts
apps/web/app/(org)/verify-otp/form.tsx
packages/web-backend/src/index.ts
packages/web-backend/src/Folders/index.ts
packages/web-backend/src/S3Buckets/S3BucketsRepo.ts
packages/database/schema.ts
packages/web-domain/src/index.ts
packages/web-backend/src/S3Buckets/index.ts
packages/web-backend/src/Videos/VideosRepo.ts
packages/web-backend/src/Auth.ts
packages/web-domain/src/Workflows.ts
packages/web-backend/src/Videos/index.ts
packages/web-domain/src/Video.ts
packages/web-backend/src/Database.ts
apps/web/lib/server.ts
packages/env/server.ts
apps/web/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
apps/web/**/*.{ts,tsx}
: Use TanStack Query v5 for client-side server state and data fetching in the web app
Mutations should call Server Actions and perform precise cache updates with setQueryData/setQueriesData, avoiding broad invalidations
Prefer Server Components for initial data and pass initialData to client components for React Query hydration
Files:
apps/web/lib/tracing-server.ts
apps/web/lib/tracing.ts
apps/web/app/api/playlist/route.ts
apps/web/app/api/upload/[...route]/signed.ts
apps/web/app/api/loom/[...route]/route.ts
apps/web/app/(org)/dashboard/admin/page.tsx
apps/web/app/(org)/verify-otp/form.tsx
apps/web/lib/server.ts
{apps/web,packages/ui}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
{apps/web,packages/ui}/**/*.{ts,tsx}
: Use Tailwind CSS exclusively for styling in the web app and shared React UI components
Component naming: React components in PascalCase; hooks in camelCase starting with 'use'
Files:
apps/web/lib/tracing-server.ts
apps/web/lib/tracing.ts
apps/web/app/api/playlist/route.ts
apps/web/app/api/upload/[...route]/signed.ts
apps/web/app/api/loom/[...route]/route.ts
apps/web/app/(org)/dashboard/admin/page.tsx
apps/web/app/(org)/verify-otp/form.tsx
apps/web/lib/server.ts
apps/web/app/api/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
When HTTP routes are necessary, implement them under app/api/*, configure CORS correctly, and set precise revalidation
Files:
apps/web/app/api/playlist/route.ts
apps/web/app/api/upload/[...route]/signed.ts
apps/web/app/api/loom/[...route]/route.ts
**/*.{tsx,jsx}
📄 CodeRabbit inference engine (AGENTS.md)
React/Solid components should be named using PascalCase
Files:
apps/web/app/(org)/dashboard/admin/page.tsx
apps/web/app/(org)/verify-otp/form.tsx
🧠 Learnings (4)
📚 Learning: 2025-09-08T16:48:20.727Z
Learnt from: CR
PR: CapSoftware/Cap#0
File: AGENTS.md:0-0
Timestamp: 2025-09-08T16:48:20.727Z
Learning: Applies to **/tauri.ts : Do not edit auto-generated file: tauri.ts
Applied to files:
apps/desktop/src-tauri/tauri.conf.json
📚 Learning: 2025-08-25T10:58:06.142Z
Learnt from: CR
PR: CapSoftware/Cap#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-25T10:58:06.142Z
Learning: Applies to apps/web/**/*.{tsx} : Use Next/Image for remote assets instead of plain img for performance and optimization
Applied to files:
apps/web/next.config.mjs
📚 Learning: 2025-08-25T10:58:06.142Z
Learnt from: CR
PR: CapSoftware/Cap#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-25T10:58:06.142Z
Learning: Applies to **/*.{ts,tsx} : Use strict TypeScript and avoid any; prefer shared types from packages
Applied to files:
packages/utils/package.json
📚 Learning: 2025-08-25T10:58:06.142Z
Learnt from: CR
PR: CapSoftware/Cap#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-25T10:58:06.142Z
Learning: Applies to apps/web/actions/**/*.ts : Place Server Actions for the web app under apps/web/actions and mark them with 'use server'
Applied to files:
packages/web-backend/src/index.ts
🧬 Code graph analysis (24)
packages/web-backend/src/Organisations/OrganisationsRepo.ts (1)
packages/web-backend/src/Database.ts (1)
Database
(9-19)
packages/web-backend/src/Workflows.ts (2)
packages/web-backend/src/Loom/ImportVideo.ts (1)
LoomImportVideoLive
(50-162)packages/web-domain/src/Workflows.ts (1)
Workflows
(8-8)
packages/web-backend/src/Folders/FoldersPolicy.ts (1)
packages/web-backend/src/Database.ts (1)
Database
(9-19)
apps/workflow-runner/src/index.ts (5)
packages/web-domain/src/Workflows.ts (1)
Workflows
(8-8)packages/web-backend/src/Workflows.ts (1)
WorkflowsLayer
(7-7)packages/web-backend/src/Videos/index.ts (1)
Videos
(10-118)packages/web-backend/src/S3Buckets/index.ts (1)
S3Buckets
(13-199)packages/web-backend/src/Database.ts (1)
Database
(9-19)
apps/web/lib/tracing-server.ts (2)
apps/web/lib/server.ts (1)
TracingLayer
(32-32)apps/web/lib/tracing.ts (1)
getTracingConfig
(5-29)
packages/web-backend/src/Videos/VideosPolicy.ts (1)
packages/web-backend/src/Database.ts (1)
Database
(9-19)
apps/web/app/api/playlist/route.ts (1)
packages/database/schema.ts (1)
s3Buckets
(372-382)
packages/web-backend/src/Spaces/SpacesRepo.ts (1)
packages/web-backend/src/Database.ts (1)
Database
(9-19)
apps/web/app/api/loom/[...route]/route.ts (3)
packages/web-domain/src/Workflows.ts (2)
Api
(21-21)Workflows
(8-8)packages/web-domain/src/Authentication.ts (2)
HttpAuthMiddleware
(12-22)CurrentUser
(7-10)apps/web/lib/server.ts (1)
apiToHandler
(144-161)
packages/web-backend/src/Loom/ImportVideo.ts (7)
packages/web-backend/src/Database.ts (1)
DatabaseError
(4-7)packages/web-domain/src/Video.ts (1)
Video
(14-59)packages/web-backend/src/S3Buckets/S3BucketAccess.ts (2)
S3Error
(13-15)S3BucketAccess
(47-299)packages/web-backend/src/Videos/index.ts (1)
Videos
(10-118)packages/web-backend/src/S3Buckets/index.ts (1)
S3Buckets
(13-199)packages/web-domain/src/Workflows.ts (1)
HttpClient
(23-26)packages/web-domain/src/S3Bucket.ts (1)
S3Bucket
(6-14)
packages/web-domain/src/Rpcs.ts (3)
packages/web-domain/src/Workflows.ts (1)
RpcGroup
(9-9)packages/web-domain/src/Video.ts (1)
VideoRpcs
(131-140)packages/web-domain/src/Folder.ts (1)
FolderRpcs
(29-44)
packages/web-backend/src/S3Buckets/S3BucketAccess.ts (1)
packages/web-backend/src/S3Buckets/S3BucketClientProvider.ts (1)
S3BucketClientProvider
(4-13)
packages/web-backend/src/Folders/index.ts (2)
packages/web-backend/src/Folders/FoldersPolicy.ts (1)
FoldersPolicy
(8-47)packages/web-backend/src/Database.ts (1)
Database
(9-19)
packages/web-backend/src/S3Buckets/S3BucketsRepo.ts (3)
packages/database/index.ts (1)
db
(29-34)packages/web-domain/src/S3Bucket.ts (1)
S3Bucket
(6-14)packages/web-backend/src/Database.ts (1)
Database
(9-19)
packages/database/schema.ts (1)
packages/database/helpers.ts (1)
nanoId
(6-9)
packages/web-domain/src/index.ts (1)
packages/web-domain/src/Workflows.ts (1)
Workflows
(8-8)
packages/web-backend/src/S3Buckets/index.ts (4)
packages/web-backend/src/S3Buckets/S3BucketClientProvider.ts (1)
S3BucketClientProvider
(4-13)packages/database/crypto.ts (1)
decrypt
(98-128)packages/web-backend/src/S3Buckets/S3BucketsRepo.ts (1)
S3BucketsRepo
(8-73)packages/web-backend/src/Database.ts (1)
Database
(9-19)
packages/web-backend/src/Videos/VideosRepo.ts (3)
packages/web-domain/src/Video.ts (4)
Video
(14-59)ImportSource
(61-64)VideoId
(10-10)VideoId
(11-11)packages/web-backend/src/Database.ts (1)
Database
(9-19)packages/database/helpers.ts (1)
nanoId
(6-9)
packages/web-backend/src/Auth.ts (2)
packages/database/index.ts (1)
db
(29-34)packages/web-backend/src/Database.ts (1)
Database
(9-19)
packages/web-domain/src/Workflows.ts (3)
apps/web/lib/Rpcs.ts (1)
Rpc
(11-14)packages/web-domain/src/Rpcs.ts (1)
Rpcs
(6-6)packages/web-domain/src/index.ts (1)
Rpcs
(6-6)
packages/web-backend/src/Videos/index.ts (3)
packages/web-backend/src/Videos/VideosRepo.ts (1)
VideosRepo
(15-97)packages/web-backend/src/Database.ts (1)
Database
(9-19)packages/web-backend/src/S3Buckets/index.ts (1)
S3Buckets
(13-199)
packages/web-domain/src/Video.ts (2)
packages/web-domain/src/S3Bucket.ts (2)
S3BucketId
(3-3)S3BucketId
(4-4)packages/web-domain/src/Folder.ts (2)
FolderId
(8-8)FolderId
(9-9)
packages/web-backend/src/Database.ts (1)
packages/database/index.ts (1)
db
(29-34)
apps/web/lib/server.ts (3)
packages/web-domain/src/Workflows.ts (3)
Workflows
(8-8)RpcClient
(12-18)HttpClient
(23-26)packages/env/server.ts (1)
serverEnv
(82-86)packages/web-backend/src/Database.ts (1)
Database
(9-19)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Analyze (rust)
```ts
userId: "loomVideoId123",
orgId: "loomOrgId123",
video: {
  id: payload.loom.videoId,
  name: "loom video name",
```
Hard-coded Loom user/org IDs need to be replaced with actual values.
The Loom user and organization IDs are hard-coded as `"loomVideoId123"` and `"loomOrgId123"`. These should be extracted from the Loom video data or authentication context.
```diff
 yield* workflows.LoomImportVideo({
   payload: {
     cap: {
       userId: user.id,
       orgId: user.activeOrgId,
     },
     loom: {
-      userId: "loomVideoId123",
-      orgId: "loomOrgId123",
+      userId: payload.loom.userId, // Extract from Loom data
+      orgId: payload.loom.orgId, // Extract from Loom data
       video: {
         id: payload.loom.videoId,
         name: "loom video name",
```
You'll also need to update the payload schema to include these fields:
.setPayload(
Schema.Struct({
loom: Schema.Struct({
downloadUrl: Schema.String,
videoId: Schema.String,
+ userId: Schema.String,
+ orgId: Schema.String,
}),
}),
)
🤖 Prompt for AI Agents
In apps/web/app/api/loom/[...route]/route.ts around lines 54 to 58, replace the
hard-coded userId "loomVideoId123" and orgId "loomOrgId123" with the actual
values sourced from the Loom video payload or the authenticated Loom context
(e.g., payload.loom.userId and payload.loom.orgId or from the auth token), and
update the incoming payload TypeScript/schema to require and validate those
fields so the route uses payload.loom.userId and payload.loom.orgId (or
equivalent) instead of constants.
const DatabaseLive = Layer.unwrapEffect(
  Effect.gen(function* () {
    const url = Redacted.make(yield* Config.string("DATABASE_URL"));

    return MysqlClient.layer({ url });
  }),
);
🛠️ Refactor suggestion
Consider adding error handling for missing DATABASE_URL.
The current implementation might fail silently if DATABASE_URL is not configured. Consider adding explicit error handling.
const DatabaseLive = Layer.unwrapEffect(
Effect.gen(function* () {
- const url = Redacted.make(yield* Config.string("DATABASE_URL"));
+ const url = yield* Config.string("DATABASE_URL").pipe(
+ Effect.mapError(() => new Error("DATABASE_URL environment variable is required")),
+ Effect.map(Redacted.make)
+ );
return MysqlClient.layer({ url });
}),
);
🤖 Prompt for AI Agents
In apps/workflow-manager/src/index.ts around lines 8 to 14, the code naively
reads DATABASE_URL and constructs the MysqlClient layer which can fail silently
if the env var is missing; change the Effect.gen block to validate the yielded
value from Config.string and if it's missing/empty produce a clear failure
(throw a new Error or return Effect.fail with message like "Missing required
DATABASE_URL") before calling Redacted.make and MysqlClient.layer so the startup
fails fast with an explicit error message.
NodeClusterRunnerSocket.layer({
  storage: "sql",
  shardingConfig: {
    runnerAddress: Option.some(RunnerAddress.make("localhost", 42069)),
Hard-coded runner address limits deployment flexibility.
The runner address is hard-coded to "localhost:42069", which will cause issues in production deployments where the runner might be on a different host or require service discovery.
Consider making this configurable:
- runnerAddress: Option.some(RunnerAddress.make("localhost", 42069)),
+ runnerAddress: Option.some(RunnerAddress.make(
+ yield* Config.string("RUNNER_HOST").pipe(Config.withDefault("localhost")),
+ yield* Config.number("RUNNER_PORT").pipe(Config.withDefault(42069))
+ )),
Then wrap the layer creation in Layer.unwrapEffect:
-const ClusterWorkflowLive = ClusterWorkflowEngine.layer.pipe(
+const ClusterWorkflowLive = Layer.unwrapEffect(
+ Effect.gen(function* () {
+ const runnerHost = yield* Config.string("RUNNER_HOST").pipe(Config.withDefault("localhost"));
+ const runnerPort = yield* Config.number("RUNNER_PORT").pipe(Config.withDefault(42069));
+ return ClusterWorkflowEngine.layer.pipe(
Layer.provide(
NodeClusterRunnerSocket.layer({
storage: "sql",
shardingConfig: {
- runnerAddress: Option.some(RunnerAddress.make("localhost", 42069)),
+ runnerAddress: Option.some(RunnerAddress.make(runnerHost, runnerPort)),
},
}),
),
Layer.provide(SqlLayer),
+ );
+ }),
);
🤖 Prompt for AI Agents
In apps/workflow-runner/src/index.ts around line 32, the runnerAddress is
hard-coded to "localhost:42069", which prevents flexible deployments; change
this to read from configuration or an environment variable (e.g.,
process.env.RUNNER_HOST and RUNNER_PORT or a single RUNNER_ADDRESS) and
construct RunnerAddress from those values with sensible defaults; validate and
fall back if missing, and then when creating the layer that depends on this
dynamic value, wrap the creation in Layer.unwrapEffect so the layer can be built
from the effect that reads env/config and constructs the RunnerAddress at
runtime.
    "recommended": true,
    "suspicious": {
      "noShadowRestrictedNames": "off"
    }
Don’t disable noShadowRestrictedNames globally
Turning this off repo‑wide permits shadowing identifiers like undefined, NaN, Infinity, arguments, and eval, inviting subtle, hard‑to‑debug bugs. Either keep the recommended default or localize exceptions via inline biome‑ignore on the specific lines/files.
Apply one of these:
Option A — revert to recommended default
"rules": {
- "recommended": true,
- "suspicious": {
- "noShadowRestrictedNames": "off"
- }
+ "recommended": true
}
Option B — keep signal without blocking
"rules": {
"recommended": true,
"suspicious": {
- "noShadowRestrictedNames": "off"
+ "noShadowRestrictedNames": "warn"
}
}
Option C — leave rules as recommended and suppress at call‑sites
Add before the specific line:
/* biome-ignore lint/suspicious/noShadowRestrictedNames: intentional shadowing */
🤖 Prompt for AI Agents
In biome.json around lines 23 to 26, the repository has disabled the
suspicious/noShadowRestrictedNames rule globally which allows shadowing of
restricted identifiers; revert this change by removing the "suspicious": {
"noShadowRestrictedNames": "off" } entry so the recommended setting is restored,
or if this shadowing is intentional, do not disable globally — restore the rule
and either (a) keep a local signal (e.g., set it back to "recommended" or
omitted) or (b) suppress only the specific occurrences by adding an inline
biome-ignore comment immediately before the offending line(s): /* biome-ignore
lint/suspicious/noShadowRestrictedNames: intentional shadowing */ so the rule
remains enabled repo-wide.
package.json (outdated)
  "pnpm": {
    "overrides": {
      "undici": "5.28.4"
    }
  }
❓ Verification inconclusive
Verify undici version compatibility.
The pinned undici version override may resolve dependency conflicts, but should be verified for compatibility with all workspace packages.
Based on the search results, undici version 5.28.4 has a high-severity security vulnerability (CVE-2025-22150) related to "Insecure Randomness" that was fixed in versions 5.28.5, 6.21.1, and 7.2.3. This presents a potential security risk.
The pinned undici version 5.28.4 contains a high-severity security vulnerability. Consider updating to a secure version:
"pnpm": {
"overrides": {
- "undici": "5.28.4"
+ "undici": "5.28.5"
}
}
Update undici override to a patched version (CVE-2025-22150).
undici 5.28.4 is affected by CVE-2025-22150 (insufficient randomness, medium severity); patched in 5.28.5 (also fixed in 6.21.1 and 7.2.3) — update the package.json override to 5.28.5 and verify for regressions. (nvd.nist.gov)
"pnpm": {
"overrides": {
- "undici": "5.28.4"
+ "undici": "5.28.5"
}
}
🤖 Prompt for AI Agents
In package.json around lines 41 to 45, the pnpm override pins undici to 5.28.4
which is vulnerable to CVE-2025-22150; update the override value to "5.28.5" (or
a later patched release such as 6.21.1/7.2.3) and run a fresh install and test
suite to verify no regressions; ensure lockfile (pnpm-lock.yaml) is updated and
CI passes before merging.
  .leftJoin(
    Db.authApiKeys,
    Dz.eq(Db.users.id, Db.authApiKeys.userId),
  )
  .where(Dz.eq(Db.authApiKeys.id, authHeader)),
🧩 Analysis chain
Security: Consider hashing API keys before storage.
The query suggests API keys are stored in plain text (direct equality check). For better security, API keys should be hashed before storage, with only a prefix kept for identification.
🏁 Script executed:
#!/bin/bash
# Check how API keys are stored in the database schema
rg -A10 "authApiKeys.*mysqlTable" packages/database/schema.ts
# Check if there's any hashing logic when creating API keys
rg -B5 -A5 "authApiKeys.*insert|insert.*authApiKeys" --type ts
Length of output: 1185
Security: Stop storing API keys in plaintext — hash tokens and verify by prefix+hash.
Evidence: packages/database/schema.ts defines authApiKeys.id as varchar(36); apps/web/app/api/desktop/[...route]/session.ts inserts crypto.randomUUID() into authApiKeys.id and returns it as api_key; packages/web-backend/src/Auth.ts compares the incoming header directly to authApiKeys.id (Dz.eq(...)).
Actions required:
- Don’t store the raw token. On issuance, generate a long random secret (≥32 bytes), return it once, store a token_prefix (for lookup) and a one‑way token_hash (HMAC-SHA256 with a server-side secret or a memory‑hard hash like argon2id). Use constant‑time comparison for verification.
- Update: schema (add token_prefix/token_hash and adjust types), issuance flow (apps/web/.../session.ts), and Auth lookup (packages/web-backend/src/Auth.ts) to lookup by prefix then verify hash.
- Rotate/invalidate existing plaintext keys via a migration and reissue new hashed tokens.
🤖 Prompt for AI Agents
In packages/web-backend/src/Auth.ts around lines 53 to 57, the code currently
compares the incoming API key header directly to authApiKeys.id (plaintext
lookup); change this to: parse the incoming token into prefix + secret, query
the DB by token_prefix only (no plaintext token lookup), then compute/verify the
one‑way token_hash (use HMAC-SHA256 with a server-side secret or argon2id)
against the stored token_hash using a constant-time comparison; update this file
to perform prefix extraction, DB lookup by token_prefix, and secure hash
verification and reject on mismatch. Also ensure any new fields (token_prefix,
token_hash) and the server secret are read from config/env so the code uses
them, and add clear failure logging without exposing the raw token.
const [_, customBucket] = yield* s3Buckets
  .getProviderForUser(payload.cap.userId)
  .pipe(Effect.catchAll(() => Effect.die(null)));
Replace Effect.die with proper error handling.
Using Effect.die will terminate the fiber with an unrecoverable defect. Consider mapping the error to LoomImportVideoError instead.
-const [_, customBucket] = yield* s3Buckets
- .getProviderForUser(payload.cap.userId)
- .pipe(Effect.catchAll(() => Effect.die(null)));
+const [_, customBucket] = yield* s3Buckets
+ .getProviderForUser(payload.cap.userId)
+ .pipe(Effect.catchAll((error) =>
+ Effect.fail(new DatabaseError({ cause: error }))
+ ));
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In packages/web-backend/src/Loom/ImportVideo.ts around lines 66 to 68, the code
currently uses Effect.die(null) inside catchAll which creates an unrecoverable
defect; replace this with proper failure handling by mapping the caught error
into a LoomImportVideoError and returning a failing effect instead. Concretely,
change the pipe to catchAll(err => Effect.fail(new LoomImportVideoError(err)))
(or use mapError to convert the error to LoomImportVideoError) and ensure
LoomImportVideoError is imported/defined so the effect fails with a typed,
recoverable error rather than dying.
  .get(payload.loom.video.downloadUrl)
  .pipe(Effect.catchAll((cause) => new LoomApiError({ cause })));
Map HTTP errors to LoomApiError correctly.
The current implementation creates a LoomApiError instance incorrectly - it should use Effect.fail to properly fail the effect with the error.
-const resp = yield* http
- .get(payload.loom.video.downloadUrl)
- .pipe(Effect.catchAll((cause) => new LoomApiError({ cause })));
+const resp = yield* http
+ .get(payload.loom.video.downloadUrl)
+ .pipe(Effect.catchAll((cause) => Effect.fail(new LoomApiError({ cause }))));
🤖 Prompt for AI Agents
In packages/web-backend/src/Loom/ImportVideo.ts around lines 112 to 113, the
catch handler constructs a LoomApiError but returns it directly instead of
failing the effect; replace the current Effect.catchAll handler to return
Effect.fail(new LoomApiError({ cause })) so the effect is properly failed with
the LoomApiError, ensuring the stream maps HTTP errors into a failing effect
rather than resolving to an error object.
  orgId: Option.getOrNull(data.orgId ?? Option.none()),
  bucket: Option.getOrNull(data.bucketId ?? Option.none()),
  metadata: Option.getOrNull(data.metadata ?? Option.none()),
  transcriptionStatus: Option.getOrNull(
    data.transcriptionStatus ?? Option.none(),
  ),
  folderId: Option.getOrNull(data.folderId ?? Option.none()),
  width: Option.getOrNull(data.width ?? Option.none()),
  height: Option.getOrNull(data.height ?? Option.none()),
  duration: Option.getOrNull(data.duration ?? Option.none()),
},
🛠️ Refactor suggestion
Redundant Option handling for optional fields.
The pattern Option.getOrNull(data.field ?? Option.none()) is unnecessarily complex. Since these fields are already Option types from the schema, you can use them directly.
-orgId: Option.getOrNull(data.orgId ?? Option.none()),
-bucket: Option.getOrNull(data.bucketId ?? Option.none()),
-metadata: Option.getOrNull(data.metadata ?? Option.none()),
-transcriptionStatus: Option.getOrNull(
- data.transcriptionStatus ?? Option.none(),
-),
-folderId: Option.getOrNull(data.folderId ?? Option.none()),
-width: Option.getOrNull(data.width ?? Option.none()),
-height: Option.getOrNull(data.height ?? Option.none()),
-duration: Option.getOrNull(data.duration ?? Option.none()),
+orgId: Option.getOrNull(data.orgId),
+bucket: Option.getOrNull(data.bucketId),
+metadata: Option.getOrNull(data.metadata),
+transcriptionStatus: Option.getOrNull(data.transcriptionStatus),
+folderId: Option.getOrNull(data.folderId),
+width: Option.getOrNull(data.width),
+height: Option.getOrNull(data.height),
+duration: Option.getOrNull(data.duration),
🤖 Prompt for AI Agents
In packages/web-backend/src/Videos/VideosRepo.ts around lines 61 to 71, the code
uses the redundant pattern Option.getOrNull(data.field ?? Option.none()) for
optional fields; remove the unnecessary nullish fallback and call
Option.getOrNull with the field directly (e.g., Option.getOrNull(data.orgId),
Option.getOrNull(data.bucketId), etc.), keeping each optional mapping consistent
and concise.
Actionable comments posted: 10
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (17)
packages/web-domain/src/S3Bucket.ts (1)
1-14: Use Schema.Redacted for credentials; stop emitting raw secrets in responses
- packages/web-domain/src/S3Bucket.ts — change accessKeyId / secretAccessKey from Schema.String to Schema.Redacted.
- packages/web-api-contract-effect/src/index.ts and packages/web-api-contract/src/desktop.ts — update contract/config types to use Schema.Redacted / Config.redacted (instead of plain string).
- apps/web/app/api/desktop/[...route]/s3Config.ts — currently builds/returns decryptedConfig.accessKeyId and decryptedConfig.secretAccessKey; remove or tightly scope this behavior (only to fully-trusted clients) and document justification.
- Audit decrypt-and-use paths (packages/web-backend/src/S3Buckets/index.ts, apps/web/utils/s3.ts) to ensure secrets are only in memory when needed and never logged; DB already uses encryptedText for storage (packages/database/schema.ts:379–380).
- Add automated checks/tests or linter rules to prevent logging/serialization of secret fields.
apps/web/actions/folders/moveVideoToFolder.ts (1)
26-31
: Critical auth bug + overbroad UPDATE: restrict by org/space and 404 when not found.
- currentVideo lookup and videos UPDATE are not scoped to the user’s activeOrganizationId.
- spaceVideos UPDATE only filters by videoId; it will update all space rows for that video.
Fix:
-  const [currentVideo] = await db()
-    .select({ folderId: videos.folderId, id: videos.id })
-    .from(videos)
-    .where(eq(videos.id, videoId));
+  const [currentVideo] = await db()
+    .select({ folderId: videos.folderId, id: videos.id, orgId: videos.orgId })
+    .from(videos)
+    .where(and(eq(videos.id, videoId), eq(videos.orgId, user.activeOrganizationId)));
+  if (!currentVideo) {
+    throw new Error("Video not found or not accessible");
+  }
   if (spaceId) {
     await db()
       .update(spaceVideos)
       .set({
         folderId: folderId === null ? null : folderId,
       })
-      .where(eq(spaceVideos.videoId, videoId));
+      .where(and(eq(spaceVideos.videoId, videoId), eq(spaceVideos.spaceId, spaceId)));
   }
   await db()
     .update(videos)
     .set({
       folderId,
       updatedAt: new Date(),
     })
-    .where(eq(videos.id, videoId));
+    .where(and(eq(videos.id, videoId), eq(videos.orgId, user.activeOrganizationId)));
Also applies to: 50-57, 61-67
apps/web/actions/folders/updateFolder.ts (1)
63-71
: Critical: update is not scoped to the active org (authorization gap).
A user could update any folderId across orgs when parentId is undefined. Restrict by organization.
-  await db()
-    .update(folders)
-    .set({
-      ...(name !== undefined ? { name } : {}),
-      ...(color !== undefined ? { color } : {}),
-      ...(parentId !== undefined ? { parentId } : {}),
-    })
-    .where(eq(folders.id, folderId));
+  await db()
+    .update(folders)
+    .set({
+      ...(name !== undefined ? { name } : {}),
+      ...(color !== undefined ? { color } : {}),
+      ...(parentId !== undefined ? { parentId } : {}),
+    })
+    .where(and(eq(folders.id, folderId), eq(folders.organizationId, user.activeOrganizationId)));
apps/web/actions/caps/share.ts (3)
51-61
: organizations.id is filtered with spaceIds — likely conflated inputs.
Using spaceIds to select organizations implies the input mixes space and org IDs. If callers pass only space IDs, directOrgIds will be empty and, combined with logic below, can unintentionally drop all org-level shares.
Minimal safeguard (non-breaking): only reconcile org shares when explicit org IDs are present.
-  const directOrgIds = await db()
+  const directOrgIds = await db()
     .select()
     .from(organizations)
     .where(
       and(
         inArray(organizations.id, spaceIds),
         inArray(organizations.id, userOrganizationIds),
       ),
     )
     .then((orgs) => orgs.map((org) => org.id));
And gate the org-share reconciliation (see next comment’s diff).
72-107
: Do not reconcile org shares when no orgs were provided.
Without this guard, passing only space IDs will delete all existing org shares.
Apply:
-  const organizationIds = directOrgIds;
-
-  const currentSharedOrganizations = await db()
-    .select()
-    .from(sharedVideos)
-    .where(eq(sharedVideos.videoId, capId));
+  const organizationIds = directOrgIds;
+  const shouldUpdateOrgShares = organizationIds.length > 0;
+
+  let currentSharedOrganizations = [] as typeof sharedVideos.$inferSelect[];
+  if (shouldUpdateOrgShares) {
+    currentSharedOrganizations = await db()
+      .select()
+      .from(sharedVideos)
+      .where(eq(sharedVideos.videoId, capId));
+  }
-  for (const sharedOrganization of currentSharedOrganizations) {
-    if (!organizationIds.includes(sharedOrganization.organizationId)) {
-      await db()
-        .delete(sharedVideos)
-        .where(
-          and(
-            eq(sharedVideos.videoId, capId),
-            eq(
-              sharedVideos.organizationId,
-              sharedOrganization.organizationId,
-            ),
-          ),
-        );
-    }
-  }
+  if (shouldUpdateOrgShares) {
+    for (const sharedOrganization of currentSharedOrganizations) {
+      if (!organizationIds.includes(sharedOrganization.organizationId)) {
+        await db()
+          .delete(sharedVideos)
+          .where(
+            and(
+              eq(sharedVideos.videoId, capId),
+              eq(sharedVideos.organizationId, sharedOrganization.organizationId),
+            ),
+          );
+      }
+    }
+  }
-  for (const organizationId of organizationIds) {
-    const existingShare = currentSharedOrganizations.find(
-      (share) => share.organizationId === organizationId,
-    );
-    if (!existingShare) {
-      await db().insert(sharedVideos).values({
-        id: nanoId(),
-        videoId: capId,
-        organizationId: organizationId,
-        sharedByUserId: user.id,
-      });
-    }
-  }
+  if (shouldUpdateOrgShares) {
+    for (const organizationId of organizationIds) {
+      const existingShare = currentSharedOrganizations.find(
+        (share) => share.organizationId === organizationId,
+      );
+      if (!existingShare) {
+        await db().insert(sharedVideos).values({
+          id: nanoId(),
+          videoId: capId,
+          organizationId,
+          sharedByUserId: user.id,
+        });
+      }
+    }
+  }
74-107
: Make the multi-step share update atomic (wrap in a DB transaction).
Multiple deletes/inserts/updates can leave state inconsistent on failure. Wrap share reconciliation and public toggle in a single transaction.
 export async function shareCap({
   capId,
   spaceIds,
   public: isPublic,
 }: ShareCapParams) {
   try {
     const user = await getCurrentUser();
@@
-    const userOrganizations = await db()
+    const dbc = db();
+    await dbc.transaction(async (tx) => {
+      const userOrganizations = await tx
       .select({
         organizationId: organizationMembers.organizationId,
       })
       .from(organizationMembers)
       .where(eq(organizationMembers.userId, user.id));
@@
-    const directOrgIds = await db()
+    const directOrgIds = await tx
       .select()
       .from(organizations)
       .where(
         and(
           inArray(organizations.id, spaceIds),
           inArray(organizations.id, userOrganizationIds),
         ),
       )
       .then((orgs) => orgs.map((org) => org.id));
@@
-    const spacesData = await db()
+    const spacesData = await tx
       .select()
       .from(spaces)
       .where(
         and(
           inArray(spaces.id, spaceIds),
           inArray(spaces.organizationId, userOrganizationIds),
         ),
       );
@@
-    let currentSharedOrganizations = [] as typeof sharedVideos.$inferSelect[];
+    let currentSharedOrganizations = [] as typeof sharedVideos.$inferSelect[];
     if (shouldUpdateOrgShares) {
-      currentSharedOrganizations = await db()
+      currentSharedOrganizations = await tx
         .select()
         .from(sharedVideos)
         .where(eq(sharedVideos.videoId, capId));
     }
@@
-      await db()
+      await tx
         .delete(sharedVideos)
         .where(
           and(
             eq(sharedVideos.videoId, capId),
             eq(sharedVideos.organizationId, sharedOrganization.organizationId),
           ),
         );
@@
-      await db().insert(sharedVideos).values({
+      await tx.insert(sharedVideos).values({
         id: nanoId(),
         videoId: capId,
         organizationId,
         sharedByUserId: user.id,
       });
@@
-    const currentSpaceVideos = await db()
+    const currentSpaceVideos = await tx
       .select()
       .from(spaceVideos)
       .where(eq(spaceVideos.videoId, capId));
@@
-      await db()
+      await tx
         .delete(spaceVideos)
         .where(
           and(
             eq(spaceVideos.videoId, capId),
             eq(spaceVideos.spaceId, spaceVideo.spaceId),
           ),
         );
@@
-      await db().insert(spaceVideos).values({
+      await tx.insert(spaceVideos).values({
         id: nanoId(),
         videoId: capId,
         spaceId: spaceId,
         addedById: user.id,
       });
@@
-    if (typeof isPublic === "boolean") {
-      await db()
-        .update(videos)
-        .set({ public: isPublic })
-        .where(eq(videos.id, capId));
-    }
+    if (typeof isPublic === "boolean") {
+      await tx
+        .update(videos)
+        .set({ public: isPublic })
+        .where(eq(videos.id, capId));
+    }
+  });
Also applies to: 111-141, 144-149
apps/web/app/api/thumbnail/route.ts (2)
63-65
: Don’t trust userId from query to build S3 key; derive from DB.
Use video.ownerId and video.id to avoid mismatch/IDOR and reduce false negatives.
- const prefix = `${userId}/${videoId}/`;
+ const prefix = `${video.ownerId}/${video.id}/`;
Optionally deprecate the userId query param since it’s unused after this change.
67-73
: CORS origin reflection — enforce allow-list in getHeaders.
apps/web/utils/helpers.ts:getHeaders currently sets Access-Control-Allow-Origin to origin || '*' (reflects arbitrary Origin) — validate origin against allowedOrigins (or reuse apps/web/utils/cors.ts, which already checks the allow-list) so only trusted origins are echoed; update callers as needed.
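An allow-list check of the kind described could be sketched like this (illustrative; the `allowedOrigins` values are assumptions, not this repo's config):

```typescript
// Only echo origins that are explicitly allowed; never reflect the header blindly.
// The set contents are placeholder assumptions for illustration.
const allowedOrigins = new Set(["https://cap.so", "https://cap.link"]);

export function corsOriginFor(requestOrigin: string | null): string | null {
  return requestOrigin && allowedOrigins.has(requestOrigin)
    ? requestOrigin
    : null; // caller omits the Access-Control-Allow-Origin header entirely
}
```

Returning null for unknown origins (rather than falling back to "*") keeps credentialed requests from being readable cross-origin by arbitrary sites.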
apps/web/app/api/video/domain-info/route.ts (2)
67-71
: domainVerified type bug: returns Date | false, not boolean.
Clients likely expect boolean. Convert to Boolean(timestamp).
- domainVerified: organization[0].domainVerified || false,
+ domainVerified: Boolean(organization[0].domainVerified),
88-91
: Same boolean fix for ownerOrganizations response.
- domainVerified: ownerOrganizations[0].domainVerified || false,
+ domainVerified: Boolean(ownerOrganizations[0].domainVerified),
apps/web/actions/videos/password.ts (1)
82-105
: Harden cookie: set httpOnly/secure/sameSite and finite maxAge.
Currently sets a client-accessible cookie with an encrypted hash and default attributes. Make it httpOnly, secure, and set a sane TTL.
- cookies().set("x-cap-password", await encrypt(video.password));
+ cookies().set(
+   "x-cap-password",
+   await encrypt(video.password),
+   {
+     httpOnly: true,
+     secure: true,
+     sameSite: "lax",
+     path: "/",
+     maxAge: 60 * 60 * 24 * 7, // 7 days
+   },
+ );
22-48
: Unauthorized read of private transcripts via Server Action.The action fetches and returns transcript content without checking authorization, enabling access to private videos if the ID is known.
Minimal fix: allow only public videos or the owner. Follow-up: align with
VideosPolicy
to also allow org-shared viewers.const result = query[0]; if (!result?.video) { return { success: false, message: "Video information is missing" }; } const { video } = result; + // Basic authorization gate: public or owner only. + // TODO(brendonovich): Align with VideosPolicy to support org-shared visibility. + if (!video.public) { + if (!user || user.id !== video.ownerId) { + return { success: false, message: "Unauthorized" }; + } + }apps/web/app/api/desktop/[...route]/video.ts (2)
81-86
: Prevent ID enumeration; scope lookup by owner.
Without owner scoping, a user can detect others’ video existence.
- .where(eq(videos.id, Video.VideoId.make(videoId)));
+ .where(
+   and(
+     eq(videos.id, Video.VideoId.make(videoId)),
+     eq(videos.ownerId, user.id),
+   ),
+ );
199-204
: Restrict delete prefetch to owner to avoid leaking existence and 500s.
Current select fetches any user’s video; later S3 ops may fail.
- .where(eq(videos.id, videoId));
+ .where(and(eq(videos.id, videoId), eq(videos.ownerId, user.id)));
packages/database/auth/auth-options.ts (2)
33-36
: Don’t ship NextAuth debug=true to production.
Gate by env to avoid leaking internals and PII in logs.
- debug: true,
+ debug: serverEnv().NODE_ENV !== "production",
97-101
: Remove sensitive logging (email + OTP).
This logs secrets in prod when RESEND_API_KEY is set.
- console.log({ identifier, token });
+ // Avoid logging sensitive identifiers or OTPs in production.
  const { OTPEmail } = await import("../emails/otp-email");
  const email = OTPEmail({ code: token, email: identifier });
- console.log({ email });
  await sendEmail({
    email: identifier,
    subject: `Your Cap Verification Code`,
    react: email,
  });
46-47
: React version mismatch likely to conflict with Next 14 and react-email peers.Having react/react-dom 19.x in this package can cause duplicate Reacts and peer conflicts. Align with workspace React (likely 18.x) or remove from this package if not required directly.
- "react": "^19.1.0", - "react-dom": "^19.1.0", + "react": "workspace:*", + "react-dom": "workspace:*",If this package only needs react-email, consider moving React to peerDependencies or relying on the app’s root dependency.
Also applies to: 31-33
♻️ Duplicate comments (4)
apps/web/app/api/loom/[...route]/route.ts (1)
`50-52`: Replace hard-coded Loom IDs with payload/context.

Use the actual loom.userId/orgId from the request payload or token.

```diff
-userId: "loomVideoId123",
-orgId: "loomOrgId123",
+userId: payload.loom.userId,
+orgId: payload.loom.orgId,
```

packages/web-backend/src/Videos/VideosRepo.ts (1)
`61-71`: Simplify redundant `Option.none()` fallbacks.

`Option.getOrNull(x ?? Option.none())` is unnecessary; pass the option directly.

Apply:

```diff
-orgId: Option.getOrNull(data.orgId ?? Option.none()),
-bucket: Option.getOrNull(data.bucketId ?? Option.none()),
-metadata: Option.getOrNull(data.metadata ?? Option.none()),
-transcriptionStatus: Option.getOrNull(
-  data.transcriptionStatus ?? Option.none(),
-),
-folderId: Option.getOrNull(data.folderId ?? Option.none()),
-width: Option.getOrNull(data.width ?? Option.none()),
-height: Option.getOrNull(data.height ?? Option.none()),
-duration: Option.getOrNull(data.duration ?? Option.none()),
+orgId: Option.getOrNull(data.orgId),
+bucket: Option.getOrNull(data.bucketId),
+metadata: Option.getOrNull(data.metadata),
+transcriptionStatus: Option.getOrNull(data.transcriptionStatus),
+folderId: Option.getOrNull(data.folderId),
+width: Option.getOrNull(data.width),
+height: Option.getOrNull(data.height),
+duration: Option.getOrNull(data.duration),
```

packages/web-backend/src/Loom/ImportVideo.ts (2)
`65-67`: Replace `Effect.die` with a typed failure (or let errors bubble).

Dying bypasses your workflow’s typed error channel and can crash fibers.

Apply:

```diff
-const [_, customBucket] = yield* s3Buckets
-  .getProviderForUser(payload.cap.userId)
-  .pipe(Effect.catchAll(() => Effect.die(null)));
+const [_, customBucket] = yield* s3Buckets
+  .getProviderForUser(payload.cap.userId)
+  .pipe(Effect.catchAll((cause) =>
+    Effect.fail(new DatabaseError({ cause }))
+  ));
```

Alternatively, drop the `catchAll` so typed errors propagate to `Activity.error`.
`110-113`: Bug: `catchAll` returns a success `LoomApiError` object, breaking downstream usage.

`Headers.get(resp.headers, …)` will explode because `resp` can be a `LoomApiError`. You must fail the effect.

Apply:

```diff
-const resp = yield* http
-  .get(payload.loom.video.downloadUrl)
-  .pipe(Effect.catchAll((cause) => new LoomApiError({ cause })));
+const resp = yield* http
+  .get(payload.loom.video.downloadUrl)
+  .pipe(Effect.catchAll((cause) => Effect.fail(new LoomApiError({ cause }))));
```
🧹 Nitpick comments (85)
apps/web/app/(org)/dashboard/folder/[id]/components/ClientMyCapsLink.tsx (4)
`13-13`: Use a type-only import to avoid bundling domain code into the client.

`Video` is used only for types here; switch to a type-only namespace import to ensure it’s erased at build time.

Apply:

```diff
-import { Video } from "@cap/web-domain";
+import type * as Video from "@cap/web-domain";
```

If `@cap/web-domain` doesn’t support a namespace type import, keep the value import but confirm no runtime usage of `Video`.
`22-31`: Harden mobile drop payload validation; don’t trust shape or types from registerDropTarget.

The callback accepts untyped data and forwards it as a branded `VideoId`, bypassing the benefit of the brand. Validate the payload before calling `handleDrop`.

Apply:

```diff
-const unregister = registerDropTarget(linkRef.current, (data) => {
-  if (data && data.type === "application/cap") {
-    handleDrop({ id: data.id, name: data.name });
-  }
-});
+const unregister = registerDropTarget(linkRef.current, (data: unknown) => {
+  const d = data as { type?: string; id?: unknown; name?: unknown };
+  if (
+    d?.type === "application/cap" &&
+    typeof d.id === "string" &&
+    typeof d.name === "string"
+  ) {
+    handleDrop({ id: d.id as unknown as Video.VideoId, name: d.name });
+  } else {
+    console.error("Invalid cap drag data");
+  }
+});
```

Optionally export a typed payload from `ClientCapCard` so this remains strongly typed end-to-end.
`51-75`: Fix stuck highlight on error and strengthen event narrowing + overloads for clarity.

- The current check `"preventDefault" in e` can mis-narrow; prefer `"dataTransfer" in e`.
- If JSON.parse fails or the payload is invalid, `isDragOver` can remain true.
- Add an overload and a minimal type guard, plus a duplicate-drop guard.

Apply:

```diff
-const handleDrop = async (
-  e: React.DragEvent<HTMLAnchorElement> | { id: Video.VideoId; name: string },
-) => {
-  if ("preventDefault" in e) {
-    e.preventDefault();
-    try {
-      // Get the cap data from the data transfer
-      const capData = JSON.parse(e.dataTransfer.getData("application/cap"));
-
-      if (!capData || !capData.id) {
-        console.error("Invalid cap data");
-        return;
-      }
-
-      await processDrop(capData);
-    } catch (error) {
-      console.error("Error processing drop:", error);
-      toast.error("Failed to move video");
-    }
-  } else {
-    // Handle mobile drop with direct data
-    await processDrop(e);
-  }
-};
+type DropPayload = { id: Video.VideoId; name: string };
+const isDropPayload = (v: unknown): v is DropPayload =>
+  !!v && typeof (v as any).name === "string" && typeof (v as any).id === "string";
+
+async function handleDrop(e: React.DragEvent<HTMLAnchorElement>): Promise<void>;
+async function handleDrop(data: DropPayload): Promise<void>;
+async function handleDrop(
+  e: React.DragEvent<HTMLAnchorElement> | DropPayload,
+): Promise<void> {
+  if (isMovingVideo) return;
+  if ("dataTransfer" in (e as any)) {
+    const evt = e as React.DragEvent<HTMLAnchorElement>;
+    evt.preventDefault();
+    try {
+      const raw = evt.dataTransfer.getData("application/cap");
+      const parsed = raw ? JSON.parse(raw) : null;
+      if (!isDropPayload(parsed)) {
+        console.error("Invalid cap data");
+        return;
+      }
+      await processDrop(parsed);
+    } catch (error) {
+      console.error("Error processing drop:", error);
+      toast.error("Failed to move video");
+    } finally {
+      setIsDragOver(false);
+    }
+  } else {
+    if (!isDropPayload(e)) {
+      console.error("Invalid cap data");
+      return;
+    }
+    await processDrop(e);
+  }
+}
```
`1-1`: Rename file to kebab-case to match repo conventions.

Per guidelines, use kebab-case for TS/TSX filenames: `client-my-caps-link.tsx`.

packages/web-domain/src/S3Bucket.ts (2)
`1-1`: Rename file to kebab‑case per repo guidelines.

`S3Bucket.ts` → `s3-bucket.ts`.
8-11
: Tighten validation for S3 bucket fields.

Found `region: Schema.String` in two locations — tighten validations across both.
- region: restrict to valid AWS region format (pattern or literal union). Locations: packages/web-domain/src/S3Bucket.ts:9, packages/web-api-contract-effect/src/index.ts:30.
- endpoint: validate URL shape or hostname instead of free string (replace Schema.OptionFromNullOr(Schema.String) with URL/hostname validator).
- name, ownerId: enforce non-empty (use non-empty string schema or minLength > 0).
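If it helps, a minimal sketch of the endpoint check as a plain predicate (the function name is hypothetical; a real fix would encode this as a Schema filter or pattern rather than a standalone helper):

```typescript
// Hypothetical helper: accept only http(s) URLs for a custom S3 endpoint.
// Relies on the URL constructor throwing on malformed input.
export function isValidS3Endpoint(value: string): boolean {
  try {
    const url = new URL(value);
    // Restrict to schemes an S3-compatible endpoint can actually use.
    return url.protocol === "https:" || url.protocol === "http:";
  } catch {
    // new URL throws a TypeError for non-URL strings.
    return false;
  }
}
```

The same predicate shape translates directly into a `Schema.filter` on the existing `Schema.String`.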
apps/web/actions/folders/moveVideoToFolder.ts (4)
71-73
: Avoid revalidating a “null” folder path for spaces. When folderId is null this generates /folder/null. Guard it:
```diff
-if (spaceId) {
-  revalidatePath(`/dashboard/spaces/${spaceId}/folder/${folderId}`);
-}
+if (spaceId) {
+  revalidatePath(`/dashboard/spaces/${spaceId}`);
+  if (folderId) {
+    revalidatePath(`/dashboard/spaces/${spaceId}/folder/${folderId}`);
+  }
+}
```
63-65
: Let the DB manage updatedAt (the schema already has onUpdateNow). Remove the explicit updatedAt to avoid drift with DB timestamps.
```diff
-.set({
-  folderId,
-  updatedAt: new Date(),
-})
+.set({ folderId })
```
50-67
: Wrap multi-table writes in a transaction. Keeps spaceVideos and videos updates consistent on failure.
```diff
-if (spaceId) {
-  await db()
-    .update(spaceVideos)
-    .set({ folderId: folderId === null ? null : folderId })
-    .where(and(eq(spaceVideos.videoId, videoId), eq(spaceVideos.spaceId, spaceId)));
-}
-
-await db()
-  .update(videos)
-  .set({ folderId })
-  .where(and(eq(videos.id, videoId), eq(videos.orgId, user.activeOrganizationId)));
+await db().transaction(async (tx) => {
+  if (spaceId) {
+    await tx
+      .update(spaceVideos)
+      .set({ folderId: folderId === null ? null : folderId })
+      .where(and(eq(spaceVideos.videoId, videoId), eq(spaceVideos.spaceId, spaceId)));
+  }
+  await tx
+    .update(videos)
+    .set({ folderId })
+    .where(and(eq(videos.id, videoId), eq(videos.orgId, user.activeOrganizationId)));
+});
```
1-1
: Rename file to kebab-case per guidelines. Suggested: apps/web/actions/folders/move-video-to-folder.ts
apps/web/actions/folders/updateFolder.ts (4)
53-56
: Scope ancestor lookups to the org as well. A tiny hardening to avoid cross-tenant leakage in the traversal.
```diff
-.where(eq(folders.id, currentParentId));
+.where(and(eq(folders.id, currentParentId), eq(folders.organizationId, user.activeOrganizationId)));
```
71-73
: Optionally revalidate impacted parent folders when the parent changes. If parentId changed, also revalidate the new and previous parent folders to refresh counts/lists.
If you want, I can wire this by first fetching the current parentId, comparing it, and revalidating both paths.
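A hedged sketch of that wiring, reduced to a pure helper so the path set is easy to test (the names `prevParentId`/`nextParentId` are illustrative, not from the action as written):

```typescript
// Hypothetical helper: compute the dashboard folder paths to revalidate
// when a folder moves between parents. Deduplicates via a Set so an
// unchanged parent is only revalidated once.
export function pathsToRevalidate(
  folderId: string,
  prevParentId: string | null,
  nextParentId: string | null,
): string[] {
  const paths = new Set<string>([`/dashboard/folder/${folderId}`]);
  for (const parent of [prevParentId, nextParentId]) {
    if (parent) paths.add(`/dashboard/folder/${parent}`);
  }
  return [...paths];
}
```

The action would then fetch the current parentId before updating, call this helper, and pass each path to `revalidatePath`.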
1-1
: Rename file to kebab-case per guidelines. Suggested: apps/web/actions/folders/update-folder.ts
6-6
: Prefer the domain FolderColor for folder color. Import and use Folder.FolderColor from @cap/web-domain (defined in packages/web-domain/src/Folder.ts) instead of an inline union to avoid type drift — update apps/web/actions/folders/updateFolder.ts accordingly.
apps/web/actions/caps/share.ts (4)
134-139
: Consider idempotent inserts to avoid duplicates. If a unique constraint exists on (videoId, spaceId), prefer an upsert to handle retries safely.
```diff
-await db().insert(spaceVideos).values({
+await db()
+  .insert(spaceVideos)
+  .values({
     id: nanoId(),
     videoId: capId,
     spaceId: spaceId,
     addedById: user.id,
-});
+  })
+  .onConflictDoNothing(); // or onConflictDoUpdate(...) if needed
```

Same for the sharedVideos insert above.
109-109
: Naming nit: spacesIds → spaceIdsToApply.

A minor readability nit to avoid confusion with the input `spaceIds`.

```diff
-const spacesIds = spacesData.map((space) => space.id);
+const spaceIdsToApply = spacesData.map((space) => space.id);
@@
-for (const spaceId of spacesIds) {
+for (const spaceId of spaceIdsToApply) {
```

Also applies to: 129-129
18-22
: Prefer domain-typed IDs for spaces (and orgs) for consistency.

To align with `Video.VideoId`, type `spaceIds` (and any new `orgIds`) with domain ID brands.

```diff
-import type { Video } from "@cap/web-domain";
+import type { Video, Space, Organization } from "@cap/web-domain";
@@
-interface ShareCapParams {
-  capId: Video.VideoId;
-  spaceIds: string[];
-  public?: boolean;
-}
+interface ShareCapParams {
+  capId: Video.VideoId;
+  spaceIds: Space.SpaceId[];
+  orgIds?: Organization.OrganizationId[]; // optional explicit org targets
+  public?: boolean;
+}
```

Pair this with removing the org/space overloading noted earlier.
155-156
: Use structured logging; avoid noisy stack dumps in Server Actions. Swap to your logger (with request/user context) and avoid logging full error objects if they can contain sensitive info.
```diff
-console.error("Error sharing cap:", error);
+logger.error({ err: error, capId, userId: user?.id }, "Error sharing cap");
```

apps/web/app/(org)/dashboard/caps/components/UploadCapButton.tsx (3)
285-293
: Revoke the thumbnail object URL to avoid small memory leaks. After using thumbnailUrl, revoke it. Since the parent keeps it in state, revoke in the parent when clearing/resetting the preview (e.g., in onComplete or an effect cleanup).
Example (parent):
```diff
-setUploadingThumbnailUrl(undefined);
+if (thumbnailUrlRef.current) URL.revokeObjectURL(thumbnailUrlRef.current);
+setUploadingThumbnailUrl(undefined);
```

Also applies to: 371-373
1-1
: Rename file to kebab-case per guidelines. upload-cap-button.tsx is the expected filename style.
27-27
: Remove unused public prop. `grey` isn’t used; trim it from the component’s API to reduce surface area.
```diff
-size?: "sm" | "lg" | "md";
-grey?: boolean;
+size?: "sm" | "lg" | "md";
```

apps/web/app/(org)/dashboard/folder/[id]/components/BreadcrumbItem.tsx (1)
1-1
: Rename file to kebab-case per guidelines. breadcrumb-item.tsx is the expected filename style.
apps/web/app/(org)/dashboard/folder/[id]/components/UploadCapButtonWithFolder.tsx (2)
29-36
: Duplicate router.refresh — remove one to avoid double revalidation. UploadCapButton already calls router.refresh() after completion; doing it here too triggers a redundant refresh.
```diff
 setUploadProgress(0);
-router.refresh();
```
1-1
: Rename file to kebab-case per guidelines. upload-cap-button-with-folder.tsx is the expected filename style.
apps/web/lib/folder.ts (3)
35-59
: Breadcrumb now uses Folder.FolderId — align helper signature and avoid repeated revalidate.
- Consider changing getFolderById to accept Folder.FolderId (not string | undefined) for consistency.
- getFolderById revalidates the path; called in a loop here, this can cause multiple revalidations. Prefer revalidating once at the end (as you already do here).
```diff
-export async function getFolderById(folderId: string | undefined) {
-  if (!folderId) throw new Error("Folder ID is required");
+export async function getFolderById(folderId: Folder.FolderId) {
   const [folder] = await db()
     .select()
     .from(folders)
     .where(eq(folders.id, Folder.FolderId.make(folderId)));
   if (!folder) throw new Error("Folder not found");
-  revalidatePath(`/dashboard/folder/${folderId}`);
   return folder;
 }
```
62-101
: Prefer inArray over manual IN construction for readability and query planning. Use Drizzle’s inArray to build the IN clause instead of sql.join, and let the driver handle parameterization.
```diff
-import { and, desc, eq } from "drizzle-orm";
+import { and, desc, eq, inArray } from "drizzle-orm";
@@
-.where(
-  sql`${spaceVideos.videoId} IN (${sql.join(
-    videoIds.map((id) => sql`${id}`),
-    sql`, `,
-  )})`,
-);
+.where(inArray(spaceVideos.videoId, videoIds));
@@
-.where(
-  sql`${sharedVideos.videoId} IN (${sql.join(
-    videoIds.map((id) => sql`${id}`),
-    sql`, `,
-  )})`,
-);
+.where(inArray(sharedVideos.videoId, videoIds));
```
144-239
: Signature switch to Folder.FolderId looks good — tighten types when calling getSharedSpacesForVideos. Ensure the array passed is typed as Video.VideoId[] to match the helper’s signature.
```diff
-const videoIds = videoData.map((video) => video.id);
+const videoIds = videoData.map((video) => video.id as Video.VideoId);
```

apps/web/app/api/thumbnail/route.ts (2)
41-49
: Use 404 (Not Found), not 401 (Unauthorized). This path is a missing resource, not an auth failure.
```diff
-status: 401,
+status: 404,
```
54-57
: Consistent status code for not found. Same here: return 404.
```diff
-status: 401,
+status: 404,
```

apps/web/app/api/video/domain-info/route.ts (2)
26-32
: ownerId is non-null in the schema; this 500 branch is effectively unreachable. Consider removing it or downgrading it to an assertion.
1-6
: Set explicit revalidation and ensure CORS per guidelines. Add no-cache and, if cross-origin usage is needed, consistent headers like the thumbnail route.
Add (outside this range):
```ts
export const revalidate = 0;
```

If this is called cross-origin, return `Response.json(data, { headers: getHeaders(origin) })`.
apps/web/app/(org)/dashboard/caps/components/SharingDialog.tsx (2)
25-25
: LGTM on adopting shared ID type; optional import simplificationType‑only import is correct. If
VideoId
is exported directly, prefer importing it instead of theVideo
namespace for brevity.-import type { Video } from "@cap/web-domain"; +import type { VideoId } from "@cap/web-domain";
71-71
: Mutation vars typed to Video.VideoId — optional tidy‑up and cache updateConsider a small alias (e.g.,
type UpdateSharingVars = { capId: Video.VideoId; spaceIds: string[]; public: boolean }
) and use it foruseMutation
variables to avoid repetition. Also consider a precise React Query cache update inonSuccess
(e.g.,setQueryData
for the cap detail/list) per our web guidelines, instead of relying solely ononSharingUpdated
.apps/web/app/(org)/dashboard/caps/components/PasswordDialog.tsx (5)
40-55
: Type the mutations, clear sensitive state on success, and do a precise cache touch.
- Add generics so
result
/e
aren’tunknown
under strict TS.- Clear
password
on success to avoid retaining sensitive input in memory.- Invalidate the specific video query (narrow, not broad) to refresh UI per guidelines.
- const updatePassword = useMutation({ + const updatePassword = useMutation<string, Error>({ mutationFn: () => setVideoPassword(videoId, password).then((v) => { if (v.success) return v.value; throw new Error(v.error); }), - onSuccess: (result) => { + onSuccess: (result: string) => { toast.success(result); onPasswordUpdated(true); + setPassword(""); + qc.invalidateQueries({ queryKey: ["video", videoId] }); onClose(); }, - onError: (e) => { + onError: (e: Error) => { toast.error(e.message); }, }); - const removePassword = useMutation({ + const removePassword = useMutation<string, Error>({ mutationFn: () => removeVideoPassword(videoId).then((v) => { if (v.success) return v.value; throw new Error(v.error); }), - onSuccess: (result) => { + onSuccess: (result: string) => { toast.success(result); onPasswordUpdated(false); + setPassword(""); + qc.invalidateQueries({ queryKey: ["video", videoId] }); onClose(); }, - onError: (e) => { + onError: (e: Error) => { toast.error(e.message); }, });Add these supporting lines outside the shown ranges:
// import import { useMutation, useQueryClient } from "@tanstack/react-query"; // after useState(...) const qc = useQueryClient();Also applies to: 56-70
75-75
: Only react to close events and clear the password on dialog close. Prevents firing
onClose
on open toggles and ensures sensitive input is cleared.- <Dialog open={isOpen} onOpenChange={onClose}> + <Dialog + open={isOpen} + onOpenChange={(open) => { + if (!open) { + setPassword(""); + onClose(); + } + }} + >
88-93
: Add an accessible label to the password field. Placeholders aren’t labels; add an ARIA label (or visible label if
@cap/ui
supports it).<Input type="password" placeholder={hasPassword ? "Enter new password" : "Password"} value={password} onChange={(e) => setPassword(e.target.value)} + aria-label={hasPassword ? "New password" : "Password"} />
72-73
: Disable “Save” when the password is empty. Avoid accidental setting to an empty string; aligns with having a dedicated Remove action.
const pending = removePassword.isPending || updatePassword.isPending; +const canSave = password.trim().length > 0 && !pending; ... <Button size="sm" variant="dark" onClick={() => updatePassword.mutate()} spinner={pending} - disabled={pending} + disabled={!canSave} > Save </Button>Also applies to: 106-114
1-1
: File naming: prefer kebab-case per repo guidelines. Consider renaming to
password-dialog.tsx
.apps/web/lib/Notification.ts (1)
8-8
: Keep value import; consider hoisting the branded ID for reuse.Import must remain value-level to call
Video.VideoId.make
. Minor nit: compute once for clarity.import { Video } from "@cap/web-domain"; @@ - .where(eq(videos.id, Video.VideoId.make(notification.videoId))) + const videoId = Video.VideoId.make(notification.videoId); + .where(eq(videos.id, videoId))apps/web/app/(org)/dashboard/caps/page.tsx (4)
25-25
: Param nowVideo.VideoId[]
: nice. Consider typing the map with branded keys.Optional:
Record<Video.VideoId, ...>
improves type fidelity.-const sharedSpacesMap: Record<string, Array<{ id: string; name: string; organizationId: string; iconUrl: string; isOrg: boolean }>> = {}; +const sharedSpacesMap: Record<Video.VideoId, Array<{ id: string; name: string; organizationId: string; iconUrl: string; isOrg: boolean }>> = {};
139-142
: Bug:domainVerified
becomes true when the value is false-but-not-null. Use a strict boolean check.
- if (organizationData[0].domainVerified !== null) { - domainVerified = true; - } + domainVerified = organizationData[0].domainVerified === true;
228-229
: Avoid redundantmake
on already brandedvideo.id
.
videos.id
is$type<Video.VideoId>()
; reuse it directly.- id: Video.VideoId.make(video.id), + id: video.id,
239-246
: Replaceany
with safer type; prefer shared metadata type if available.If a shared
VideoMetadata
exists, use it; otherwise useRecord<string, unknown>
.- metadata: video.metadata as - | { - customCreatedAt?: string; - [key: string]: any; - } - | undefined, + metadata: video.metadata as + | { + customCreatedAt?: string; + [key: string]: unknown; + } + | undefined,apps/web/app/s/[videoId]/_components/tabs/Activity/index.tsx (1)
11-11
: Use type-only import to avoid bundling value.-import { Video } from "@cap/web-domain"; +import type { Video } from "@cap/web-domain";apps/web/app/s/[videoId]/_components/Sidebar.tsx (1)
10-10
: Use type-only import.-import { Video } from "@cap/web-domain"; +import type { Video } from "@cap/web-domain";apps/web/app/s/[videoId]/_components/PasswordOverlay.tsx (1)
9-14
: Type-only import and branded prop: good alignment with domain IDs. Minor nit: make the import type-only.
-import { Video } from "@cap/web-domain"; +import type { Video } from "@cap/web-domain";apps/web/app/embed/[videoId]/_components/PasswordOverlay.tsx (1)
9-9
: Use a type-only import for Video. Avoid bringing a runtime value into the client bundle; import the type only.
-import { Video } from "@cap/web-domain"; +import type { Video } from "@cap/web-domain";apps/web/actions/videos/edit-date.ts (3)
34-48
: Validate date parsing explicitly to avoid “Invalid time value”.
new Date(date)
can be invalid; comparing tocurrentDate
won’t catch it, andtoISOString()
will throw.- const newDate = new Date(date); + const newDate = new Date(date); + if (Number.isNaN(newDate.getTime())) { + throw new Error("Invalid date"); + } const currentDate = new Date(); // Prevent future dates if (newDate > currentDate) { throw new Error("Cannot set a date in the future"); }
57-60
: Revalidate the video page as well.Title action revalidates
/s/${videoId}
; date changes likely surface there too.// Revalidate paths to update the UI revalidatePath("/dashboard/caps"); revalidatePath("/dashboard/shared-caps"); + revalidatePath(`/s/${videoId}`);
21-28
: Remove redundant not-found check.After checking
query.length === 0
, the!video
check is unnecessary.- const video = query[0]; - if (!video) { - throw new Error("Video not found"); - } + const video = query[0];apps/web/hooks/use-transcript.ts (2)
36-39
: Align invalidation helper with domain-typed IDs.Keep the API consistent with the rest of the code that now uses
Video.VideoId
.-return (videoId: string) => { +return (videoId: Video.VideoId) => { queryClient.invalidateQueries({ queryKey: ["transcript", videoId] }); };
9-30
: Optional: avoid retry loops when transcript isn’t ready.If the server returns “not ready”, consider disabling retries for that specific error.
return useQuery({ queryKey: ["transcript", videoId], queryFn: async () => { @@ }, enabled: transcriptionStatus === "COMPLETE", staleTime: 0, refetchOnWindowFocus: false, + retry: (failureCount, error) => + // Don’t retry if backend explicitly says it's not ready yet + (error as Error)?.message !== "TRANSCRIPT_NOT_READY" && failureCount < 2, });apps/web/app/s/[videoId]/_components/tabs/Activity/Comments.tsx (3)
20-20
: Use a type-only import for Video.-import { Video } from "@cap/web-domain"; +import type { Video } from "@cap/web-domain";
67-68
: Stabilize the imperative handle dependency (nit). Use the function as a dependency to avoid stale closures if it ever captures state.
-useImperativeHandle(ref, () => ({ scrollToBottom }), []); +useImperativeHandle(ref, () => ({ scrollToBottom }), [scrollToBottom]);
211-214
: Fix typo in omitted prop name.
"placholder"
should be"placeholder"
so consumers can’t pass it viacommentInputProps
.- ComponentProps<typeof CommentInput>, - "user" | "placholder" | "buttonLabel" + ComponentProps<typeof CommentInput>, + "user" | "placeholder" | "buttonLabel"apps/web/actions/videos/get-transcript.ts (1)
49-55
: Handle missing bucket row defensively.
createBucketProvider(result.bucket)
may receiveundefined
if no bucket is linked. Return a clear error.- const bucket = await createBucketProvider(result.bucket); + const bucket = await createBucketProvider(result.bucket); + if (!bucket) { + return { success: false, message: "Storage bucket is not configured" }; + }apps/web/actions/organizations/remove-videos.ts (1)
83-111
: Optional: wrap the delete + update in a transaction. Ensures atomicity between removing shares and clearing folder assignments.
- await db() - .delete(sharedVideos) - .where( - and( - eq(sharedVideos.organizationId, organizationId), - inArray(sharedVideos.videoId, existingVideoIds), - ), - ); - // ... then update videos - if (organizationFolderIds.length > 0) { - await db() - .update(videos) - .set({ folderId: null }) - .where( - and( - inArray(videos.id, existingVideoIds), - inArray(videos.folderId, organizationFolderIds), - ), - ); - } + await db().transaction(async (tx) => { + await tx + .delete(sharedVideos) + .where( + and( + eq(sharedVideos.organizationId, organizationId), + inArray(sharedVideos.videoId, existingVideoIds), + ), + ); + if (organizationFolderIds.length > 0) { + await tx + .update(videos) + .set({ folderId: null }) + .where( + and( + inArray(videos.id, existingVideoIds), + inArray(videos.folderId, organizationFolderIds), + ), + ); + } + });apps/web/app/api/video/og/route.tsx (1)
6-7
: Add a null check before creating VideoId. The code doesn't validate that
videoId
is not null before passing it toVideo.VideoId.make()
. If the query parameter is missing, this could cause an error.Apply this fix to add null validation:
export async function GET(req: NextRequest) { const videoId = req.nextUrl.searchParams.get("videoId") as string; + if (!videoId) { + return new Response("Video ID is required", { status: 400 }); + } return generateVideoOgImage(Video.VideoId.make(videoId)); }apps/web/app/(org)/dashboard/spaces/[spaceId]/components/VideoCard.tsx (5)
9-13
: Decouple UI from a sibling component’s type; move VideoData to a shared module. Importing a type from AddVideosDialogBase creates tight coupling and risks circular deps. Export VideoData from a shared types module (e.g., apps/web/app/(org)/dashboard/spaces/[spaceId]/types.ts or a domain package) and import it here.
21-23
: Harden date parsing for customCreatedAt. `new Date(string)` can yield Invalid Date if the string isn’t ISO. Either guarantee ISO upstream or add a fallback when parsing fails.
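A minimal sketch of such a fallback, assuming the component can fall back to the record’s createdAt (the helper name is illustrative, not from the component as written):

```typescript
// Hypothetical helper: prefer customCreatedAt when it parses to a real
// date, otherwise fall back to the record's createdAt.
export function effectiveDate(
  customCreatedAt: string | undefined,
  createdAt: Date,
): Date {
  if (customCreatedAt) {
    const parsed = new Date(customCreatedAt);
    // Invalid Date has a NaN timestamp, so this catches non-ISO garbage.
    if (!Number.isNaN(parsed.getTime())) return parsed;
  }
  return createdAt;
}
```

The card would then format `effectiveDate(video.metadata?.customCreatedAt, video.createdAt)` instead of trusting the string.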
27-41
: Make the clickable card accessible (keyboard + semantics). Add button semantics and keyboard handlers; expose pressed state for selection.
Apply this diff:
- <div - onClick={onToggle} + <div + onClick={onToggle} + role="button" + tabIndex={0} + aria-pressed={isSelected} + onKeyDown={(e) => { + if (e.key === "Enter" || e.key === " ") { + e.preventDefault(); + onToggle(); + } + }} className={clsx( - "flex relative flex-col p-3 w-full min-h-fit rounded-xl border transition-all duration-200 group", + "flex relative flex-col p-3 w-full min-h-fit rounded-xl border transition-all duration-200 group focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-gray-8 focus-visible:ring-offset-1", className,
4-4
: Drop moment; use Intl to reduce bundle size. Prefer Intl.DateTimeFormat for formatting.
-import moment from "moment"; +// moment removed - {moment(effectiveDate).format("MMM D, YYYY")} + {new Intl.DateTimeFormat("en-US", { + month: "short", + day: "numeric", + year: "numeric", + }).format(effectiveDate)}Also applies to: 123-124
1-1
: Rename file to kebab-case per repo guidelines. VideoCard.tsx → video-card.tsx.
apps/web/actions/videos/get-og-image.tsx (2)
39-46
: Guard missing bucket early and align the error message. Avoid constructing a bucket provider with null/undefined and keep the “not found/private” response unified.
- const { video } = videoData; + const { video, bucket } = videoData; - if (!video || video.public === false) { + if (!video || video.public === false || !bucket) { return new ImageResponse( ( <div style={{ width: "100%", height: "100%", display: "flex", alignItems: "center", justifyContent: "center", background: "radial-gradient(90.01% 80.01% at 53.53% 49.99%,#d3e5ff 30.65%,#4785ff 88.48%,#fff 100%)", }} > <h1 style={{ fontSize: "40px" }}>Video or bucket not found</h1> </div> ), { width: 1200, height: 630 }, ); } - const bucket = await createBucketProvider(videoData.bucket); + const bucketProvider = await createBucketProvider(bucket);And use bucketProvider below.
Also applies to: 41-63
68-74
: Avoid implicit any; annotate screenshotUrl. Type inference from null yields any; annotate to keep strict TS.
- let screenshotUrl = null; + let screenshotUrl: string | null = null;apps/web/app/api/desktop/[...route]/video.ts (1)
217-227
: Harden S3 cleanup; don’t fail the request on storage errors. Wrap list/delete in try/catch and return success after the DB delete to avoid a 500 and orphan-recovery regressions.
- const listedObjects = await bucket.listObjects({ - prefix: `${user.id}/${videoId}/`, - }); - if (listedObjects.Contents?.length) - await bucket.deleteObjects( - listedObjects.Contents.map((content: any) => ({ Key: content.Key })), - ); + try { + const listedObjects = await bucket.listObjects({ + prefix: `${user.id}/${videoId}/`, + }); + if (listedObjects.Contents?.length) { + await bucket.deleteObjects( + listedObjects.Contents.map((content: any) => ({ Key: content.Key })), + ); + } + } catch (err) { + console.error("S3 cleanup failed for video:", videoId, err); + // Intentionally continue; DB delete already committed. + }apps/web/app/(org)/dashboard/spaces/[spaceId]/components/VirtualizedVideoGrid.tsx (2)
88-93
: Optional: use Sets for O(1) membership checks. Avoid repeated array includes during rendering.
- <VideoCard - video={video} - isSelected={selectedVideos.includes(video.id)} - onToggle={() => handleVideoToggle(video.id)} - isAlreadyInEntity={entityVideoIds?.includes(video.id) || false} - /> + <VideoCard + video={video} + isSelected={selectedSet.has(video.id)} + onToggle={() => handleVideoToggle(video.id)} + isAlreadyInEntity={entitySet.has(video.id)} + />Add above return:
const selectedSet = useMemo(() => new Set(selectedVideos), [selectedVideos]); const entitySet = useMemo(() => new Set(entityVideoIds ?? []), [entityVideoIds]);
1-1
: Rename file to kebab-case per repo guidelines.VirtualizedVideoGrid.tsx → virtualized-video-grid.tsx.
apps/web/app/(org)/dashboard/spaces/[spaceId]/components/AddVideosDialogBase.tsx (4)
32-35
: Confirm mutations are Server Actions and update caches precisely.Guidelines: mutations should call Server Actions and do precise cache updates.
Consider updating the entity-video-ids cache with setQueryData instead of invalidating.
38-49
: Unify VideoData type across the app.There’s a similar VideoData in apps/web/app/(org)/dashboard/caps/Caps.tsx. Consolidate into a single shared export to avoid drift.
125-134
: Prefer precise cache updates over invalidation.Update the entity-video-ids cache locally; avoid round-trips.
- // Invalidate both queries to ensure UI updates - await Promise.all([ - queryClient.invalidateQueries({ queryKey: ["user-videos"] }), - queryClient.invalidateQueries({ queryKey: ["entity-video-ids", entityId] }), - ]); + // Precisely update entity-video-ids cache + queryClient.setQueryData<Video.VideoId[] | undefined>( + ["entity-video-ids", entityId], + (prev) => { + const current = new Set(prev ?? []); + for (const id of toAdd) current.add(id); + for (const id of toRemove) current.delete(id); + return Array.from(current); + }, + );If user-videos doesn’t change, skip invalidation; otherwise update it similarly.
1-1
: Rename file to kebab-case per repo guidelines.AddVideosDialogBase.tsx → add-videos-dialog-base.tsx.
packages/web-domain/src/Loom.ts (2)
29-31
: Use the branded Video.VideoId for strong typing. Align with the repo’s typed IDs to prevent accidental cross‑ID mixups.
- id: Schema.String, + id: Video.VideoId, name: Schema.String, downloadUrl: Schema.String,
6-9
: Export LoomApiError for backend reuse. Prevents re‑declaring the same tagged error in multiple packages.
-class LoomApiError extends Schema.TaggedError<LoomApiError>("LoomApiError")( +export class LoomApiError extends Schema.TaggedError<LoomApiError>("LoomApiError")( "LoomApiError", { cause: Schema.Unknown }, ) {}packages/database/auth/auth-options.ts (2)
98-99
: Add explicit extension to dynamic import (ESM + exports policy).Stay consistent with explicit extensions to avoid resolution issues.
- const { OTPEmail } = await import("../emails/otp-email"); + const { OTPEmail } = await import("../emails/otp-email.tsx");
138-141
: Guard next/headers cookies() in NextAuth signIn event
authOptions is used from apps/web/app/api/auth/[...nextauth]/route.ts (route handler), so cookies() will usually be available — still guard because events can run outside a request-scoped context on some runtimes/adapters.
- Location: packages/database/auth/auth-options.ts — signIn handler (cookies().get at ~lines 139–140; cookies().delete at ~156–158).
- Action: wrap the cookies() calls in try/catch and skip cookie ops if it throws, or read the cookie values at the route layer and pass them into the signIn flow instead of calling next/headers inside the event.
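A minimal sketch of the try/catch guard, using a stand-in `cookies()` that simulates Next.js throwing outside a request scope (the cookie name and helper are illustrative, not the real next/headers API):

```typescript
// Stand-in for next/headers cookies(): Next.js throws when there is no
// request-scoped context, which is what we simulate here.
function cookies(): Map<string, string> {
	throw new Error("cookies() called outside a request scope");
}

// Guarded read: skip cookie ops instead of crashing the signIn event.
function readSignInCookie(name: string): string | undefined {
	try {
		return cookies().get(name);
	} catch {
		// No request context (e.g. adapter-driven event): skip cookie ops.
		return undefined;
	}
}

console.log(readSignInCookie("dub_id")); // undefined
```

The same shape applies to the delete path: wrap `cookies().delete(...)` and treat a throw as "no cookie to clear".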
packages/database/package.json (1)
5-7
: Types/exports consistency for ESM TS sources.
Pointing "types" to .ts and exporting TS entrypoints can confuse non-Next tooling. Consider building d.ts or using conditional exports.
```diff
-	"types": "./index.ts",
-	"main": "index.js",
+	"types": "./index.ts",
+	"main": "./index.ts",
```

Or produce build output and set main/types to built files. Keep private monorepo context in mind.
Also applies to: 55-65
apps/web/app/api/loom/[...route]/route.ts (2)
20-25
: Extend payload schema to carry Loom user/org identifiers (and optionally name).
Matches the workflow payload and removes hard-coded values.
```diff
 .setPayload(
 	Schema.Struct({
 		loom: Schema.Struct({
 			downloadUrl: Schema.String,
-			videoId: Schema.String,
+			videoId: Schema.String,
+			userId: Schema.String,
+			orgId: Schema.String,
+			// Optional if backend will fetch it:
+			// videoName: Schema.optional(Schema.String),
 		}),
 	}),
 )
```
43-64
: Return a response and fix error mapping to HTTP.
- Currently the handler yields the workflow but returns nothing (implicit 204). Consider returning a simple ack with an invocation key/idem key.
- Error handler constructs an error object instead of failing; wrap with Effect.fail.
```diff
 handlers.handle(
 	"loom",
 	Effect.fn(
 		function* ({ payload }) {
 			const { workflows } = yield* Workflows.HttpClient;
 			const user = yield* CurrentUser;
-			yield* workflows.LoomImportVideo({
+			const result = yield* workflows.LoomImportVideo({
 				payload: {
 					cap: {
 						userId: user.id,
 						orgId: user.activeOrgId,
 					},
 					loom: {
-						userId: "loomVideoId123",
-						orgId: "loomOrgId123",
+						userId: payload.loom.userId,
+						orgId: payload.loom.orgId,
 						video: {
 							id: payload.loom.videoId,
-							name: "loom video name",
+							name: "loom video",
 							downloadUrl: payload.loom.downloadUrl,
 							width: Option.none(),
 							height: Option.none(),
 							durationSecs: Option.none(),
 							fps: Option.none(),
 						},
 					},
 				},
 			});
+			return { ok: true };
 		},
 		(e) =>
 			e.pipe(
 				Effect.tapDefect(Effect.log),
-				Effect.catchAll(() => new HttpApiError.InternalServerError()),
+				Effect.catchAll(() =>
+					Effect.fail(new HttpApiError.InternalServerError()),
+				),
 			),
 	),
 ),
```
packages/database/schema.ts (3)
245-246
: Make videos.orgId non-null (or add a clear migration plan).
New features rely on org scoping; keeping orgId nullable risks orphaned rows and complicates joins.
```diff
-	// TODO: make this non-null
-	orgId: nanoIdNullable("orgId"),
+	orgId: nanoId("orgId").notNull(),
```

If immediate migration isn’t feasible, add an index and enforce NOT NULL at write boundaries now.
281-286
: Add an index on videos.orgId for org-scoped queries.
Common query path; will help list and import flows.
```diff
 (table) => [
 	index("id_idx").on(table.id),
 	index("owner_id_idx").on(table.ownerId),
+	index("org_id_idx").on(table.orgId),
 	index("is_public_idx").on(table.public),
 	index("folder_id_idx").on(table.folderId),
 ],
```
647-658
: Rethink imported_videos PK vs surrogate id.
You have both an id column and a composite PK on (orgId, source, sourceId). Consider either:
- Drop id and keep the composite PK, or
- Keep id as the PK and add a unique index on (orgId, source, sourceId).
```diff
 export const importedVideos = mysqlTable(
 	"imported_videos",
 	{
-		id: nanoId("id").notNull(),
+		// Either drop id (composite PK only) or make it the PK.
+		// id: nanoId("id").notNull().primaryKey(),
 		orgId: nanoIdNullable("orgId").notNull(),
 		source: varchar("source", { length: 255, enum: ["loom"] }).notNull(),
 		sourceId: varchar("source_id", { length: 255 }).notNull(),
 	},
-	(table) => [
-		primaryKey({ columns: [table.orgId, table.source, table.sourceId] }),
-	],
+	(table) => [
+		primaryKey({ columns: [table.orgId, table.source, table.sourceId] }),
+		// If keeping id: uniqueIndex("imported_videos_id_idx").on(table.id),
+	],
 );
```
apps/web/lib/server.ts (1)
81-89
: Make Workflow HTTP base URL configurable.
Hard-coding `http://localhost:42169` couples prod builds to dev infra. Read from env (and/or derive from `REMOTE_WORKFLOW_URL` by stripping `/rpc`) with localhost as fallback. Example:
```diff
-	const a = yield* HttpApiClient.make(Workflows.Api, {
-		baseUrl: "http://localhost:42169",
-	});
+	const remote = serverEnv().REMOTE_WORKFLOW_URL;
+	const baseUrl = remote
+		? new URL(remote).origin // assumes remote RPC at {origin}/rpc
+		: "http://localhost:42169";
+	const a = yield* HttpApiClient.make(Workflows.Api, { baseUrl });
```
4-4
: Include narrowed to src is good; add .d.ts to avoid missing ambient types.
If you have any ambient declarations (e.g., globals or module augments), they’ll be skipped. Safer to include .d.ts.
Apply this diff:
```diff
-	"include": ["src/**/*.ts", "src/**/*.tsx"],
+	"include": ["src/**/*.ts", "src/**/*.tsx", "src/**/*.d.ts"],
```
5-11
: Set rootDir to avoid dist/src nesting for declarations.
With composite + emitDeclarationOnly, TS will mirror the source tree under dist; setting rootDir avoids an extra src segment if undesired.
Apply this diff:
```diff
 "compilerOptions": {
 	"moduleResolution": "bundler",
 	"composite": true,
 	"outDir": "dist",
+	"rootDir": "src",
 	"noEmit": false,
 	"emitDeclarationOnly": true
 }
```
```diff
 interface VirtualizedVideoGridProps {
-	videos: Video[];
+	videos: VideoData[];
 	selectedVideos: string[];
-	handleVideoToggle: (id: string) => void;
-	entityVideoIds: string[];
+	handleVideoToggle: (id: Video.VideoId) => void;
+	entityVideoIds: Video.VideoId[];
 	height?: number;
```
Type mismatch: selectedVideos must use Video.VideoId[].
Props still declare string[]. This breaks assignability with the new branded IDs and causes includes() type friction.
```diff
 interface VirtualizedVideoGridProps {
-	videos: VideoData[];
-	selectedVideos: string[];
-	handleVideoToggle: (id: Video.VideoId) => void;
-	entityVideoIds: Video.VideoId[];
+	videos: VideoData[];
+	selectedVideos: Video.VideoId[];
+	handleVideoToggle: (id: Video.VideoId) => void;
+	entityVideoIds: Video.VideoId[];
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
interface VirtualizedVideoGridProps {
	videos: VideoData[];
	selectedVideos: Video.VideoId[];
	handleVideoToggle: (id: Video.VideoId) => void;
	entityVideoIds: Video.VideoId[];
	height?: number;
```
🤖 Prompt for AI Agents
In
apps/web/app/(org)/dashboard/spaces/[spaceId]/components/VirtualizedVideoGrid.tsx
around lines 7–12, the selectedVideos prop is declared as string[] but must be
the branded ID type Video.VideoId[]; change the prop type to Video.VideoId[] and
update any local variables, default values, or callers to pass/retain
Video.VideoId (not plain string), and fix any includes()/indexOf() usages to
accept Video.VideoId so the branded-ID comparisons compile.
```diff
 .from(videos)
 .leftJoin(s3Buckets, eq(videos.bucket, s3Buckets.id))
-.where(eq(videos.id, videoId));
+.where(eq(videos.id, Video.VideoId.make(videoId)));
```
Validate/guard malformed videoId before querying.
If make can throw or doesn’t validate, this can yield a 500 or unintended DB access. Return 400 on invalid IDs.
Apply within-range change and add a guard above:
```diff
-	.where(eq(videos.id, Video.VideoId.make(videoId)));
+	.where(eq(videos.id, typedVideoId));
```
Add above (outside this range):
```ts
let typedVideoId: ReturnType<typeof Video.VideoId.make>;
try {
	typedVideoId = Video.VideoId.make(videoId);
} catch {
	return new Response(JSON.stringify({ error: true, message: "Invalid videoId" }), {
		status: 400,
		headers: getHeaders(origin),
	});
}
```
🤖 Prompt for AI Agents
In apps/web/app/api/thumbnail/route.ts around line 39, the code calls
Video.VideoId.make(videoId) directly in the query which can throw or accept
malformed IDs; add a pre-query guard that attempts to create a typedVideoId with
Video.VideoId.make inside a try/catch, and on failure immediately return a 400
Response with JSON { error: true, message: "Invalid videoId" } and the same
getHeaders(origin) headers; then use the validated typedVideoId variable in the
.where(eq(videos.id, Video.VideoId.make(videoId))) call (replace the make call
with the typedVideoId) so the DB is only queried with validated IDs.
```ts
	videoId: Video.VideoId;
	aiData?: {
```
🛠️ Refactor suggestion
🧩 Analysis chain
Branded `videoId` prop: OK; verify nested components accept it.
🏁 Script executed:
```bash
#!/bin/bash
# Ensure nested components don't still expect string IDs
rg -nP --type=ts --type=tsx -C2 '\bvideoId\s*:\s*string\b' apps/web/app/s
```
Length of output: 27
🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
ROOT="apps/web/app/s"
if [ ! -d "$ROOT" ]; then
  echo "Directory '$ROOT' not found; aborting."
  exit 0
fi
echo "=== explicit 'videoId: string' declarations ==="
rg -nP -C2 '\bvideoId\s*:\s*string\b' "$ROOT" || true
echo
echo "=== inline prop types like '{ videoId: string }' ==="
rg -nP -C2 '\{[^}]*\bvideoId\s*:\s*string[^}]*\}' "$ROOT" || true
echo
echo "=== any 'videoId' occurrences (context) ==="
rg -n -C2 '\bvideoId\b' "$ROOT" || true
echo
echo "=== usages of 'Video.VideoId' type ==="
rg -nP -C2 '\bVideo\.VideoId\b' "$ROOT" || true
```
Length of output: 29901
videoId branded — update child prop types to Video.VideoId.
Sidebar now uses videoId: Video.VideoId but these nested components still declare videoId as string — change their prop types to Video.VideoId:
- apps/web/app/s/[videoId]/_components/tabs/Summary.tsx — interface SummaryProps: videoId: string (line 14)
- apps/web/app/s/[videoId]/_components/tabs/Activity/Analytics.tsx — props: videoId: string (line 9)
🤖 Prompt for AI Agents
In apps/web/app/s/[videoId]/_components/Sidebar.tsx around lines 33-34 and in
the child components listed, the parent now passes videoId as Video.VideoId but
the children still type it as string; update the child prop types to use
Video.VideoId instead of string. Specifically, in
apps/web/app/s/[videoId]/_components/tabs/Summary.tsx (interface SummaryProps at
~line 14) change videoId: string to videoId: Video.VideoId and import or
reference the Video namespace/type as used by the project; and in
apps/web/app/s/[videoId]/_components/tabs/Activity/Analytics.tsx (props at ~line
9) change props typing from videoId: string to videoId: Video.VideoId and add
any necessary import for Video to match the parent’s branded type.
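For context, a self-contained sketch of how a branded ID behaves — this local `VideoId` stands in for the real `Video.VideoId`, and the brand mechanics shown here are illustrative:

```typescript
// A branded ID is a plain string at runtime but a distinct compile-time type,
// so a prop typed as string[] will not accept it (and vice versa).
type VideoId = string & { readonly __brand: "VideoId" };
const VideoId = {
	make: (raw: string): VideoId => raw as VideoId,
};

// Child props typed against the brand, matching the parent:
interface SummaryProps {
	videoId: VideoId;
}

const id = VideoId.make("abc123");
const selected: VideoId[] = [id];
const props: SummaryProps = { videoId: id };

// includes() compiles cleanly because both sides share the brand.
console.log(selected.includes(props.videoId)); // true
```

Once the children declare `Video.VideoId`, the `includes()`/`indexOf()` friction the review mentions disappears, because both the array element type and the argument share the brand.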
apps/web/lib/server.ts
Outdated
```ts
const WorkflowRpcClient = Layer.scoped(
	Workflows.RpcClient,
	Effect.gen(function* () {
		const envs = Option.zipWith(
			Option.fromNullable(serverEnv().REMOTE_WORKFLOW_URL),
			Option.fromNullable(serverEnv().REMOTE_WORKFLOW_SECRET),
			(l, r) => [l, r] as const,
		);

		return yield* Option.match(envs, {
			onNone: () =>
				RpcClient.make(Workflows.RpcGroup).pipe(
					Effect.provide(
						RpcClient.layerProtocolHttp({
							url: "http://localhost:42169/rpc",
						}).pipe(Layer.provide(Workflows.RpcSerialization)),
					),
				),
			onSome: ([url, secret]) =>
				RpcClient.make(Workflows.RpcGroup).pipe(
					Effect.provide(
						RpcClient.layerProtocolHttp({
							url,
							transformClient: HttpClient.mapRequest(
								HttpClientRequest.setHeader("Authorization", `Token ${secret}`),
							),
						}).pipe(Layer.provide(Workflows.RpcSerialization)),
					),
				),
		});
	}),
);
```
Do not send secrets over plain HTTP; enforce HTTPS for remote workflows.
If `REMOTE_WORKFLOW_URL` is `http://…`, the Authorization token is exposed. Enforce HTTPS (except in dev) or allowlist hosts.
Apply:
```diff
 onSome: ([url, secret]) =>
-	RpcClient.make(Workflows.RpcGroup).pipe(
+	RpcClient.make(Workflows.RpcGroup).pipe(
 		Effect.provide(
 			RpcClient.layerProtocolHttp({
-				url,
+				url,
 				transformClient: HttpClient.mapRequest(
 					HttpClientRequest.setHeader("Authorization", `Token ${secret}`),
 				),
 			}).pipe(Layer.provide(Workflows.RpcSerialization)),
 		),
 	),
```
Add just before constructing the client:
```diff
+	Effect.sync(() => {
+		const u = new URL(url);
+		const isHttps = u.protocol === "https:";
+		const isDev = serverEnv().NODE_ENV === "development";
+		if (!isHttps && !isDev) {
+			throw new Error(`REMOTE_WORKFLOW_URL must use https in production: ${url}`);
+		}
+	}),
```
📝 Committable suggestion
```ts
const WorkflowRpcClient = Layer.scoped(
	Workflows.RpcClient,
	Effect.gen(function* () {
		const envs = Option.zipWith(
			Option.fromNullable(serverEnv().REMOTE_WORKFLOW_URL),
			Option.fromNullable(serverEnv().REMOTE_WORKFLOW_SECRET),
			(l, r) => [l, r] as const,
		);

		return yield* Option.match(envs, {
			onNone: () =>
				RpcClient.make(Workflows.RpcGroup).pipe(
					Effect.provide(
						RpcClient.layerProtocolHttp({
							url: "http://localhost:42169/rpc",
						}).pipe(Layer.provide(Workflows.RpcSerialization)),
					),
				),
			onSome: ([url, secret]) =>
				Effect.sync(() => {
					const u = new URL(url);
					const isHttps = u.protocol === "https:";
					const isDev = serverEnv().NODE_ENV === "development";
					if (!isHttps && !isDev) {
						throw new Error(`REMOTE_WORKFLOW_URL must use https in production: ${url}`);
					}
				}).pipe(
					Effect.flatMap(() =>
						RpcClient.make(Workflows.RpcGroup).pipe(
							Effect.provide(
								RpcClient.layerProtocolHttp({
									url,
									transformClient: HttpClient.mapRequest(
										HttpClientRequest.setHeader("Authorization", `Token ${secret}`),
									),
								}).pipe(Layer.provide(Workflows.RpcSerialization)),
							),
						),
					),
				),
		});
	}),
);
```
🤖 Prompt for AI Agents
In apps/web/lib/server.ts around lines 48–79, the code builds an RPC HTTP client
using REMOTE_WORKFLOW_URL and REMOTE_WORKFLOW_SECRET but will send the secret
over plain HTTP if the URL starts with "http://"; before constructing the client
validate the URL: parse REMOTE_WORKFLOW_URL and if its scheme is "http" and
NODE_ENV !== "development" (or if the host is not in an explicit allowlist)
reject/throw or refuse to use remote credentials (fail fast or fall back to
local client), otherwise proceed; implement an allowlist option for trusted
hosts if required, and ensure the Authorization header is only set when the URL
is HTTPS (or when running in development/allowed-host), returning an error/log
and not attaching the secret on insecure endpoints.
```ts
const resp = yield* http
	.get(payload.loom.video.downloadUrl)
	.pipe(Effect.catchAll((cause) => new LoomApiError({ cause })));
const contentLength = Headers.get(
	resp.headers,
	"content-length",
).pipe(
	Option.map((v) => Number(v)),
	Option.getOrUndefined,
);
yield* Effect.log(`Downloading ${contentLength} bytes`);
```
|
Validate HTTP status and add a timeout to avoid uploading error pages or hanging.
Guard non-2xx responses and prevent infinite waits on slow/broken links.
Apply:
```diff
-const resp = yield* http
-	.get(payload.loom.video.downloadUrl)
-	.pipe(Effect.catchAll((cause) => Effect.fail(new LoomApiError({ cause }))));
+const resp = yield* http
+	.get(payload.loom.video.downloadUrl)
+	.pipe(
+		Effect.timeout("60 seconds"),
+		Effect.catchAll((cause) => Effect.fail(new LoomApiError({ cause }))),
+	);
+if (resp.status < 200 || resp.status >= 300) {
+	yield* Effect.fail(
+		new LoomApiError({ cause: new Error(`HTTP ${resp.status}`) }),
+	);
+}
```
Optional: propagate `Content-Type` to S3 using `resp.headers.get("content-type")`.
Committable suggestion skipped: line range outside the PR's diff.
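The content-type propagation can be sketched in isolation; the `Map` here is a stand-in for the real response-headers structure, and the fallback value is the usual S3 default:

```typescript
// Forward the upstream response's Content-Type to the S3 upload instead of
// letting it default to application/octet-stream.
function pickContentType(headers: Map<string, string>): string {
	return headers.get("content-type") ?? "application/octet-stream";
}

const upstreamHeaders = new Map([["content-type", "video/mp4"]]);
console.log(pickContentType(upstreamHeaders)); // video/mp4
console.log(pickContentType(new Map())); // application/octet-stream
```

In the real workflow this value would be threaded into the S3 put call alongside the streamed body and content length.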
```ts
const promises: MySqlInsertBase<any, any, any>[] = [
	db.insert(Db.videos).values([
```
Incorrect insert batching type; use Promises, not `MySqlInsertBase`.
Drizzle `insert(...).values(...)` returns a Promise (driver-bound). Typing as `MySqlInsertBase` is wrong and fragile.
Apply:
```diff
-const promises: MySqlInsertBase<any, any, any>[] = [
+const promises: Promise<unknown>[] = [
 	db.insert(Db.videos).values([
 		{
 			...data,
 			id,
 			orgId: Option.getOrNull(data.orgId ?? Option.none()),
 			bucket: Option.getOrNull(data.bucketId ?? Option.none()),
 			metadata: Option.getOrNull(data.metadata ?? Option.none()),
 			transcriptionStatus: Option.getOrNull(
 				data.transcriptionStatus ?? Option.none(),
 			),
 			folderId: Option.getOrNull(data.folderId ?? Option.none()),
 			width: Option.getOrNull(data.width ?? Option.none()),
 			height: Option.getOrNull(data.height ?? Option.none()),
 			duration: Option.getOrNull(data.duration ?? Option.none()),
 		},
 	]),
 ];
```
Also applies to: 87-88
🤖 Prompt for AI Agents
In packages/web-backend/src/Videos/VideosRepo.ts around lines 56-57 (and
similarly lines 87-88), the local variable is wrongly typed as MySqlInsertBase;
Drizzle's insert(...).values(...) returns a driver-bound Promise, so change the
array type to Promise<any>[] (or a more specific Promise<InsertResultType>[] if
you have the Insert result type) and ensure you await them with Promise.all;
update both occurrences to use Promise<> typing and keep the existing
db.insert(...).values(...) calls unchanged.
```ts
export const Workflows = [LoomImportVideo] as const;
export const RpcGroup = WorkflowProxy.toRpcGroup(Workflows);
export const RpcSerialization = Rpc.RpcSerialization.layerJson;

export class RpcClient extends Context.Tag("Workflows/RpcClient")<
	RpcClient,
	Rpc.RpcClient.RpcClient<
		Rpc.RpcGroup.Rpcs<typeof RpcGroup>,
		Rpc.RpcClientError.RpcClientError
	>
>() {}

const ApiGroup = WorkflowProxy.toHttpApiGroup("workflows", Workflows);
export const Api = HttpApi.make("workflow-api").add(ApiGroup);

export class HttpClient extends Context.Tag("Workflows/HttpClient")<
	HttpClient,
	HttpApiClient.Client<typeof ApiGroup, never, never>
>() {}

export const WorkflowsLayer = Layer.mergeAll(LoomImportVideoLive);

export const WorkflowsRpcLayer =
	WorkflowProxyServer.layerRpcHandlers(Workflows);
```
🧩 Analysis chain
Workflows wiring looks good; please verify consumer imports after the namespace export.
No issues spotted in Rpc/HTTP groups, tags, or layers. Since this file is now consumed via the `Workflows` namespace, ensure all call sites import from the namespace (not named re-exports).
Run to confirm no stale imports remain:
🏁 Script executed:
```bash
#!/bin/bash
# Find legacy imports of Workflows members not using the namespace
rg -nP -C2 $'from\\s+[\'"]@cap/web-backend[\'"];?\\s*$|from\\s+[\'"]@cap/web-backend/Workflows' \
  | rg -nP -v 'import\\s+\\*\\s+as\\s+Workflows'
```
Length of output: 3264
Fix: Replace named 'Workflows' imports with namespace imports
Named imports of Workflows were found — convert them to namespace imports (e.g., import * as Workflows from '@cap/web-backend' or your package’s Workflows entrypoint).
Affected call sites:
- apps/workflow-runner/src/index.ts — `import { Database, S3Buckets, Videos, Workflows } from "@cap/web-backend";`
- apps/web/lib/server.ts — import block includes `Workflows` from `"@cap/web-backend"` (lines ~11–13)
- apps/web/app/api/loom/[...route]/route.ts — `import { Workflows } from "@cap/web-backend";`
```ts
const LoomImportVideoError = Schema.Union(
	// DatabaseError,
	Video.NotFoundError,
	// S3Error,
	LoomApiError,
);
```
Fix error-union drift with backend to avoid decode/runtime mismatches.
Domain union excludes DatabaseError and S3Error but backend includes them. If backend throws those, clients typed against this domain schema will fail to decode. Single-source the workflow schema and error types.
- Option A (preferred): keep the workflow (and error types) only in web-domain; in backend, import the domain’s LoomImportVideo and use .toLayer for the live impl.
- Option B: extend the domain union to include infra errors and remove the duplicate backend union.
Proposed minimal domain-side change (also export LoomApiError for reuse):
```diff
-class LoomApiError extends Schema.TaggedError<LoomApiError>("LoomApiError")(
+export class LoomApiError extends Schema.TaggedError<LoomApiError>("LoomApiError")(
 	"LoomApiError",
 	{ cause: Schema.Unknown },
 ) {}
```
```diff
-const LoomImportVideoError = Schema.Union(
-	// DatabaseError,
-	Video.NotFoundError,
-	// S3Error,
-	LoomApiError,
-);
+export const LoomImportVideoError = Schema.Union(
+	Video.NotFoundError,
+	LoomApiError,
+	// TODO: add DatabaseError, S3Error here once moved to domain (or map them at the boundary)
+);
```
And backend-side (outside this file) adopt the domain workflow:
```diff
-// packages/web-backend/src/Loom/ImportVideo.ts
-export const LoomImportVideo = Workflow.make({ /* duplicated schema */ })
+import { LoomImportVideo, LoomImportVideoError, LoomApiError } from "@cap/web-domain/Loom.ts";
 // keep only the Live layer/impl wiring here
```
📝 Committable suggestion
```ts
export const LoomImportVideoError = Schema.Union(
	Video.NotFoundError,
	LoomApiError,
	// TODO: add DatabaseError, S3Error here once moved to domain (or map them at the boundary)
);
```
🤖 Prompt for AI Agents
In packages/web-domain/src/Loom.ts around lines 11–16, the LoomImportVideoError
union omits DatabaseError and S3Error while the backend includes them, causing
decoder/runtime mismatches; resolve by single-sourcing the workflow: export the
LoomImportVideo workflow and LoomApiError from this domain file and either
(preferred) have the backend import the domain's LoomImportVideo and call
.toLayer for the live implementation, or if you must keep the backend union,
extend this domain union to include DatabaseError and S3Error so domain types
match backend errors.
```ts
export const LoomImportVideo = Workflow.make({
	name: "LoomImportVideo",
	payload: {
		cap: Schema.Struct({
			userId: Schema.String,
			orgId: Schema.String,
		}),
		loom: Schema.Struct({
			userId: Schema.String,
			orgId: Schema.String,
			video: Schema.Struct({
				id: Schema.String,
				name: Schema.String,
				downloadUrl: Schema.String,
				width: Schema.OptionFromNullOr(Schema.Number),
				height: Schema.OptionFromNullOr(Schema.Number),
				fps: Schema.OptionFromNullOr(Schema.Number),
				durationSecs: Schema.OptionFromNullOr(Schema.Number),
			}),
		}),
		attempt: Schema.optional(Schema.Number),
	},
	error: LoomImportVideoError,
	idempotencyKey: (p) =>
		`${p.cap.userId}-${p.loom.orgId}-${p.loom.video.id}-${p.attempt ?? 0}`,
});
```
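The idempotency key can be exercised in isolation; this standalone copy uses the same formula as the workflow definition above (the payload type is trimmed to just the fields the key reads):

```typescript
// Same formula as the workflow's idempotencyKey: one key per
// (cap user, loom org, loom video, attempt) combination.
const idempotencyKey = (p: {
	cap: { userId: string };
	loom: { orgId: string; video: { id: string } };
	attempt?: number;
}) => `${p.cap.userId}-${p.loom.orgId}-${p.loom.video.id}-${p.attempt ?? 0}`;

const base = { cap: { userId: "u1" }, loom: { orgId: "o1", video: { id: "v1" } } };

console.log(idempotencyKey(base)); // u1-o1-v1-0

// Bumping attempt yields a fresh key, allowing a deliberate re-run.
console.log(idempotencyKey({ ...base, attempt: 1 })); // u1-o1-v1-1
```

Because `attempt` defaults to 0, repeated submissions of the same payload collapse onto one workflow execution unless the caller explicitly increments it.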
🛠️ Refactor suggestion
🧩 Analysis chain
Avoid duplicating the Workflow schema across packages.
Having “the same” workflow defined in both domain and backend is a footgun. Keep schema here; implement only the Live layer in backend.
Run to confirm duplication:
🏁 Script executed:
```bash
#!/bin/bash
rg -nP -C2 'export\s+const\s+LoomImportVideo\s*=\s*Workflow\.make' -g '!**/node_modules/**'
```
Length of output: 148
🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
echo "PWD: $(pwd)"
echo "Checking for the referenced file..."
if [ -f packages/web-domain/src/Loom.ts ]; then
  echo "FOUND: packages/web-domain/src/Loom.ts"
  sed -n '1,200p' packages/web-domain/src/Loom.ts
else
  echo "packages/web-domain/src/Loom.ts not found"
fi
echo "Searching repository for LoomImportVideo and Workflow.make (including hidden, excluding node_modules)..."
rg -nP -C2 --hidden -uu --glob '!**/node_modules/**' 'export\s+const\s+LoomImportVideo\s*=\s*Workflow\.make' || true
rg -nP -C2 --hidden -uu --glob '!**/node_modules/**' 'LoomImportVideo' || true
rg -nP -C2 --hidden -uu --glob '!**/node_modules/**' 'Workflow\.make' || true
```
Length of output: 1759
🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
echo "PWD: $(pwd)"
echo "--- tracked files count ---"
git ls-files | wc -l
echo "--- git grep -n 'LoomImportVideo' ---"
git grep -n "LoomImportVideo" || true
echo "--- git grep -n 'Workflow.make' ---"
git grep -n "Workflow.make" || true
echo "--- git grep -n '\\bLoom\\b' (broad) ---"
git grep -n "\bLoom\b" || true
echo "--- packages/ listing ---"
ls -la packages || true
```
Length of output: 38583
Single-source the LoomImportVideo Workflow; keep schema in web-domain and expose only the Live layer in backend.
Both packages/web-domain/src/Loom.ts and packages/web-backend/src/Loom/ImportVideo.ts define LoomImportVideo. Keep the canonical Workflow/Schema in packages/web-domain/src/Loom.ts; remove the duplicate Workflow.make from packages/web-backend/src/Loom/ImportVideo.ts and instead import LoomImportVideo and export LoomImportVideoLive = LoomImportVideo.toLayer(...). Update packages/web-backend/src/Workflows.ts and any callers (e.g., apps/web/app/api/loom/[...route]/route.ts) to use the imported Workflow and Live layer.
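The single-sourcing shape can be sketched with stand-ins — the `WorkflowDef`/`toLayer` types below are illustrative, not the real @effect/workflow API:

```typescript
// The domain package owns the one canonical definition; the backend attaches
// only the live implementation, so payload/error schemas cannot drift.
type WorkflowDef<P, R> = {
	name: string;
	toLayer: (impl: (payload: P) => R) => { def: WorkflowDef<P, R>; impl: (payload: P) => R };
};

const makeWorkflow = <P, R>(name: string): WorkflowDef<P, R> => {
	const def: WorkflowDef<P, R> = {
		name,
		toLayer: (impl) => ({ def, impl }),
	};
	return def;
};

// "web-domain": single source of truth for the workflow's shape.
const LoomImportVideo = makeWorkflow<{ videoId: string }, string>("LoomImportVideo");

// "web-backend": wiring only, no duplicated schema.
const LoomImportVideoLive = LoomImportVideo.toLayer((p) => `imported:${p.videoId}`);

console.log(LoomImportVideoLive.def.name); // LoomImportVideo
console.log(LoomImportVideoLive.impl({ videoId: "v1" })); // imported:v1
```

Every consumer (runner, RPC group, HTTP API) then references the same `def`, so a payload change in the domain is the only place the schema can change.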
```ts
export const Workflows = [S3Bucket] as const;
```
|
🧩 Analysis chain
Remove stray Workflows export from a domain model (name collision + layering leak).
A `Workflows` symbol in this domain file will collide with packages/web-backend/src/Workflows.ts and couples the domain layer to workflow wiring. Recommend removing.
-export const Workflows = [S3Bucket] as const;
Verify there are no multiple `Workflows` exports across packages:
🏁 Script executed:
```bash
#!/bin/bash
rg -n '^export const Workflows\b' -C1 --type=ts
```
Length of output: 402
Remove stray Workflows export from domain model
packages/web-domain/src/S3Bucket.ts exports `Workflows`, which collides with packages/web-backend/src/Workflows.ts (used to build RpcGroup) and leaks workflow wiring into the domain layer — remove the export.
-export const Workflows = [S3Bucket] as const;
📝 Committable suggestion
export const Workflows = [S3Bucket] as const;
🤖 Prompt for AI Agents
In packages/web-domain/src/S3Bucket.ts around lines 16 to 17, remove the stray
export `export const Workflows = [S3Bucket] as const;` because it leaks workflow
wiring into the domain layer and collides with
packages/web-backend/src/Workflows.ts; simply delete this export (or move any
necessary workflow wiring into the backend Workflows file) so the domain model
no longer exposes workflow registration.
Actionable comments posted: 16
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (24)
packages/web-backend/tsconfig.json (1)
2-2
: Enable noUncheckedIndexedAccess in base tsconfig
packages/config/base.tsconfig.json — compilerOptions currently: "strict": true, "noUncheckedIndexedAccess": null. Set "noUncheckedIndexedAccess": true under compilerOptions so downstream tsconfigs inherit it.
apps/web/app/api/video/domain-info/route.ts (2)
62-71
: Bug: domainVerified returns a Date, not boolean.
`organization[0].domainVerified || false` yields a Date (truthy) or `false`. Return a boolean with `!!`.

```diff
-	domainVerified: organization[0].domainVerified || false,
+	domainVerified: !!organization[0].domainVerified,
```
88-91
: Bug: domainVerified type again.
Same fix needed here.

```diff
-	domainVerified: ownerOrganizations[0].domainVerified || false,
+	domainVerified: !!ownerOrganizations[0].domainVerified,
```
apps/web/actions/folders/updateFolder.ts (1)
63-71
: Authorization gap: updates can target folders outside the active org.
The UPDATE filters only by folders.id; a caller could mutate a folder from another org if they know its id. Verify ownership before update and scope the UPDATE by organizationId.
Apply this diff:

```diff
@@
 export async function updateFolder({
 	folderId,
 	name,
 	color,
 	parentId,
 }: {
 	folderId: Folder.FolderId;
 	name?: string;
 	color?: "normal" | "blue" | "red" | "yellow";
 	parentId?: Folder.FolderId | null;
 }) {
 	const user = await getCurrentUser();
 	if (!user || !user.activeOrganizationId)
 		throw new Error("Unauthorized or no active organization");
+	// Verify target folder belongs to the active org
+	const [targetFolder] = await db()
+		.select()
+		.from(folders)
+		.where(
+			and(eq(folders.id, folderId), eq(folders.organizationId, user.activeOrganizationId)),
+		);
+	if (!targetFolder) {
+		throw new Error("Folder not found or not accessible");
+	}
@@
 	await db()
 		.update(folders)
 		.set({
 			...(name !== undefined ? { name } : {}),
 			...(color !== undefined ? { color } : {}),
 			...(parentId !== undefined ? { parentId } : {}),
 		})
-		.where(eq(folders.id, folderId));
+		.where(
+			and(eq(folders.id, folderId), eq(folders.organizationId, user.activeOrganizationId)),
+		);
```

Also applies to: 21-24
@@ export async function updateFolder({ folderId, name, color, parentId, }: { folderId: Folder.FolderId; name?: string; color?: "normal" | "blue" | "red" | "yellow"; parentId?: Folder.FolderId | null; }) { const user = await getCurrentUser(); if (!user || !user.activeOrganizationId) throw new Error("Unauthorized or no active organization"); + // Verify target folder belongs to the active org + const [targetFolder] = await db() + .select() + .from(folders) + .where( + and(eq(folders.id, folderId), eq(folders.organizationId, user.activeOrganizationId)), + ); + if (!targetFolder) { + throw new Error("Folder not found or not accessible"); + } @@ await db() .update(folders) .set({ ...(name !== undefined ? { name } : {}), ...(color !== undefined ? { color } : {}), ...(parentId !== undefined ? { parentId } : {}), }) - .where(eq(folders.id, folderId)); + .where( + and(eq(folders.id, folderId), eq(folders.organizationId, user.activeOrganizationId)), + );Also applies to: 21-24
apps/web/actions/folders/moveVideoToFolder.ts (3)
26-31
: Authorize by owner/org when selecting and updating the video. Currently, anyone with a videoId could move it; scope both the SELECT and the UPDATE by ownership or active org.
-import { and, eq } from "drizzle-orm";
+import { and, eq, or } from "drizzle-orm";
@@
-// Get the current video to know its original folder
-const [currentVideo] = await db()
-	.select({ folderId: videos.folderId, id: videos.id })
-	.from(videos)
-	.where(eq(videos.id, videoId));
+// Get the current video and authorize by owner or org
+const [currentVideo] = await db()
+	.select({
+		id: videos.id,
+		folderId: videos.folderId,
+		ownerId: videos.ownerId,
+		orgId: videos.orgId,
+	})
+	.from(videos)
+	.where(
+		and(
+			eq(videos.id, videoId),
+			or(eq(videos.ownerId, user.id), eq(videos.orgId, user.activeOrganizationId)),
+		),
+	);
+if (!currentVideo) throw new Error("Video not found or not accessible");
@@
 await db()
 	.update(videos)
 	.set({
 		folderId,
 		updatedAt: new Date(),
 	})
-	.where(eq(videos.id, videoId));
+	.where(
+		and(
+			eq(videos.id, videoId),
+			or(eq(videos.ownerId, user.id), eq(videos.orgId, user.activeOrganizationId)),
+		),
+	);
Also applies to: 60-67, 7-7
50-57
: Bug: space-level update affects all spaces for the video. The WHERE filters only by videoId, so this updates every spaceVideos row for that video.
 if (spaceId) {
-	await db()
-		.update(spaceVideos)
-		.set({
-			folderId: folderId === null ? null : folderId,
-		})
-		.where(eq(spaceVideos.videoId, videoId));
+	await db()
+		.update(spaceVideos)
+		.set({ folderId })
+		.where(and(eq(spaceVideos.videoId, videoId), eq(spaceVideos.spaceId, spaceId)));
 }
71-74
: Avoid revalidating a null folder path. This can generate /folder/null. Guard the path and also refresh the space root.
-if (spaceId) {
-	revalidatePath(`/dashboard/spaces/${spaceId}/folder/${folderId}`);
-}
+if (spaceId) {
+	revalidatePath(`/dashboard/spaces/${spaceId}`);
+	if (folderId) {
+		revalidatePath(`/dashboard/spaces/${spaceId}/folder/${folderId}`);
+	}
+}
apps/web/lib/folder.ts (2)
21-33
: Add org-scoped authorization to getFolderById. Without scoping by organization, folder names/metadata can leak across orgs if ids are known.
 export async function getFolderById(folderId: string | undefined) {
 	if (!folderId) throw new Error("Folder ID is required");
+	const user = await getCurrentUser();
+	if (!user || !user.activeOrganizationId)
+		throw new Error("Unauthorized or no active organization");
 	const [folder] = await db()
 		.select()
 		.from(folders)
-		.where(eq(folders.id, Folder.FolderId.make(folderId)));
+		.where(
+			and(
+				eq(folders.id, Folder.FolderId.make(folderId)),
+				eq(folders.organizationId, user.activeOrganizationId),
+			),
+		);
-	if (!folder) throw new Error("Folder not found");
+	if (!folder) throw new Error("Folder not found or not accessible");
144-201
: Authorize getVideosByFolderId by organization. The query returns videos for any folderId; add org scoping (join folders) and require an active org.
 export async function getVideosByFolderId(folderId: Folder.FolderId) {
 	if (!folderId) throw new Error("Folder ID is required");
+	const user = await getCurrentUser();
+	if (!user || !user.activeOrganizationId)
+		throw new Error("Unauthorized or no active organization");
 	const videoData = await db()
 		.select({
@@
 		})
 		.from(videos)
+		.innerJoin(folders, eq(videos.folderId, folders.id))
 		.leftJoin(comments, eq(videos.id, comments.videoId))
 		.leftJoin(sharedVideos, eq(videos.id, sharedVideos.videoId))
 		.leftJoin(organizations, eq(sharedVideos.organizationId, organizations.id))
 		.leftJoin(users, eq(videos.ownerId, users.id))
-		.where(eq(videos.folderId, folderId))
+		.where(
+			and(eq(videos.folderId, folderId), eq(folders.organizationId, user.activeOrganizationId)),
+		)
apps/web/app/(org)/dashboard/folder/[id]/components/UploadCapButtonWithFolder.tsx (1)
23-36
: Handle failure/cancel paths to avoid sticky uploading UI. UploadCapButton currently does not expose onError/onCancel — its failure path only logs/toasts instead of notifying the parent. Add error/cancel callbacks to UploadCapButton (call them from the conversion/upload catch and abort paths) and pass an onError/onCancel handler from apps/web/app/(org)/dashboard/folder/[id]/components/UploadCapButtonWithFolder.tsx that clears setIsUploading(false), setUploadingCapId(null), setUploadingThumbnailUrl(undefined), and setUploadProgress(0). Alternatively, ensure UploadCapButton itself resets/propagates state on failures/aborts.
apps/web/actions/spaces/remove-videos.ts (2)
21-24
: Missing space-level authorization check (security). Anyone who owns the videos can mutate any space by ID. Verify the user owns the space (or is a member with write access) before deleting.
Apply minimal owner check (extend to members if applicable):
+import { spaces } from "@cap/database/schema";
@@
 	if (!spaceId || !videoIds || videoIds.length === 0) {
 		throw new Error("Missing required data");
 	}
+	const [space] = await db()
+		.select()
+		.from(spaces)
+		.where(eq(spaces.id, spaceId))
+		.limit(1);
+	if (!space) throw new Error("Space not found");
+	if (space.ownerId !== user.id) {
+		// TODO: if your model supports members, allow authorized members here
+		throw new Error("You don't have permission to modify this space");
+	}
Also applies to: 37-45
37-66
: Make delete + folder cleanup atomic and return the actual deleted count. Race conditions and partial updates are possible. Also, deletedCount reflects valid inputs, not rows actually removed.
@@
-	// Remove from spaceVideos
-	await db()
-		.delete(spaceVideos)
-		.where(
-			and(
-				eq(spaceVideos.spaceId, spaceId),
-				inArray(spaceVideos.videoId, validVideoIds),
-			),
-		);
-
-	// Set folderId to null for any removed videos that are in folders belonging to this space
-	// Find all folder IDs in this space
-	const folderRows = await db()
-		.select({ id: folders.id })
-		.from(folders)
-		.where(eq(folders.spaceId, spaceId));
-	const folderIds = folderRows.map((f) => f.id);
-
-	if (folderIds.length > 0) {
-		await db()
-			.update(videos)
-			.set({ folderId: null })
-			.where(
-				and(
-					inArray(videos.id, validVideoIds),
-					inArray(videos.folderId, folderIds),
-				),
-			);
-	}
+	let deletedCount = 0;
+	await db().transaction(async (tx) => {
+		const deleted = await tx
+			.delete(spaceVideos)
+			.where(
+				and(
+					eq(spaceVideos.spaceId, spaceId),
+					inArray(spaceVideos.videoId, validVideoIds),
+				),
+			)
+			.returning({ videoId: spaceVideos.videoId });
+		const deletedIds = deleted.map((r) => r.videoId);
+		deletedCount = deletedIds.length;
+		if (deletedCount === 0) return;
+		const folderRows = await tx
+			.select({ id: folders.id })
+			.from(folders)
+			.where(eq(folders.spaceId, spaceId));
+		const folderIds = folderRows.map((f) => f.id);
+		if (folderIds.length > 0) {
+			await tx
+				.update(videos)
+				.set({ folderId: null })
+				.where(
+					and(inArray(videos.id, deletedIds), inArray(videos.folderId, folderIds)),
+				);
+		}
+	});
@@
-	message: `Removed ${validVideoIds.length} video(s) from space and folders`,
-	deletedCount: validVideoIds.length,
+	message: `Removed ${deletedCount} video(s) from space and folders`,
+	deletedCount,
Also applies to: 69-73
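The counting fix can be illustrated as a pure function: report the rows actually removed, not the rows requested. A hypothetical in-memory sketch (the array stands in for the spaceVideos table):

```typescript
type SpaceVideo = { spaceId: string; videoId: string };

function removeFromSpace(
	rows: SpaceVideo[],
	spaceId: string,
	requested: string[],
): { remaining: SpaceVideo[]; deletedCount: number } {
	const req = new Set(requested);
	// Keep rows that are NOT (linked to this space AND requested for removal).
	const remaining = rows.filter((r) => !(r.spaceId === spaceId && req.has(r.videoId)));
	return { remaining, deletedCount: rows.length - remaining.length };
}

const rows: SpaceVideo[] = [
	{ spaceId: "s1", videoId: "v1" },
	{ spaceId: "s2", videoId: "v2" },
];
// "v2" is requested but linked to a different space, so only one row is deleted —
// the requested count (2) would over-report.
console.log(removeFromSpace(rows, "s1", ["v1", "v2"]).deletedCount); // 1
```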
apps/web/actions/organizations/remove-videos.ts (1)
83-91
: Make delete + folder cleanup atomic and report the actual count. Use a transaction and returning() for accurate counts and consistency.
@@
-	await db()
-		.delete(sharedVideos)
-		.where(
-			and(
-				eq(sharedVideos.organizationId, organizationId),
-				inArray(sharedVideos.videoId, existingVideoIds),
-			),
-		);
-
-	// Clear folderId for videos that are being removed from the organization and are currently in folders within that organization
-	// First, get all folder IDs that belong to this organization
-	const organizationFolders = await db()
-		.select({ id: folders.id })
-		.from(folders)
-		.where(eq(folders.organizationId, organizationId));
-
-	const organizationFolderIds = organizationFolders.map((f) => f.id);
-
-	if (organizationFolderIds.length > 0) {
-		await db()
-			.update(videos)
-			.set({ folderId: null })
-			.where(
-				and(
-					inArray(videos.id, existingVideoIds),
-					inArray(videos.folderId, organizationFolderIds),
-				),
-			);
-	}
+	let deletedCount = 0;
+	await db().transaction(async (tx) => {
+		const deleted = await tx
+			.delete(sharedVideos)
+			.where(
+				and(
+					eq(sharedVideos.organizationId, organizationId),
+					inArray(sharedVideos.videoId, existingVideoIds),
+				),
+			)
+			.returning({ videoId: sharedVideos.videoId });
+		const deletedIds = deleted.map((d) => d.videoId);
+		deletedCount = deletedIds.length;
+		if (deletedCount === 0) return;
+		const organizationFolders = await tx
+			.select({ id: folders.id })
+			.from(folders)
+			.where(eq(folders.organizationId, organizationId));
+		const organizationFolderIds = organizationFolders.map((f) => f.id);
+		if (organizationFolderIds.length > 0) {
+			await tx
+				.update(videos)
+				.set({ folderId: null })
+				.where(
+					and(
+						inArray(videos.id, deletedIds),
+						inArray(videos.folderId, organizationFolderIds),
+					),
+				);
+		}
+	});
@@
-	message: `${existingVideoIds.length} video${
+	message: `${deletedCount} video${
-		existingVideoIds.length === 1 ? "" : "s"
+		deletedCount === 1 ? "" : "s"
 	} removed from organization`,
Also applies to: 101-111, 116-121
apps/web/actions/spaces/add-videos.ts (2)
26-34
: Add space-level authorization (security). You load the space but don’t check that the caller can modify it.
 const [space] = await db()
 	.select()
 	.from(spaces)
 	.where(eq(spaces.id, spaceId));
 if (!space) {
 	throw new Error("Space not found");
 }
+if (space.ownerId !== user.id) {
+	// TODO: extend to members/roles if your model supports it
+	throw new Error("You don't have permission to modify this space");
+}
72-72
: Make the insert idempotent with a DB constraint. No unique index found for (spaceId, videoId) in packages/database/schema.ts — add the DB constraint and make the insert a no-op on conflict.
- Schema: packages/database/schema.ts — update space_videos table definition (around lines ~569-573) to add a unique index on (spaceId, videoId).
- Code: apps/web/actions/spaces/add-videos.ts — make the insert ignore conflicts.
Apply this change to the insert (unchanged suggestion still valid):
- await db().insert(spaceVideos).values(spaceVideoEntries);
+ await db()
+	.insert(spaceVideos)
+	.values(spaceVideoEntries)
+	.onConflictDoNothing({ target: [spaceVideos.spaceId, spaceVideos.videoId] });
Add a unique index in the table callback, e.g.:
(table) => [
	uniqueIndex("space_videos_spaceId_videoId_idx").on(table.spaceId, table.videoId),
]
apps/web/actions/organizations/add-videos.ts (3)
104-111
: Unscoped folder reset can corrupt unrelated folder assignments. This clears folderId globally for those videos, not just within the org context. Scope it to the org’s folders and make the whole operation atomic.
+import { folders } from "@cap/database/schema";
@@
-	await db().insert(sharedVideos).values(sharedVideoEntries);
-
-	// Clear folderId for videos added to organization so they appear in main view
-	await db()
-		.update(videos)
-		.set({ folderId: null })
-		.where(inArray(videos.id, newVideoIds));
+	await db().transaction(async (tx) => {
+		await tx
+			.insert(sharedVideos)
+			.values(sharedVideoEntries)
+			.onConflictDoNothing({
+				target: [sharedVideos.organizationId, sharedVideos.videoId],
+			});
+		const organizationFolders = await tx
+			.select({ id: folders.id })
+			.from(folders)
+			.where(eq(folders.organizationId, organizationId));
+		const organizationFolderIds = organizationFolders.map((f) => f.id);
+		if (organizationFolderIds.length > 0) {
+			await tx
+				.update(videos)
+				.set({ folderId: null })
+				.where(
+					and(
+						inArray(videos.id, newVideoIds),
+						inArray(videos.folderId, organizationFolderIds),
+					),
+				);
+		}
+	});
97-105
: Add DB unique constraint and make insert idempotent
- packages/database/schema.ts (shared_videos): add a UNIQUE index/constraint on (organizationId, videoId).
- apps/web/actions/organizations/add-videos.ts (lines 97–105): change the insert to an idempotent upsert / ON CONFLICT DO NOTHING (ORM-equivalent) so concurrent requests cannot create duplicate shared_videos rows.
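The semantics that the unique constraint plus ON CONFLICT DO NOTHING buys can be modeled with a Set keyed on the composite (organizationId, videoId) — a sketch, not the actual DB behavior:

```typescript
// Insert-if-absent: a duplicate becomes a no-op instead of a second row,
// which is what makes concurrent requests safe.
function addSharedVideo(existing: Set<string>, orgId: string, videoId: string): boolean {
	const key = `${orgId}:${videoId}`;
	if (existing.has(key)) return false; // conflict — ignored
	existing.add(key);
	return true;
}

const table = new Set<string>();
console.log(addSharedVideo(table, "org1", "vid1")); // true — inserted
console.log(addSharedVideo(table, "org1", "vid1")); // false — duplicate ignored
```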
112-114
: Fix revalidation target — don't pass organizationId to a space route. revalidatePath(`/dashboard/spaces/${organizationId}`) is inconsistent with other code that revalidates `/dashboard/spaces/${spaceId}`; update to revalidate the intended route (use the correct spaceId, revalidate /dashboard/spaces, or revalidate an organization-specific page as appropriate).
Files: apps/web/actions/organizations/add-videos.ts:112–113, apps/web/actions/organizations/remove-videos.ts:113–114
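A small helper makes the intended routes explicit and keeps a null folderId from ever producing /folder/null — a hypothetical sketch of the path-building logic only (revalidatePath itself is Next.js API):

```typescript
// Build the list of paths to revalidate for a space mutation.
function revalidationPaths(spaceId: string, folderId: string | null): string[] {
	const paths = [`/dashboard/spaces/${spaceId}`]; // always refresh the space root
	if (folderId) paths.push(`/dashboard/spaces/${spaceId}/folder/${folderId}`);
	return paths;
}

console.log(revalidationPaths("s1", null)); // ["/dashboard/spaces/s1"]
console.log(revalidationPaths("s1", "f1"));
// ["/dashboard/spaces/s1", "/dashboard/spaces/s1/folder/f1"]
```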
packages/database/auth/auth-options.ts (1)
97-106
: Do not log OTP codes or full email payloads in production. console.log({ identifier, token }) and console.log({ email }) leak sensitive data to logs. Remove, or gate with environment checks without printing the token.
Apply:
-	console.log({ identifier, token });
 	const { OTPEmail } = await import("../emails/otp-email");
 	const email = OTPEmail({ code: token, email: identifier });
-	console.log({ email });
+	if (serverEnv().NODE_ENV !== "production") {
+		console.debug("Sending verification email to %s", identifier);
+	}
 	await sendEmail({
 		email: identifier,
 		subject: `Your Cap Verification Code`,
 		react: email,
 	});
apps/web/actions/video/upload.ts (1)
53-58
: CloudFront invalidation condition looks inverted (it may invalidate on custom buckets). Invalidate the CDN only when using the default CAP bucket, not when users upload to custom buckets.
Apply:
- if (
-	!customBucket ||
-	!s3Config ||
-	customBucket.bucketName !== serverEnv().CAP_AWS_BUCKET
- ) {
+ const isDefaultBucket =
+	(!customBucket && !s3Config) ||
+	customBucket?.bucketName === serverEnv().CAP_AWS_BUCKET;
+ if (isDefaultBucket) {
Optional: remove the now-redundant s3Config computation if unused elsewhere.
apps/web/actions/videos/password.ts (1)
101-103
: Set secure cookie attributes for the password token. cookies().set("x-cap-password", ...) is missing httpOnly, secure, sameSite and path — this allows JS access and insecure transmission over non-HTTPS.
File: apps/web/actions/videos/password.ts (lines 101-103)
- cookies().set("x-cap-password", await encrypt(video.password));
+ cookies().set("x-cap-password", await encrypt(video.password), {
+	httpOnly: true,
+	secure: true,
+	sameSite: "lax",
+	path: "/",
+	// consider a TTL; e.g., 30 days or shorter
+	// maxAge: 60 * 60 * 24 * 30,
+ });
Confirm other places that read this cookie don’t rely on client-side access.
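For illustration, here is what those attributes look like in the raw Set-Cookie header. The serializer below is a hypothetical stand-in — Next.js builds this header from the options object, so you never write it by hand:

```typescript
function serializeCookie(name: string, value: string): string {
	return [
		`${name}=${encodeURIComponent(value)}`,
		"HttpOnly", // not readable from document.cookie
		"Secure", // transmitted over HTTPS only
		"SameSite=Lax", // not sent on cross-site POSTs
		"Path=/",
	].join("; ");
}

console.log(serializeCookie("x-cap-password", "tok"));
// x-cap-password=tok; HttpOnly; Secure; SameSite=Lax; Path=/
```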
apps/web/app/api/thumbnail/route.ts (1)
15-17
: Derive ownerId from the DB; stop trusting userId from the query string for S3 paths. Using a caller-supplied userId to build the S3 key is unsafe and can generate wrong URLs. Use video.ownerId and require only videoId.
Apply this diff:
-	const userId = searchParams.get("userId");
 	const videoId = searchParams.get("videoId");
@@
-	if (!userId || !videoId) {
+	if (!videoId) {
 		return new Response(
 			JSON.stringify({
 				error: true,
-				message: "userId or videoId not supplied",
+				message: "videoId not supplied",
 			}),
 			{
 				status: 400,
 				headers: getHeaders(origin),
 			},
 		);
 	}
@@
-	const prefix = `${userId}/${videoId}/`;
+	const prefix = `${video.ownerId}/${video.id}/`;
Also applies to: 19-30, 63-64
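The key idea reduces to: build the prefix only from DB-trusted fields. A minimal sketch (the row type is illustrative):

```typescript
// The video object stands in for the row fetched by videoId.
type VideoRow = { id: string; ownerId: string };

function thumbnailPrefix(video: VideoRow): string {
	// Never interpolate a caller-supplied userId here.
	return `${video.ownerId}/${video.id}/`;
}

console.log(thumbnailPrefix({ id: "vid1", ownerId: "user9" })); // user9/vid1/
```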
apps/web/actions/videos/get-transcript.ts (1)
40-47
: Add access control before serving the transcript. Currently any caller can fetch transcripts of private videos. Require owner or public visibility (and consider password/share checks if applicable).
 const { video } = result;
+// Authorization: allow owner or public video
+if (!(video.public === true || (user && user.id === video.ownerId))) {
+	return { success: false, message: "Not authorized to access transcript" };
+}
+
 if (video.transcriptionStatus !== "COMPLETE") {
 	return {
 		success: false,
 		message: "Transcript is not ready yet",
 	};
 }
apps/web/package.json (1)
98-106
: Fix Next.js/React version mismatch — Next 14 is incompatible with React 19. Confirmed: Next.js 15 is the first release that officially supports React 19; Next.js 14 expects React 18 and is not compatible with React 19.
File: apps/web/package.json (deps block, lines 98-106)
Option A (upgrade Next to a React 19–compatible version):
- "next": "^14",
+ "next": "^15",
Option B (stay on Next 14; use React 18):
- "react": "^19.1.0",
- "react-dom": "^19.1.0",
+ "react": "^18.3.1",
+ "react-dom": "^18.3.1",
♻️ Duplicate comments (4)
apps/web/app/api/loom/[...route]/route.ts (1)
50-51
: Replace hard-coded Loom user/org IDs with payload values and validate them. Hard-coded IDs will break multi-user imports.
-	userId: "loomVideoId123",
-	orgId: "loomOrgId123",
+	userId: payload.loom.userId,
+	orgId: payload.loom.orgId,
Also extend the payload schema:
 .setPayload(
 	Schema.Struct({
 		loom: Schema.Struct({
 			downloadUrl: Schema.String,
 			videoId: Schema.String,
+			userId: Schema.String,
+			orgId: Schema.String,
 		}),
 	}),
 )
packages/web-backend/src/Videos/VideosRepo.ts (2)
61-71
: Redundant Option.none() fallbacks. You’re already passing Option values.
-orgId: Option.getOrNull(data.orgId ?? Option.none()),
-bucket: Option.getOrNull(data.bucketId ?? Option.none()),
-metadata: Option.getOrNull(data.metadata ?? Option.none()),
-transcriptionStatus: Option.getOrNull(data.transcriptionStatus ?? Option.none()),
-folderId: Option.getOrNull(data.folderId ?? Option.none()),
-width: Option.getOrNull(data.width ?? Option.none()),
-height: Option.getOrNull(data.height ?? Option.none()),
-duration: Option.getOrNull(data.duration ?? Option.none()),
+orgId: Option.getOrNull(data.orgId),
+bucket: Option.getOrNull(data.bucketId),
+metadata: Option.getOrNull(data.metadata),
+transcriptionStatus: Option.getOrNull(data.transcriptionStatus),
+folderId: Option.getOrNull(data.folderId),
+width: Option.getOrNull(data.width),
+height: Option.getOrNull(data.height),
+duration: Option.getOrNull(data.duration),
54-88
: Don’t run parallel queries on a single transaction connection. Promise.all inside a transaction can race queries on the same connection. Execute sequentially and simplify Option handling.
-	yield* db.execute((db) =>
-		db.transaction(async (db) => {
-			const promises: MySqlInsertBase<any, any, any>[] = [
-				db.insert(Db.videos).values([
-					{
-						...data,
-						id,
-						orgId: Option.getOrNull(data.orgId ?? Option.none()),
-						bucket: Option.getOrNull(data.bucketId ?? Option.none()),
-						metadata: Option.getOrNull(data.metadata ?? Option.none()),
-						transcriptionStatus: Option.getOrNull(
-							data.transcriptionStatus ?? Option.none(),
-						),
-						folderId: Option.getOrNull(data.folderId ?? Option.none()),
-						width: Option.getOrNull(data.width ?? Option.none()),
-						height: Option.getOrNull(data.height ?? Option.none()),
-						duration: Option.getOrNull(data.duration ?? Option.none()),
-					},
-				]),
-			];
-
-			if (data.importSource && Option.isSome(data.orgId))
-				promises.push(
-					db.insert(Db.importedVideos).values([
-						{
-							id,
-							orgId: data.orgId.value,
-							source: data.importSource.source,
-							sourceId: data.importSource.id,
-						},
-					]),
-				);
-
-			await Promise.all(promises);
-		}),
-	);
+	yield* db.execute((db) =>
+		db.transaction(async (db) => {
+			await db.insert(Db.videos).values([
+				{
+					...data,
+					id,
+					orgId: Option.getOrNull(data.orgId),
+					bucket: Option.getOrNull(data.bucketId),
+					metadata: Option.getOrNull(data.metadata),
+					transcriptionStatus: Option.getOrNull(data.transcriptionStatus),
+					folderId: Option.getOrNull(data.folderId),
+					width: Option.getOrNull(data.width),
+					height: Option.getOrNull(data.height),
+					duration: Option.getOrNull(data.duration),
+				},
+			]);
+
+			if (data.importSource && Option.isSome(data.orgId)) {
+				await db.insert(Db.importedVideos).values([
+					{
+						id,
+						orgId: data.orgId.value,
+						source: data.importSource.source,
+						sourceId: data.importSource.id,
+					},
+				]);
+			}
+		}),
+	);
packages/web-backend/src/Loom/ImportVideo.ts (1)
65-68
: Don’t die; map to a typed, recoverable error.
-	const [_, customBucket] = yield* s3Buckets
-		.getProviderForUser(payload.cap.userId)
-		.pipe(Effect.catchAll(() => Effect.die(null)));
+	const [_, customBucket] = yield* s3Buckets
+		.getProviderForUser(payload.cap.userId)
+		.pipe(Effect.catchAll((cause) => Effect.fail(new LoomApiError({ cause }))));
🧹 Nitpick comments (71)
apps/web/lib/Notification.ts (2)
45-46
: Guard the possible runtime throw from Video.VideoId.make(...). If make validates and throws on malformed IDs, this now fails before the DB query and bubbles as a 500 via the outer catch. Consider mapping invalid IDs to the existing “not found” path to preserve behavior and clearer logs.
Apply this diff to precompute the ID with a local try/catch and use it in the query:
 // First, get the video and owner data
-	const [videoResult] = await db()
+	let videoId: ReturnType<typeof Video.VideoId.make>;
+	try {
+		videoId = Video.VideoId.make(notification.videoId);
+	} catch {
+		throw new Error("Video or owner not found");
+	}
+
+	const [videoResult] = await db()
 		.select({
 			videoId: videos.id,
 			ownerId: users.id,
 			activeOrganizationId: users.activeOrganizationId,
 			preferences: users.preferences,
 		})
 		.from(videos)
 		.innerJoin(users, eq(users.id, videos.ownerId))
-		.where(eq(videos.id, Video.VideoId.make(notification.videoId)))
+		.where(eq(videos.id, videoId))
 		.limit(1);
1-1
: Rename file to kebab-case per repo guideline. Rename to notification.ts to match “TypeScript file names should be kebab-case”.
apps/web/hooks/use-transcript.ts (5)
17-24
: Avoid noisy logs for the expected “not ready” state. Don’t log an error for a normal pending condition; log only unexpected failures.
-	console.error(
-		"[useTranscript] Failed to fetch transcript:",
-		result.message,
-	);
-	if (result.message === "Transcript is not ready yet") {
-		throw new Error("TRANSCRIPT_NOT_READY");
-	}
-	throw new Error(result.message);
+	if (result.message === "Transcript is not ready yet") {
+		throw new Error("TRANSCRIPT_NOT_READY");
+	}
+	console.error(
+		"[useTranscript] Failed to fetch transcript:",
+		result.message,
+	);
+	throw new Error(result.message);
9-30
: Don’t retry on TRANSCRIPT_NOT_READY; cap generic retries. This prevents wasteful retries and UI thrash while the transcript is still processing.
 return useQuery({
 	queryKey: ["transcript", videoId],
 	queryFn: async () => {
@@
 	},
 	enabled: transcriptionStatus === "COMPLETE",
 	staleTime: 0,
 	refetchOnWindowFocus: false,
+	retry: (failureCount, error) => {
+		if (error instanceof Error && error.message === "TRANSCRIPT_NOT_READY") {
+			return false;
+		}
+		return failureCount < 2;
+	},
 });
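The suggested retry policy, extracted as a pure function so it can be reasoned about and tested without React Query:

```typescript
function shouldRetry(failureCount: number, error: unknown): boolean {
	if (error instanceof Error && error.message === "TRANSCRIPT_NOT_READY") {
		return false; // transcript still processing — retrying won't help
	}
	return failureCount < 2; // cap generic failures at two retries
}

console.log(shouldRetry(0, new Error("TRANSCRIPT_NOT_READY"))); // false
console.log(shouldRetry(1, new Error("network"))); // true
console.log(shouldRetry(2, new Error("network"))); // false
```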
10-10
: Nit: mark the queryKey tuple as const. Improves literal inference and reduces accidental widening.
- queryKey: ["transcript", videoId],
+ queryKey: ["transcript", videoId] as const,
33-39
: Align the invalidate helper with typed IDs. Keep types consistent with the hook signature.
 export const useInvalidateTranscript = () => {
 	const queryClient = useQueryClient();
-	return (videoId: string) => {
+	return (videoId: Video.VideoId) => {
 		queryClient.invalidateQueries({ queryKey: ["transcript", videoId] });
 	};
 };
9-11
: Optional: centralize query keys to avoid drift. A small keys factory prevents mismatches between fetch and invalidation.
Add near the top:
export const transcriptKeys = {
	detail: (videoId: Video.VideoId) => ["transcript", videoId] as const,
};
Then:
- queryKey: ["transcript", videoId] as const,
+ queryKey: transcriptKeys.detail(videoId),
- queryClient.invalidateQueries({ queryKey: ["transcript", videoId] });
+ queryClient.invalidateQueries({ queryKey: transcriptKeys.detail(videoId) });
Also applies to: 36-38
packages/web-backend/tsconfig.json (2)
10-11
: DX: add declaration maps; remove redundant noEmit.
- Add
"declarationMap": true
for better source navigation of d.ts."noEmit": false
is default; safe to drop for brevity.- "emitDeclarationOnly": true + "emitDeclarationOnly": true, + "declarationMap": true
4-4
: packages/web-backend/tsconfig.json — only include .ts (no .tsx files found)No .tsx files detected under packages/web-backend; narrow include to speed tooling and avoid accidental JSX drift.
- "include": ["**/*.ts", "**/*.tsx"], + "include": ["**/*.ts"],apps/web/app/api/video/domain-info/route.ts (5)
4-4
: Remove unused import.
and
isn’t used.-import { and, eq } from "drizzle-orm"; +import { eq } from "drizzle-orm";
17-25
: Validate/coercevideoId
once and return 400 on bad IDs.If
make()
throws, this currently 500s. Pre-validate and reuse the typed ID.- .where(eq(videos.id, Video.VideoId.make(videoId))) + .where(eq(videos.id, typedVideoId))Add near Line 15:
// before first DB call const typedVideoId = (() => { try { return Video.VideoId.make(videoId); } catch { return null; } })(); if (!typedVideoId) { return Response.json({ error: "Invalid videoId" }, { status: 400 }); }
26-32
: Drop unreachableownerId
null check.
videos.ownerId
is non-null per schema; this branch adds noise.- if (!video.ownerId) { - return Response.json({ error: "Invalid video data" }, { status: 500 }); - }
35-51
: Reusevideo.id
and simplify shared org lookup; avoid implicitany
.Reduce duplicate ID parsing, tighten types, and simplify array handling.
- const sharedVideo = await db() - .select({ - organizationId: sharedVideos.organizationId, - }) - .from(sharedVideos) - .where(eq(sharedVideos.videoId, Video.VideoId.make(videoId))) - .limit(1); - - let organizationId = null; - if ( - sharedVideo.length > 0 && - sharedVideo[0] && - sharedVideo[0].organizationId - ) { - organizationId = sharedVideo[0].organizationId; - } + const [shared] = await db() + .select({ organizationId: sharedVideos.organizationId }) + .from(sharedVideos) + .where(eq(sharedVideos.videoId, video.id)) + .limit(1); + const organizationId: string | null = shared?.organizationId ?? null;Also applies to: 40-41
7-13
: Add explicit cache policy and CORS headers per guidelines.Set precise revalidation and handle
OPTIONS
. Add shared CORS headers and pass them in responses.Add at top of file:
export const dynamic = "force-dynamic"; // explicit: always compute on request const corsHeaders = { "Access-Control-Allow-Origin": process.env.ALLOWED_ORIGIN ?? "*", "Access-Control-Allow-Methods": "GET, OPTIONS", "Access-Control-Allow-Headers": "Content-Type, Authorization", } as const; export function OPTIONS() { return new Response(null, { status: 204, headers: corsHeaders }); }Then, when returning JSON:
return Response.json(payload, { status: 200, headers: corsHeaders });apps/web/actions/folders/updateFolder.ts (1)
26-31
: Prefer explicit null/undefined check for parentId gate.if (parentId) skips validation for empty strings; be explicit to avoid edge cases.
- if (parentId) { + if (parentId !== null && parentId !== undefined) {apps/web/app/(org)/dashboard/folder/[id]/components/BreadcrumbItem.tsx (1)
8-8
: Use a type-only import to avoid bundling @cap/web-domain in the client.This reduces client bundle size since Folder is only used for typing.
-import { Folder } from "@cap/web-domain"; +import type { Folder } from "@cap/web-domain";apps/web/lib/folder.ts (1)
103-113
: Tighten typing for sharedSpacesMap keys.Use Video.VideoId for the key to preserve branding through the map.
- const sharedSpacesMap: Record< - string, + const sharedSpacesMap: Record< + Video.VideoId, Array<{ id: string; name: string; organizationId: string; iconUrl: string; isOrg: boolean; }> > = {};apps/web/app/(org)/dashboard/folder/[id]/components/UploadCapButtonWithFolder.tsx (3)
4-4
: Use a type‑only import to keep client bundle lean.You only use Folder for its types; importing it as a value can bloat the client bundle. Switch to a type‑only import.
-import { Folder } from "@cap/web-domain"; +import type { Folder } from "@cap/web-domain";
1-1
: Rename file to kebab‑case to match repo conventions.Guideline: TypeScript file names should be kebab-case. Suggest: upload-cap-button-with-folder.tsx (update imports accordingly).
8-12
: Approve — domain‑typed prop confirmed; optional Props alias recommended.UploadCapButton declares folderId?: Folder.FolderId, so the branded type is preserved.
-export function UploadCapButtonWithFolder({ - folderId, -}: { - folderId: Folder.FolderId; -}) { +type UploadCapButtonWithFolderProps = Readonly<{ folderId: Folder.FolderId }>; +export function UploadCapButtonWithFolder({ folderId }: UploadCapButtonWithFolderProps) {apps/web/app/(org)/dashboard/spaces/[spaceId]/components/VideoCard.tsx (2)
4-4
: Optional: Drop moment to reduce bundle size.Prefer
Intl.DateTimeFormat
or a lightweight lib (e.g.,date-fns
/dayjs
) for formatting.Also applies to: 123-124
1-1
: Filename style nit.Guidelines call for kebab-case file names; consider
video-card.tsx
on a follow-up PR.apps/web/app/(org)/dashboard/spaces/[spaceId]/components/VirtualizedVideoGrid.tsx (2)
3-3
: Extract shared types to avoid importing from a component.Same note as in VideoCard: move
VideoData
tovideo-types.ts
(or shared package) and import from there to prevent type-only cycles.
1-1
: Filename style nit.Consider
virtualized-video-grid.tsx
per guidelines.apps/web/app/(org)/dashboard/spaces/[spaceId]/components/AddVideosDialogBase.tsx (5)
26-36
: Removeany
from the public API surface; type the async results.Replace
Promise<any>
with a simpleApiResult<T>
to satisfy strict TS and the repo guidelines.import { Video } from "@cap/web-domain"; +type ApiResult<T> = { success: boolean; data: T; error?: string; message?: string }; + interface AddVideosDialogBaseProps { open: boolean; onClose: () => void; entityId: string; entityName: string; onVideosAdded?: () => void; - addVideos: (entityId: string, videoIds: Video.VideoId[]) => Promise<any>; - removeVideos: (entityId: string, videoIds: Video.VideoId[]) => Promise<any>; - getVideos: (limit?: number) => Promise<any>; - getEntityVideoIds: (entityId: string) => Promise<any>; + addVideos: (entityId: string, videoIds: Video.VideoId[]) => Promise<ApiResult<null>>; + removeVideos: (entityId: string, videoIds: Video.VideoId[]) => Promise<ApiResult<null>>; + getVideos: (limit?: number) => Promise<ApiResult<VideoData[]>>; + getEntityVideoIds: (entityId: string) => Promise<ApiResult<Video.VideoId[]>>; }
107-151
: Use precise cache updates instead of broad invalidations.Per guidelines, update the
entity-video-ids
cache withsetQueryData
and drop the blanket invalidations. Also capturevariables
inonSuccess
.const updateVideosMutation = useMutation({ mutationFn: async ({ toAdd, toRemove, }: { toAdd: Video.VideoId[]; toRemove: Video.VideoId[]; }) => { @@ - onSuccess: async (result) => { - const { addResult, removeResult } = result || {}; - - // Invalidate both queries to ensure UI updates - await Promise.all([ - queryClient.invalidateQueries({ queryKey: ["user-videos"] }), - queryClient.invalidateQueries({ - queryKey: ["entity-video-ids", entityId], - }), - ]); + onSuccess: async (result, variables) => { + const { addResult, removeResult } = result || {}; + const { toAdd = [], toRemove = [] } = variables || {}; + // Precise cache update for membership + queryClient.setQueryData<Video.VideoId[]>( + ["entity-video-ids", entityId], + (prev) => { + const next = new Set(prev ?? []); + for (const id of toAdd) next.add(id); + for (const id of toRemove) next.delete(id); + return Array.from(next); + }, + );
68-69
: Tighten tab typing withas const
and drop casts.Without
as const
,typeof filterTabs[number]
is juststring
. Make it a literal union and remove the manual cast.-const filterTabs = ["all", "added", "notAdded"]; +const filterTabs = ["all", "added", "notAdded"] as const; @@ -const [videoTab, setVideoTab] = useState<(typeof filterTabs)[number]>("all"); +const [videoTab, setVideoTab] = useState<(typeof filterTabs)[number]>("all"); @@ - onClick={() => setVideoTab(tab as "all" | "added" | "notAdded")} + onClick={() => setVideoTab(tab)}Also applies to: 154-155, 224-225
1-1
: Filename style nit.Consider
add-videos-dialog-base.tsx
per guidelines.
38-49
: Unify VideoData definitions — extract a shared type or rename the local one.
Two distinct VideoData types exist: apps/web/app/(org)/dashboard/spaces/[spaceId]/components/AddVideosDialogBase.tsx (detailed interface) and apps/web/app/(org)/dashboard/caps/Caps.tsx (minimal type). Extract a canonical shared VideoData (preferred) or rename the local type (e.g., AddVideosVideoData) to avoid confusion.apps/web/actions/videos/new-comment.ts (2)
11-16: Make parentCommentId nullable/optional to avoid sentinel empty strings.

Prevents inserting `""` into the DB and clarifies intent.

```diff
 export async function newComment(data: {
   content: string;
   videoId: Video.VideoId;
   type: "text" | "emoji";
-  parentCommentId: string;
+  parentCommentId?: string | null;
 }) {
@@
-  const parentCommentId = data.parentCommentId;
+  const parentCommentId = data.parentCommentId ?? null;
```

Optionally, export an input type to reuse across callers:

```ts
export type NewCommentInput = {
  content: string;
  videoId: Video.VideoId;
  type: "text" | "emoji";
  parentCommentId?: string | null;
};
```

Also applies to: 23-27
33-35: Trim input before validation.

Avoids accepting whitespace-only comments.

```diff
-  const content = data.content;
+  const content = data.content?.trim();
   if (!content || !videoId) {
     throw new Error("Content and videoId are required");
   }
```

apps/web/actions/spaces/remove-videos.ts (1)
74-79: Avoid `any` in catch and normalize error handling.

Use `unknown` with a safe message extraction for strict TS.

```diff
-  } catch (error: any) {
-    return {
-      success: false,
-      message: error.message || "Failed to remove videos from space",
-    };
+  } catch (error: unknown) {
+    const message =
+      error instanceof Error ? error.message : "Failed to remove videos from space";
+    return { success: false, message };
   }
```

apps/web/actions/spaces/add-videos.ts (1)
56-59: Use a Set for O(1) membership checks.

Minor but clearer and faster.

```diff
-  const newVideoIds = validVideoIds.filter(
-    (id) => !existingVideoIds.includes(id),
-  );
+  const existingSet = new Set(existingVideoIds);
+  const newVideoIds = validVideoIds.filter((id) => !existingSet.has(id));
```

apps/web/actions/organizations/add-videos.ts (1)
86-89: Use a Set for O(1) membership checks.

Small readability/perf win.

```diff
-  const newVideoIds = validVideoIds.filter(
-    (id) => !existingVideoIds.includes(id),
-  );
+  const existingSet = new Set(existingVideoIds);
+  const newVideoIds = validVideoIds.filter((id) => !existingSet.has(id));
```

packages/database/auth/auth-options.ts (3)
33-36: Gate NextAuth debug logs by environment.

Avoid verbose/possibly sensitive debug logs in production.

```diff
-  debug: true,
+  debug: serverEnv().NODE_ENV !== "production",
```
98-99: Align dynamic import with your specifier strategy.

If you're standardizing on `.ts` specifiers, this dynamic import should match to avoid resolution inconsistencies.

```diff
-  const { OTPEmail } = await import("../emails/otp-email");
+  const { OTPEmail } = await import("../emails/otp-email.ts");
```

If you adopt NodeNext `.js` specifiers instead, use `../emails/otp-email.js`.
9-9: Avoid deep import path for Provider type.

Prefer the stable entrypoint re-export.

```diff
-import type { Provider } from "next-auth/providers/index";
+import type { Provider } from "next-auth/providers";
```

apps/web/app/api/upload/[...route]/multipart.ts (3)
91-98: Harden request validation (partNumber, md5Sum, sizes, fps, etc.).

Tighten Zod schemas to reject malformed inputs early (negative part numbers, non-base64 MD5, etc.).

Apply:

```diff
- z
-   .object({
-     uploadId: z.string(),
-     partNumber: z.number(),
-     md5Sum: z.string(),
-   })
+ z
+   .object({
+     uploadId: z.string(),
+     partNumber: z.number().int().min(1),
+     md5Sum: z
+       .string()
+       // Base64 MD5 is 24 chars with padding; adjust if your S3 client expects hex
+       .regex(/^[A-Za-z0-9+/=]{24}$/, "md5Sum must be base64-encoded MD5"),
+   })
@@
-  .object({
-    uploadId: z.string(),
-    parts: z.array(
-      z.object({
-        partNumber: z.number(),
-        etag: z.string(),
-        size: z.number(),
-      }),
-    ),
-    durationInSecs: stringOrNumberOptional,
-    width: stringOrNumberOptional,
-    height: stringOrNumberOptional,
-    fps: stringOrNumberOptional,
-  })
+  .object({
+    uploadId: z.string(),
+    parts: z.array(
+      z.object({
+        partNumber: z.number().int().min(1),
+        etag: z.string().min(1),
+        size: z.number().int().positive(),
+      }),
+    ).min(1),
+    durationInSecs: stringOrNumberOptional,
+    width: stringOrNumberOptional,
+    height: stringOrNumberOptional,
+    fps: stringOrNumberOptional,
+  })
```

Also applies to: 156-170
279-290: Coerce numeric metadata before persisting.

`updateIfDefined` may receive strings from the schema; coerce to numbers to avoid silently writing NaN.

Apply:

```diff
-  .set({
-    duration: updateIfDefined(body.durationInSecs, videos.duration),
-    width: updateIfDefined(body.width, videos.width),
-    height: updateIfDefined(body.height, videos.height),
-    fps: updateIfDefined(body.fps, videos.fps),
-  })
+  .set({
+    duration: updateIfDefined(
+      body.durationInSecs != null ? Number(body.durationInSecs) : undefined,
+      videos.duration,
+    ),
+    width: updateIfDefined(
+      body.width != null ? Number(body.width) : undefined,
+      videos.width,
+    ),
+    height: updateIfDefined(
+      body.height != null ? Number(body.height) : undefined,
+      videos.height,
+    ),
+    fps: updateIfDefined(
+      body.fps != null ? Number(body.fps) : undefined,
+      videos.fps,
+    ),
+  })
```

If `stringOrNumberOptional` already coerces to numbers, feel free to skip.
251-258: Copy-to-self "metadata fix" can be avoided in many cases.

If your S3 provider supports setting `ContentType` on multipart complete (some SDKs allow this via object tagging/metadata), prefer that over a second copy to reduce cost/latency. If not supported, keep this with an explanatory comment.

apps/web/app/s/[videoId]/_components/Sidebar.tsx (1)
97-101: Tabs list doesn't include "settings", but the switch has a case for it.

Either add "Settings" to the tabs or remove the dead case.

Apply:

```diff
 const tabs = [
   { id: "activity", label: "Comments" },
   { id: "summary", label: "Summary" },
   { id: "transcript", label: "Transcript" },
+  { id: "settings", label: "Settings" },
 ];
```

Also applies to: 151-154
apps/web/actions/caps/share.ts (2)
51-60: Param naming is misleading: spaceIds also accepts organization IDs.

Code treats some `spaceIds` as org IDs (direct org shares). Consider renaming to `targetIds`, or splitting into `organizationIds` and `spaceIds` for clarity.

If the API surface is already public, add a deprecation path before renaming.
24-28: Validate inputs in the server action.

Consider a lightweight Zod schema for `ShareCapParams` to reject invalid IDs early.

apps/web/actions/videos/password.ts (1)
95-103: Avoid detailed error messages for the password flow.

Returning "No password set" vs "Invalid password" can enable enumeration. Consider a unified user-facing error and log specifics server-side.
apps/web/app/api/thumbnail/route.ts (4)
41-49: Use 404 for "not found", not 401.

401 implies auth is required; these are existence checks.

```diff
-  status: 401,
+  status: 404,
```

Apply to both responses.

Also applies to: 53-59
83-85: Avoid `any` for S3 listing items.

Use a minimal structural type.

```diff
-  const thumbnailKey = contents.find((item: any) =>
+  const thumbnailKey = contents.find((item: { Key?: string }) =>
     item.Key?.endsWith("screen-capture.jpg"),
   )?.Key;
```
67-73: Prefer a single code path via the bucket provider.

Relying on legacy `video.awsBucket` creates drift. Consider always computing the key using `video.ownerId/video.id/...` and signing it with `createBucketProvider(result.bucket)`; fall back to the env default when `result.bucket` is null. This removes special-casing and avoids public URL mismatches.
77-86: Skip listObjects; sign the known thumbnail key directly.

You know the exact key structure; listing is slower and may hit large prefixes.

```diff
-  const listResponse = await bucketProvider.listObjects({
-    prefix: prefix,
-  });
-  const contents = listResponse.Contents || [];
-  const thumbnailKey = contents.find((item: { Key?: string }) =>
-    item.Key?.endsWith("screen-capture.jpg"),
-  )?.Key;
-  if (!thumbnailKey) {
+  const thumbnailKey = `${prefix}screenshot/screen-capture.jpg`;
+  // Optionally: perform a cheap HEAD if your provider supports it to confirm existence.
+  const exists = await bucketProvider.headObject?.(thumbnailKey).catch(() => false);
+  if (exists === false) {
     return new Response(
       JSON.stringify({
         error: true,
         message: "No thumbnail found for this video",
       }),
       {
         status: 404,
         headers: getHeaders(origin),
       },
     );
   }
-  thumbnailUrl = await bucketProvider.getSignedObjectUrl(thumbnailKey);
+  thumbnailUrl = await bucketProvider.getSignedObjectUrl(thumbnailKey);
```

Also applies to: 100-101
apps/web/actions/videos/edit-date.ts (1)
35-41: Validate date input to avoid `Invalid Date` runtime errors.

`toISOString()` will throw on invalid dates.

```diff
 const newDate = new Date(date);
 const currentDate = new Date();
+if (Number.isNaN(newDate.getTime())) {
+  throw new Error("Invalid date");
+}
+
 // Prevent future dates
 if (newDate > currentDate) {
   throw new Error("Cannot set a date in the future");
 }
```

apps/web/app/(org)/dashboard/caps/components/PasswordDialog.tsx (1)
106-114: Disable "Save" when the password is empty.

Prevents accidental empty passwords; "Remove" already handles clearing.

```diff
 <Button
   size="sm"
   variant="dark"
   onClick={() => updatePassword.mutate()}
   spinner={pending}
-  disabled={pending}
+  disabled={pending || !password.trim()}
 >
   Save
 </Button>
```

apps/web/actions/videos/edit-title.ts (1)
13-15: Trim and validate the title before saving.

Avoids saving whitespace-only names.

```diff
-if (!user || !title || !videoId) {
+const newTitle = title?.trim();
+if (!user || !newTitle || !videoId) {
   throw new Error("Missing required data for updating video title");
 }
@@
-  .set({ name: title })
+  .set({ name: newTitle })
```

Also applies to: 36-36
apps/web/actions/videos/edit-transcript.ts (2)
56-66: VTT edit only replaces the first text line; multi-line cues aren't handled.

Current logic sets `isNextLineText` for a single line. Multi-line cue text (common in VTT) will be partially updated, leaving stale lines. Replace the "next line only" approach with "replace until blank line":

```diff
@@
-  if (trimmedLine.includes("-->")) {
-    updatedLines.push(line);
-    isNextLineText = true;
-    continue;
-  }
+  if (trimmedLine.includes("-->")) {
+    updatedLines.push(line);
+    // If this is the target cue, replace the whole text block until the next blank line
+    if (currentEntryId === entryId && !foundEntry) {
+      updatedLines.push(newText.trim());
+      foundEntry = true;
+      isNextLineText = true; // consume lines until blank line
+    } else {
+      isNextLineText = true;
+    }
+    continue;
+  }
@@
-  if (currentEntryId === entryId && isNextLineText && !foundEntry) {
-    updatedLines.push(newText.trim());
-    foundEntry = true;
-    isNextLineText = false;
-  } else {
-    updatedLines.push(line);
-    if (isNextLineText) {
-      isNextLineText = false;
-    }
-  }
+  if (isNextLineText) {
+    // consume original text lines until the blank separator
+    if (!trimmedLine) {
+      updatedLines.push(line);
+      isNextLineText = false;
+    }
+    // skip original text lines
+    continue;
+  }
+  updatedLines.push(line);
```

Also applies to: 88-149
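The "replace until blank line" strategy can also be sketched as a standalone helper; the function name and cue-lookup-by-identifier are illustrative rather than this file's actual code, but the blank-line-delimited cue layout follows the WebVTT format:

```typescript
// Replace the entire text block of one cue (identified by its id line),
// consuming original payload lines up to the blank separator.
function replaceCueText(vtt: string, entryId: string, newText: string): string {
	const lines = vtt.split("\n");
	const out: string[] = [];
	let i = 0;
	while (i < lines.length) {
		const line = lines[i];
		out.push(line);
		// A cue is: optional identifier line, then a timing line containing "-->".
		if (line.trim() === entryId && lines[i + 1]?.includes("-->")) {
			out.push(lines[i + 1]); // keep the timing line
			out.push(newText.trim()); // new payload replaces the whole text block
			i += 2;
			// Skip every original payload line up to the blank separator.
			while (i < lines.length && lines[i].trim() !== "") i++;
			continue;
		}
		i++;
	}
	return out.join("\n");
}
```

With this shape, a cue whose original payload spans several lines comes out with exactly one (or however many) replacement lines and no stale remainder.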
58-60: Confirm object encoding from the S3 provider.

If `bucket.getObject()` can return a `Buffer`, ensure it's decoded (`toString("utf8")`) before `split("\n")` to avoid runtime errors.

apps/web/app/(org)/dashboard/caps/components/SharingDialog.tsx (1)
164-171: Nit: `useCallback` dependency contains a freshly created Set each render.

`sharedSpaceIds` is recreated on every render, making the memoization moot. Either compute it inline or memoize the Set.

apps/web/app/(org)/dashboard/caps/components/UploadCapButton.tsx (2)
14-15: Avoid pulling runtime code into the client bundle for types.

Change to a type-only import to prevent bundling `@cap/web-domain` at runtime in a client component.

```diff
-import { Folder } from "@cap/web-domain";
+import type { Folder } from "@cap/web-domain";
```
285-293: Revoke the generated thumbnail object URL to avoid leaks.

`URL.createObjectURL(thumbnailBlob)` isn't revoked. Either:

- Return a revoker via `onStart(id, url)` and revoke in the parent when it's no longer needed, or
- Track it locally and revoke in `finally`.

Example (local revoke):

```diff
 const thumbnailUrl = thumbnailBlob ? URL.createObjectURL(thumbnailBlob) : undefined;
 try {
   onStart?.(uploadId, thumbnailUrl);
   ...
 } finally {
+  if (thumbnailUrl) URL.revokeObjectURL(thumbnailUrl);
 }
```

apps/web/app/api/desktop/[...route]/video.ts (1)
30-34: Coerce numeric query params to numbers to avoid string writes to numeric columns.

`durationInSecs`/`width`/`height`/`fps` may be strings. Prefer `z.coerce.number().optional()`:

```diff
-  durationInSecs: stringOrNumberOptional,
-  width: stringOrNumberOptional,
-  height: stringOrNumberOptional,
-  fps: stringOrNumberOptional,
+  durationInSecs: z.coerce.number().optional(),
+  width: z.coerce.number().optional(),
+  height: z.coerce.number().optional(),
+  fps: z.coerce.number().optional(),
```

packages/database/package.json (1)
31-31: Avoid depending on Next.js inside the database package.

`"next": "^14"` in a DB package couples layers and inflates installs. Prefer a devDependency or peerDependency if it's only needed for types.

```diff
-  "next": "^14",
+  "next": "^14", // move to "devDependencies" or "peerDependencies" if only used for types
```

apps/web/actions/videos/get-og-image.tsx (2)
70-74: Don't log presigned URLs or detailed errors.

Presigned S3 URLs may leak in logs. Log a sanitized message.

```diff
-  console.error("Error generating URL for screenshot:", error);
+  console.error("Error generating screenshot URL");
```
132-144: OG image remote `src` can be flaky; consider embedding as a data URL.

Satori/OG sometimes struggles fetching remote images. Pre-fetch and embed as a data URL for reliability.

```diff
-  {screenshotUrl && (
-    <img style={{ ... }} src={screenshotUrl} />
-  )}
+  {screenshotUrl && (() => {
+    /* prefetch outside and pass in as screenshotDataUrl */
+    return <img style={{ ... }} src={screenshotDataUrl} />;
+  })()}
```

Support code (before returning `ImageResponse`):

```ts
let screenshotDataUrl: string | null = null;
if (screenshotUrl) {
  try {
    const res = await fetch(screenshotUrl);
    const buf = Buffer.from(await res.arrayBuffer());
    screenshotDataUrl = `data:image/jpeg;base64,${buf.toString("base64")}`;
  } catch {}
}
```

apps/web/app/api/loom/[...route]/route.ts (1)
33-64: Return a typed acknowledgment; don't silently 204.

Define a response schema and return an enqueue acknowledgment (and optionally a workflow id).

```diff
 class Api extends HttpApi.make("CapWebApi")
   .add(
     HttpApiGroup.make("root").add(
       HttpApiEndpoint.post("loom")`/import-video`
         ...
+        .setResponse(
+          Schema.Struct({ enqueued: Schema.Boolean, workflowId: Schema.optional(Schema.String) }),
+        )
         ...
     ),
   )
 ...
 handlers.handle(
   "loom",
   Effect.fn(
     function* ({ payload }) {
       const { workflows } = yield* Workflows.HttpClient;
       const user = yield* CurrentUser;
-      yield* workflows.LoomImportVideo({ payload: { ... } });
+      const wf = yield* workflows.LoomImportVideo({ payload: { ... } });
+      return { enqueued: true, workflowId: wf?.id ?? undefined };
```

packages/database/schema.ts (3)
245-246: videos.orgId is nullable but marked TODO; workflows depend on it — make it non-nullable.

Nullable orgId complicates joins and idempotency (see the imported_videos PK). Consider enforcing NOT NULL with a backfill/migration.

```diff
-  // TODO: make this non-null
-  orgId: nanoIdNullable("orgId"),
+  orgId: nanoId("orgId").notNull(),
```

Plan the migration: backfill orgId for existing rows, then alter the column to NOT NULL.
264-265: The type of videos.folderId should include null.

The column is nullable (`nanoIdNullable`) but `$type` omits null, causing unsound typings.

```diff
-  folderId: nanoIdNullable("folderId").$type<Folder.FolderId>(),
+  folderId: nanoIdNullable("folderId").$type<Folder.FolderId | null>(),
```
281-286: Add an index on orgId (videos).

Given the new orgId usage, add an index for common filters/joins.

```diff
 (table) => [
   index("id_idx").on(table.id),
   index("owner_id_idx").on(table.ownerId),
   index("is_public_idx").on(table.public),
   index("folder_id_idx").on(table.folderId),
+  index("org_id_idx").on(table.orgId),
 ],
```
apps/web/package.json (1)
125-126: Remove unused Babel devDependencies or add a Babel config.

No Babel config found; the only references are the devDeps in apps/web/package.json and commented lines in apps/web/next.config.mjs — remove these devDeps to avoid bloat, or add a .babelrc/babel.config.js if you intend to use Babel.

Locations: apps/web/package.json (lines 125–126, 139); apps/web/next.config.mjs (commented refs at ~118, 122–123).

```diff
 "devDependencies": {
-  "@babel/plugin-transform-private-methods": "^7.27.1",
-  "@babel/plugin-transform-private-property-in-object": "^7.27.1",
-  "babel-loader": "^10.0.0",
```

packages/web-domain/src/Loom.ts (3)
6-9: Export the error type for reuse.

Without exporting `LoomApiError`, downstream code can't pattern-match on or reuse the error.

```diff
-class LoomApiError extends Schema.TaggedError<LoomApiError>("LoomApiError")(
+export class LoomApiError extends Schema.TaggedError<LoomApiError>("LoomApiError")(
   "LoomApiError",
   { cause: Schema.Unknown },
 ) {}
```
11-16: Export the union to keep client/server error types consistent.

This enables consumers to reference the same error shape as the workflow.

```diff
-const LoomImportVideoError = Schema.Union(
+export const LoomImportVideoError = Schema.Union(
   // DatabaseError,
   Video.NotFoundError,
   // S3Error,
   LoomApiError,
 );
```
1-44: Rename the file to kebab-case per guidelines: `Loom.ts` → `loom.ts`.

packages/web-backend/src/Videos/VideosRepo.ts (1)
5-5: Remove the unused type import after the refactor.

`MySqlInsertBase` isn't needed.

```diff
-import type { MySqlInsertBase } from "drizzle-orm/mysql-core";
```
packages/web-backend/src/Workflows.ts (1)
1-32: Rename the file to kebab-case per guidelines: `Workflows.ts` → `workflows.ts`.

packages/web-backend/src/Loom/ImportVideo.ts (1)
1-161: Rename the file to kebab-case per guidelines: `ImportVideo.ts` → `import-video.ts`.
```ts
import { db } from "@cap/database";
import { getCurrentUser } from "@cap/database/auth/session";
import { videos } from "@cap/database/schema";
```
Bug: ignoring the per-video bucket causes broken URLs in multi-bucket setups.

Join `s3Buckets` and pass the matched bucket to `createBucketProvider`, like other actions.
-import { videos } from "@cap/database/schema";
+import { s3Buckets, videos } from "@cap/database/schema";
@@
-const query = await db().select().from(videos).where(eq(videos.id, videoId));
+const query = await db()
+ .select({ video: videos, bucket: s3Buckets })
+ .from(videos)
+ .leftJoin(s3Buckets, eq(videos.bucket, s3Buckets.id))
+ .where(eq(videos.id, videoId));
@@
-const video = query[0];
+const { video, bucket } = query[0];
@@
-const bucketProvider = await createBucketProvider();
-const videoKey = `${video.ownerId}/${videoId}/result.mp4`;
+const bucketProvider = await createBucketProvider(bucket);
+const videoKey = `${video.ownerId}/${video.id}/result.mp4`;
Also applies to: 18-19, 24-25, 33-37
🤖 Prompt for AI Agents
In apps/web/actions/videos/download.ts around lines 5 and additionally 18-19,
24-25, 33-37, the code ignores per-video bucket selection causing broken URLs in
multi-bucket setups; update the logic to join s3Buckets and locate the specific
bucket matching the video's bucketId, then pass that matched bucket object into
createBucketProvider (same pattern as other actions) so the provider constructs
correct URLs; ensure you handle the case where no matching bucket is found
(error or fallback) and use the matched bucket for all createBucketProvider
calls in the listed lines.
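The fallback rule described above (use the video's own bucket row when the left join found one, else the env-default bucket) can be sketched as a tiny helper; the row types here are simplified stand-ins for the actual Drizzle rows:

```typescript
// Simplified stand-ins for the joined Drizzle rows.
type BucketRow = { id: string; name: string };
type JoinedRow = { video: { id: string }; bucket: BucketRow | null };

// Pick the per-video bucket when present, otherwise the default bucket,
// so every createBucketProvider call sees a concrete bucket.
function resolveBucket(joined: JoinedRow, defaultBucket: BucketRow): BucketRow {
	return joined.bucket ?? defaultBucket;
}
```

Centralizing the choice in one helper keeps every signed-URL call on the same code path, which is what prevents the multi-bucket URL drift.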
```tsx
if (!video || video.public === false) {
	return new ImageResponse(
		(
			<div
				style={{
					width: "100%",
					height: "100%",
					display: "flex",
					alignItems: "center",
					justifyContent: "center",
					background:
						"radial-gradient(90.01% 80.01% at 53.53% 49.99%,#d3e5ff 30.65%,#4785ff 88.48%,#fff 100%)",
				}}
			>
				<h1 style={{ fontSize: "40px" }}>Video or bucket not found</h1>
			</div>
		),
```
Handle missing bucket before using it.
You build the "Video or bucket not found" UI but only check video/public. If bucket is null, createBucketProvider will throw.
- if (!video || video.public === false) {
+ if (!video || video.public === false || !videoData.bucket) {
🤖 Prompt for AI Agents
In apps/web/actions/videos/get-og-image.tsx around lines 41 to 57, the code
returns the "Video or bucket not found" UI only when video is missing or not
public but still proceeds to use bucket (calling createBucketProvider) which
will throw if bucket is null; update the early-exit condition to also check for
a missing bucket (e.g., if (!video || video.public === false || !bucket) return
the ImageResponse) or move the createBucketProvider call after validating bucket
exists, ensuring you never call createBucketProvider with a null/undefined
bucket and keep the same fallback UI when bucket is absent.
```diff
 import { Tooltip } from "@/components/Tooltip";
 import { VideoThumbnail } from "@/components/VideoThumbnail";
-import type { Video } from "./AddVideosDialogBase";
+import type { VideoData } from "./AddVideosDialogBase";
```
🛠️ Refactor suggestion
🧩 Analysis chain
Avoid importing types from a component file; extract them to a shared types module.

Importing `VideoData` from `AddVideosDialogBase` creates a type-only cycle across these three components and risks future runtime cycles. Extract `VideoData` into a colocated `video-types.ts` (or a shared package) and import it in all consumers.

Run this to find duplicate/conflicting `VideoData` definitions:
🏁 Script executed:
#!/bin/bash
rg -nP --type=ts --hidden -C2 '(^|\s)(interface|type)\s+VideoData\b'
Length of output: 845
Extract VideoData to a shared type module and update imports.

Duplicate `VideoData` found in:

- apps/web/app/(org)/dashboard/caps/Caps.tsx
- apps/web/app/(org)/dashboard/spaces/[spaceId]/components/AddVideosDialogBase.tsx

Actionable steps:

- Create a single source: apps/web/app/(org)/dashboard/video-types.ts exporting `export type VideoData = { id: Video.VideoId; ownerId: string }`.
- Remove the local `VideoData` definitions from the two files above.
- Replace local imports with the shared type (use `import type`):
  - apps/web/app/(org)/dashboard/spaces/[spaceId]/components/VideoCard.tsx — change `import type { VideoData } from "./AddVideosDialogBase";` to `import type { VideoData } from "../../../video-types";`
  - apps/web/app/(org)/dashboard/spaces/[spaceId]/components/AddVideosDialogBase.tsx — remove `export interface VideoData` and add `import type { VideoData } from "../../../video-types";`
  - apps/web/app/(org)/dashboard/caps/Caps.tsx — remove the local `export type VideoData` and add `import type { VideoData } from "../video-types";`
- Run the TypeScript build/tests to confirm no import/runtime cycles remain.
🤖 Prompt for AI Agents
In apps/web/app/(org)/dashboard/spaces/[spaceId]/components/VideoCard.tsx around
line 9, the file imports a locally defined VideoData from AddVideosDialogBase
causing duplicate type definitions; create a shared type file at
apps/web/app/(org)/dashboard/video-types.ts exporting "export type VideoData = {
id: Video.VideoId; ownerId: string }", remove the local VideoData definitions
from AddVideosDialogBase.tsx and Caps.tsx, update imports to use the shared type
(VideoCard.tsx: change import to import type { VideoData } from
"../../../video-types"; AddVideosDialogBase.tsx: remove its export interface and
add import type { VideoData } from "../../../video-types"; Caps.tsx: remove its
local type and add import type { VideoData } from "../video-types"; then run the
TypeScript build/tests to ensure no import/runtime cycles remain.
```diff
 videos: VideoData[];
 selectedVideos: string[];
-handleVideoToggle: (id: string) => void;
-entityVideoIds: string[];
+handleVideoToggle: (id: Video.VideoId) => void;
+entityVideoIds: Video.VideoId[];
 height?: number;
```
Fix type mismatch: selectedVideos should be Video.VideoId[].

`selectedVideos` is `string[]` while `video.id` is `Video.VideoId`. This will either fail type-checking (with branded IDs) or silently weaken type guarantees. Align the prop type.

Apply:
interface VirtualizedVideoGridProps {
- videos: VideoData[];
- selectedVideos: string[];
+ videos: VideoData[];
+ selectedVideos: Video.VideoId[];
handleVideoToggle: (id: Video.VideoId) => void;
entityVideoIds: Video.VideoId[];
height?: number;
columnCount?: number;
rowHeight?: number;
}
Also applies to: 90-93
🤖 Prompt for AI Agents
In
apps/web/app/(org)/dashboard/spaces/[spaceId]/components/VirtualizedVideoGrid.tsx
around lines 8-12 (and similarly at lines 90-93), the prop selectedVideos is
typed as string[] but should be Video.VideoId[] to match video.id and preserve
branded ID typing; change the prop type declarations and any related usages/prop
forwarding to use Video.VideoId[] (adjust imports/types if necessary) and update
any places that construct or compare selected IDs to ensure they use the
Video.VideoId type rather than plain string.
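For reference, a minimal sketch of the branded-ID pattern this comment relies on — the actual brand lives in `@cap/web-domain`, so the `Brand` helper and `asVideoId` constructor below are illustrative only:

```typescript
// A brand attaches a phantom tag to a base type, so plain strings
// can't silently flow into VideoId[] positions.
type Brand<T, B extends string> = T & { readonly __brand: B };
type VideoId = Brand<string, "VideoId">;

// Constructing a VideoId requires an explicit cast at the boundary.
const asVideoId = (raw: string): VideoId => raw as VideoId;

// A toggle like handleVideoToggle, typed against the branded ID.
function toggle(selected: VideoId[], id: VideoId): VideoId[] {
	return selected.includes(id)
		? selected.filter((v) => v !== id)
		: [...selected, id];
}
```

With `selectedVideos: VideoId[]`, calling `toggle(selectedVideos, "raw string")` is a compile error, which is exactly the guarantee the prop-type fix restores.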
```ts
.setPayload(
	Schema.Struct({
		loom: Schema.Struct({
			downloadUrl: Schema.String,
			videoId: Schema.String,
		}),
	}),
)
.middleware(HttpAuthMiddleware)
.addError(HttpApiError.InternalServerError),
```
Validate downloadUrl host to prevent SSRF.
You pass an arbitrary URL to a workflow that downloads it. Restrict to Loom domains and HTTPS.
HttpApiEndpoint.post("loom")`/import-video`
.setPayload(
Schema.Struct({
loom: Schema.Struct({
downloadUrl: Schema.String,
videoId: Schema.String,
+ userId: Schema.String,
+ orgId: Schema.String,
}),
}),
)
.middleware(HttpAuthMiddleware)
- .addError(HttpApiError.InternalServerError),
+ .addError(HttpApiError.BadRequest)
+ .addError(HttpApiError.InternalServerError),
Add validation in handler:
handlers.handle(
"loom",
Effect.fn(
function* ({ payload }) {
+ const url = new URL(payload.loom.downloadUrl);
+ const allowed = ["loom.com", "cdn.loom.com"];
+ const ok = allowed.some(
+ (d) => url.hostname === d || url.hostname.endsWith(`.${d}`),
+ );
+ if (!ok || url.protocol !== "https:") {
+ return new HttpApiError.BadRequest();
+ }
Also applies to: 66-69
🤖 Prompt for AI Agents
In apps/web/app/api/loom/[...route]/route.ts around lines 19-29 (and similarly
update lines ~66-69), the Schema currently accepts any downloadUrl which can
enable SSRF; add server-side validation in the request handler after parsing: 1)
parse the downloadUrl with URL constructor, 2) require protocol === 'https:', 3)
allow only hostnames that match Loom domains (e.g., exact hostnames and/or a
whitelist like loom.com, cdn.loom.com, *.loom.com — use a strict whitelist or
exact-match list), 4) reject requests with invalid URLs or non-allowed hosts by
returning a 4xx error (e.g., BadRequest) before calling the workflow. Ensure the
same validation is applied to the other handler block at lines ~66-69.
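The same allow-list check can be extracted as a standalone predicate; the allowed-domain list is an assumption about which hosts Loom serves downloads from, so adjust it to the hosts you actually see:

```typescript
// Assumed Loom download hosts; exact domains should be verified.
const ALLOWED_DOMAINS = ["loom.com", "cdn.loom.com"];

function isAllowedLoomUrl(raw: string): boolean {
	let url: URL;
	try {
		url = new URL(raw);
	} catch {
		return false; // unparseable input is rejected, not thrown
	}
	if (url.protocol !== "https:") return false;
	// Exact match or a true subdomain; endsWith(".domain") avoids
	// suffix tricks like "notloom.com".
	return ALLOWED_DOMAINS.some(
		(d) => url.hostname === d || url.hostname.endsWith(`.${d}`),
	);
}
```

Keeping the check as a pure function makes it easy to unit-test the SSRF edge cases (bad protocol, look-alike hosts, unparseable strings) separately from the Effect handler.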
```ts
export const LoomImportVideo = Workflow.make({
	name: "LoomImportVideo",
	payload: {
		cap: Schema.Struct({
			userId: Schema.String,
			orgId: Schema.String,
		}),
		loom: Schema.Struct({
			userId: Schema.String,
			orgId: Schema.String,
			video: Schema.Struct({
				id: Schema.String,
				name: Schema.String,
				downloadUrl: Schema.String,
				width: Schema.OptionFromNullOr(Schema.Number),
				height: Schema.OptionFromNullOr(Schema.Number),
				fps: Schema.OptionFromNullOr(Schema.Number),
				durationSecs: Schema.OptionFromNullOr(Schema.Number),
			}),
		}),
		attempt: Schema.optional(Schema.Number),
	},
	error: LoomImportVideoError,
	idempotencyKey: (p) =>
		`${p.cap.userId}-${p.loom.orgId}-${p.loom.video.id}-${p.attempt ?? 0}`,
});
```
🛠️ Refactor suggestion
🧩 Analysis chain
Single source of truth for the workflow.

Ensure only one `Workflow.make({ name: "LoomImportVideo", ... })` exists (in the domain package). The backend should only provide `.toLayer(...)`.
🏁 Script executed:
#!/bin/bash
# Find duplicates of the Loom workflow / error across the repo
rg -nP --type=ts -C2 'Workflow\.make\(\s*{\s*name:\s*"LoomImportVideo"'
rg -nP --type=ts -C2 'class\s+LoomApiError\b'
rg -nP --type=ts -C2 'Schema\.Union\([^)]*LoomApiError'
Length of output: 750
Consolidate Loom workflow and error into web-domain (single source of truth).
- Move the Workflow.make("LoomImportVideo") definition out of packages/web-backend/src/Loom/ImportVideo.ts into packages/web-domain and have the backend only provide .toLayer / import the workflow.
- Remove the duplicate LoomApiError in packages/web-backend/src/Loom/ImportVideo.ts and import the canonical class from packages/web-domain/src/Loom.ts.
🤖 Prompt for AI Agents
In packages/web-backend/src/Loom/ImportVideo.ts around lines 22 to 47, the
Workflow.make("LoomImportVideo") and the duplicate LoomApiError must be removed
and replaced by the canonical definitions in packages/web-domain: move the
entire Workflow.make(...) payload and idempotencyKey definition into
packages/web-domain (e.g., packages/web-domain/src/Loom.ts) and export the
workflow factory from there; in this backend file import the workflow (or its
.toLayer) from packages/web-domain and import the single LoomApiError class from
packages/web-domain/src/Loom.ts instead of redefining it; ensure all imports and
usages (including the idempotencyKey composition) are updated to reference the
domain export and remove the duplicate class definition here.
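Wherever the definition ends up, retry deduplication hinges on the `idempotencyKey` shown in the workflow above. Extracted as a plain function (payload type trimmed to just the fields the key reads), it behaves like this:

```typescript
// Payload shape trimmed to the fields the key uses.
type LoomImportPayload = {
	cap: { userId: string; orgId: string };
	loom: { userId: string; orgId: string; video: { id: string } };
	attempt?: number;
};

// Same expression as the workflow's idempotencyKey: identical payloads
// (same user, Loom org, Loom video, attempt) dedupe to one run.
const idempotencyKey = (p: LoomImportPayload): string =>
	`${p.cap.userId}-${p.loom.orgId}-${p.loom.video.id}-${p.attempt ?? 0}`;
```

Note that bumping `attempt` deliberately changes the key, which is how a caller forces a re-import of the same Loom video.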
```ts
const videoId = yield* videos.create({
	ownerId: payload.cap.userId,
	orgId: Option.some(payload.cap.orgId),
	bucketId: customBucketId,
	source: { type: "desktopMP4" as const },
	name: payload.loom.video.name,
	duration: loomVideo.durationSecs,
	width: loomVideo.width,
	height: loomVideo.height,
	public: true,
	metadata: Option.none(),
	folderId: Option.none(),
	transcriptionStatus: Option.none(),
	importSource: new Video.ImportSource({
		source: "loom",
		id: loomVideo.id,
	}),
});
```
Map `videos.create` failures into the domain error.
This prevents backend-specific errors from leaking beyond the declared union.
- const videoId = yield* videos.create({
+ const videoId = yield* videos.create({
ownerId: payload.cap.userId,
orgId: Option.some(payload.cap.orgId),
bucketId: customBucketId,
source: { type: "desktopMP4" as const },
name: payload.loom.video.name,
duration: loomVideo.durationSecs,
width: loomVideo.width,
height: loomVideo.height,
public: true,
metadata: Option.none(),
folderId: Option.none(),
transcriptionStatus: Option.none(),
importSource: new Video.ImportSource({
source: "loom",
id: loomVideo.id,
}),
- });
+ }).pipe(Effect.catchAll((cause) => Effect.fail(new LoomApiError({ cause }))));
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
const videoId = yield* videos.create({
  ownerId: payload.cap.userId,
  orgId: Option.some(payload.cap.orgId),
  bucketId: customBucketId,
  source: { type: "desktopMP4" as const },
  name: payload.loom.video.name,
  duration: loomVideo.durationSecs,
  width: loomVideo.width,
  height: loomVideo.height,
  public: true,
  metadata: Option.none(),
  folderId: Option.none(),
  transcriptionStatus: Option.none(),
  importSource: new Video.ImportSource({
    source: "loom",
    id: loomVideo.id,
  }),
}).pipe(Effect.catchAll((cause) => Effect.fail(new LoomApiError({ cause }))));
```
```ts
return yield* Effect.gen(function* () {
  const s3Bucket = yield* S3BucketAccess;

  const resp = yield* http
    .get(payload.loom.video.downloadUrl)
    .pipe(Effect.catchAll((cause) => new LoomApiError({ cause })));
  const contentLength = Headers.get(
    resp.headers,
    "content-length",
  ).pipe(
    Option.map((v) => Number(v)),
    Option.getOrUndefined,
  );
  yield* Effect.log(`Downloading ${contentLength} bytes`);

  let downloadedBytes = 0;

  const key = source.getFileKey();

  yield* s3Bucket
    .putObject(
      key,
      resp.stream.pipe(
        Stream.tap((bytes) => {
          downloadedBytes += bytes.length;
          return Effect.void;
        }),
      ),
      { contentLength },
    )
    .pipe(
      Effect.race(
        // TODO: Connect this with upload progress
        Effect.repeat(
          Effect.gen(function* () {
            const bytes = yield* Effect.succeed(downloadedBytes);
            yield* Effect.log(`Downloaded ${bytes} bytes`);
          }),
          Schedule.forever.pipe(Schedule.delayed(() => "2 seconds")),
        ).pipe(Effect.delay("100 millis")),
      ),
    );

  yield* Effect.log(
    `Uploaded video for user '${payload.cap.userId}' at key '${key}'`,
  );

  return { fileKey: key };
}).pipe(Effect.provide(bucketProvider));
```
Harden the download: SSRF allowlist, timeout, size guard, proper failure mapping, content-type, and S3 error mapping.
Currently any URL can be fetched, there’s no timeout or max size, and HTTP/S3 failures aren’t mapped to the workflow error.
return yield* Effect.gen(function* () {
const s3Bucket = yield* S3BucketAccess;
- const resp = yield* http
- .get(payload.loom.video.downloadUrl)
- .pipe(Effect.catchAll((cause) => new LoomApiError({ cause })));
+ // SSRF guard: only allow Loom hosts
+ const url = new URL(payload.loom.video.downloadUrl);
+ if (!/(\.|^)loom\.com$/i.test(url.hostname)) {
+ return yield* Effect.fail(
+ new LoomApiError({ cause: new Error(`Disallowed host: ${url.hostname}`) }),
+ );
+ }
+
+ // Fetch with timeout and proper failure mapping
+ const resp = yield* http
+ .get(url.toString())
+ .pipe(
+ Effect.timeout("60 seconds"),
+ Effect.catchAll((cause) => Effect.fail(new LoomApiError({ cause }))),
+ );
+
const contentLength = Headers.get(
resp.headers,
"content-length",
).pipe(
Option.map((v) => Number(v)),
Option.getOrUndefined,
);
+ const contentType = Headers.get(resp.headers, "content-type").pipe(
+ Option.getOrUndefined,
+ );
yield* Effect.log(`Downloading ${contentLength} bytes`);
- let downloadedBytes = 0;
+ let downloadedBytes = 0;
+ const MAX_BYTES = 800 * 1024 * 1024; // 800 MiB cap
const key = source.getFileKey();
- yield* s3Bucket
+ yield* s3Bucket
.putObject(
key,
resp.stream.pipe(
Stream.tap((bytes) => {
downloadedBytes += bytes.length;
+ if (downloadedBytes > MAX_BYTES) {
+ return Effect.fail(
+ new LoomApiError({ cause: new Error("Download exceeds limit") }),
+ );
+ }
return Effect.void;
}),
),
- { contentLength },
+ { contentLength, contentType: contentType ?? "video/mp4" },
)
.pipe(
Effect.race(
// TODO: Connect this with upload progress
Effect.repeat(
Effect.gen(function* () {
const bytes = yield* Effect.succeed(downloadedBytes);
yield* Effect.log(`Downloaded ${bytes} bytes`);
}),
Schedule.forever.pipe(Schedule.delayed(() => "2 seconds")),
).pipe(Effect.delay("100 millis")),
),
+ Effect.catchAll((cause) => Effect.fail(new LoomApiError({ cause }))),
);
yield* Effect.log(
`Uploaded video for user '${payload.cap.userId}' at key '${key}'`,
);
return { fileKey: key };
}).pipe(Effect.provide(bucketProvider));
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
return yield* Effect.gen(function* () {
  const s3Bucket = yield* S3BucketAccess;
  // SSRF guard: only allow Loom hosts
  const url = new URL(payload.loom.video.downloadUrl);
  if (!/(\.|^)loom\.com$/i.test(url.hostname)) {
    return yield* Effect.fail(
      new LoomApiError({ cause: new Error(`Disallowed host: ${url.hostname}`) }),
    );
  }
  // Fetch with timeout and proper failure mapping
  const resp = yield* http
    .get(url.toString())
    .pipe(
      Effect.timeout("60 seconds"),
      Effect.catchAll((cause) => Effect.fail(new LoomApiError({ cause }))),
    );
  const contentLength = Headers.get(
    resp.headers,
    "content-length",
  ).pipe(
    Option.map((v) => Number(v)),
    Option.getOrUndefined,
  );
  const contentType = Headers.get(resp.headers, "content-type").pipe(
    Option.getOrUndefined,
  );
  yield* Effect.log(`Downloading ${contentLength} bytes`);
  let downloadedBytes = 0;
  const MAX_BYTES = 800 * 1024 * 1024; // 800 MiB cap
  const key = source.getFileKey();
  yield* s3Bucket
    .putObject(
      key,
      resp.stream.pipe(
        Stream.tap((bytes) => {
          downloadedBytes += bytes.length;
          if (downloadedBytes > MAX_BYTES) {
            return Effect.fail(
              new LoomApiError({ cause: new Error("Download exceeds limit") }),
            );
          }
          return Effect.void;
        }),
      ),
      { contentLength, contentType: contentType ?? "video/mp4" },
    )
    .pipe(
      Effect.race(
        // TODO: Connect this with upload progress
        Effect.repeat(
          Effect.gen(function* () {
            const bytes = yield* Effect.succeed(downloadedBytes);
            yield* Effect.log(`Downloaded ${bytes} bytes`);
          }),
          Schedule.forever.pipe(Schedule.delayed(() => "2 seconds")),
        ).pipe(Effect.delay("100 millis")),
      ),
      Effect.catchAll((cause) => Effect.fail(new LoomApiError({ cause }))),
    );
  yield* Effect.log(
    `Uploaded video for user '${payload.cap.userId}' at key '${key}'`,
  );
  return { fileKey: key };
}).pipe(Effect.provide(bucketProvider));
```
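The host allowlist and byte cap proposed above are pure logic and can be checked in isolation, outside the Effect pipeline. A minimal sketch, assuming the same `/(\.|^)loom\.com$/i` allowlist and an 800 MiB cap (helper names here are hypothetical):

```typescript
// Allowlist check: accepts loom.com and its subdomains only.
const ALLOWED_HOST = /(\.|^)loom\.com$/i;

function isAllowedLoomHost(rawUrl: string): boolean {
	return ALLOWED_HOST.test(new URL(rawUrl).hostname);
}

// Returns a per-download counter that throws once the cap is exceeded,
// mirroring the check inside Stream.tap.
function makeByteGuard(maxBytes: number = 800 * 1024 * 1024) {
	let downloaded = 0;
	return (chunk: Uint8Array): number => {
		downloaded += chunk.length;
		if (downloaded > maxBytes) throw new Error("Download exceeds limit");
		return downloaded;
	};
}

console.log(isAllowedLoomHost("https://cdn.loom.com/v.mp4")); // true
console.log(isAllowedLoomHost("https://evil-loom.com/v.mp4")); // false
console.log(isAllowedLoomHost("https://loom.com.evil.example/x")); // false
```

Note the anchoring: `(\.|^)` rejects suffix spoofs like `evil-loom.com`, and the trailing `$` rejects prefix spoofs like `loom.com.evil.example`.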
```ts
return Option.fromNullable(video).pipe(
  Option.map(
    (v) =>
      [
        Video.Video.decodeSync({
          ...v,
          bucketId: v.bucket,
          createdAt: v.createdAt.toISOString(),
          updatedAt: v.updatedAt.toISOString(),
          metadata: v.metadata as any,
        }),
        Option.fromNullable(video?.password),
      ] as const,
  ),
);
```
Don’t stringify dates; drop `as any` on metadata; use `v.password`.
`Schema.Date` expects `Date`. Also avoid `any`, and prefer the row in scope.
return Option.fromNullable(video).pipe(
Option.map(
(v) =>
[
Video.Video.decodeSync({
...v,
bucketId: v.bucket,
- createdAt: v.createdAt.toISOString(),
- updatedAt: v.updatedAt.toISOString(),
- metadata: v.metadata as any,
+ createdAt: v.createdAt,
+ updatedAt: v.updatedAt,
+ metadata: v.metadata,
}),
- Option.fromNullable(video?.password),
+ Option.fromNullable(v?.password),
] as const,
),
);
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
return Option.fromNullable(video).pipe(
  Option.map(
    (v) =>
      [
        Video.Video.decodeSync({
          ...v,
          bucketId: v.bucket,
          createdAt: v.createdAt,
          updatedAt: v.updatedAt,
          metadata: v.metadata,
        }),
        Option.fromNullable(v?.password),
      ] as const,
  ),
);
```
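To see why the `toISOString()` calls are the bug: a decoder that expects `Date` values accepts the raw `Date` from the row and rejects the string form. An illustrative stand-in for the behavior the review describes, not the actual effect `Schema` API:

```typescript
// Illustrative check: accepts valid Date instances, rejects everything
// else (including ISO strings). Name and behavior are hypothetical.
function decodeDateField(value: unknown): Date {
	if (value instanceof Date && !Number.isNaN(value.getTime())) return value;
	throw new Error(`expected Date, got ${typeof value}`);
}

const row = { createdAt: new Date("2024-01-02T03:04:05Z") };

decodeDateField(row.createdAt); // ok: the Date passes through unchanged
try {
	decodeDateField(row.createdAt.toISOString()); // a string is rejected
} catch (e) {
	console.log((e as Error).message); // "expected Date, got string"
}
```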
🤖 Prompt for AI Agents
In packages/web-backend/src/Videos/VideosRepo.ts around lines 30 to 44, the code
is converting dates to strings, casting metadata to any, and reading password
from the outer scope; change createdAt and updatedAt to pass the original Date
objects (do not call toISOString), remove the "as any" cast and pass v.metadata
with the correct type (or cast to the proper metadata type instead of any), and
use v.password (the row in scope) when building the Option for password; keep
bucketId: v.bucket as-is and ensure the types align with Video.Video.decodeSync.
```ts
const LoomImportVideoError = Schema.Union(
  // DatabaseError,
  Video.NotFoundError,
  // S3Error,
  LoomApiError,
);
```
Avoid defining a different error union in the backend; use this one canonically.
The backend currently re‑declares a broader union (adds DB/S3). Divergent unions will break RPC/HTTP serialization and client decoding.
I’ve proposed backend diffs below to import and map internal errors to this union.
Also applies to: 18-43
🤖 Prompt for AI Agents
In packages/web-domain/src/Loom.ts around lines 11-16 (and also affecting
18-43), you currently re-declare a different error union locally; replace that
by importing the canonical LoomImportVideoError union from the shared/domain
module and remove the local Schema.Union definition. Where internal errors
(DatabaseError, S3Error, etc.) are produced, map/convert them to the canonical
union variants before returning or serializing (e.g., translate DB/S3/internal
error cases into the appropriate LoomImportVideoError branch) so the RPC/HTTP
payload shape matches the client-visible union and preserves
serialization/decoding compatibility.
The latest updates on your projects. Learn more about Vercel for GitHub.

Force-pushed 1f347c8 to de8182a.
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (7)
apps/web/app/api/videos/[videoId]/retry-transcription/route.ts (1)

8-56: Convert this route to the project's HttpApi/HttpApiBuilder pattern and export the handler via apiToHandler(ApiLive).
The file currently exports a Next.js-style POST handler; change it to use HttpApi/HttpApiBuilder and export only the handler from apiToHandler(ApiLive).
Location: apps/web/app/api/videos/[videoId]/retry-transcription/route.ts
See examples: apps/web/app/api/video/delete/route.ts, apps/web/app/api/playlist/route.ts

apps/web/app/api/desktop/[...route]/video.ts (2)
82-97: Authorize by owner when reusing an existing videoId (avoid ID existence leak).
The lookup only filters by `videos.id`, so any user can learn whether an ID exists. Filter by both `id` and `ownerId`.

```diff
- const [video] = await db()
-   .select()
-   .from(videos)
-   .where(eq(videos.id, Video.VideoId.make(videoId)));
+ const [video] = await db()
+   .select()
+   .from(videos)
+   .where(
+     and(
+       eq(videos.id, Video.VideoId.make(videoId)),
+       eq(videos.ownerId, user.id),
+     ),
+   );
```
206-246: Fix auth gap in delete flow (ID-only lookup; cross-user progress row deletion possible).
- The initial select filters by `id` only, leaking existence of others’ videos.
- `await db().delete(videoUploads).where(eq(videoUploads.videoId, videoId))` will delete progress rows for another user’s video if a colliding ID is passed.

Filter by both `id` and `ownerId` up front to gate the whole flow.

```diff
- const [result] = await db()
-   .select({ video: videos, bucket: s3Buckets })
-   .from(videos)
-   .leftJoin(s3Buckets, eq(videos.bucket, s3Buckets.id))
-   .where(eq(videos.id, videoId));
+ const [result] = await db()
+   .select({ video: videos, bucket: s3Buckets })
+   .from(videos)
+   .leftJoin(s3Buckets, eq(videos.bucket, s3Buckets.id))
+   .where(and(eq(videos.id, videoId), eq(videos.ownerId, user.id)));
```

Optional: wrap the three mutations (delete from `videoUploads`, delete from `videos`, S3 delete) in a DB transaction + best-effort S3 cleanup to avoid partial deletions.

apps/web/actions/organizations/remove-videos.ts (1)
113-115: Fix revalidate targets for organization actions.
Both organization actions call revalidatePath(`/dashboard/spaces/${organizationId}`), which targets a space page (wrong). Update to revalidate the organization page (e.g. `/dashboard/organizations/${organizationId}`) or whichever route actually shows the organization's videos so the correct pages refresh.
Locations:
- apps/web/actions/organizations/add-videos.ts — revalidatePath(`/dashboard/spaces/${organizationId}`);
- apps/web/actions/organizations/remove-videos.ts — revalidatePath(`/dashboard/spaces/${organizationId}`);

apps/web/app/(org)/dashboard/spaces/[spaceId]/components/AddVideosDialogBase.tsx (3)
125-146: Avoid broad invalidations; update cache with setQueryData using mutation variables.
Invalidate only when necessary. Here, directly reconcile ["entity-video-ids", entityId] using toAdd/toRemove from the mutate variables for deterministic UI updates.
Apply this diff:

```diff
- onSuccess: async (result) => {
-   const { addResult, removeResult } = result || {};
-
-   // Invalidate both queries to ensure UI updates
-   await Promise.all([
-     queryClient.invalidateQueries({ queryKey: ["user-videos"] }),
-     queryClient.invalidateQueries({
-       queryKey: ["entity-video-ids", entityId],
-     }),
-   ]);
+ onSuccess: async (result, variables) => {
+   const { addResult, removeResult } = result || {};
+   const { toAdd = [], toRemove = [] } = variables ?? {};
+   queryClient.setQueryData<Video.VideoId[]>(
+     ["entity-video-ids", entityId],
+     (prev) => {
+       const base = prev ?? [];
+       let next = base;
+       if (addResult?.success && toAdd.length) {
+         next = Array.from(new Set([...base, ...toAdd]));
+       }
+       if (removeResult?.success && toRemove.length) {
+         next = next.filter((id) => !toRemove.includes(id));
+       }
+       return next;
+     },
+   );
```
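The reconciliation inside that `setQueryData` callback is pure and can be extracted and tested on its own. A sketch (function name hypothetical; the `addResult`/`removeResult` success gating is omitted for brevity):

```typescript
// Pure reconciliation of a cached id list: union in toAdd, then
// filter out toRemove, preserving insertion order and deduplicating.
function reconcileIds(
	prev: string[] | undefined,
	toAdd: string[],
	toRemove: string[],
): string[] {
	const base = prev ?? [];
	const withAdds = toAdd.length
		? Array.from(new Set([...base, ...toAdd]))
		: base;
	return toRemove.length
		? withAdds.filter((id) => !toRemove.includes(id))
		: withAdds;
}

console.log(reconcileIds(["a", "b"], ["b", "c"], ["a"])); // → ["b", "c"]
console.log(reconcileIds(undefined, ["x"], [])); // → ["x"]
```

Targeted updates like this keep the cache deterministic without refetching every list the mutation might have touched.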
153-153: Remove inline comments to comply with repo guideline (no comments in TS/TSX).
Strip these comments; self-explanatory code is required in this repo.
Apply this diff:

```diff
- // Tab state: 'all', 'added', or 'notAdded'
- // Memoize filtered videos for stable reference
- // Memoize handleVideoToggle for stable reference
- // To add: selected and not already in entity
- // To remove: selected and already in entity
- {/* Tabs for filtering */}
```

Also applies to: 156-156, 174-174, 185-185, 187-187, 213-213
16-16: Replace direct @tanstack/react-query hooks with useEffectQuery/useEffectMutation from "@/lib/EffectRuntime".
Replace `import { useMutation, useQuery, useQueryClient } from "@tanstack/react-query"` and migrate query/mutation usages in AddVideosDialogBase.tsx to `useEffectQuery`/`useEffectMutation` (use `withRpc` or `useRpcClient` for RPCs) following existing patterns in apps/web (e.g. ProgressCircle.tsx, Caps.tsx).
🧹 Nitpick comments (10)
apps/web/actions/video/upload.ts (1)

215-215: Avoid re-branding an already branded ID.
Only construct when missing; prevents redundant validation/casting.

```diff
-const idToUse = Video.VideoId.make(videoId || nanoId());
+const idToUse = videoId ?? Video.VideoId.make(nanoId());
```

apps/web/app/api/desktop/[...route]/video.ts (1)
234-239: Avoid `any` and guard undefined keys when deleting S3 objects.
Use a typed projection and skip entries without a `Key`.

```diff
- await bucket.deleteObjects(
-   listedObjects.Contents.map((content: any) => ({
-     Key: content.Key,
-   })),
- );
+ await bucket.deleteObjects(
+   listedObjects.Contents.flatMap(({ Key }: { Key?: string }) =>
+     Key ? [{ Key }] : [],
+   ),
+ );
```

apps/web/actions/organizations/remove-videos.ts (3)
83-111: Make delete + folderId nulling atomic (wrap in a single transaction).
Prevents partial state (e.g., delete succeeds but folder clear fails).
Apply this diff:

```diff
-await db()
-  .delete(sharedVideos)
-  .where(
-    and(
-      eq(sharedVideos.organizationId, organizationId),
-      inArray(sharedVideos.videoId, existingVideoIds),
-    ),
-  );
-
-// Clear folderId for videos that are being removed from the organization and are currently in folders within that organization
-// First, get all folder IDs that belong to this organization
-const organizationFolders = await db()
-  .select({ id: folders.id })
-  .from(folders)
-  .where(eq(folders.organizationId, organizationId));
-
-const organizationFolderIds = organizationFolders.map((f) => f.id);
-
-if (organizationFolderIds.length > 0) {
-  await db()
-    .update(videos)
-    .set({ folderId: null })
-    .where(
-      and(
-        inArray(videos.id, existingVideoIds),
-        inArray(videos.folderId, organizationFolderIds),
-      ),
-    );
-}
+await db().transaction(async (tx) => {
+  await tx
+    .delete(sharedVideos)
+    .where(
+      and(
+        eq(sharedVideos.organizationId, organizationId),
+        inArray(sharedVideos.videoId, existingVideoIds),
+      ),
+    );
+
+  const organizationFolders = await tx
+    .select({ id: folders.id })
+    .from(folders)
+    .where(eq(folders.organizationId, organizationId));
+
+  const organizationFolderIds = organizationFolders.map((f) => f.id);
+
+  if (organizationFolderIds.length > 0) {
+    await tx
+      .update(videos)
+      .set({ folderId: null })
+      .where(
+        and(
+          inArray(videos.id, existingVideoIds),
+          inArray(videos.folderId, organizationFolderIds),
+        ),
+      );
+  }
+})
```
63-64: Remove inline comments to comply with repo guidelines.
Repo rules disallow inline/block comments in TS.
Apply this diff:

```diff
- // Only allow removing videos that are currently shared with the organization
@@
- // Clear folderId for videos that are being removed from the organization and are currently in folders within that organization
- // First, get all folder IDs that belong to this organization
```

Also applies to: 92-94
16-19: Confirmed call sites use Video.VideoId[]; consider adding runtime ID validation.
- Call sites: apps/web/app/(org)/dashboard/spaces/[spaceId]/components/AddVideosToOrganizationDialog.tsx (passes removeVideosFromOrganization into AddVideosDialogBase) and apps/web/app/(org)/dashboard/spaces/[spaceId]/components/AddVideosDialogBase.tsx (calls removeVideos(entityId, toRemove) where selectedVideos and entityVideoIds are typed Video.VideoId[]).
- Optional: validate/parse IDs in apps/web/actions/organizations/remove-videos.ts (use your domain guard/parser) before DB queries to enforce branded ID invariants at runtime.
apps/web/app/(org)/dashboard/spaces/[spaceId]/components/AddVideosDialogBase.tsx (3)
32-35: Tighten prop types to eliminate `any`.
Specify result shapes to improve safety and DX; align upstream implementations accordingly.
Apply this diff:

```diff
- addVideos: (entityId: string, videoIds: Video.VideoId[]) => Promise<any>;
- removeVideos: (entityId: string, videoIds: Video.VideoId[]) => Promise<any>;
- getVideos: (limit?: number) => Promise<any>;
- getEntityVideoIds: (entityId: string) => Promise<any>;
+ addVideos: (
+   entityId: string,
+   videoIds: Video.VideoId[],
+ ) => Promise<{ success: boolean; message?: string; error?: string }>;
+ removeVideos: (
+   entityId: string,
+   videoIds: Video.VideoId[],
+ ) => Promise<{ success: boolean; message?: string; error?: string }>;
+ getVideos: (
+   limit?: number,
+ ) => Promise<{ success: boolean; data: VideoData[]; error?: string }>;
+ getEntityVideoIds: (
+   entityId: string,
+ ) => Promise<{ success: boolean; data: Video.VideoId[]; error?: string }>;
```
323-331: Guard the Update button when no selection.
Prevent no-op mutations and accidental clicks by disabling when nothing is selected.
Apply this diff:

```diff
- disabled={updateVideosMutation.isPending}
- spinner={updateVideosMutation.isPending}
+ disabled={updateVideosMutation.isPending || selectedVideos.length === 0}
+ spinner={updateVideosMutation.isPending}
```
159-161: Minor: avoid recomputing lowercase per item.
Compute once to reduce repeated string ops in large lists.
Apply this diff:

```diff
- videosData?.filter((video: VideoData) =>
-   video.name.toLowerCase().includes(searchTerm.toLowerCase()),
- ) || [];
+ (() => {
+   const q = searchTerm.toLowerCase();
+   return videosData?.filter((video) => video.name.toLowerCase().includes(q)) || [];
+ })();
```

apps/web/lib/server.ts (1)
27-27: Deduplicate TracingLayer definition to a single source.
You also export TracingLayer in tracing-server.ts. Prefer importing and reusing it here to avoid divergent configs/init.
Example:

```diff
-import * as NodeSdk from "@effect/opentelemetry/NodeSdk";
-...
-export const TracingLayer = NodeSdk.layer(getTracingConfig);
+import { TracingLayer } from "./tracing-server";
```

apps/workflow-runner/package.json (1)
1-3: Optional: Pin Node engine.
Declare Node runtime to match platform-node expectations.
Example:

```diff
 {
   "name": "@cap/workflow-runner",
   "type": "module",
+  "engines": { "node": ">=20" },
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (84)
- apps/desktop/package.json (2 hunks)
- apps/desktop/src-tauri/tauri.conf.json (1 hunks)
- apps/discord-bot/package.json (1 hunks)
- apps/tasks/package.json (0 hunks)
- apps/web/actions/caps/share.ts (1 hunks)
- apps/web/actions/folders/moveVideoToFolder.ts (3 hunks)
- apps/web/actions/folders/updateFolder.ts (2 hunks)
- apps/web/actions/organizations/add-videos.ts (1 hunks)
- apps/web/actions/organizations/remove-videos.ts (1 hunks)
- apps/web/actions/screenshots/get-screenshot.ts (0 hunks)
- apps/web/actions/spaces/add-videos.ts (1 hunks)
- apps/web/actions/spaces/remove-videos.ts (1 hunks)
- apps/web/actions/video/upload.ts (3 hunks)
- apps/web/actions/videos/download.ts (1 hunks)
- apps/web/actions/videos/edit-date.ts (1 hunks)
- apps/web/actions/videos/edit-title.ts (1 hunks)
- apps/web/actions/videos/edit-transcript.ts (1 hunks)
- apps/web/actions/videos/get-og-image.tsx (2 hunks)
- apps/web/actions/videos/get-transcript.ts (1 hunks)
- apps/web/actions/videos/new-comment.ts (1 hunks)
- apps/web/actions/videos/password.ts (3 hunks)
- apps/web/app/(org)/dashboard/admin/AdminDashboardClient.tsx (0 hunks)
- apps/web/app/(org)/dashboard/admin/UserLookup.tsx (0 hunks)
- apps/web/app/(org)/dashboard/admin/actions.ts (0 hunks)
- apps/web/app/(org)/dashboard/admin/dateRangeUtils.ts (0 hunks)
- apps/web/app/(org)/dashboard/admin/loading.tsx (0 hunks)
- apps/web/app/(org)/dashboard/admin/page.tsx (0 hunks)
- apps/web/app/(org)/dashboard/caps/components/PasswordDialog.tsx (2 hunks)
- apps/web/app/(org)/dashboard/caps/components/SharingDialog.tsx (3 hunks)
- apps/web/app/(org)/dashboard/caps/components/UploadCapButton.tsx (3 hunks)
- apps/web/app/(org)/dashboard/caps/page.tsx (1 hunks)
- apps/web/app/(org)/dashboard/folder/[id]/components/BreadcrumbItem.tsx (1 hunks)
- apps/web/app/(org)/dashboard/folder/[id]/components/ClientMyCapsLink.tsx (3 hunks)
- apps/web/app/(org)/dashboard/spaces/[spaceId]/components/AddVideosDialogBase.tsx (7 hunks)
- apps/web/app/(org)/dashboard/spaces/[spaceId]/components/VideoCard.tsx (1 hunks)
- apps/web/app/(org)/dashboard/spaces/[spaceId]/components/VirtualizedVideoGrid.tsx (1 hunks)
- apps/web/app/(org)/verify-otp/form.tsx (1 hunks)
- apps/web/app/api/desktop/[...route]/session.ts (0 hunks)
- apps/web/app/api/desktop/[...route]/video.ts (3 hunks)
- apps/web/app/api/playlist/route.ts (1 hunks)
- apps/web/app/api/screenshot/route.ts (0 hunks)
- apps/web/app/api/thumbnail/route.ts (2 hunks)
- apps/web/app/api/upload/[...route]/multipart.ts (2 hunks)
- apps/web/app/api/upload/[...route]/signed.ts (2 hunks)
- apps/web/app/api/video/delete/route.ts (0 hunks)
- apps/web/app/api/video/domain-info/route.ts (4 hunks)
- apps/web/app/api/video/og/route.tsx (1 hunks)
- apps/web/app/api/videos/[videoId]/retry-transcription/route.ts (1 hunks)
- apps/web/app/embed/[videoId]/_components/PasswordOverlay.tsx (2 hunks)
- apps/web/app/embed/[videoId]/page.tsx (1 hunks)
- apps/web/app/s/[videoId]/_components/PasswordOverlay.tsx (2 hunks)
- apps/web/app/s/[videoId]/_components/Sidebar.tsx (2 hunks)
- apps/web/app/s/[videoId]/_components/tabs/Activity/Comments.tsx (3 hunks)
- apps/web/app/s/[videoId]/_components/tabs/Activity/index.tsx (2 hunks)
- apps/web/app/s/[videoId]/page.tsx (2 hunks)
- apps/web/hooks/use-transcript.ts (1 hunks)
- apps/web/lib/Notification.ts (2 hunks)
- apps/web/lib/folder.ts (3 hunks)
- apps/web/lib/server.ts (3 hunks)
- apps/web/lib/tracing-server.ts (1 hunks)
- apps/web/lib/tracing.ts (1 hunks)
- apps/web/next.config.mjs (2 hunks)
- apps/web/package.json (5 hunks)
- apps/workflow-manager/package.json (1 hunks)
- apps/workflow-manager/src/index.ts (1 hunks)
- apps/workflow-runner/package.json (1 hunks)
- apps/workflow-runner/src/index.ts (1 hunks)
- biome.json (1 hunks)
- package.json (1 hunks)
- packages/database/auth/auth-options.ts (2 hunks)
- packages/database/auth/drizzle-adapter.ts (1 hunks)
- packages/database/index.ts (1 hunks)
- packages/database/package.json (3 hunks)
- packages/database/schema.ts (6 hunks)
- packages/database/types/index.ts (1 hunks)
- packages/env/index.ts (1 hunks)
- packages/env/package.json (1 hunks)
- packages/env/server.ts (1 hunks)
- packages/local-docker/docker-compose.yml (2 hunks)
- packages/utils/package.json (1 hunks)
- packages/utils/src/index.ts (1 hunks)
- packages/utils/src/index.tsx (0 hunks)
- packages/web-api-contract-effect/package.json (1 hunks)
- packages/web-backend/package.json (1 hunks)
💤 Files with no reviewable changes (12)
- apps/web/actions/screenshots/get-screenshot.ts
- apps/web/app/api/desktop/[...route]/session.ts
- apps/web/app/api/screenshot/route.ts
- apps/web/app/(org)/dashboard/admin/loading.tsx
- apps/web/app/(org)/dashboard/admin/UserLookup.tsx
- apps/web/app/(org)/dashboard/admin/page.tsx
- packages/utils/src/index.tsx
- apps/web/app/(org)/dashboard/admin/dateRangeUtils.ts
- apps/tasks/package.json
- apps/web/app/(org)/dashboard/admin/AdminDashboardClient.tsx
- apps/web/app/(org)/dashboard/admin/actions.ts
- apps/web/app/api/video/delete/route.ts
🚧 Files skipped from review as they are similar to previous changes (60)
- apps/web/app/embed/[videoId]/_components/PasswordOverlay.tsx
- apps/web/app/api/video/og/route.tsx
- apps/web/app/api/playlist/route.ts
- packages/database/index.ts
- biome.json
- packages/web-api-contract-effect/package.json
- apps/web/app/(org)/dashboard/caps/components/UploadCapButton.tsx
- apps/web/app/s/[videoId]/_components/PasswordOverlay.tsx
- apps/web/app/api/upload/[...route]/multipart.ts
- apps/web/app/embed/[videoId]/page.tsx
- apps/web/app/(org)/dashboard/caps/components/PasswordDialog.tsx
- apps/web/lib/tracing.ts
- apps/web/actions/spaces/add-videos.ts
- apps/web/app/s/[videoId]/_components/Sidebar.tsx
- apps/web/next.config.mjs
- apps/workflow-manager/package.json
- packages/env/index.ts
- apps/web/lib/tracing-server.ts
- apps/web/actions/videos/edit-date.ts
- apps/web/app/s/[videoId]/_components/tabs/Activity/Comments.tsx
- apps/web/app/api/upload/[...route]/signed.ts
- apps/web/app/s/[videoId]/page.tsx
- packages/database/auth/auth-options.ts
- apps/web/actions/videos/download.ts
- apps/web/app/(org)/dashboard/folder/[id]/components/BreadcrumbItem.tsx
- apps/web/app/(org)/dashboard/caps/components/SharingDialog.tsx
- apps/web/actions/videos/new-comment.ts
- apps/web/app/(org)/dashboard/spaces/[spaceId]/components/VirtualizedVideoGrid.tsx
- apps/web/hooks/use-transcript.ts
- packages/local-docker/docker-compose.yml
- packages/utils/src/index.ts
- apps/web/app/(org)/verify-otp/form.tsx
- packages/utils/package.json
- apps/web/actions/folders/updateFolder.ts
- apps/web/lib/folder.ts
- apps/web/app/api/thumbnail/route.ts
- apps/web/lib/Notification.ts
- apps/web/actions/spaces/remove-videos.ts
- apps/web/app/(org)/dashboard/spaces/[spaceId]/components/VideoCard.tsx
- apps/web/app/api/video/domain-info/route.ts
- package.json
- packages/env/package.json
- apps/web/app/s/[videoId]/_components/tabs/Activity/index.tsx
- packages/database/auth/drizzle-adapter.ts
- apps/desktop/src-tauri/tauri.conf.json
- packages/env/server.ts
- apps/desktop/package.json
- apps/web/actions/caps/share.ts
- apps/web/app/(org)/dashboard/folder/[id]/components/ClientMyCapsLink.tsx
- apps/web/actions/videos/edit-title.ts
- packages/web-backend/package.json
- apps/web/actions/videos/get-og-image.tsx
- apps/workflow-manager/src/index.ts
- apps/web/actions/folders/moveVideoToFolder.ts
- apps/web/app/(org)/dashboard/caps/page.tsx
- apps/web/actions/videos/edit-transcript.ts
- apps/discord-bot/package.json
- apps/web/actions/organizations/add-videos.ts
- apps/web/actions/videos/get-transcript.ts
- apps/web/package.json
🧰 Additional context used
📓 Path-based instructions (8)
apps/web/app/api/**/route.ts
📄 CodeRabbit inference engine (CLAUDE.md)
apps/web/app/api/**/route.ts
: Place API routes only under apps/web/app/api and implement each route in a route.ts file
Construct API routes with @effect/platform HttpApi/HttpApiBuilder and export only the handler from apiToHandler(ApiLive)
Map domain errors to transport errors with HttpApiError.* and keep error translation exhaustive
Use HttpAuthMiddleware for required auth and provideOptionalAuth for guest routes; avoid duplicate session lookups
Provide dependencies via Layer.provide in API routes instead of manual provideService calls
Files:
apps/web/app/api/videos/[videoId]/retry-transcription/route.ts
apps/web/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
apps/web/**/*.{ts,tsx}
: Use TanStack Query v5 for all client-side server state and data fetching in the web app
Web mutations should call Server Actions directly and perform targeted cache updates with setQueryData/setQueriesData rather than broad invalidations
Client code should use useEffectQuery/useEffectMutation and useRpcClient from apps/web/lib/EffectRuntime.ts; do not create ManagedRuntime inside components
Files:
apps/web/app/api/videos/[videoId]/retry-transcription/route.ts
apps/web/actions/organizations/remove-videos.ts
apps/web/actions/video/upload.ts
apps/web/lib/server.ts
apps/web/app/api/desktop/[...route]/video.ts
apps/web/actions/videos/password.ts
apps/web/app/(org)/dashboard/spaces/[spaceId]/components/AddVideosDialogBase.tsx
apps/web/app/**/*.{tsx,ts}
📄 CodeRabbit inference engine (CLAUDE.md)
Prefer Server Components for initial data in the Next.js App Router and pass initialData to client components
Files:
apps/web/app/api/videos/[videoId]/retry-transcription/route.ts
apps/web/app/api/desktop/[...route]/video.ts
apps/web/app/(org)/dashboard/spaces/[spaceId]/components/AddVideosDialogBase.tsx
**/*.{ts,tsx,js,jsx,rs}
📄 CodeRabbit inference engine (CLAUDE.md)
Do not add inline, block, or docstring comments in any language; code must be self-explanatory
Files:
apps/web/app/api/videos/[videoId]/retry-transcription/route.ts
packages/database/types/index.ts
apps/workflow-runner/src/index.ts
apps/web/actions/organizations/remove-videos.ts
apps/web/actions/video/upload.ts
apps/web/lib/server.ts
apps/web/app/api/desktop/[...route]/video.ts
apps/web/actions/videos/password.ts
packages/database/schema.ts
apps/web/app/(org)/dashboard/spaces/[spaceId]/components/AddVideosDialogBase.tsx
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Use strict TypeScript and avoid any; leverage shared types from packages
**/*.{ts,tsx}
: Use a 2-space indent for TypeScript code.
Use Biome for formatting and linting TypeScript/JavaScript files by running pnpm format.
Files:
apps/web/app/api/videos/[videoId]/retry-transcription/route.ts
packages/database/types/index.ts
apps/workflow-runner/src/index.ts
apps/web/actions/organizations/remove-videos.ts
apps/web/actions/video/upload.ts
apps/web/lib/server.ts
apps/web/app/api/desktop/[...route]/video.ts
apps/web/actions/videos/password.ts
packages/database/schema.ts
apps/web/app/(org)/dashboard/spaces/[spaceId]/components/AddVideosDialogBase.tsx
**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (AGENTS.md)
**/*.{ts,tsx,js,jsx}
: Use kebab-case for filenames for TypeScript/JavaScript modules (e.g., user-menu.tsx).
Use PascalCase for React/Solid components.
Files:
apps/web/app/api/videos/[videoId]/retry-transcription/route.ts
packages/database/types/index.ts
apps/workflow-runner/src/index.ts
apps/web/actions/organizations/remove-videos.ts
apps/web/actions/video/upload.ts
apps/web/lib/server.ts
apps/web/app/api/desktop/[...route]/video.ts
apps/web/actions/videos/password.ts
packages/database/schema.ts
apps/web/app/(org)/dashboard/spaces/[spaceId]/components/AddVideosDialogBase.tsx
apps/web/**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (AGENTS.md)
On the client, always use useEffectQuery or useEffectMutation from @/lib/EffectRuntime; never call EffectRuntime.run* directly in components.
Files:
apps/web/app/api/videos/[videoId]/retry-transcription/route.ts
apps/web/actions/organizations/remove-videos.ts
apps/web/actions/video/upload.ts
apps/web/lib/server.ts
apps/web/app/api/desktop/[...route]/video.ts
apps/web/actions/videos/password.ts
apps/web/app/(org)/dashboard/spaces/[spaceId]/components/AddVideosDialogBase.tsx
apps/web/actions/**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
All Groq/OpenAI calls must be implemented in Next.js Server Actions under apps/web/actions; do not place AI calls elsewhere
Files:
apps/web/actions/organizations/remove-videos.ts
apps/web/actions/video/upload.ts
apps/web/actions/videos/password.ts
🧠 Learnings (2)
📚 Learning: 2025-09-22T14:19:55.985Z
Learnt from: CR
PR: CapSoftware/Cap#0
File: AGENTS.md:0-0
Timestamp: 2025-09-22T14:19:55.985Z
Learning: Applies to apps/web/app/api/* : Acquire backend services inside `Effect.gen` and wire them using `Layer.provide`/`HttpApiBuilder.group`, translating errors to `HttpApiError` as appropriate.
Applied to files:
apps/workflow-runner/src/index.ts
📚 Learning: 2025-09-22T14:17:47.380Z
Learnt from: CR
PR: CapSoftware/Cap#0
File: CLAUDE.md:0-0
Timestamp: 2025-09-22T14:17:47.380Z
Learning: Applies to **/*.{ts,tsx} : Use strict TypeScript and avoid any; leverage shared types from packages
Applied to files:
packages/database/package.json
🧬 Code graph analysis (6)
apps/workflow-runner/src/index.ts (5)
- packages/web-backend/src/Workflows.ts (1): Workflows (8-8)
- apps/web/lib/server.ts (1): TracingLayer (27-27)
- packages/web-backend/src/Videos/index.ts (1): Videos (12-149)
- packages/web-backend/src/S3Buckets/index.ts (1): S3Buckets (13-199)
- packages/web-backend/src/Database.ts (1): Database (9-19)
apps/web/actions/video/upload.ts (2)
- packages/web-domain/src/Folder.ts (1): Folder (19-27)
- packages/database/helpers.ts (1): nanoId (6-9)
apps/web/lib/server.ts (3)
- apps/web/lib/tracing-server.ts (1): TracingLayer (5-5)
- apps/web/lib/tracing.ts (1): getTracingConfig (5-29)
- packages/web-backend/src/Database.ts (1): Database (9-19)
apps/web/app/api/desktop/[...route]/video.ts (3)
- packages/database/schema.ts (1): videos (240-288)
- packages/web-domain/src/Video.ts (1): Video (14-59)
- packages/database/helpers.ts (1): nanoId (6-9)
packages/database/schema.ts (2)
- packages/web-domain/src/Folder.ts (3): Folder (19-27), FolderId (8-8), FolderId (9-9)
- packages/web-domain/src/Video.ts (3): Video (14-59), VideoId (10-10), VideoId (11-11)
apps/web/app/(org)/dashboard/spaces/[spaceId]/components/AddVideosDialogBase.tsx (1)
- apps/web/app/(org)/dashboard/caps/Caps.tsx (1): VideoData (29-50)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Clippy
- GitHub Check: Build Desktop (aarch64-apple-darwin, macos-latest)
- GitHub Check: Build Desktop (x86_64-pc-windows-msvc, windows-latest)
- GitHub Check: Analyze (rust)
🔇 Additional comments (20)
packages/database/schema.ts (8)
11-11: LGTM! The primaryKey import from drizzle-orm/mysql-core is correctly added to support the new composite primary key definition in the importedVideos table.
20-21: LGTM! The explicit .ts extensions in imports follow TypeScript ESM conventions and align with the coding guidelines.
245-246: LGTM! The TODO comment correctly identifies the need to make orgId non-null in the future. The current nullable implementation aligns with the domain model in Video.ts, where orgId is defined as Schema.OptionFromNullOr(Schema.String).
265-265: LGTM! The explicit type annotation .$type&lt;Folder.FolderId&gt;() correctly enforces type safety for the folder ID field, ensuring consistency with the domain model.
282-287: LGTM! The migration from object-based index definitions to array syntax is consistent with Drizzle ORM conventions and improves code clarity.
294-294: LGTM! The explicit type annotation .$type&lt;Video.VideoId&gt;() correctly enforces type safety for the video ID field, ensuring consistency with the domain model.
320-320: LGTM! The explicit type annotation .$type&lt;Video.VideoId&gt;() correctly enforces type safety for the video ID field, ensuring consistency with the domain model.
656-667: Fix orgId type and remove unused id or make it primary.

This is the same issue identified in the previous review. The table has two issues:
- orgId uses nanoIdNullable(...).notNull() instead of the cleaner nanoId(...)
- The standalone id field is redundant given the composite primary key

Apply this diff to fix the issues:

 export const importedVideos = mysqlTable(
   "imported_videos",
   {
-    id: nanoId("id").notNull(),
-    orgId: nanoIdNullable("orgId").notNull(),
+    orgId: nanoId("orgId").notNull(),
     source: varchar("source", { length: 255, enum: ["loom"] }).notNull(),
     sourceId: varchar("source_id", { length: 255 }).notNull(),
   },
   (table) => [
     primaryKey({ columns: [table.orgId, table.source, table.sourceId] }),
   ],
 );

Optional: Add a foreign key constraint from orgId to organizations.id for referential integrity.

apps/web/app/api/videos/[videoId]/retry-transcription/route.ts (2)
4-4: LGTM: Type import added for Video domain. The addition of the Video type import supports the parameter type refinement and aligns with the broader type consistency improvements across the codebase.
10-10: LGTM: Parameter type tightened to Video.VideoId. The parameter type has been appropriately tightened from string to Video.VideoId, which provides better type safety and consistency with the domain model. This aligns with similar type refinements across other video-related routes in the PR.

apps/web/actions/videos/password.ts (1)
7-7: Type-branded VideoId import: LGTM. Aligns this action with domain-typed IDs. Good move for type safety.
apps/web/actions/video/upload.ts (2)
13-13: Good: type-only import for Folder; value import for Video is correct. This keeps the runtime bundle clean while enabling branded types.
160-168: Branded ID params look good — callers updated.
Searched createVideoAndGetUploadUrl usages: all call sites use Video.VideoId / Folder.FolderId (e.g., apps/web/app/(org)/dashboard/caps/components/UploadCapButton.tsx — calls around ~111 and ~441); no raw-string videoId/folderId args found.

apps/web/app/api/desktop/[...route]/video.ts (3)
99-99: LGTM: brand new IDs at creation. Good move aligning inserts with the branded VideoId type.
Confirm whether Video.VideoId.make performs runtime validation (vs. a type cast). If it doesn’t validate, consider adding input validation for inbound IDs to return 400 on bad shapes rather than 500s from downstream failures.

apps/web/actions/organizations/remove-videos.ts (1)
12-12: LGTM: Stronger ID typing. Importing the Video domain type to use Video.VideoId[] improves type safety and consistency.
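The branded-ID pattern these comments discuss can be sketched in plain TypeScript. This is illustrative only, not the actual effect Schema branding behind Video.VideoId; the names makeVideoId, parseVideoId, and the nanoid-style regex are assumptions for demonstration. It shows why a cast-style make differs from a validating constructor:

```typescript
// Illustrative sketch: the real Video.VideoId uses effect Schema branding.
type Brand<T, B extends string> = T & { readonly __brand: B };
type VideoId = Brand<string, "VideoId">;

// Hypothetical shape check approximating a nanoid-style ID
const ID_SHAPE = /^[A-Za-z0-9_-]{1,21}$/;

// Cast-style constructor: no runtime validation, mirrors a plain `make`
function makeVideoId(raw: string): VideoId {
  return raw as VideoId;
}

// Validating constructor: rejects malformed input up front,
// letting a route return 400 instead of failing downstream
function parseVideoId(raw: string): VideoId | null {
  return ID_SHAPE.test(raw) ? (raw as VideoId) : null;
}
```

With the branded type, a raw string no longer satisfies parameters typed as VideoId, so every boundary that accepts external input has to pass through one of these constructors.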
apps/web/lib/server.ts (1)
47-49: LGTM: Properly layering FetchHttpClient with tracing and defaults. Merging TracingLayer and FetchHttpClient into Dependencies looks correct and keeps HTTP instrumentation consistent.
apps/workflow-runner/src/index.ts (2)
56-66: LGTM: Runner wiring, HTTP serve, and observability layers. Composition reads clean and aligns with the rest of the stack.
26-36: Make runner address configurable; avoid hard-coding localhost:42069. Hard-coded address limits deployments and containerized setups.
Apply:

-const ClusterWorkflowLive = ClusterWorkflowEngine.layer.pipe(
-  Layer.provide(
-    NodeClusterRunnerSocket.layer({
-      storage: "sql",
-      shardingConfig: {
-        runnerAddress: Option.some(RunnerAddress.make("localhost", 42069)),
-      },
-    }),
-  ),
-  Layer.provide(SqlLayer),
-);
+const ClusterWorkflowLive = Layer.unwrapEffect(
+  Effect.gen(function* () {
+    const host = yield* Config.string("RUNNER_HOST").pipe(
+      Config.withDefault("localhost"),
+    );
+    const port = yield* Config.number("RUNNER_PORT").pipe(
+      Config.withDefault(42069),
+    );
+    return ClusterWorkflowEngine.layer.pipe(
+      Layer.provide(
+        NodeClusterRunnerSocket.layer({
+          storage: "sql",
+          shardingConfig: {
+            runnerAddress: Option.some(RunnerAddress.make(host, port)),
+          },
+        }),
+      ),
+      Layer.provide(SqlLayer),
+    );
+  }),
+);
55-64
: Exports map points to .ts sources; align exports to built artifacts or drop map.This will break Node resolution at runtime. See prior comment—same issue persists.
Minimal fix (build to dist and export JS + DTS):
- "types": "./index.ts", - "main": "index.js", + "types": "./dist/index.d.ts", + "main": "./dist/index.js", @@ - "exports": { - ".": "./index.ts", - "./auth/auth-options": "./auth/auth-options.ts", - "./auth/session": "./auth/session.ts", - "./emails/config": "./emails/config.ts", - "./emails/*": "./emails/*.tsx", - "./schema": "./schema.ts", - "./crypto": "./crypto.ts", - "./helpers": "./helpers.ts", - "./types": "./types/index.ts" - } + "exports": { + ".": { "types": "./dist/index.d.ts", "default": "./dist/index.js" }, + "./auth/auth-options": { "types": "./dist/auth/auth-options.d.ts", "default": "./dist/auth/auth-options.js" }, + "./auth/session": { "types": "./dist/auth/session.d.ts", "default": "./dist/auth/session.js" }, + "./emails/config": { "types": "./dist/emails/config.d.ts", "default": "./dist/emails/config.js" }, + "./emails/*": "./dist/emails/*.js", + "./schema": { "types": "./dist/schema.d.ts", "default": "./dist/schema.js" }, + "./crypto": { "types": "./dist/crypto.d.ts", "default": "./dist/crypto.js" }, + "./helpers": { "types": "./dist/helpers.d.ts", "default": "./dist/helpers.js" }, + "./types": { "types": "./dist/types/index.d.ts", "default": "./dist/types/index.js" } + }Add build/prepare (example with tsup):
"scripts": { + "build": "tsup --dts --format esm,cjs --out-dir dist index.ts", + "prepare": "pnpm build", "db:generate": "drizzle-kit generate --config=drizzle.config.ts",
5-6: “types” and “main” currently point to non-existent/distless files. With ESM enabled, pointing “types” to .ts and “main” to index.js (not emitted) will break consumers. Align to dist as in the exports fix.
Apply:

-"types": "./index.ts",
-"main": "index.js",
+"types": "./dist/index.d.ts",
+"main": "./dist/index.js",
export async function setVideoPassword(
  videoId: Video.VideoId,
  password: string,
) {
Avoid TOCTOU: bind UPDATE to ownerId in the same statement
You check ownership, but the UPDATE is scoped only by id. Enforce authorization at write-time to prevent a time-of-check/time-of-use gap.
Proposed change inside this function (and add `and` to the drizzle import):
import { and, eq } from "drizzle-orm";
await db()
.update(videos)
.set({ password: hashed })
.where(and(eq(videos.id, videoId), eq(videos.ownerId, user.id)));
🤖 Prompt for AI Agents
In apps/web/actions/videos/password.ts around lines 12 to 15, the function
currently checks ownership separately but performs the UPDATE only by video id,
exposing a TOCTOU risk; update the query to enforce ownerId at write-time by
changing the update.where clause to include both id and ownerId combined with
the drizzle 'and' operator, and add 'and' to the drizzle import alongside 'eq'
so the UPDATE becomes conditional on both eq(videos.id, videoId) and
eq(videos.ownerId, user.id).
 }

-export async function removeVideoPassword(videoId: string) {
+export async function removeVideoPassword(videoId: Video.VideoId) {
Apply the same owner-bound UPDATE here
Mirror the authorization-at-write fix in removeVideoPassword to close the same TOCTOU window.
Proposed change:
import { and, eq } from "drizzle-orm";
await db()
.update(videos)
.set({ password: null })
.where(and(eq(videos.id, videoId), eq(videos.ownerId, user.id)));
🤖 Prompt for AI Agents
In apps/web/actions/videos/password.ts around line 49, removeVideoPassword
currently updates the video password without constraining the update to the
current owner, leaving a TOCTOU authorization window; import and and eq from
"drizzle-orm" and change the update to include a WHERE that combines
eq(videos.id, videoId) and eq(videos.ownerId, user.id) so the DB enforces
owner-bound writes (ensuring the password is only cleared when the current user
is the owner).
export async function verifyVideoPassword(
  videoId: Video.VideoId,
  password: string,
) {
🧩 Analysis chain
Do not store the hashed password in a client cookie; set secure cookie options
Current code sets an encrypted version of the hash in a cookie without httpOnly/secure flags. This risks leakage and replay.
Minimum hardening now:
- Use an opaque, short‑lived token (not the password hash).
- Scope the cookie to the share path, set httpOnly, secure, sameSite, and maxAge.
Example replacement for the cookie line:
const token = await encrypt(`${videoId}:${Date.now()}`);
cookies().set("x-cap-access", token, {
httpOnly: true,
secure: process.env.NODE_ENV === "production",
sameSite: "lax",
path: `/s/${videoId}`,
maxAge: 60 * 60,
});
Preferred design:
- Issue an opaque, server‑signed token mapped to {videoId, viewer/session} in a server store with TTL.
- Validate the token server‑side; avoid placing password material client‑side entirely.
Run this to find all usages of the existing cookie so we can update readers and confirm no assumptions about storing the hash:
🏁 Script executed:
#!/bin/bash
rg -n -C2 -S "x-cap-password|x-cap-access|cookies\(\)\.get\(" --type ts --type tsx
Length of output: 27
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "Searching for x-cap and cookies usage..."
rg -n -S -C2 -e 'x-cap-password' -e 'x-cap-access' -e 'x-cap-' -e 'cookies\(\)\.get\(' -e 'cookies\(\)\.set\(' -e 'cookies\(\)' || true
echo
echo "Attempting to display apps/web/actions/videos/password.ts (if present)..."
if [ -f "apps/web/actions/videos/password.ts" ]; then
sed -n '1,220p' "apps/web/actions/videos/password.ts"
else
echo "File not found at apps/web/actions/videos/password.ts"
echo "Searching for verifyVideoPassword function..."
rg -n "verifyVideoPassword" || true
fi
Length of output: 7774
Do NOT store the video password hash in a client cookie — issue an opaque, server‑validated token instead
Current code writes encrypted password material to a cookie (apps/web/actions/videos/password.ts → cookies().set("x-cap-password", ...)) and apps/web/lib/server.ts reads/decrypts it; this exposes password material and lacks secure cookie flags.
- Change apps/web/actions/videos/password.ts (line ~101): stop setting "x-cap-password". Issue an opaque, server‑signed token mapped to {videoId, viewer/session} in a server store with TTL and set that cookie with { httpOnly: true, secure: process.env.NODE_ENV === "production", sameSite: "lax", path: `/s/${videoId}`, maxAge: 60*60 }.
- Change apps/web/lib/server.ts (lines ~32–35): stop reading/decrypting "x-cap-password"; validate the opaque token server‑side against your token store and derive access without exposing password material.
- Migrate usages (rename cookie to e.g., "x-cap-access"), invalidate existing cookies, and audit the codebase for remaining x-cap-password reads.
🤖 Prompt for AI Agents
In apps/web/actions/videos/password.ts around lines 82–101 and
apps/web/lib/server.ts around lines ~32–35, the code currently writes and later
decrypts a password-derived cookie ("x-cap-password"); stop storing password
material client‑side — instead generate an opaque, server‑signed token (mapped
in a server store to { videoId, viewer/session } with a TTL), set that token
cookie (rename to "x-cap-access") with httpOnly: true, secure:
process.env.NODE_ENV === "production", sameSite: "lax", path: `/s/${videoId}`,
maxAge: 60*60, and stop writing any encrypted password; update
apps/web/lib/server.ts to stop reading/decrypting "x-cap-password" and validate
the opaque token against the server store to derive access; invalidate existing
"x-cap-password" cookies, migrate usages to "x-cap-access", and audit the
codebase for any remaining reads of "x-cap-password".
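To make the opaque-token design concrete, here is a minimal sketch under stated assumptions: an in-memory Map stands in for the server-side token store (production would need a shared store such as Redis so tokens survive restarts and work across instances), and the helper names issueAccessToken/validateAccessToken are hypothetical:

```typescript
// Hypothetical sketch of the opaque-token design: tokens are random,
// stored server-side with a TTL, and carry no password material.
import { randomBytes } from "node:crypto";

type AccessGrant = { videoId: string; expiresAt: number };

const grants = new Map<string, AccessGrant>();

function issueAccessToken(videoId: string, ttlMs = 60 * 60 * 1000): string {
  const token = randomBytes(32).toString("base64url");
  grants.set(token, { videoId, expiresAt: Date.now() + ttlMs });
  return token;
}

function validateAccessToken(token: string, videoId: string): boolean {
  const grant = grants.get(token);
  if (!grant) return false;
  if (grant.expiresAt < Date.now() || grant.videoId !== videoId) {
    grants.delete(token);
    return false;
  }
  return true;
}
```

The value written to the x-cap-access cookie would be the returned token; because it is random and only meaningful via the server-side mapping, it exposes no password material and expires on its own.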
"name": "@cap/workflow-runner",
"type": "module",
"scripts": {
  "dev": "pnpm dotenv -e ../../.env -- deno run --allow-all --watch ./src/index.ts"
Dev script uses Deno to run Node-only code; switch to tsx (or ts-node).
index.ts imports node:http and @effect/platform-node; Deno will fail here.
Apply:
- "dev": "pnpm dotenv -e ../../.env -- deno run --allow-all --watch ./src/index.ts"
+ "dev": "pnpm dotenv -e ../../.env -- tsx watch ./src/index.ts"
Add tsx:
"devDependencies": {
- "dotenv-cli": "^10.0.0"
+ "dotenv-cli": "^10.0.0",
+ "tsx": "^4.19.2"
}
Also applies to: 20-22
🤖 Prompt for AI Agents
In apps/workflow-runner/package.json around line 5 (and similarly lines 20-22),
the "dev" script currently uses Deno which fails because index.ts imports
Node-only modules; replace the deno-based command with a Node-based runner
(e.g., tsx or ts-node) and update package.json accordingly: add tsx as a
devDependency (or ensure ts-node is installed), change the script to use pnpm
dotenv -e ../../.env -- tsx watch ./src/index.ts (or tsx ./src/index.ts
--watch), and update any other scripts on lines 20-22 to use tsx instead of deno
so Node APIs like node:http and @effect/platform-node work correctly.
const SqlLayer = Layer.unwrapEffect(
  Effect.gen(function* () {
    const url = yield* Config.string("DATABASE_URL").pipe((v) =>
      Config.redacted(v),
    );
    return MysqlClient.layer({ url });
  }),
);
🧩 Analysis chain
Confirm Config.redacted typing matches MysqlClient.layer({ url }).
If Config.redacted brands the type, you may need a cast or to avoid redaction at the value level.
Try:
🏁 Script executed:
#!/bin/bash
# Grep for Config.redacted usage patterns in repo to ensure consistent typing
rg -nP 'Config\.redacted\(' -C2
Length of output: 339
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "=== apps/workflow-runner/src/index.ts (first 160 lines) ==="
sed -n '1,160p' apps/workflow-runner/src/index.ts || true
echo
echo "=== Search for where Config is imported or defined ==="
rg -n --hidden -S --no-ignore '^\s*import .*Config|^\s*(export )?(const|class|namespace|interface|type)\s+Config\b' || true
echo
echo "=== Find any redacted symbol definitions/usages across repo ==="
rg -n --hidden -S --no-ignore 'redacted' -g '!**/node_modules/**' || true
echo
echo "=== Find MysqlClient occurrences and layer definition/usages ==="
rg -n --hidden -S --no-ignore 'MysqlClient|\.layer\(' -g '!**/node_modules/**' || true
echo
echo "=== Show files that mention Config.redacted specifically ==="
rg -n --hidden -S --no-ignore 'Config\.redacted' -g '!**/node_modules/**' || true
Length of output: 5619
🌐 Web query:
Config.redacted effect TypeScript
💡 Result:
Short answer: In the Effect (effect.ts) config API, Config.redacted reads an env/config value and returns a Redacted value (usually Redacted&lt;string&gt;) rather than a raw string. A Redacted value hides its contents when logged and must be explicitly unwrapped with Redacted.value (or Redacted.make/unwrap APIs) to get the underlying string — which makes accidental leaks harder, and TypeScript will not let you pass a Redacted where a string is expected without an explicit extraction. [1][2][3]
Small example (TypeScript):
import { Effect, Config, Redacted } from "effect";
const program = Effect.gen(function* () {
const secret: Redacted.Redacted&lt;string&gt; = yield* Config.redacted("API_KEY");
// console.log(secret) -> prints "&lt;redacted&gt;"
const actual: string = Redacted.value(secret); // explicit unwrap
return actual;
});
References:
- Effect configuration docs: Config.redacted and examples. [1]
- Guide: Redacted vs string and using Redacted.value. [2]
- Redacted data-type reference (make, value, unsafeWipe). [3]
Unwrap Config.redacted before passing to MysqlClient.layer
Config.redacted returns Redacted&lt;string&gt;, not a raw string — MysqlClient.layer({ url }) expects a plain string. Fix by unwrapping (import { Redacted } from "effect" and use Redacted.value(url) or const urlStr = Redacted.value(url)), or stop redacting (use Config.string("DATABASE_URL") directly). apps/workflow-runner/src/index.ts lines ~17–24
🤖 Prompt for AI Agents
In apps/workflow-runner/src/index.ts around lines 17 to 24, Config.redacted
returns a Redacted<string> but MysqlClient.layer expects a plain string; replace
the redacted value with a raw string before calling MysqlClient.layer — either
import { Redacted } from "effect" and call Redacted.value(url) (e.g. const
urlStr = Redacted.value(url)) and pass urlStr, or remove the Config.redacted
call and use Config.string("DATABASE_URL") directly so MysqlClient.layer
receives a plain string.
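The Redacted contract described above can be illustrated with a self-contained sketch. This is not effect's implementation — just a minimal class showing the idea: stringification hides the secret and a static value accessor is the only way out:

```typescript
// Minimal sketch of the Redacted pattern (not effect's implementation):
// the secret is hidden from logging and must be unwrapped explicitly.
class Redacted<T> {
  readonly #value: T;
  constructor(value: T) {
    this.#value = value;
  }
  static make<T>(value: T): Redacted<T> {
    return new Redacted(value);
  }
  static value<T>(redacted: Redacted<T>): T {
    return redacted.#value;
  }
  toString(): string {
    return "<redacted>";
  }
  toJSON(): string {
    return "<redacted>";
  }
}

const dbUrl = Redacted.make("mysql://root:secret@localhost:3306/cap");
```

Because the wrapper is a distinct type, passing it where a plain string is expected fails type-checking — which is exactly why the MysqlClient.layer call needs the explicit unwrap (or no redaction at all).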
  "effect": "^3.17.13",
  "nanoid": "^5.0.4",
- "next": "14.2.9",
+ "next": "^14",
🧩 Analysis chain
Verify Next 14 vs React 19 compatibility across the monorepo.
Next.js 14 generally targets React 18. Ensure no peer/version conflicts (this is the database package, but shared workspace versions can cascade).
Run:
Also applies to: 46-47
🏁 Script executed:
#!/bin/bash
# Check Next/React versions across packages
rg -n --glob '**/package.json' -C2 '"next":|"react":|"react-dom":'
Length of output: 2177
🌐 Web query:
Does Next.js 14 support React 19? What React versions are listed as peer dependencies for next@14?
💡 Result:
Short answer: No — Next.js 14 targets React 18. Next.js v14's docs and packages list React 18 (>= 18.2.0) / react-dom 18 as the expected peer deps, not React 19. [1][2]
Sources:
- Next.js v14 upgrade docs (shows install commands using react@18 / react-dom@18). [1]
- Next.js repo / package peerDependencies (example: @next/third-parties lists "react": "^18.2.0"). [2]
Fix Next 14 vs React 19 mismatch — align workspace to React 18
Next.js 14 targets React 18; the repo currently declares React 19 in several packages, causing peer dependency conflicts.
- Next 14 present: packages/web-backend/package.json, packages/database/package.json, apps/web/package.json
- React 19 present: packages/utils/package.json, packages/ui/package.json, packages/database/package.json, apps/web/package.json
- Action: standardize react + react-dom to React 18 (e.g., ^18.2.0) across the workspace (recommended for Next 14), or upgrade Next only if a release officially supports React 19. Remove next from non-Next packages (e.g., database) if accidental.
🤖 Prompt for AI Agents
In packages/database/package.json around line 31, a "next": "^14" entry exists
in a non-Next package and the workspace contains mixed React versions (React 19
in some packages) causing peer dependency conflicts with Next 14 (which targets
React 18); remove the accidental "next" dependency from
packages/database/package.json and standardize react and react-dom to React 18
(e.g., "^18.2.0") across all workspace package.json files (update dependencies,
devDependencies and peerDependencies where applicable in packages/utils,
packages/ui, packages/database, apps/web and packages/web-backend) so every
package that relies on React references the same ^18.2.0, or alternatively
upgrade Next only if you plan to move to a Next release that officially supports
React 19.
@@ -1 +1,3 @@
 export * from "./metadata";
+
+import "./next-auth";
🛠️ Refactor suggestion
Avoid runtime side effects in types entrypoint; use a type-only import for NextAuth augmentation.
Side‑effect import will emit a JS import and can pull runtime code unexpectedly. Use a type‑only import to include declaration merging without runtime cost.
Apply:
-import "./next-auth";
+import type {} from "./next-auth";
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-import "./next-auth";
+import type {} from "./next-auth";
🤖 Prompt for AI Agents
In packages/database/types/index.ts around line 3, the plain side-effect import
`import "./next-auth";` causes a runtime JS import; replace it with a type-only
import to keep the declaration merging but emit no runtime code — e.g. change to
a type-only import (import type {} from "./next-auth") so the NextAuth
augmentation is included only in type space and no JS import is emitted.
Starts work on the Loom importer backend, introducing Effect Cluster and Workflow to run the Import Video workflow, which creates some db records then downloads the video, streaming the result to S3.
Summary by CodeRabbit