Feat/one2b agent schema#97
Conversation
Additive schema changes preparing for the one2b-internal-agent merge. No data is moved here; this only widens schemas so the next phase can populate them.

Existing models extended:
- User: encrypted phone, JSON preferences, isServiceAccount flag
- Action: source provenance (sourceType/sourceId) + lastUpdatedBy/Source for tracking which channel last updated a task (UI, agent, email, etc.)
- TranscriptionSession: Fireflies meeting metadata (firefliesId unique, analytics/sentences JSON, video URL, duration, participantCount)
- Integration: webhookId for URL-scoped webhook routing + providerConfig
- WhatsAppConfig: provider discriminator (META_CLOUD | DIALOG_360), per-provider config blob; Meta-specific fields relaxed to nullable
- KnowledgeChunk: workspaceId (nullable for now; backfill + NOT NULL in a follow-up migration), embedding provenance (provider/model/dim/generatedAt) for re-embedding support, speaker fields for transcript chunks, userId relaxed to nullable
- CrmContact: reverse relation to TranscriptionSessionParticipant
- CrmCommunication: agent-driven follow-up draft workflow fields (reviewStatus separate from delivery status, source provenance, agentGenerated flag, revision history)

New models:
- Document: parent for ingested files (Drive/upload/URL/email attachment); KnowledgeChunk references it via sourceId when sourceType='document'
- TranscriptionSessionParticipant: join between a meeting and a User, CrmContact, or raw email, with speaker labels
- PreMeetingBrief: scheduled pre-meeting briefs per user per calendar event; unique on (userId, calendarEventId)
- ReminderLog: system audit trail for action item reminders, daily digests, overdue notices, and pre-meeting briefs
- PendingDriveUpload: WhatsApp file upload state machine awaiting Drive folder selection from the user

All models are workspace-scoped. All relations use cascade-on-workspace-delete or set-null-on-user-delete, consistent with existing conventions.
Note: pgvector HNSW index on KnowledgeChunk.embedding to be added manually to the generated migration SQL (Prisma cannot model pgvector indexes natively).
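A hand-appended statement of this shape would do it (the index name matches the one referenced elsewhere in this PR; the cosine opclass and default HNSW build parameters are assumptions):

```sql
-- Appended manually to the generated migration SQL; Prisma cannot emit this.
CREATE INDEX IF NOT EXISTS "KnowledgeChunk_embedding_hnsw_idx"
  ON "KnowledgeChunk"
  USING hnsw (embedding vector_cosine_ops);
```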
Migration 20260426075815_merge_one2b_agent_schema: companion to the schema commit. Adds the new one2b agent columns and tables, with a manually appended HNSW pgvector index on KnowledgeChunk.embedding (Prisma cannot model pgvector indexes natively).

Migration 20260426081707_add_hnsw_index: inadvertent drift artifact. Prisma's `migrate dev` (without `--create-only`) noticed the HNSW index in the DB but not in schema.prisma and auto-generated a DROP, which was applied before review. The index can be recreated when production embedding volume justifies it; until then, queries fall back to a sequential scan, which is fine for current data sizes.

Lesson: pgvector indexes will trigger this drift trap on every unguarded `migrate dev`. Use `--create-only` for any migration in this project and review the generated SQL for spurious DROP INDEX statements before applying. Prisma client + CLI were bumped from 6.14.0 to 6.19.3 by `migrate dev`.
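The guarded workflow described above can be sketched as a CLI recipe (the migration name is illustrative):

```shell
# 1. Generate the migration SQL without applying it.
npx prisma migrate dev --create-only --name merge_one2b_agent_schema

# 2. Review the generated migration.sql for spurious statements such as
#      DROP INDEX "KnowledgeChunk_embedding_hnsw_idx";
#    and re-add any manual pgvector DDL that Prisma cannot model.

# 3. Apply only after review.
npx prisma migrate dev
```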
Lets the agent assign action items to meeting participants who are not (yet) registered exponential Users. Sibling to ActionAssignee: the agent picks ActionAssignee when the assignee email resolves to a workspace User, and otherwise links to a TranscriptionSessionParticipant.

Why this matters: forcing every meeting participant to onboard to exponential just to be assigned a task is unwanted friction. With this table, action items can flow through the agent end-to-end (extract from transcript, assign, remind via WhatsApp/email) without exponential accounts. If a participant later signs up, a backfill job converts these assignments to ActionAssignee.

Includes:
- New table with (actionId, participantId, workspaceId, createdAt)
- Unique constraint on (actionId, participantId)
- B-tree indexes on each FK column
- Cascade-on-delete from Action, TranscriptionSessionParticipant, and Workspace (consistent with the rest of the one2b agent models)
- Reverse relations on Action, TranscriptionSessionParticipant, and Workspace
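The routing rule above (email resolves to a workspace User → ActionAssignee, otherwise → ActionParticipantAssignee) can be sketched as pure logic. Function and type names here are illustrative, not from the PR:

```typescript
// Hypothetical sketch of the assignee-routing decision the agent makes.
type AssigneeLink =
  | { kind: "user"; userId: string }                 // -> ActionAssignee row
  | { kind: "participant"; participantId: string };  // -> ActionParticipantAssignee row

interface Participant {
  id: string;
  email: string;
}

function resolveAssignee(
  email: string,
  workspaceUsersByEmail: Map<string, string>, // email -> User.id
  participants: Participant[],
): AssigneeLink | null {
  const normalized = email.trim().toLowerCase();
  // Prefer a real workspace User when the email resolves.
  const userId = workspaceUsersByEmail.get(normalized);
  if (userId) return { kind: "user", userId };
  // Otherwise fall back to the meeting participant record.
  const participant = participants.find(
    (p) => p.email.toLowerCase() === normalized,
  );
  return participant
    ? { kind: "participant", participantId: participant.id }
    : null;
}
```

A later backfill job would then flip `participant` links to `user` links once the participant signs up.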
📝 Walkthrough

Upgrades Prisma to v6.19.3 and adds the One2b agent schema: six new models, multiple field extensions (actions, transcription sessions, knowledge chunks, integrations, WhatsApp), several migrations (including pgvector index changes), utility and service adjustments for encryption and WhatsApp credential handling, and an optimistic UI payload alignment.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant UI as Frontend UI
    participant Server
    participant DB as Database
    participant Notifier as WhatsAppService
    participant WhatsAppAPI as MetaWhatsApp
    Client->>UI: User creates Action (optimistic payload incl. source fields)
    UI->>Server: POST /api/action.create (payload)
    Server->>DB: INSERT Action (+ participantAssignees if any)
    DB-->>Server: Insert OK
    Server->>Notifier: prepare/send notification (load WhatsApp config)
    Notifier->>DB: SELECT WhatsAppConfig (includes provider, providerConfig)
    DB-->>Notifier: config (nullable IDs possible)
    Notifier-->>Notifier: validate config (phoneNumberId, businessAccountId)
    alt config invalid
        Notifier-->>Server: return structured failure (no send)
    else config valid
        Notifier->>WhatsAppAPI: send message (uses phoneNumberId)
        WhatsAppAPI-->>Notifier: send result
        Notifier-->>Server: delivery result
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: 4 passed, 1 failed (warning)
Actionable comments posted: 3
🧹 Nitpick comments (1)
package.json (1)
62-62: Upgrade React and React DOM to v19 for Next.js 15 compatibility.

While touching this manifest, upgrade `react` and `react-dom` from `^18.3.1` to `^19.0.0` (or the latest v19). The official Next.js 15 upgrade guide specifies React 19 as the minimum required version. Since you're already modifying dependencies here, align these to meet the Next.js 15 baseline per your coding guidelines.

Also applies to: 146-146
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@prisma/migrations/20260426081707_add_hnsw_index/migration.sql`:
- Around line 1-2: The migration currently drops the HNSW vector index
"KnowledgeChunk_embedding_hnsw_idx" which breaks semantic search used by
KnowledgeService (refer to src/server/services/KnowledgeService.ts where it runs
ORDER BY kc.embedding <=> ... LIMIT); revert this by removing the DROP INDEX
from the migration (or instead add a CREATE INDEX ... USING hnsw_cosine/ivfflat
with the original options) so the "KnowledgeChunk_embedding_hnsw_idx" index
remains available for those ORDER BY ... <=> queries; ensure the migration
preserves or recreates the exact index name "KnowledgeChunk_embedding_hnsw_idx"
so existing queries in KnowledgeService continue to use the index.
In `@prisma/schema.prisma`:
- Around line 3882-3897: The denormalized workspaceId on the
ActionParticipantAssignee model can diverge from the referenced Action and
TranscriptionSessionParticipant leading to cross-workspace leakage; either
remove workspaceId and derive workspace via the relations (use
action.workspaceId or participant.workspaceId for workspace-scoped queries) or
keep it but add DB-level constraints in the migration to guarantee consistency
(CREATE CHECK constraints or triggers that assert workspaceId = (SELECT
workspaceId FROM "Action" WHERE id = actionId) AND workspaceId = (SELECT
workspaceId FROM "TranscriptionSessionParticipant" WHERE id = participantId)).
Update the Prisma model ActionParticipantAssignee (fields: workspaceId,
actionId, participantId and relations action and participant) and add a
migration that implements the chosen fix so workspaceId cannot drift from
Action.workspaceId/TranscriptionSessionParticipant.workspaceId.
- Around line 2182-2205: The schema made KnowledgeChunk.userId nullable while
KnowledgeService still filters reads by WHERE kc."userId" = ${userId}, causing
chunks written with only workspaceId to be invisible; fix by updating the
read/query logic in src/server/services/KnowledgeService.ts to also include
workspace-scoped results (e.g., include OR kc."workspaceId" = ${workspaceId} or
add a combined conditional that checks both userId and workspaceId), and update
the Prisma model KnowledgeChunk to declare a Workspace relation/foreign key for
workspaceId (e.g., add workspace Workspace? `@relation`(fields: [workspaceId],
references: [id])) and then create and run the migration to add the FK/index so
writes and reads are consistent.
---
Nitpick comments:
In `@package.json`:
- Line 62: Update the package.json dependency entries for "react" and
"react-dom" from ^18.3.1 to ^19.0.0 (or the latest v19) so the project meets
Next.js 15's React 19 requirement; locate the "react" and "react-dom" entries in
package.json (there are two occurrences referenced) and change their version
strings to ^19.0.0, then run your package manager to refresh lockfile and verify
no other dependency constraints break.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 66aefac7-357d-4bb8-a391-04e422570831
⛔ Files ignored due to path filters (1)
`package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (5)
- package.json
- prisma/migrations/20260426075815_merge_one2b_agent_schema/migration.sql
- prisma/migrations/20260426081707_add_hnsw_index/migration.sql
- prisma/migrations/20260427175749_add_action_participant_assignee/migration.sql
- prisma/schema.prisma
```sql
-- DropIndex
DROP INDEX "KnowledgeChunk_embedding_hnsw_idx";
```
Don't ship the migration that drops the vector index.
This removes the manual HNSW index created in prisma/migrations/20260426075815_merge_one2b_agent_schema/migration.sql, but src/server/services/KnowledgeService.ts:448-531 still does ORDER BY kc.embedding <=> ... LIMIT queries against KnowledgeChunk. After this runs, semantic search falls back to a full scan as the table grows.
💡 Safer migration shape

```diff
 -- DropIndex
-DROP INDEX "KnowledgeChunk_embedding_hnsw_idx";
+CREATE INDEX IF NOT EXISTS "KnowledgeChunk_embedding_hnsw_idx"
+  ON "KnowledgeChunk"
+  USING hnsw (embedding vector_cosine_ops);
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```sql
-- DropIndex
CREATE INDEX IF NOT EXISTS "KnowledgeChunk_embedding_hnsw_idx"
  ON "KnowledgeChunk"
  USING hnsw (embedding vector_cosine_ops);
```
```prisma
  userId               String?
  projectId            String?

  // Workspace scoping (initially nullable; backfill then a follow-up migration enforces NOT NULL)
  workspaceId          String?

  // Embedding provenance (provider-agnostic; supports re-embedding)
  embeddingProvider    String?
  embeddingModel       String?
  embeddingDim         Int?
  embeddingGeneratedAt DateTime?

  // Speaker info for transcript chunks
  speakerName          String?
  speakerEmail         String?
  startTimeMs          Int?
  endTimeMs            Int?

  createdAt            DateTime @default(now())

  @@index([sourceType, sourceId])
  @@index([userId])
  @@index([projectId])
  @@index([workspaceId])
```
This makes workspace chunks writable before they're readable.
Line 2182 makes KnowledgeChunk.userId nullable, but src/server/services/KnowledgeService.ts:448-531 still filters semantic search with WHERE kc."userId" = ${userId}. Any chunk written with only workspaceId set will disappear from retrieval immediately, and this model still doesn't declare a Workspace relation/FK to enforce the new scope.
Based on learnings: use Prisma with proper error handling and migrations, and always test schema changes before pushing to production.
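The widened read path this comment asks for might look like the following (placeholders `$1`–`$4` are illustrative; the real query in KnowledgeService.ts builds additional filters):

```sql
-- Return both user-scoped and workspace-scoped chunks, nearest first
-- (<=> is pgvector's cosine-distance operator).
SELECT kc.id, kc.content
FROM "KnowledgeChunk" kc
WHERE kc."userId" = $1
   OR kc."workspaceId" = $2
ORDER BY kc.embedding <=> $3::vector
LIMIT $4;
```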
```prisma
model ActionParticipantAssignee {
  id            String   @id @default(cuid())
  actionId      String
  participantId String
  workspaceId   String // denormalized for workspace-scoped queries
  createdAt     DateTime @default(now())

  action      Action                          @relation(fields: [actionId], references: [id], onDelete: Cascade)
  participant TranscriptionSessionParticipant @relation(fields: [participantId], references: [id], onDelete: Cascade)
  workspace   Workspace                       @relation(fields: [workspaceId], references: [id], onDelete: Cascade)

  @@unique([actionId, participantId])
  @@index([actionId])
  @@index([participantId])
  @@index([workspaceId])
}
```
workspaceId here can drift away from the parent records.
This table only enforces three independent FKs, so a row can point at an Action, a TranscriptionSessionParticipant, and a Workspace from different workspaces. Because workspaceId is explicitly denormalized for workspace-scoped queries, a bad write turns into cross-workspace leakage instead of a rejected insert.
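Worth noting: Postgres CHECK constraints cannot contain subqueries, so the suggested CHECK route would actually require triggers; a composite foreign key is a simpler DB-level guarantee. A sketch (constraint names are illustrative, and the existing single-column FKs to the parents then become redundant):

```sql
-- Parents need a (id, workspaceId) unique key for the composite FK to target.
ALTER TABLE "Action"
  ADD CONSTRAINT "Action_id_workspaceId_key" UNIQUE (id, "workspaceId");
ALTER TABLE "TranscriptionSessionParticipant"
  ADD CONSTRAINT "TSP_id_workspaceId_key" UNIQUE (id, "workspaceId");

-- Now a row's workspaceId must match both parents' workspaceId, or the insert fails.
ALTER TABLE "ActionParticipantAssignee"
  ADD CONSTRAINT "APA_action_workspace_fkey"
    FOREIGN KEY ("actionId", "workspaceId")
    REFERENCES "Action" (id, "workspaceId") ON DELETE CASCADE,
  ADD CONSTRAINT "APA_participant_workspace_fkey"
    FOREIGN KEY ("participantId", "workspaceId")
    REFERENCES "TranscriptionSessionParticipant" (id, "workspaceId") ON DELETE CASCADE;
```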
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/server/services/notifications/WhatsAppNotificationService.ts`:
- Around line 93-96: The guard that returns "no credentials" is too strict: in
WhatsAppNotificationService (check the code that inspects
integration.whatsappConfig) only require phoneNumberId and accessToken for
send/test flows rather than also requiring businessAccountId; update the
conditional to return null only when phoneNumberId OR accessToken is missing
(i.e., require both phoneNumberId and accessToken), and remove the
businessAccountId dependency so configs with a null businessAccountId but valid
phoneNumberId+accessToken are allowed.
In `@src/server/utils/encryption.ts`:
- Line 22: Add a JSDoc block above the exported function encryptString
describing its purpose (e.g., "Encrypts a UTF-8 plaintext string using the
module's crypto routine") and specifying the return format clearly (e.g., that
it returns an encrypted byte buffer as a Uint8Array or ArrayBuffer, whichever
the implementation returns). Include parameter description for plaintext:
string, and a `@returns` line that states the exact type returned (Uint8Array or
ArrayBuffer) and that it represents the encrypted bytes. Ensure the comment is
concise and placed immediately above the encryptString declaration.
- Line 22: The exported function signature uses an invalid generic type; change
the return type of encryptString from Uint8Array<ArrayBuffer> to plain
Uint8Array and add JSDoc above the function describing its purpose, parameters,
and return value (e.g., `@param` plaintext: string, `@returns`: Uint8Array of
encrypted bytes) so the public API is documented; update the function
declaration for encryptString and place the JSDoc comment directly above it.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: d16763ac-b00d-4869-a798-5db30c2b9208
📒 Files selected for processing (4)
- src/app/_components/layout/GlobalAddTaskButton.tsx
- src/server/services/notifications/WhatsAppNotificationService.ts
- src/server/services/whatsapp/OptimizedQueries.ts
- src/server/utils/encryption.ts
✅ Files skipped from review due to trivial changes (1)
- src/server/services/whatsapp/OptimizedQueries.ts
```typescript
// Meta-specific fields are nullable (null for non-Meta providers like DIALOG_360)
if (!integration.whatsappConfig.phoneNumberId || !integration.whatsappConfig.businessAccountId) {
  return null;
}
```
Over-strict credential gate can block valid sends
Line 94 currently requires both phoneNumberId and businessAccountId, but this service only needs phoneNumberId + accessToken for message send/test calls. Since businessAccountId is nullable in schema, this can cause false “No WhatsApp credentials found” failures for otherwise valid configs.
Suggested fix

```diff
-  private async getCredentials(): Promise<{ accessToken: string; phoneNumberId: string; businessAccountId: string } | null> {
-    if (this.accessToken && this.phoneNumberId && this.businessAccountId) {
+  private async getCredentials(): Promise<{ accessToken: string; phoneNumberId: string; businessAccountId?: string } | null> {
+    if (this.accessToken && this.phoneNumberId) {
       return {
         accessToken: this.accessToken,
         phoneNumberId: this.phoneNumberId,
         businessAccountId: this.businessAccountId,
       };
     }
     ...
-    if (!integration.whatsappConfig.phoneNumberId || !integration.whatsappConfig.businessAccountId) {
+    if (!integration.whatsappConfig.phoneNumberId) {
       return null;
     }
     this.accessToken = accessTokenCred.key;
     this.phoneNumberId = integration.whatsappConfig.phoneNumberId;
-    this.businessAccountId = integration.whatsappConfig.businessAccountId;
+    this.businessAccountId = integration.whatsappConfig.businessAccountId ?? undefined;
```
```diff
 }

-export function encryptString(plaintext: string): Buffer {
+export function encryptString(plaintext: string): Uint8Array<ArrayBuffer> {
```
Add JSDoc for exported encryptString.
This public function was modified and still lacks documentation. Please add a short JSDoc block (purpose + return format).
As per coding guidelines **/*.{ts,tsx,js,jsx}: “Add JSDoc/documentation comments for public functions”.
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail
echo "== TypeScript version from package.json files =="
fd -HI '^package.json$' | while read -r pkg; do
  echo "--- $pkg ---"
  jq -r '.devDependencies.typescript // .dependencies.typescript // "typescript not declared"' "$pkg"
done
echo
echo "== tsconfig compiler target/lib (if present) =="
fd -HI '^tsconfig.*\.json$' | while read -r cfg; do
  echo "--- $cfg ---"
  jq -r '.compilerOptions.target // "target: <unset>"' "$cfg"
  jq -r '.compilerOptions.lib // "lib: <unset>"' "$cfg"
done
echo
echo "== Generic Uint8Array usage in TS files =="
rg -n --type=ts 'Uint8Array<'
```

Repository: positonic/exponential
Length of output: 50460
🏁 Script executed:

```shell
#!/bin/bash
# Root tsconfig.json - TypeScript configuration
echo "=== Root tsconfig.json ==="
cat tsconfig.json 2>/dev/null || echo "Not found"
echo -e "\n=== Root package.json - TypeScript version ==="
jq '.devDependencies.typescript // .dependencies.typescript' package.json 2>/dev/null || echo "Not found"
# Read the specific file and lines around line 22
echo -e "\n=== src/server/utils/encryption.ts (lines 15-35) ==="
sed -n '15,35p' src/server/utils/encryption.ts 2>/dev/null || echo "File not found"
# Search for Uint8Array usage in src directory only (not node_modules)
echo -e "\n=== Uint8Array usage in src/ ==="
rg 'Uint8Array' src/ --type=ts -n 2>/dev/null | head -20
# Check if JSDoc exists above the function
echo -e "\n=== Check for JSDoc above encryptString (lines 1-25) ==="
head -25 src/server/utils/encryption.ts 2>/dev/null
```

Repository: positonic/exponential
Length of output: 4105
Fix TypeScript type error and add JSDoc to exported function.

Line 22 uses the invalid syntax `Uint8Array<ArrayBuffer>`. The `Uint8Array` type is not generic here; use `Uint8Array` as the return type instead. Additionally, add JSDoc documentation to this public exported function explaining its purpose and parameters.

Type error details: `Uint8Array<ArrayBuffer>` fails to compile (error: "Type 'Uint8Array' is not generic"). The correct return type is simply `Uint8Array`; the ArrayBuffer backing store is implicit and does not require explicit generic syntax.
Description
Brief description of what this PR does.
Existing tables modified

| Model | Column | Type | Notes |
| --- | --- | --- | --- |
| User | phone_encrypted | BYTEA | cf. CrmContact.phone |
| User | preferences | JSONB | |
| User | isServiceAccount | BOOLEAN | default false (e.g. bot+one2b@one2b.io) |
| Action | sourceType | TEXT | meeting \| email \| whatsapp \| manual |
| Action | sourceId | TEXT | |
| Action | lastUpdatedBy | TEXT | AGENT \| USER_EMAIL \| USER_WHATSAPP \| USER_UI |
| Action | lastUpdatedSource | TEXT | |
| TranscriptionSession | firefliesId | TEXT | |
| TranscriptionSession | analyticsJson | JSONB | |
| TranscriptionSession | sentencesJson | JSONB | |
| TranscriptionSession | videoUrl | TEXT | |
| TranscriptionSession | durationSeconds | INTEGER | |
| TranscriptionSession | participantCount | INTEGER | |
| Integration | webhookId | TEXT | |
| Integration | providerConfig | JSONB | |
| WhatsAppConfig | provider | TEXT | default 'META_CLOUD'; META_CLOUD \| DIALOG_360 |
| WhatsAppConfig | providerConfig | JSONB | |
| WhatsAppConfig | phoneNumberId | TEXT | |
| WhatsAppConfig | businessAccountId | TEXT | |
| WhatsAppConfig | webhookVerifyToken | TEXT | |
| KnowledgeChunk | workspaceId | TEXT | |
| KnowledgeChunk | embeddingProvider | TEXT | openai, jina, etc. |
| KnowledgeChunk | embeddingModel | TEXT | e.g. text-embedding-3-small |
| KnowledgeChunk | embeddingDim | INTEGER | e.g. 1536 |
| KnowledgeChunk | embeddingGeneratedAt | TIMESTAMP(3) | |
| KnowledgeChunk | speakerName | TEXT | |
| KnowledgeChunk | speakerEmail | TEXT | |
| KnowledgeChunk | startTimeMs | INTEGER | |
| KnowledgeChunk | endTimeMs | INTEGER | |
| KnowledgeChunk | userId | TEXT | relaxed to nullable |
| CrmCommunication | reviewStatus | TEXT | NEEDS_REVIEW \| IN_REVIEW \| REVISING \| APPROVED |
| CrmCommunication | sourceType | TEXT | |
| CrmCommunication | sourceId | TEXT | |
| CrmCommunication | agentGenerated | BOOLEAN | default false |
| CrmCommunication | revisionCount | INTEGER | default 0 |
| CrmCommunication | revisionHistory | JSONB | {snapshot, generatedAt, prompt} |

New tables

- Document — parent for ingested files (Drive/upload/URL/email attachment); KnowledgeChunk references it via sourceId when sourceType='document'.
- TranscriptionSessionParticipant — join between TranscriptionSession and (User OR CrmContact OR raw email). Carries the Fireflies speaker label and isHost. Lets us record meeting attendees who don't have exponential accounts.
- ActionParticipantAssignee — sibling to ActionAssignee; links an Action to a TranscriptionSessionParticipant when the assignee email doesn't resolve to a workspace User. Avoids forcing every meeting participant to onboard. A backfill job will convert these to ActionAssignee if a participant later signs up.
- PreMeetingBrief — scheduled pre-meeting briefs per user per calendar event; delivery channel EMAIL | WHATSAPP.
- ReminderLog — system audit trail for reminders; kept separate from CrmCommunication because it's system-generated, not user-facing comms.
- PendingDriveUpload — WhatsApp file upload state machine awaiting Drive folder selection; rows expire via expiresAt.

Indexes added

- Integration(webhookId)
- TranscriptionSession(firefliesId) (unique)
- TranscriptionSessionParticipant(transcriptionSessionId, email)
- PreMeetingBrief(userId, calendarEventId)
- ActionParticipantAssignee(actionId, participantId)
- Action(sourceType, sourceId)
- KnowledgeChunk(workspaceId)
- CrmCommunication(reviewStatus)
- plus standard FK indexes on workspaceId, userId, etc.

Nullability relaxations on existing columns

- KnowledgeChunk.userId
- WhatsAppConfig.phoneNumberId — null for DIALOG_360 rows
- WhatsAppConfig.businessAccountId
- WhatsAppConfig.webhookVerifyToken

Type of Change
Database Changes
If migrations are included:
Testing
Checklist
Deployment Notes
Any special deployment considerations?
Related Issues
Closes #
Summary by CodeRabbit
Chores
New Features
Bug Fixes / Reliability