
feat: implement Timeline page with activity log and infinite scroll functionality#122

Merged
JoachimLK merged 14 commits into main from feat/timeline-view
Mar 26, 2026

Conversation

Contributor

@JoachimLK JoachimLK commented Mar 24, 2026

Summary

  • What does this PR change?
  • Why is this needed?

Type of change

  • Bug fix
  • Feature
  • Refactor
  • Docs
  • Chore

Validation

  • I tested locally
  • I added/updated relevant documentation
  • I verified multi-tenant scoping and auth behavior for affected API paths

DCO

  • All commits in this PR are signed off (Signed-off-by) via git commit -s

Summary by CodeRabbit

  • New Features

    • Timeline page with filters, day-grouped events, collapsible sections, infinite scroll, and a dashboard nav link.
    • Client timeline composable and API endpoint returning timeline items plus upcoming events.
    • Blocking startup script to apply dark mode before first paint.
  • Chores

    • Seed and demo-org deletion scripts plus new npm db:reseed command.
    • Added local dev trusted origins and workspace config.
  • Telemetry

    • Widespread analytics and server-side tracking enhancements, tracking helpers, and improved error/exception capture.

@railway-app railway-app Bot temporarily deployed to applirank / reqcore-pr-122 March 24, 2026 09:52 Destroyed

railway-app Bot commented Mar 24, 2026

🚅 Deployed to the reqcore-pr-122 environment in applirank

Service: applirank (Web) | Status: ✅ Success | Updated (UTC): Mar 26, 2026 at 11:15 am


coderabbitai Bot commented Mar 24, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Use the checkboxes below for quick actions:

  • ▶️ Resume reviews
  • 🔍 Trigger review
📝 Walkthrough

Adds a Timeline feature (UI, composable, API, seeding), broad PostHog/server-side tracking helpers and middleware, analytics calls across several pages/components, a dark-mode blocking head script, a db reseed/delete-demo-org workflow, and small dev-origin and workspace updates.

Changes

  • Timeline UI & Nav (app/components/AppTopBar.vue, app/pages/dashboard/timeline.vue): Added a "Timeline" nav item and a new protected dashboard page implementing filters, collapsible day/section views, infinite scroll, skeleton/error/empty states, and event rendering.
  • Timeline Composable (app/composables/useTimeline.ts): New composable with types, reactive state, loadInitial/loadMore cursor pagination, upcoming items, dayGroups grouping, and totalEvents.
  • Activity Log API (server/api/activity-log/timeline.get.ts): New GET endpoint with zod validation, permission check, Drizzle queries, actor join, batch enrichment (jobs, candidates, applications, interviews), cursor pagination, and synthetic upcoming interviews when no cursor is provided.
  • Seeding & Demo scripts (server/scripts/seed.ts, server/scripts/delete-demo-org.ts, package.json scripts): Seed now records many activity_log entries (jobs, candidates, applications, status changes, interviews); added a delete-demo-org script and a db:reseed npm script.
  • Server analytics infra (server/utils/trackEvent.ts, server/middleware/posthog-api-tracking.ts, server/utils/posthog.ts): Added server PostHog helpers (trackEvent, trackApiError, trackServerError), API-tracking middleware that logs slow requests and errors, and server-side PostHog super-properties and exception autocapture.
  • Instrumented endpoints & pages (server/api/* for jobs, interviews, candidates, applications, public apply; app/pages/**; app/components/**): Inserted PostHog tracking calls across many server API handlers (job created/updated, interview scheduled, candidate created, application status change, public apply) and client components/pages (candidate sidebar, score breakdown, interviews page, jobs pages, settings).
  • PostHog client & identity (app/plugins/posthog-identity.client.ts, app/composables/usePostHogIdentity.ts, app/composables/useTrack.ts): Extended the posthog identity payload to include member_role, made the org watcher async to fetch the member role, and added a consent-gated captureError helper to useTrack.
  • Nuxt config & front-end behavior (nuxt.config.ts): Enabled hidden client sourcemaps, added PostHog client/server config for exception capture, and injected a blocking head script to set the dark-mode class before first paint.
  • Misc: dev origins & workspace (server/utils/auth.ts, app/pages/dashboard/main-workspace.code-workspace): Added additional trusted local origins (:3002) and a workspace file (no runtime effect).

Sequence Diagram

```mermaid
sequenceDiagram
    participant User as User
    participant Page as Timeline Page
    participant Composable as useTimeline
    participant API as ActivityLog API
    participant DB as Database

    User->>Page: Navigate to /dashboard/timeline
    Page->>Composable: loadInitial(resourceType?)
    Composable->>API: GET /api/activity-log/timeline?limit=100&resourceType?
    API->>DB: Query activity_log (orgId, optional resourceType, cursor)
    DB-->>API: Return rows
    API->>DB: Batch fetch users, jobs, candidates, applications, interviews
    DB-->>API: Return related records
    API-->>Composable: { items, upcoming, hasMore, oldestTimestamp, newestTimestamp }
    Composable->>Composable: Merge upcoming, group by day, build sections
    Composable-->>Page: dayGroups, totalEvents, hasMore

    User->>Page: Scroll / reach sentinel
    Page->>Composable: loadMore()
    Composable->>API: GET /api/activity-log/timeline?before=oldestTimestamp&limit=100
    API->>DB: Query older activity_log entries
    DB-->>API: Return older rows
    API-->>Composable: older { items, hasMore, oldestTimestamp }
    Composable->>Composable: Append items, update groups
    Composable-->>Page: Updated dayGroups
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes


Poem

🐰 I hopped through logs both near and far,

Collected events that glint like stars.
Filters clicked and days unfurled,
Scrolls reveal the bustling world. ✨

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (2 warnings)

  • Description check (⚠️ Warning): The pull request description is entirely an unfilled template, with no actual content, context, validation details, or DCO sign-off information. Resolution: fill out the template; provide a summary of changes, indicate the type of change (Feature), confirm testing and auth verification, and add a DCO sign-off (Signed-off-by).
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 64.29%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (1 passed)

  • Title check (✅ Passed): The title clearly summarizes the main change, implementing a Timeline page with activity log and infinite scroll, and matches the core additions in the changeset.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 6

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@app/composables/useTimeline.ts`:
- Around line 123-125: The code is grouping items by the UTC date string from
item.createdAt (item.createdAt.slice(0, 10)), which mismatches the local-date
key produced by formatDateKey(now); instead, parse item.createdAt into a Date
and convert it to the same local date key format used by formatDateKey so
grouping uses local time; update the loop in useTimeline (where allItems and
groupMap are used) to compute dateKey by creating a Date from item.createdAt and
passing/formatting that Date with the same logic as formatDateKey (or call
formatDateKey with that Date) so Today/Tomorrow labels and grouping are
consistent.
- Around line 54-60: Extract a shared interface (e.g., TimelineResponse)
describing { items: TimelineItem[]; upcoming: TimelineItem[]; hasMore: boolean;
oldestTimestamp: string | null; newestTimestamp: string | null } and declare the
timeline endpoint as a string-typed constant (e.g., const TIMELINE_ENDPOINT:
string = '/api/activity-log/timeline'); then update the two $fetch calls (the
one assigning result and the other at lines ~91-97) to use the generic
$fetch<TimelineResponse>(TIMELINE_ENDPOINT, { query }) form to break recursive
literal route inference and resolve the TS2321 error.

In `@app/pages/dashboard/timeline.vue`:
- Around line 63-75: The infinite-scroll observer is attached in onMounted but
the sentinel is null during the initial skeleton render; change to watch the
template ref scrollSentinel (use watch(scrollSentinel, cb, { flush: 'post',
immediate: true })) and in the watch callback create the IntersectionObserver
(same callback that checks entries[0]?.isIntersecting && hasMore.value &&
!isLoadingMore.value then calls loadMore()), call observer.observe when
scrollSentinel.value is non-null, and ensure you disconnect the observer either
inside the watch cleanup or in onUnmounted (observer.disconnect()) to avoid
leaks.

In `@server/api/activity-log/timeline.get.ts`:
- Around line 100-123: The application and interview enrichment queries (the
db.select blocks that use applicationIds and interviewIds) lack organization
scoping and must include the same org constraint used elsewhere; update the
application query (selecting fields from application joined to job and
candidate) to add a where clause that includes application.organizationId (or
organizationId) equals the current org id in addition to inArray(application.id,
Array.from(applicationIds)), and likewise update the interview query (selecting
fields from interview) to add interview.organizationId equals the current org id
alongside the existing inArray(interview.id, Array.from(interviewIds))). Ensure
you use the same org id variable/reference used by the job/candidate and
upcomingInterviews queries so both enrichment lookups are scoped to the
organization.
- Around line 188-227: The upcoming-interviews block ignores query.resourceType;
update the guard around the upcomingInterviews query so it only runs when
resourceType is not set or equals 'interview' (i.e., change the condition around
the if that currently checks !query.before && !query.after to also check
query.resourceType === 'interview' or !query.resourceType), ensuring the
upcomingInterviews select/map (variables and functions: upcomingInterviews,
upcoming, query.resourceType, interview) only executes and appends interview
items when the timeline filter allows interview resources.
- Around line 36-42: Change the pagination to use exclusive cursor bounds and a
deterministic composite sort: replace lte/gte filters on activityLog.createdAt
with lt()/gt() and add a secondary tie-breaker by ordering with
orderBy(desc(activityLog.createdAt), desc(activityLog.id)); update the
corresponding pagination cursor handling that builds/consumes the cursor (the
logic around where you read/write the cursor values) to include both createdAt
and id and to use exclusive comparisons. Also, in the enrichment queries for
application and interview, add organization scoping filters so the application
lookup includes eq(applicationId.organizationId, orgId) and the interview lookup
includes eq(interview.organizationId, orgId) to prevent cross-tenant leakage
when resource IDs are stale.
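For the TS2321 finding, the shared interface and string-typed endpoint could look like the sketch below. Only the response shape summarized above is taken from the PR; the `TimelineItem` fields here are placeholders, not the repo's actual type.

```typescript
// Placeholder item shape; the real TimelineItem carries more fields.
interface TimelineItem {
  id: string
  createdAt: string
}

// Shared response interface matching the payload described above.
interface TimelineResponse {
  items: TimelineItem[]
  upcoming: TimelineItem[]
  hasMore: boolean
  oldestTimestamp: string | null
  newestTimestamp: string | null
}

// Typing the constant as plain `string` breaks Nuxt's recursive
// route-literal inference that triggers the TS2321 error.
const TIMELINE_ENDPOINT: string = '/api/activity-log/timeline'

// Both composable calls then become (Nuxt's $fetch):
//   const result = await $fetch<TimelineResponse>(TIMELINE_ENDPOINT, { query })

// Example value demonstrating the interface compiles as intended.
const sample: TimelineResponse = {
  items: [],
  upcoming: [],
  hasMore: false,
  oldestTimestamp: null,
  newestTimestamp: null,
}
```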

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 52b7d3be-194f-40be-9168-4f09135db1ee

📥 Commits

Reviewing files that changed from the base of the PR and between 750dc0b and abda1a3.

📒 Files selected for processing (5)
  • app/components/AppTopBar.vue
  • app/composables/useTimeline.ts
  • app/pages/dashboard/timeline.vue
  • nuxt.config.ts
  • server/api/activity-log/timeline.get.ts

Comment on lines +123 to +125
for (const item of allItems) {
const dateKey = item.createdAt.slice(0, 10)
if (!groupMap.has(dateKey)) {

⚠️ Potential issue | 🟠 Major


Group timeline items by local date, not UTC prefix.

Line 124 uses item.createdAt.slice(0, 10) which extracts the date portion from an ISO timestamp (UTC-based), while line 116 uses formatDateKey(now) which generates a date key in local time. This timezone mismatch causes items to be grouped under the wrong day header near midnight, and the Today/Tomorrow labels (lines 189-191) will be incorrect for affected events.

Suggested fix
-      const dateKey = item.createdAt.slice(0, 10)
+      const dateKey = formatDateKey(new Date(item.createdAt))
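The mismatch is easy to reproduce outside the app. A minimal sketch, assuming `formatDateKey` builds a local-time YYYY-MM-DD key as described (the repo's actual helper may differ):

```typescript
// Assumed local-time key helper, mirroring what formatDateKey is described to do.
function formatDateKey(d: Date): string {
  const y = d.getFullYear()
  const m = String(d.getMonth() + 1).padStart(2, '0')
  const day = String(d.getDate()).padStart(2, '0')
  return `${y}-${m}-${day}`
}

// An event 30 minutes before UTC midnight.
const iso = '2026-03-24T23:30:00.000Z'

// UTC prefix: always the UTC calendar date, regardless of the viewer's zone.
const utcKey = iso.slice(0, 10)

// Local key: in any timezone at UTC+1 or later this is already '2026-03-25',
// so the two grouping schemes disagree and the day header is wrong.
const localKey = formatDateKey(new Date(iso))
```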

Comment on lines +63 to +75
onMounted(() => {
if (!scrollSentinel.value) return
const observer = new IntersectionObserver(
(entries) => {
if (entries[0]?.isIntersecting && hasMore.value && !isLoadingMore.value) {
loadMore()
}
},
{ rootMargin: '400px' },
)
observer.observe(scrollSentinel.value)
onUnmounted(() => observer.disconnect())
})

⚠️ Potential issue | 🟠 Major


Use watch() instead of onMounted() to attach the observer after the sentinel renders.

On initial mount, the page renders a loading skeleton with v-if="isLoading", so scrollSentinel.value is null and the observer never gets attached (line 64 returns early). By the time the sentinel appears in the final v-else branch after data loads, the onMounted() hook has already run. This breaks the infinite-scroll feature entirely—only the manual "Load older events" button works.

Switch to watch() with { flush: 'post' } to monitor the template ref. Once the sentinel enters the DOM, the watch callback will fire and attach the observer.

Suggested fix
 const scrollSentinel = useTemplateRef<HTMLElement>('scrollSentinel')
 
-onMounted(() => {
-  if (!scrollSentinel.value) return
-  const observer = new IntersectionObserver(
-    (entries) => {
-      if (entries[0]?.isIntersecting && hasMore.value && !isLoadingMore.value) {
-        loadMore()
-      }
-    },
-    { rootMargin: '400px' },
-  )
-  observer.observe(scrollSentinel.value)
-  onUnmounted(() => observer.disconnect())
-})
+watch(
+  scrollSentinel,
+  (el, _, onCleanup) => {
+    if (!el) return
+    const observer = new IntersectionObserver(
+      (entries) => {
+        if (entries[0]?.isIntersecting && hasMore.value && !isLoadingMore.value) {
+          loadMore()
+        }
+      },
+      { rootMargin: '400px' },
+    )
+    observer.observe(el)
+    onCleanup(() => observer.disconnect())
+  },
+  { flush: 'post' },
+)

Comment on lines +36 to +42
if (query.before) {
conditions.push(lte(activityLog.createdAt, new Date(query.before)))
}

if (query.after) {
conditions.push(gte(activityLog.createdAt, new Date(query.after)))
}

⚠️ Potential issue | 🟠 Major


Make the cursor exclusive and use a composite sort key for deterministic ordering.

Using inclusive bounds (lte for before, gte for after) combined with single-column ordering by createdAt causes boundary rows to replay on the next page and introduces non-determinism when multiple events share the same timestamp. Infinite scroll will duplicate or skip items.

Fix by either:

  1. Using exclusive bounds: lt() and gt() with inclusive ordering, or
  2. Adding a secondary sort key: .orderBy(desc(activityLog.createdAt), desc(activityLog.id))

This applies to lines 36–42 and the corresponding pagination logic at lines 63–64.

Additionally, the application (line 112) and interview (line 122) enrichment queries lack organization scoping and should filter by eq(application.organizationId, orgId) / eq(interview.organizationId, orgId) to prevent cross-tenant data leakage if an activity row carries a stale or incorrect resourceId.

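The duplicate/skip failure mode and the exclusive composite cursor can be sketched without Drizzle; the names and shapes below are illustrative only:

```typescript
// ISO-8601 timestamps sort correctly as strings, so plain comparison works here.
interface Row {
  id: string
  createdAt: string
}

// Deterministic sort: newest first, id as tie-breaker, mirroring
// orderBy(desc(activityLog.createdAt), desc(activityLog.id)).
function byNewest(a: Row, b: Row): number {
  if (a.createdAt !== b.createdAt) return a.createdAt < b.createdAt ? 1 : -1
  return a.id < b.id ? 1 : -1
}

// Exclusive cursor: strictly older than (createdAt, id), mirroring lt()/gt().
function olderThan(row: Row, cursor: Row): boolean {
  if (row.createdAt !== cursor.createdAt) return row.createdAt < cursor.createdAt
  return row.id < cursor.id
}

function page(rows: Row[], cursor: Row | null, limit: number): Row[] {
  const eligible = cursor ? rows.filter((r) => olderThan(r, cursor)) : [...rows]
  return eligible.sort(byNewest).slice(0, limit)
}

// Three rows share one timestamp; an inclusive lte() bound would replay 'b'
// on page two, while the composite exclusive cursor advances cleanly.
const rows: Row[] = [
  { id: 'c', createdAt: '2026-03-24T10:00:00Z' },
  { id: 'b', createdAt: '2026-03-24T10:00:00Z' },
  { id: 'a', createdAt: '2026-03-24T10:00:00Z' },
]
const page1 = page(rows, null, 2)      // ['c', 'b']
const page2 = page(rows, page1[1]!, 2) // ['a'], no duplicate of 'b'
```

In the endpoint itself the same comparison would be expressed with Drizzle's lt()/gt() on (createdAt, id) plus the two-column orderBy shown above.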


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@server/scripts/delete-demo-org.ts`:
- Around line 10-19: Extract the DB URL resolution logic used in seed.ts into a
shared exported helper (e.g., resolveDatabaseUrl or getDatabaseUrl) that
performs the same fallbacks (load .env if needed, check
PGHOST/PGPORT/PGUSER/PGPASSWORD/PGDATABASE and Railway proxy vars) and returns
the final connection string or null/undefined; then update delete-demo-org.ts to
import and call that helper instead of directly reading process.env.DATABASE_URL
(replace the local DATABASE_URL constant with the helper call) and keep the
existing error/exit behavior when the helper returns no URL. Ensure the helper
name you choose is exported so both seed.ts and delete-demo-org.ts can reference
it.

In `@server/scripts/seed.ts`:
- Around line 2377-2388: The current seed creates a 'scored' activity whenever
app.score is present and hardcodes model/criterionCount; instead only emit
activityLog entries when real analysis exists by checking AI_SCORING_DATA or the
inserted analysisRun rows for that application. Update the block that inserts
into schema.activityLog to first look up the corresponding entry in
AI_SCORING_DATA (or query the inserted analysisRun records) for the appId, and
only insert when a matching analysis exists; populate metadata.compositeScore,
metadata.model and metadata.criterionCount from the real analysis data (not
hardcoded 'gpt-4o-mini' and 5), keeping identifiers like id(), orgId, userId,
appId and daysAgo(Math.max(0, appCreatedDays - 1)) unchanged. Ensure you
reference AI_SCORING_DATA or the created analysisRun rows to drive which
applications get 'scored' actions.
- Around line 2290-2375: The activity timestamps are being generated anew for
activityLog rows (in the application/job/candidate/interview activity sections)
which can create impossible timelines; modify the seeding code to capture and
reuse the chosen createdAt timestamps when inserting the primary rows (jobs,
candidates, applications, interviews) instead of calling daysAgo again.
Concretely: when you insert a job, candidate, application or interview, store
its selected createdAt in a map keyed by jobId/candidateId/appId/interviewId (or
by jobIndex/candidateIndex) (e.g. jobTimestamps, candidateTimestamps,
applicationTimestamps), then in the activityLog inserts (the blocks that call
db.insert(schema.activityLog) for actions 'created' and 'status_changed' and the
interview activity block referenced) read the timestamp from the corresponding
map and use that value for createdAt rather than computing a new daysAgo value.
Ensure keys match how IDs are derived (e.g. applicationMap keys) so all related
activities reuse the original seeded timestamp.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 1ea50f92-9535-4063-8076-2551ced8ed06

📥 Commits

Reviewing files that changed from the base of the PR and between abda1a3 and f0f8b2e.

📒 Files selected for processing (3)
  • package.json
  • server/scripts/delete-demo-org.ts
  • server/scripts/seed.ts

Comment on lines +10 to +19
const processWithLoadEnv = process as NodeJS.Process & { loadEnvFile?: (path?: string) => void }
if (!process.env.DATABASE_URL && typeof processWithLoadEnv.loadEnvFile === 'function') {
try { processWithLoadEnv.loadEnvFile('.env') } catch {}
}

const DATABASE_URL = process.env.DATABASE_URL
if (!DATABASE_URL) {
console.error('DATABASE_URL is required.')
process.exit(1)
}

⚠️ Potential issue | 🟠 Major

Reuse the same DB URL resolution as seed.ts.

This script exits unless DATABASE_URL is already present, but server/scripts/seed.ts also falls back to PGHOST/PGPORT/PGUSER/PGPASSWORD/PGDATABASE and Railway proxy vars. That makes db:seed succeed while db:reseed can fail in the same environment. Please move the resolver into a shared helper and call it from both scripts.

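A sketch of the shared resolver the comment asks for, assuming only the env vars named above; the real seed.ts fallback (for example its Railway proxy handling) may differ:

```typescript
// Hypothetical shared helper (would be exported from a common module so both
// seed.ts and delete-demo-org.ts can use it). Mirrors the fallbacks described:
// prefer DATABASE_URL, else assemble a URL from the PG* variables.
function resolveDatabaseUrl(env: Record<string, string | undefined>): string | null {
  if (env.DATABASE_URL) return env.DATABASE_URL
  const { PGHOST, PGPORT, PGUSER, PGPASSWORD, PGDATABASE } = env
  if (PGHOST && PGUSER && PGDATABASE) {
    const auth = PGPASSWORD ? `${PGUSER}:${encodeURIComponent(PGPASSWORD)}` : PGUSER
    return `postgresql://${auth}@${PGHOST}:${PGPORT ?? '5432'}/${PGDATABASE}`
  }
  return null // caller keeps its existing error/exit behavior
}
```

delete-demo-org.ts would then replace its local `DATABASE_URL` constant with `resolveDatabaseUrl(process.env)` and keep the `process.exit(1)` path when the helper returns null.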

Comment on lines +2290 to +2375
await db.insert(schema.activityLog).values({
id: id(),
organizationId: orgId,
actorId: userId,
action: 'created',
resourceType: 'job',
resourceId: jobId,
metadata: { title: jobData.title },
createdAt: daysAgo(20 + Math.floor(Math.random() * 10)),
})
totalActivities++
}

// --- Candidate creation activities ---
for (let i = 0; i < CANDIDATES_DATA.length; i++) {
const c = CANDIDATES_DATA[i]
const cId = candidateIds[i]
if (!c || !cId) continue

await db.insert(schema.activityLog).values({
id: id(),
organizationId: orgId,
actorId: userId,
action: 'created',
resourceType: 'candidate',
resourceId: cId,
metadata: { name: `${c.firstName} ${c.lastName}` },
createdAt: daysAgo(5 + Math.floor(Math.random() * 20)),
})
totalActivities++
}

// --- Application creation + status change activities ---
// Maps each application's final status to a realistic sequence of transitions
const STATUS_PIPELINE: Record<AppStatus, AppStatus[]> = {
new: [],
screening: ['screening'],
interview: ['screening', 'interview'],
offer: ['screening', 'interview', 'offer'],
hired: ['screening', 'interview', 'offer', 'hired'],
rejected: ['rejected'], // rejected can happen at any stage
}

for (let jobIndex = 0; jobIndex < JOB_APPLICATIONS.length; jobIndex++) {
const apps = JOB_APPLICATIONS[jobIndex]
const jobId = jobIds[jobIndex]
if (!apps || !jobId) continue

for (const app of apps) {
const candidateId = candidateIds[app.candidateIndex]
const appId = applicationMap.get(`${jobIndex}-${app.candidateIndex}`)
if (!candidateId || !appId) continue

// Application created activity
const appCreatedDays = 1 + Math.floor(Math.random() * 15)
await db.insert(schema.activityLog).values({
id: id(),
organizationId: orgId,
actorId: userId,
action: 'created',
resourceType: 'application',
resourceId: appId,
metadata: { candidateId, jobId },
createdAt: daysAgo(appCreatedDays),
})
totalActivities++

// Status change activities along the pipeline
const transitions = STATUS_PIPELINE[app.status] ?? []
let previousStatus: AppStatus = 'new'
for (let t = 0; t < transitions.length; t++) {
const toStatus = transitions[t]!
const transitionDays = Math.max(0, appCreatedDays - (t + 1) * 2)
await db.insert(schema.activityLog).values({
id: id(),
organizationId: orgId,
actorId: userId,
action: 'status_changed',
resourceType: 'application',
resourceId: appId,
metadata: { from: previousStatus, to: toStatus },
createdAt: daysAgo(transitionDays),
})
totalActivities++
previousStatus = toStatus
}

⚠️ Potential issue | 🟠 Major

Don't regenerate activity timestamps independently of the seeded rows.

This block creates fresh random createdAt values instead of reusing the timestamps picked when the jobs, candidates, applications, and interviews were inserted. Because those ranges are independent, the timeline can end up with impossible histories, such as an interview activity older than the same application's created activity. Capture the chosen timestamps alongside each seeded id and reuse them here.

Also applies to: 2394-2419

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/scripts/seed.ts` around lines 2290 - 2375, The activity timestamps are
being generated anew for activityLog rows (in the
application/job/candidate/interview activity sections) which can create
impossible timelines; modify the seeding code to capture and reuse the chosen
createdAt timestamps when inserting the primary rows (jobs, candidates,
applications, interviews) instead of calling daysAgo again. Concretely: when you
insert a job, candidate, application or interview, store its selected createdAt
in a map keyed by jobId/candidateId/appId/interviewId (or by
jobIndex/candidateIndex) (e.g. jobTimestamps, candidateTimestamps,
applicationTimestamps), then in the activityLog inserts (the blocks that call
db.insert(schema.activityLog) for actions 'created' and 'status_changed' and the
interview activity block referenced) read the timestamp from the corresponding
map and use that value for createdAt rather than computing a new daysAgo value.
Ensure keys match how IDs are derived (e.g. applicationMap keys) so all related
activities reuse the original seeded timestamp.
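The timestamp-capture pattern described above can be sketched as follows. This is a minimal illustration, not the seed script's actual code; `daysAgo`, `applicationTimestamps`, and `seedApplication` are stand-in names for whatever helpers and maps the real script would use.

```typescript
// Sketch: remember each seeded row's createdAt and reuse it for its activities,
// instead of rolling a fresh random timestamp per activityLog insert.
function daysAgo(n: number): Date {
  return new Date(Date.now() - n * 24 * 60 * 60 * 1000)
}

const applicationTimestamps = new Map<string, Date>()

function seedApplication(appId: string): Date {
  const createdAt = daysAgo(1 + Math.floor(Math.random() * 15))
  applicationTimestamps.set(appId, createdAt) // capture the chosen timestamp
  return createdAt
}

function activityCreatedAt(appId: string): Date {
  // Reuse the stored timestamp so the 'created' activity can never
  // predate or postdate the row it describes.
  return applicationTimestamps.get(appId) ?? daysAgo(0)
}

const appCreated = seedApplication('app-1')
const activityTs = activityCreatedAt('app-1')
```

Derived activities (status changes, interviews) would then offset from the captured timestamp rather than from an independent random draw.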

Comment thread server/scripts/seed.ts
Comment on lines +2377 to +2388
// Scored activity for applications with a score
if (app.score) {
await db.insert(schema.activityLog).values({
id: id(),
organizationId: orgId,
actorId: userId,
action: 'scored',
resourceType: 'application',
resourceId: appId,
metadata: { compositeScore: app.score, model: 'gpt-4o-mini', criterionCount: 5 },
createdAt: daysAgo(Math.max(0, appCreatedDays - 1)),
})

⚠️ Potential issue | 🟠 Major

Only emit scored activities from actual analysis data.

This loop treats any application with a score as an AI-scored event and hardcodes gpt-4o-mini plus criterionCount: 5. The seed only creates scoring criteria and analysisRun rows for the first three jobs, so the Technical Writer and Frontend Intern applications will get timeline entries for scoring that never happened. Build these activities from AI_SCORING_DATA or from the inserted analysisRun rows instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/scripts/seed.ts` around lines 2377 - 2388, The current seed creates a
'scored' activity whenever app.score is present and hardcodes
model/criterionCount; instead only emit activityLog entries when real analysis
exists by checking AI_SCORING_DATA or the inserted analysisRun rows for that
application. Update the block that inserts into schema.activityLog to first look
up the corresponding entry in AI_SCORING_DATA (or query the inserted analysisRun
records) for the appId, and only insert when a matching analysis exists;
populate metadata.compositeScore, metadata.model and metadata.criterionCount
from the real analysis data (not hardcoded 'gpt-4o-mini' and 5), keeping
identifiers like id(), orgId, userId, appId and daysAgo(Math.max(0,
appCreatedDays - 1)) unchanged. Ensure you reference AI_SCORING_DATA or the
created analysisRun rows to drive which applications get 'scored' actions.
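The gating logic the comment asks for can be sketched like this. The shape of `AI_SCORING_DATA` below is an assumption for illustration only; the real seed data may be keyed differently.

```typescript
// Sketch: only emit a 'scored' activity when a real analysis entry exists,
// and take the metadata from that entry rather than hardcoding it.
interface AnalysisEntry {
  appId: string
  compositeScore: number
  model: string
  criterionCount: number
}

// Assumed shape of the seed's scoring data; illustrative only.
const AI_SCORING_DATA: AnalysisEntry[] = [
  { appId: 'app-1', compositeScore: 82, model: 'gpt-4o-mini', criterionCount: 5 },
]

function scoredActivityMetadata(appId: string) {
  const analysis = AI_SCORING_DATA.find((a) => a.appId === appId)
  if (!analysis) return null // no analysis → no 'scored' activity for this app
  const { compositeScore, model, criterionCount } = analysis
  return { compositeScore, model, criterionCount }
}

const scored = scoredActivityMetadata('app-1')  // real analysis → metadata from the data
const skipped = scoredActivityMetadata('app-9') // no analysis → null, nothing seeded
```

With this guard, applications for jobs that never got scoring criteria (such as the Technical Writer role mentioned above) simply produce no `scored` entry.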

@railway-app railway-app Bot temporarily deployed to applirank / reqcore-pr-122 March 24, 2026 10:58 Destroyed
@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (4)
server/api/activity-log/timeline.get.ts (3)

214-215: ⚠️ Potential issue | 🟠 Major

Respect resourceType when adding synthetic upcoming interviews.

The main activity query honors query.resourceType, but this guard still appends resourceType: 'interview' items on the first page for every filter. A job- or candidate-filtered timeline will therefore include unrelated interviews.

Suggested fix
-  if (!query.before && !query.after) {
+  if (!query.before && !query.after && (!query.resourceType || query.resourceType === 'interview')) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/api/activity-log/timeline.get.ts` around lines 214 - 215, The
synthetic upcoming-interview items are being appended unconditionally in the if
(!query.before && !query.after) block and ignore query.resourceType; update that
guard to respect query.resourceType by only fetching/appending
upcomingInterviews when query.resourceType is absent or equals 'interview', or
alternatively filter upcomingInterviews to only include items matching
query.resourceType before merging; adjust the logic around upcomingInterviews
and the place where resourceType: 'interview' is injected so timelines filtered
by job or candidate no longer receive unrelated interview items.

36-42: ⚠️ Potential issue | 🟠 Major

The pagination cursor is still inclusive and timestamp-only.

With lte/gte, orderBy(desc(createdAt)), and a cursor that only returns createdAt, the next page will replay the boundary row, and equal timestamps can still be skipped or re-ordered. This will show up as duplicates or gaps in infinite scroll.

Also applies to: 62-64, 265-266

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/api/activity-log/timeline.get.ts` around lines 36 - 42, The current
use of lte/gte on activityLog.createdAt with an orderBy(desc(createdAt)) and a
timestamp-only cursor causes duplicate or skipped rows; change the cursor and
filters to use a deterministic composite cursor (createdAt + id) and switch to
exclusive comparisons (lt/gt) consistent with the sort direction. Update the
places using query.before/query.after and
conditions.push(lte/gte(activityLog.createdAt, ...)) to instead construct a
composite boundary from createdAt and id, use lt for "before" when ordering desc
and gt for "after" when ordering desc (or the opposite if order flips), and
ensure the cursor returned by the query includes both createdAt and id so
tie-breaking is deterministic (apply the same change where similar code appears
around the other occurrences you noted).
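A composite `(createdAt, id)` cursor with exclusive comparisons can be sketched over an in-memory array as below; the real endpoint would express the same boundary as a Drizzle `or(lt(...), and(eq(...), lt(...)))` condition. Rows are assumed already sorted by `createdAt DESC, id DESC`.

```typescript
// Sketch: exclusive composite cursor for descending pagination.
// The boundary row itself is never replayed, and equal timestamps
// are broken deterministically by id.
interface Row { id: string; createdAt: string }

function pageBefore(
  rows: Row[],
  cursor: { createdAt: string; id: string } | null,
  limit: number,
) {
  const filtered = cursor
    ? rows.filter(
        (r) =>
          r.createdAt < cursor.createdAt ||
          (r.createdAt === cursor.createdAt && r.id < cursor.id), // exclusive tie-break
      )
    : rows
  const page = filtered.slice(0, limit)
  const last = page[page.length - 1]
  return { page, nextCursor: last ? { createdAt: last.createdAt, id: last.id } : null }
}

const rows: Row[] = [
  { id: 'c', createdAt: '2026-03-24T10:00:00Z' },
  { id: 'b', createdAt: '2026-03-24T10:00:00Z' }, // same timestamp: id breaks the tie
  { id: 'a', createdAt: '2026-03-23T09:00:00Z' },
]

const first = pageBefore(rows, null, 2)
const second = pageBefore(rows, first.nextCursor, 2)
// second.page contains only 'a': the boundary row 'b' is not replayed.
```

The key properties are that the cursor carries both fields and the comparisons are strict, which is exactly what a timestamp-only `lte`/`gte` cursor cannot guarantee.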

100-130: ⚠️ Potential issue | 🟠 Major

Scope the enrichment lookups to the active organization.

The application and interview enrichment queries currently trust resourceId alone. If an activity row ever carries a stale or incorrect ID, the response can enrich it with job/candidate labels from another org. Use the same org-scoped filtering pattern already applied to the job and candidate lookups.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/api/activity-log/timeline.get.ts` around lines 100 - 130, The
application and interview enrichment queries lack an organization scoping filter
and can return labels from other orgs; update the application and interview
selects to apply the same org-scoped filter used for jobs/candidates by adding
an equality check against the active org ID (e.g. add
where(eq(application.orgId, organizationId)) for the application query and
ensure the interview query also constrains the joined application to the active
org (e.g. include where(eq(application.orgId, organizationId)) or
where(eq(interview.orgId, organizationId)) as appropriate) so enrichment only
returns records from the current organization.
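The leak the comment describes can be shown with a tiny in-memory model; the record shapes here are illustrative, and the real fix is an extra `eq(..., organizationId)` clause in the Drizzle query.

```typescript
// Sketch: enrichment lookups must be scoped to the active organization,
// not matched on resourceId alone.
interface Application { id: string; orgId: string; jobTitle: string }

// A row that belongs to a different org than the activity being enriched.
const applications: Application[] = [
  { id: 'app-1', orgId: 'org-b', jobTitle: 'Other Org Role' },
]

function enrichApplication(resourceId: string, organizationId: string) {
  // Scoping by orgId means a stale or incorrect resourceId can at worst
  // return null, never another org's labels.
  return applications.find((a) => a.id === resourceId && a.orgId === organizationId) ?? null
}

const leaked = applications.find((a) => a.id === 'app-1') // unscoped: leaks org-b's label
const scoped = enrichApplication('app-1', 'org-a')        // scoped: null, nothing leaks
```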
app/composables/useTimeline.ts (1)

149-152: ⚠️ Potential issue | 🟠 Major

Group by the same local date key used for labels.

todayStr is built with formatDateKey(now) in local time, but this bucket key uses the raw ISO prefix. Events near midnight will land under the wrong day header and get the wrong Today/Tomorrow label.

Suggested fix
-      const dateKey = item.createdAt.slice(0, 10)
+      const dateKey = formatDateKey(new Date(item.createdAt))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/composables/useTimeline.ts` around lines 149 - 152, The grouping uses
item.createdAt.slice(0,10) which produces a UTC/ISO prefix, causing items near
midnight to misgroup compared to the local-date labels built with
formatDateKey(now); update the grouping to generate dateKey using the same local
date helper (e.g., call formatDateKey(new Date(item.createdAt)) or equivalent)
so groupMap, the loop over allItems, and the dateKey computation use the same
local-date logic as the Today/Tomorrow labels.
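The midnight mismatch is easy to reproduce. The `formatDateKey` below mirrors the role of the composable's helper (its exact implementation is assumed): it builds the key from local calendar fields, whereas slicing the ISO string always yields the UTC date.

```typescript
// Sketch: one local-date helper for both labels and grouping keys.
function formatDateKey(d: Date): string {
  const y = d.getFullYear()
  const m = String(d.getMonth() + 1).padStart(2, '0')
  const day = String(d.getDate()).padStart(2, '0')
  return `${y}-${m}-${day}` // local calendar date, not the UTC prefix
}

// An event at 23:30 local time in a UTC+2 zone serializes with a UTC
// timestamp; slicing the ISO string buckets it under the UTC day, which
// can differ from the viewer's local day near midnight.
const createdAt = '2026-03-24T21:30:00.000Z'
const utcKey = createdAt.slice(0, 10)               // always '2026-03-24'
const localKey = formatDateKey(new Date(createdAt)) // follows the viewer's timezone
```

Using `formatDateKey` for both `todayStr` and the group key makes the Today/Tomorrow labels and the day headers agree regardless of timezone.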
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@app/composables/useTimeline.ts`:
- Around line 72-92: loadInitial and loadMore can apply late responses from a
previous filter run into the current refs (items, upcoming, hasMore, etc.); fix
by snapshotting a request token or the current activeFilter before the async
call (e.g., const requestFilter = activeFilter.value or increment a loadCounter)
and after the await return early unless the token still matches (or the counter
equals current). Apply the same guard in both loadInitial and loadMore so that
any response that does not match the activeFilter/request token is ignored and
does not mutate items, upcoming, hasMore, oldestTimestamp or newestTimestamp.

---

Duplicate comments:
In `@app/composables/useTimeline.ts`:
- Around line 149-152: The grouping uses item.createdAt.slice(0,10) which
produces a UTC/ISO prefix, causing items near midnight to misgroup compared to
the local-date labels built with formatDateKey(now); update the grouping to
generate dateKey using the same local date helper (e.g., call formatDateKey(new
Date(item.createdAt)) or equivalent) so groupMap, the loop over allItems, and
the dateKey computation use the same local-date logic as the Today/Tomorrow
labels.

In `@server/api/activity-log/timeline.get.ts`:
- Around line 214-215: The synthetic upcoming-interview items are being appended
unconditionally in the if (!query.before && !query.after) block and ignore
query.resourceType; update that guard to respect query.resourceType by only
fetching/appending upcomingInterviews when query.resourceType is absent or
equals 'interview', or alternatively filter upcomingInterviews to only include
items matching query.resourceType before merging; adjust the logic around
upcomingInterviews and the place where resourceType: 'interview' is injected so
timelines filtered by job or candidate no longer receive unrelated interview
items.
- Around line 36-42: The current use of lte/gte on activityLog.createdAt with an
orderBy(desc(createdAt)) and a timestamp-only cursor causes duplicate or skipped
rows; change the cursor and filters to use a deterministic composite cursor
(createdAt + id) and switch to exclusive comparisons (lt/gt) consistent with the
sort direction. Update the places using query.before/query.after and
conditions.push(lte/gte(activityLog.createdAt, ...)) to instead construct a
composite boundary from createdAt and id, use lt for "before" when ordering desc
and gt for "after" when ordering desc (or the opposite if order flips), and
ensure the cursor returned by the query includes both createdAt and id so
tie-breaking is deterministic (apply the same change where similar code appears
around the other occurrences you noted).
- Around line 100-130: The application and interview enrichment queries lack an
organization scoping filter and can return labels from other orgs; update the
application and interview selects to apply the same org-scoped filter used for
jobs/candidates by adding an equality check against the active org ID (e.g. add
where(eq(application.orgId, organizationId)) for the application query and
ensure the interview query also constrains the joined application to the active
org (e.g. include where(eq(application.orgId, organizationId)) or
where(eq(interview.orgId, organizationId)) as appropriate) so enrichment only
returns records from the current organization.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 7fa29e2e-4500-4a65-894f-a3f473d22dee

📥 Commits

Reviewing files that changed from the base of the PR and between f0f8b2e and b8c6ab9.

📒 Files selected for processing (4)
  • app/composables/useTimeline.ts
  • app/pages/dashboard/timeline.vue
  • server/api/activity-log/timeline.get.ts
  • server/utils/auth.ts
✅ Files skipped from review due to trivial changes (1)
  • server/utils/auth.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • app/pages/dashboard/timeline.vue

Comment on lines +72 to +92
async function loadInitial(resourceType?: string) {
isLoading.value = true
error.value = null
activeFilter.value = resourceType

try {
const query: Record<string, string | number> = { limit: 100 }
if (resourceType) query.resourceType = resourceType

const result = await $fetch('/api/activity-log/timeline', { query }) as {
items: TimelineItem[]
upcoming: TimelineItem[]
hasMore: boolean
oldestTimestamp: string | null
newestTimestamp: string | null
}

items.value = result.items
upcoming.value = result.upcoming
hasMore.value = result.hasMore
oldestTimestamp.value = result.oldestTimestamp

⚠️ Potential issue | 🟠 Major

Prevent stale responses from crossing filter boundaries.

loadInitial() and loadMore() both commit into the same refs after await, but loadMore() is still allowed while an initial reload is in flight. A late response from the previous filter can overwrite the fresh list or append an old page into the new filter's results.

Suggested fix
 export function useTimeline() {
   const items = ref<TimelineItem[]>([])
   const upcoming = ref<TimelineItem[]>([])
   const isLoading = ref(false)
   const isLoadingMore = ref(false)
   const hasMore = ref(true)
   const oldestTimestamp = ref<string | null>(null)
   const error = ref<string | null>(null)
   const activeFilter = ref<string | undefined>(undefined)
+  let requestVersion = 0

   async function loadInitial(resourceType?: string) {
+    const version = ++requestVersion
     isLoading.value = true
     error.value = null
-    activeFilter.value = resourceType

     try {
       const query: Record<string, string | number> = { limit: 100 }
       if (resourceType) query.resourceType = resourceType

       const result = await $fetch('/api/activity-log/timeline', { query }) as {
         items: TimelineItem[]
         upcoming: TimelineItem[]
         hasMore: boolean
         oldestTimestamp: string | null
         newestTimestamp: string | null
       }

+      if (version !== requestVersion) return
+      activeFilter.value = resourceType
       items.value = result.items
       upcoming.value = result.upcoming
       hasMore.value = result.hasMore
       oldestTimestamp.value = result.oldestTimestamp
     }
@@
   async function loadMore() {
-    if (isLoadingMore.value || !hasMore.value || !oldestTimestamp.value) return
+    if (isLoading.value || isLoadingMore.value || !hasMore.value || !oldestTimestamp.value) return
+    const version = requestVersion

     isLoadingMore.value = true

     try {
@@
       const result = await $fetch('/api/activity-log/timeline', { query }) as {
         items: TimelineItem[]
         upcoming: TimelineItem[]
         hasMore: boolean
         oldestTimestamp: string | null
         newestTimestamp: string | null
       }

+      if (version !== requestVersion) return
       items.value.push(...result.items)
       hasMore.value = result.hasMore
       oldestTimestamp.value = result.oldestTimestamp

Also applies to: 106-128

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/composables/useTimeline.ts` around lines 72 - 92, loadInitial and
loadMore can apply late responses from a previous filter run into the current
refs (items, upcoming, hasMore, etc.); fix by snapshotting a request token or
the current activeFilter before the async call (e.g., const requestFilter =
activeFilter.value or increment a loadCounter) and after the await return early
unless the token still matches (or the counter equals current). Apply the same
guard in both loadInitial and loadMore so that any response that does not match
the activeFilter/request token is ignored and does not mutate items, upcoming,
hasMore, oldestTimestamp or newestTimestamp.

@railway-app railway-app Bot temporarily deployed to applirank / reqcore-pr-122 March 24, 2026 15:38 Destroyed
@coderabbitai coderabbitai Bot left a comment

♻️ Duplicate comments (4)
app/pages/dashboard/timeline.vue (1)

80-92: ⚠️ Potential issue | 🟠 Major

Use watch() instead of onMounted() to attach the observer after the sentinel renders.

The issue flagged previously persists: during initial load, the page renders a loading skeleton with v-if="isLoading" (line 281), so scrollSentinel.value is null and the observer never attaches (line 81 returns early). By the time the sentinel appears in the v-else branch (line 336) after data loads, onMounted() has already completed. This breaks infinite scroll entirely.

Suggested fix
 const scrollSentinel = useTemplateRef<HTMLElement>('scrollSentinel')
 
-onMounted(() => {
-  if (!scrollSentinel.value) return
-  const observer = new IntersectionObserver(
-    (entries) => {
-      if (entries[0]?.isIntersecting && hasMore.value && !isLoadingMore.value) {
-        loadMore()
-      }
-    },
-    { rootMargin: '400px' },
-  )
-  observer.observe(scrollSentinel.value)
-  onUnmounted(() => observer.disconnect())
-})
+watch(
+  scrollSentinel,
+  (el, _, onCleanup) => {
+    if (!el) return
+    const observer = new IntersectionObserver(
+      (entries) => {
+        if (entries[0]?.isIntersecting && hasMore.value && !isLoadingMore.value) {
+          loadMore()
+        }
+      },
+      { rootMargin: '400px' },
+    )
+    observer.observe(el)
+    onCleanup(() => observer.disconnect())
+  },
+  { flush: 'post' },
+)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/pages/dashboard/timeline.vue` around lines 80 - 92, The observer is
attached inside onMounted() but returns early if scrollSentinel.value is null,
so when the sentinel is rendered later the observer never attaches; replace this
with a watcher that reacts to scrollSentinel becoming non-null (e.g.,
watch(scrollSentinel) or watchEffect) and then create an IntersectionObserver
that observes scrollSentinel.value and disconnects on unmount; keep the same
callback logic using hasMore.value, isLoadingMore.value and calling loadMore(),
and ensure you store/cleanup the observer reference (observer.disconnect()) in
onUnmounted() to avoid leaks.
app/composables/useTimeline.ts (3)

143-145: ⚠️ Potential issue | 🟠 Major

Group timeline items by local date, not UTC prefix.

The timezone mismatch flagged previously persists. Line 144 uses item.createdAt.slice(0, 10) which extracts the UTC date portion from an ISO timestamp, while formatDateKey(now) at line 136 generates a local date key. Near midnight, events may be grouped under the wrong day and Today/Tomorrow labels will be incorrect.

Suggested fix
     for (const item of allItems) {
-      const dateKey = item.createdAt.slice(0, 10)
+      const dateKey = formatDateKey(new Date(item.createdAt))
       if (!groupMap.has(dateKey)) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/composables/useTimeline.ts` around lines 143 - 145, The grouping uses
item.createdAt.slice(0, 10) (UTC substring) which mismatches the local-date key
produced by formatDateKey(now); instead parse the ISO timestamp into a Date and
generate a local date key with the same formatter used elsewhere (e.g., call
formatDateKey(new Date(item.createdAt)) or similar) so groupMap keys use local
dates consistently; update the loop in useTimeline.ts (references: allItems,
item.createdAt, formatDateKey, groupMap) to replace the slice-based key with a
local-date key derived from a Date object.

65-94: ⚠️ Potential issue | 🟠 Major

Prevent stale responses from crossing filter boundaries.

The race condition issue flagged previously remains: loadInitial() and loadMore() can apply late responses from a previous filter into the current refs. When a user quickly switches filters, the slower response may overwrite the faster one's data.

Suggested fix using request versioning
 export function useTimeline() {
   const items = ref<TimelineItem[]>([])
   const upcoming = ref<TimelineItem[]>([])
   const isLoading = ref(false)
   const isLoadingMore = ref(false)
   const hasMore = ref(true)
   const oldestTimestamp = ref<string | null>(null)
   const error = ref<string | null>(null)
   const activeFilter = ref<string | undefined>(undefined)
+  let requestVersion = 0

   async function loadInitial(resourceType?: string) {
+    const version = ++requestVersion
     isLoading.value = true
     error.value = null
-    activeFilter.value = resourceType

     try {
       const query: Record<string, string | number> = { limit: 100 }
       if (resourceType) query.resourceType = resourceType

-      const result = await $fetch('/api/activity-log/timeline', { query }) as {...}
+      const result = await $fetch<TimelineResponse>(TIMELINE_ENDPOINT, { query })

+      if (version !== requestVersion) return // Stale response, discard
+      activeFilter.value = resourceType
       items.value = result.items
       // ...
     }
     // ...
   }

   async function loadMore() {
-    if (isLoadingMore.value || !hasMore.value || !oldestTimestamp.value) return
+    if (isLoading.value || isLoadingMore.value || !hasMore.value || !oldestTimestamp.value) return
+    const version = requestVersion

     isLoadingMore.value = true

     try {
       // ...
-      const result = await $fetch('/api/activity-log/timeline', { query }) as {...}
+      const result = await $fetch<TimelineResponse>(TIMELINE_ENDPOINT, { query })

+      if (version !== requestVersion) return // Filter changed, discard
       items.value.push(...result.items)
       // ...
     }
     // ...
   }

Also applies to: 99-129

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/composables/useTimeline.ts` around lines 65 - 94, The loadInitial and
loadMore functions can apply late responses from a previous filter into the
current refs (activeFilter, items, upcoming, hasMore, oldestTimestamp,
newestTimestamp); fix this by adding a request version/token that you increment
before each request (e.g., requestVersion++), capture the current version in a
local variable at request start, and only assign the fetched result to
items/upcoming/hasMore/oldestTimestamp/newestTimestamp if the local captured
version still matches the global requestVersion; apply the same pattern to both
loadInitial and loadMore so stale responses are ignored when the user switches
filters quickly.

74-80: ⚠️ Potential issue | 🔴 Critical

Refactor $fetch type to resolve TS2321 "Excessive stack depth" typecheck failure.

The CI is failing with TS2321 at line 74. As previously suggested, extract a shared response interface and use a string-typed constant for the endpoint to break the recursive type inference:

Suggested fix
+interface TimelineResponse {
+  items: TimelineItem[]
+  upcoming: TimelineItem[]
+  hasMore: boolean
+  oldestTimestamp: string | null
+  newestTimestamp: string | null
+}
+
+const TIMELINE_ENDPOINT: string = '/api/activity-log/timeline'
+
 export function useTimeline() {
   // ...
   async function loadInitial(resourceType?: string) {
     // ...
-      const result = await $fetch('/api/activity-log/timeline', { query }) as {
-        items: TimelineItem[]
-        upcoming: TimelineItem[]
-        hasMore: boolean
-        oldestTimestamp: string | null
-        newestTimestamp: string | null
-      }
+      const result = await $fetch<TimelineResponse>(TIMELINE_ENDPOINT, { query })
     // ...
   }

   async function loadMore() {
     // ...
-      const result = await $fetch('/api/activity-log/timeline', { query }) as {
-        items: TimelineItem[]
-        upcoming: TimelineItem[]
-        hasMore: boolean
-        oldestTimestamp: string | null
-        newestTimestamp: string | null
-      }
+      const result = await $fetch<TimelineResponse>(TIMELINE_ENDPOINT, { query })
     // ...
   }

Also applies to: 111-117

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/composables/useTimeline.ts` around lines 74 - 80, Extract the inline
response shape into a shared interface (e.g. TimelineResponse { items:
TimelineItem[]; upcoming: TimelineItem[]; hasMore: boolean; oldestTimestamp:
string | null; newestTimestamp: string | null }) and replace the inline cast on
the $fetch call in useTimeline.ts (the const result = await $fetch(...)
expression) with a typed call or explicit cast to that interface; also extract
the endpoint string into a const (e.g. const TIMELINE_ENDPOINT =
'/api/activity-log/timeline' as const) and use that constant for the fetch so
TypeScript stops expanding recursive types; apply the same change to the second
$fetch usage around the block referenced (lines ~111-117).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@app/composables/useTimeline.ts`:
- Around line 143-145: The grouping uses item.createdAt.slice(0, 10) (UTC
substring) which mismatches the local-date key produced by formatDateKey(now);
instead parse the ISO timestamp into a Date and generate a local date key with
the same formatter used elsewhere (e.g., call formatDateKey(new
Date(item.createdAt)) or similar) so groupMap keys use local dates consistently;
update the loop in useTimeline.ts (references: allItems, item.createdAt,
formatDateKey, groupMap) to replace the slice-based key with a local-date key
derived from a Date object.
- Around line 65-94: The loadInitial and loadMore functions can apply late
responses from a previous filter into the current refs (activeFilter, items,
upcoming, hasMore, oldestTimestamp, newestTimestamp); fix this by adding a
request version/token that you increment before each request (e.g.,
requestVersion++), capture the current version in a local variable at request
start, and only assign the fetched result to
items/upcoming/hasMore/oldestTimestamp/newestTimestamp if the local captured
version still matches the global requestVersion; apply the same pattern to both
loadInitial and loadMore so stale responses are ignored when the user switches
filters quickly.
- Around line 74-80: Extract the inline response shape into a shared interface
(e.g. TimelineResponse { items: TimelineItem[]; upcoming: TimelineItem[];
hasMore: boolean; oldestTimestamp: string | null; newestTimestamp: string | null
}) and replace the inline cast on the $fetch call in useTimeline.ts (the const
result = await $fetch(...) expression) with a typed call or explicit cast to
that interface; also extract the endpoint string into a const (e.g. const
TIMELINE_ENDPOINT = '/api/activity-log/timeline' as const) and use that constant
for the fetch so TypeScript stops expanding recursive types; apply the same
change to the second $fetch usage around the block referenced (lines ~111-117).

In `@app/pages/dashboard/timeline.vue`:
- Around line 80-92: The observer is attached inside onMounted() but returns
early if scrollSentinel.value is null, so when the sentinel is rendered later
the observer never attaches; replace this with a watcher that reacts to
scrollSentinel becoming non-null (e.g., watch(scrollSentinel) or watchEffect)
and then create an IntersectionObserver that observes scrollSentinel.value and
disconnects on unmount; keep the same callback logic using hasMore.value,
isLoadingMore.value and calling loadMore(), and ensure you store/cleanup the
observer reference (observer.disconnect()) in onUnmounted() to avoid leaks.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 88f263ce-4d37-427b-a8e2-484af4509780

📥 Commits

Reviewing files that changed from the base of the PR and between b8c6ab9 and 8d226c8.

📒 Files selected for processing (3)
  • app/composables/useTimeline.ts
  • app/pages/dashboard/main-workspace.code-workspace
  • app/pages/dashboard/timeline.vue
✅ Files skipped from review due to trivial changes (1)
  • app/pages/dashboard/main-workspace.code-workspace

@railway-app railway-app Bot temporarily deployed to applirank / reqcore-pr-122 March 25, 2026 09:21 Destroyed
@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 8

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
app/pages/dashboard/jobs/[id]/index.vue (1)

1121-1126: ⚠️ Potential issue | 🟡 Minor

document_viewed includes non-preview downloads.

At Line 1122, non-PDF files are tracked as viewed even though the flow immediately downloads them. Track document_viewed only for actual previews, and use a separate event for download fallback.

Suggested fix
 function handleDocPreview(doc: SwipeDocument) {
-  track('document_viewed', { document_type: doc.type, mime_type: doc.mimeType })
   if (doc.mimeType !== 'application/pdf') {
+    track('document_download_requested', { document_type: doc.type, mime_type: doc.mimeType })
     // Non-PDFs: fall back to download
     window.open(`/api/documents/${doc.id}/download`, '_blank')
     return
   }
+  track('document_viewed', { document_type: doc.type, mime_type: doc.mimeType })
   docPreviewDocId.value = doc.id
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/pages/dashboard/jobs/`[id]/index.vue around lines 1121 - 1126, In
handleDocPreview, move the track('document_viewed', ...) call so it only runs
for actual preview paths (i.e., when doc.mimeType === 'application/pdf' or
whatever preview branch you use) and remove it from the download fallback;
instead, add a separate tracking call (e.g., track('document_downloaded', {
document_type: doc.type, mime_type: doc.mimeType })) in the non-preview branch
before window.open(`/api/documents/${doc.id}/download`, '_blank'); keep the same
payload shape and use the handleDocPreview function and its
doc.id/doc.type/doc.mimeType fields when adding the new event.
♻️ Duplicate comments (3)
app/composables/useTimeline.ts (3)

139-144: ⚠️ Potential issue | 🟠 Major

Timezone mismatch: items grouped by UTC date but compared against local "today".

Line 140 extracts the date from item.createdAt using .slice(0, 10), which yields a UTC date from the ISO string. However, todayStr (line 132) is computed using formatDateKey(now) which uses local time. This mismatch causes items near midnight to be grouped under the wrong day header, and "Today"/"Tomorrow" labels will be incorrect.

Suggested fix
     for (const item of allItems) {
-      const dateKey = item.createdAt.slice(0, 10)
+      const dateKey = formatDateKey(new Date(item.createdAt))
       if (!groupMap.has(dateKey)) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/composables/useTimeline.ts` around lines 139 - 144, The grouping uses
item.createdAt.slice(0,10) (UTC date) while todayStr is computed with
formatDateKey(now) (local time), causing mismatched day buckets; update the
grouping to compute dateKey with the same local-time formatter—e.g., call
formatDateKey(new Date(item.createdAt)) (or otherwise parse item.createdAt to a
Date and pass it to formatDateKey) when building groupMap so groupMap, todayStr,
and any "Today"/"Tomorrow" logic use the same timezone and date representation.

101-102: ⚠️ Potential issue | 🟠 Major

loadMore() should also check isLoading.value to avoid running during initial load.

If loadInitial() is in flight (e.g., due to a filter change), loadMore() can still execute because it only checks isLoadingMore. This can cause items from the old filter to be appended to results after the new filter's response arrives.

Suggested fix
   async function loadMore() {
-    if (isLoadingMore.value || !hasMore.value || !oldestTimestamp.value) return
+    if (isLoading.value || isLoadingMore.value || !hasMore.value || !oldestTimestamp.value) return
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/composables/useTimeline.ts` around lines 101 - 102, The loadMore function
must also guard against the initial load in progress: update the early-return
condition in loadMore to include isLoading.value so it returns immediately when
loadInitial is in flight, preventing stale results from being appended; locate
the loadMore function and add the isLoading check alongside isLoadingMore,
hasMore, and oldestTimestamp to avoid the race with loadInitial.

73-96: ⚠️ Potential issue | 🟠 Major

Race condition: stale responses can overwrite results when filter changes rapidly.

The activeFilter is set before the await (line 76), but there's no guard to prevent a late-arriving response from a previous filter from overwriting the current results. If a user quickly changes filters, the response from the first filter could arrive after the second filter's request started, corrupting the displayed data.

Add a request version counter and check it after the await to discard stale responses.

Suggested fix
 export function useTimeline() {
   const items = ref<TimelineItem[]>([])
   const upcoming = ref<TimelineItem[]>([])
   const isLoading = ref(false)
   const isLoadingMore = ref(false)
   const hasMore = ref(true)
   const oldestTimestamp = ref<string | null>(null)
   const error = ref<string | null>(null)
   const activeFilter = ref<string | undefined>(undefined)
+  let requestVersion = 0

   async function loadInitial(resourceType?: string) {
+    const version = ++requestVersion
     isLoading.value = true
     error.value = null
-    activeFilter.value = resourceType

     try {
       const query: Record<string, string | number> = { limit: 100 }
       if (resourceType) query.resourceType = resourceType

       const result = await $fetch<TimelineResponse>('/api/activity-log/timeline', { query })

+      if (version !== requestVersion) return
+      activeFilter.value = resourceType
       items.value = result.items
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/composables/useTimeline.ts` around lines 73 - 96, In loadInitial, prevent
stale responses from overwriting state by adding a request version counter
(e.g., timelineRequestVersion) that you increment at the start of loadInitial,
capture into a local const (e.g., currentRequestVersion) before the await, and
after the await verify the captured version still matches the shared
timelineRequestVersion; only then assign items/upcoming/hasMore/oldestTimestamp
and clear error/isLoading. Update the error/finally branches to also check the
version so they don't clear state for newer requests; touch the loadInitial
function and the shared reactive refs (isLoading, error, activeFilter, items,
upcoming, hasMore, oldestTimestamp) to implement this guard.
🧹 Nitpick comments (2)
app/composables/useTrack.ts (1)

108-118: Protect captureError from bubbling analytics failures.

Line 113 can throw in edge cases (serialization/runtime SDK failures), which would make telemetry affect product flow. Wrap this capture in try/catch like other fire-and-forget tracking paths.

Proposed hardening
 function captureError(error: unknown, properties?: Record<string, unknown>) {
   if (!import.meta.client) return
   const ph = getPostHog()
   if (!ph || !ph.has_opted_in_capturing()) return

-  ph.captureException(error instanceof Error ? error : new Error(String(error)), {
-    path: route.path,
-    viewport_width: window.innerWidth,
-    ...properties,
-  })
+  try {
+    ph.captureException(error instanceof Error ? error : new Error(String(error)), {
+      path: route.path,
+      viewport_width: window.innerWidth,
+      ...properties,
+    })
+  }
+  catch {
+    // never let telemetry break caller flow
+  }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/composables/useTrack.ts` around lines 108 - 118, The captureError
function currently calls ph.captureException directly and can throw (e.g.,
serialization or SDK runtime errors); wrap the call to
getPostHog()/ph.captureException in a try/catch so any errors from analytics are
swallowed and do not bubble to product flow, preserving existing checks
(import.meta.client, ph, ph.has_opted_in_capturing()); include the same payload
(route.path, viewport_width, ...properties) in the protected call and log or
ignore the caught error silently to keep this path fire-and-forget.
server/utils/posthog.ts (1)

33-35: Consider avoiding hardcoded app version in super properties.

Line 34 uses a fixed '1.2.0', which can silently drift from deployed code and skew event segmentation. Prefer an env-injected build version.

Proposed tweak
 client.register({
   $app_name: 'reqcore',
-  $app_version: '1.2.0',
+  $app_version: process.env.APP_VERSION || 'unknown',
   $environment: process.env.RAILWAY_ENVIRONMENT_NAME || 'development',
   $source: 'server',
 })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/utils/posthog.ts` around lines 33 - 35, The super properties object
currently hardcodes $app_version as '1.2.0' which can drift; change $app_version
to derive from an environment or build-injected value (e.g.,
process.env.APP_VERSION) or read the package version at startup and fall back to
'development' when missing. Update the code that builds the super properties
(the $app_version entry in server/utils/posthog.ts) to use
process.env.APP_VERSION || packageVersion and ensure the build pipeline sets
APP_VERSION during deployment.
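One way to implement the fallback chain is a small pure resolver; the names are assumptions, and whether the final sentinel should be `'unknown'` or `'development'` is a project choice:

```typescript
// Resolve the app version: env override first, then package.json, then a
// sentinel. Both inputs are passed in so the function stays pure and testable.
function resolveAppVersion(
  env: Record<string, string | undefined>,
  packageJson: string,
): string {
  if (env.APP_VERSION) return env.APP_VERSION
  try {
    const pkg = JSON.parse(packageJson) as { version?: string }
    return pkg.version ?? 'unknown'
  } catch {
    return 'unknown' // malformed package.json: fall back to the sentinel
  }
}
```

The server util would call this once at startup with `process.env` and the raw `package.json` contents, so `$app_version` always matches the deployed build.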

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: ec56014d-5cf5-4985-80d5-1965f1c4a8a0

📥 Commits

Reviewing files that changed from the base of the PR and between 8d226c8 and 674993c.

📒 Files selected for processing (21)
  • app/components/CandidateDetailSidebar.vue
  • app/components/ScoreBreakdown.vue
  • app/composables/usePostHogIdentity.ts
  • app/composables/useTimeline.ts
  • app/composables/useTrack.ts
  • app/pages/dashboard/interviews/[id].vue
  • app/pages/dashboard/jobs/[id]/ai-analysis.vue
  • app/pages/dashboard/jobs/[id]/index.vue
  • app/pages/dashboard/jobs/[id]/settings.vue
  • app/pages/dashboard/settings/index.vue
  • app/plugins/posthog-identity.client.ts
  • nuxt.config.ts
  • server/api/applications/[id].patch.ts
  • server/api/candidates/index.post.ts
  • server/api/interviews/index.post.ts
  • server/api/jobs/[id].patch.ts
  • server/api/jobs/index.post.ts
  • server/api/public/jobs/[slug]/apply.post.ts
  • server/middleware/posthog-api-tracking.ts
  • server/utils/posthog.ts
  • server/utils/trackEvent.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • nuxt.config.ts

Comment on lines 248 to 251
async function handleDownload(docId: string) {
try {
track('document_downloaded', { document_id: docId })
await downloadDocument(docId)

⚠️ Potential issue | 🟡 Minor

document_downloaded is emitted before confirming success.

At Line 250, failures still produce a “downloaded” event. Emit after await downloadDocument(docId) or rename to document_download_requested.

Suggested fix
 async function handleDownload(docId: string) {
   try {
-    track('document_downloaded', { document_id: docId })
     await downloadDocument(docId)
+    track('document_downloaded', { document_id: docId })
   } catch {
     toast.error('Failed to download document')
   }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/components/CandidateDetailSidebar.vue` around lines 248 - 251, The event
'document_downloaded' is emitted in handleDownload before confirming the
download succeeded; move the track('document_downloaded', { document_id: docId
}) call to after the await downloadDocument(docId) so it only fires on success,
or alternatively rename the event to 'document_download_requested' if you intend
to signal initiation rather than completion; update the track call in the
handleDownload function accordingly.
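The success-only ordering generalizes to a small helper; the name is hypothetical, and in practice the real handler can simply reorder the two statements as the fix above suggests:

```typescript
// Run an action and fire the tracking callback only after it resolves.
// A rejected action skips tracking entirely.
async function withSuccessTracking<T>(
  action: () => Promise<T>,
  trackSuccess: () => void,
): Promise<T> {
  const result = await action() // throws on failure, so tracking is skipped
  trackSuccess()
  return result
}
```

This pattern also covers the `job_deleted` and `org_deleted` findings below, which have the same emit-before-confirm shape.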

Comment on lines +51 to 68
async ([org, consented]) => {
if (consented) {
if (org?.id) {
// Only org id and name are forwarded; slug is omitted to minimise data.
;($posthogSetOrganization as (org: { id: string, name?: string }) => void)({
// Fetch the current member's role to enrich group properties — useful
// for debugging permission issues without exposing personal data.
let memberRole: string | undefined
try {
const { data } = await authClient.organization.getActiveMemberRole()
memberRole = data?.role ?? undefined
}
catch { /* non-critical; role is just an enrichment property */ }

// Only org id, name, and member role are forwarded; slug is omitted to minimise data.
;($posthogSetOrganization as (org: { id: string, name?: string, member_role?: string }) => void)({
id: org.id,
name: org.name || undefined,
member_role: memberRole,
})

⚠️ Potential issue | 🟠 Major

Prevent stale organization attribution from out-of-order async watcher runs.

At Line 58, the awaited role fetch can resolve after org/consent changes, and Line 64 can then send a stale org payload to PostHog. This can misattribute events to the wrong organization.

Suggested fix
   watch(
     [() => activeOrgState.value?.data, hasConsented] as const,
-    async ([org, consented]) => {
+    async ([org, consented], _prev, onCleanup) => {
+      let cancelled = false
+      onCleanup(() => { cancelled = true })
+
       if (consented) {
         if (org?.id) {
+          const orgId = org.id
           // Fetch the current member's role to enrich group properties — useful
           // for debugging permission issues without exposing personal data.
           let memberRole: string | undefined
           try {
             const { data } = await authClient.organization.getActiveMemberRole()
             memberRole = data?.role ?? undefined
           }
           catch { /* non-critical; role is just an enrichment property */ }

+          if (cancelled) return
+          if (!hasConsented.value) return
+          if (activeOrgState.value?.data?.id !== orgId) return
+
           // Only org id, name, and member role are forwarded; slug is omitted to minimise data.
           ;($posthogSetOrganization as (org: { id: string, name?: string, member_role?: string }) => void)({
-            id: org.id,
+            id: orgId,
             name: org.name || undefined,
             member_role: memberRole,
           })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/composables/usePostHogIdentity.ts` around lines 51 - 68, The watcher
async callback may call authClient.organization.getActiveMemberRole() which can
resolve after org/consented have changed, causing a stale org to be sent to
PostHog; fix by snapshotting the relevant identity (e.g., const snapshotOrgId =
org?.id and const snapshotConsented = consented) before the await, then after
the await verify snapshotConsented is still true and org?.id === snapshotOrgId
(and org is defined) before calling $posthogSetOrganization; use the existing
symbols (the async watcher callback,
authClient.organization.getActiveMemberRole, and $posthogSetOrganization) to
implement this guard so out-of-order resolutions don’t send stale organization
data.

Comment on lines +119 to +121
catch (err) {
console.error('[Timeline] Failed to load more:', err)
}

⚠️ Potential issue | 🟡 Minor

Inconsistent error handling: loadMore doesn't expose errors to consumers.

loadInitial sets error.value on failure (line 90), but loadMore only logs to console. Consumers relying on the error ref won't know when pagination fails, potentially leaving users stuck with no feedback.

Suggested fix
     catch (err) {
+      error.value = err instanceof Error ? err.message : 'Failed to load more items'
       console.error('[Timeline] Failed to load more:', err)
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/composables/useTimeline.ts` around lines 119 - 121, The catch block in
loadMore currently only logs to console and doesn't update the shared error
state like loadInitial does; update the loadMore error handling so that on
failure you set error.value to the caught error (or a formatted message), ensure
isLoading.value is set to false if applicable, and keep any existing state
updates consistent with loadInitial so consumers reading the error ref see the
pagination failure (refer to the loadMore function and the error.value used in
loadInitial).

Comment on lines 101 to +106
await updateInterview({ status: newStatus })
track('interview_status_changed', {
interview_id: interviewId,
from_status: interview.value?.status,
to_status: newStatus,
})

⚠️ Potential issue | 🟠 Major

from_status is captured after refresh and can be incorrect.

Because Line 101 triggers updateInterview() (which refreshes the interview ref), the value read at Line 104 may already equal newStatus, corrupting transition analytics.

Proposed fix
 async function handleTransition(newStatus: InterviewStatus) {
   isTransitioning.value = true
   try {
+    const previousStatus = interview.value?.status
     await updateInterview({ status: newStatus })
     track('interview_status_changed', {
       interview_id: interviewId,
-      from_status: interview.value?.status,
+      from_status: previousStatus,
       to_status: newStatus,
     })
   } catch (err: any) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/pages/dashboard/interviews/[id].vue` around lines 101 - 106, The tracking
call records the previous status after updateInterview refreshes the interview
ref, so from_status can equal newStatus; before calling updateInterview({
status: newStatus }) capture the current interview.value?.status into a local
variable (e.g., prevStatus) and then call track('interview_status_changed', {
interview_id: interviewId, from_status: prevStatus, to_status: newStatus }) to
ensure the transition analytics use the pre-update value; update references
around updateInterview, track, interviewId, interview and newStatus accordingly.
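The capture-before-await pattern in isolation; the names mirror the review's suggestion but the types and the simulated refresh are invented for the sketch:

```typescript
// Snapshot mutable state before the await so the analytics payload records
// the true from -> to transition, even though the update refreshes the ref.
type TrackedEvent = { from_status?: string, to_status: string }

async function changeStatus(
  interview: { status: string },
  newStatus: string,
  update: (status: string) => Promise<void>,
  track: (event: TrackedEvent) => void,
): Promise<void> {
  const prevStatus = interview.status // captured before the update
  await update(newStatus)
  interview.status = newStatus // simulate the refresh done by updateInterview
  track({ from_status: prevStatus, to_status: newStatus })
}
```

Without the snapshot, reading `interview.status` after the await would yield `newStatus` and the transition would collapse to `completed -> completed`.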

Comment on lines +963 to 964
track('job_deleted', { job_id: jobId })
await deleteJob()

⚠️ Potential issue | 🟡 Minor

job_deleted is tracked before deletion is confirmed.

At Line 963, failed deletes will still be counted as successful deletes. Either emit after confirmed success or rename to job_delete_requested.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/pages/dashboard/jobs/[id]/index.vue` around lines 963 - 964, The event
tracking call track('job_deleted', { job_id: jobId }) is executed before
deletion is confirmed; move or change it so only successful deletes are tracked:
call await deleteJob() first, check the result/absence of errors, then call
track('job_deleted', { job_id: jobId }) on success (or alternatively rename the
pre-call to track('job_delete_requested', { job_id: jobId }) if you want to log
the intent). Update the code around the delete invocation (references:
deleteJob, track, job_deleted, job_delete_requested, jobId) to ensure tracking
reflects actual success.

Comment on lines +170 to 171
track('job_deleted', { job_id: jobId, source: 'settings' })
await deleteJob()

⚠️ Potential issue | 🟠 Major

Track job_deleted only after successful deletion.

Line 170 emits job_deleted before Line 171 executes the actual delete. If deletion fails, analytics still records a success event.

Proposed fix
 async function handleDelete() {
   isDeleting.value = true
   try {
-    track('job_deleted', { job_id: jobId, source: 'settings' })
     await deleteJob()
+    track('job_deleted', { job_id: jobId, source: 'settings' })
   } catch (err: any) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/pages/dashboard/jobs/[id]/settings.vue` around lines 170 - 171, Move the
analytics call so it only runs after a successful delete: await deleteJob() and
confirm it succeeded (e.g., inside the try block after await or by checking the
resolved result) before calling track('job_deleted', { job_id: jobId, source:
'settings' }); ensure you handle errors from deleteJob() (try/catch) so failed
deletions do not emit the job_deleted event.

Comment on lines +100 to 103
track('org_deleted')
await authClient.organization.delete({
organizationId: activeOrg.value!.id,
})

⚠️ Potential issue | 🟠 Major

Emit org_deleted after successful delete response.

Line 100 fires org_deleted before Line 101 performs the delete operation, so failed deletes are currently tracked as successful.

Proposed fix
   try {
-    track('org_deleted')
     await authClient.organization.delete({
       organizationId: activeOrg.value!.id,
     })
+    track('org_deleted')
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/pages/dashboard/settings/index.vue` around lines 100 - 103, The event
`org_deleted` is emitted before the delete actually completes; move the call to
track('org_deleted') to after the awaited delete call so it only fires on
success: call await authClient.organization.delete({ organizationId:
activeOrg.value!.id }) first and then invoke track('org_deleted') (optionally
inside the same try block after the await) so failed deletes are not tracked as
successful.

Comment on lines +30 to +35
if (duration > 3000) {
trackApiError(event, statusCode, {
duration_ms: duration,
slow_request: true,
})
}

⚠️ Potential issue | 🟠 Major

Slow-request branch is misclassified as api error and can double-count failures.

At Line 30, all slow requests are sent through trackApiError, including successful responses, and slow 4xx/5xx produce duplicate error events.

Suggested fix
       // Track API errors (4xx client errors, 5xx server errors)
       if (statusCode >= 400) {
-        trackApiError(event, statusCode, { duration_ms: duration })
+        trackApiError(event, statusCode, {
+          duration_ms: duration,
+          slow_request: duration > 3000,
+        })
       }

       // Track slow requests (>3s threshold)
-      if (duration > 3000) {
-        trackApiError(event, statusCode, {
-          duration_ms: duration,
-          slow_request: true,
-        })
+      if (duration > 3000 && statusCode < 400) {
+        trackEvent(event, null, 'api slow_request', {
+          status_code: statusCode,
+          duration_ms: duration,
+        })
       }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/middleware/posthog-api-tracking.ts` around lines 30-35, the current logic always calls `trackApiError` when `duration > 3000`, which misclassifies slow successful responses and duplicates events for error responses. Change the branching so that when `duration > 3000` you only emit a slow-request metric for non-error responses (`statusCode < 400`), e.g. via a separate tracker, and for error responses (`statusCode >= 400`) avoid calling `trackApiError` twice by adding `slow_request: true` to the existing `trackApiError` invocation instead of emitting a separate slow event.

- Added OpenTelemetry logging dependencies to package.json and package-lock.json.
- Implemented logger initialization and shutdown in Nitro plugin.
- Created logger utility functions for INFO, WARN, and ERROR levels.
- Enhanced API handlers to log significant events (e.g., application creation, job status changes) with structured logs.
- Captured errors and slow requests in the middleware for better observability.
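The leveled logger helpers described in these commits can be sketched roughly as follows. The names (`logInfo`, `logWarn`, `logError`) and the structured-attribute shape are assumptions; the real utilities would forward records to the OpenTelemetry logs SDK rather than an in-memory sink:

```typescript
// Rough sketch of leveled, structured logging helpers. A simple sink
// captures records here so the shape is visible; the real Nitro plugin
// would emit via the OpenTelemetry logs API instead.
type LogAttrs = Record<string, string | number | boolean>
type LogRecord = { severity: 'INFO' | 'WARN' | 'ERROR'; body: string; attrs: LogAttrs }

const sink: LogRecord[] = []
const emit = (severity: LogRecord['severity'], body: string, attrs: LogAttrs = {}) =>
  sink.push({ severity, body, attrs })

const logInfo = (body: string, attrs?: LogAttrs) => emit('INFO', body, attrs)
const logWarn = (body: string, attrs?: LogAttrs) => emit('WARN', body, attrs)
const logError = (body: string, attrs?: LogAttrs) => emit('ERROR', body, attrs)

// e.g. an API handler logging a significant event with structured attributes
logInfo('application created', { applicationId: 'app_123', orgId: 'org_1' })
```

Keeping attributes structured (rather than interpolated into the message string) is what lets a backend filter logs by `orgId` or `applicationId` later.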
@railway-app railway-app Bot temporarily deployed to applirank / reqcore-pr-122 March 25, 2026 09:38 Destroyed
…r handling across multiple modules and add Vitest setup for logging stubs
…tency

- Changed transition classes for various statuses in CandidateDetailSidebar.vue, PipelineCard.vue, and [id].vue to use a new color scheme.
- Updated status badge classes across multiple components to align with the new color scheme.
- Introduced search functionality in timeline.vue to filter timeline events based on user input.
- Enhanced getActionStyle function to derive colors from target pipeline status for better visual feedback.
- Adjusted API endpoint in timeline.get.ts to conditionally fetch upcoming interviews based on query parameters.
…ion and update date displays across applications
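The timeline search introduced in these commits can be sketched as a case-insensitive filter over event fields. The `TimelineEvent` shape and field names here are illustrative assumptions, not the real types from `timeline.vue`:

```typescript
// Hypothetical sketch of filtering timeline events by a search query,
// matching case-insensitively against title and actor. The event shape
// is illustrative only.
type TimelineEvent = { title: string; actor: string; occurredAt: string }

function filterTimeline(events: TimelineEvent[], query: string): TimelineEvent[] {
  const q = query.trim().toLowerCase()
  if (!q) return events // empty or whitespace-only query shows everything
  return events.filter(
    (e) => e.title.toLowerCase().includes(q) || e.actor.toLowerCase().includes(q),
  )
}

const sample: TimelineEvent[] = [
  { title: 'Application created', actor: 'Ada', occurredAt: '2026-03-24' },
  { title: 'Interview scheduled', actor: 'Grace', occurredAt: '2026-03-25' },
]
```

Normalizing the query once (trim plus lowercase) keeps the per-event check cheap when this runs against a reactive input on every keystroke.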
@JoachimLK JoachimLK merged commit 5b1c694 into main Mar 26, 2026
5 checks passed
@coderabbitai coderabbitai Bot mentioned this pull request Apr 24, 2026
9 tasks