
feat: introduce 24-hour sync window and DB-failure fallback for VS Code extension #74

Open

Copilot wants to merge 5 commits into main from copilot/introduce-24-hour-sync-window

Conversation

Contributor

Copilot AI commented Apr 13, 2026

Adds an opt-in 24-hour sync window and an automatic local fallback buffer for when the database or server is unreachable. Heartbeats are never lost — they are either intentionally deferred (sync window mode) or automatically recovered after a connection failure (fallback mode).

Extension (extension/)

  • New config devmeter.syncWindow (boolean, default false) — toggle via VS Code Settings.
  • sendHeartbeat: routes to local buffer (globalState) when syncWindow is on; when off, if a heartbeat POST fails due to a server/network error (5xx or no response), the heartbeat is saved to the local fallback buffer instead of being dropped. Auth/4xx errors are not retried.
  • checkAndFlushSyncWindow: called on activation and every 5 min; behaviour depends on mode:
    • syncWindow=true: auto-flushes once ≥24 h have elapsed since last sync.
    • syncWindow=false: flushes fallback-buffered heartbeats immediately whenever the server is reachable again.
  • flushPendingHeartbeats(silent?): POSTs all buffered heartbeats to /api/heartbeat/batch, clears local buffer, records sync timestamp. Passes silent=true for background auto-retries to avoid repeated toast notifications.
  • devmeter.syncNow: always flushes all pending heartbeats regardless of syncWindow setting.
  • devmeter.showMenu: dynamically surfaces pending count for both sync-window and fallback-buffer cases, e.g. "Sync Now (42 pending heartbeats)".

API (client/)

New route POST /api/heartbeat/batch (app/api/heartbeat/batch/route.ts):

```ts
// Extension sends when flush triggers
await axios.post(`${apiUrl}/heartbeat/batch`, { heartbeats: pending }, { ... });

// Server response
{ status: "ok", synced: 42 }
```
  • Authenticates via API key (same as single-heartbeat route).
  • Accepts up to 1,000 heartbeats; inserts atomically via prisma.heartbeat.createMany.
  • Awards XP in bulk, runs achievement checks, and invalidates stats cache — all in the background to keep latency near-instant.
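A sketch of the request validation the batch route performs, using the shape and cap from the description above (the function name and error messages are illustrative, not the repo's actual code):

```typescript
const MAX_BATCH_SIZE = 1000; // matches the documented cap

interface IncomingHeartbeat {
  timestamp?: number;
  project?: string;
  language?: string;
  file?: string;
}

// Returns an HTTP status + body, mirroring the { status: "ok", synced: n } response.
function validateBatch(body: unknown): { status: number; json: Record<string, unknown> } {
  if (typeof body !== "object" || body === null || !Array.isArray((body as any).heartbeats)) {
    return { status: 400, json: { error: "heartbeats array required" } };
  }
  const heartbeats = (body as any).heartbeats as IncomingHeartbeat[];
  if (heartbeats.length === 0) {
    return { status: 400, json: { error: "Empty batch" } };
  }
  if (heartbeats.length > MAX_BATCH_SIZE) {
    return { status: 400, json: { error: `Batch exceeds ${MAX_BATCH_SIZE} heartbeats` } };
  }
  return { status: 200, json: { status: "ok", synced: heartbeats.length } };
}
```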

Database (client/prisma/)

  • Schema: Added optional heartbeatId String? @unique field to the Heartbeat model for batch deduplication — prevents duplicate records when retrying a buffered batch.
  • Migration 20260424000000_add_heartbeat_id_to_heartbeat: adds the heartbeatId TEXT column and its unique index to the Heartbeat table.

Copilot AI linked an issue Apr 13, 2026 that may be closed by this pull request
@vercel

vercel Bot commented Apr 13, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

| Project | Deployment | Actions | Updated (UTC) |
| --- | --- | --- | --- |
| dev-meter-bypy | Ready | Preview, Comment | Apr 24, 2026 7:42pm |
| dev-meter-v2 | Ready | Preview, Comment | Apr 24, 2026 7:42pm |

Agent-Logs-Url: https://github.com/Zaid-maker/DevMeter/sessions/d1627f16-c62e-4988-897c-4aea979be57e

Co-authored-by: zaidhafeezvu <219703339+zaidhafeezvu@users.noreply.github.com>
Copilot AI changed the title [WIP] Implement 24-hour sync window feature feat: introduce 24-hour sync window for VS Code extension Apr 13, 2026
Copilot AI requested a review from zaidhafeezvu April 13, 2026 18:31
@zaidhafeezvu
Collaborator

@copilot alright, there is one issue right now: the Neon database is going to run out very soon, so we need this feature to work as a fallback if the database connection fails

Contributor Author

Copilot AI commented Apr 21, 2026

@copilot alright, there is one issue right now: the Neon database is going to run out very soon, so we need this feature to work as a fallback if the database connection fails

Done in b78458c. The local buffer now acts as a fallback regardless of whether syncWindow is enabled:

  • On heartbeat failure (syncWindow=false): if the POST fails with a server/network error (5xx or no response — i.e., DB connection failure, Neon quota exceeded, etc.), the heartbeat is saved to globalState instead of being dropped. Auth/4xx errors are not buffered since retrying them won't help.
  • On recovery: checkAndFlushSyncWindow (runs on startup and every 5 min) now flushes fallback-buffered heartbeats immediately when the server becomes reachable again — no 24-hour gate in this path.
  • Manual sync: devmeter.syncNow always flushes all pending heartbeats regardless of syncWindow setting.
  • Background retries are silent (no toast) to avoid noise; the 24h window flush and manual sync still show the notification.
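The mode-dependent flush gate above reduces to a small predicate (a simplified sketch; the real checkAndFlushSyncWindow also reads globalState and depends on server reachability):

```typescript
const SYNC_WINDOW_MS = 24 * 60 * 60 * 1000;

// Should a periodic run of checkAndFlushSyncWindow flush the buffer?
// - fallback mode (syncWindow off): retry as soon as the check runs
// - window mode (syncWindow on): only once 24h have elapsed since last sync
function shouldFlush(
  syncWindow: boolean,
  pendingCount: number,
  lastSyncTime: number, // epoch ms of last successful sync
  now: number
): boolean {
  if (pendingCount === 0) return false;
  if (!syncWindow) return true;
  return now - lastSyncTime >= SYNC_WINDOW_MS;
}
```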

Warning

Firewall rules blocked me from connecting to one or more addresses (expand for details)

I tried to connect to the following addresses, but was blocked by firewall rules:

  • checkpoint.prisma.io
    • Triggering command: /opt/hostedtoolcache/node/24.15.0/x64/bin/node /opt/hostedtoolcache/node/24.15.0/x64/bin/node /home/REDACTED/work/DevMeter/DevMeter/client/node_modules/prisma/build/child {"product":"prisma","version":"7.4.2","cli_install_type":"local","information":"","local_timestamp":"2026-04-21T18:11:49Z","project_hash":"1c4597a0","cli_path":"/home/REDACTED/work/DevMeter/DevMeter/client/node_modules/.bin/prisma","cli_path_hash":"16ef0d16 (dns block)

If you need me to access, download, or install something from one of these locations, you can either:

Copilot AI changed the title feat: introduce 24-hour sync window for VS Code extension feat: introduce 24-hour sync window and DB-failure fallback for VS Code extension Apr 21, 2026
@Zaid-maker Zaid-maker marked this pull request as ready for review April 21, 2026 18:13
@Zaid-maker
Owner

@coderabbitai review

@coderabbitai
Contributor

coderabbitai Bot commented Apr 21, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@coderabbitai
Contributor

coderabbitai Bot commented Apr 21, 2026

📝 Walkthrough

Walkthrough

The changes introduce a batched heartbeat ingestion endpoint and local buffering capability for the VS Code extension. The new API endpoint validates requests via bearer token authentication, accepts a batch of up to 1000 heartbeats, stores them in the database, and triggers background processing. The extension now optionally buffers heartbeats locally for deferred 24-hour batch syncing instead of sending immediately.

Changes

  • Batched Heartbeat API (client/app/api/heartbeat/batch/route.ts): New Next.js route handler implementing batched heartbeat ingestion with bearer-token authentication, Redis-cached API key validation, JSON payload parsing with a maximum batch size of 1000, database insertion via prisma.heartbeat.createMany, and asynchronous background processing (XP awards, level computation, achievement evaluation, cache invalidation).
  • Extension Configuration (extension/package.json): Added a boolean configuration property devmeter.syncWindow (default false) to enable local heartbeat buffering with 24-hour batch-sync behavior.
  • Extension Buffering Logic (extension/src/extension.ts): Introduced persistent local buffering using extensionContext.globalState with conditional branching: when syncWindow is enabled, heartbeats are buffered locally and flushed after 24 hours; when disabled, heartbeats are sent immediately, with fallback buffering on network failures. Added a checkAndFlushSyncWindow() scheduler (on startup and every 5 minutes) and a flushPendingHeartbeats() function for batch posting to the new endpoint. Updated the "Sync Now" menu to reflect buffered state and trigger a manual flush.

Sequence Diagram

```mermaid
sequenceDiagram
    participant User as VS Code User
    participant Ext as Extension
    participant LocalBuffer as Local Buffer<br/>(globalState)
    participant Server as Batch API<br/>Endpoint
    participant DB as Database<br/>(Prisma)
    participant Redis as Redis Cache
    participant BG as Background<br/>Processing

    User->>Ext: Generate heartbeat<br/>(activity detected)

    alt syncWindow Enabled
        Ext->>LocalBuffer: Buffer heartbeat<br/>(24-hour deferred)
        Note over Ext,LocalBuffer: Heartbeats stored locally,<br/>no immediate send
    else syncWindow Disabled
        Ext->>Server: POST heartbeat<br/>(immediate send)
        alt Server Success
            Server->>DB: Insert heartbeat
            Server-->>Ext: ✓ Success response
        else Network/Server Failure
            Ext->>LocalBuffer: Fallback buffer<br/>heartbeat
            Ext-->>User: Queued locally
        end
    end

    par Periodic Flush (every 5 min or manual)
        Ext->>Ext: checkAndFlushSyncWindow()<br/>or devmeter.syncNow
        alt 24 Hours Elapsed (syncWindow)
            Ext->>LocalBuffer: Read pending<br/>heartbeats
        else Immediate (normal)
            Ext->>LocalBuffer: Read pending<br/>heartbeats
        end
        Ext->>Server: POST /api/heartbeat/batch<br/>{ heartbeats: [...] }

        activate Server
        Server->>Redis: Validate API key<br/>(24h TTL cache)
        alt Key Valid & User Active
            Server->>DB: prisma.heartbeat<br/>.createMany(batch)
            Server->>BG: Trigger async tasks
            activate BG
                BG->>DB: Award XP
                BG->>DB: Compute user level
                BG->>DB: Evaluate & unlock<br/>achievements
                BG->>Redis: Invalidate<br/>stats/contributions<br/>cache keys
            deactivate BG
            Server-->>Ext: { status: "ok",<br/>synced: count }
        else Invalid Key or Deleted User
            Server-->>Ext: 401/400 error
        end
        deactivate Server

        Ext->>LocalBuffer: Clear synced<br/>heartbeats
        Ext->>Ext: Record lastSyncTime
        Ext-->>User: Show sync status
    end
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Poem

🐰 Hops with glee!

Batches of heartbeats, buffered with care,
A sync window opens, suspended in air—
Twenty-four hours until the flush,
Then to the server in one eager rush!
Background tasks dance while heartbeats are stored,
The rabbits rejoice at the features explored! 💓

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 28.57%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (4 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | The title accurately summarizes the primary changes: introducing a 24-hour sync window and a fallback buffer for heartbeat handling in the VS Code extension. |
| Linked Issues check | ✅ Passed | Check skipped because no linked issues were found for this pull request. |
| Out of Scope Changes check | ✅ Passed | Check skipped because no linked issues were found for this pull request. |
| Description check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 6

🧹 Nitpick comments (1)
client/app/api/heartbeat/batch/route.ts (1)

158-165: Achievement queries scan all heartbeats per batch — won't scale.

prisma.heartbeat.count on line 158 and especially prisma.heartbeat.groupBy({ by: ['language'] }) on lines 161-164 run against the user's full heartbeat history on every batch sync, just to check two trivial thresholds (>= 30, >= 3). For an active user with months of data this becomes an expensive query on the hot path of every 24h flush, and it only grows over time.

Two cheap wins:

  • Gate the checks on unlockedSlugs first — if hour-1 and languages-3 are already unlocked, skip the queries entirely.
  • For languages-3, findMany({ select: { language: true }, distinct: ['language'], take: 3 }) returns as soon as 3 distinct values are found and avoids a full aggregation.
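The early-exit idea behind the second suggestion, shown in plain TypeScript for illustration (the actual fix would use Prisma's distinct + take as described above; this is just the in-memory analogue):

```typescript
// Count distinct values, stopping as soon as `limit` distinct values have
// been seen: the in-memory analogue of findMany({ distinct: [...], take: limit }),
// which avoids a full aggregation over the whole history.
function hasAtLeastNDistinct<T>(items: Iterable<T>, limit: number): boolean {
  const seen = new Set<T>();
  for (const item of items) {
    seen.add(item);
    if (seen.size >= limit) return true; // early exit
  }
  return false;
}
```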
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@client/app/api/heartbeat/batch/route.ts` around lines 158 - 165, Gate the
achievement queries on unlockedSlugs first and replace full-table aggregations
with bounded queries: before calling prisma.heartbeat.count or
prisma.heartbeat.groupBy, check if 'hour-1' or 'languages-3' are already in
unlockedSlugs and skip their checks if present; for the "hour-1" check replace
prisma.heartbeat.count({ where: { userId: apiKey.userId } }) with a bounded
fetch like prisma.heartbeat.findMany({ where: { userId: apiKey.userId }, select:
{ id: true }, take: 30 }) and test if result.length >= 30; for the "languages-3"
check replace prisma.heartbeat.groupBy({ by: ['language'], where: { userId:
apiKey.userId } }) with prisma.heartbeat.findMany({ where: { userId:
apiKey.userId }, select: { language: true }, distinct: ['language'], take: 3 })
and test if result.length >= 3, then call checkAndUnlock('hour-1') or
checkAndUnlock('languages-3') accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@client/app/api/heartbeat/batch/route.ts`:
- Around line 82-96: The createMany call in prisma.heartbeat.createMany
currently sets skipDuplicates: false and has no dedup key, so client retries can
double-insert heartbeats; modify the implementation to enforce idempotency by
introducing a stable dedup key and using skipDuplicates: true (or short-circuit
based on a batchId): update the IncomingHeartbeat shape to include a
client-generated heartbeatId (or accept a batchId header), include that id in
the mapped data (e.g., heartbeatId alongside userId/timestamp/file) and add a
unique constraint in the DB (or treat batchId in Redis) so duplicates are
ignored; ensure prisma.heartbeat.createMany uses skipDuplicates: true and the
mapped objects include the unique identifier so retries won’t double-count.
- Around line 41-63: The cached prisma apiKey object (set via
redis.set(cacheKey, apiKey) using API_KEY_CACHE_TTL) can contain stale
user.deletedAt so apiKey.user.deletedAt in the route can be outdated; change the
caching strategy to store only the mapping from apiKeyStr → userId (or a small
DTO without deletedAt/xp/level) and after resolving userId call
prisma.user.findUnique({ where: { id: userId }, select: { deletedAt } }) to
verify soft-delete on every request, or add cache invalidation that removes
apikey:* when a user is soft-deleted; also replace the untyped redis.get(...) as
any with a typed helper/decoder to validate the cached shape before use (refer
to cacheKey, apiKeyStr, redis.get/set, API_KEY_CACHE_TTL,
prisma.apiKey.findUnique, and apiKey.user.deletedAt).

In `@extension/src/extension.ts`:
- Around line 414-429: The syncWindow buffering branch currently runs even when
no API credentials exist, causing heartbeats to be buffered forever; update the
logic around syncWindow in extension.ts so buffering only happens when apiKey
and apiUrl are present — either move the existing !apiKey || !apiUrl check to
occur before the syncWindow branch or add that same guard to the syncWindow
branch that manipulates extensionContext.globalState (PENDING_HEARTBEATS_KEY)
and heartbeatPayload, ensuring isProcessing and lastHeartbeat are only updated
when credentials exist.
- Around line 515-565: flushPendingHeartbeats currently posts the entire pending
array at once which can exceed the server MAX_BATCH_SIZE (1000) and get a 400
that leaves the buffer permanently stuck; update flushPendingHeartbeats to: 1)
enforce a local cap on the stored buffer (e.g., truncate pending to the most
recent N entries when reading from extensionContext.globalState in
flushPendingHeartbeats and wherever heartbeats are added) to prevent unbounded
growth, 2) split the pending array into sequential chunks of at most 1000 and
POST each chunk to `${apiUrl}/heartbeat/batch`, only removing/updating the
stored buffer after a chunk succeeds, and 3) treat permanent client errors by
removing or quarantining that chunk on 4xx responses other than 429 (for 429+5xx
retry the chunk and keep it in storage); use the symbols flushPendingHeartbeats,
PENDING_HEARTBEATS_KEY, LAST_SYNC_TIME_KEY and the axios.post call site to
locate and implement the changes.
- Around line 524-548: The flushPendingHeartbeats function currently clears the
entire buffer after axios.post which drops heartbeats appended during the await;
instead, after the POST succeeds, read the current array from
extensionContext.globalState.get(PENDING_HEARTBEATS_KEY), remove only the
heartbeats that were sent (the local pending variable which was posted) and then
call extensionContext.globalState.update(PENDING_HEARTBEATS_KEY, remaining) so
new entries added by sendHeartbeat/onDidChangeTextDocument/onDidSaveTextDocument
are preserved; use a stable comparison (e.g., compare unique id or timestamp +
payload, or JSON stringify) to identify and remove exactly the sent items before
updating.
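The chunked-posting suggestion above reduces to a plain splitting helper (illustrative only; the real change would live inside flushPendingHeartbeats, which would POST each chunk sequentially and drop it from storage only after it succeeds):

```typescript
const MAX_BATCH_SIZE = 1000; // server-side cap from the batch route

// Split the pending buffer into server-acceptable chunks.
function chunkPending<T>(pending: T[], size: number = MAX_BATCH_SIZE): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < pending.length; i += size) {
    chunks.push(pending.slice(i, i + size));
  }
  return chunks;
}
```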

🪄 Autofix (Beta)

✅ Autofix completed


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: b9fd4969-121c-4930-bb8e-896a2923fa8a

📥 Commits

Reviewing files that changed from the base of the PR and between 7ba4b66 and b78458c.

⛔ Files ignored due to path filters (1)
  • extension/package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (3)
  • client/app/api/heartbeat/batch/route.ts
  • extension/package.json
  • extension/src/extension.ts

Comment on lines +41 to +63
```ts
try {
  // Resolve and cache the API key
  const cacheKey = `apikey:${apiKeyStr}`;
  let apiKey = await redis.get(cacheKey) as any;

  if (!apiKey) {
    apiKey = await prisma.apiKey.findUnique({
      where: { key: apiKeyStr },
      include: { user: { select: { id: true, deletedAt: true, xp: true, level: true } } }
    });

    if (!apiKey) {
      return NextResponse.json({ error: "Invalid API Key" }, { status: 401, headers: CORS_HEADERS });
    }

    // Cache for 24 hours
    await redis.set(cacheKey, apiKey, { ex: API_KEY_CACHE_TTL });
  }

  // Check if user is soft-deleted
  if (apiKey.user.deletedAt) {
    return NextResponse.json({ error: "User account is deleted" }, { status: 401, headers: CORS_HEADERS });
  }
```
Contributor


⚠️ Potential issue | 🟡 Minor

Cached apiKey snapshot keeps stale deletedAt for up to 24h.

On line 57 the full Prisma row (including user.deletedAt, user.xp, user.level) is cached for API_KEY_CACHE_TTL = 86400s. The deletedAt check on line 61 therefore trusts a value that may be up to 24h out of date — a user soft-deleted during that window can continue to sync batches (and accrue XP) until the cache expires.

Consider caching only the (apiKeyStr → userId) mapping and fetching deletedAt on each request, or invalidating apikey:* on soft-delete. Also note redis.get(...) as any on line 44 drops type safety; a typed helper would catch shape regressions.
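A minimal typed decoder replacing the `redis.get(...) as any` cast might look like this (a sketch; the type name and helper are hypothetical, and it stores only the key-to-user mapping so deletedAt is always fetched fresh):

```typescript
// Minimal shape the route needs from the cache: just the key -> user mapping.
interface CachedApiKey {
  userId: string;
}

// Runtime decoder: returns null (treated as a cache miss) for anything that
// doesn't match the expected shape, forcing a fresh DB lookup instead of
// trusting a stale or malformed cached value.
function decodeCachedApiKey(value: unknown): CachedApiKey | null {
  if (typeof value !== "object" || value === null) return null;
  const userId = (value as Record<string, unknown>).userId;
  return typeof userId === "string" ? { userId } : null;
}
```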


Comment on lines +82 to +96
```ts
// Insert all heartbeats in a single transaction for atomicity
await prisma.heartbeat.createMany({
  data: heartbeats.map((h: IncomingHeartbeat) => ({
    userId: apiKey.userId,
    project: h.project || "Unknown",
    language: h.language || "unknown",
    file: h.file || "unknown",
    type: h.type || "file",
    isSave: h.is_save || false,
    editor: h.editor || null,
    platform: h.platform || null,
    timestamp: new Date(h.timestamp || Date.now()),
  })),
  skipDuplicates: false,
});
```
Contributor


⚠️ Potential issue | 🟠 Major

Duplicate heartbeats possible on client retry — no idempotency.

skipDuplicates: false combined with no natural dedup key means that if the extension's POST request succeeds server-side but the response is lost (network drop, 30s timeout on a big batch), the client keeps the buffer and retries — and the same heartbeats get inserted twice. That inflates totalTime, XP (10 per heartbeat), and achievement progress.

Options:

  • Have the extension send a stable per-heartbeat client id and use a unique index + skipDuplicates: true, or
  • Add a (userId, timestamp, file) unique/compound key for dedup, or
  • Accept a batchId (UUID) header and short-circuit duplicate batch ids via Redis.

Given the PR's explicit goal of "heartbeats are never lost" during DB/server failures, the inverse — "never double-counted on retry" — is worth addressing in the same change.
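A stable per-heartbeat client id, as the first option suggests, can be derived deterministically from the heartbeat's own fields (a sketch; a random UUID generated once at buffer time works equally well, and the field names here follow the PR description):

```typescript
import { createHash } from "node:crypto";

// Derive a stable client-side heartbeatId from the heartbeat itself, so a
// retried batch maps to the same ids and a unique index + skipDuplicates: true
// can silently discard the second insert.
function heartbeatId(hb: { timestamp: number; project: string; file: string }): string {
  const key = `${hb.timestamp}|${hb.project}|${hb.file}`;
  return createHash("sha256").update(key).digest("hex").slice(0, 32);
}
```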


Comment on lines +98 to +179
```ts
// BACKGROUND PROCESSING: Award XP for the batch, check levels and achievements.
// We don't await this to keep the API response near-instant.
(async () => {
  try {
    const totalXp = heartbeats.length * XP_PER_HEARTBEAT;

    const updatedUser = await prisma.user.update({
      where: { id: apiKey.userId },
      data: { xp: { increment: totalXp } }
    });

    const { getLevelFromXP } = await import("@/lib/gamification");
    const newLevel = getLevelFromXP(updatedUser.xp);

    if (newLevel > updatedUser.level) {
      await prisma.user.update({
        where: { id: apiKey.userId },
        data: { level: newLevel }
      });
      console.log(`User ${apiKey.userId} leveled up to ${newLevel}!`);
    }

    // --- Background Achievement Logic ---
    const userAchievements = await prisma.userAchievement.findMany({
      where: { userId: apiKey.userId },
      select: { achievement: { select: { slug: true } } }
    });
    const unlockedSlugs = new Set(userAchievements.map(ua => ua.achievement.slug));

    const checkAndUnlock = async (slug: string) => {
      if (unlockedSlugs.has(slug)) return;
      const achievement = await prisma.achievement.findUnique({ where: { slug } });
      if (!achievement) return;

      try {
        await prisma.userAchievement.create({
          data: { userId: apiKey.userId, achievementId: achievement.id }
        });
      } catch (e: any) {
        if (e?.code === 'P2002') return; // Already unlocked concurrently
        throw e;
      }

      if (achievement.xpReward > 0) {
        const afterXPUser = await prisma.user.update({
          where: { id: apiKey.userId },
          data: { xp: { increment: achievement.xpReward } }
        });
        const finalLevel = getLevelFromXP(afterXPUser.xp);
        if (finalLevel > afterXPUser.level) {
          await prisma.user.update({
            where: { id: apiKey.userId },
            data: { level: finalLevel }
          });
        }
      }
    };

    await checkAndUnlock('first-heartbeat');

    const heartbeatCount = await prisma.heartbeat.count({ where: { userId: apiKey.userId } });
    if (heartbeatCount >= 30) await checkAndUnlock('hour-1');

    const languagesCount = await prisma.heartbeat.groupBy({
      by: ['language'],
      where: { userId: apiKey.userId }
    });
    if (languagesCount.length >= 3) await checkAndUnlock('languages-3');

    // --- Cache Invalidation ---
    const ranges = ["today", "all", "yesterday", "default"];
    const keysToDelete = [
      `contributions:${apiKey.userId}`,
      ...ranges.map(r => `stats:${apiKey.userId}:${r}`)
    ];
    await redis.del(...keysToDelete);
    console.log(`Cache invalidated for user ${apiKey.userId} after batch sync`);

  } catch (err) {
    console.error("Batch background gamification/cache error:", err);
  }
})();
```
Contributor


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

Next.js 15 app router after() API for background work after response

💡 Result:

In the Next.js 15 App Router, the after API schedules background work to execute after the HTTP response (or prerender) is sent to the client, without blocking the response. It is ideal for non-critical tasks like logging, analytics, or cleanup, and is imported from 'next/server'. In Route Handlers or Server Actions, the pattern is: do the main work, call after(async () => { ... }) with the deferred task, then return the response. Key points: stable since v15.1.0 (introduced as unstable_after in 15.0.0-rc); runs even on errors or failed responses; supports async callbacks and runs up to the route's maxDuration; request APIs (cookies, headers) must be read before calling after in Server Components; supported on Node.js/Docker and platform-specific for serverless (no static export); does not force dynamic rendering. Official docs: https://nextjs.org/docs/app/api-reference/functions/after

🏁 Script executed:

# Find package.json to check Next.js version
fd -t f package.json | head -5

Repository: Zaid-maker/DevMeter

Length of output: 153


🏁 Script executed:

# Read the route file to verify context around lines 98-179
# First, check the file size
wc -l client/app/api/heartbeat/batch/route.ts

Repository: Zaid-maker/DevMeter

Length of output: 106


🏁 Script executed:

# Check for any waitUntil usage or other async handling patterns in the file
rg -n "waitUntil|unstable_after" client/app/api/heartbeat/batch/route.ts

Repository: Zaid-maker/DevMeter

Length of output: 45


🏁 Script executed:

# Check Next.js version in client
cat client/package.json | grep -A 5 -B 5 "next"

Repository: Zaid-maker/DevMeter

Length of output: 915


🏁 Script executed:

# Read the entire route file to verify the actual code structure
cat -n client/app/api/heartbeat/batch/route.ts

Repository: Zaid-maker/DevMeter

Length of output: 9010


Background IIFE is not awaited — on serverless runtimes it may not complete.

After return NextResponse.json(...) on line 181, the serverless invocation can be frozen/terminated by the platform (Vercel/Next.js runtime), which means XP increments, level-ups, achievement unlocks, and the Redis cache invalidation on lines 168-174 may silently fail to run for some batches. That leads to inconsistent gamification state and stale dashboard stats that users attribute to "my offline heartbeats didn't count."

Prefer one of:

  • Use Next.js 16's stable after() API (imported from next/server) to defer the gamification block past the response while keeping the invocation alive, or
  • await the gamification block so the response is delayed but guaranteed to persist its side effects.

At minimum, await the XP update and the Redis cache invalidation (the parts users immediately notice) and keep only the achievement lookups fire-and-forget.
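The suggested after() approach, sketched against Next.js's documented API (a sketch only: whether it applies depends on this repo's Next.js version and deployment target, and the handler body is abbreviated):

```ts
import { NextResponse, after } from "next/server";

export async function POST(request: Request) {
  // ...authenticate and await prisma.heartbeat.createMany(...) as before...

  // Deferred past the response, but the runtime keeps the invocation alive
  // until the callback settles (unlike a bare fire-and-forget IIFE).
  after(async () => {
    // award XP, recompute level, unlock achievements, invalidate caches
  });

  return NextResponse.json({ status: "ok" });
}
```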

Comment on lines +414 to +429
```ts
// When the 24-hour sync window is enabled, buffer heartbeats locally
if (syncWindow) {
  isProcessing = true;
  try {
    const pending = extensionContext.globalState.get<PendingHeartbeat[]>(PENDING_HEARTBEATS_KEY, []);
    pending.push(heartbeatPayload);
    await extensionContext.globalState.update(PENDING_HEARTBEATS_KEY, pending);
    log(`Heartbeat buffered locally (${pending.length} pending). Will sync after 24 hours.`);
    lastHeartbeat = now;
  } catch (error: any) {
    log(`Failed to buffer heartbeat locally: ${error.message}`);
  } finally {
    isProcessing = false;
  }
  return;
}
```
Copy link
Copy Markdown
Contributor


⚠️ Potential issue | 🟡 Minor

syncWindow=true buffers heartbeats even when the API key is missing.

On line 369 syncWindow is read before the apiKey/apiUrl check on line 371, but the buffering branch on lines 415-429 runs regardless of whether the user has actually configured an API key. A fresh install with syncWindow enabled will quietly accumulate heartbeats forever (they'll just fail to flush with 401 later, which per the other comment isn't cleared). Guard the buffering branch with the same !apiKey || !apiUrl check, or move that check above the syncWindow branch.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@extension/src/extension.ts` around lines 414 - 429, The syncWindow buffering
branch currently runs even when no API credentials exist, causing heartbeats to
be buffered forever; update the logic around syncWindow in extension.ts so
buffering only happens when apiKey and apiUrl are present — either move the
existing !apiKey || !apiUrl check to occur before the syncWindow branch or add
that same guard to the syncWindow branch that manipulates
extensionContext.globalState (PENDING_HEARTBEATS_KEY) and heartbeatPayload,
ensuring isProcessing and lastHeartbeat are only updated when credentials exist.
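The reordering suggested above can be sketched as a pure decision helper: check credentials before consulting `syncWindow`, so a fresh install never buffers heartbeats it can never flush. `shouldBuffer()` is a hypothetical helper extracted purely for illustration; in the extension this logic lives inline in `sendHeartbeat`.

```typescript
type Decision = 'drop' | 'buffer' | 'send';

function shouldBuffer(
    apiKey: string | undefined,
    apiUrl: string | undefined,
    syncWindow: boolean
): Decision {
    // Credentials gate comes first: with no API key nothing could ever
    // flush, so buffering would only grow globalState without bound.
    if (!apiKey || !apiUrl) {
        return 'drop';
    }
    return syncWindow ? 'buffer' : 'send';
}

console.log(shouldBuffer(undefined, 'https://example.com/api', true)); // drop
console.log(shouldBuffer('key', 'https://example.com/api', true));     // buffer
console.log(shouldBuffer('key', 'https://example.com/api', false));    // send
```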

Comment on lines +515 to +565
async function flushPendingHeartbeats(silent = false) {
    const config = vscode.workspace.getConfiguration('devmeter');
    const apiKey = config.get<string>('apiKey');
    const apiUrl = config.get<string>('apiUrl');

    if (!apiKey || !apiUrl) {
        return;
    }

    const pending = extensionContext.globalState.get<PendingHeartbeat[]>(PENDING_HEARTBEATS_KEY, []);
    if (pending.length === 0) {
        log('No pending heartbeats to sync.');
        return;
    }

    log(`Syncing ${pending.length} buffered heartbeat(s) to ${apiUrl}/heartbeat/batch`);

    const editorName = vscode.env.appName || 'unknown';

    try {
        await axios.post(`${apiUrl}/heartbeat/batch`, { heartbeats: pending }, {
            headers: {
                'Authorization': `Bearer ${apiKey}`,
                'Content-Type': 'application/json',
                'User-Agent': `DevMeter-VSCode-Extension/${editorName}`
            },
            timeout: 30000 // Allow more time for large batches
        });

        log(`Successfully synced ${pending.length} heartbeat(s).`);

        // Clear the local buffer and record sync time
        await extensionContext.globalState.update(PENDING_HEARTBEATS_KEY, []);
        await extensionContext.globalState.update(LAST_SYNC_TIME_KEY, Date.now());

        if (!silent) {
            vscode.window.showInformationMessage(
                `DevMeter: Synced ${pending.length} buffered heartbeat(s) to the dashboard.`
            );
        } else {
            log(`Silently recovered ${pending.length} heartbeat(s) from local fallback buffer.`);
        }
        updateStatusBar();
    } catch (error: any) {
        log(`Failed to sync buffered heartbeats: ${error.message}`);
        if (error.response) {
            log(`Response error: ${error.response.status} - ${JSON.stringify(error.response.data)}`);
        }
        // Keep pending heartbeats in local storage so they can be retried later
    }
}

⚠️ Potential issue | 🟠 Major

Buffer can exceed server's 1000-item cap and then stay stuck forever.

The server enforces MAX_BATCH_SIZE = 1000 and rejects larger batches with HTTP 400 (client/app/api/heartbeat/batch/route.ts lines 75-80). In fallback mode (syncWindow=false), a prolonged outage — e.g. DB down over a weekend, which is exactly the scenario called out in the PR — lets the buffer grow well past 1000 (saves bypass the 2-minute interval check on line 357). From that point on, every flush hits 400, the catch on line 558 leaves the buffer untouched, and the extension never recovers without user intervention.

Two changes together fix this:

  1. Chunk the outgoing request into ≤1000-item slices and post them sequentially, removing each chunk only on success.
  2. Cap the in-memory buffer (e.g., keep most recent N, or drop oldest) so pathological growth doesn't balloon globalState.

Also note that 400/401/403 responses are currently treated identically to transient failures — the buffer is retained indefinitely even for permanent errors (invalid API key, malformed payload). Consider dropping or quarantining the batch on 4xx responses other than 429.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@extension/src/extension.ts` around lines 515 - 565, flushPendingHeartbeats
currently posts the entire pending array at once which can exceed the server
MAX_BATCH_SIZE (1000) and get a 400 that leaves the buffer permanently stuck;
update flushPendingHeartbeats to: 1) enforce a local cap on the stored buffer
(e.g., truncate pending to the most recent N entries when reading from
extensionContext.globalState in flushPendingHeartbeats and wherever heartbeats
are added) to prevent unbounded growth, 2) split the pending array into
sequential chunks of at most 1000 and POST each chunk to
`${apiUrl}/heartbeat/batch`, only removing/updating the stored buffer after a
chunk succeeds, and 3) treat permanent client errors by removing or quarantining
that chunk on 4xx responses other than 429 (for 429+5xx retry the chunk and keep
it in storage); use the symbols flushPendingHeartbeats, PENDING_HEARTBEATS_KEY,
LAST_SYNC_TIME_KEY and the axios.post call site to locate and implement the
changes.
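The chunked-flush fix described above can be sketched as two small helpers: split the pending buffer into slices the server will accept, POST them sequentially, and drop each slice only after it succeeds. `postBatch` is a stand-in for the axios call, and `MAX_BATCH_SIZE` mirrors the server-side cap described in the review; neither name comes from the extension's actual code.

```typescript
const MAX_BATCH_SIZE = 1000;

// Split an array into consecutive slices of at most `size` items.
function chunk<T>(items: T[], size: number): T[][] {
    const out: T[][] = [];
    for (let i = 0; i < items.length; i += size) {
        out.push(items.slice(i, i + size));
    }
    return out;
}

// Post chunks sequentially; return the heartbeats that were NOT confirmed.
async function flushInChunks<T>(
    pending: T[],
    postBatch: (batch: T[]) => Promise<void>,
    batchSize: number = MAX_BATCH_SIZE
): Promise<T[]> {
    let remaining = pending.slice();
    for (const batch of chunk(pending, batchSize)) {
        try {
            await postBatch(batch);
            // Only this slice was confirmed; keep everything after it.
            remaining = remaining.slice(batch.length);
        } catch {
            // Transient failure (or 429): stop here and retry the rest next
            // cycle. A permanent 4xx could instead quarantine the slice.
            break;
        }
    }
    return remaining; // Caller writes this back to globalState.
}
```

Because the buffer is rewritten to `remaining` after each run, a batch that fails permanently never blocks the slices behind it from eventually being capped or quarantined.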

Comment thread extension/src/extension.ts Outdated
Comment on lines +524 to +548
    const pending = extensionContext.globalState.get<PendingHeartbeat[]>(PENDING_HEARTBEATS_KEY, []);
    if (pending.length === 0) {
        log('No pending heartbeats to sync.');
        return;
    }

    log(`Syncing ${pending.length} buffered heartbeat(s) to ${apiUrl}/heartbeat/batch`);

    const editorName = vscode.env.appName || 'unknown';

    try {
        await axios.post(`${apiUrl}/heartbeat/batch`, { heartbeats: pending }, {
            headers: {
                'Authorization': `Bearer ${apiKey}`,
                'Content-Type': 'application/json',
                'User-Agent': `DevMeter-VSCode-Extension/${editorName}`
            },
            timeout: 30000 // Allow more time for large batches
        });

        log(`Successfully synced ${pending.length} heartbeat(s).`);

        // Clear the local buffer and record sync time
        await extensionContext.globalState.update(PENDING_HEARTBEATS_KEY, []);
        await extensionContext.globalState.update(LAST_SYNC_TIME_KEY, Date.now());

⚠️ Potential issue | 🔴 Critical

Race: heartbeats buffered during the axios.post await are silently dropped.

flushPendingHeartbeats reads pending on line 524, awaits the network POST on line 535, then unconditionally replaces the buffer with [] on line 547. During that await — which has a 30 s timeout — onDidChangeTextDocument / onDidSaveTextDocument can fire and call sendHeartbeat, which appends to the same pendingHeartbeats globalState key (lines 418-420 and 462-464). Those new entries are then wiped by the update(..., []) on line 547 even though they were never sent. isProcessing only guards overlapping sends, not send-vs-flush, so this is reachable in normal use.

Fix by removing exactly what was sent rather than clearing:

🔧 Proposed fix
-    const pending = extensionContext.globalState.get<PendingHeartbeat[]>(PENDING_HEARTBEATS_KEY, []);
-    if (pending.length === 0) {
+    const pending = extensionContext.globalState.get<PendingHeartbeat[]>(PENDING_HEARTBEATS_KEY, []);
+    if (pending.length === 0) {
         log('No pending heartbeats to sync.');
         return;
     }
+    const batch = pending.slice();
+    const sentCount = batch.length;
@@
-        await axios.post(`${apiUrl}/heartbeat/batch`, { heartbeats: pending }, {
+        await axios.post(`${apiUrl}/heartbeat/batch`, { heartbeats: batch }, {
@@
-        log(`Successfully synced ${pending.length} heartbeat(s).`);
-
-        // Clear the local buffer and record sync time
-        await extensionContext.globalState.update(PENDING_HEARTBEATS_KEY, []);
-        await extensionContext.globalState.update(LAST_SYNC_TIME_KEY, Date.now());
+        log(`Successfully synced ${sentCount} heartbeat(s).`);
+
+        // Remove only what we actually sent; preserve any heartbeats buffered during the POST.
+        const current = extensionContext.globalState.get<PendingHeartbeat[]>(PENDING_HEARTBEATS_KEY, []);
+        await extensionContext.globalState.update(PENDING_HEARTBEATS_KEY, current.slice(sentCount));
+        await extensionContext.globalState.update(LAST_SYNC_TIME_KEY, Date.now());

This assumes appends always go to the tail (true for the two call sites today).

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

Original:

    const pending = extensionContext.globalState.get<PendingHeartbeat[]>(PENDING_HEARTBEATS_KEY, []);
    if (pending.length === 0) {
        log('No pending heartbeats to sync.');
        return;
    }

    log(`Syncing ${pending.length} buffered heartbeat(s) to ${apiUrl}/heartbeat/batch`);

    const editorName = vscode.env.appName || 'unknown';

    try {
        await axios.post(`${apiUrl}/heartbeat/batch`, { heartbeats: pending }, {
            headers: {
                'Authorization': `Bearer ${apiKey}`,
                'Content-Type': 'application/json',
                'User-Agent': `DevMeter-VSCode-Extension/${editorName}`
            },
            timeout: 30000 // Allow more time for large batches
        });

        log(`Successfully synced ${pending.length} heartbeat(s).`);

        // Clear the local buffer and record sync time
        await extensionContext.globalState.update(PENDING_HEARTBEATS_KEY, []);
        await extensionContext.globalState.update(LAST_SYNC_TIME_KEY, Date.now());

Suggested:

    const pending = extensionContext.globalState.get<PendingHeartbeat[]>(PENDING_HEARTBEATS_KEY, []);
    if (pending.length === 0) {
        log('No pending heartbeats to sync.');
        return;
    }
    const batch = pending.slice();
    const sentCount = batch.length;

    log(`Syncing ${pending.length} buffered heartbeat(s) to ${apiUrl}/heartbeat/batch`);

    const editorName = vscode.env.appName || 'unknown';

    try {
        await axios.post(`${apiUrl}/heartbeat/batch`, { heartbeats: batch }, {
            headers: {
                'Authorization': `Bearer ${apiKey}`,
                'Content-Type': 'application/json',
                'User-Agent': `DevMeter-VSCode-Extension/${editorName}`
            },
            timeout: 30000 // Allow more time for large batches
        });

        log(`Successfully synced ${sentCount} heartbeat(s).`);

        // Remove only what we actually sent; preserve any heartbeats buffered during the POST.
        const current = extensionContext.globalState.get<PendingHeartbeat[]>(PENDING_HEARTBEATS_KEY, []);
        await extensionContext.globalState.update(PENDING_HEARTBEATS_KEY, current.slice(sentCount));
        await extensionContext.globalState.update(LAST_SYNC_TIME_KEY, Date.now());
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@extension/src/extension.ts` around lines 524 - 548, The
flushPendingHeartbeats function currently clears the entire buffer after
axios.post which drops heartbeats appended during the await; instead, after the
POST succeeds, read the current array from
extensionContext.globalState.get(PENDING_HEARTBEATS_KEY), remove only the
heartbeats that were sent (the local pending variable which was posted) and then
call extensionContext.globalState.update(PENDING_HEARTBEATS_KEY, remaining) so
new entries added by sendHeartbeat/onDidChangeTextDocument/onDidSaveTextDocument
are preserved; use a stable comparison (e.g., compare unique id or timestamp +
payload, or JSON stringify) to identify and remove exactly the sent items before
updating.


coderabbitai Bot commented Apr 21, 2026

Note

Autofix is a beta feature. Expect some limitations and changes as we gather feedback and continue to improve it.

Fixes Applied Successfully

Fixed 3 file(s) based on 5 unresolved review comments.

Files modified:

  • client/app/api/heartbeat/batch/route.ts
  • client/prisma/schema.prisma
  • extension/src/extension.ts

Commit: 4d62e9cecfa51d387bdb57c1c23cf5ffe6fe6fcf

The changes have been pushed to the copilot/introduce-24-hour-sync-window branch.

Time taken: 11m 55s


Co-authored-by: CodeRabbit <noreply@coderabbit.ai>
@zaidhafeezvu
Collaborator

@copilot Create a migration for the new changes in #file:schema.prisma


Copilot AI commented Apr 24, 2026

@copilot Create a migration for the new changes in #file:schema.prisma

Done in 33b6536. Created client/prisma/migrations/20260424000000_add_heartbeat_id_to_heartbeat/migration.sql which applies the only schema change introduced in this PR:

-- Add optional deduplication identifier for batch-synced heartbeats
ALTER TABLE "Heartbeat"
ADD COLUMN "heartbeatId" TEXT;

-- Enforce uniqueness so batch imports cannot create duplicate heartbeats
CREATE UNIQUE INDEX "Heartbeat_heartbeatId_key" ON "Heartbeat"("heartbeatId");

Also fixed the missing trailing newline in schema.prisma.

Warning

Firewall rules blocked me from connecting to one or more addresses (expand for details)

I tried to connect to the following addresses, but was blocked by firewall rules:

  • checkpoint.prisma.io
    • Triggering command: /usr/local/bin/node /usr/local/bin/node /home/REDACTED/work/DevMeter/DevMeter/client/node_modules/prisma/build/child {"product":"prisma","version":"7.4.2","cli_install_type":"local","information":"","local_timestamp":"2026-04-24T19:41:24Z","project_hash":"1c4597a0","cli_path":"/home/REDACTED/work/DevMeter/DevMeter/client/node_modules/.bin/prisma","cli_path_hash":"16ef0d16 (dns block)

If you need me to access, download, or install something from one of these locations, you can either:



Development

Successfully merging this pull request may close these issues.

introduce 24-hour sync window

3 participants