feat: introduce 24-hour sync window and DB-failure fallback for VS Code extension #74
Conversation
Agent-Logs-Url: https://github.com/Zaid-maker/DevMeter/sessions/d1627f16-c62e-4988-897c-4aea979be57e Co-authored-by: zaidhafeezvu <219703339+zaidhafeezvu@users.noreply.github.com>
@copilot alright, there is one issue right now: the Neon database is going to run out very soon, so we need this feature to work as a fallback if the database connection fails
Agent-Logs-Url: https://github.com/Zaid-maker/DevMeter/sessions/ded1ffab-49a0-4f14-bc7d-e4c5bb5868fb Co-authored-by: zaidhafeezvu <219703339+zaidhafeezvu@users.noreply.github.com>
Done in b78458c. The local buffer now acts as a fallback regardless of whether the 24-hour sync window is enabled.
⚠️ Warning: Firewall rules blocked me from connecting to one or more addresses.
@coderabbitai review
✅ Actions performed: Review triggered.
📝 Walkthrough

The changes introduce a batched heartbeat ingestion endpoint and local buffering capability for the VS Code extension. The new API endpoint validates requests via bearer-token authentication, accepts a batch of up to 1000 heartbeats, stores them in the database, and triggers background processing. The extension now optionally buffers heartbeats locally for deferred 24-hour batch syncing instead of sending immediately.
Sequence Diagram

```mermaid
sequenceDiagram
    participant User as VS Code User
    participant Ext as Extension
    participant LocalBuffer as Local Buffer<br/>(globalState)
    participant Server as Batch API<br/>Endpoint
    participant DB as Database<br/>(Prisma)
    participant Redis as Redis Cache
    participant BG as Background<br/>Processing

    User->>Ext: Generate heartbeat<br/>(activity detected)
    alt syncWindow Enabled
        Ext->>LocalBuffer: Buffer heartbeat<br/>(24-hour deferred)
        Note over Ext,LocalBuffer: Heartbeats stored locally,<br/>no immediate send
    else syncWindow Disabled
        Ext->>Server: POST heartbeat<br/>(immediate send)
        alt Server Success
            Server->>DB: Insert heartbeat
            Server-->>Ext: ✓ Success response
        else Network/Server Failure
            Ext->>LocalBuffer: Fallback buffer<br/>heartbeat
            Ext-->>User: Queued locally
        end
    end

    par Periodic Flush (every 5 min or manual)
        Ext->>Ext: checkAndFlushSyncWindow()<br/>or devmeter.syncNow
        alt 24 Hours Elapsed (syncWindow)
            Ext->>LocalBuffer: Read pending<br/>heartbeats
        else Immediate (normal)
            Ext->>LocalBuffer: Read pending<br/>heartbeats
        end
        Ext->>Server: POST /api/heartbeat/batch<br/>{ heartbeats: [...] }
        activate Server
        Server->>Redis: Validate API key<br/>(24h TTL cache)
        alt Key Valid & User Active
            Server->>DB: prisma.heartbeat<br/>.createMany(batch)
            Server->>BG: Trigger async tasks
            activate BG
            BG->>DB: Award XP
            BG->>DB: Compute user level
            BG->>DB: Evaluate & unlock<br/>achievements
            BG->>Redis: Invalidate<br/>stats/contributions<br/>cache keys
            deactivate BG
            Server-->>Ext: { status: "ok",<br/>synced: count }
        else Invalid Key or Deleted User
            Server-->>Ext: 401/400 error
        end
        deactivate Server
        Ext->>LocalBuffer: Clear synced<br/>heartbeats
        Ext->>Ext: Record lastSyncTime
        Ext-->>User: Show sync status
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 4 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
Actionable comments posted: 6
🧹 Nitpick comments (1)
client/app/api/heartbeat/batch/route.ts (1)
158-165: Achievement queries scan all heartbeats per batch — won't scale.
`prisma.heartbeat.count` on line 158 and especially `prisma.heartbeat.groupBy({ by: ['language'] })` on lines 161-164 run against the user's full heartbeat history on every batch sync, just to check two trivial thresholds (`>= 30`, `>= 3`). For an active user with months of data this becomes an expensive query on the hot path of every 24h flush, and it only grows over time. Two cheap wins:

- Gate the checks on `unlockedSlugs` first: if `hour-1` and `languages-3` are already unlocked, skip the queries entirely.
- For `languages-3`, `findMany({ select: { language: true }, distinct: ['language'], take: 3 })` returns as soon as 3 distinct values are found and avoids a full aggregation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@client/app/api/heartbeat/batch/route.ts` around lines 158 - 165, Gate the achievement queries on unlockedSlugs first and replace full-table aggregations with bounded queries: before calling prisma.heartbeat.count or prisma.heartbeat.groupBy, check if 'hour-1' or 'languages-3' are already in unlockedSlugs and skip their checks if present; for the "hour-1" check replace prisma.heartbeat.count({ where: { userId: apiKey.userId } }) with a bounded fetch like prisma.heartbeat.findMany({ where: { userId: apiKey.userId }, select: { id: true }, take: 30 }) and test if result.length >= 30; for the "languages-3" check replace prisma.heartbeat.groupBy({ by: ['language'], where: { userId: apiKey.userId } }) with prisma.heartbeat.findMany({ where: { userId: apiKey.userId }, select: { language: true }, distinct: ['language'], take: 3 }) and test if result.length >= 3, then call checkAndUnlock('hour-1') or checkAndUnlock('languages-3') accordingly.
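The bounded-query idea is easy to see with in-memory stand-ins. `findAtMost` and `distinctAtMost` below are hypothetical analogues of `prisma.heartbeat.findMany` with `take` / `distinct` (not the project's code); the point is that both stop early instead of scanning the full history:

```typescript
// Sketch of the bounded achievement checks. The helpers are illustrative
// stand-ins for bounded Prisma queries; they never look past the threshold.
type Heartbeat = { language: string };

// Analogue of findMany({ take: n }): returns at most n rows.
function findAtMost(rows: Heartbeat[], n: number): Heartbeat[] {
  return rows.slice(0, n);
}

// Analogue of findMany({ distinct: ['language'], take: n }):
// stops as soon as n distinct languages have been seen.
function distinctAtMost(rows: Heartbeat[], n: number): string[] {
  const seen = new Set<string>();
  for (const r of rows) {
    seen.add(r.language);
    if (seen.size >= n) break;
  }
  return [...seen];
}

// Gate on already-unlocked slugs first, then run only the bounded checks.
function achievementsToUnlock(rows: Heartbeat[], unlocked: Set<string>): string[] {
  const out: string[] = [];
  if (!unlocked.has("hour-1") && findAtMost(rows, 30).length >= 30) {
    out.push("hour-1");
  }
  if (!unlocked.has("languages-3") && distinctAtMost(rows, 3).length >= 3) {
    out.push("languages-3");
  }
  return out;
}
```

With the gate in place, a user who already holds both achievements triggers zero queries on the flush hot path.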
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@client/app/api/heartbeat/batch/route.ts`:
- Around line 82-96: The createMany call in prisma.heartbeat.createMany
currently sets skipDuplicates: false and has no dedup key, so client retries can
double-insert heartbeats; modify the implementation to enforce idempotency by
introducing a stable dedup key and using skipDuplicates: true (or short-circuit
based on a batchId): update the IncomingHeartbeat shape to include a
client-generated heartbeatId (or accept a batchId header), include that id in
the mapped data (e.g., heartbeatId alongside userId/timestamp/file) and add a
unique constraint in the DB (or treat batchId in Redis) so duplicates are
ignored; ensure prisma.heartbeat.createMany uses skipDuplicates: true and the
mapped objects include the unique identifier so retries won’t double-count.
- Around line 41-63: The cached prisma apiKey object (set via
redis.set(cacheKey, apiKey) using API_KEY_CACHE_TTL) can contain stale
user.deletedAt so apiKey.user.deletedAt in the route can be outdated; change the
caching strategy to store only the mapping from apiKeyStr → userId (or a small
DTO without deletedAt/xp/level) and after resolving userId call
prisma.user.findUnique({ where: { id: userId }, select: { deletedAt } }) to
verify soft-delete on every request, or add cache invalidation that removes
apikey:* when a user is soft-deleted; also replace the untyped redis.get(...) as
any with a typed helper/decoder to validate the cached shape before use (refer
to cacheKey, apiKeyStr, redis.get/set, API_KEY_CACHE_TTL,
prisma.apiKey.findUnique, and apiKey.user.deletedAt).
In `@extension/src/extension.ts`:
- Around line 414-429: The syncWindow buffering branch currently runs even when
no API credentials exist, causing heartbeats to be buffered forever; update the
logic around syncWindow in extension.ts so buffering only happens when apiKey
and apiUrl are present — either move the existing !apiKey || !apiUrl check to
occur before the syncWindow branch or add that same guard to the syncWindow
branch that manipulates extensionContext.globalState (PENDING_HEARTBEATS_KEY)
and heartbeatPayload, ensuring isProcessing and lastHeartbeat are only updated
when credentials exist.
- Around line 515-565: flushPendingHeartbeats currently posts the entire pending
array at once which can exceed the server MAX_BATCH_SIZE (1000) and get a 400
that leaves the buffer permanently stuck; update flushPendingHeartbeats to: 1)
enforce a local cap on the stored buffer (e.g., truncate pending to the most
recent N entries when reading from extensionContext.globalState in
flushPendingHeartbeats and wherever heartbeats are added) to prevent unbounded
growth, 2) split the pending array into sequential chunks of at most 1000 and
POST each chunk to `${apiUrl}/heartbeat/batch`, only removing/updating the
stored buffer after a chunk succeeds, and 3) treat permanent client errors by
removing or quarantining that chunk on 4xx responses other than 429 (for 429+5xx
retry the chunk and keep it in storage); use the symbols flushPendingHeartbeats,
PENDING_HEARTBEATS_KEY, LAST_SYNC_TIME_KEY and the axios.post call site to
locate and implement the changes.
- Around line 524-548: The flushPendingHeartbeats function currently clears the
entire buffer after axios.post which drops heartbeats appended during the await;
instead, after the POST succeeds, read the current array from
extensionContext.globalState.get(PENDING_HEARTBEATS_KEY), remove only the
heartbeats that were sent (the local pending variable which was posted) and then
call extensionContext.globalState.update(PENDING_HEARTBEATS_KEY, remaining) so
new entries added by sendHeartbeat/onDidChangeTextDocument/onDidSaveTextDocument
are preserved; use a stable comparison (e.g., compare unique id or timestamp +
payload, or JSON stringify) to identify and remove exactly the sent items before
updating.
🪄 Autofix (Beta)
✅ Autofix completed
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: b9fd4969-121c-4930-bb8e-896a2923fa8a
⛔ Files ignored due to path filters (1)
`extension/package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (3)
- `client/app/api/heartbeat/batch/route.ts`
- `extension/package.json`
- `extension/src/extension.ts`
```ts
try {
  // Resolve and cache the API key
  const cacheKey = `apikey:${apiKeyStr}`;
  let apiKey = await redis.get(cacheKey) as any;

  if (!apiKey) {
    apiKey = await prisma.apiKey.findUnique({
      where: { key: apiKeyStr },
      include: { user: { select: { id: true, deletedAt: true, xp: true, level: true } } }
    });

    if (!apiKey) {
      return NextResponse.json({ error: "Invalid API Key" }, { status: 401, headers: CORS_HEADERS });
    }

    // Cache for 24 hours
    await redis.set(cacheKey, apiKey, { ex: API_KEY_CACHE_TTL });
  }

  // Check if user is soft-deleted
  if (apiKey.user.deletedAt) {
    return NextResponse.json({ error: "User account is deleted" }, { status: 401, headers: CORS_HEADERS });
  }
```
Cached apiKey snapshot keeps stale deletedAt for up to 24h.
On line 57 the full Prisma row (including user.deletedAt, user.xp, user.level) is cached for API_KEY_CACHE_TTL = 86400s. The deletedAt check on line 61 therefore trusts a value that may be up to 24h out of date — a user soft-deleted during that window can continue to sync batches (and accrue XP) until the cache expires.
Consider caching only the (apiKeyStr → userId) mapping and fetching deletedAt on each request, or invalidating apikey:* on soft-delete. Also note redis.get(...) as any on line 44 drops type safety; a typed helper would catch shape regressions.
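A minimal sketch of the suggested typed helper, assuming the cache is changed to hold only a small `{ userId }` DTO (names are illustrative, not the project's actual code):

```typescript
// Runtime decoder for the cached API-key mapping. The cache intentionally
// stores only a tiny DTO (apiKeyStr -> userId), never the full Prisma row,
// so deletedAt is always re-checked against the database per request.
type CachedApiKey = { userId: string };

function decodeCachedApiKey(value: unknown): CachedApiKey | null {
  if (
    typeof value === "object" &&
    value !== null &&
    typeof (value as Record<string, unknown>).userId === "string"
  ) {
    return { userId: (value as { userId: string }).userId };
  }
  return null; // treat malformed or legacy-shaped cache entries as a miss
}
```

A cache hit that fails to decode falls through to the Prisma lookup, so a future change in the cached shape degrades to an extra DB read instead of a runtime error on `apiKey.user.deletedAt`.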
```ts
// Insert all heartbeats in a single transaction for atomicity
await prisma.heartbeat.createMany({
  data: heartbeats.map((h: IncomingHeartbeat) => ({
    userId: apiKey.userId,
    project: h.project || "Unknown",
    language: h.language || "unknown",
    file: h.file || "unknown",
    type: h.type || "file",
    isSave: h.is_save || false,
    editor: h.editor || null,
    platform: h.platform || null,
    timestamp: new Date(h.timestamp || Date.now()),
  })),
  skipDuplicates: false,
});
```
Duplicate heartbeats possible on client retry — no idempotency.
skipDuplicates: false combined with no natural dedup key means that if the extension's POST request succeeds server-side but the response is lost (network drop, 30s timeout on a big batch), the client keeps the buffer and retries — and the same heartbeats get inserted twice. That inflates totalTime, XP (10 per heartbeat), and achievement progress.
Options:

- Have the extension send a stable per-heartbeat client id and use a unique index + `skipDuplicates: true`, or
- Add a `(userId, timestamp, file)` unique/compound key for dedup, or
- Accept a `batchId` (UUID) header and short-circuit duplicate batch ids via Redis.
Given the PR's explicit goal of "heartbeats are never lost" during DB/server failures, the inverse — "never double-counted on retry" — is worth addressing in the same change.
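The compound-key option can be sketched as a deterministic key builder (illustrative only; the real change also needs a matching unique index in the Prisma schema and `skipDuplicates: true`):

```typescript
// Deterministic per-heartbeat dedup key: a client retry of the same batch
// produces the same keys, so a unique index on this column plus
// skipDuplicates: true makes createMany idempotent. Field defaults mirror
// the mapping used by the route.
type IncomingHeartbeat = { timestamp?: number; file?: string; type?: string };

function dedupKey(userId: string, h: IncomingHeartbeat): string {
  return [userId, h.timestamp ?? 0, h.file ?? "unknown", h.type ?? "file"].join("|");
}
```

Because the key is derived purely from the payload, it works without changing the wire format; a client-generated UUID per heartbeat would be the stricter alternative if two legitimate heartbeats could ever share `(timestamp, file, type)`.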
```ts
// BACKGROUND PROCESSING: Award XP for the batch, check levels and achievements.
// We don't await this to keep the API response near-instant.
(async () => {
  try {
    const totalXp = heartbeats.length * XP_PER_HEARTBEAT;

    const updatedUser = await prisma.user.update({
      where: { id: apiKey.userId },
      data: { xp: { increment: totalXp } }
    });

    const { getLevelFromXP } = await import("@/lib/gamification");
    const newLevel = getLevelFromXP(updatedUser.xp);

    if (newLevel > updatedUser.level) {
      await prisma.user.update({
        where: { id: apiKey.userId },
        data: { level: newLevel }
      });
      console.log(`User ${apiKey.userId} leveled up to ${newLevel}!`);
    }

    // --- Background Achievement Logic ---
    const userAchievements = await prisma.userAchievement.findMany({
      where: { userId: apiKey.userId },
      select: { achievement: { select: { slug: true } } }
    });
    const unlockedSlugs = new Set(userAchievements.map(ua => ua.achievement.slug));

    const checkAndUnlock = async (slug: string) => {
      if (unlockedSlugs.has(slug)) return;
      const achievement = await prisma.achievement.findUnique({ where: { slug } });
      if (!achievement) return;

      try {
        await prisma.userAchievement.create({
          data: { userId: apiKey.userId, achievementId: achievement.id }
        });
      } catch (e: any) {
        if (e?.code === 'P2002') return; // Already unlocked concurrently
        throw e;
      }

      if (achievement.xpReward > 0) {
        const afterXPUser = await prisma.user.update({
          where: { id: apiKey.userId },
          data: { xp: { increment: achievement.xpReward } }
        });
        const finalLevel = getLevelFromXP(afterXPUser.xp);
        if (finalLevel > afterXPUser.level) {
          await prisma.user.update({
            where: { id: apiKey.userId },
            data: { level: finalLevel }
          });
        }
      }
    };

    await checkAndUnlock('first-heartbeat');

    const heartbeatCount = await prisma.heartbeat.count({ where: { userId: apiKey.userId } });
    if (heartbeatCount >= 30) await checkAndUnlock('hour-1');

    const languagesCount = await prisma.heartbeat.groupBy({
      by: ['language'],
      where: { userId: apiKey.userId }
    });
    if (languagesCount.length >= 3) await checkAndUnlock('languages-3');

    // --- Cache Invalidation ---
    const ranges = ["today", "all", "yesterday", "default"];
    const keysToDelete = [
      `contributions:${apiKey.userId}`,
      ...ranges.map(r => `stats:${apiKey.userId}:${r}`)
    ];
    await redis.del(...keysToDelete);
    console.log(`Cache invalidated for user ${apiKey.userId} after batch sync`);

  } catch (err) {
    console.error("Batch background gamification/cache error:", err);
  }
})();
```
🧩 Analysis chain
🌐 Web query:
Next.js 15 app router after() API for background work after response
💡 Result:

In the Next.js 15 App Router, the `after` API schedules background work to execute after the HTTP response (or prerender) is sent to the client, without blocking the response. This is ideal for non-critical tasks like logging, analytics, or cleanup. Import it from `'next/server'` and use it in Route Handlers or Server Actions:

```ts
import { after } from 'next/server';

export async function POST(request: Request) {
  // Main work...
  after(async () => {
    // Log analytics, cleanup, etc.; runs after the response is sent.
  });
  return Response.json({ success: true });
}
```

Key points:

- Stable since v15.1.0 (introduced as `unstable_after` in 15.0.0-rc).
- Runs even on errors or failed responses.
- Supports async callbacks; runs up to the route's `maxDuration`.
- Request APIs (`cookies`, `headers`) cannot be used directly inside `after` in Server Components; read them before calling `after`.
- Supported on Node.js/Docker; platform-specific for serverless; not available with static export.
- Does not force dynamic rendering.

Official docs: https://nextjs.org/docs/app/api-reference/functions/after
Citations:
- 1: https://nextjs.im/docs/15/app/api-reference/functions/after/
- 2: https://nextjs.org/docs/app/api-reference/functions/after
🏁 Script executed:

```shell
# Find package.json to check Next.js version
fd -t f package.json | head -5
```

Repository: Zaid-maker/DevMeter
Length of output: 153

🏁 Script executed:

```shell
# Read the route file to verify context around lines 98-179
# First, check the file size
wc -l client/app/api/heartbeat/batch/route.ts
```

Repository: Zaid-maker/DevMeter
Length of output: 106

🏁 Script executed:

```shell
# Check for any waitUntil usage or other async handling patterns in the file
rg -n "waitUntil|unstable_after" client/app/api/heartbeat/batch/route.ts
```

Repository: Zaid-maker/DevMeter
Length of output: 45

🏁 Script executed:

```shell
# Check Next.js version in client
cat client/package.json | grep -A 5 -B 5 "next"
```

Repository: Zaid-maker/DevMeter
Length of output: 915

🏁 Script executed:

```shell
# Read the entire route file to verify the actual code structure
cat -n client/app/api/heartbeat/batch/route.ts
```

Repository: Zaid-maker/DevMeter
Length of output: 9010
Background IIFE is not awaited — on serverless runtimes it may not complete.
After return NextResponse.json(...) on line 181, the serverless invocation can be frozen/terminated by the platform (Vercel/Next.js runtime), which means XP increments, level-ups, achievement unlocks, and the Redis cache invalidation on lines 168-174 may silently fail to run for some batches. That leads to inconsistent gamification state and stale dashboard stats that users attribute to "my offline heartbeats didn't count."
Prefer one of:

- Use Next.js 16's stable `after()` API (imported from `next/server`) to defer the gamification block past the response while keeping the invocation alive, or
- `await` the gamification block so the response is delayed but guaranteed to persist its side effects.

At minimum, await the XP update and the Redis cache invalidation (the parts users immediately notice) and keep only the achievement lookups fire-and-forget.
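That minimal compromise can be sketched in framework-free TypeScript; all three effect functions here are placeholders, not the route's real code:

```typescript
// Sketch: persist the effects users notice (XP award, cache invalidation)
// before producing the response; only the slower achievement scan stays
// fire-and-forget, with its own error handler so rejections never surface.
async function processBatch(
  awardXp: () => Promise<void>,
  invalidateCache: () => Promise<void>,
  checkAchievements: () => Promise<void>,
): Promise<{ status: string }> {
  await awardXp();          // guaranteed before the response
  await invalidateCache();  // guaranteed before the response
  void checkAchievements().catch((e) => console.error("achievements:", e));
  return { status: "ok" };
}
```

On a serverless runtime the fire-and-forget part can still be frozen after the response, which is why the comment above recommends `after()` when the platform supports it.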
```ts
// When the 24-hour sync window is enabled, buffer heartbeats locally
if (syncWindow) {
  isProcessing = true;
  try {
    const pending = extensionContext.globalState.get<PendingHeartbeat[]>(PENDING_HEARTBEATS_KEY, []);
    pending.push(heartbeatPayload);
    await extensionContext.globalState.update(PENDING_HEARTBEATS_KEY, pending);
    log(`Heartbeat buffered locally (${pending.length} pending). Will sync after 24 hours.`);
    lastHeartbeat = now;
  } catch (error: any) {
    log(`Failed to buffer heartbeat locally: ${error.message}`);
  } finally {
    isProcessing = false;
  }
  return;
}
```
syncWindow=true buffers heartbeats even when the API key is missing.
On line 369 syncWindow is read before the apiKey/apiUrl check on line 371, but the buffering branch on lines 415-429 runs regardless of whether the user has actually configured an API key. A fresh install with syncWindow enabled will quietly accumulate heartbeats forever (they'll just fail to flush with 401 later, which per the other comment isn't cleared). Guard the buffering branch with the same !apiKey || !apiUrl check, or move that check above the syncWindow branch.
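The suggested ordering reduces to a small decision function (a sketch, not the extension's actual code): credentials are checked before `syncWindow`, so a fresh install drops heartbeats instead of buffering them forever:

```typescript
type HeartbeatAction = "drop" | "buffer" | "send";

// Decide what to do with a heartbeat. Credentials come first, so an
// unconfigured install never accumulates an unflushable buffer.
function heartbeatAction(
  apiKey: string | undefined,
  apiUrl: string | undefined,
  syncWindow: boolean,
): HeartbeatAction {
  if (!apiKey || !apiUrl) return "drop";
  return syncWindow ? "buffer" : "send";
}
```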
```ts
async function flushPendingHeartbeats(silent = false) {
  const config = vscode.workspace.getConfiguration('devmeter');
  const apiKey = config.get<string>('apiKey');
  const apiUrl = config.get<string>('apiUrl');

  if (!apiKey || !apiUrl) {
    return;
  }

  const pending = extensionContext.globalState.get<PendingHeartbeat[]>(PENDING_HEARTBEATS_KEY, []);
  if (pending.length === 0) {
    log('No pending heartbeats to sync.');
    return;
  }

  log(`Syncing ${pending.length} buffered heartbeat(s) to ${apiUrl}/heartbeat/batch`);

  const editorName = vscode.env.appName || 'unknown';

  try {
    await axios.post(`${apiUrl}/heartbeat/batch`, { heartbeats: pending }, {
      headers: {
        'Authorization': `Bearer ${apiKey}`,
        'Content-Type': 'application/json',
        'User-Agent': `DevMeter-VSCode-Extension/${editorName}`
      },
      timeout: 30000 // Allow more time for large batches
    });

    log(`Successfully synced ${pending.length} heartbeat(s).`);

    // Clear the local buffer and record sync time
    await extensionContext.globalState.update(PENDING_HEARTBEATS_KEY, []);
    await extensionContext.globalState.update(LAST_SYNC_TIME_KEY, Date.now());

    if (!silent) {
      vscode.window.showInformationMessage(
        `DevMeter: Synced ${pending.length} buffered heartbeat(s) to the dashboard.`
      );
    } else {
      log(`Silently recovered ${pending.length} heartbeat(s) from local fallback buffer.`);
    }
    updateStatusBar();
  } catch (error: any) {
    log(`Failed to sync buffered heartbeats: ${error.message}`);
    if (error.response) {
      log(`Response error: ${error.response.status} - ${JSON.stringify(error.response.data)}`);
    }
    // Keep pending heartbeats in local storage so they can be retried later
  }
}
```
Buffer can exceed server's 1000-item cap and then stay stuck forever.
The server enforces MAX_BATCH_SIZE = 1000 and rejects larger batches with HTTP 400 (client/app/api/heartbeat/batch/route.ts lines 75-80). In fallback mode (syncWindow=false), a prolonged outage — e.g. DB down over a weekend, which is exactly the scenario called out in the PR — lets the buffer grow well past 1000 (saves bypass the 2-minute interval check on line 357). From that point on, every flush hits 400, the catch on line 558 leaves the buffer untouched, and the extension never recovers without user intervention.
Two changes together fix this:

- Chunk the outgoing request into ≤1000-item slices and post them sequentially, removing each chunk only on success.
- Cap the in-memory buffer (e.g., keep the most recent N, or drop the oldest) so pathological growth doesn't balloon `globalState`.
Also note that 400/401/403 responses are currently treated identically to transient failures — the buffer is retained indefinitely even for permanent errors (invalid API key, malformed payload). Consider dropping or quarantining the batch on 4xx responses other than 429.
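A sketch of the chunked, status-aware flush described above (pure TypeScript; `post` stands in for the `axios.post` call and resolves to an HTTP status code):

```typescript
// Chunked flush sketch. MAX_BATCH_SIZE mirrors the server cap. Each chunk
// leaves the buffer only after it succeeds; 429 and 5xx stop the loop and
// keep the remainder for retry, while other 4xx drop the chunk as a
// permanent failure. The returned array is what should stay in globalState.
const MAX_BATCH_SIZE = 1000;

function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) out.push(items.slice(i, i + size));
  return out;
}

function retryable(status: number): boolean {
  return status === 429 || status >= 500;
}

async function flushChunked<T>(
  pending: T[],
  post: (batch: T[]) => Promise<number>,
  size: number = MAX_BATCH_SIZE,
): Promise<T[]> {
  let remaining = pending.slice();
  for (const batch of chunk(pending, size)) {
    const status = await post(batch);
    if (status >= 200 && status < 300) {
      remaining = remaining.slice(batch.length); // synced: remove the chunk
    } else if (retryable(status)) {
      break; // transient: keep this chunk and everything after it
    } else {
      remaining = remaining.slice(batch.length); // permanent 4xx: drop/quarantine
    }
  }
  return remaining;
}
```

In the real extension the caller would also truncate `remaining` to a cap (most recent N) before writing it back, covering the unbounded-growth half of the fix.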
```ts
const pending = extensionContext.globalState.get<PendingHeartbeat[]>(PENDING_HEARTBEATS_KEY, []);
if (pending.length === 0) {
  log('No pending heartbeats to sync.');
  return;
}

log(`Syncing ${pending.length} buffered heartbeat(s) to ${apiUrl}/heartbeat/batch`);

const editorName = vscode.env.appName || 'unknown';

try {
  await axios.post(`${apiUrl}/heartbeat/batch`, { heartbeats: pending }, {
    headers: {
      'Authorization': `Bearer ${apiKey}`,
      'Content-Type': 'application/json',
      'User-Agent': `DevMeter-VSCode-Extension/${editorName}`
    },
    timeout: 30000 // Allow more time for large batches
  });

  log(`Successfully synced ${pending.length} heartbeat(s).`);

  // Clear the local buffer and record sync time
  await extensionContext.globalState.update(PENDING_HEARTBEATS_KEY, []);
  await extensionContext.globalState.update(LAST_SYNC_TIME_KEY, Date.now());
```
Race: heartbeats buffered during the axios.post await are silently dropped.
flushPendingHeartbeats reads pending on line 524, awaits the network POST on line 535, then unconditionally replaces the buffer with [] on line 547. During that await — which has a 30 s timeout — onDidChangeTextDocument / onDidSaveTextDocument can fire and call sendHeartbeat, which appends to the same pendingHeartbeats globalState key (lines 418-420 and 462-464). Those new entries are then wiped by the update(..., []) on line 547 even though they were never sent. isProcessing only guards overlapping sends, not send-vs-flush, so this is reachable in normal use.
Fix by removing exactly what was sent rather than clearing:
🔧 Proposed fix

```diff
     const pending = extensionContext.globalState.get<PendingHeartbeat[]>(PENDING_HEARTBEATS_KEY, []);
     if (pending.length === 0) {
         log('No pending heartbeats to sync.');
         return;
     }
+    const batch = pending.slice();
+    const sentCount = batch.length;
@@
-    await axios.post(`${apiUrl}/heartbeat/batch`, { heartbeats: pending }, {
+    await axios.post(`${apiUrl}/heartbeat/batch`, { heartbeats: batch }, {
@@
-    log(`Successfully synced ${pending.length} heartbeat(s).`);
-
-    // Clear the local buffer and record sync time
-    await extensionContext.globalState.update(PENDING_HEARTBEATS_KEY, []);
-    await extensionContext.globalState.update(LAST_SYNC_TIME_KEY, Date.now());
+    log(`Successfully synced ${sentCount} heartbeat(s).`);
+
+    // Remove only what we actually sent; preserve any heartbeats buffered during the POST.
+    const current = extensionContext.globalState.get<PendingHeartbeat[]>(PENDING_HEARTBEATS_KEY, []);
+    await extensionContext.globalState.update(PENDING_HEARTBEATS_KEY, current.slice(sentCount));
+    await extensionContext.globalState.update(LAST_SYNC_TIME_KEY, Date.now());
```

This assumes appends always go to the tail (true for the two call sites today).
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
const pending = extensionContext.globalState.get<PendingHeartbeat[]>(PENDING_HEARTBEATS_KEY, []);
if (pending.length === 0) {
    log('No pending heartbeats to sync.');
    return;
}
const batch = pending.slice();
const sentCount = batch.length;
log(`Syncing ${pending.length} buffered heartbeat(s) to ${apiUrl}/heartbeat/batch`);
const editorName = vscode.env.appName || 'unknown';
try {
    await axios.post(`${apiUrl}/heartbeat/batch`, { heartbeats: batch }, {
        headers: {
            'Authorization': `Bearer ${apiKey}`,
            'Content-Type': 'application/json',
            'User-Agent': `DevMeter-VSCode-Extension/${editorName}`
        },
        timeout: 30000 // Allow more time for large batches
    });
    log(`Successfully synced ${sentCount} heartbeat(s).`);

    // Remove only what we actually sent; preserve any heartbeats buffered during the POST.
    const current = extensionContext.globalState.get<PendingHeartbeat[]>(PENDING_HEARTBEATS_KEY, []);
    await extensionContext.globalState.update(PENDING_HEARTBEATS_KEY, current.slice(sentCount));
    await extensionContext.globalState.update(LAST_SYNC_TIME_KEY, Date.now());
```
🤖 Prompt for AI Agents
```
Verify each finding against the current code and only fix it if needed.
In `@extension/src/extension.ts` around lines 524-548, the
flushPendingHeartbeats function currently clears the entire buffer after
axios.post, which drops heartbeats appended during the await; instead, after the
POST succeeds, read the current array from
extensionContext.globalState.get(PENDING_HEARTBEATS_KEY), remove only the
heartbeats that were sent (the local pending variable which was posted) and then
call extensionContext.globalState.update(PENDING_HEARTBEATS_KEY, remaining) so
new entries added by sendHeartbeat/onDidChangeTextDocument/onDidSaveTextDocument
are preserved; use a stable comparison (e.g., compare unique id or timestamp +
payload, or JSON stringify) to identify and remove exactly the sent items before
updating.
```
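A sketch of the id-based removal that the prompt suggests, assuming each buffered heartbeat carries the unique `heartbeatId` this PR adds (the helper name and shape are hypothetical, not the extension's actual code):

```typescript
// Hypothetical helper for the stable-comparison approach: remove exactly the
// sent heartbeats by id, preserving any entries appended while the POST ran.
interface PendingHeartbeat {
  heartbeatId: string;
  timestamp: number;
}

function removeSent(
  current: PendingHeartbeat[],
  sent: PendingHeartbeat[]
): PendingHeartbeat[] {
  const sentIds = new Set(sent.map((h) => h.heartbeatId));
  return current.filter((h) => !sentIds.has(h.heartbeatId));
}

// A heartbeat buffered mid-flush ('c') survives; the sent ones ('a', 'b') do not.
const sent = [
  { heartbeatId: 'a', timestamp: 1 },
  { heartbeatId: 'b', timestamp: 2 },
];
const current = [...sent, { heartbeatId: 'c', timestamp: 3 }];
console.log(removeSent(current, sent)); // [{ heartbeatId: 'c', timestamp: 3 }]
```

Unlike the tail-slice approach, this stays correct even if the buffer is ever reordered or compacted.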
|
Note: Autofix is a beta feature. Expect some limitations and changes as we gather feedback and continue to improve it.

**Fixes Applied Successfully**

Fixed 3 file(s) based on 5 unresolved review comments. Files modified:
Commit: The changes have been pushed to the Time taken: |
Fixed 3 file(s) based on 5 unresolved review comments. Co-authored-by: CodeRabbit <noreply@coderabbit.ai>
|
@copilot Create a migration For new changes in #file:schema.prisma |
Agent-Logs-Url: https://github.com/Zaid-maker/DevMeter/sessions/5bc8ae69-fa19-41ea-9879-bb09ce378f52 Co-authored-by: zaidhafeezvu <219703339+zaidhafeezvu@users.noreply.github.com>
Done in 33b6536. Created the migration:

```sql
-- Add optional deduplication identifier for batch-synced heartbeats
ALTER TABLE "Heartbeat"
ADD COLUMN "heartbeatId" TEXT;

-- Enforce uniqueness so batch imports cannot create duplicate heartbeats
CREATE UNIQUE INDEX "Heartbeat_heartbeatId_key" ON "Heartbeat"("heartbeatId");
```

Also fixed a missing trailing newline.

Warning: Firewall rules blocked me from connecting to one or more addresses (expand for details).

I tried to connect to the following addresses, but was blocked by firewall rules:
If you need me to access, download, or install something from one of these locations, you can either:
|
Adds an opt-in 24-hour sync window and an automatic local fallback buffer for when the database or server is unreachable. Heartbeats are never lost — they are either intentionally deferred (sync window mode) or automatically recovered after a connection failure (fallback mode).
**Extension (`extension/`)**

- `devmeter.syncWindow` (boolean, default `false`) — toggle via VS Code Settings.
- `sendHeartbeat`: routes to the local buffer (`globalState`) when `syncWindow` is on; when off, if a heartbeat POST fails due to a server/network error (5xx or no response), the heartbeat is saved to the local fallback buffer instead of being dropped. Auth/4xx errors are not retried.
- `checkAndFlushSyncWindow`: called on activation and every 5 min; behaviour depends on mode:
  - `syncWindow=true`: auto-flushes once ≥24 h have elapsed since the last sync.
  - `syncWindow=false`: flushes fallback-buffered heartbeats immediately whenever the server is reachable again.
- `flushPendingHeartbeats(silent?)`: POSTs all buffered heartbeats to `/api/heartbeat/batch`, clears the local buffer, and records the sync timestamp. Passes `silent=true` for background auto-retries to avoid repeated toast notifications.
- `devmeter.syncNow`: always flushes all pending heartbeats regardless of the `syncWindow` setting.
- `devmeter.showMenu`: dynamically surfaces the pending count for both sync-window and fallback-buffer cases, e.g. "Sync Now (42 pending heartbeats)".
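The mode-dependent flush decision above can be sketched as a pure function; the function and constant names are illustrative, and only the described behaviour comes from this PR:

```typescript
// Hedged sketch of checkAndFlushSyncWindow's decision logic, not the
// extension's actual code.
const SYNC_WINDOW_MS = 24 * 60 * 60 * 1000; // 24 hours

function shouldFlush(
  syncWindowEnabled: boolean,
  lastSyncTime: number | undefined,
  pendingCount: number,
  now: number = Date.now()
): boolean {
  if (pendingCount === 0) return false; // nothing buffered
  if (!syncWindowEnabled) return true;  // fallback mode: flush whenever reachable
  const last = lastSyncTime ?? 0;
  return now - last >= SYNC_WINDOW_MS;  // window mode: flush once 24 h elapsed
}

console.log(shouldFlush(true, Date.now() - 25 * 60 * 60 * 1000, 5)); // true
```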
**API (`client/`)**

- New route `POST /api/heartbeat/batch` (`app/api/heartbeat/batch/route.ts`): inserts the batch via `prisma.heartbeat.createMany`.
**Database (`client/prisma/`)**

- Added a `heartbeatId String? @unique` field to the `Heartbeat` model for batch deduplication — prevents duplicate records when retrying a buffered batch.
- Migration `20260424000000_add_heartbeat_id_to_heartbeat`: adds the `heartbeatId TEXT` column and its unique index to the `Heartbeat` table.
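For the unique index to deduplicate retries, each heartbeat needs an id assigned once on the client, when it is buffered rather than when it is sent. A hypothetical sketch (the interface and helper name are assumptions; `randomUUID` is Node's built-in):

```typescript
// Hypothetical client-side pairing for the new unique index: a heartbeat gets
// its UUID once at buffering time, so every retry of the same batch carries
// the same ids and collides with existing rows instead of duplicating them.
import { randomUUID } from 'node:crypto';

interface PendingHeartbeat {
  heartbeatId: string; // stable across retries of the same buffered entry
  file: string;
  timestamp: number;
}

function bufferHeartbeat(file: string, timestamp: number): PendingHeartbeat {
  return { heartbeatId: randomUUID(), file, timestamp };
}

const hb = bufferHeartbeat('src/extension.ts', Date.now());
console.log(hb.heartbeatId.length); // 36: canonical UUID string
```

Server-side, `prisma.heartbeat.createMany` could then pass `skipDuplicates: true` so rows whose `heartbeatId` already exists are silently skipped.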