Conversation
Implement incremental caching to avoid scanning 270k+ messages on every request:
- SQLite cache at ~/.genesis-tools/claude-history/stats-cache.db
- Historical data cached forever (past never changes)
- Today's data recomputed with 5-minute cache
- File change detection via mtime for incremental updates
- Add getConversationStatsWithCache(), getQuickStatsFromCache(), getStatsForDateRange()
- Add hourlyActivity tracking to ConversationStats
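The mtime-based change detection described in this commit can be sketched roughly as follows. The cache shape and function names here are illustrative, not the tool's actual API; `getMtimeMs` is injected so the logic stays testable, and in the real tool it would wrap `fs.statSync(filePath).mtimeMs`:

```typescript
interface FileIndexEntry {
  mtimeMs: number;
}

// Returns true when a file must be re-parsed: either it has never been
// indexed, or its modification time differs from the cached value.
function needsReindex(
  filePath: string,
  index: Map<string, FileIndexEntry>,
  getMtimeMs: (path: string) => number,
): boolean {
  const mtimeMs = getMtimeMs(filePath);
  const cached = index.get(filePath);
  if (cached && cached.mtimeMs === mtimeMs) {
    return false; // unchanged: serve cached stats
  }
  index.set(filePath, { mtimeMs }); // record new mtime so the next scan skips this file
  return true;
}
```

Because historical days never change, this check lets a scan of hundreds of thousands of messages touch only the handful of files modified since the last run.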
…for stats

Implement streaming data pattern for 270k+ messages:
- Add skeleton components with cyberpunk shimmer animations
- Use TanStack Router defer() + Suspense for progressive loading
- Quick stats from cache load instantly, full stats stream in
- Add DateRangePicker with preset buttons (Today, 7d, 30d, 90d, All)
- Add HourlyHeatmap (24-hour activity visualization)
- Add ToolCategoriesChart (File Read/Write, Shell, Web, Tasks, MCP)
- Fix StatCard padding/margins for consistency
- Add `text` language identifier to README fenced code blocks (MD040)
- Fix TMux.md session picker description (native list, not fuzzy)
- Remove non-standard `-?` flag from mcp-debug help option
- Add NaN validation for parseInt in fsevents-profile
- Add WriteResult.NoChanges handling in mcp-manager install
- Fix opts.path scope issue in mcp-manager interactive mode
- Add error handling for Azure DevOps CLI commands (getAccessToken, runQuery)
- Remove inconsistent .toUpperCase() on org in Azure DevOps URL parser
- Add required field validation for description in Azure DevOps work item creation
- Guard against undefined tool.input in conversation view
- Add language identifiers to github-pr.md fenced blocks
- Add try-catch for cleanupOldSessions and atomic write for trackFile
- Fix route syntax in claude-history-dashboard README ($id vs :id)
- Add default type="button" to Button component
- Fix messageCount calculation to use userMessageCount + assistantMessageCount
- Fix extractProjectName to preserve dashed project names
- Add 30s timeout to Jenkins axios instance
- Fix watchman path validation and client cleanup
…cking

- Add TokenUsage interface with input/output/cache token tracking
- Add model usage counting (opus/sonnet/haiku)
- Add git branch activity tracking
- Add conversation length distribution data
- Add dailyTokens for cumulative token charts
- Update SerializableStats to include all new fields
New components with cyberpunk styling:
- TokenUsageCard: Token breakdown with input/output/cache and cost estimate
- ModelUsageChart: Donut chart showing Opus/Sonnet/Haiku distribution
- CumulativeChart: Area chart for cumulative messages and tokens over time
- WeeklyTrendsCard: This week vs last week comparison with deltas
- BranchActivityChart: Top 10 git branches by activity
- ConversationLengthHistogram: Distribution of conversation lengths
- Add FilteredStatCards that show filtered totals when date range is active
- Integrate TokenUsageCard, ModelUsageChart, CumulativeChart
- Integrate WeeklyTrendsCard, BranchActivityChart, ConversationLengthHistogram
- Add skeleton components for all new charts
- Reorganize layout for better visual hierarchy
📝 Walkthrough

This pull request introduces a comprehensive caching and analytics enhancement system for Claude History, alongside substantial dashboard improvements. Key additions include a SQLite-backed persistent cache for statistics, new analytics visualizations, enhanced file tracking with atomic writes, improved error handling across multiple modules, and updated documentation.
Sequence Diagrams

```mermaid
sequenceDiagram
    participant Client as Dashboard Client
    participant Server as Route Loader
    participant Cache as Cache System
    participant FS as File System
    participant DB as SQLite Cache DB
    Client->>Server: Load /stats page
    Server->>Cache: getQuickStats()
    Cache->>DB: getCachedTotals()
    DB-->>Cache: Quick totals (instant)
    Cache-->>Server: Quick stats
    Server->>Server: defer(getFullStats)
    Server-->>Client: Response {quickStats, defer fullStats}
    Client->>Client: Render quick stats immediately
    Server->>Cache: getConversationStatsWithCache()
    Cache->>FS: List conversation files
    FS-->>Cache: File list
    Cache->>DB: getFileIndex(filePath)
    DB-->>Cache: Last indexed metadata
    alt File unchanged
        Cache->>DB: getDailyStats(date)
        DB-->>Cache: Cached stats
    else File changed
        Cache->>FS: Read & parse conversation
        FS-->>Cache: Conversation data
        Cache->>Cache: computeFileStats(filePath)
        Cache->>DB: upsertDailyStats + upsertFileIndex
        DB-->>Cache: Updated
    end
    Cache->>Cache: aggregateDailyStats()
    Cache-->>Server: Full stats
    Server-->>Client: Stream deferred fullStats
    Client->>Client: Progressive render charts (Cumulative, HourlyHeatmap, etc.)
```
```mermaid
sequenceDiagram
    participant User as User
    participant UI as Stats Dashboard UI
    participant Loader as Route Loader
    participant CacheLib as Cache Library
    participant DB as SQLite Cache
    User->>UI: Open stats page
    UI->>Loader: Load route
    Loader->>CacheLib: getQuickStats()
    CacheLib->>DB: getCachedTotals()
    DB-->>CacheLib: Cached totals
    CacheLib-->>Loader: {totalConversations, totalMessages, ...}
    Loader-->>UI: Quick stats + defer fullStats
    UI->>UI: Render StatCardsGridSkeleton + quick cards
    User->>UI: Select date range via DateRangePicker
    UI->>Loader: getStatsInRange(from, to)
    Loader->>CacheLib: getStatsForDateRange(range)
    CacheLib->>DB: getDailyStatsInRange(range)
    DB-->>CacheLib: Daily stats for range
    CacheLib->>CacheLib: Aggregate into ConversationStats
    CacheLib-->>Loader: Filtered stats
    Loader-->>UI: Filtered results
    UI->>UI: Switch to filtered cards & charts (Activity, Tools, Weekly Trends, etc.)
    Note over UI,DB: Background: fullStats stream resolves
    Loader->>CacheLib: getConversationStatsWithCache() completes
    CacheLib-->>Loader: Full stats
    Loader-->>UI: Stream resolved
    UI->>UI: Replace charts with fullStats data
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~65 minutes
🚥 Pre-merge checks: ✅ 1 passed, ❌ 2 failed (warnings)
Summary of Changes

Hello @genesiscz, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request introduces a feature-rich statistics dashboard for Claude Code, designed to give users detailed insight into their usage patterns. Caching and date range filtering keep data retrieval efficient, new charts and metrics offer a comprehensive overview of activity, and UI enhancements improve the overall user experience.
Code Review
This pull request introduces significant enhancements across several modules, focusing on improved error handling, caching mechanisms, and a new statistics dashboard. Key changes include: robust error handling in Azure DevOps API calls, an atomic write mechanism for session files, and a comprehensive caching system for conversation statistics using SQLite. The new statistics dashboard provides valuable insights into conversation activity, token usage, model usage, and more, with deferred loading for better performance. Documentation updates and minor usability improvements are also included.
Overall, the changes are well-structured and contribute positively to the project's maintainability and user experience. However, there are a few areas where correctness and maintainability could be further improved, particularly regarding error logging in a catch block, simplifying complex conditional logic for styling, clarifying cost estimation, and refining the daily aggregation of certain statistics.
```typescript
} catch {
  // Ignore transient fs errors (file deleted, permissions changed, etc.)
}
```
While ignoring transient file system errors is often desired during cleanup, an empty catch block can suppress important debugging information. It's generally a good practice to at least log the error, even if no further action is taken, to aid in diagnosing unexpected behavior.
Consider adding a logger.warn or console.warn call here.
```diff
-    } catch {
-      // Ignore transient fs errors (file deleted, permissions changed, etc.)
-    }
+    } catch (error) {
+      console.warn(`[track-session-files] Error cleaning up old session file ${filePath}: ${error}`);
+    }
```
```tsx
<div
  className={`w-full rounded-t transition-all duration-300 cursor-pointer hover:brightness-110 ${
    index === 0
      ? 'bg-purple-400'
      : index === 1
        ? 'bg-purple-500'
        : index === 2
          ? 'bg-purple-600'
          : index === 3
            ? 'bg-purple-700'
            : index === 4
              ? 'bg-purple-800'
              : 'bg-purple-900'
  }`}
```
The nested ternary operators for assigning background colors based on the index can become difficult to read and maintain as the number of conditions grows. This pattern is prone to errors and makes it hard to quickly understand the color mapping.
A more maintainable approach would be to define an array of colors and access them directly using the index.
```tsx
const colors = [
  'bg-purple-400',
  'bg-purple-500',
  'bg-purple-600',
  'bg-purple-700',
  'bg-purple-800',
  'bg-purple-900',
]
const barColor = colors[index] || colors[colors.length - 1]

return (
  <div
    key={bucket.label}
    className="flex-1 flex flex-col items-center justify-end h-full group"
  >
    {/* Bar */}
    <div
      className={`w-full rounded-t transition-all duration-300 cursor-pointer hover:brightness-110 ${barColor}`}
```
```typescript
function estimateCost(tokenUsage: TokenUsage, modelPrices = TOKEN_PRICES.opus): number {
  const cost =
    (tokenUsage.inputTokens / 1_000_000) * modelPrices.input +
    (tokenUsage.outputTokens / 1_000_000) * modelPrices.output +
    (tokenUsage.cacheCreateTokens / 1_000_000) * modelPrices.cacheCreate +
    (tokenUsage.cacheReadTokens / 1_000_000) * modelPrices.cacheRead
  return cost
```
The estimateCost function defaults to TOKEN_PRICES.opus. While this provides an estimate, it might be misleading if the actual token usage involves a mix of different models (Sonnet, Haiku) which have varying price points. The current UI doesn't explicitly state that the cost is estimated using Opus pricing.
To improve clarity and correctness, consider one of the following:
- Add a small note in the UI (e.g., next to the estimated cost) indicating that the cost is based on Opus pricing.
- If `modelCounts` data is available, pass it to `estimateCost` to calculate a more accurate weighted average cost across all used models.
- Allow the user to select the pricing model for the estimate.
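As a rough illustration of the weighted-average option, one could blend the per-model price tables by each model's share of `modelCounts` and apply the blended rates to the aggregate token totals. The price figures below are illustrative placeholders, not Anthropic's actual rates:

```typescript
interface TokenUsage {
  inputTokens: number;
  outputTokens: number;
  cacheCreateTokens: number;
  cacheReadTokens: number;
}

type Prices = { input: number; output: number; cacheCreate: number; cacheRead: number };

// Placeholder USD-per-million-token figures for the sketch only.
const TOKEN_PRICES: Record<string, Prices> = {
  opus:   { input: 15,  output: 75, cacheCreate: 18.75, cacheRead: 1.5 },
  sonnet: { input: 3,   output: 15, cacheCreate: 3.75,  cacheRead: 0.3 },
  haiku:  { input: 0.8, output: 4,  cacheCreate: 1,     cacheRead: 0.08 },
};

function estimateWeightedCost(usage: TokenUsage, modelCounts: Record<string, number>): number {
  const total = Object.values(modelCounts).reduce((a, b) => a + b, 0);
  if (total === 0) return 0;
  // Blend per-model prices by usage share.
  const blended: Prices = { input: 0, output: 0, cacheCreate: 0, cacheRead: 0 };
  for (const [model, count] of Object.entries(modelCounts)) {
    const p = TOKEN_PRICES[model];
    if (!p) continue; // unknown model names are skipped
    const w = count / total;
    blended.input += p.input * w;
    blended.output += p.output * w;
    blended.cacheCreate += p.cacheCreate * w;
    blended.cacheRead += p.cacheRead * w;
  }
  // Apply the blended rates to the aggregate token counts.
  return (
    (usage.inputTokens / 1_000_000) * blended.input +
    (usage.outputTokens / 1_000_000) * blended.output +
    (usage.cacheCreateTokens / 1_000_000) * blended.cacheCreate +
    (usage.cacheReadTokens / 1_000_000) * blended.cacheRead
  );
}
```

This is still an approximation (it assumes tokens are distributed across models in proportion to message counts), but it degrades gracefully when usage is not Opus-only.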
```typescript
// Distribute tool counts proportionally (simplified: assign to first date)
if (dateStr === fileStats.firstDate) {
  Object.assign(toolCountsForDate, fileStats.toolCounts);
}

const hourlyForDate: Record<string, number> = {};
if (dateStr === fileStats.firstDate) {
  Object.assign(hourlyForDate, fileStats.hourlyActivity);
}

// Token, model, and branch data - assign to first date of file
const tokenForDate: TokenUsage = dateStr === fileStats.firstDate
  ? fileStats.tokenUsage
  : { inputTokens: 0, outputTokens: 0, cacheCreateTokens: 0, cacheReadTokens: 0 };

const modelCountsForDate: Record<string, number> = dateStr === fileStats.firstDate
  ? fileStats.modelCounts
  : {};

const branchCountsForDate: Record<string, number> = dateStr === fileStats.firstDate
  ? fileStats.branchCounts
  : {};
```
In processFileForCache, the logic for merging toolCounts, hourlyActivity, tokenUsage, modelCounts, and branchCounts into dailyStats is simplified by assigning all of a conversation's data to its firstDate.
This simplification can lead to inaccuracies in daily statistics for these metrics, especially for conversations that span multiple days. For example, if a long conversation has tool uses on day 1 and day 5, all those tool uses will be attributed to day 1 in the daily stats.
For more precise daily breakdowns, each message's associated metrics (tool uses, tokens, model, branch) should be attributed to the specific day the message occurred.
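A minimal sketch of that per-day attribution, shown here for tool counts with an illustrative message shape (not the project's actual types): each message's metrics are bucketed under the calendar day of its own timestamp rather than the file's first date.

```typescript
interface Msg {
  timestamp: string; // ISO 8601, e.g. "2025-01-05T11:00:00Z"
  tool?: string;     // set when the message used a tool
}

// Map of day ("YYYY-MM-DD") -> tool name -> count.
function toolCountsByDay(messages: Msg[]): Record<string, Record<string, number>> {
  const byDay: Record<string, Record<string, number>> = {};
  for (const m of messages) {
    if (!m.tool) continue;
    const day = m.timestamp.slice(0, 10); // "YYYY-MM-DD" prefix of the ISO timestamp
    const counts = (byDay[day] ??= {});
    counts[m.tool] = (counts[m.tool] ?? 0) + 1;
  }
  return byDay;
}
```

The same per-message bucketing would apply to tokens, model names, and branches, so a five-day conversation contributes to five daily rows instead of one.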
Pull request overview
This pull request introduces "Lib enhancements" with a focus on improving the Claude History dashboard with caching, adding new analytics features, and making various refinements across the codebase.
Changes:
- Added SQLite-based caching system for Claude History statistics to improve performance
- Enhanced Claude History dashboard with new analytics (token usage, model distribution, hourly heatmaps, branch activity, etc.)
- Improved error handling and validation in Azure DevOps, watchman, and plugin hooks
- Added atomic file writes and better cleanup mechanisms
- Various UI/UX improvements including deferred loading and date range filtering
Reviewed changes
Copilot reviewed 35 out of 35 changed files in this pull request and generated 10 comments.
Show a summary per file
| File | Description |
|---|---|
| src/watchman/index.ts | Enhanced path validation and client cleanup |
| src/mcp-manager/index.ts | Simplified openConfig call in interactive mode |
| src/mcp-manager/commands/install.ts | Updated ENV variable message and added NoChanges handling |
| src/mcp-debug/index.ts | Clarified help option usage |
| src/jenkins-mcp/index.ts | Added 30-second timeout for Jenkins requests |
| src/git-rebase-multiple/README.md | Added language specification to code block |
| src/fsevents-profile/index.ts | Improved validation to check for NaN values |
| src/claude-history/lib.ts | Added extensive caching functionality, new analytics, improved project name extraction |
| src/claude-history/cache.ts | New file implementing SQLite-based statistics cache |
| src/claude-history-dashboard/src/server/conversations.ts | Added new API endpoints for cached stats and date range filtering |
| src/claude-history-dashboard/src/routes/stats.tsx | Major refactor with deferred loading, date filtering, and new chart components |
| src/claude-history-dashboard/src/routes/conversation.$id.tsx | Added null check for tool.input |
| src/claude-history-dashboard/src/components/ui/* | Added new skeleton, date-range-picker components, improved button type handling |
| src/claude-history-dashboard/src/components/stats/* | Added multiple new analytics components (token usage, model distribution, etc.) |
| src/claude-history-dashboard/src/cyberpunk.css | Added new skeleton and loading animations |
| src/azure-devops/utils.ts | Removed case conversion in URL parsing |
| src/azure-devops/index.ts | Added required field validation for description |
| src/azure-devops/api.ts | Improved error handling for authentication and queries |
| plugins/genesis-tools/hooks/track-session-files.ts | Added error handling and atomic file writes |
| TMux.md | Updated session picker description |
| .claude/commands/github-pr.md | Added language specifications to code blocks |
| src/claude-history-dashboard/README.md | Updated route format from :id to $id |
Comments suppressed due to low confidence (1)
src/azure-devops/index.ts:835
- The validation logic for the required description has a potential issue. If the user answers "No" to "Add description? (required)", the code proceeds to line 834 and sets an empty description, bypassing the validation. The condition on line 827 should enforce that if `isRequired` is true, the description must be provided. Consider changing the logic to not allow the user to skip when the field is required, or remove the confirm prompt entirely when the field is required.
```typescript
const useDescription = await confirm({
  message: isRequired ? "Add description? (required)" : "Add description?",
  default: isRequired || !!state.description,
});
if (useDescription || isRequired) {
  state.description = await editor({
    message: isRequired ? "Description (required, opens editor):" : "Description (opens editor):",
    default: state.description || descriptionTemplate,
    validate: isRequired ? ((value) => value.trim() ? true : "Description is required") : undefined,
  });
} else {
  state.description = "";
}
```
```typescript
export function getDatabase(cacheDir: string = DEFAULT_CACHE_DIR): Database {
  if (_db) {
    return _db;
  }

  // Ensure directory exists
  if (!existsSync(cacheDir)) {
    mkdirSync(cacheDir, { recursive: true });
  }

  const dbPath = join(cacheDir, DB_NAME);
  logger.debug(`Opening stats cache database at ${dbPath}`);

  _db = new Database(dbPath);
  _db.exec("PRAGMA journal_mode = WAL");

  initSchema(_db);

  return _db;
}
```
The database connection uses a module-level singleton (_db) but the getDatabase function accepts a cacheDir parameter. This creates a potential bug: if getDatabase is called with different cacheDirs, it will always return the first database connection created, ignoring the cacheDir parameter in subsequent calls. Consider either removing the cacheDir parameter or implementing a map of connections keyed by cacheDir.
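One way to implement the suggested per-directory connection map, sketched with a placeholder `Database` type and `openDatabase` factory standing in for the real SQLite binding:

```typescript
// Placeholder for the real SQLite handle (better-sqlite3 / bun:sqlite in the project).
type Database = { path: string; close(): void };

const openDatabase = (path: string): Database => ({ path, close() {} });

// One cached connection per cache directory instead of a single module-level singleton.
const _dbByDir = new Map<string, Database>();

function getDatabase(cacheDir: string): Database {
  let db = _dbByDir.get(cacheDir);
  if (!db) {
    db = openDatabase(cacheDir + "/stats-cache.db");
    _dbByDir.set(cacheDir, db);
  }
  return db;
}
```

With this shape, callers passing different `cacheDir` values get independent connections, while repeated calls with the same directory still reuse one handle.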
```typescript
const bucketCounts = BUCKETS.map(bucket => ({
  ...bucket,
  count: conversationLengths.filter(len => len >= bucket.min && len <= bucket.max).length,
}))
```
The bucket filtering logic doesn't handle conversations with length 0. If there are conversations with 0 messages (which could happen for empty or corrupted data), they won't be counted in any bucket since the first bucket starts at min: 1. Consider either adding a bucket for 0-length conversations or filtering them out before passing to this component, depending on the intended behavior.
```typescript
  return fileStats;
} catch (error) {
  console.error(`Error processing file ${filePath}:`, error);
```
Using console.error instead of the logger module is inconsistent with the rest of the codebase. The file already imports logger from "@app/logger" (line 11), so this error logging should use logger.error() instead for consistency and proper log formatting.
```diff
-  console.error(`Error processing file ${filePath}:`, error);
+  logger.error(`Error processing file ${filePath}:`, error);
```
```typescript
for (const [dateStr, messageCount] of Object.entries(fileStats.dailyActivity)) {
  const existingDaily = getDailyStats(dateStr);
  const toolCountsForDate: Record<string, number> = {};

  // Distribute tool counts proportionally (simplified: assign to first date)
  if (dateStr === fileStats.firstDate) {
    Object.assign(toolCountsForDate, fileStats.toolCounts);
  }

  const hourlyForDate: Record<string, number> = {};
  if (dateStr === fileStats.firstDate) {
    Object.assign(hourlyForDate, fileStats.hourlyActivity);
  }

  // Token, model, and branch data - assign to first date of file
  const tokenForDate: TokenUsage = dateStr === fileStats.firstDate
    ? fileStats.tokenUsage
    : { inputTokens: 0, outputTokens: 0, cacheCreateTokens: 0, cacheReadTokens: 0 };

  const modelCountsForDate: Record<string, number> = dateStr === fileStats.firstDate
    ? fileStats.modelCounts
    : {};

  const branchCountsForDate: Record<string, number> = dateStr === fileStats.firstDate
    ? fileStats.branchCounts
    : {};

  const newDaily: DailyStats = {
    date: dateStr,
    project: "__all__",
    conversations: (existingDaily?.conversations || 0) + (dateStr === fileStats.firstDate ? 1 : 0),
    messages: (existingDaily?.messages || 0) + messageCount,
    subagentSessions: (existingDaily?.subagentSessions || 0) + (dateStr === fileStats.firstDate && isSubagent ? 1 : 0),
    toolCounts: mergeCounts(existingDaily?.toolCounts || {}, toolCountsForDate),
    hourlyActivity: mergeCounts(existingDaily?.hourlyActivity || {}, hourlyForDate),
    tokenUsage: mergeTokenUsage(existingDaily?.tokenUsage, tokenForDate),
    modelCounts: mergeCounts(existingDaily?.modelCounts || {}, modelCountsForDate),
    branchCounts: mergeCounts(existingDaily?.branchCounts || {}, branchCountsForDate),
  };

  upsertDailyStats(newDaily);
}
```
The incremental cache update logic has a potential data inconsistency issue. When a file is re-processed (mtime changed), the old daily stats are not removed before adding new stats. This means if a file's content changes (e.g., messages are deleted or dates change), the old stats will remain in the cache and be added to the new stats, leading to inflated counts. Consider implementing a cleanup mechanism to remove old stats for a file before adding new ones, or track file-specific contributions to daily stats so they can be subtracted when a file is updated.
```typescript
for (const [dateStr, count] of Object.entries(dailyActivity)) {
  const date = new Date(dateStr)
  const dayTokens = dailyTokens?.[dateStr]
  const totalDayTokens = dayTokens
    ? dayTokens.inputTokens + dayTokens.outputTokens + dayTokens.cacheCreateTokens + dayTokens.cacheReadTokens
    : 0

  if (date >= thisWeekStart && date <= today) {
    thisWeekMessages += count
    thisWeekTokens += totalDayTokens
    thisWeekDays++
  } else if (date >= lastWeekStart && date < thisWeekStart) {
    lastWeekMessages += count
    lastWeekTokens += totalDayTokens
    lastWeekDays++
  }
```
Date comparison issue in weekly trends calculation. The code compares Date objects created from ISO date strings (YYYY-MM-DD format) without considering timezone. When a date string is parsed with new Date(dateStr), it's interpreted as UTC midnight. However, today is created as a local Date object. This can cause off-by-one errors in week boundaries depending on the user's timezone. Consider using UTC dates consistently or parsing dates in local time explicitly (e.g., by parsing the date components and creating a local Date).
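A minimal sketch of the suggested fix: parse `"YYYY-MM-DD"` strings into *local* dates so week boundaries computed from a local `new Date()` compare consistently (passing the string straight to `new Date(dateStr)` would yield UTC midnight instead).

```typescript
// Parse "YYYY-MM-DD" into local midnight on that calendar day.
function parseLocalDate(dateStr: string): Date {
  const [year, month, day] = dateStr.split("-").map(Number);
  return new Date(year, month - 1, day); // Date months are 0-based
}
```

With all comparands built in local time, a day near a week boundary can no longer slip into the wrong week for users west of UTC.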
```tsx
</span>
<span className="text-sm text-muted-foreground">tokens</span>
<span className="ml-auto text-xl font-semibold text-amber-500">
  ~${estimatedCost.toFixed(2)}
```
The cost estimation always uses Opus pricing regardless of which models were actually used. The function accepts a modelPrices parameter but it's always called with the default (Opus). This means cost estimates will be inaccurate if users primarily use Sonnet or Haiku. Consider either calculating a weighted average based on actual model usage, or making it clear in the UI that this is an "estimated maximum cost (Opus pricing)" rather than presenting it as a definitive cost figure.
```diff
-  ~${estimatedCost.toFixed(2)}
+  Est. max cost (Opus): ~${estimatedCost.toFixed(2)}
```
```diff
 if (vsMatch) {
   return {
-    org: `https://dev.azure.com/${vsMatch[1].toUpperCase()}`,
+    org: `https://dev.azure.com/${vsMatch[1]}`,
```
Removing .toUpperCase() from organization name in URL parsing changes the behavior for Visual Studio URLs. Previously, organizations from visualstudio.com URLs would be uppercase, now they preserve original casing. This could potentially break existing deployments if organization names are case-sensitive or if there are existing references that expect uppercase names. Consider whether this is an intentional behavior change or if there's a specific case-sensitivity requirement.
```diff
-    org: `https://dev.azure.com/${vsMatch[1]}`,
+    org: `https://dev.azure.com/${vsMatch[1].toUpperCase()}`,
```
```typescript
// Atomic write: write to temp file then rename (avoids race conditions)
const tempFile = `${sessionFile}.tmp.${Date.now()}`;
writeFileSync(tempFile, JSON.stringify(sessionData, null, 2));
renameSync(tempFile, sessionFile);
```
The atomic write implementation using a temp file with timestamp might still have a race condition. If two processes write simultaneously, they could create different temp files (with different timestamps) but then both try to rename to the same target file. The second rename will fail. Consider adding error handling around the renameSync call to ensure robustness, or use a locking mechanism to prevent concurrent writes.
```diff
-renameSync(tempFile, sessionFile);
+try {
+  renameSync(tempFile, sessionFile);
+} catch (err) {
+  // Handle potential race conditions or filesystem errors during rename
+  console.error(
+    `[track-session-files] Failed to atomically update session file ${sessionFile} from temp file ${tempFile}:`,
+    err
+  );
+  // Best-effort cleanup of the temp file; ignore errors
+  try {
+    unlinkSync(tempFile);
+  } catch {
+    // Ignore cleanup failure
+  }
+}
```
```typescript
const handleDateRangeChange = async (range: { from: string; to: string }) => {
  setDateRange(range)

  // Get recent activity (last 14 days)
  const recentDays = Object.entries(stats.dailyActivity)
    .sort(([a], [b]) => b.localeCompare(a))
    .slice(0, 14)
    .reverse()

  const maxDailyMessages = Math.max(...recentDays.map(([, count]) => count), 1)
  // Only fetch if we have both dates
  if (range.from && range.to) {
    setIsLoadingRange(true)
    try {
      const stats = await getStatsInRange({ data: range })
      setFilteredStats(stats)
    } catch (error) {
      console.error('Failed to fetch stats for date range:', error)
    } finally {
      setIsLoadingRange(false)
    }
  } else {
    setFilteredStats(null)
  }
}
```
Date range validation is missing. The code should validate that 'from' date is not after 'to' date. Currently, if a user manually enters invalid date ranges (e.g., from: "2025-12-01", to: "2025-01-01"), the query will still be sent to the server without validation. Add validation to check that from <= to before making the API call.
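A minimal sketch of the suggested guard. Since the dates are ISO `YYYY-MM-DD` strings, lexicographic comparison matches chronological order, so no `Date` parsing is needed before deciding whether to send the query:

```typescript
// True only when both endpoints are present and from does not come after to.
function isValidRange(range: { from: string; to: string }): boolean {
  return Boolean(range.from && range.to) && range.from <= range.to;
}
```

The handler would call this before `getStatsInRange` and skip the server round-trip (or surface an inline error) for inverted ranges.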
```typescript
// Only treat as encoded path if it starts with dash (like "-Users-Martin-...")
// This preserves legitimate dashed project names like "my-cool-project"
if (projectDir.startsWith("-")) {
  // Convert -Users-Martin-Tresors-Projects-GenesisTools to GenesisTools
  const parts = projectDir.split("-");
  return parts[parts.length - 1] || projectDir;
```
The logic change for extracting project names may break existing project name extraction. The new code only treats paths starting with a dash as encoded (like "-Users-Martin-..."), but this might not handle all edge cases. For example, a legitimate project name that starts with a dash like "-my-project" would be incorrectly parsed as an encoded path. Consider adding additional validation to distinguish between encoded paths (which typically have multiple dash-separated segments representing a full file path) and legitimate project names that happen to start with a dash.
```diff
-// Only treat as encoded path if it starts with dash (like "-Users-Martin-...")
-// This preserves legitimate dashed project names like "my-cool-project"
-if (projectDir.startsWith("-")) {
-  // Convert -Users-Martin-Tresors-Projects-GenesisTools to GenesisTools
-  const parts = projectDir.split("-");
-  return parts[parts.length - 1] || projectDir;
+// Only treat as encoded path if it looks like an encoded full path:
+// - starts with a dash (e.g. "-Users-Martin-...")
+// - has multiple dash-separated segments (to avoid mangling names like "-my-project")
+if (projectDir.startsWith("-")) {
+  const parts = projectDir.split("-");
+  // parts[0] will be "" due to the leading dash; require several path-like segments
+  const looksEncodedPath = parts.length >= 4;
+  if (looksEncodedPath) {
+    // Convert -Users-Martin-Tresors-Projects-GenesisTools to GenesisTools
+    return parts[parts.length - 1] || projectDir;
+  }
```
Actionable comments posted: 14
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)

src/watchman/index.ts (1)

167-167: Use the already-imported `watchman` module instead of dynamic `require()`. The file already imports `fb-watchman` at line 1 as `watchman`, but this line uses a dynamic `require()`. This is inconsistent and could cause issues with bundlers or in strict ESM environments.

Suggested fix:

```diff
-  const client = new (require("fb-watchman").Client)();
+  const client = new watchman.Client();
```

src/mcp-debug/index.ts (1)

93-101: The help documentation doesn't document the `--help-full` option that's being added. The `--help-full` option at line 99 is implemented and invokes the custom `showHelp()` function, but the help text (lines 17–21) never mentions this flag. Additionally, the option description says "use --help-full, not -h", which is misleading since `-h` and `--help` still work by default in commander and show a different help output.

Update the help text to document both help options, and remove the confusing "not -h" language:

🔧 Suggested updates

```diff
-  -h, --help       Show this help message
+  -h, --help       Show brief help
+  --help-full      Show extended help
```

```diff
-  .option("--help-full", "Show extended help (use --help-full, not -h)")
+  .option("--help-full", "Show extended help")
```
🤖 Fix all issues with AI agents
In `@src/claude-history-dashboard/src/components/stats/ChartSkeleton.tsx`:
- Around line 16-27: The render uses Math.random() inside ChartSkeleton's map to
set Skeleton heights, which causes SSR/client hydration mismatches; replace the
random height calculation in the style prop with a deterministic function of the
index (i) instead (for example a seeded or formulaic value derived from i) so
heights are stable between server and client—update the Array.from({ length: 14
}).map((_, i) => ...) block and the style.height expression (used for Skeleton
variant="data-stream") to compute height from i rather than Math.random().
In
`@src/claude-history-dashboard/src/components/stats/ConversationLengthHistogram.tsx`:
- Around line 9-15: The BUCKETS array has a mismatch: the last bucket's label is
'250+' but its min is 251 so value 250 is excluded; update the last bucket in
BUCKETS (where label === '250+' and min === 251) to either rename the label to
'251+' or change min to 250 so the range matches the label—pick one and make the
corresponding edit in the BUCKETS definition.
In `@src/claude-history-dashboard/src/components/stats/CumulativeChart.tsx`:
- Around line 104-145: The SVG gradient IDs messagesGradient and tokensGradient
in CumulativeChart can collide across instances; update the component to
generate unique per-instance IDs (use React's useId or similar) and replace the
hardcoded ids and their references in the fill attributes (e.g.,
fill={`url(#${messagesId})`} and fill={`url(#${tokensId})`}); ensure the
<linearGradient id=...> elements use the generated messagesId and tokensId so
displayData-driven paths for tokens and messages correctly reference
instance-scoped gradients while leaving the existing maxTokens, maxMessages and
path d-calculation logic unchanged.
In `@src/claude-history-dashboard/src/components/stats/ProjectListSkeleton.tsx`:
- Around line 16-25: The skeleton widths currently use Math.random() which
causes SSR hydration mismatches; in ProjectListSkeleton replace both usages of
Math.random() (the width style inside the Array.from({ length: 5 }).map callback
and the second occurrence near line 64) with a deterministic calculation derived
from the loop index i (for example a small formula or lookup array based on i)
so the width values are stable between server and client; update the width
expression used in the Skeleton style to use that deterministic value and keep
the existing animationDelay logic that uses i.
In `@src/claude-history-dashboard/src/components/stats/WeeklyTrendsCard.tsx`:
- Around line 34-78: calculateWeeklyStats is comparing Date objects with time
components causing timezone/off-by-one errors and is averaging over active days
instead of calendar days; normalize all dates to day-start by calling
setHours(0,0,0,0) on today, thisWeekStart, lastWeekStart and on each parsed date
(from dailyActivity keys) before comparisons, and change average calculations
(avgMessagesThisWeek, avgMessagesLastWeek and token averages) to divide by 7
(calendar days per week) rather than thisWeekDays/lastWeekDays if you want
per-calendar-day averages; keep dailyActivity/dailyTokens lookups using the
original date string keys but use the normalized Date for range checks.
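The normalization and calendar-day averaging can be sketched like this (`startOfDay` and `averagePerCalendarDay` are illustrative names, not the component's actual code):

```typescript
// Normalize a Date to local midnight so range checks ignore time-of-day.
function startOfDay(d: Date): Date {
  const copy = new Date(d);
  copy.setHours(0, 0, 0, 0);
  return copy;
}

// Average messages per calendar day over a 7-day window,
// dividing by 7 rather than by the number of active days.
function averagePerCalendarDay(
  dailyActivity: Record<string, number>,
  weekStart: Date,
  weekEnd: Date
): number {
  let total = 0;
  const start = startOfDay(weekStart);
  const end = startOfDay(weekEnd);
  for (const [dateStr, count] of Object.entries(dailyActivity)) {
    const day = startOfDay(new Date(dateStr));
    if (day >= start && day <= end) total += count;
  }
  return Math.round(total / 7);
}
```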
In `@src/claude-history-dashboard/src/components/ui/date-range-picker.tsx`:
- Around line 11-56: The activePreset currently uses numeric days and null which
collides with the "All" preset and custom input; update the presets array to
include distinct keys (e.g., 'today','7d','30d','90d','all') alongside their
days values, change activePreset in DateRangePicker from useState<number |
null>(null) to a string literal type (including a 'custom' value), update
handlePresetClick to setActivePreset to the preset key and call
onChange(getDateRange(days)), and update handleCustomChange to
setActivePreset('custom') before calling onChange; finally change the button
active check (activePreset === days) to compare the preset key (activePreset ===
'<key>') so the "All" button no longer appears active during custom edits while
keeping getDateRange(days) behavior intact.
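As a sketch of the distinct-key approach (names and shapes are illustrative, not the actual component code):

```typescript
// Hypothetical preset model: each button gets a distinct string key,
// so "All" (days: null) can no longer collide with the custom state.
type PresetKey = 'today' | '7d' | '30d' | '90d' | 'all' | 'custom';

const PRESETS: { key: PresetKey; label: string; days: number | null }[] = [
  { key: 'today', label: 'Today', days: 0 },
  { key: '7d', label: '7d', days: 7 },
  { key: '30d', label: '30d', days: 30 },
  { key: '90d', label: '90d', days: 90 },
  { key: 'all', label: 'All', days: null },
];

// A button is active only when its key matches the current state.
function isActive(activePreset: PresetKey, preset: { key: PresetKey }): boolean {
  return activePreset === preset.key;
}
```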
- Around line 19-26: The current formatDate uses toISOString() which yields UTC
dates and shifts local dates; update formatDate(date: Date) to build a
YYYY-MM-DD string from the date's local parts (use date.getFullYear(),
pad(date.getMonth()+1), pad(date.getDate())) so the string matches HTML
input[type="date"] expectations, then ensure getDateRange(days) uses that
updated formatDate; also fix the dateRange initialization in routes/stats.tsx
(replace new Date().toISOString().split('T')[0]) to use the same local-date
formatter so both components produce consistent local dates.
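For illustration, the local-date formatting described above could look like this (the helper name `formatLocalDate` is hypothetical, not the component's actual code):

```typescript
// Hypothetical helper: builds YYYY-MM-DD from local date parts.
// Unlike toISOString(), it never shifts the calendar day across timezones.
function formatLocalDate(date: Date): string {
  const pad = (n: number) => String(n).padStart(2, "0");
  return `${date.getFullYear()}-${pad(date.getMonth() + 1)}-${pad(date.getDate())}`;
}

// Suitable as a default value for an HTML input[type="date"]:
const todayLocal = formatLocalDate(new Date());
```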
In `@src/claude-history-dashboard/src/components/ui/skeleton.tsx`:
- Around line 1-7: The file uses React types (e.g.,
React.HTMLAttributes<HTMLDivElement>) without importing React; add the React
type import at the top of skeleton.tsx (matching the pattern used in nearby
files like card.tsx) so that SkeletonProps and related types
compile—specifically, import React (or import type React) before the existing
imports so SkeletonVariant and the SkeletonProps interface resolve correctly.
In `@src/claude-history-dashboard/src/routes/stats.tsx`:
- Around line 699-704: The Skeleton elements use Math.random() to compute
heights (e.g., the map in stats.tsx producing style height: `${20 +
Math.random() * 60}%`), which breaks SSR hydration; replace the
non-deterministic random with a deterministic calculation derived from the loop
index `i` (or a fixed array of heights) so the height for each Skeleton is
stable across server and client renders. Update the same pattern in
ProjectListSkeleton.tsx and ChartSkeleton.tsx and ensure the Skeleton component
instances keep the same keying (key={i}) while using the deterministic height
formula based on `i`.
- Around line 37-40: The default "to" date uses new Date().toISOString() (UTC)
which can produce the wrong local date; replace that with a local ISO date
string built from the Date object’s local year/month/day (e.g., using
getFullYear(), getMonth()+1, getDate() with zero-pad) when initializing
dateRange (the state using dateRange and setDateRange) and similarly update
formatDate (the function named formatDate in date-range-picker.tsx) to return
the local YYYY-MM-DD string instead of toISOString().split('T')[0], ensuring
both locations use the same local-date construction logic.
In `@src/claude-history/lib.ts`:
- Around line 1218-1281: Existing code re-adds recomputed file stats on mtime
change without removing the file's previous contribution, causing
double-counting; update the logic in the block that handles changed files (use
getFileIndex, upsertFileIndex, upsertDailyStats, mergeCounts, mergeTokenUsage)
to first retrieve the prior per-file stats from the stored file index (extend
the stored FileIndex entry to include per-file dailyActivity, toolCounts,
hourlyActivity, tokenUsage, modelCounts, branchCounts when first indexing),
subtract those prior per-file metrics from each affected DailyStats (by calling
upsertDailyStats with negative deltas or by fetching existingDaily and
decrementing its fields) and only then add the newly computed fileStats;
alternatively, if you prefer simpler semantics, detect mtime mismatch in
getFileIndex and trigger a targeted rebuild of affected days (or a full refresh)
instead of incrementally merging so totals are not inflated.
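The subtract-then-add idea can be sketched in isolation (field names here are assumptions, not the actual FileIndex schema):

```typescript
// Per-day contribution of one file; the real entry would also carry
// toolCounts, hourlyActivity, modelCounts, etc.
interface DayDelta { messages: number; tokens: number }

function applyDelta(totals: Record<string, DayDelta>, date: string, delta: DayDelta): void {
  const t = totals[date] ?? { messages: 0, tokens: 0 };
  totals[date] = { messages: t.messages + delta.messages, tokens: t.tokens + delta.tokens };
}

// On mtime change: remove the file's previous contribution first,
// then add the freshly recomputed stats, so totals are never inflated.
function reindexChangedFile(
  totals: Record<string, DayDelta>,
  prior: Record<string, DayDelta>,
  fresh: Record<string, DayDelta>
): void {
  for (const [date, d] of Object.entries(prior)) {
    applyDelta(totals, date, { messages: -d.messages, tokens: -d.tokens });
  }
  for (const [date, d] of Object.entries(fresh)) {
    applyDelta(totals, date, d);
  }
}
```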
- Around line 1339-1382: The projectCounts query and conversationLengths
calculation are not scoped to the requested dateRange; update
getConversationStatsWithCache and getStatsForDateRange to pass the dateRange
(start, end) into the logic: modify the file_index query used to build
projectCounts to include an overlap filter using first_date <= end AND last_date
>= start, and change getConversationLengths to accept a dateRange parameter and
compute lengths only for conversations whose activity overlaps that range
(update callers accordingly); ensure updateCachedTotals and returned
projectCounts/conversationLengths now reflect the filtered results.
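The overlap filter reduces to a standard interval-intersection predicate; a sketch (table and column names follow the comment above, the exact schema may differ):

```typescript
// Two closed ranges [first_date, last_date] and [start, end] overlap
// exactly when first_date <= end AND last_date >= start.
function projectCountsQuery(start: string, end: string): { sql: string; params: string[] } {
  const sql =
    'SELECT project, COUNT(*) AS conversations FROM file_index ' +
    'WHERE first_date <= ? AND last_date >= ? GROUP BY project';
  return { sql, params: [end, start] };
}
```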
In `@src/watchman/index.ts`:
- Around line 65-73: The module-level client used in getDirOfInterest is left
open on successful returns, causing a resource leak; either instantiate a local
Watchman client inside getDirOfInterest and replace references to the
module-level client, or call client.end() on every control path before returning
(including the success path after resolve/return and before handing off to
watchWithRetry). Update getDirOfInterest to ensure the Watchman client (client)
is closed (client.end()) after the watch-list response is processed and before
any return, or refactor to create and close a local client variable within
getDirOfInterest instead of using the module-level client.
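The create-use-always-close pattern can be sketched with a minimal stand-in for the Watchman client (the `WatchmanLike` interface and function names are illustrative, not the fb-watchman API itself):

```typescript
// Minimal stand-in for the subset of fb-watchman's Client surface used here.
interface WatchmanLike {
  command(args: string[], cb: (err: Error | null, resp?: { roots?: string[] }) => void): void;
  end(): void;
}

// A local client is created per call and closed on every control path.
async function listWatchedRoots(makeClient: () => WatchmanLike): Promise<string[]> {
  const client = makeClient();
  try {
    return await new Promise<string[]>((resolve, reject) => {
      client.command(['watch-list'], (err, resp) => {
        if (err) reject(err);
        else resolve(resp?.roots ?? []);
      });
    });
  } finally {
    client.end(); // runs whether the command succeeded or threw
  }
}
```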
In `@TMux.md`:
- Line 18: Update the wording in the "Jump Between Projects" section to use the
term "session tree" instead of "session picker": locate the paragraph or example
that currently references "session picker" (within the "Jump Between Projects"
heading) and replace that phrase with "session tree" so terminology matches the
Sessions table and other examples.
🧹 Nitpick comments (12)
src/claude-history-dashboard/src/cyberpunk.css (1)
275-323: Add `prefers-reduced-motion` fallback for infinite animations. Continuous shimmer/stream/pulse can be problematic for motion-sensitive users. Consider disabling these animations when reduced motion is requested.
♿ Suggested CSS addition
```diff
+@media (prefers-reduced-motion: reduce) {
+  .animate-skeleton-shimmer,
+  .animate-data-stream,
+  .animate-neon-pulse {
+    animation: none !important;
+  }
+}
```
plugins/genesis-tools/hooks/track-session-files.ts (1)
93-96: Good atomic write pattern; consider cleanup on failure. The atomic write approach correctly prevents race conditions and partial writes. However, if `writeFileSync` or `renameSync` throws, the temp file may be left orphaned. Consider wrapping in a try-catch to clean up:
♻️ Optional: Add temp file cleanup on failure
```diff
 // Atomic write: write to temp file then rename (avoids race conditions)
 const tempFile = `${sessionFile}.tmp.${Date.now()}`;
-writeFileSync(tempFile, JSON.stringify(sessionData, null, 2));
-renameSync(tempFile, sessionFile);
+try {
+  writeFileSync(tempFile, JSON.stringify(sessionData, null, 2));
+  renameSync(tempFile, sessionFile);
+} catch (err) {
+  try {
+    unlinkSync(tempFile);
+  } catch {
+    // Ignore cleanup failure
+  }
+  throw err;
+}
```
src/claude-history-dashboard/src/components/stats/LoadingProgress.tsx (3)
1-5: Consider exporting the interface. Exporting `LoadingProgressProps` would allow parent components to properly type their state when passing props to this component.
Suggested change
```diff
-interface LoadingProgressProps {
+export interface LoadingProgressProps {
   loadedDays: number
   totalDays: number
   isLoading: boolean
 }
```
14-14: Consider clamping percentage to the 0-100 range. If `loadedDays` exceeds `totalDays` due to a race condition or data inconsistency, the percentage could exceed 100%. While `overflow-hidden` clips it visually, clamping ensures correctness.
Suggested change
```diff
- const percentage = totalDays > 0 ? (loadedDays / totalDays) * 100 : 0
+ const percentage = totalDays > 0 ? Math.min((loadedDays / totalDays) * 100, 100) : 0
```
22-27: Consider using Shadcn's Progress component for consistency. Per the coding guidelines, Shadcn components are preferred. You could install it with `pnpm dlx shadcn@latest add progress`. However, if the custom gradient styling (from-amber-500 to-cyan-500) is intentional for the dashboard's design language and not achievable with Shadcn's Progress, the current implementation is acceptable. Based on learnings, Shadcn is the preferred component library for this dashboard.
src/watchman/index.ts (2)
54-61: Consider validating that the path is a directory, not just that it exists. The current check only verifies the path exists but doesn't confirm it's a directory. A user could accidentally pass a file path, leading to unexpected behavior downstream when Watchman tries to watch it.
Suggested improvement
```diff
 if (arg) {
     // Resolve relative paths to absolute
     const resolved = path.isAbsolute(arg) ? arg : path.resolve(process.cwd(), arg);
-    if (fs.existsSync(resolved)) {
+    if (fs.existsSync(resolved) && fs.statSync(resolved).isDirectory()) {
         return resolved;
     }
-    logger.error(`Invalid directory path provided: ${arg} (resolved to ${resolved}, but does not exist)`);
+    logger.error(`Invalid directory path provided: ${arg} (resolved to ${resolved}, path does not exist or is not a directory)`);
 }
```
211-215: Add top-level error handling for meaningful exit codes. The main execution block lacks error handling. If an unexpected error occurs, it results in an unhandled promise rejection rather than a clean exit with a meaningful error code. Per coding guidelines, tools should use meaningful exit codes.
Suggested improvement
```diff
 (async () => {
+    try {
         const dirOfInterest = await getDirOfInterest();
         logger.info(`Directory of interest: ${dirOfInterest}`);
         await watchWithRetry(dirOfInterest);
+    } catch (error) {
+        logger.error("Unexpected error:", error);
+        process.exit(1);
+    }
 })();
```
src/claude-history-dashboard/src/components/stats/ToolCategoriesChart.tsx (1)
71-118: Consider an empty-state message when totals are zero. Right now the card can render with no content when there’s no data. A small empty state improves UX.
♻️ Suggested empty-state guard
```diff
 const total = sortedCategories.reduce((sum, [, count]) => sum + count, 0)

+if (total === 0) {
+  return (
+    <Card>
+      <CardHeader>
+        <CardTitle className="flex items-center gap-2">
+          <Layers className="w-5 h-5 text-secondary" />
+          Tool Categories
+        </CardTitle>
+      </CardHeader>
+      <CardContent>
+        <p className="text-sm text-muted-foreground">No tool usage data available</p>
+      </CardContent>
+    </Card>
+  )
+}
+
 return (
```
src/claude-history-dashboard/src/components/stats/HourlyHeatmap.tsx (1)
56-69: Consider using `formatHour` for the axis labels. This keeps labels consistent with the tooltip format and avoids ambiguity.
♻️ Suggested label formatting
```diff
-{hours.slice(0, 12).map(({ hour }) => (
+{hours.slice(0, 12).map(({ hour }) => (
   <div key={hour} className="text-[8px] text-muted-foreground text-center">
-    {hour}
+    {formatHour(hour)}
   </div>
 ))}
 </div>
 <div className="grid grid-cols-12 gap-1">
 {hours.slice(12).map(({ hour }) => (
   <div key={hour} className="text-[8px] text-muted-foreground text-center">
-    {hour}
+    {formatHour(hour)}
   </div>
 ))}
```
src/claude-history-dashboard/src/components/stats/TokenUsageCard.tsx (1)
11-42: Expose a pricing-model prop to match the multi-model price table. Right now the table defines multiple models but the UI always uses opus. Consider letting callers select which pricing tier to apply.
♻️ Suggested prop + usage update
```diff
 interface TokenUsageCardProps {
   tokenUsage: TokenUsage
+  pricingModel?: keyof typeof TOKEN_PRICES
 }

-export function TokenUsageCard({ tokenUsage }: TokenUsageCardProps) {
+export function TokenUsageCard({ tokenUsage, pricingModel = 'opus' }: TokenUsageCardProps) {
   const totalTokens = tokenUsage.inputTokens + tokenUsage.outputTokens + tokenUsage.cacheCreateTokens + tokenUsage.cacheReadTokens
-  const estimatedCost = estimateCost(tokenUsage)
+  const estimatedCost = estimateCost(tokenUsage, TOKEN_PRICES[pricingModel])
```
src/claude-history-dashboard/src/server/conversations.ts (1)
44-49: Prefer shared TokenUsage type to avoid drift.
You already export `TokenUsage` from `@app/claude-history/lib`, so importing it here keeps the server and cache models aligned.
♻️ Suggested refactor
```diff
   getStatsForDateRange,
   getAvailableProjects,
   type SearchFilters,
+  type TokenUsage,
   type DateRange,
 } from '@app/claude-history/lib'
@@
-export interface TokenUsage {
-  inputTokens: number
-  outputTokens: number
-  cacheCreateTokens: number
-  cacheReadTokens: number
-}
```
src/claude-history/lib.ts (1)
1284-1286: Use the centralized logger for errors.
Swap `console.error` for the shared logger to keep output consistent and configurable. As per coding guidelines, use the centralized logger from `src/logger.ts`.
♻️ Suggested refactor
```diff
 import { createReadStream } from "fs";
 import { stat } from "fs/promises";
 import { createInterface } from "readline";
+import logger from "@app/logger";
@@
-  } catch (error) {
-    console.error(`Error processing file ${filePath}:`, error);
-    return null;
-  }
+  } catch (error) {
+    logger.error(`Error processing file ${filePath}:`, error);
+    return null;
+  }
```
```tsx
{Array.from({ length: 14 }).map((_, i) => (
  <div
    key={i}
    className="flex-1 flex flex-col items-center justify-end h-full gap-1"
  >
    <Skeleton
      variant="data-stream"
      className="w-full rounded-t"
      style={{
        height: `${20 + Math.random() * 60}%`,
        animationDelay: `${i * 100}ms`,
      }}
```
🧩 Analysis chain
🏁 Script executed:
```shell
# Check if the file exists and read its content
cat -n "src/claude-history-dashboard/src/components/stats/ChartSkeleton.tsx"
```
Repository: genesiscz/GenesisTools
Length of output: 1501
🏁 Script executed:
```shell
# Check project setup to understand if SSR is used
head -50 "src/claude-history-dashboard/package.json"
```
Length of output: 1680
🏁 Script executed:
```shell
# Look for Next.js configuration to determine SSR setup
find src/claude-history-dashboard -name "next.config.*" -o -name "tsconfig.json" | head -5
```
Length of output: 108
🏁 Script executed:
```shell
# Verify if there are other similar patterns in the codebase using Math.random during render
rg "Math\.random\(\)" "src/claude-history-dashboard/src/components" -A 2 -B 2
```
Length of output: 1734
🏁 Script executed:
```shell
# Check if Skeleton is properly imported from shadcn/ui (per coding guidelines)
rg "from.*skeleton" "src/claude-history-dashboard/src" -i
```
Length of output: 806
Avoid Math.random() during render in SSR context to prevent hydration mismatch.
This application uses TanStack Start (an SSR framework), making random values during render problematic. The server renders one set of heights while the client hydrates with different random values, causing hydration mismatches and visual flicker. Use deterministic heights based on the index instead.
Deterministic height example
{Array.from({ length: 14 }).map((_, i) => (
<div
key={i}
className="flex-1 flex flex-col items-center justify-end h-full gap-1"
>
<Skeleton
variant="data-stream"
className="w-full rounded-t"
style={{
- height: `${20 + Math.random() * 60}%`,
+ height: `${20 + ((i * 37) % 60)}%`,
animationDelay: `${i * 100}ms`,
}}
/>
<Skeleton className="h-2 w-3" variant="default" />
</div>
))}
```tsx
const BUCKETS = [
  { label: '1-10', min: 1, max: 10 },
  { label: '11-25', min: 11, max: 25 },
  { label: '26-50', min: 26, max: 50 },
  { label: '51-100', min: 51, max: 100 },
  { label: '101-250', min: 101, max: 250 },
  { label: '250+', min: 251, max: Infinity },
```
Fix the “250+” bucket label/range mismatch.
The last bucket is labeled “250+” but starts at 251, so 250 is excluded from that label. Consider renaming to “251+” or adjusting the min to 250.
📝 Option A: rename label
```diff
- { label: '250+', min: 251, max: Infinity },
+ { label: '251+', min: 251, max: Infinity },
```
📝 Option B: adjust range
```diff
- { label: '250+', min: 251, max: Infinity },
+ { label: '250+', min: 250, max: Infinity },
```
```tsx
<defs>
  <linearGradient id="messagesGradient" x1="0" y1="0" x2="0" y2="1">
    <stop offset="0%" stopColor="rgb(0, 240, 255)" stopOpacity="0.4" />
    <stop offset="100%" stopColor="rgb(0, 240, 255)" stopOpacity="0.05" />
  </linearGradient>
  <linearGradient id="tokensGradient" x1="0" y1="0" x2="0" y2="1">
    <stop offset="0%" stopColor="rgb(255, 149, 0)" stopOpacity="0.3" />
    <stop offset="100%" stopColor="rgb(255, 149, 0)" stopOpacity="0.05" />
  </linearGradient>
</defs>

{/* Tokens area (background) */}
{maxTokens > 0 && (
  <path
    d={`
      M 0 100
      ${displayData.map((d, i) => {
        const x = i * 10
        const y = 100 - (d.cumulativeTokens / maxTokens) * 95
        return `L ${x} ${y}`
      }).join(' ')}
      L ${(displayData.length - 1) * 10} 100
      Z
    `}
    fill="url(#tokensGradient)"
  />
)}

{/* Messages area */}
<path
  d={`
    M 0 100
    ${displayData.map((d, i) => {
      const x = i * 10
      const y = 100 - (d.cumulativeMessages / maxMessages) * 95
      return `L ${x} ${y}`
    }).join(' ')}
    L ${(displayData.length - 1) * 10} 100
    Z
  `}
  fill="url(#messagesGradient)"
/>
```
🧩 Analysis chain
🏁 Script executed:
```shell
fd -t f "CumulativeChart.tsx" src/
```
Repository: genesiscz/GenesisTools
Length of output: 135
🏁 Script executed:
```shell
cat -n src/claude-history-dashboard/src/components/stats/CumulativeChart.tsx | sed -n '1,150p'
```
Length of output: 6359
🏁 Script executed:
```shell
rg -t tsx -t ts -t jsx -t js "CumulativeChart" src/claude-history-dashboard/ --no-heading
```
Length of output: 93
🏁 Script executed:
```shell
rg "CumulativeChart" src/claude-history-dashboard/
```
Length of output: 796
🏁 Script executed:
```shell
cat -n src/claude-history-dashboard/src/routes/stats.tsx | sed -n '1,100p'
```
Length of output: 4991
🏁 Script executed:
```shell
rg -A 2 "CumulativeChart dailyActivity" src/claude-history-dashboard/src/routes/stats.tsx
```
Length of output: 361
Use unique SVG gradient IDs to prevent collisions when multiple chart instances render.
Fixed IDs (messagesGradient, tokensGradient) collide when multiple CumulativeChart instances render on the same page, causing fills to reference the wrong gradient. Use React's useId hook to generate per-instance IDs and reference them in the url(#...) fill attributes.
🛠️ Suggested fix (unique IDs per instance)
import { Card, CardContent, CardHeader, CardTitle } from '@/components/ui/card'
import { TrendingUp } from 'lucide-react'
+import { useId } from 'react'
@@
-export function CumulativeChart({ dailyActivity, dailyTokens, title = 'Cumulative Growth' }: CumulativeChartProps) {
+export function CumulativeChart({ dailyActivity, dailyTokens, title = 'Cumulative Growth' }: CumulativeChartProps) {
+ const gradientId = useId()
+ const messagesGradientId = `${gradientId}-messages`
+ const tokensGradientId = `${gradientId}-tokens`
@@
- <linearGradient id="messagesGradient" x1="0" y1="0" x2="0" y2="1">
+ <linearGradient id={messagesGradientId} x1="0" y1="0" x2="0" y2="1">
@@
- <linearGradient id="tokensGradient" x1="0" y1="0" x2="0" y2="1">
+ <linearGradient id={tokensGradientId} x1="0" y1="0" x2="0" y2="1">
@@
- fill="url(#tokensGradient)"
+ fill={`url(#${tokensGradientId})`}
@@
- fill="url(#messagesGradient)"
+ fill={`url(#${messagesGradientId})`}
```tsx
{Array.from({ length: 5 }).map((_, i) => (
  <div key={i} className="flex items-center gap-3">
    <div className="flex-1">
      <div className="flex items-center justify-between mb-1">
        <Skeleton
          className="h-4"
          style={{
            width: `${60 + Math.random() * 30}%`,
            animationDelay: `${i * 150}ms`,
          }}
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# First, verify the file exists and read its content
if [ -f "src/claude-history-dashboard/src/components/stats/ProjectListSkeleton.tsx" ]; then
  wc -l "src/claude-history-dashboard/src/components/stats/ProjectListSkeleton.tsx"
  echo "---"
  cat -n "src/claude-history-dashboard/src/components/stats/ProjectListSkeleton.tsx"
else
  echo "File not found. Searching for similar files..."
  find . -name "ProjectListSkeleton*" 2>/dev/null
fi
```
Repository: genesiscz/GenesisTools
Length of output: 2998
🏁 Script executed:
```shell
#!/bin/bash
# Check project configuration to see if SSR is relevant
echo "=== Checking for Next.js or SSR configuration ==="
if [ -f "package.json" ]; then
  head -50 package.json
fi
echo -e "\n=== Checking for next.config files ==="
find . -maxdepth 2 -name "next.config.*" -o -name "nuxt.config.*" 2>/dev/null
echo -e "\n=== Checking if project appears to be Next.js ==="
grep -E "(next|nextjs)" package.json || echo "No 'next' found in package.json"
```
Length of output: 2089
🏁 Script executed:
```shell
#!/bin/bash
# Check exports and usage of ProjectListSkeleton
echo "=== Searching for exports of ProjectListSkeleton ==="
rg -n "ProjectListSkeleton" src/claude-history-dashboard/
echo -e "\n=== Check tsconfig to understand build target ==="
find src/claude-history-dashboard -name "tsconfig.json" -exec cat {} \;
echo -e "\n=== Check for README or package.json in claude-history-dashboard ==="
ls -la src/claude-history-dashboard/ | head -20
cat src/claude-history-dashboard/package.json 2>/dev/null || echo "No package.json"
```
Length of output: 3975
Use deterministic skeleton widths to prevent hydration mismatches in SSR context.
Math.random() produces different values on server and client, causing hydration warnings and visual jank. Replace with deterministic values derived from the loop index i.
♻️ Suggested deterministic widths
```diff
-  width: `${60 + Math.random() * 30}%`,
+  width: `${60 + (i * 13) % 30}%`,
   animationDelay: `${i * 150}ms`,
 }}
 ...
-  width: `${60 + Math.random() * 40}px`,
+  width: `${60 + (i * 17) % 40}px`,
   animationDelay: `${i * 50}ms`,
 }}
```
Applies to lines 23 and 64.
🤖 Prompt for AI Agents

In `@src/claude-history-dashboard/src/components/stats/ProjectListSkeleton.tsx`
around lines 16–25: the skeleton widths currently use Math.random(), which
causes SSR hydration mismatches. In ProjectListSkeleton, replace both usages of
Math.random() (the width style inside the Array.from({ length: 5 }).map callback
and the second occurrence near line 64) with a deterministic calculation derived
from the loop index i (for example, a small formula or a lookup array based on i)
so the width values are stable between server and client. Update the width
expression in the Skeleton style to use that deterministic value and keep
the existing animationDelay logic that uses i.
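A minimal sketch of the deterministic-width idea (the helper name `skeletonWidthPercent` is illustrative, not from the codebase): the same index always maps to the same width, so server- and client-rendered HTML agree.

```typescript
function skeletonWidthPercent(i: number): string {
  // 60% base width plus a stable 0–29% offset; 13 is coprime with 30, so
  // consecutive rows get visibly different widths without repeating early.
  return `${60 + (i * 13) % 30}%`;
}

// Computed identically on every render, server or client.
const widths = Array.from({ length: 5 }, (_, i) => skeletonWidthPercent(i));
```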
```ts
function calculateWeeklyStats(dailyActivity: Record<string, number>, dailyTokens?: Record<string, TokenUsage>) {
  const today = new Date()
  const thisWeekStart = new Date(today)
  thisWeekStart.setDate(today.getDate() - 6)
  const lastWeekStart = new Date(today)
  lastWeekStart.setDate(today.getDate() - 13)

  let thisWeekMessages = 0
  let lastWeekMessages = 0
  let thisWeekTokens = 0
  let lastWeekTokens = 0
  let thisWeekDays = 0
  let lastWeekDays = 0

  for (const [dateStr, count] of Object.entries(dailyActivity)) {
    const date = new Date(dateStr)
    const dayTokens = dailyTokens?.[dateStr]
    const totalDayTokens = dayTokens
      ? dayTokens.inputTokens + dayTokens.outputTokens + dayTokens.cacheCreateTokens + dayTokens.cacheReadTokens
      : 0

    if (date >= thisWeekStart && date <= today) {
      thisWeekMessages += count
      thisWeekTokens += totalDayTokens
      thisWeekDays++
    } else if (date >= lastWeekStart && date < thisWeekStart) {
      lastWeekMessages += count
      lastWeekTokens += totalDayTokens
      lastWeekDays++
    }
  }

  // Calculate percentage changes
  const messageChange = lastWeekMessages > 0
    ? ((thisWeekMessages - lastWeekMessages) / lastWeekMessages) * 100
    : thisWeekMessages > 0 ? 100 : 0

  const tokenChange = lastWeekTokens > 0
    ? ((thisWeekTokens - lastWeekTokens) / lastWeekTokens) * 100
    : thisWeekTokens > 0 ? 100 : 0

  // Calculate averages
  const avgMessagesThisWeek = thisWeekDays > 0 ? Math.round(thisWeekMessages / thisWeekDays) : 0
  const avgMessagesLastWeek = lastWeekDays > 0 ? Math.round(lastWeekMessages / lastWeekDays) : 0
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, let's look at the file in question
cat -n src/claude-history-dashboard/src/components/stats/WeeklyTrendsCard.tsx
```

Length of output: 6929

🏁 Script executed:

```shell
# Search for how dailyActivity is generated/formatted
rg -n "dailyActivity" src/claude-history-dashboard --type ts --type tsx -B 2 -A 2
```

Length of output: 93

🏁 Script executed:

```shell
# Look for date parsing patterns in the codebase
rg -n "toISOString|toLocaleDateString|new Date\(" src/claude-history-dashboard --type ts --type tsx | head -30
```

Length of output: 93

🏁 Script executed:

```shell
# Search for where dailyActivity is populated or fetched
rg -n "dailyActivity|toISOString|toLocaleDateString" src/claude-history-dashboard --type ts --type js -B 3 -A 3
```

Length of output: 11633

🏁 Script executed:

```shell
# Look for API calls or data fetching that generates this data
rg -n "Record<string, number>" src/claude-history-dashboard -B 5 -A 5
```

Length of output: 14167

🏁 Script executed:

```shell
# Check the structure and where WeeklyTrendsCard is used
rg -n "WeeklyTrendsCard" src/claude-history-dashboard -B 3 -A 3
```

Length of output: 3458

🏁 Script executed:

```shell
# Look for date formatting patterns in the entire codebase
find src/claude-history-dashboard -type f \( -name "*.ts" -o -name "*.tsx" -o -name "*.js" -o -name "*.jsx" \) | xargs grep -l "YYYY-MM-DD\|toLocaleDateString\|split.*-" | head -10
```

Length of output: 176

🏁 Script executed:

```shell
# Search for where dailyActivity is populated/assigned
rg -n "dailyActivity\s*[:\=]|dailyActivity\[" src/claude-history-dashboard/src/server --type ts -B 3 -A 3
```

Length of output: 187

🏁 Script executed:

```shell
# Look at the conversations.ts file to see the stats calculation
cat -n src/claude-history-dashboard/src/server/conversations.ts | head -200
```

Length of output: 8448

🏁 Script executed:

```shell
# Search for where getConversationStats is implemented
find src -name "*.ts" -o -name "*.tsx" | xargs grep -l "getConversationStats" | head -5
```

Length of output: 184

🏁 Script executed:

```shell
# Look for where dailyActivity is built in the lib
rg -n "dailyActivity" src/app/claude-history --type ts -B 5 -A 5
```

Length of output: 128

🏁 Script executed:

```shell
# Look at the claude-history lib file
cat -n src/claude-history/lib.ts
```

Length of output: 50378

🏁 Script executed:

```shell
# Search for where dailyActivity is built
rg -n "dailyActivity" src/claude-history --type ts -B 5 -A 5
```

Length of output: 8607

🏁 Script executed:

```shell
# Look at the exact line where dailyActivity keys are created
sed -n '959,962p' src/claude-history/lib.ts
```

Length of output: 361

🏁 Script executed:

```shell
# Check if there's any code that fills in zero-activity days
rg -n "zero|empty|missing.*day|fill.*day" src/claude-history --type ts -i
```

Length of output: 218

🏁 Script executed:

```shell
# Verify the exact format of date strings by checking what toISOString().split("T")[0] produces
# and confirm the behavior of new Date() with YYYY-MM-DD strings
cat > /tmp/test_dates.js << 'EOF'
// Test how dates are created and parsed
// Simulate backend behavior
const timestamp = "2024-12-20T14:30:00Z"; // ISO timestamp
const backendDateStr = new Date(timestamp).toISOString().split("T")[0];
console.log("Backend generates:", backendDateStr); // "2024-12-20"
// Simulate frontend parsing
const frontendParsed = new Date(backendDateStr);
console.log("Frontend new Date('2024-12-20'):", frontendParsed.toISOString());
// Simulate today
const today = new Date();
console.log("Today with time:", today);
console.log("Today hour:", today.getHours());
// The comparison issue
const testDate = new Date("2024-12-20");
const testToday = new Date("2024-12-20");
testToday.setHours(23, 59, 59);
console.log("Test date:", testDate);
console.log("Test today (same day, end):", testToday);
console.log("testDate <= testToday?", testDate <= testToday);
EOF
node /tmp/test_dates.js
```

Length of output: 327
Normalize week boundaries to day-start to avoid misaligned date comparisons.
The code compares today (which includes local time-of-day) against parsed date strings (YYYY-MM-DD) that lack time information. This can cause off-by-one edge cases depending on the current time and timezone. Additionally, dailyActivity contains only dates with messages—zero-activity days are absent—so averages divide by active days rather than calendar days. Normalize today to the start of day (00:00:00) and consider whether averages should represent per-active-day or per-calendar-day values.
🧭 Normalize day boundaries

```diff
 function calculateWeeklyStats(dailyActivity: Record<string, number>, dailyTokens?: Record<string, TokenUsage>) {
+  const startOfDay = (d: Date) => {
+    const copy = new Date(d)
+    copy.setHours(0, 0, 0, 0)
+    return copy
+  }
+
-  const today = new Date()
+  const today = startOfDay(new Date())
   const thisWeekStart = new Date(today)
   thisWeekStart.setDate(today.getDate() - 6)
   const lastWeekStart = new Date(today)
   lastWeekStart.setDate(today.getDate() - 13)
```

🤖 Prompt for AI Agents
In `@src/claude-history-dashboard/src/components/stats/WeeklyTrendsCard.tsx`
around lines 34–78: calculateWeeklyStats compares Date objects that carry time
components, causing timezone/off-by-one errors, and it averages over active
days instead of calendar days. Normalize all dates to day-start by calling
setHours(0, 0, 0, 0) on today, thisWeekStart, lastWeekStart, and on each parsed
date (from dailyActivity keys) before comparisons. If per-calendar-day averages
are wanted, change the average calculations (avgMessagesThisWeek,
avgMessagesLastWeek, and the token averages) to divide by 7 (calendar days per
week) rather than thisWeekDays/lastWeekDays. Keep dailyActivity/dailyTokens
lookups on the original date-string keys but use the normalized Date for range
checks.
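A minimal sketch of the normalization, assuming dailyActivity keys are "YYYY-MM-DD" strings (the helper names are illustrative, not from the codebase):

```typescript
function startOfDay(d: Date): Date {
  const copy = new Date(d);
  copy.setHours(0, 0, 0, 0); // strip local time-of-day
  return copy;
}

// Date-only membership test for "the last n calendar days, inclusive of today".
function isInLastNDays(dateStr: string, n: number, now: Date = new Date()): boolean {
  const today = startOfDay(now);
  const windowStart = new Date(today);
  windowStart.setDate(today.getDate() - (n - 1));
  const date = startOfDay(new Date(dateStr));
  return date >= windowStart && date <= today;
}
```

With both sides truncated to day-start, a message logged today always falls inside the 7-day window regardless of the current time of day.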
```tsx
{Array.from({ length: 6 }).map((_, i) => (
  <Skeleton
    key={i}
    className="flex-1 rounded-t"
    style={{ height: `${20 + Math.random() * 60}%`, animationDelay: `${i * 75}ms` }}
  />
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, check if the file exists and get its size
wc -l src/claude-history-dashboard/src/routes/stats.tsx
```

Length of output: 119

🏁 Script executed:

```shell
# Read the specific lines mentioned in the review
sed -n '695,710p' src/claude-history-dashboard/src/routes/stats.tsx
```

Length of output: 664

🏁 Script executed:

```shell
# Search for Math.random usage in the stats.tsx file
rg -n "Math\.random" src/claude-history-dashboard/src/routes/stats.tsx
```

Length of output: 164

🏁 Script executed:

```shell
# Check if this is a Next.js or other SSR framework
cat src/claude-history-dashboard/package.json | head -30
```

Length of output: 1064

🏁 Script executed:

```shell
# Look for the context of where this Skeleton component comes from
rg -n "import.*Skeleton" src/claude-history-dashboard/src/routes/stats.tsx | head -5
```

Length of output: 294

🏁 Script executed:

```shell
# Check if there's a layout or server configuration
ls -la src/claude-history-dashboard/src/ | grep -E "layout|app|server"
```

Length of output: 122

🏁 Script executed:

```shell
# Check if stats.tsx has a 'use client' directive
head -20 src/claude-history-dashboard/src/routes/stats.tsx
```

Length of output: 1513

🏁 Script executed:

```shell
# Check if there are other Math.random() uses in the skeleton files
rg -n "Math\.random" src/claude-history-dashboard/src/components/stats/
```

Length of output: 462

🏁 Script executed:

```shell
# Check the Skeleton component implementation
cat src/claude-history-dashboard/src/components/ui/skeleton.tsx
```

Length of output: 969
Avoid non-deterministic skeleton heights during render.
Math.random() causes hydration mismatches in this SSR-rendered app. Use deterministic values derived from the loop index i instead.
♻️ Suggested deterministic heights

```diff
-style={{ height: `${20 + Math.random() * 60}%`, animationDelay: `${i * 75}ms` }}
+style={{ height: `${20 + (i * 11 % 60)}%`, animationDelay: `${i * 75}ms` }}
```

Note: the same pattern appears in ProjectListSkeleton.tsx and ChartSkeleton.tsx.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```tsx
{Array.from({ length: 6 }).map((_, i) => (
  <Skeleton
    key={i}
    className="flex-1 rounded-t"
    style={{ height: `${20 + (i * 11 % 60)}%`, animationDelay: `${i * 75}ms` }}
  />
```
🤖 Prompt for AI Agents

In `@src/claude-history-dashboard/src/routes/stats.tsx` around lines 699–704:
the Skeleton elements use Math.random() to compute heights (the map in
stats.tsx producing style height: `${20 + Math.random() * 60}%`), which breaks
SSR hydration. Replace the non-deterministic random with a deterministic
calculation derived from the loop index `i` (or a fixed array of heights) so
the height of each Skeleton is stable across server and client renders. Update
the same pattern in ProjectListSkeleton.tsx and ChartSkeleton.tsx, and keep the
Skeleton instances keyed the same way (key={i}) while using the deterministic
height formula based on `i`.
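A sketch of the fixed-array alternative mentioned above (names and values are illustrative; any stable values satisfy the hydration requirement):

```typescript
// Fixed heights for the six skeleton bars; indexing wraps for longer lists.
const SKELETON_HEIGHTS = [35, 60, 45, 70, 25, 55] as const;

function skeletonHeightPercent(i: number): string {
  return `${SKELETON_HEIGHTS[i % SKELETON_HEIGHTS.length]}%`;
}
```

A lookup table makes the rendered shape explicit and easy to tune by eye, at the cost of a few more bytes than a formula.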
```ts
// Check if file is already indexed and unchanged
const existing = getFileIndex(filePath);
if (existing && existing.mtime === mtime) {
    return null; // File unchanged, skip
}

// Compute stats for this file
const fileStats = await computeFileStats(filePath);

// Update file index
upsertFileIndex({
    filePath,
    mtime,
    messageCount: fileStats.messages,
    firstDate: fileStats.firstDate,
    lastDate: fileStats.lastDate,
    project,
    isSubagent,
    lastIndexed: new Date().toISOString(),
});

// Update daily stats for each date in this file
for (const [dateStr, messageCount] of Object.entries(fileStats.dailyActivity)) {
    const existingDaily = getDailyStats(dateStr);
    const toolCountsForDate: Record<string, number> = {};

    // Distribute tool counts proportionally (simplified: assign to first date)
    if (dateStr === fileStats.firstDate) {
        Object.assign(toolCountsForDate, fileStats.toolCounts);
    }

    const hourlyForDate: Record<string, number> = {};
    if (dateStr === fileStats.firstDate) {
        Object.assign(hourlyForDate, fileStats.hourlyActivity);
    }

    // Token, model, and branch data - assign to first date of file
    const tokenForDate: TokenUsage = dateStr === fileStats.firstDate
        ? fileStats.tokenUsage
        : { inputTokens: 0, outputTokens: 0, cacheCreateTokens: 0, cacheReadTokens: 0 };

    const modelCountsForDate: Record<string, number> = dateStr === fileStats.firstDate
        ? fileStats.modelCounts
        : {};

    const branchCountsForDate: Record<string, number> = dateStr === fileStats.firstDate
        ? fileStats.branchCounts
        : {};

    const newDaily: DailyStats = {
        date: dateStr,
        project: "__all__",
        conversations: (existingDaily?.conversations || 0) + (dateStr === fileStats.firstDate ? 1 : 0),
        messages: (existingDaily?.messages || 0) + messageCount,
        subagentSessions: (existingDaily?.subagentSessions || 0) + (dateStr === fileStats.firstDate && isSubagent ? 1 : 0),
        toolCounts: mergeCounts(existingDaily?.toolCounts || {}, toolCountsForDate),
        hourlyActivity: mergeCounts(existingDaily?.hourlyActivity || {}, hourlyForDate),
        tokenUsage: mergeTokenUsage(existingDaily?.tokenUsage, tokenForDate),
        modelCounts: mergeCounts(existingDaily?.modelCounts || {}, modelCountsForDate),
        branchCounts: mergeCounts(existingDaily?.branchCounts || {}, branchCountsForDate),
    };

    upsertDailyStats(newDaily);
}
```
Avoid double-counting when a previously indexed file changes.

When a file's mtime changes, you recompute stats and add them to existing daily totals without removing the file's prior contribution. This will inflate totals whenever a file is edited/appended. Consider either:

- Storing per-file daily stats (or deltas) so you can subtract old values before adding new ones, or
- Falling back to a targeted rebuild (or full cache refresh) when `existing && existing.mtime !== mtime`.
🤖 Prompt for AI Agents

In `@src/claude-history/lib.ts` around lines 1218–1281: the existing code
re-adds recomputed file stats on mtime change without removing the file's
previous contribution, causing double-counting. Update the logic in the block
that handles changed files (using getFileIndex, upsertFileIndex,
upsertDailyStats, mergeCounts, mergeTokenUsage) to first retrieve the prior
per-file stats from the stored file index (extend the stored FileIndex entry to
include per-file dailyActivity, toolCounts, hourlyActivity, tokenUsage,
modelCounts, and branchCounts when first indexing), subtract those prior
per-file metrics from each affected DailyStats (by calling upsertDailyStats
with negative deltas, or by fetching existingDaily and decrementing its
fields), and only then add the newly computed fileStats. Alternatively, if you
prefer simpler semantics, detect the mtime mismatch in getFileIndex and trigger
a targeted rebuild of the affected days (or a full refresh) instead of
incrementally merging, so totals are not inflated.
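The subtract-then-add approach can be sketched like this, under the assumption that the file index is extended to store each file's per-day contribution (`applyFileDelta`, `prior`, and `next` are hypothetical names, shown here for message counts only):

```typescript
// Remove a file's previous per-day contribution, then add its new one,
// so re-indexing an edited/appended file never double-counts.
function applyFileDelta(
  dailyTotals: Record<string, number>,
  prior: Record<string, number> | undefined, // stored at last index time
  next: Record<string, number>,              // freshly recomputed
): Record<string, number> {
  const out = { ...dailyTotals };
  // Subtract the old contribution first...
  for (const [date, count] of Object.entries(prior ?? {})) {
    out[date] = (out[date] ?? 0) - count;
    if (out[date] <= 0) delete out[date]; // drop days that go to zero
  }
  // ...then add the new stats.
  for (const [date, count] of Object.entries(next)) {
    out[date] = (out[date] ?? 0) + count;
  }
  return out;
}
```

The same subtract/add pairing would apply to toolCounts, tokenUsage, and the other merged maps.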
```ts
// Get project counts from file index
const db = getDatabase();
const projectRows = db
    .query(
        `
        SELECT project, COUNT(*) as count
        FROM file_index
        WHERE project IS NOT NULL
        GROUP BY project
        ORDER BY count DESC
        `,
    )
    .all() as Array<{ project: string; count: number }>;

const projectCounts: Record<string, number> = {};
for (const row of projectRows) {
    projectCounts[row.project] = row.count;
}

// Update totals cache
updateCachedTotals({
    totalConversations: aggregated.totalConversations,
    totalMessages: aggregated.totalMessages,
    totalSubagents: aggregated.subagentCount,
    projectCount: Object.keys(projectCounts).length,
});

// Get conversation lengths for histogram
const conversationLengths = await getConversationLengths();

return {
    totalConversations: aggregated.totalConversations,
    totalMessages: aggregated.totalMessages,
    projectCounts,
    toolCounts: aggregated.toolCounts,
    dailyActivity: aggregated.dailyActivity,
    hourlyActivity: aggregated.hourlyActivity,
    subagentCount: aggregated.subagentCount,
    tokenUsage: aggregated.tokenUsage,
    dailyTokens: aggregated.dailyTokens,
    modelCounts: aggregated.modelCounts,
    branchCounts: aggregated.branchCounts,
    conversationLengths,
};
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# First, explore the file structure to understand the codebase
git ls-files src | head -20
```

Length of output: 704

🏁 Script executed:

```shell
#!/bin/bash
# Run the suggested shell command and expand the search
rg -n -C3 '\bgetStatsInRange\b|\bgetFullStats\b|\bprojectCounts\b|\bconversationLengths\b' src
```

Length of output: 19466

🏁 Script executed:

```shell
#!/bin/bash
# Get line count for the file to understand its size
wc -l src/claude-history/lib.ts
```

Length of output: 96

🏁 Script executed:

```shell
#!/bin/bash
# Read the specific sections mentioned (1339-1382 and 1426-1461)
sed -n '1339,1382p' src/claude-history/lib.ts
```

Length of output: 1321

🏁 Script executed:

```shell
#!/bin/bash
# Also read the second section
sed -n '1426,1461p' src/claude-history/lib.ts
```

Length of output: 1101

🏁 Script executed:

```shell
#!/bin/bash
# Read context around line 1339 to find the function name
sed -n '1320,1340p' src/claude-history/lib.ts
```

Length of output: 667

🏁 Script executed:

```shell
#!/bin/bash
# Read context around line 1426 to find the function name
sed -n '1410,1430p' src/claude-history/lib.ts
```

Length of output: 632

🏁 Script executed:

```shell
#!/bin/bash
# Search for getConversationLengths function definition
rg -n 'async function getConversationLengths|const getConversationLengths' src/claude-history/lib.ts
```

Length of output: 131

🏁 Script executed:

```shell
#!/bin/bash
# Read the getConversationLengths function to see if it takes date parameters
rg -n -A10 'async function getConversationLengths|const getConversationLengths' src/claude-history/lib.ts
```

Length of output: 548

🏁 Script executed:

```shell
#!/bin/bash
# Find the function name for the code at line 1339 by reading backwards
sed -n '1300,1345p' src/claude-history/lib.ts | grep -n 'export\|function\|async'
```

Length of output: 48

🏁 Script executed:

```shell
#!/bin/bash
# Get full context of the function containing line 1339
sed -n '1300,1385p' src/claude-history/lib.ts
```

Length of output: 2456

🏁 Script executed:

```shell
#!/bin/bash
# Check if there's a version of these functions that takes dateRange params
rg -n 'getConversationLengths.*dateRange\|getConversationLengths.*DateRange' src/claude-history/lib.ts
```

Length of output: 48

🏁 Script executed:

```shell
#!/bin/bash
# Look at the dashboard to see how it handles filtering when dateRange is applied
sed -n '56,75p' src/claude-history-dashboard/src/routes/stats.tsx
```

Length of output: 625

🏁 Script executed:

```shell
#!/bin/bash
# Find the function signature for the code at line 1339
sed -n '1280,1305p' src/claude-history/lib.ts
```

Length of output: 876

🏁 Script executed:

```shell
#!/bin/bash
# Check the file_index schema to see what date columns exist
rg -n 'first_date|last_date|date' src/claude-history/lib.ts | head -30
```

Length of output: 1989

🏁 Script executed:

```shell
#!/bin/bash
# Check the file_index schema to see what columns are available for date filtering
rg -n 'CREATE TABLE.*file_index|first_date|last_date' src/claude-history/lib.ts
```

Length of output: 48

🏁 Script executed:

```shell
#!/bin/bash
# Look at the getDailyStatsInRange function to see how it filters by date
rg -n -A15 'function getDailyStatsInRange' src/claude-history/lib.ts
```

Length of output: 48

🏁 Script executed:

```shell
#!/bin/bash
# Check how aggregateDailyStats works to understand what dateRange filtering does
rg -n -A20 'function aggregateDailyStats' src/claude-history/lib.ts
```

Length of output: 48

🏁 Script executed:

```shell
#!/bin/bash
# Search for getDailyStatsInRange with different patterns
rg -n 'getDailyStatsInRange' src/claude-history/lib.ts
```

Length of output: 209

🏁 Script executed:

```shell
#!/bin/bash
# Search for aggregateDailyStats
rg -n 'aggregateDailyStats' src/claude-history/lib.ts
```

Length of output: 207

🏁 Script executed:

```shell
#!/bin/bash
# Let's look at the cache.ts file which seems to have database functions
rg -n 'getDailyStats|aggregateDailyStats' src/claude-history/cache.ts | head -20
```

Length of output: 323

🏁 Script executed:

```shell
#!/bin/bash
# Look at the entire cache.ts to understand the structure
wc -l src/claude-history/cache.ts
```

Length of output: 97

🏁 Script executed:

```shell
#!/bin/bash
# Read the getDailyStatsInRange function
sed -n '344,380p' src/claude-history/cache.ts
```

Length of output: 967

🏁 Script executed:

```shell
#!/bin/bash
# Read the aggregateDailyStats function
sed -n '436,480p' src/claude-history/cache.ts
```

Length of output: 1573

🏁 Script executed:

```shell
#!/bin/bash
# Find the function name/signature at line 1426 in lib.ts
sed -n '1400,1430p' src/claude-history/lib.ts | head -40
```

Length of output: 875

🏁 Script executed:

```shell
#!/bin/bash
# Check what columns exist in file_index table
rg -n 'file_index' src/claude-history/cache.ts | grep -i 'create\|alter\|column'
```

Length of output: 272

🏁 Script executed:

```shell
#!/bin/bash
# Search for file_index schema definition
rg -n -B5 -A10 'CREATE TABLE.*file_index' src/claude-history
```

Length of output: 986

🏁 Script executed:

```shell
#!/bin/bash
# Look for database initialization or schema
fd -e ts -e sql | xargs rg -l 'file_index' | head -5
```

Length of output: 119

🏁 Script executed:

```shell
#!/bin/bash
# Check what columns are being inserted into file_index
rg -n 'INSERT INTO file_index' src/claude-history/lib.ts
```

Length of output: 48

🏁 Script executed:

```shell
#!/bin/bash
# Read the full schema of file_index to understand what's available
sed -n '73,82p' src/claude-history/cache.ts
```

Length of output: 372

🏁 Script executed:

```shell
#!/bin/bash
# Now check the second function getStatsForDateRange to understand its full context
sed -n '1411,1462p' src/claude-history/lib.ts
```

Length of output: 1537

🏁 Script executed:

```shell
#!/bin/bash
# Check if there are any other calls to projectCounts or conversationLengths in getStatsForDateRange
sed -n '1411,1475p' src/claude-history/lib.ts
```

Length of output: 1829
Both getConversationStatsWithCache and getStatsForDateRange fail to scope projectCounts and conversationLengths to the provided date range.
While getDailyStatsInRange(dateRange) filters daily stats correctly, the project counts are queried from the entire file_index without date filtering, and getConversationLengths() has no date parameter. The file_index table contains first_date and last_date columns that should be used to filter projects and conversation lengths to only those with activity overlapping the requested range. Without this, the UI will display misleading stats when a date filter is applied—showing projects and conversation distribution from all time rather than the selected period.
🤖 Prompt for AI Agents

In `@src/claude-history/lib.ts` around lines 1339–1382: the projectCounts query
and the conversationLengths calculation are not scoped to the requested
dateRange. Update getConversationStatsWithCache and getStatsForDateRange to
pass the dateRange (start, end) into this logic: modify the file_index query
used to build projectCounts to include an overlap filter (first_date <= end
AND last_date >= start), and change getConversationLengths to accept a
dateRange parameter and compute lengths only for conversations whose activity
overlaps that range (updating callers accordingly). Ensure updateCachedTotals
and the returned projectCounts/conversationLengths reflect the filtered
results.
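The overlap condition can be illustrated in isolation (`overlapsRange` is a hypothetical helper; "YYYY-MM-DD" strings compare correctly with plain lexicographic comparison, so no Date parsing is needed):

```typescript
// A file's [firstDate, lastDate] span overlaps [start, end] exactly when
// it starts no later than the range ends and ends no earlier than it starts.
function overlapsRange(
  firstDate: string,
  lastDate: string,
  start: string,
  end: string,
): boolean {
  return firstDate <= end && lastDate >= start;
}

// The corresponding SQL predicate for the file_index query would be roughly:
//   WHERE project IS NOT NULL AND first_date <= ? AND last_date >= ?
```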
```diff
 const watchedDirs: string[] = await new Promise((resolve) => {
   (client as any).command(["watch-list"], (err: unknown, resp: any) => {
-    if (err || !resp || !resp.roots) return resolve([]);
+    if (err || !resp || !resp.roots) {
+      client.end(); // Close client on error
+      return resolve([]);
+    }
     resolve(resp.roots);
   });
 });
```
🧩 Analysis chain
🏁 Script executed:

```shell
cat src/watchman/index.ts
```

Length of output: 7848
Avoid resource leak: close the module-level client after use in getDirOfInterest().
The module-level client is only used for the watch-list query within getDirOfInterest(). On error paths (lines 70, 95, 103), the client is correctly closed. However, on success paths—when a directory is selected and returned (lines 56, 73, 105)—the module-level client is never closed, remaining open while watchWithRetry() creates its own separate client instance. This creates a resource leak.
Either close the module-level client at the end of getDirOfInterest() (after the function completes), or refactor to use a local client created within getDirOfInterest() instead of the module-level one.
🤖 Prompt for AI Agents

In `@src/watchman/index.ts` around lines 65–73: the module-level client used in
getDirOfInterest is left open on successful returns, causing a resource leak.
Either instantiate a local Watchman client inside getDirOfInterest and replace
references to the module-level client, or call client.end() on every control
path before returning (including the success path, after resolve/return and
before handing off to watchWithRetry). Update getDirOfInterest to ensure the
Watchman client is closed (client.end()) after the watch-list response is
processed and before any return, or refactor to create and close a local
client variable within getDirOfInterest instead of using the module-level
client.
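One way to guarantee the close on every path is a try/finally around the response handling. This is a sketch against a watchman-like interface (the minimal type and `getWatchedRoots` name are illustrative, not the repo's actual code):

```typescript
type WatchListResp = { roots?: string[] };

interface MinimalWatchmanClient {
  command(args: string[], cb: (err: unknown, resp: WatchListResp) => void): void;
  end(): void;
}

// Runs "watch-list" and closes the client whether the query succeeds or fails.
function getWatchedRoots(client: MinimalWatchmanClient): Promise<string[]> {
  return new Promise((resolve) => {
    client.command(["watch-list"], (err, resp) => {
      try {
        resolve(!err && resp?.roots ? resp.roots : []);
      } finally {
        client.end(); // executes on success and error paths alike
      }
    });
  });
}
```

A locally created client scoped to the function would achieve the same guarantee with less ceremony.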
```diff
 | `Ctrl+b d` | **Detach** from session |
 | `Ctrl+b $` | Rename session |
-| `Ctrl+b s` | **Session picker** (fuzzy search) |
+| `Ctrl+b s` | **Session tree** (native list) |
```
Keep the workflow wording consistent with “session tree.”
The Sessions table now says “Session tree,” but Line 121 in “Jump Between Projects” still says “session picker.” Please align the later example to avoid mixed terminology.
💡 Suggested wording alignment

```diff
-Ctrl+b s   # session picker
+Ctrl+b s   # session tree
```

🤖 Prompt for AI Agents
In `@TMux.md` at line 18: update the wording in the "Jump Between Projects"
section to use the term "session tree" instead of "session picker". Locate the
paragraph or example that currently references "session picker" (within the
"Jump Between Projects" heading) and replace that phrase with "session tree" so
the terminology matches the Sessions table and other examples.
- Fix critical stale cleanup scope bug (Thread #4/#11): project-scoped listing no longer deletes other projects' cache entries
- Preserve matchSnippet when merging search results (Thread #19)
- Fix rg-only results getting 0 slots when meta returns 20 (Thread #13)
- Use rmSync with force flag instead of repetitive try/catch (Thread #1)
- Add try/catch fallback for METADATA_VERSION (Thread #3/#6)
- Add -- and -F flags to ripgrep commands (Thread #5/#17)
- Reuse extractTextFromMessage in rgExtractSnippet (Thread #2)
- Add 10MB file size cap + early exit in extraction (Thread #7/#18)
- Add error handling for ripgrep functions (Thread #10/#16)
#25)

* fix(claude-history): full metadata extraction, rg search, auto-reindex

  - Read entire JSONL files for metadata (was limited to 50 lines/64KB)
  - Store full firstPrompt (was truncated to 120 chars)
  - Index ALL user messages into allUserText field (capped 5000 chars)
  - Summary/custom-title: latest wins (captures tail of file)
  - Add ripgrep full-content search fallback (rgSearchFiles/rgExtractSnippet)
  - Auto-reindex via MD5 hash of lib.ts+cache.ts (METADATA_VERSION)
  - Clean up stale cache entries for deleted session files
  - Rename DB from stats-cache.db to index.db
  - Show match snippets in claude-resume session picker
  - Search firstPrompt in matchByIdOrName

* feat(claude-resume): show conversation excerpts in session picker

  Adds a second line below each session name showing additional context: summary (if the name is from a title), first prompt (if the name is from a summary), or match snippet (if from search).

* fix(claude-resume): run metadata + rg search in parallel, merge results

  Previously rg only ran as a fallback when metadata returned 0 results. Now both phases run concurrently via Promise.all. Results are deduped and merged: metadata matches first (ranked by relevance), then rg-only matches with snippets. Catches content in assistant messages and user text past the 5000-char metadata cap.
* fix(claude-history): resolve dashed project names, add nerdy stats

  - extractProjectName: progressively resolve encoded dir against filesystem to recover dashed names (col-fe was showing as "fe")
  - Cache resolved project names to avoid repeated filesystem lookups
  - Show detailed stats in session picker: session count, project count, scope, indexed/stale/reindex status

* feat(claude-resume): nerdy search diagnostics

  Show a detailed breakdown of where results came from:

  - Index matches: count + what was searched (name/branch/project/prompt)
  - Content search: meta hits, rg total hits, overlap, rg-only unique hits

* fix(claude-history): address PR #25 review comments

  - Fix critical stale cleanup scope bug (Thread #4/#11): project-scoped listing no longer deletes other projects' cache entries
  - Preserve matchSnippet when merging search results (Thread #19)
  - Fix rg-only results getting 0 slots when meta returns 20 (Thread #13)
  - Use rmSync with force flag instead of repetitive try/catch (Thread #1)
  - Add try/catch fallback for METADATA_VERSION (Thread #3/#6)
  - Add -- and -F flags to ripgrep commands (Thread #5/#17)
  - Reuse extractTextFromMessage in rgExtractSnippet (Thread #2)
  - Add 10MB file size cap + early exit in extraction (Thread #7/#18)
  - Add error handling for ripgrep functions (Thread #10/#16)

* fix(claude-history): add cwd to early-exit check, use null for empty allUserText

  - Include cwd in early-exit condition to avoid skipping cwd extraction
  - Return null instead of "" for allUserText when no user text found
  - Anonymize path examples in comments
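The progressive dash resolution described above could look roughly like this. A speculative sketch: `resolveProjectName` and the injected `exists` callback are illustrative, not the repo's actual signature, and the greedy fallback when no prefix exists is a simplification.

```typescript
// Claude encodes project paths by replacing "/" with "-", which is ambiguous
// for directory names that themselves contain dashes ("col-fe" vs "col/fe").
// Greedily extending each segment until a directory actually exists on disk
// recovers the dashed name.
function resolveProjectName(
  encoded: string,
  exists: (path: string) => boolean, // injected for testability
): string {
  const parts = encoded.split("-").filter(Boolean);
  let path = "";
  let i = 0;
  while (i < parts.length) {
    let segment = parts[i];
    let j = i + 1;
    // Keep absorbing dashed parts until the candidate directory exists
    // (or we run out of parts, in which case we take the rest as-is).
    while (!exists(`${path}/${segment}`) && j < parts.length) {
      segment += `-${parts[j]}`;
      j++;
    }
    path += `/${segment}`;
    i = j;
  }
  // The final path segment is the project name, dashes preserved.
  return path.split("/").pop() ?? encoded;
}
```

Caching the result per encoded directory, as the commit describes, avoids repeating these filesystem probes.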
* fix(claude-usage): atomic token persist on refresh with file lock

Refresh tokens are single-use: the old one is invalidated when a new one is issued. Previously, tokens were persisted to disk only after all accounts finished fetching, creating a crash window in which new tokens could be lost while old (invalid) ones remained on disk permanently. Now ensureValidToken() acquires a file lock, re-reads the config from disk (to pick up cross-process refreshes), persists immediately after the refresh, then releases the lock.

* fix(claude-usage): recover from invalid_grant via keychain fallback

When the refresh token is permanently invalid, try reading Claude Code's keychain credentials as a fallback. If the keychain token is valid, persist it and continue; otherwise show a clear re-login message.

* fix(claude-usage): add signal handling and reload config on each poll

Add SIGINT/SIGTERM handlers in watch mode for a clean exit. Remove stale account caching in the TUI poller so it reloads the config from disk every cycle, picking up tokens refreshed by the daemon or other processes.

* fix(claude-usage): remove keychain recovery (steals Claude Code's token)

Keychain recovery blindly copies Claude Code's token into any account that fails to refresh, regardless of which account it belongs to. This causes wrong usage data, single-use refresh tokens shared between accounts, and Claude Code's own token getting invalidated on its next refresh. Just show a clear re-login message instead.

* fix(claude-usage): remove keychain import option entirely

The "Add from Keychain" flow forks Claude Code's refresh token, which always invalidates Claude Code's session as a side effect. Use `tools claude login` (OAuth) instead: it creates fully independent tokens without affecting Claude Code.

* fix(claude-usage): remove all keychain fallbacks, use config lock path

Remove getKeychainCredentials usage from the poller, poll daemon, legacy usage command, and usage index.
When no accounts are configured, show a clear "run tools claude login" message instead of silently reading the keychain. Move CONFIG_LOCK_PATH to the config module (derived from Storage) instead of hardcoding the path in api.ts.

* fix(claude-usage): address PR review feedback

- Fix accountFilter fallback: return an empty set instead of all accounts when the filtered account is not found (coderabbit #3)
- Fix lock timeout: bump to 60s and correct a misleading comment; the timeout applies to lock acquisition, not the refresh call (copilot #7)

* fix(file-lock): use atomic O_CREAT|O_EXCL to prevent TOCTOU race

Replace existsSync+write with writeFile(flag: 'wx') in tryAcquireLock so two processes cannot simultaneously acquire the same lock. Also add a Storage.withConfigLock() convenience method and expose it from the config module, removing the need for callers to import withFileLock or construct lock paths manually.

* refactor(storage): declarative withFileLock options object

Storage.withFileLock now takes { file, fn, timeout?, onTimeout? } instead of positional args. withConfigLock remains a one-line convenience wrapper, and atomicUpdate is migrated to the new API. The underlying acquireFileLock from file-lock.ts is an internal detail that callers no longer touch directly.
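The crash-safe persist flow (lock → re-read config → refresh → persist immediately → release) can be sketched with in-memory stand-ins. Everything here is hypothetical: the config store, lock, and token endpoint are toy replacements for the real Storage / withConfigLock / API code, kept self-contained so the ordering is visible:

```typescript
type Account = { accessToken: string; refreshToken: string; expiresAt: number };

const disk = { accounts: {} as Record<string, Account> }; // stand-in for the config file
let locked = false;
let issued = 0; // counts token-endpoint calls

async function withConfigLock<T>(fn: () => Promise<T>): Promise<T> {
  if (locked) throw new Error("lock busy"); // the real lock retries with a timeout
  locked = true;
  try {
    return await fn();
  } finally {
    locked = false;
  }
}

// Stand-in for the OAuth token endpoint. Refresh tokens are single-use:
// each call invalidates the token that was passed in.
async function refreshAccessToken(_refreshToken: string): Promise<Account> {
  issued += 1;
  return {
    accessToken: `access-${issued}`,
    refreshToken: `refresh-${issued}`,
    expiresAt: Date.now() + 3_600_000,
  };
}

async function ensureValidToken(id: string): Promise<string> {
  return withConfigLock(async () => {
    // Re-read from "disk" inside the lock to pick up a refresh done by
    // another process while we were waiting.
    const account = disk.accounts[id];
    if (account.expiresAt > Date.now()) return account.accessToken;
    const fresh = await refreshAccessToken(account.refreshToken);
    // Persist immediately, before any other account is touched, so a
    // later crash cannot strand the new single-use refresh token.
    disk.accounts[id] = fresh;
    return fresh.accessToken;
  });
}
```

The key design point is that the write happens inside the lock and directly after the refresh, shrinking the crash window to (nearly) zero.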
* fix(claude-usage): address remaining review feedback

- api.ts: check disk token freshness before early-returning on a missing refreshToken, so a newer accessToken from another process is always used
- usage/index.tsx: error on an unknown --account arg instead of silently falling back to all accounts
- poll-daemon.ts: let → const (biome useConst)

* fix(file-lock): harden stale-lock reclaim and propagate real I/O errors

- Re-read the lock file before unlink to confirm it still holds the stale PID; abort the reclaim if another process has since rewritten it, preventing deletion of a freshly acquired lock
- Throw non-EEXIST errors (EACCES, ENOSPC) immediately instead of returning false, so callers fail fast rather than burning through the full timeout

* fix(file-lock): verify PID ownership on release, typed LockTimeoutError

- releaseLock now reads the lock file and only deletes it if our PID is still in it, preventing accidental deletion of another process's lock
- Replace the raw Error with a LockTimeoutError class so Storage.withFileLock's onTimeout handler can use instanceof instead of string matching
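A minimal sketch of the hardened primitives, assuming the names and signatures (they are illustrative, not the actual file-lock.ts API):

```typescript
import { readFileSync, unlinkSync, writeFileSync } from "node:fs";

// Thrown by the acquisition loop (not shown here) when the timeout elapses,
// so callers can check `err instanceof LockTimeoutError`.
class LockTimeoutError extends Error {}

function tryAcquireLock(lockPath: string): boolean {
  try {
    // flag "wx" = O_CREAT|O_EXCL: the write fails if the file already
    // exists, so two processes can never both believe they hold the lock.
    writeFileSync(lockPath, String(process.pid), { flag: "wx" });
    return true;
  } catch (err) {
    if ((err as NodeJS.ErrnoException).code === "EEXIST") return false;
    // Real I/O errors (EACCES, ENOSPC) propagate immediately instead of
    // making the caller burn through the full acquisition timeout.
    throw err;
  }
}

function releaseLock(lockPath: string): void {
  try {
    // Only delete if our PID is still the owner, so a slow process never
    // removes a lock another process has since legitimately acquired.
    if (readFileSync(lockPath, "utf8") === String(process.pid)) {
      unlinkSync(lockPath);
    }
  } catch {
    // Lock file already gone: nothing to release.
  }
}
```

The same read-before-delete pattern applies to stale-lock reclaim: re-read the file, confirm it still holds the stale PID, and abort if another process has rewritten it.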