🤖 Optimize startup performance and bundle size #247
Conversation
Force-pushed from bd18b4a to b95402e
💡 Codex Review
Here are some automated review suggestions for this pull request.
Force-pushed from ef0c006 to 2a907ec
ammar-agent left a comment:
This comment referred to StreamingTokenTracker.ts, which was removed from this PR during cleanup. The PR now contains only the Mermaid lazy-loading and tokenizer changes.
Force-pushed from 2a8acfc to 33b5b69
Force-pushed from eb2f610 to 20634dc
✅ Startup Freeze Fixed

Root Cause: AIService had a static import of the tokenizer that loaded 30MB+ of modules at app startup, blocking for 3-4 seconds.

```ts
// Line 31 - BLOCKING STARTUP
import { getTokenizerForModel } from "@/utils/main/tokenizer";
```

Import chain: the tokenizer was used to count system message tokens for a "System" consumer in the token breakdown UI.

Solution: Remove systemMessageTokens Entirely

Removed the calculation because:
Changes (commit 77fe4bd)
Impact:
Net LoC: -15. This completes the token calculation refactor - the frontend now owns all breakdown logic via

Full PR Summary

Three performance fixes in this PR:
App now:
🎉 Startup Freeze Fixed!

Root Cause Analysis

The 8-10 second freeze was NOT the tokenizer. Investigation revealed:

Timing breakdown:

The culprit:

Fixes Applied

1. Make Git Status Non-Blocking (Commit daa485c)

```diff
- // Run immediately
- void this.updateGitStatus();
+ // Run first update immediately but asynchronously (don't block UI)
+ // setTimeout ensures this runs on next tick, allowing React to finish rendering
+ setTimeout(() => void this.updateGitStatus(), 0);
```

Impact:
2. Remove Noisy Debug Logs (Commit 51e5c85)

Removed console spam from git status failures:
Git operations fail regularly (large repos, network issues, timeouts). These are expected and retry automatically - no need to spam the console.

Summary of All PR Changes

This PR now includes 3 performance optimizations:
Total impact:
Test it:
Force-pushed from 928f3f2 to a84d060
Force-pushed from 4b5fbd5 to 7960652
Reduces startup time from 8.6s to <500ms and eliminates the 3s "evil spinner" that blocked the UI after the window appeared.

Root Cause Analysis:
- AIService was importing the massive "ai" package (~3s) during startup
- Git status IPC handler triggered AIService lazy-load immediately after mount
- Combined effect: window appeared but then froze for 3s before becoming interactive

Key Performance Fixes:
1. Lazy-load AIService in IpcMain - defer AI SDK import until first actual use
2. Fix git status IPC to use Config.findWorkspace() instead of AIService
3. Defer git status polling to next tick (non-blocking on mount)
4. Defer auto-resume checks to next tick (non-blocking on mount)
5. Make React DevTools installation non-blocking in main process

Token Stats Refactoring:
- Remove ~425 LoC of approximation infrastructure (tokenStats.worker, etc.)
- Move token calculation to backend via IPC (TOKENS_COUNT_BULK handler)
- Add tokenizerWorkerPool for non-blocking tokenization in main process
- Eliminate ChatProvider context - move stats to TokenConsumerBreakdown component
- Add per-model cache keys to prevent cross-model contamination
- Remove unused stats.calculate IPC endpoint (-170 LoC dead code)

Bundle Optimizations:
- Lazy load Mermaid component to defer 631KB chunk
- Disable production source maps (saves ~50MB in .app)
- Add manual chunks for better caching (react-vendor, syntax-highlighter)
- Remove worker bundle checks (worker removed in refactoring)

Import Linting Enhancements:
- Enhanced check_eager_imports.sh to detect AI SDK in critical startup files
- Added renderer/worker checks to prevent heavy packages (ai-tokenizer, models.json)
- CI guards for bundle sizes (400KB main budget)

Performance Results:
- Startup: 8.6s → <500ms (94% improvement)
- Window: appears instantly, no freeze
- AI SDK: loads on-demand when first message sent
- Git status: non-blocking background operation

Testing:
- All 546 tests pass (removed 1 test file for dead code)
- Integration tests for tokens.countBulk IPC handler
- Tokenizer cache isolation tests
- StreamingTokenTracker model-change safety tests

_Generated with `cmux`_
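The bundle-optimization bullets above are config-level changes. As a rough illustration, a Vite-style config along these lines would cover the source-map and manual-chunk items; the `react-syntax-highlighter` package name and all surrounding structure are assumptions, not the repo's actual file:

```ts
// vite.config.ts - hedged sketch, not the repo's real configuration
import { defineConfig } from "vite";

export default defineConfig({
  build: {
    sourcemap: false, // disable production source maps (saves ~50MB in the .app)
    rollupOptions: {
      output: {
        // Split stable vendor code into its own chunks for better caching
        manualChunks: {
          "react-vendor": ["react", "react-dom"],
          "syntax-highlighter": ["react-syntax-highlighter"], // assumed package name
        },
      },
    },
  },
});
```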
The error message changed when we stopped using AIService.getWorkspaceMetadata() and started using Config.findWorkspace() directly (commit cdd3302).

Old: 'Failed to get workspace metadata'
New: 'Workspace not found: nonexistent-workspace'
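A minimal sketch of that swap; the `Config` and `Workspace` types and the `findWorkspace` signature are assumptions inferred from the commit message, and only the method names and the new error string come from this PR:

```ts
// Hypothetical shapes; only findWorkspace() and the error text are from the PR.
type Workspace = { id: string; path: string };
interface Config {
  findWorkspace(workspaceId: string): Workspace | undefined;
}

function resolveWorkspace(config: Config, workspaceId: string): Workspace {
  // Resolving via Config avoids lazy-loading AIService (and the heavy AI SDK).
  const workspace = config.findWorkspace(workspaceId);
  if (!workspace) {
    // New error message; the old path threw "Failed to get workspace metadata".
    throw new Error(`Workspace not found: ${workspaceId}`);
  }
  return workspace;
}
```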
The lazy-load getter was using require('@/services/aiService') which works
during development but fails in production because Node.js doesn't resolve
TypeScript path aliases at runtime.
Changed to require('./aiService') (relative path) which works both in
development and in the compiled dist/main.js.
This was causing E2E tests to fail - streams never completed because
AIService was never successfully instantiated in the built Electron app.
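A minimal sketch of the fixed getter, assuming CommonJS output under dist/services/; the class shape and constructor call are illustrative, and only the relative `require("./aiService")` reflects the actual fix:

```ts
import type { AIService } from "./aiService"; // type-only import: erased at compile time

class IpcMainHandlers {
  private aiServiceInstance: AIService | null = null;

  private getAIService(): AIService {
    if (!this.aiServiceInstance) {
      // A relative require resolves from the compiled dist/services/ directory.
      // require("@/services/aiService") worked in dev but failed in production,
      // because Node.js knows nothing about TypeScript path aliases at runtime.
      const { AIService } = require("./aiService") as typeof import("./aiService");
      this.aiServiceInstance = new AIService();
    }
    return this.aiServiceInstance;
  }
}
```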
Force-pushed from 318ec10 to 8f5daa2
Force-pushed from f609a86 to c62b5e9
The tokenizerWorkerPool was trying to load from dist/workers/tokenizerWorker.js but workers weren't being compiled because tsconfig.main.json didn't include them. This caused silent failures in E2E tests when the token counting IPC endpoint tried to initialize the worker pool, which likely affected stream event timing.
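The fix presumably amounted to adding the workers directory to the compiled file set. A hypothetical tsconfig.main.json along these lines, where every entry other than the workers glob is a placeholder:

```jsonc
{
  "extends": "./tsconfig.json",
  "include": [
    "src/main.ts",
    "src/services/**/*",
    "src/workers/**/*" // previously missing, so tokenizerWorker.ts never reached dist/
  ]
}
```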
Force-pushed from 5f554dd to e9f4b29
Force-pushed from e9f4b29 to 8589123
The original lazy-load implementation used require('@/services/aiService')
which fails at runtime because Node.js doesn't resolve TypeScript path aliases.
Changed to require('./aiService') which resolves correctly from dist/services/.
Summary
This PR optimizes startup performance and eliminates UI freezes through three key improvements, detailed below.
Changes
1. Bundle Size Optimization (Commit: 20634dc)
Lazy-load the Mermaid component via the dynamic `import()` pattern, deferring its 631KB chunk (sketched below).

Results:
Performance Impact:
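For reference, the dynamic `import()` pattern for a component like Mermaid typically looks like the following sketch; the component and file names are assumptions, not the repo's actual code:

```tsx
import { lazy, Suspense } from "react";

// import() tells the bundler to emit the Mermaid code as a separate chunk,
// fetched only when the component first renders. Assumes MermaidDiagram is
// the module's default export.
const MermaidDiagram = lazy(() => import("./MermaidDiagram"));

export function DiagramBlock({ source }: { source: string }) {
  return (
    <Suspense fallback={<div>Rendering diagram…</div>}>
      <MermaidDiagram source={source} />
    </Suspense>
  );
}
```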
2. IPC Architecture Refinement (Commit: 221e81e)
Inverted ownership for token consumer calculation - backend now only tokenizes, frontend handles display logic.
Before:
After:
Benefits:
New Files:
- `src/utils/tokens/consumerCalculator.ts` (159 LoC) - Pure calculation functions
- `src/utils/tokens/consumerCalculator.test.ts` (240 LoC) - 12 comprehensive tests

Changes:
- Add `tokens:countBulk(model, texts[])` IPC endpoint (sketched below)
- Update `TokenConsumerBreakdown` to calculate stats in the frontend
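A hedged sketch of the main-process side of this endpoint: the channel name and `(model, texts[])` signature come from this PR, while the singleton import and handler body are assumptions:

```ts
import { ipcMain } from "electron";
import { tokenizerWorkerPool } from "./tokenizerWorkerPool"; // assumed singleton export

// Main process side: the backend only tokenizes; the renderer computes the
// per-consumer breakdown from the returned counts.
ipcMain.handle("tokens:countBulk", async (_event, model: string, texts: string[]) => {
  return tokenizerWorkerPool.countBulk(model, texts); // -> number[] of token counts
});
```

The renderer then feeds the returned counts into the pure functions in `consumerCalculator.ts` to build the per-consumer breakdown, keeping all display logic in the frontend.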
3. Fix UI Freeze with Worker Thread (Commit: aac20c7)

The app was freezing for 3-4 seconds when first opening the Costs tab, because tokenizer loading and computation happened synchronously in the main process, blocking all window events.
Solution: Move tokenization to a Node.js worker thread
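A condensed sketch of that worker-pool idea, assuming Node's `worker_threads` API; the message protocol, file layout, and class internals are illustrative, not the repo's actual 161-LoC implementation:

```ts
import { Worker } from "node:worker_threads";
import * as path from "node:path";

type Pending = { resolve: (counts: number[]) => void; reject: (err: Error) => void };

export class TokenizerWorkerPool {
  // One long-lived worker keeps the heavy tokenizer modules out of the main process.
  private worker: Worker;
  private pending = new Map<number, Pending>();
  private nextId = 0;

  constructor() {
    this.worker = new Worker(path.join(__dirname, "../workers/tokenizerWorker.js"));
    this.worker.on("message", (msg: { id: number; counts: number[] }) => {
      this.pending.get(msg.id)?.resolve(msg.counts);
      this.pending.delete(msg.id);
    });
    this.worker.on("error", (err) => {
      for (const p of this.pending.values()) p.reject(err);
      this.pending.clear();
    });
  }

  // Queue a request; the main process's event loop never blocks on tokenization.
  countBulk(model: string, texts: string[]): Promise<number[]> {
    const id = this.nextId++;
    return new Promise((resolve, reject) => {
      this.pending.set(id, { resolve, reject });
      this.worker.postMessage({ id, model, texts });
    });
  }
}
```

Because the worker owns the tokenizer modules, the main process neither loads them nor stalls while counts are computed.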
Architecture Change:
Before:
After:
New Files:
- `src/workers/tokenizerWorker.ts` (53 LoC) - Worker thread for CPU-intensive tokenization
- `src/services/tokenizerWorkerPool.ts` (161 LoC) - Manages worker lifecycle and request queue

Impact:
Testing
- `dist/src/workers/tokenizerWorker.js`

Net Changes
Added:
- `src/utils/tokens/consumerCalculator.ts` (159 LoC)
- `src/utils/tokens/consumerCalculator.test.ts` (240 LoC)
- `src/workers/tokenizerWorker.ts` (53 LoC)
- `src/services/tokenizerWorkerPool.ts` (161 LoC)
- `BUNDLE_SIZE_ANALYSIS.md` (documentation)

Modified:
- `TokenConsumerBreakdown.tsx` - now calculates stats in the frontend
- `CostsTab.tsx` - passes messages instead of workspaceId
- `tsconfig.main.json` - includes workers and services

Documentation
Added `BUNDLE_SIZE_ANALYSIS.md` with:

Next Optimization Opportunities
Generated with `cmux`