Conversation

@ammar-agent

Summary

Performance optimizations to reduce memory pressure and GC pauses caused by inefficient string handling during streaming.

Problem

Heap analysis revealed:

  • 582 MB across 30.5M concatenated strings: each streaming delta was stored as a separate message part, and merging used repeated `+` concatenation, creating O(n²) intermediate strings
  • 1.39 s major GC pauses during animation events, driven by memory pressure
  • 35M total heap objects accumulated during typical usage

Changes

1. Stabilize foregroundToolCallIds Set reference

Avoid invalidating React Compiler memoization by checking content equality before creating new Set instances.
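A minimal sketch of the reference-stabilization idea (names and types here are illustrative, not the actual mux code): compare the new contents against the previous Set and return the old reference when nothing changed, so memoized consumers do not re-render.

```typescript
// Hypothetical helper: return `prev` unchanged when `nextItems` has the
// same contents, preserving referential equality for memoization.
function stableSet<T>(prev: Set<T>, nextItems: Iterable<T>): Set<T> {
  const next = new Set(nextItems);
  if (next.size === prev.size) {
    let same = true;
    for (const item of next) {
      if (!prev.has(item)) {
        same = false;
        break;
      }
    }
    // Identical contents: keep the old reference so React Compiler's
    // auto-memoization of downstream components stays valid.
    if (same) return prev;
  }
  return next;
}
```

With this shape, a subscription handler can write `state.foregroundToolCallIds = stableSet(state.foregroundToolCallIds, incomingIds)` and only produce a new reference on a real change.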

2. Use array.join() for message part merging

Replace O(n²) string concatenation with array accumulation and a single `join()`. V8 optimizes `join()` much better than repeated `+`.
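The two strategies side by side, as a simplified sketch (the real merge operates on message parts, not raw string arrays):

```typescript
// O(n²) pattern: each `+=` can allocate a new intermediate string
// (a V8 "cons string"), so n deltas leave up to n chained heap objects.
function mergeWithConcat(deltas: string[]): string {
  let out = "";
  for (const d of deltas) out += d;
  return out;
}

// O(n) pattern: accumulate fragments, allocate the final string once.
function mergeWithJoin(deltas: string[]): string {
  const fragments: string[] = [];
  for (const d of deltas) fragments.push(d);
  return fragments.join("");
}
```

Both return the same string; the difference is the number and lifetime of intermediate heap objects created along the way.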

3. Compact message parts on stream end

When streaming completes, merge thousands of delta parts into single strings immediately. This converts memory from O(deltas) small objects to O(content_types) merged objects, preventing accumulation.
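A sketch of the compaction step under assumed types (the `MessagePart` shape and function name here are illustrative): adjacent parts of the same type are collapsed into one merged part, so the part count drops from the number of deltas to the number of distinct content runs.

```typescript
// Hypothetical part shape; the real codebase's types may differ.
interface MessagePart {
  type: string;
  text: string;
}

// Merge runs of adjacent parts sharing a type into single parts,
// converting O(deltas) small strings into O(content_types) merged ones.
function compactMessageParts(parts: MessagePart[]): MessagePart[] {
  const compacted: MessagePart[] = [];
  let run: string[] = [];
  let runType: string | null = null;

  const flush = () => {
    if (runType !== null) {
      // Join once per run instead of concatenating per delta.
      compacted.push({ type: runType, text: run.join("") });
      run = [];
      runType = null;
    }
  };

  for (const part of parts) {
    if (part.type !== runType) {
      flush();
      runType = part.type;
    }
    run.push(part.text);
  }
  flush();
  return compacted;
}
```

Running this once when the stream ends means the long-lived message only ever holds the compacted representation.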

4. Extract mergeAdjacentParts() helper

Deduplicate the merge logic between compactMessageParts() and getDisplayedMessages().

Validation

  • make static-check passes
  • make typecheck passes
  • Heap timeline analysis confirms these patterns as the primary memory sources

Generated with mux • Model: claude-sonnet-4-20250514 • Thinking: low

Prevent unnecessary re-renders by only updating the Set when contents
actually change. Previously, every subscription event created a new Set
reference, invalidating React Compiler's auto-memoization for the entire
message list.

Also added React Compiler guidance to AGENTS.md.

Replace O(n²) string concatenation with array accumulation and join()
to avoid creating millions of V8 "cons strings" during streaming.

Heap analysis showed 30M+ concatenated strings (582MB) from the
repeated `lastMerged.text + part.text` pattern in getDisplayedMessages().
The new approach collects text fragments in arrays and joins once at
the end, which V8 optimizes much better.

---
Two related improvements:

1. Compact parts when streaming ends - converts thousands of delta parts
   into merged strings immediately, reducing memory from O(deltas) to
   O(content_types). This prevents accumulation of 30M+ small objects
   observed in heap timeline (582MB of concatenated strings).

2. Extract `mergeAdjacentParts()` helper to deduplicate the merge logic
   between compactMessageParts() and getDisplayedMessages().


@ammario ammario merged commit 7ff598c into main Dec 14, 2025
20 checks passed
@ammario ammario deleted the performance-b4s0 branch December 14, 2025 19:20