
Minor chat optimizations #301752

Merged
roblourens merged 4 commits into main from roblou/urban-ant
Mar 14, 2026
Conversation

@roblourens (Member)

A collection of minor memory-use optimizations, identified by Copilot:
1. Lazy _responseRepr: no longer eagerly rebuilt on every streaming token; instead computed on demand in toString(). This avoids running the expensive partsToRepr() (which iterates all parts and builds strings) on every incoming token during streaming. The repr is only computed when actually needed (copy, accessibility, telemetry, history).
2. Lazy _markdownContent: same pattern, computed on demand in getMarkdown(). During streaming, getMarkdown() is called via countWords() in the view model, but crucially the string is only built once per invalidation cycle rather than eagerly on every updateContent call. If multiple parts are updated before getMarkdown() is accessed, only one computation happens.
3. _invalidateRepr(): new method that simply sets both cached strings to undefined, replacing the old _updateRepr(), which did the expensive computation eagerly.
4. Response._contentChanged(quiet?): replaces the old _updateRepr(quiet?) override. It calls _invalidateRepr() and fires the change event when not quiet. The citation-append logic moved into a computeRepr() override so that it is part of the lazy computation.
Copilot AI review requested due to automatic review settings March 14, 2026 20:20
@roblourens roblourens enabled auto-merge (squash) March 14, 2026 20:20
@roblourens roblourens self-assigned this Mar 14, 2026
@vs-code-engineering vs-code-engineering bot added this to the 1.112.0 milestone Mar 14, 2026
Copilot AI (Contributor) left a comment


Pull request overview

This PR applies several minor memory and performance optimizations to the chat subsystem: lazy computation of response string representations, deferred word-count updates, pre-computed render indices, shared closures/static methods to reduce allocations, and proper MutableDisposable/DisposableStore usage to fix potential disposal leaks.

Changes:

  • Lazy-compute _responseRepr and _markdownContent in AbstractResponse, invalidating cached values on content change instead of eagerly rebuilding.
  • Replace repeated O(n) getter computations for codeBlockStartIndex/treeStartIndex with incrementally tracked counters, and use RunOnceScheduler to coalesce word-count updates.
  • Switch to MutableDisposable/DisposableStore for managing rendered markdown results and highlighted labels to prevent disposal leaks on repeated updates.
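The word-count coalescing in the second bullet can be illustrated with a minimal stand-in for RunOnceScheduler (the real class lives in VS Code's vs/base/common/async; this sketch only mimics its schedule/cancel behavior, and the word-count runner is a hypothetical placeholder):

```typescript
// Minimal stand-in for VS Code's RunOnceScheduler: schedule() (re)arms a
// timer, so many calls within the delay window collapse into one run.
class RunOnceSchedulerSketch {
  private handle: ReturnType<typeof setTimeout> | undefined;

  constructor(private readonly runner: () => void, private readonly delay: number) {}

  schedule(): void {
    this.cancel();
    this.handle = setTimeout(() => {
      this.handle = undefined;
      this.runner();
    }, this.delay);
  }

  cancel(): void {
    if (this.handle !== undefined) {
      clearTimeout(this.handle);
      this.handle = undefined;
    }
  }
}

// Hypothetical view-model usage: many streamed updates, one word-count pass.
let wordCountRuns = 0;
const scheduler = new RunOnceSchedulerSketch(() => { wordCountRuns++; }, 30);
for (let i = 0; i < 10; i++) {
  scheduler.schedule(); // ten content updates coalesce into a single run
}
```

Compared with recounting words on every token, this bounds the work to one count per quiet period, at the cost of the count lagging by at most the scheduler delay.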

Reviewed changes

Copilot reviewed 7 out of 7 changed files in this pull request and generated no comments.

Summary per file:

  • chatModel.ts: Lazy repr/markdown computation with invalidation pattern
  • chatViewModel.ts: Coalesce word-count updates via RunOnceScheduler
  • chatListRenderer.ts: Replace O(n) getter recomputations with incremental counters
  • chatThinkingContentPart.ts: Use MutableDisposable for markdown results; hoist static helper and callback
  • chatSubagentContentPart.ts: Use MutableDisposable for title detail rendering
  • chatContentMarkdownRenderer.ts: Hoist shared closure to module level
  • iconLabel.ts: Fix disposal leak with DisposableStore for highlighted labels

@roblourens roblourens merged commit 86ffb98 into main Mar 14, 2026
24 checks passed
@roblourens roblourens deleted the roblou/urban-ant branch March 14, 2026 20:45