During streaming, I still see truncated sentences. The se...#35

Merged
lukemarsden merged 5 commits into main from
feature/001694-during-streaming-i-still
Apr 6, 2026

Conversation

@lukemarsden

During streaming, I still see truncated sentences. The sentences are almost always truncated just before a tool call, I think. It's as if there's a 100 ms poll or something similar inside Zed that sometimes fails to update certain sentences, although when the interaction completes, we do correctly get them all updated. Can you investigate this? I think there are multiple levels here:

  1. Zed sends updates to the API.
  2. The API also sends updates to the front end with patches on the response entries.


…cated streaming

When tool calls arrived during streaming, the preceding text appeared truncated
because the NewEntry handler sent the tool_call entry without flushing the text
entry's pending throttled content first. The stale-pending flush only existed in
throttled_send_message_added, which NewEntry bypasses.

Changes:
- Extract flush_stale_pending_for_thread() helper from inline stale-pending flush
- Call it in NewEntry handler before sending, ensuring text content is complete
- Bypass 100ms throttle for tool_call entries (infrequent, need prompt delivery)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

Spec-Ref: helix-specs@df2833ab1:001694_during-streaming-i-still
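The ordering fix above can be illustrated with a minimal sketch. The struct and method names (`ThrottleBuffer`, `flush_stale_pending_for_thread`, `on_new_entry`) mirror the commit message but are simplified stand-ins for the real Zed code, not the actual implementation:

```rust
use std::collections::HashMap;

// Hypothetical minimal model of the throttle buffer described above.
#[derive(Default)]
struct ThrottleBuffer {
    // Text per thread that has been received but not yet sent (throttled).
    pending: HashMap<String, String>,
    // Events actually delivered, in order.
    sent: Vec<String>,
}

impl ThrottleBuffer {
    // Extracted helper: flush any stale pending text for `thread_id`
    // so preceding text is complete before anything else goes out.
    fn flush_stale_pending_for_thread(&mut self, thread_id: &str) {
        if let Some(text) = self.pending.remove(thread_id) {
            self.sent.push(format!("message_added:{text}"));
        }
    }

    // NewEntry handler: flush first, then send the new entry immediately,
    // bypassing the 100 ms throttle (tool calls are infrequent).
    fn on_new_entry(&mut self, thread_id: &str, entry: &str) {
        self.flush_stale_pending_for_thread(thread_id);
        self.sent.push(format!("new_entry:{entry}"));
    }
}

fn main() {
    let mut buf = ThrottleBuffer::default();
    buf.pending.insert("t1".into(), "final sentence".into());
    buf.on_new_entry("t1", "tool_call");
    // The pending text is delivered before the tool_call entry.
    assert_eq!(
        buf.sent,
        vec!["message_added:final sentence", "new_entry:tool_call"]
    );
}
```

Without the flush call inside `on_new_entry`, the tool_call event would arrive while the last sentence of text was still sitting in the throttle buffer, which is the truncation the commit describes.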
… stale pending

The previous fix (flush_stale_pending_for_thread) was insufficient because
the throttle buffer's pending_content is a snapshot from the last EntryUpdated
event. When AcpThread::push_entry() calls flush_streaming_text() before
emitting NewEntry, the final tokens get flushed into the Markdown entity but
no EntryUpdated is emitted — so the pending snapshot is stale.

Fix: in the NewEntry handler, re-read ALL preceding entries' current content
directly from the thread model (which has the complete text after
flush_streaming_text()) and send fresh message_added events for each.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

Spec-Ref: helix-specs@393a7da43:001694_during-streaming-i-still
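A sketch of the second fix, assuming a simplified `Thread`/`Entry` model in place of the real `AcpThread`: instead of trusting the throttle buffer's stale snapshot, the NewEntry handler re-reads current content directly from the thread model, which already holds the complete text after `flush_streaming_text()`:

```rust
// Hypothetical simplified thread model; the real AcpThread differs.
struct Entry {
    id: u32,
    content: String, // complete text after flush_streaming_text()
}

struct Thread {
    entries: Vec<Entry>,
}

// On NewEntry at `new_index`, emit fresh message_added events built from
// the thread model's current content, not from a stale pending snapshot.
fn resend_preceding_entries(thread: &Thread, new_index: usize) -> Vec<String> {
    thread.entries[..new_index]
        .iter()
        .map(|e| format!("message_added:{}:{}", e.id, e.content))
        .collect()
}

fn main() {
    let thread = Thread {
        entries: vec![
            Entry { id: 1, content: "complete sentence.".into() },
            Entry { id: 2, content: "tool_call".into() },
        ],
    };
    // The re-sent event carries the full flushed text, not the snapshot.
    assert_eq!(
        resend_preceding_entries(&thread, 1),
        vec!["message_added:1:complete sentence."]
    );
}
```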
Upstream PR zed-industries#51499 added a StreamingTextBuffer that rate-limits how many
bytes get revealed into the Markdown entity per tick, creating a smooth
typewriter animation. However, content_only(cx) reads markdown.source()
which only returns revealed bytes. This means EntryUpdated -> WebSocket
sync sends incomplete content, causing the Go accumulator's baseline to
drift behind reality. When patches are computed from this stale baseline,
the frontend sees backwards edits that truncate text before tool calls.

Fix: each 16ms timer tick now drains the entire pending buffer instead of
limiting to bytes_to_reveal_per_tick. The timer still batches (avoids
per-character markdown.append calls), but no bytes are withheld. This
ensures content_only(cx) always returns all received content.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

Spec-Ref: helix-specs@b244b04d4:001694_during-streaming-i-still
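The drain-per-tick behavior can be sketched as follows. This is a toy stand-in for the upstream `StreamingTextBuffer`, with `pending` and `revealed` strings standing in for the real buffer and the Markdown entity:

```rust
// Hypothetical sketch: each timer tick drains the whole pending buffer
// instead of revealing a fixed bytes_to_reveal_per_tick quota.
#[derive(Default)]
struct StreamingTextBuffer {
    pending: String,  // bytes received but not yet appended
    revealed: String, // what content_only(cx) would see
}

impl StreamingTextBuffer {
    fn push(&mut self, chunk: &str) {
        self.pending.push_str(chunk);
    }

    // Timer tick: still batches (one append per tick rather than one per
    // character), but withholds nothing, so `revealed` is always complete.
    fn on_tick(&mut self) {
        if !self.pending.is_empty() {
            self.revealed.push_str(&self.pending);
            self.pending.clear();
        }
    }
}

fn main() {
    let mut buf = StreamingTextBuffer::default();
    buf.push("Hello, ");
    buf.push("world.");
    buf.on_tick();
    // After any tick, no bytes remain withheld from the revealed content.
    assert_eq!(buf.revealed, "Hello, world.");
    assert!(buf.pending.is_empty());
}
```

The trade-off named in the commit: the typewriter animation is lost, but the WebSocket sync baseline can no longer drift behind what was actually received.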
…eaks

The NewEntry handler re-sent ALL preceding entries (from index 0) when a
new entry arrived, but this included entries from previous turns. The Go
server added these old message_ids to the current interaction's
response_entries, causing isolation violations in the store.

Apply the same turn-scoping logic already used by the Stopped handler:
find the last UserMessage and only re-send entries after it.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

Spec-Ref: helix-specs@b244b04d4:001694_during-streaming-i-still
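The turn-scoping logic can be sketched like this, with a hypothetical `EntryKind` enum standing in for the real entry types: scan backwards for the last UserMessage and keep only the entries after it:

```rust
#[derive(PartialEq)]
enum EntryKind {
    UserMessage,
    AssistantText,
    ToolCall,
}

struct Entry {
    kind: EntryKind,
    message_id: u32,
}

// Hypothetical turn-scoping sketch: only entries after the last
// UserMessage belong to the current turn and should be re-sent.
fn current_turn_entries(entries: &[Entry]) -> &[Entry] {
    let start = entries
        .iter()
        .rposition(|e| e.kind == EntryKind::UserMessage)
        .map_or(0, |i| i + 1);
    &entries[start..]
}

fn main() {
    let entries = vec![
        Entry { kind: EntryKind::UserMessage, message_id: 1 },
        Entry { kind: EntryKind::AssistantText, message_id: 2 },
        Entry { kind: EntryKind::UserMessage, message_id: 3 },
        Entry { kind: EntryKind::AssistantText, message_id: 4 },
        Entry { kind: EntryKind::ToolCall, message_id: 5 },
    ];
    let ids: Vec<u32> = current_turn_entries(&entries)
        .iter()
        .map(|e| e.message_id)
        .collect();
    // Entries 1-2 belong to the previous turn and are excluded.
    assert_eq!(ids, vec![4, 5]);
}
```

Scoping here is what keeps old message_ids from previous turns out of the current interaction's response_entries on the Go side.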
…-streaming-i-still

Spec-Ref: helix-specs@15bac144f:001694_during-streaming-i-still
@lukemarsden lukemarsden merged commit ba7c15a into main Apr 6, 2026
25 checks passed
