
fix: drain entire queue per flush cycle#520

Merged
heyitsaamir merged 1 commit into main from fix/stream-flush-drain
Apr 13, 2026
Conversation

@heyitsaamir
Collaborator

Removes the 10-item batch cap in HttpStream flush. Previously, each flush only processed 10 queued items, causing a long streaming tail when the LLM finishes faster than chunks are sent to Teams.
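
As a hedged sketch of the change (the real HttpStream internals differ; `Activity`, `enqueue`, and `flush` here are illustrative names, not the actual API), removing the cap means the drain loop runs until the queue is empty:

```typescript
// Illustrative model of the flush change, not the actual HttpStream code.
type Activity = { text: string };

class StreamSketch {
  private queue: Activity[] = [];
  text = '';

  enqueue(activity: Activity): void {
    this.queue.push(activity);
  }

  // Before: `while (this.queue.length && i < 10)` left a tail behind.
  // After: drain everything queued at the start of this flush cycle,
  // consolidating it into a single chunk to send.
  flush(): string {
    let chunk = '';
    while (this.queue.length) {
      const activity = this.queue.shift()!;
      chunk += activity.text;
    }
    this.text += chunk;
    return chunk;
  }
}
```

With the cap removed, a queue of 200 items produces one consolidated send instead of 20 capped flush cycles.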

@heyitsaamir
Collaborator Author

heyitsaamir commented Apr 11, 2026

This change is part of the following stack:

Change managed by git-spice.

@heyitsaamir heyitsaamir force-pushed the fix/stream-flush-drain branch from 86ae8e3 to 488199c on April 11, 2026 00:43
@rajan-chari rajan-chari self-requested a review April 13, 2026 14:22
rajan-chari
rajan-chari previously approved these changes Apr 13, 2026
Contributor

@rajan-chari rajan-chari left a comment


Interesting. Some thoughts on why this limit might have been there, along with rebuttals. TL;DR: seems like a clear win to me.

  1. Perhaps there was concern about this.text getting too long?
              this.text += activity.text;
     However, we just keep appending to it afaik, so it's not clear why that would justify capping the flush at 10.

  2. Keeping the user waiting while we append strings?
     The string append is orders of magnitude faster than the network send, so the user does not have to wait.

Verdict: PR seems like a clear win.

Other future improvement: I remember seeing something about a 1-update-per-second limit. However, the origin of this seems unclear. Changing that to one update per 500 ms would also benefit streaming responsiveness.
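
The update-rate idea above can be sketched as a simple time gate. `SendGate`, `intervalMs`, and the injected clock are assumptions for illustration, not the SDK's actual rate-limit mechanism:

```typescript
// Illustrative rate gate: allow at most one send per `intervalMs`.
// A sketch only; the real throttling lives elsewhere in the SDK.
class SendGate {
  private lastSent = -Infinity;

  constructor(private readonly intervalMs: number) {}

  // `now` is injected (e.g. Date.now()) so the gate is testable
  // with a fake clock instead of real timers.
  shouldSend(now: number): boolean {
    if (now - this.lastSent >= this.intervalMs) {
      this.lastSent = now;
      return true;
    }
    return false;
  }
}
```

Dropping `intervalMs` from 1000 to 500 would double the update rate without touching the drain logic.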

@rajan-chari rajan-chari requested a review from aacebo April 13, 2026 14:25
@rajan-chari
Contributor

would love to hear from @aacebo

@rajan-chari
Contributor

Thanks @heyitsaamir

@heyitsaamir
Collaborator Author

@copilot can you go through the history of that file and figure out why that chunk limit was added? Use old PRs if needed too.

Contributor

Copilot AI commented Apr 13, 2026

@copilot can you go through the history of that file and figure out why that chunk limit was added? Use old PRs if needed too.

Here's what the git history shows:

The 10-item cap was never an intentional design decision — it was a simplification that slipped in during PR #256 ("fix streaming retry logic", commit ada46913) by @aacebo.

Original code (from the very first commit 7f38971d): The flush used a proportional limit — processing ~10% of the queue per flush cycle:

const size = Math.round(this.queue.length / 10);
while (this.queue.length && i <= size) {

In PR #256 (commit ada46913), the primary fix was correcting how promises are passed to the retry helper: promises.retry(this.send(activity)) became promises.retry(() => this.send(activity)) (a real bug: a rejected promise was being reused). As part of that refactor, the dynamic Math.round(this.queue.length / 10) was silently replaced with a hardcoded i < 10:

-    const size = Math.round(this.queue.length / 10);
-    while (this.queue.length && i <= size) {
+    while (this.queue.length && i < 10) {

This turned a proportional 10%-per-flush rule into a fixed hard cap of 10 items, even if the queue had 200+ items. The original proportional logic would also have had a tail problem (it always left ~90% of the queue behind), but the fixed cap made the tail strictly worse for large queues.
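
The promise-reuse bug fixed in PR #256 is worth illustrating: a settled Promise never changes state, so retrying the same rejected promise can never succeed, while a factory starts a fresh operation per attempt. This `retry` is a minimal stand-in, not the library's actual `promises.retry` helper:

```typescript
// Minimal retry-with-factory sketch (illustrative, not the SDK helper).
// Accepting `fn: () => Promise<T>` rather than `p: Promise<T>` is the
// whole point: each attempt calls `fn()` and gets a brand-new promise.
async function retry<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn(); // fresh promise per attempt
    } catch (err) {
      lastError = err; // remember the failure and try again
    }
  }
  throw lastError;
}
```

Had the signature been `retry(this.send(activity))`, `send` would run exactly once and every "retry" would re-await the same already-rejected promise.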

Since each flush cycle consolidates all dequeued items into a single TypingActivity chunk, there's no good reason to cap how much of the queue is drained per cycle. This PR removes that cap entirely.

lilyydu
lilyydu previously approved these changes Apr 13, 2026
Base automatically changed from feat/stream-cancellation to main April 13, 2026 16:44
@heyitsaamir heyitsaamir dismissed stale reviews from lilyydu and rajan-chari April 13, 2026 16:44

The base branch was changed.

@heyitsaamir heyitsaamir merged commit 0a7ce39 into main Apr 13, 2026
10 of 11 checks passed
@heyitsaamir heyitsaamir deleted the fix/stream-flush-drain branch April 13, 2026 17:12
Copilot stopped work on behalf of heyitsaamir due to an error April 13, 2026 17:12
heyitsaamir added a commit to microsoft/teams.py that referenced this pull request Apr 13, 2026
Remove the 10-item batch cap in HttpStream flush so each cycle
processes all queued items, preventing a long streaming tail when
the LLM finishes faster than chunks are sent to Teams.

Port of microsoft/teams.ts#520.

Co-Authored-By: Claude <noreply@anthropic.com>
Copilot stopped work on behalf of heyitsaamir due to an error April 13, 2026 17:24
heyitsaamir added a commit to microsoft/teams.net that referenced this pull request Apr 13, 2026
Remove the 10-item batch cap in Stream.Flush() so each cycle
processes all queued items, preventing a long streaming tail when
the LLM finishes faster than chunks are sent to Teams.

Port of microsoft/teams.py#384 / microsoft/teams.ts#520.

Co-Authored-By: Claude <noreply@anthropic.com>
heyitsaamir added a commit to microsoft/teams.py that referenced this pull request Apr 13, 2026
## Summary
- Removes the 10-item batch cap in `HttpStream._flush()` so each flush
cycle drains the entire queue
- Prevents a long streaming tail when the LLM finishes generating faster
than chunks are sent to Teams
- Port of
[microsoft/teams.ts#520](microsoft/teams.ts#520)

## Test plan
- [x] Existing `test_http_stream.py` tests pass (11/11)
- [x] Updated `test_stream_multiple_emits_with_timer` to verify the
first flush drains all 12 messages with no second flush scheduled
- [x] Pre-commit hooks (ruff, pyright) pass

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
@heyitsaamir heyitsaamir mentioned this pull request Apr 16, 2026
heyitsaamir added a commit that referenced this pull request Apr 16, 2026
## Summary
- Merges all changes from `main` since v2.0.7 into `release`
- Sets `version.json` to stable `2.0.8`

### What's included
- **feat:** Sovereign cloud support (GCCH, DoD, China) (#500)
- **feat:** Add missing endpoints — Paged Members, Meeting Notifications
& client gaps (#516)
- **feat:** Graceful stream cancellation on 403 (#513)
- **feat:** GitHub issue analysis → Teams notification workflow (#517)
- **fix:** Improve error message when app credentials are missing (#527)
- **fix:** Surface Graph API error body in GraphError (#524)
- **fix:** Resolve missing deps, broken JSON, type errors, and typos in
CLI templates (#521)
- **fix:** Drain entire queue per flush cycle (#520)
- **fix:** Merge User-Agent headers when cloning HTTP client (#508)
- Dependency bumps: hono, axios, vite, @hono/node-server

## Post-merge
1. Trigger the [release
pipeline](https://dev.azure.com/DomoreexpGithub/Github_Pipelines/_build?definitionId=52&_a=summary)
for `release` with **Public** publish type
2. Bump `version.json` on `main` to `2.0.9-preview.{height}`

🤖 Generated with [Claude Code](https://claude.com/claude-code)

4 participants