fix: drain entire queue per flush to avoid streaming tail lag #424
heyitsaamir wants to merge 1 commit into main from
Conversation
Remove the 10-item batch cap in Stream.Flush() so each cycle processes all queued items, preventing a long streaming tail when the LLM finishes faster than chunks are sent to Teams. Port of microsoft/teams.py#384 / microsoft/teams.ts#520. Co-Authored-By: Claude <noreply@anthropic.com>
Pull request overview
Note
Copilot was unable to run its full agentic suite in this review.
Removes the fixed 10-item cap in Stream.Flush() so each flush cycle drains the entire pending activity queue, reducing “tail lag” when generation outpaces outbound sending.
Changes:
- Remove the 10-item batch limit in `Flush()` and drain `_queue` fully per flush
- Replace the “no work done” check (`i == 0`) with a queue-count-based early return guard
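The drain-everything behavior the PR introduces can be sketched outside the C# codebase. This is not the actual `Stream.Flush()` implementation; it is an analogous sketch in Java using `ConcurrentLinkedQueue`, showing a flush that polls until the queue is empty instead of stopping at a fixed 10-item cap:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;

public class DrainDemo {
    static final ConcurrentLinkedQueue<String> queue = new ConcurrentLinkedQueue<>();

    // Drain everything queued at flush time, instead of stopping
    // after a fixed batch cap. With a cap of 10, a burst of N chunks
    // would need ceil(N / 10) flush cycles to fully send (the "tail lag").
    static List<String> flush() {
        List<String> batch = new ArrayList<>();
        String item;
        while ((item = queue.poll()) != null) {
            batch.add(item);
        }
        return batch;
    }

    public static void main(String[] args) {
        // Simulate the LLM finishing a burst of 25 chunks between flushes.
        for (int i = 0; i < 25; i++) {
            queue.add("chunk-" + i);
        }
        List<String> sent = flush();
        System.out.println(sent.size());    // all 25 go out in one cycle
        System.out.println(queue.isEmpty());
    }
}
```

With the cap removed, a single flush cycle empties whatever the generator produced since the last cycle, so the stream cannot fall progressively behind.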
Comments suppressed due to low confidence (1)
Libraries/Microsoft.Teams.Plugins/Microsoft.Teams.Plugins.AspNetCore/AspNetCorePlugin.Stream.cs:1
`startCount` is computed from `_queue.Count`, which is not a reliable indicator of whether this flush actually dequeued items in concurrent scenarios. If `_queue.Count` is 0 at line 138 but items are enqueued immediately after and then dequeued in the loop, `startCount == 0` will still be true and the method will return early, potentially dropping/deferring sends even though activities were drained and `_count` was incremented. Prefer tracking whether anything was actually dequeued (e.g., `var dequeued = 0;`, incremented inside the loop, with `if (dequeued == 0) return;`), which matches the previous `i == 0` behavior and avoids reliance on `_queue.Count`.
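The reviewer's suggested fix can be sketched in isolation. Again this is not the C# code under review but an analogous Java sketch: the "did this flush do work?" decision comes from a local counter of items actually dequeued, not from a pre-loop snapshot of the queue's size:

```java
import java.util.concurrent.ConcurrentLinkedQueue;

public class DequeuedCounterDemo {
    static final ConcurrentLinkedQueue<String> queue = new ConcurrentLinkedQueue<>();

    // Count real work done in this cycle rather than snapshotting
    // queue size up front; items enqueued after the snapshot but
    // drained by the loop are still counted.
    static int flush() {
        int dequeued = 0;
        String item;
        while ((item = queue.poll()) != null) {
            dequeued++;
        }
        if (dequeued == 0) {
            return 0; // mirrors the previous `i == 0` early-return guard
        }
        // ... send the drained items here ...
        return dequeued;
    }

    public static void main(String[] args) {
        queue.add("a");
        queue.add("b");
        System.out.println(flush()); // 2: work was done
        System.out.println(flush()); // 0: nothing dequeued this cycle
    }
}
```

The counter is local to the flush, so it is immune to concurrent enqueues racing with the size snapshot, which is the failure mode the comment describes.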
```diff
- if (i == 0) return;
+ if (startCount == 0) return;
```
```diff
- var i = 0;
  Queue<TypingActivity> informativeUpdates = new();
+ var startCount = _queue.Count;
```
`ConcurrentQueue<T>.Count` can be O(n) and is an imprecise snapshot under concurrency. Since this value is only used to decide whether to return, it is more efficient and accurate to avoid `Count` entirely and instead track a local dequeued counter (or a boolean flag) while draining.
Summary
- Remove the 10-item batch cap in `Stream.Flush()` so each flush cycle drains the entire queue

Test plan
- `AspNetCorePluginStreamTests` pass (15/15 on both net8.0 and net10.0)

🤖 Generated with Claude Code