
Conversation

waldekmastykarz
Collaborator

Adds support for logging LM token usage for streamed responses. Closes #1345

@Copilot Copilot AI review requested due to automatic review settings July 25, 2025 11:38
@waldekmastykarz waldekmastykarz added the pr-bugfix label Jul 25, 2025
@waldekmastykarz waldekmastykarz requested a review from a team as a code owner July 25, 2025 11:38
Contributor

@Copilot Copilot AI left a comment

Pull Request Overview

This PR adds support for logging language model token usage for streamed responses in the OpenAI telemetry plugin. Streamed responses are delivered as Server-Sent Events (SSE), with the token usage reported only in the final chunk, so the plugin must parse the stream to extract and log that information.

  • Adds logic to detect streaming responses by checking for text/event-stream content type
  • Implements parsing of streaming response chunks to extract the final token usage data
  • Modifies the response processing flow to handle both regular and streaming responses (sketched below)
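
For illustration, here is a minimal C# sketch of the approach the overview describes: detect an SSE response by its `text/event-stream` content type, walk the `data:` chunks, and pick up the `usage` object from the final chunk. The class and method names are hypothetical, not the plugin's actual API; note that OpenAI only emits the usage chunk when the request sets `stream_options.include_usage`.

```csharp
using System;
using System.Text.Json;

static class StreamingUsageParser
{
    // Returns (prompt, completion) token counts from the last SSE data chunk
    // that carries a "usage" object, or null if none is present.
    public static (int Prompt, int Completion)? GetTokenUsage(
        string contentType, string responseBody)
    {
        // Only streamed responses use Server-Sent Events.
        if (!contentType.Contains("text/event-stream", StringComparison.OrdinalIgnoreCase))
        {
            return null;
        }

        (int Prompt, int Completion)? usage = null;

        // Each SSE event is a line of the form "data: {json}";
        // the stream ends with the "data: [DONE]" sentinel.
        foreach (var line in responseBody.Split('\n'))
        {
            var chunk = line.Trim();
            if (!chunk.StartsWith("data: ", StringComparison.Ordinal) ||
                chunk == "data: [DONE]")
            {
                continue;
            }

            using var doc = JsonDocument.Parse(chunk["data: ".Length..]);
            // With stream_options.include_usage enabled, OpenAI sends the
            // usage totals in the final chunk; earlier chunks carry null.
            if (doc.RootElement.TryGetProperty("usage", out var u) &&
                u.ValueKind == JsonValueKind.Object)
            {
                usage = (
                    u.GetProperty("prompt_tokens").GetInt32(),
                    u.GetProperty("completion_tokens").GetInt32());
            }
        }

        return usage;
    }
}
```

With a check like this in place, the plugin can keep its existing logic for regular responses and only walk the SSE body when the content type indicates a stream.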

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
@waldekmastykarz waldekmastykarz merged commit dd64272 into dotnet:main Jul 25, 2025
4 checks passed
@waldekmastykarz waldekmastykarz deleted the fix-llm-streaming branch July 25, 2025 12:40
Labels
pr-bugfix (Fixes a bug)
Projects
None yet
Development

Successfully merging this pull request may close these issues.

[BUG]: Emitting OpenAI usage information fails for streaming requests
2 participants