
Make LanguageModelSession transcript observable while streaming responses #103

@mattt

Description


As described by @belozierov in #98 (comment):

During an LLM request, the model may perform many turns, producing intermediate outputs such as reasoning, user-visible messages, and tool calls. Ideally, these intermediate steps should be observable via LanguageModelSession.

For this to work, it would be useful to update LanguageModelSession.transcript while LanguageModel.streamResponse is running.

One possible approach would be for LanguageModel.streamResponse to return not just a stream of Content, but a stream of an enum that can represent either Content or a Transcript.Entry; Transcript.Entry values would then be appended to LanguageModelSession.transcript as they arrive during the stream.
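
To make that shape concrete, here is a rough sketch of what such an enum and its consumption might look like. The `StreamEvent` name, its case names, and the `append(_:)` call are assumptions for illustration, not existing API; `Content` and `Transcript.Entry` refer to the types already in the library:

```swift
// Sketch only: `StreamEvent` and its case names are hypothetical.
// `Content` and `Transcript.Entry` stand in for the existing types.
public enum StreamEvent: Sendable {
    /// A chunk of user-visible output, yielded as it streams in.
    case content(Content)
    /// A completed intermediate step (reasoning, tool call, message, ...)
    /// that can be appended to the session transcript immediately.
    case transcriptEntry(Transcript.Entry)
}

// Hypothetical consumption inside LanguageModelSession: forward content
// to the caller while updating the observable transcript mid-stream.
for try await event in model.streamResponse(to: prompt) {
    switch event {
    case .content(let chunk):
        continuation.yield(chunk)      // caller still sees a stream of Content
    case .transcriptEntry(let entry):
        self.transcript.append(entry)  // transcript updates while streaming
    }
}
```

With a shape like this, existing consumers that only care about Content could keep working via a filtering overload, while observers of the transcript would see reasoning, messages, and tool calls as they happen.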
