
fix: Multi-turn tools use error when using streaming output #1990


Merged
merged 2 commits into from
Jul 2, 2025

Conversation

@Soulter (Member) commented Jul 2, 2025

Fixes #1989

Check

  • 😊 My commit message follows good conventions
  • 👀 My changes have been well tested
  • 🤓 I have not introduced any new dependencies, or if I have, I added them to the appropriate place in requirements.txt / pyproject.toml
  • 😮 My changes introduce no malicious code


Summary by Sourcery

Fix streaming output for multi-turn tool calls by handling lists of tool call results in Gemini and Anthropic chat sources

Bug Fixes:

  • Handle tool_calls_result as a list when extending context messages to support multi-turn tools in streaming mode

Enhancements:

  • Remove static type hints from text_chat_stream signatures for greater flexibility
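The list-handling fix described above can be sketched as follows. This is a minimal illustration, not the actual AstrBot code: the standalone `ToolCallsResult` stub and the `extend_context` helper are hypothetical stand-ins for the project's real class and the inline logic inside `text_chat_stream`.

```python
# Sketch of the fix: accept either a single ToolCallsResult or a list of
# them when assembling the context for a streaming request.

class ToolCallsResult:
    """Stub standing in for AstrBot's ToolCallsResult."""

    def __init__(self, messages):
        self._messages = messages

    def to_openai_messages(self):
        # In AstrBot this returns the tool-call / tool-result message dicts.
        return self._messages


def extend_context(context_query, tool_calls_result):
    """Append tool-call messages, handling both a single result and a list."""
    if tool_calls_result:
        # Normalize to a list so both shapes go through the same loop.
        items = (
            tool_calls_result
            if isinstance(tool_calls_result, list)
            else [tool_calls_result]
        )
        for tcr in items:
            context_query.extend(tcr.to_openai_messages())
    return context_query
```

With this normalization, multi-turn tool use in streaming mode can pass the accumulated results from every turn rather than only the latest one.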

@Soulter Soulter requested review from Raven95676 and Copilot July 2, 2025 02:20

sourcery-ai bot commented Jul 2, 2025


Reviewer's Guide

Refactors streaming chat methods to properly support multi-turn tool calls in streaming output and cleans up formatting.

Sequence diagram for multi-turn tool calls in streaming chat

```mermaid
sequenceDiagram
    participant User as actor User
    participant ChatSource as Chat Source (Gemini/Anthropic)
    participant ToolCallsResult as ToolCallsResult
    participant LLM as LLM Model

    User->>ChatSource: text_chat_stream(..., tool_calls_result)
    alt tool_calls_result is not None
        ChatSource->>ToolCallsResult: to_openai_messages()
        ChatSource->>ChatSource: context_query.extend(...)
    end
    ChatSource->>LLM: Send context_query (with tool call messages)
    LLM-->>ChatSource: Streaming response
    ChatSource-->>User: Streamed output
```

Class diagram for ToolCallsResult handling in streaming chat

```mermaid
classDiagram
    class ToolCallsResult {
        +to_openai_messages()
    }
    class ChatSource {
        +text_chat_stream(..., tool_calls_result)
    }
    ChatSource --> ToolCallsResult : uses

    class LLMResponse
    ChatSource ..> LLMResponse : yields
```

File-Level Changes

Change: Standardized list literal and dict formatting
  • Reformatted any([...]) calls for readability
  • Wrapped appended dict literals with consistent indentation
  File: astrbot/core/provider/sources/gemini_source.py

Change: Removed explicit type annotations in stream method signature
  • Replaced typed parameters with untyped defaults for streaming args
  File: astrbot/core/provider/sources/gemini_source.py

Change: Enhanced tool_calls_result handling for multiple tool call results
  • Added isinstance check to distinguish a single result from a list of results
  • Iterated over the list to extend context_query appropriately
  Files: astrbot/core/provider/sources/gemini_source.py,
         astrbot/core/provider/sources/anthropic_source.py

Possibly linked issues


Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.

Getting Help

@sourcery-ai sourcery-ai bot left a comment


Hey @Soulter - I've reviewed your changes - here's some feedback:

  • The removal of explicit type hints in text_chat_stream reduces type clarity—consider reintroducing annotations or verifying that the new signature still aligns with all callers.
  • You’re duplicating the same tool_calls_result logic in both Gemini and Anthropic sources—extract it into a shared helper to reduce repetition.
  • There are several purely stylistic formatting tweaks (e.g. wrapping multi-line literals) that bloat the diff—consider limiting formatting changes so the functional fix stands out more clearly.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The removal of explicit type hints in `text_chat_stream` reduces type clarity—consider reintroducing annotations or verifying that the new signature still aligns with all callers.
- You’re duplicating the same `tool_calls_result` logic in both Gemini and Anthropic sources—extract it into a shared helper to reduce repetition.
- There are several purely stylistic formatting tweaks (e.g. wrapping multi-line literals) that bloat the diff—consider limiting formatting changes so the functional fix stands out more clearly.

## Individual Comments

### Comment 1
<location> `astrbot/core/provider/sources/gemini_source.py:570` </location>
<code_context>

         # tool calls result
         if tool_calls_result:
-            context_query.extend(tool_calls_result.to_openai_messages())
+            if not isinstance(tool_calls_result, list):
</code_context>

<issue_to_address>
Consider normalizing tool_calls_result to a list before looping to avoid duplicated logic.

```python
# Replace the nested isinstance branch with a single normalized loop
if tool_calls_result:
    # Normalize to list so we only have one loop
    tcr_items = (
        tool_calls_result
        if isinstance(tool_calls_result, list)
        else [tool_calls_result]
    )
    for tcr in tcr_items:
        context_query.extend(tcr.to_openai_messages())
```

This removes the duplicated `extend` logic while preserving the behavior for both single and multiple ToolCallsResult.
</issue_to_address>

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

@@ -566,7 +568,11 @@ async def text_chat_stream(

# tool calls result
if tool_calls_result:

issue (complexity): Consider normalizing tool_calls_result to a list before looping to avoid duplicated logic.

```python
# Replace the nested isinstance branch with a single normalized loop
if tool_calls_result:
    # Normalize to list so we only have one loop
    tcr_items = (
        tool_calls_result
        if isinstance(tool_calls_result, list)
        else [tool_calls_result]
    )
    for tcr in tcr_items:
        context_query.extend(tcr.to_openai_messages())
```

This removes the duplicated extend logic while preserving the behavior for both single and multiple ToolCallsResult.

@Copilot Copilot AI left a comment

Pull Request Overview

This PR fixes errors in multi-turn tool usage during streaming outputs by updating how tool call results are handled and streamlining function signatures and data formatting.

  • Allow tool_calls_result to be a list of results and handle each appropriately
  • Simplify and reformat text_chat_stream method signature and message assembly
  • Remove unused import and tidy up append formatting

Reviewed Changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.

File Description
astrbot/core/provider/sources/gemini_source.py Adjusted text_chat_stream signature, added list handling for tool_calls_result, removed unused import, and reformatted append calls
astrbot/core/provider/sources/anthropic_source.py Added list handling for tool_calls_result in the streaming chat method
Comments suppressed due to low confidence (2)

astrbot/core/provider/sources/gemini_source.py:570

  • Change the conditional to if tool_calls_result is not None: so that an empty list ([]) is handled correctly instead of being skipped.
        if tool_calls_result:

astrbot/core/provider/sources/anthropic_source.py:302

  • Use if tool_calls_result is not None: to ensure empty lists are processed and not skipped.
        if tool_calls_result:
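Copilot's point about `if tool_calls_result:` versus `if tool_calls_result is not None:` comes down to Python truthiness: an empty list is falsy, so it takes the same branch as `None` and gets skipped. A small self-contained sketch (the function names here are illustrative, not from the codebase):

```python
# An empty list is falsy in Python, so a plain truthiness check cannot
# distinguish "no results provided" (None) from "zero results" ([]).

def branch_taken(value):
    """Truthiness check, as in `if tool_calls_result:`."""
    return "process" if value else "skip"

def branch_taken_strict(value):
    """Explicit None check, as Copilot suggests."""
    return "process" if value is not None else "skip"

assert branch_taken(None) == "skip"
assert branch_taken([]) == "skip"        # empty list silently skipped
assert branch_taken([1]) == "process"

assert branch_taken_strict(None) == "skip"
assert branch_taken_strict([]) == "process"  # empty list now handled
```

In this PR the practical difference is small (extending the context with zero messages is a no-op), which may be why the truthiness check was kept, but the explicit `is not None` form states the intent more precisely.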

Comment on lines +549 to +555

```python
prompt,
session_id=None,
image_urls=None,
func_tool=None,
contexts=None,
system_prompt=None,
tool_calls_result=None,
```
Copilot AI Jul 2, 2025

[nitpick] Consider reintroducing type annotations (e.g., prompt: str, session_id: Optional[str], image_urls: Optional[list[str]], tool_calls_result: ToolCallsResult | list[ToolCallsResult]) to improve readability and type safety.

Suggested change

```diff
-prompt,
-session_id=None,
-image_urls=None,
-func_tool=None,
-contexts=None,
-system_prompt=None,
-tool_calls_result=None,
+prompt: str,
+session_id: Optional[str] = None,
+image_urls: Optional[list[str]] = None,
+func_tool: Optional[FuncCall] = None,
+contexts: Optional[list[dict]] = None,
+system_prompt: Optional[str] = None,
+tool_calls_result: ToolCallsResult | list[ToolCallsResult] = None,
```


@Soulter Soulter merged commit dd4319d into master Jul 2, 2025
3 checks passed
@Soulter Soulter deleted the fix-stream-multi-tool-use-err branch July 4, 2025 09:30