The `threatbook-cn-llm` gateway exposes MiniMax models over an OpenAI-compatible endpoint, but its streaming chunks omit the `tool_calls` field entirely (observed 2026-04 with `minimax-m2.7`: the first `ChoiceDelta` only carries `content`/`role`, `model_extra_keys=[]`). With native function calling, the model can therefore never emit a tool call, so every turn ends with `finish_reason=stop` and `tool_calls=0`. Users in IM channels just see a short text reply (e.g. a couple of IPs) and no tool execution.

Fix: add `threatbook-cn-llm` to the MiniMax text-call provider whitelist so `_should_use_text_tool_call_mode()` returns True for this provider+model pair. The runner then injects the `<minimax:tool_call>` XML instructions, and the existing text parser picks the calls up from the content stream, restoring tool execution end-to-end.

- Other models routed through the same gateway (qwen, GLM, etc.) remain on the standard OpenAI native function-calling path.
- A comment in `_should_use_text_tool_call_mode()` records the root cause for future maintainers.

Tests:

- New regression cases in `TestMiniMaxTextToolMode`:
  - `threatbook-cn-llm` + `minimax-m2.7` → XML mode enabled
  - case-insensitive provider/model id handling
  - `threatbook-cn-llm` + `qwen3.6-plus` → XML mode disabled
- `tests/session/test_runner_step.py` (48 cases) all pass.

Made-with: Cursor
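For reviewers, the whitelist check can be sketched roughly as follows. The function name `_should_use_text_tool_call_mode()` comes from this PR, but the whitelist structure and signature here are illustrative assumptions, not the actual diff:

```python
# Illustrative sketch only: the real signature and data structure may differ.
# Providers whose OpenAI-compatible streams drop `tool_calls` for MiniMax
# models; for these we fall back to text-based (XML) tool calling.
_TEXT_TOOL_CALL_PROVIDERS = {
    # threatbook-cn-llm omits tool_calls in streaming deltas (observed
    # 2026-04 with minimax-m2.7), so native function calling never fires.
    "threatbook-cn-llm": ("minimax",),
}


def _should_use_text_tool_call_mode(provider: str, model: str) -> bool:
    """Return True when tool calls must be parsed from the content stream.

    Matching is case-insensitive on both the provider and model ids, so
    "Threatbook-CN-LLM" + "MiniMax-M2.7" behaves like the lowercase forms.
    """
    prefixes = _TEXT_TOOL_CALL_PROVIDERS.get(provider.lower())
    if prefixes is None:
        return False
    model_lower = model.lower()
    return any(model_lower.startswith(p) for p in prefixes)
```

This shape naturally covers the three regression cases listed above: the MiniMax model on the gateway enables XML mode, mixed-case ids behave identically, and `qwen3.6-plus` on the same gateway stays on the native path.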
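Once XML mode is active, the existing text parser recovers calls from the accumulated content. A minimal sketch of that step, assuming a JSON payload inside each `<minimax:tool_call>` block (the tag name is from this PR; the payload format and function name are assumptions):

```python
import json
import re

# Matches each <minimax:tool_call>…</minimax:tool_call> block in the
# model's text output, including multi-line JSON bodies.
_TOOL_CALL_RE = re.compile(
    r"<minimax:tool_call>\s*(?P<body>.*?)\s*</minimax:tool_call>",
    re.DOTALL,
)


def extract_text_tool_calls(content: str) -> list[dict]:
    """Return parsed tool-call payloads found in the model's text output."""
    calls = []
    for match in _TOOL_CALL_RE.finditer(content):
        try:
            calls.append(json.loads(match.group("body")))
        except json.JSONDecodeError:
            # Skip malformed blocks rather than failing the whole turn.
            continue
    return calls
```

Because the calls travel in the `content` stream, this path works even when the gateway strips the `tool_calls` field from every `ChoiceDelta`.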
xiami762 approved these changes on Apr 24, 2026.