fix: prioritize tool_calls over text when available_functions is None#4879

Closed
education-01 wants to merge 1 commit into crewAIInc:main from education-01:fix-tool-call-return-order-4788

Conversation


education-01 commented Mar 14, 2026

Fixes #4788

Problem

When the LLM returns both text and tool_calls in the same response, and the executor passes available_functions=None to get_llm_response, the current logic returns the text and the native tool calls are discarded.

Root Cause

In llm.py, the branch order was:

  1. If (no tool_calls OR no available_functions) AND text_response → return text
  2. If tool_calls AND no available_functions → return tool_calls

When both text and tool_calls exist with available_functions=None, branch 1 fires first, and branch 2 is never reached.
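The buggy control flow can be sketched like this (a simplified, standalone sketch with a hypothetical helper name, not the actual code in llm.py):

```python
def buggy_return_order(text_response, tool_calls, available_functions):
    # Branch 1 fires whenever text exists and available_functions is falsy,
    # even if tool_calls are also present -- this is the bug.
    if (not tool_calls or not available_functions) and text_response:
        return text_response
    # Branch 2 is unreachable when both text and tool_calls exist
    # with available_functions=None.
    if tool_calls and not available_functions:
        return tool_calls
    return None

# With both text and tool_calls present, the text wins and the
# native tool calls are silently dropped.
result = buggy_return_order("answer text", [{"name": "lookup"}], None)
```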

Fix

Reorder to prioritize tool_calls:

  1. If tool_calls AND no available_functions → return tool_calls
  2. If (no tool_calls OR no available_functions) AND text_response → return text
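With the branches swapped, the same sketch (again with a hypothetical helper name) returns the tool calls first:

```python
def fixed_return_order(text_response, tool_calls, available_functions):
    # Branch 1 (moved up): return raw tool_calls so the caller
    # (e.g., the executor) can handle tool execution itself.
    if tool_calls and not available_functions:
        return tool_calls
    # Branch 2: fall back to the text response.
    if (not tool_calls or not available_functions) and text_response:
        return text_response
    return None
```

When only text is present, or when available_functions is supplied, behavior is unchanged; only the both-present, no-functions case is affected.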

Changes

  • lib/crewai/src/crewai/llm.py: Reorder return logic in both sync and async paths
  • tests/test_llm.py: Add regression tests for this exact scenario

Testing

Added two unit tests:

  • test_non_streaming_returns_tool_calls_when_text_and_tool_calls_exist_without_available_functions
  • test_async_non_streaming_returns_tool_calls_when_text_and_tool_calls_exist_without_available_functions

Both verify that tool_calls are returned when available_functions=None, even when text content is present.
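A simplified version of what these tests assert (a standalone sketch with a stand-in dispatch function and a fake litellm-style message; the real tests mock litellm.completion/litellm.acompletion and go through LLM.call/LLM.acall):

```python
from types import SimpleNamespace

def prefer_tool_calls(message, available_functions=None):
    """Stand-in for the fixed return logic; not the actual crewAI code."""
    tool_calls = getattr(message, "tool_calls", None)
    if tool_calls and not available_functions:
        return tool_calls
    return getattr(message, "content", None)

# Fake litellm-style message carrying both text content and a native
# tool call, similar to what the mocked completion would return.
message = SimpleNamespace(
    content="I'll call the tool now.",
    tool_calls=[SimpleNamespace(function=SimpleNamespace(name="search", arguments="{}"))],
)

assert prefer_tool_calls(message, available_functions=None) is message.tool_calls
```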


Note

Low Risk
Low-risk logic reordering in the LiteLLM non-streaming sync/async paths; behavior changes only when a response includes both content and tool_calls and available_functions is None or empty.

Overview
Fixes LiteLLM non-streaming response handling so that when an LLM returns both content and tool_calls and available_functions is not provided, the LLM returns the raw tool_calls instead of discarding them by returning text.

Adds sync and async regression tests (mocking litellm.completion/litellm.acompletion) to assert LLM.call/LLM.acall prefer tool_calls over text in this scenario.

Written by Cursor Bugbot for commit 684de7e.


cursor Bot left a comment


Cursor Bugbot has reviewed your changes and found 1 potential issue.


  # --- 5) If there are tool calls but no available functions, return the tool calls
  # This allows the caller (e.g., the executor) to handle tool execution
  if tool_calls and not available_functions:
      return tool_calls

Missing LLMCallCompletedEvent when returning tool_calls with text

Low Severity

When both tool_calls and text_response are present with available_functions=None, the new early-return path (return tool_calls) skips emitting LLMCallCompletedEvent. Previously this scenario took the text-response branch, which did emit the event. Since LLMCallStartedEvent is always emitted by the caller, this leaves an unpaired start event. The event listener relies on LLMCallCompletedEvent to call handle_llm_stream_completed(), which resets formatter state such as _is_streaming and stops the live display, so skipping it could leave the console formatter in an inconsistent state.
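One way to keep the events paired (a standalone sketch of the idea; the list stands in for crewAI's event bus and the names are illustrative, not the real API) is to emit the completion event before the early return:

```python
# Toy event sink standing in for the real event bus.
emitted = []

def handle_response(text_response, tool_calls, available_functions):
    if tool_calls and not available_functions:
        # Emit the completion event before the early return so the
        # earlier LLMCallStartedEvent is always paired.
        emitted.append("LLMCallCompletedEvent")
        return tool_calls
    if text_response:
        emitted.append("LLMCallCompletedEvent")
        return text_response
    return None

result = handle_response("some text", [{"name": "f"}], available_functions=None)
```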

Additional Locations (1)



Development

Successfully merging this pull request may close these issues.

[BUG] Native tool calls are discarded if LLM returns a text response
