refactor: implement _loop_response_model to manage response_model during tool loops #5613
Open
lorenzejay wants to merge 3 commits into main from
Conversation
…ing tool loops

This change introduces the _loop_response_model method in BaseAgentExecutor to conditionally return the response_model based on the presence of tools. The CrewAgentExecutor and AgentExecutor classes have been updated to use this new method, ensuring that the response_model is not sent to the LLM during tool loops, which prevents issues with structured-output schemas. Tests have been added to verify the behavior.
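The conditional the description refers to can be sketched as follows. This is a minimal illustrative stand-in, not crewAI's actual code: the real BaseAgentExecutor has many more responsibilities, and its attribute names and signatures may differ.

```python
from typing import Any, Optional


class BaseAgentExecutor:
    """Hypothetical, reduced stand-in for crewAI's executor base class."""

    def __init__(
        self,
        tools: Optional[list[Any]] = None,
        response_model: Optional[type] = None,
    ) -> None:
        self.tools = tools or []
        self.response_model = response_model

    def _loop_response_model(self) -> Optional[type]:
        # When tools are present, suppress the structured-output schema so
        # the provider issues tool calls instead of returning placeholder
        # JSON constrained by the schema.
        if self.tools:
            return None
        return self.response_model
```

Call sites inside the tool loop would then pass response_model=self._loop_response_model() to the LLM call rather than forwarding self.response_model unconditionally.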
Note
Medium Risk
Changes the parameters sent to LLM providers during tool-execution loops, which can affect structured-output behavior across multiple providers and execution paths. Test cassette updates reduce risk but regressions could still alter agent outputs or tool-calling flows.
Overview

- Prevents response_model from being forwarded to the LLM when an executor has tools, via a new BaseAgentExecutor._loop_response_model() helper, to avoid providers returning schema-constrained placeholder JSON instead of making tool calls.
- Updates all in-loop LLM call sites in CrewAgentExecutor (sync/async, ReAct and native tools) and experimental.AgentExecutor to route response_model through this helper, and adds regression tests enforcing the behavior.
- Extends VCR header filtering (e.g., x-amz-security-token, x-crewai-organization-id) and refreshes Anthropic/Bedrock structured-output-with-tools cassettes and test models to match the new request/response flow.

Reviewed by Cursor Bugbot for commit e77d1c0.
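A regression test along the lines the review describes might look like this. It is a pytest-style sketch against a hypothetical stub executor, not the PR's actual tests, which assert against recorded (VCR cassette) LLM requests:

```python
from typing import Any, Optional


class StubExecutor:
    """Hypothetical stand-in for BaseAgentExecutor, reduced to the
    helper under test."""

    def __init__(
        self,
        tools: Optional[list[Any]] = None,
        response_model: Optional[type] = None,
    ) -> None:
        self.tools = tools or []
        self.response_model = response_model

    def _loop_response_model(self) -> Optional[type]:
        return None if self.tools else self.response_model


def test_response_model_suppressed_during_tool_loop():
    # With tools configured, the loop must not forward a response_model.
    executor = StubExecutor(tools=[object()], response_model=dict)
    assert executor._loop_response_model() is None


def test_response_model_forwarded_without_tools():
    # Without tools, structured output behaves as before.
    executor = StubExecutor(response_model=dict)
    assert executor._loop_response_model() is dict
```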