Bug description
In Spring AI 1.0.2 (see #3915), when the LLM requests multiple tool calls at once and any one of them throws an exception (e.g. response too large, or a timeout), the executeToolCalls method does not handle the failure on a per-tool basis: the exception propagates, and all tool responses, including those from tools that completed successfully, are discarded.
Environment
Spring AI version: 1.0.2
Java version: 21
Additional Context
Expected behavior
Each tool call should be executed and handled independently. Even if one or more tool calls fail, the results of successfully completed tool calls should be returned to the LLM for further processing.
Related code

```java
public ToolExecutionResult executeToolCalls(Prompt prompt, ChatResponse chatResponse) {
    Assert.notNull(prompt, "prompt cannot be null");
    Assert.notNull(chatResponse, "chatResponse cannot be null");
    Optional<Generation> toolCallGeneration = chatResponse.getResults().stream()
        .filter((g) -> !CollectionUtils.isEmpty(g.getOutput().getToolCalls()))
        .findFirst();
    if (toolCallGeneration.isEmpty()) {
        throw new IllegalStateException("No tool call requested by the chat model");
    }
    else {
        AssistantMessage assistantMessage = ((Generation) toolCallGeneration.get()).getOutput();
        ToolContext toolContext = buildToolContext(prompt, assistantMessage);
        // A single failing tool inside executeToolCall propagates out of this method,
        // discarding the responses of the tools that succeeded.
        InternalToolExecutionResult internalToolExecutionResult =
            this.executeToolCall(prompt, assistantMessage, toolContext);
        List<Message> conversationHistory = this.buildConversationHistoryAfterToolExecution(
            prompt.getInstructions(), assistantMessage,
            internalToolExecutionResult.toolResponseMessage());
        return ToolExecutionResult.builder()
            .conversationHistory(conversationHistory)
            .returnDirect(internalToolExecutionResult.returnDirect())
            .build();
    }
}
```
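The expected per-tool behavior can be sketched in plain Java. This is a minimal illustration, not the Spring AI API: the names `ToolResponse` and `executeAll` are hypothetical. Each tool call gets its own try/catch, so a failing tool contributes an error response while the other results are preserved and can still be returned to the LLM:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Supplier;

// Sketch only: ToolResponse and executeAll are illustrative names,
// not part of Spring AI.
public class PerToolExecution {

    record ToolResponse(String toolName, String result, boolean failed) {}

    static List<ToolResponse> executeAll(Map<String, Supplier<String>> toolCalls) {
        List<ToolResponse> responses = new ArrayList<>();
        for (Map.Entry<String, Supplier<String>> entry : toolCalls.entrySet()) {
            try {
                // A successful tool contributes its result as usual.
                responses.add(new ToolResponse(entry.getKey(), entry.getValue().get(), false));
            }
            catch (RuntimeException ex) {
                // A failing tool yields an error message the LLM can reason about,
                // instead of aborting the whole batch.
                responses.add(new ToolResponse(entry.getKey(), "Error: " + ex.getMessage(), true));
            }
        }
        return responses;
    }

    public static void main(String[] args) {
        Map<String, Supplier<String>> calls = new LinkedHashMap<>();
        calls.put("getWeather", () -> "sunny, 22C");
        calls.put("search", () -> { throw new RuntimeException("response too large"); });

        for (ToolResponse r : executeAll(calls)) {
            System.out.println(r.toolName() + " -> " + r.result());
        }
        // prints:
        // getWeather -> sunny, 22C
        // search -> Error: response too large
    }
}
```

With this structure, the "search" failure is recorded alongside the successful "getWeather" result rather than discarding it.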