core[patch]: check "tool_call_id" existence to prevent KeyError (#27249) #30244

Closed
lumiaspic wants to merge 1 commit into langchain-ai:master from lumiaspic:dorian-bucaille/fix-tool-call-id-key-error
Conversation

lumiaspic commented Mar 12, 2025

Description: Fixes a bug where a KeyError is thrown when tool_call_id is missing from the input values, by calling values.get("tool_call_id") instead of values["tool_call_id"]. The error occurs when running an agent through a RemoteRunnable from langserve, as in the langserve agent-with-history implementation example.

Issue: Fixes #27249

Dependencies: none

Fixes a bug where a KeyError is thrown when "tool_call_id" is not found.

See details in #27249.
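The one-line change can be sketched in isolation (the `validate_tool_call_id_*` helper names below are hypothetical stand-ins, not the actual langchain-core code): direct indexing raises an opaque KeyError when the key is absent, while `.get()` lets the validator surface the descriptive "ToolMessage must have a tool_call_id" error seen later in this thread.

```python
def validate_tool_call_id_old(values: dict) -> str:
    # Before the patch: direct indexing raises a bare KeyError
    # when "tool_call_id" is missing from the values dict.
    return values["tool_call_id"]


def validate_tool_call_id_new(values: dict) -> str:
    # After the patch: .get() returns None for a missing key, so the
    # validator can raise a readable ValueError instead of a KeyError.
    tool_call_id = values.get("tool_call_id")
    if tool_call_id is None:
        raise ValueError(f"ToolMessage must have a tool_call_id. Received: {values}")
    return tool_call_id


# A values dict without tool_call_id, as in the bug report:
values = {"content": "{...}", "type": "tool"}

try:
    validate_tool_call_id_old(values)
except KeyError as e:
    print("old behavior:", type(e).__name__)  # old behavior: KeyError

try:
    validate_tool_call_id_new(values)
except ValueError as e:
    print("new behavior:", type(e).__name__)  # new behavior: ValueError
```

Either way validation still fails for a missing id; the patch only changes which exception is raised and how informative its message is.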
eyurtsev (Collaborator) left a comment

@dorian-bucaille

The change does not look correct as tool_call_id is a required field.

It's likely that you're using an older chat model and need to update both langchain-core and the chat model to get tool call ids to appear.

If you take a look at the schema, tool_call_id is a required field.

lumiaspic (Author) commented

Hi @eyurtsev,

Thanks for your comment. I understand tool_call_id being a required field, and I appreciate your suggestion to update both langchain-core and the chat model to get tool_call_id to appear. I've updated my dependencies to the latest versions of langchain-core and langchain-openai this morning, and I still got the error.

I'm using the langserve agent implementation example with history. I use vLLM to run my models. The problem occurs when the agent tries to formulate a response after calling a tool. If I modify my local version of libs/core/langchain_core/messages/tool.py with the changes from my PR, the error is not blocking, and the agent can continue to stream its response after calling the tool.

Here's some detail about my code:

Server-side (langserve):

from fastapi import FastAPI
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.runnables import RunnableLambda
from langchain_openai import ChatOpenAI
from langserve import add_routes

llm_from_config = ChatOpenAI(
    openai_api_base=llm_config.get("openai_api_base"),
    openai_api_key=llm_config.get("openai_api_key"),
    model=llm_config.get("model"),
    temperature=llm_config.get("temperature"),
    streaming=True,
    default_headers={
        "Authorization": "Bearer " + llm_config.get("openai_api_key")
    },
)

llm_with_tools = llm_from_config.bind_tools([tool for tool in known_tools])
agent = create_tool_calling_agent(llm_with_tools, known_tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=known_tools, verbose=True)

app = FastAPI(
    title=config.title,
    version=config.version,
    description=config.description,
)

add_routes(
    app,
    RunnableLambda(_create_agent_executor).with_types(
        input_type=Input, output_type=Output
    ),
)

Client-side:

try:
    async for event in remote_runnable.astream_events(
        {
            "input": prompt,
            "chat_history": chat_history,
            "llm_config": llm_config,
        },
        version="v1",
    ):
        kind = event["event"]
        if kind == "on_chain_start":
            # Agent start event
            if event["name"] == "agent":
                [...]
        elif kind == "on_chain_end":
            [...]
        elif kind == "on_chat_model_stream":
            [...]
        elif kind == "on_tool_start":
            [...]
        elif kind == "on_tool_end":
            [...]

I've added client-side custom logs to remote_runnable.astream_events and libs/core/langchain_core/messages/tool.py to better understand the issue.

The error message indicates that ToolMessage must have a tool_call_id, but the received values dictionary does not contain this field. Maybe it has to do with the on_prompt_start event? I've included an example below:

{'event': 'on_prompt_start', 'data': {'input': {'input': 'Quelle heure est-il à Moscou ?', 'tools': None, 'llm_config': {'openai_api_base': 'https://iaka-api-dev-2.apps.ocp4.on-prem.innershift.ssghosting.net/llama-3-3-70b/v1', 'openai_api_key': 'ZJUWDkkPOzAPuBfn', 'model': 'llama-3-3-70b', 'temperature': 0.5}, 'chat_history': [], 'intermediate_steps': [[AgentActionMessageLog(tool='get_current_time', tool_input={'timezone': 'Europe/Moscow', 'run_manager': None, 'config': None}, log="\nInvoking: `get_current_time` with `{'timezone': 'Europe/Moscow'}`\n\n\n", message_log=[AIMessageChunk(content='', additional_kwargs={'tool_calls': [{'index': 0, 'id': 'chatcmpl-tool-1f60ee030e91450dabb97a444dec205f', 'function': {'arguments': '{"timezone": "Europe/Moscow"}', 'name': 'get_current_time'}, 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'model_name': 'llama-3-3-70b'}, id='run-eeaf8390-687b-466d-b0a2-a10ee953f71d', tool_calls=[{'name': 'get_current_time', 'args': {'timezone': 'Europe/Moscow'}, 'id': 'chatcmpl-tool-1f60ee030e91450dabb97a444dec205f', 'type': 'tool_call'}], tool_call_chunks=[{'name': 'get_current_time', 'args': '{"timezone": "Europe/Moscow"}', 'id': 'chatcmpl-tool-1f60ee030e91450dabb97a444dec205f', 'index': 0, 'type': 'tool_call_chunk'}])]), '{\n  "timezone": "Europe/Moscow",\n  "datetime": "2025-03-13T14:19:08+03:00",\n  "is_dst": false\n}']], 'agent_scratchpad': [AIMessageChunk(content='', additional_kwargs={'tool_calls': [{'index': 0, 'id': 'chatcmpl-tool-1f60ee030e91450dabb97a444dec205f', 'function': {'arguments': '{"timezone": "Europe/Moscow"}', 'name': 'get_current_time'}, 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls', 'model_name': 'llama-3-3-70b'}, id='run-eeaf8390-687b-466d-b0a2-a10ee953f71d', tool_calls=[{'name': 'get_current_time', 'args': {'timezone': 'Europe/Moscow'}, 'id': 'chatcmpl-tool-1f60ee030e91450dabb97a444dec205f', 'type': 'tool_call'}], tool_call_chunks=[{'name': 'get_current_time', 
'args': '{"timezone": "Europe/Moscow"}', 'id': 'chatcmpl-tool-1f60ee030e91450dabb97a444dec205f', 'index': 0, 'type': 'tool_call_chunk'}]), ToolMessage(content='{\n  "timezone": "Europe/Moscow",\n  "datetime": "2025-03-13T14:19:08+03:00",\n  "is_dst": false\n}', tool_call_id='chatcmpl-tool-1f60ee030e91450dabb97a444dec205f')]}}, 'name': 'ChatPromptTemplate', 'tags': ['seq:step:2'], 'run_id': '079e2a3e-33fa-47e2-b28a-92d2e81af890', 'metadata': {}, 'parent_ids': ['2ffef733-5f39-4bc8-a695-450f16f3a62f', 'a0c872b3-3f24-40df-a495-175552529b34', '59649ecb-7760-4935-ac23-2944723b9fa5']}
ToolMessage must have a tool_call_id. Received:

values={'content': '{\n  "timezone": "Europe/Moscow",\n  "datetime": "2025-03-13T14:19:08+03:00",\n  "is_dst": false\n}', 'additional_kwargs': {'name': 'get_current_time'}, 'response_metadata': {}, 'type': 'tool', 'name': None, 'id': None}
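A minimal stand-in for what the log above suggests (plain Python, no langchain imports; `lossy_serialize` is a hypothetical helper mimicking the suspected upstream bug): the server-side ToolMessage clearly has a tool_call_id, but the values dict reconstructed on the client does not, so any required-field check must fail there regardless of whether it indexes with `[...]` or `.get(...)`.

```python
import json

# Server-side message as seen in the on_prompt_start event: the id is present.
server_side = {
    "type": "tool",
    "content": '{"timezone": "Europe/Moscow", ...}',
    "tool_call_id": "chatcmpl-tool-1f60ee030e91450dabb97a444dec205f",
}


def lossy_serialize(msg: dict) -> str:
    # Hypothetical stand-in for the suspected bug: tool_call_id is dropped
    # somewhere during serialization over langserve, before validation runs.
    slim = {k: v for k, v in msg.items() if k != "tool_call_id"}
    return json.dumps(slim)


# Client-side reconstruction: the field is gone, matching the error's
# "Received: values={...}" dump, which has no tool_call_id key.
client_side = json.loads(lossy_serialize(server_side))
assert "tool_call_id" not in client_side
```

If this reading is right, it supports the maintainer's point below: validation is only where the loss becomes visible, not where it happens.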

If there's any additional information or context you need from me, please let me know. I'm happy to provide more details or clarify any concerns you may have.

Thank you for your time and consideration!

eyurtsev (Collaborator) commented

Hi @dorian-bucaille,

> If I modify my local version of libs/core/langchain_core/messages/tool.py with the changes from my PR, the error is not blocking, and the agent can continue to stream its response after calling the tool.

Yes, it makes sense that it resolves the issue in your case, but it fixes the problem in the wrong place: by the time execution reaches this point in the code, the tool call id is required, and we don't want to relax that requirement.


Could you include a minimal reproducible example (i.e., with all imports)? I should be able to run the example after copying and pasting it. Otherwise it's difficult for me to determine where the issue is.


I'd also recommend swapping to langgraph. We primarily recommend langgraph for orchestration these days.

https://langchain-ai.github.io/langgraph/how-tos/create-react-agent/#setup

Tutorial is here:

https://langchain-ai.github.io/langgraph/tutorials/introduction/

eyurtsev (Collaborator) left a comment

We don't want to merge a fix in this part of the code. The underlying issue is happening somewhere else in the code.

eyurtsev (Collaborator) commented

Closing this PR for now, since the fix is in the wrong place in the code.

lumiaspic (Author) commented

Thank you for your answers. I'll try to provide more detail on the issue later on.


Labels

bug Related to a bug, vulnerability, unexpected error with an existing feature

Projects

None yet

Development

Successfully merging this pull request may close these issues.

KeyError: 'tool_call_id' when there is no Tool message in given chat history (Only System Message)

2 participants