
Bug: AgentExecutor doesn't use its local callbacks during planning #22703

Open
5 tasks done
jamesbraza opened this issue Jun 8, 2024 · 1 comment
Labels
Ɑ: agent Related to agents module 🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature 🔌: openai Primarily related to OpenAI integrations

Comments

@jamesbraza
Contributor

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

from langchain.agents import AgentExecutor
from langchain.agents.openai_functions_agent.base import OpenAIFunctionsAgent
from langchain_community.callbacks import OpenAICallbackHandler
from langchain_community.tools import SleepTool
from langchain_core.runnables import RunnableConfig
from langchain_openai import ChatOpenAI

# We should incur some OpenAI costs here from agent planning
cost_callback = OpenAICallbackHandler()
tools = [SleepTool()]
agent_instance = AgentExecutor.from_agent_and_tools(
    tools=tools,
    agent=OpenAIFunctionsAgent.from_llm_and_tools(
        ChatOpenAI(model="gpt-4", request_timeout=15.0), tools  # type: ignore[call-arg]
    ),
    return_intermediate_steps=True,
    max_execution_time=10,
    callbacks=[cost_callback],  # "Local" callbacks
)

# NOTE: I am intentionally not passing the callback via invoke's config, as that
# would make cost_callback be considered "inheritable" (which I don't want)
outputs = agent_instance.invoke(
    input={"input": "Sleep a few times for 100-ms."},
    # config=RunnableConfig(callbacks=[cost_callback]),  # "Inheritable" callbacks
)
assert len(outputs["intermediate_steps"]) > 0, "Agent should have slept a bit"
assert cost_callback.total_cost > 0, "Agent planning should have been accounted for"  # Fails

Error Message and Stack Trace (if applicable)

Traceback (most recent call last):
File "/Users/user/code/repo/app/agents/a.py", line 28, in
assert cost_callback.total_cost > 0, "Agent planning should have been accounted for"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: Agent planning should have been accounted for

Description

LangChain has a useful concept of "inheritable" callbacks vs "local" callbacks, both managed by CallbackManager (source references 1 and 2); see the sketch after this list.

  • Inheritable callback: automagically reused by nested invoke calls
  • Local callback: not reused by nested invoke calls

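For illustration, here is a minimal sketch of how the two kinds get merged, based on my reading of CallbackManager.configure (the handler choice is arbitrary; treat this as illustrative rather than authoritative):

from langchain_core.callbacks import CallbackManager, StdOutCallbackHandler

local_handler = StdOutCallbackHandler()
inheritable_handler = StdOutCallbackHandler()

# configure() is how chains/agents combine invocation-time ("inheritable")
# callbacks with the callbacks attached to the object itself ("local")
manager = CallbackManager.configure(
    inheritable_callbacks=[inheritable_handler],  # e.g. from invoke(config=...)
    local_callbacks=[local_handler],  # e.g. from AgentExecutor(callbacks=...)
)

# Both handlers fire for this run's own events...
assert local_handler in manager.handlers
assert inheritable_handler in manager.handlers
# ...but only the inheritable one is marked for reuse by nested runs
assert inheritable_handler in manager.inheritable_handlers
assert local_handler not in manager.inheritable_handlers
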
Yesterday I discovered AgentExecutor does not use local callbacks for its planning step. I consider this a bug, as planning (e.g. BaseSingleActionAgent.plan) is a core behavior of AgentExecutor.

The fix would be to have AgentExecutor's local callbacks fire during planning as well.
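
Concretely, I would expect the child callback manager that AgentExecutor hands to the planning call (run_manager.get_child(), per my reading of langchain.agents.agent) to also pick up the executor's local handlers. A hypothetical helper to show the idea (add_local_handlers is a made-up name, not a proposed API):

from langchain_core.callbacks import BaseCallbackHandler, BaseCallbackManager, Callbacks


def add_local_handlers(child: BaseCallbackManager, local: Callbacks) -> BaseCallbackManager:
    """Attach an executor's local handlers to the planning run's callback manager,
    without marking them inheritable for deeper nested runs."""
    if isinstance(local, BaseCallbackManager):
        handlers: list[BaseCallbackHandler] = local.handlers
    else:
        handlers = local or []  # AgentExecutor.callbacks can be a plain handler list, as in the repro
    for handler in handlers:
        child.add_handler(handler, inherit=False)
    return child

With something like this applied before self._action_agent.plan(...) is called, the repro above should pass without touching invoke's config.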

System Info

langchain==0.2.3
langchain-community==0.2.4
langchain-core==0.2.5
langchain-openai==0.1.8

@dosubot dosubot bot added Ɑ: agent Related to agents module 🔌: openai Primarily related to OpenAI integrations 🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature labels Jun 8, 2024
@jamesbraza
Contributor Author

jamesbraza commented Jun 8, 2024

Here is a workaround:

from typing import cast
from unittest.mock import patch

from langchain.agents.openai_functions_agent.base import OpenAIFunctionsAgent
from langchain_core.agents import AgentAction, AgentFinish
from langchain_core.callbacks import BaseCallbackHandler, BaseCallbackManager, Callbacks

orig_plan = OpenAIFunctionsAgent.plan

...

def plan_with_injected_callbacks(
    self,
    intermediate_steps: list[tuple[AgentAction, str]],
    callbacks: Callbacks = None,
    **kwargs
) -> AgentAction | AgentFinish:
    if self == agent_instance.agent:
        # Work around https://github.com/langchain-ai/langchain/issues/22703
        for callback in cast(list[BaseCallbackHandler], agent_instance.callbacks):
            cast(BaseCallbackManager, callbacks).add_handler(callback, inherit=False)
    return orig_plan(self, intermediate_steps, callbacks, **kwargs)


# NOTE: I am intentionally not passing the callback via invoke's config, as that
# would make cost_callback be considered "inheritable" (which I don't want)
with patch.object(OpenAIFunctionsAgent, "plan", plan_with_injected_callbacks):
    outputs = agent_instance.invoke(
        input={"input": "Sleep a few times for 100-ms."},
        # config=RunnableConfig(callbacks=[cost_callback]),  # "Inheritable" callbacks
    )
