core[patch]: fix no current event loop for sql history in async mode #22933
Conversation
@pprados Please help to review this PR.
(force-pushed from 1be040f to 657e73b)
The issue you linked to is closed. Can you share the code you are running to reproduce this error?
@hwchase17 Here is a sample langserve code to reproduce this error.

Requirements

```
langchain-openai==0.1.8
```

LangServe server side code

```python
from typing import AsyncIterator, Any

from fastapi import APIRouter, FastAPI
from langchain_core.chat_history import BaseChatMessageHistory
from langchain_community.chat_message_histories.sql import SQLChatMessageHistory
from langchain_core.runnables import Runnable
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_core.runnables.utils import ConfigurableFieldSpec
from langchain_core.tools import Tool
from langchain_openai import AzureChatOpenAI
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain.prompts import (
    ChatPromptTemplate, MessagesPlaceholder, HumanMessagePromptTemplate,
    SystemMessagePromptTemplate
)
from langchain.schema.runnable import RunnableLambda
from langserve import add_routes
from langserve.pydantic_v1 import BaseModel, Field
from sqlalchemy.ext.asyncio import create_async_engine

# Config
sql_server_url = "mysql+asyncmy://dbuser:dbpass@localhost:3306/history"
azure_endpoint = "fixme"
azure_deployment = "fixme"
openai_api_version = "fixme"
openai_api_key = "fixme"


# Model
class LLMInput(BaseModel):
    question: str = Field(
        ...,
        description="user question",
    )
    system_prompt: str = Field(
        default="You are a helpful AI assistant",
        description="system prompt",
    )


# History Stuff
async_sql_engine = create_async_engine(sql_server_url, pool_recycle=600)

history_factory_config = [
    ConfigurableFieldSpec(
        id="user_id",
        annotation=str,
        name="User ID",
        description="Unique identifier for the user.",
        default="",
        is_shared=True,
    ),
    ConfigurableFieldSpec(
        id="conversation_id",
        annotation=str,
        name="Conversation ID",
        description="Unique identifier for the conversation.",
        default="",
        is_shared=True,
    ),
]


def get_session_history(
    user_id: str,
    conversation_id: str,
) -> BaseChatMessageHistory:
    """Get a chat history from a session ID."""
    session_id = f"{user_id}_{conversation_id}"
    return SQLChatMessageHistory(
        session_id=session_id,
        connection=async_sql_engine,
    )


# Tool Stuff
def search(query: str, **kwargs) -> str:
    return f"{query} found"


class SearchToolInput(BaseModel):
    """Input for search tool."""
    query: str = Field(description="keyword for search")


search_tool = Tool(
    name="web_search",
    description="web_search is a tool to search online information.",
    func=search,
    args_schema=SearchToolInput,
)


def build_llm_chain(llm_input: LLMInput) -> Runnable:
    llm = AzureChatOpenAI(
        azure_endpoint=azure_endpoint,
        azure_deployment=azure_deployment,
        openai_api_version=openai_api_version,
        openai_api_key=openai_api_key,
        streaming=True,
        temperature=0.01,
        max_tokens=4000,
    )
    tools = [search_tool]
    prompt = ChatPromptTemplate.from_messages([
        SystemMessagePromptTemplate.from_template(llm_input.system_prompt),
        MessagesPlaceholder(variable_name="history"),
        HumanMessagePromptTemplate.from_template("{question}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ])
    agent = create_tool_calling_agent(
        llm, tools, prompt,
    )
    return AgentExecutor(
        agent=agent,
        tools=tools,
    )


def build_chain(llm_input: LLMInput) -> Runnable:
    llm_chain = build_llm_chain(llm_input)
    return RunnableWithMessageHistory(
        llm_chain,
        get_session_history,
        input_messages_key="question",
        history_messages_key="history",
        history_factory_config=history_factory_config
    ).with_types(
        input_type=LLMInput
    )


async def llm_main(_input: dict[str, Any]) -> AsyncIterator[Any]:
    _input["question"] = _input["question"].lstrip("\n").rstrip()
    _input["system_prompt"] = _input["system_prompt"] or "You are a helpful AI assistant"
    llm_input: LLMInput = LLMInput.parse_obj(_input)
    chain = build_chain(llm_input)
    config = {
        "configurable": {
            "user_id": "user1",
            "conversation_id": "conv1",
        }
    }
    async for event in chain.astream_events(_input, config, version="v1"):
        yield event


router = APIRouter()
add_routes(
    router,
    RunnableLambda(llm_main).with_types(input_type=LLMInput),
    enabled_endpoints=["invoke", "batch", "stream"]
)

app = FastAPI(
    title="Test Server",
    version="0.01",
    description="Test Server Powered by LangChain&LLM",
)
app.include_router(router)
```

Save it as `async_agent.py`, then start the server with `uvicorn async_agent:app --host 0.0.0.0`.

LangServe client side code

```python
from langserve import RemoteRunnable

client = RemoteRunnable('http://localhost:8000/')
for chunk in client.stream({'question': "What's langchain"}):
    print(chunk)
```

Save it as `agent_client.py`, then run `python agent_client.py`.

Result: on the client side you will see many event outputs, and the final answer to the question is received, while on the server side the error is thrown.
(force-pushed from 657e73b to 4db110a)
@hwchase17, @mackong With this PR, it's possible to remove the hack I mentioned here:

```python
def add_messages(self, messages: Sequence[BaseMessage]) -> None:
    # The method RunnableWithMessageHistory._exit_history() calls the
    # add_message method by mistake, instead of aadd_message.
    # See https://github.com/langchain-ai/langchain/issues/22021
    if self.async_mode:
        loop = asyncio.get_event_loop()
        loop.run_until_complete(self.aadd_messages(messages))
    else:
        ...
```

You can remove this hack.
(force-pushed from eeee215 to b1c2f85)
OK, removed.
@mackong, some checks were not successful.
(force-pushed from b1c2f85 to d5ee9c1)
(force-pushed from d5ee9c1 to 1c68ae8)
@pprados already passed.
(force-pushed from 1c68ae8 to a9261a8)
Looks overall reasonable to me. It'll generate a trace that's a bit uglier than before, but I think it's more important to fix the async path.
```diff
@@ -483,6 +499,23 @@ def _exit_history(self, run: Run, config: RunnableConfig) -> None:
         output_messages = self._get_output_messages(output_val)
         hist.add_messages(input_messages + output_messages)

+    async def _aexit_history(self, run: Run, config: RunnableConfig) -> None:
```
@mackong would you mind adding unit tests to cover the async path?
@eyurtsev unit tests added, but one test fails due to an unrelated issue; see https://github.com/langchain-ai/langchain/actions/runs/9594957636/job/26458682186?pr=22933#step:6:164

Currently `AsyncRootListenersTracer`'s schema format is `original`, so `on_chat_model_start` falls back to `on_llm_start`; the type of the Run's input is then `str`, not `BaseMessage`, so it is rejected by `ChatMessageHistory`'s `add_message`:

langchain/libs/core/tests/unit_tests/fake/memory.py, lines 20 to 22 in ad7f2ec:

```python
if not isinstance(message, BaseMessage):
    raise ValueError
self.messages.append(message)
```
I have created PR #23214, which fixes the issue; please review #23214 first.
@eyurtsev the added unit tests now pass.
(force-pushed from a9261a8 to 71bd4da)
(force-pushed from 71bd4da to c9d101a)
(force-pushed from c9d101a to 9daee92)
…angchain-ai#22933)

- **Description:** When using RunnableWithMessageHistory/SQLChatMessageHistory in async mode, we'll get the following error:
  ```
  Error in RootListenersTracer.on_chain_end callback: RuntimeError("There is no current event loop in thread 'asyncio_3'.")
  ```
  which is thrown by https://github.com/langchain-ai/langchain/blob/ddfbca38dfa22954eaeda38614c6e1ec0cdecaa9/libs/community/langchain_community/chat_message_histories/sql.py#L259, and no message history is added to the database. In this patch, a new `_aexit_history` function, which will be called in async mode, is added; it in turn calls `aadd_messages`. The patch uses the `afunc` attribute of a Runnable to check whether the end listener should be run in async mode or not.
- **Issue:** langchain-ai#22021, langchain-ai#22022
- **Dependencies:** N/A