astream_events (V1 and V2) gives duplicate content in on_chat_model_stream #22227
Comments
@eyurtsev or @jacoblee93, can you please look into this example?
@Sanzid88888 you pasted your OpenAI API key when you included the example. I redacted it with the edit, but please log in to OpenAI and disable it. Assume it has been publicly leaked now!
MRE (missing imports added, and the top-level `async for` wrapped in an async entry point so it runs):

```python
import asyncio

from langchain_core.callbacks import Callbacks
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

model = ChatOpenAI(temperature=0)

@tool
async def get_items(place: str, callbacks: Callbacks):  # <--- Accept callbacks
    """Use this tool to look up which items are in the given place."""
    template = ChatPromptTemplate.from_messages(
        [
            (
                "human",
                "Can you tell me what kind of items i might find in the following place: '{place}'. "
                "List at least 3 such items separating them by a comma. And include a brief description of each item..",
            )
        ]
    )
    chain = template | model.with_config(
        {
            "run_name": "Get Items LLM",
            "tags": ["tool_llm"],
            "callbacks": callbacks,  # <-- Propagate callbacks
        }
    )
    r = await chain.ainvoke({"place": place})
    return r

async def main():
    async for event in get_items.astream_events("hello", version="v1"):
        if event["event"] == "on_chat_model_stream":
            content = event["data"]["chunk"].content
            print(content)

asyncio.run(main())
```

Produces tokens like
@Sanzid88888 Looks like the issue is with some magic that we do behind the scenes to propagate callbacks on behalf of the user. Remove the explicit callback passing if you're on Python >= 3.11:

```python
@tool
async def get_items(place: str):
    """Use this tool to look up which items are in the given place."""
    template = ChatPromptTemplate.from_messages(
        [
            (
                "human",
                "Can you tell me what kind of items i might find in the following place: '{place}'. "
                "List at least 3 such items separating them by a comma. And include a brief description of each item..",
            )
        ]
    )
    chain = template | model.with_config(
        {
            "run_name": "Get Items LLM",
            "tags": ["tool_llm"],
        }
    )
    r = await chain.ainvoke({"place": place})
    return r
```

I'll try to fix this in the meantime. Thanks for reporting the issue.
@Sanzid88888 you can do this if you're on older Python versions:

```python
chain = template | model_langchain.with_config(
    {
        "run_name": "Get Items LLM",
        "tags": ["tool_llm"],
    }
)
r = await chain.ainvoke({"place": place}, {"callbacks": callbacks})
```

I'm still investigating, but I suspect this isn't exactly a bug, but bad semantics: we're attaching callbacks to the model, and then LangChain (if you're on Python >= 3.11) attempts to automatically propagate callbacks as well. We might remove the ability to specify callbacks via with_config, or else attempt to dedupe the callback handler.
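The double attachment described above can be shown with a toy dispatcher (not LangChain code; `stream_tokens` and `on_token` are made-up names for illustration):

```python
# Toy sketch: registering the same handler twice makes every streamed token
# fire twice, which is exactly the duplicate-event symptom in this issue.
def stream_tokens(tokens, handlers):
    events = []
    for token in tokens:
        for handler in handlers:
            events.append(handler(token))
    return events

on_token = lambda t: t
# Handler attached once via config, then again by automatic propagation:
events = stream_tokens(["Books", " -"], [on_token, on_token])
print(events)  # ['Books', 'Books', ' -', ' -']
```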
This PR adds deduplication of callback handlers in merge_configs. Fix for this issue: #22227

The issue appears when the code is: 1) running Python >= 3.11, 2) invoking a runnable from within a runnable, and 3) binding the callbacks to the child runnable from the parent runnable using with_config. In this case, the same callbacks end up appearing twice: (1) the first time from with_config, and (2) the second time from LangChain automatically propagating them on behalf of the user.

Prior to this PR, this will emit duplicate events:

```python
@tool
async def get_items(question: str, callbacks: Callbacks):  # <--- Accept callbacks
    """Ask question"""
    template = ChatPromptTemplate.from_messages([("human", "'{question}")])
    chain = template | chat_model.with_config(
        {
            "callbacks": callbacks,  # <-- Propagate callbacks
        }
    )
    return await chain.ainvoke({"question": question})
```

Prior to this PR, this will work correctly (no duplicate events):

```python
@tool
async def get_items(question: str, callbacks: Callbacks):  # <--- Accept callbacks
    """Ask question"""
    template = ChatPromptTemplate.from_messages([("human", "'{question}")])
    chain = template | chat_model
    return await chain.ainvoke({"question": question}, {"callbacks": callbacks})
```

This will also work (as long as the user is on Python >= 3.11), since LangChain will automatically propagate callbacks:

```python
@tool
async def get_items(question: str):
    """Ask question"""
    template = ChatPromptTemplate.from_messages([("human", "'{question}")])
    chain = template | chat_model
    return await chain.ainvoke({"question": question})
```
Merged the fix for this issue. It will be released in the next core release; in the meantime, use the workarounds above.
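The idea behind the fix can be sketched as follows (a hypothetical illustration of deduplicating handlers by object identity; the real change lives in LangChain's merge_configs and may differ):

```python
# Sketch: when merging handler lists from with_config and from automatic
# propagation, drop handlers that are the same object (identity, not equality),
# so each handler fires only once per event.
def dedup_handlers(*handler_lists):
    seen_ids = set()
    merged = []
    for handlers in handler_lists:
        for handler in handlers:
            if id(handler) not in seen_ids:
                seen_ids.add(id(handler))
                merged.append(handler)
    return merged

h = object()  # one concrete handler instance
merged = dedup_handlers([h], [h])  # attached explicitly + propagated again
print(len(merged))  # 1
```

Identity rather than equality is the natural key here: two distinct handler instances that happen to compare equal should both still run.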
Checked other resources
Example Code
Error Message and Stack Trace (if applicable)
1|1|.|.| Books| Books| -| -| On| On| the| the| shelf| shelf|,|,| you| you| may| may| find| find| a| a| variety| variety| of| of| books| books| ranging| ranging| from| from| fiction| fiction| to| to| non| non|-fiction|-fiction|,|,| covering| covering| different| different| genres| genres| and| and| topics| topics|.|.| Books| Books| are| are| typically| typically| arranged| arranged| in| in| a| a| neat| neat| and| and| organized| organized| manner| manner| for| for| easy| easy| browsing| browsing|.
|.
|2|2|.|.| Photo| Photo| frames| frames| -| -| Photo| Photo| frames| frames| are| are| commonly| commonly| placed| placed| on| on| shelves| shelves| to| to| display| display| cherished| cherished| memories| memories| and| and| moments| moments| captured| captured| in| in| photographs| photographs|.|.| They| They| come| come| in| in| various| various| sizes| sizes|,|,| shapes| shapes|,|,| and| and| designs| designs| to| to| complement| complement| the| the| decor| decor| of| of| the| the| room| room|.
|.
|3|3|.|.| Decor| Decor|ative|ative| figur| figur|ines|ines| -| -| Decor| Decor|ative|ative| figur| figur|ines|ines| such| such| as| as| sculptures| sculptures|,|,| v| v|ases|ases|,|,| or| or| small| small| statues| statues| are| are| often| often| placed| placed| on| on| shelves| shelves| to| to| add| add| a| a| touch| touch| of| of| personality| personality| and| and| style| style| to| to| the| the| space| space|.|.| These| These| items| items| can| can| be| be| made| made| of| of| different| different| materials| materials| like| like| ceramic| ceramic|,|,| glass| glass|,|,| or| or| metal| metal|.|.
Description
astream_events gives duplicate content in on_chat_model_stream.
1|1|.|.| Books| Books| -| -| On| On| the| the| shelf| shelf|,|,| you| you| may| may| find| find| a| a| variety| variety| of| of| books| books| ranging| ranging| from| from| fiction| fiction| to| to| non| non|-fiction|-fiction|,|,| covering| covering| different| different| genres| genres| and| and| topics| topics|.|.| Books| Books| are| are| typically| typically| arranged| arranged| in| in| a| a| neat| neat| and| and| organized| organized| manner| manner| for| for| easy| easy| browsing| browsing|.
Here `Books|` and `On|` each appear twice in the `on_chat_model_stream` content.
Tried V2; same duplicate result.
I used the astream_events examples from:
https://python.langchain.com/v0.1/docs/modules/agents/how_to/streaming/
@hwchase17 @leo-gan
System Info
langchain==0.2.1
langchain-community==0.2.1
langchain-core==0.2.1
langchain-google-genai==1.0.5
langchain-openai==0.1.7
langchain-text-splitters==0.2.0
langchainhub==0.1.15
Platform: macOS (Sonoma 14.4), M1
Python 3.11.6