LLMRouterChain uses deprecated predict_and_parse method #6819
Comments
I am seeing the same issue on v0.0.221, Python 3.10.6, Windows 10

I have the same issue on v0.0.229, Python v3.10.12

I have the same issue on v0.0.230, Python v3.10.6, Windows 11

Same issue here, langchain v0.0.232

langchain v0.0.235, Python 3.9.17, Windows 10

langchain v0.0.240, Python 3.10.10, macOS Ventura

langchain v0.0.244, Python 3.10.11, Windows 10

I am seeing the same issue on v0.0.257, Python 3.9.12, RedHat

Same issue, v0.0.270, Python 3.11.3, Windows

Same warning on v0.0.275, Python 3.11.3, WSL on Windows 11
I also get the same error when using this option:

If I remove the page_content method, I do not get this error. Also, if I use CharacterTextSplitter instead, I do not see the error.

Here is a sample function where this happens:
Y'all, any fix for this? What are we supposed to do?

Got the same warning while using load_qa_chain
I posted a very similar issue, #10462, using
I solved it by extending SelfQueryRetriever:

# Imports assumed for langchain ~0.0.2xx
from typing import List, cast

from langchain.callbacks.manager import AsyncCallbackManagerForRetrieverRun
from langchain.chains.query_constructor.ir import StructuredQuery
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.schema import Document


class AsyncSelfQueryRetriever(SelfQueryRetriever):
    async def _aget_relevant_documents(
        self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun
    ) -> List[Document]:
        """Asynchronously get documents relevant to a query.

        Args:
            query: String to find relevant documents for
            run_manager: The callbacks handler to use

        Returns:
            List of relevant documents
        """
        inputs = self.llm_chain.prep_inputs({"query": query})
        structured_query = cast(
            StructuredQuery,
            # Instead of calling 'self.llm_chain.predict_and_parse' here,
            # I changed it to leverage 'self.llm_chain.prompt.output_parser.parse'
            # and 'self.llm_chain.apredict'
            # ↓↓↓↓↓↓↓
            self.llm_chain.prompt.output_parser.parse(
                await self.llm_chain.apredict(
                    callbacks=run_manager.get_child(), **inputs
                )
            ),
        )
        if self.verbose:
            print(structured_query)
        new_query, new_kwargs = self.structured_query_translator.visit_structured_query(
            structured_query
        )
        if structured_query.limit is not None:
            new_kwargs["k"] = structured_query.limit
        if self.use_original_query:
            new_query = query
        search_kwargs = {**self.search_kwargs, **new_kwargs}
        docs = await self.vectorstore.asearch(
            new_query, self.search_type, **search_kwargs
        )
        return docs
Hi @ellisxu! Much appreciated, thanks for sharing! Can you show which packages these are imported from?
The package is

I am still running into this with

#6819 (comment) Try this. :)
This is unfortunate, as this part of LangChain is used in the DeepLearningAI course.
System Info
langchain v0.0.216, Python 3.11.3 on WSL2
Who can help?
@hwchase17
Information
Related Components
Reproduction
Follow the first example at https://python.langchain.com/docs/modules/chains/foundational/router
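For context on where the warning comes from, the call path can be sketched with a simplified, pure-Python mock. This is not LangChain's actual implementation; the class and attribute names here are illustrative stand-ins for how `predict_and_parse` runs the LLM and then applies the prompt's output parser while emitting a deprecation warning:

```python
import warnings

class MockOutputParser:
    """Stand-in for the router's output parser: raw LLM text -> routing dict."""
    def parse(self, text):
        return {"destination": text.strip(), "next_inputs": {"input": "hi"}}

class MockLLMChain:
    """Simplified mock (not LangChain's actual LLMChain)."""
    def __init__(self, parser):
        self.output_parser = parser

    def predict(self, **inputs):
        # Stand-in for the real LLM call.
        return "physics"

    def predict_and_parse(self, **inputs):
        # The real method warns that callers should instead pass the
        # output parser directly to LLMChain.
        warnings.warn("The predict_and_parse method is deprecated.")
        return self.output_parser.parse(self.predict(**inputs))

chain = MockLLMChain(MockOutputParser())
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    routed = chain.predict_and_parse(input="What is entropy?")

print(routed["destination"])  # physics
print(len(caught))            # 1 warning captured
```

Running the docs example triggers the same warning once per routed call, since LLMRouterChain drives its LLMChain through this deprecated method.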
Expected behavior
This line gets triggered:
As suggested by the error, we can pass the output parser directly to LLMChain by changing this line to this:

And calling LLMChain.__call__ instead of LLMChain.predict_and_parse by changing these lines to this:

Unfortunately, while this avoids the warning, it creates a new error:

because LLMChain currently assumes the existence of a single self.output_key and produces this as output:

Even modifying that function to return the keys if the parsed output is a dict triggers the same error, but for the missing key of "text" instead. predict_and_parse avoids this fate by skipping output validation entirely.

It appears changes may have to be a bit more involved here if LLMRouterChain is to keep using LLMChain.
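The single-output-key problem described above can be illustrated with a simplified, pure-Python mock (not LangChain's actual code; the names below are illustrative). A plain-string parse fits the expected `{"text": ...}` shape, but a router parser returns a dict with different keys, so output validation against `output_keys` fails:

```python
class MiniChain:
    """Simplified mock of a chain that validates its outputs."""
    output_key = "text"
    output_keys = ["text"]  # the chain validates outputs against this list

    def call(self, parsed):
        # __call__-style path: the parsed result is stored under output_key.
        return {self.output_key: parsed}

    def validate(self, outputs):
        missing = set(self.output_keys) - set(outputs)
        if missing:
            raise ValueError(f"Missing some output keys: {missing}")
        return outputs

chain = MiniChain()

# A string result works: {"text": "..."} matches output_keys.
ok = chain.validate(chain.call("route: physics"))

# A router parser produces a routing dict instead of a string. If the
# chain returned that dict directly as its outputs, validation would
# fail because the expected "text" key is missing.
parsed = {"destination": "physics", "next_inputs": {"input": "What is entropy?"}}
try:
    chain.validate(parsed)
    failed = False
except ValueError:
    failed = True

print(ok)      # {'text': 'route: physics'}
print(failed)  # True
```

This is why skipping validation (as `predict_and_parse` does) hides the mismatch, and why simply switching to `__call__` surfaces it as a missing-key error.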