Memory not supported with sources chain? #2256
I'm having the same problem when trying to use memory with RetrievalQAWithSourcesChain. I found and followed the LangChain tutorial, but nothing works.
Having the same issue here; it would be really nice to have an example of how to get this to work.
Receiving the same error when trying to use memory in
Same issue with ConversationalRetrievalChain.
You can do this workaround for the time being. Edit: find this section:
Change:
To:
Seems to be similar to #2068 (comment). You probably have to define what your output_key actually is to get the chain to work.
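The error comes from how the memory picks which output to store. A minimal sketch of that selection logic (an assumption reconstructed from the traceback and the patched method further down this thread, not the actual library code):

```python
# Sketch of how a memory selects which output value to store.
# Assumption: mirrors the check that raises "One output key expected";
# this is illustrative, not the real LangChain implementation.
from typing import Dict, Optional


def pick_output(outputs: Dict[str, str], output_key: Optional[str]) -> str:
    """Return the value the memory should record for the AI turn."""
    if output_key is None:
        # Without an explicit output_key, exactly one output is expected.
        if len(outputs) != 1:
            raise ValueError(f"One output key expected, got {outputs.keys()}")
        output_key = next(iter(outputs))
    return outputs[output_key]


outputs = {"answer": "42", "sources": "doc.pdf"}
# pick_output(outputs, None) raises ValueError, because there are two keys.
print(pick_output(outputs, "answer"))
```

With two output keys ('answer' and 'sources') and no explicit output_key, the first branch raises; passing output_key="answer" resolves it.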
I found the solution by reading the source code:
This works like a charm!
Can confirm it is working for
Thanks a bunch!
Adding the
The ConversationalRetrievalChain adds a memory by default; shouldn't it also set the output_key for that memory if no memory was passed? It seems strange to allow it to be instantiated without a memory and then fail to run because the memory was not set up properly. I'm not sure exactly where we could add that, though. Maybe here: https://github.com/hwchase17/langchain/blob/980c8651743b653f994ad6b97a27b0fa31ee92b4/langchain/chains/conversational_retrieval/base.py#L117
Hello @cyberjj999, I am using a router chain with ConversationBufferMemory(), but when running the code it doesn't seem that any information is being stored in the memory. Do you have any idea about router chains and memory?
I tried using
I am able to achieve the output, but it is followed by an error:
What should be done in this case?
I'm having the same problem trying to use RetrievalQAWithSourcesChain with memory. Does anyone have a way in which it can be used?
Do the following:
On my side, I was trying to keep these two arguments: return_source_documents=True and return_generated_question=True, which trips this check:

```python
if len(outputs) != 1:
    raise ValueError(f"One output key expected, got {outputs.keys()}")
```

Patching the method lets me keep "source_documents" and "generated_question" inside the output without breaking the code:

```python
import langchain
from typing import Dict, Any, Tuple
from langchain.memory.utils import get_prompt_input_key


def _get_input_output(
    self, inputs: Dict[str, Any], outputs: Dict[str, str]
) -> Tuple[str, str]:
    if self.input_key is None:
        prompt_input_key = get_prompt_input_key(inputs, self.memory_variables)
    else:
        prompt_input_key = self.input_key
    if self.output_key is None:
        # Take the first output key instead of requiring exactly one.
        output_key = list(outputs.keys())[0]
    else:
        output_key = self.output_key
    return inputs[prompt_input_key], outputs[output_key]


langchain.memory.chat_memory.BaseChatMemory._get_input_output = _get_input_output
```

Here is the original method: https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/memory/chat_memory.py#L11
@JonaTri, thank you very much, it works for me! I think the fix should be merged into LangChain.
Anyone know how to get this to work with an Agent? I got it to work as a standalone chain but still get
With RetrievalQA.from_chain_type() you can use memory. To avoid ValueError: One output key expected, got dict_keys(['answer', 'sources']), you need to specify the key values in the memory constructor (e.g. ConversationBufferMemory(memory_key="chat_history", return_messages=True, input_key='query', output_key='result')). It would be nice to add this to the official documentation, because it currently looks like it's not possible, or that you can only do it with ConversationalRetrievalChain.from_llm().
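To illustrate why the explicit input_key/output_key helps, here is a simplified stand-in for a buffer memory's save_context. The class name and shape are assumptions for illustration only, not the real ConversationBufferMemory:

```python
# Simplified stand-in for a buffer memory (illustrative, not LangChain's
# actual class): save_context must decide which output value to record.
from typing import Dict, List, Optional, Tuple


class BufferMemory:
    def __init__(self, input_key: Optional[str] = None,
                 output_key: Optional[str] = None) -> None:
        self.input_key = input_key
        self.output_key = output_key
        self.buffer: List[Tuple[str, str]] = []

    def save_context(self, inputs: Dict[str, str],
                     outputs: Dict[str, str]) -> None:
        out_key = self.output_key
        if out_key is None:
            # No explicit output_key: exactly one output is required,
            # which is what the ValueError in this thread complains about.
            if len(outputs) != 1:
                raise ValueError(
                    f"One output key expected, got {outputs.keys()}")
            out_key = next(iter(outputs))
        in_key = self.input_key or next(iter(inputs))
        self.buffer.append((inputs[in_key], outputs[out_key]))


mem = BufferMemory(input_key="query", output_key="result")
# Two output keys, but the explicit output_key tells the memory which to keep.
mem.save_context({"query": "What is X?"},
                 {"result": "X is ...", "source_documents": "[...]"})
print(mem.buffer)
```

Without output_key='result', the same call would raise, because the chain returns both 'result' and 'source_documents'.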
I propose a solution. langchain/agents/agent.py contains the class from which all the extension chains mentioned above are derived:

```python
@property
@abstractmethod
def input_keys(self) -> List[str]:
    """Return the input keys.

    :meta private:
    """

@property
def output_keys(self) -> List[str]:
    """Return the singular output key.

    :meta private:
    """
    if self.return_intermediate_steps:
        return self.agent.return_values + ["intermediate_steps"]
    else:
        return self.agent.return_values
```

All memory-related objects return a key that exists through the methods above, but when passing these keys to the output parser, only the memory key is not passed, so some of the keys each agent implements are unnecessary depending on the purpose. Useless key values must be excluded, like:

```python
@property
def input_keys(self) -> List[str]:
    """Return the input keys.

    :meta private:
    """
    return list(set(self.llm_chain.input_keys) - {"agent_scratchpad"})
```

The source above is the definition of Agent(BaseSingleActionAgent). Key values to be excluded from the methods mentioned above are also accepted as arguments, so a clear unification of input_key and output_key is necessary to prevent branching problems in each chain. The same method is already implemented differently in many chains, which continues to create errors in related chains.
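The key-exclusion idea above is a plain set difference; a minimal sketch (the helper name is hypothetical, not a LangChain API):

```python
# Hypothetical helper illustrating the exclusion pattern above: drop
# internal keys like "agent_scratchpad" before exposing input keys.
from typing import Iterable, List, Set


def filter_input_keys(input_keys: Iterable[str],
                      excluded: Set[str]) -> List[str]:
    # Same idea as list(set(keys) - {"agent_scratchpad"}); sorted()
    # just makes the result deterministic for display.
    return sorted(set(input_keys) - excluded)


print(filter_input_keys(["input", "chat_history", "agent_scratchpad"],
                        {"agent_scratchpad"}))
# -> ['chat_history', 'input']
```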
Hi @jordanparker6, I'm helping the LangChain team manage their backlog and am marking this issue as stale. The issue you reported is related to memory not being supported when using the 'sources' chains, causing errors when writing multiple output keys. There have been discussions and suggestions in the comments regarding workarounds, modifying the source code, specifying key values in the memory function, and potential changes to the official documentation. However, the issue remains unresolved. Could you please confirm whether this issue is still relevant to the latest version of the LangChain repository? If it is, please let the LangChain team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days. Thank you!
Actually it would work for every type of memory object. |
Memory doesn't seem to be supported when using the 'sources' chains. It appears to have issues writing multiple output keys.
Is there a workaround for this?