
Memory not supported with sources chain? #2256

Closed
jordanparker6 opened this issue Apr 1, 2023 · 28 comments

@jordanparker6

Memory doesn't seem to be supported when using the 'sources' chains: saving the context fails because those chains write multiple output keys.

Is there a workaround for this?

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[13], line 1
----> 1 chain({ "question": "Do we have any agreements with INGRAM MICRO." }, return_only_outputs=True)

File ~/helpmefindlaw/search-service/.venv/lib/python3.10/site-packages/langchain/chains/base.py:118, in Chain.__call__(self, inputs, return_only_outputs)
    116     raise e
    117 self.callback_manager.on_chain_end(outputs, verbose=self.verbose)
--> 118 return self.prep_outputs(inputs, outputs, return_only_outputs)

File ~/helpmefindlaw/search-service/.venv/lib/python3.10/site-packages/langchain/chains/base.py:170, in Chain.prep_outputs(self, inputs, outputs, return_only_outputs)
    168 self._validate_outputs(outputs)
    169 if self.memory is not None:
--> 170     self.memory.save_context(inputs, outputs)
    171 if return_only_outputs:
    172     return outputs

File ~/helpmefindlaw/search-service/.venv/lib/python3.10/site-packages/langchain/memory/summary_buffer.py:59, in ConversationSummaryBufferMemory.save_context(self, inputs, outputs)
     57 def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
     58     """Save context from this conversation to buffer."""
---> 59     super().save_context(inputs, outputs)
     60     # Prune buffer if it exceeds max token limit
     61     buffer = self.chat_memory.messages

File ~/helpmefindlaw/search-service/.venv/lib/python3.10/site-packages/langchain/memory/chat_memory.py:37, in BaseChatMemory.save_context(self, inputs, outputs)
...
---> 37         raise ValueError(f"One output key expected, got {outputs.keys()}")
     38     output_key = list(outputs.keys())[0]
     39 else:

ValueError: One output key expected, got dict_keys(['answer', 'sources'])
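For reference, a minimal sketch that reproduces the error (assuming a pre-built docsearch vector store and an OpenAI API key; the exact setup in the notebook above may differ):

from langchain.llms import OpenAI
from langchain.memory import ConversationSummaryBufferMemory
from langchain.chains import RetrievalQAWithSourcesChain

llm = OpenAI(temperature=0)
# No output_key is set, so save_context() sees both 'answer' and 'sources'
# in the chain's outputs and raises the ValueError above.
memory = ConversationSummaryBufferMemory(llm=llm)
chain = RetrievalQAWithSourcesChain.from_chain_type(
    llm=llm, retriever=docsearch.as_retriever(), memory=memory
)
chain({"question": "Do we have any agreements with INGRAM MICRO?"}, return_only_outputs=True)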
@moraneden

+1

@mystvearn

I'm having the same problem when trying to use memory with RetrievalQAWithSourcesChain. I found and followed the LangChain tutorial below, but nothing works:

https://python.langchain.com/en/latest/modules/memory/examples/adding_memory_chain_multiple_inputs.html

@pirtlj commented Apr 5, 2023

Having the same issue here; it would be really nice to have an example of how to get this to work.

@VladoPortos

+1

@deathblade287

Receiving the same error when trying to use memory in RetrievalQAWithSourcesChain

@atc0m commented May 14, 2023

same issue with ConversationalRetrievalChain

@shdmitry2000

+1

@gborgonovo

+1

@Eliseowzy

+1

@chiva commented May 17, 2023

You can use this workaround for the time being.
It should be fairly safe and shouldn't break that code path for other use cases (such as other chains), but I don't know LangChain deeply enough to guarantee it.

Edit lib/python3.10/site-packages/langchain/memory/chat_memory.py

Find this section:

class BaseChatMemory(BaseMemory, ABC):
    chat_memory: BaseChatMessageHistory = Field(default_factory=ChatMessageHistory)
    output_key: Optional[str] = None
    input_key: Optional[str] = None
    return_messages: bool = False

    def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
        """Save context from this conversation to buffer."""
        if self.input_key is None:
            prompt_input_key = get_prompt_input_key(inputs, self.memory_variables)
        else:
            prompt_input_key = self.input_key
        if self.output_key is None:
            if len(outputs) != 1:
                raise ValueError(f"One output key expected, got {outputs.keys()}")
            output_key = list(outputs.keys())[0]
        else:
            output_key = self.output_key
        self.chat_memory.add_user_message(inputs[prompt_input_key])
        self.chat_memory.add_ai_message(outputs[output_key])

Change:

        if self.output_key is None:
            if len(outputs) != 1:
                raise ValueError(f"One output key expected, got {outputs.keys()}")
            output_key = list(outputs.keys())[0]

To:

        if self.output_key is None:
            if len(outputs) == 1:
                output_key = list(outputs.keys())[0]
            else:
                if "answer" in outputs.keys():
                    output_key = "answer"
                else:
                    raise ValueError(f"One output key expected, got {outputs.keys()}")

@KEKL-KEKW

Seems to be similar to #2068 (comment)

You probably have to define what your output_key actually is to get the chain to work

@ikebo (Contributor) commented May 21, 2023

I found the solution by reading the source code:

memory = ConversationSummaryBufferMemory(llm=llm, input_key='question', output_key='answer')
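Wired into a sources chain, a minimal sketch (assuming a pre-built docsearch vector store; everything except the memory arguments is illustrative):

from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationSummaryBufferMemory
from langchain.chains import RetrievalQAWithSourcesChain

llm = ChatOpenAI(temperature=0)
# Pinning input_key/output_key tells the memory which of the chain's
# multiple outputs ('answer' vs. 'sources') to record.
memory = ConversationSummaryBufferMemory(llm=llm, input_key='question', output_key='answer')
chain = RetrievalQAWithSourcesChain.from_chain_type(
    llm=llm, retriever=docsearch.as_retriever(), memory=memory
)
result = chain({"question": "Do we have any agreements with INGRAM MICRO?"})
print(result["answer"], result["sources"])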

@portkeys

I found the solution by reading the source code: memory = ConversationSummaryBufferMemory(llm=llm, input_key='question', output_key='answer')

This works like a charm!

@cyberjj999

Can confirm it is working for ConversationBufferMemory too.

memory = ConversationBufferMemory(memory_key="chat_history", input_key='question', output_key='answer', return_messages=True)

Thanks a bunch!

leighklotz added a commit to leighklotz/talk-codebase that referenced this issue May 30, 2023
@dangarfield

Adding the output_key as above worked for me also.

@ogabrielluiz

The ConversationalRetrievalChain adds a memory by default; shouldn't it also set the output_key for that memory if no memory was passed?

It seems strange to allow the chain to be instantiated without a memory and then have it fail to run because the memory wasn't set up properly.

I'm not sure exactly where we could add that, though. Maybe here: https://github.com/hwchase17/langchain/blob/980c8651743b653f994ad6b97a27b0fa31ee92b4/langchain/chains/conversational_retrieval/base.py#L117
After we set the output, we could then set the output_key for the memory if it does not have one.

@Ali-Issa-aems

Hello @cyberjj999, I am using a router chain with ConversationBufferMemory(), but when running the code it doesn't seem that any information is being stored in the memory. Do you have any ideas about router chains and memory?

@Shuhul24

I tried using langchain.memory.ConversationBufferMemory() in RetrievalQAWithSourcesChain as:

qa = RetrievalQAWithSourcesChain(..., memory=ConversationBufferMemory(memory_key="history", input_key="query"))

I am able to get the output, but it is followed by an error:

INFO:     127.0.0.1:63947 - "GET /extract/ HTTP/1.1" 500 Internal Server Error
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/Users/shuhulhandoo/MetaGeeks/.venv/lib/python3.11/site-packages/uvicorn/protocols/http/h11_impl.py", line 408, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/shuhulhandoo/MetaGeeks/.venv/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
    return await self.app(scope, receive, send)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/shuhulhandoo/MetaGeeks/.venv/lib/python3.11/site-packages/fastapi/applications.py", line 289, in __call__
    await super().__call__(scope, receive, send)
  File "/Users/shuhulhandoo/MetaGeeks/.venv/lib/python3.11/site-packages/starlette/applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/Users/shuhulhandoo/MetaGeeks/.venv/lib/python3.11/site-packages/starlette/middleware/errors.py", line 184, in __call__
    raise exc
  File "/Users/shuhulhandoo/MetaGeeks/.venv/lib/python3.11/site-packages/starlette/middleware/errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "/Users/shuhulhandoo/MetaGeeks/.venv/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
    raise exc
  File "/Users/shuhulhandoo/MetaGeeks/.venv/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "/Users/shuhulhandoo/MetaGeeks/.venv/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 20, in __call__
    raise e
  File "/Users/shuhulhandoo/MetaGeeks/.venv/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 17, in __call__
    await self.app(scope, receive, send)
  File "/Users/shuhulhandoo/MetaGeeks/.venv/lib/python3.11/site-packages/starlette/routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "/Users/shuhulhandoo/MetaGeeks/.venv/lib/python3.11/site-packages/starlette/routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "/Users/shuhulhandoo/MetaGeeks/.venv/lib/python3.11/site-packages/starlette/routing.py", line 66, in app
    response = await func(request)
               ^^^^^^^^^^^^^^^^^^^
  File "/Users/shuhulhandoo/MetaGeeks/.venv/lib/python3.11/site-packages/fastapi/routing.py", line 273, in app
    raw_response = await run_endpoint_function(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/shuhulhandoo/MetaGeeks/.venv/lib/python3.11/site-packages/fastapi/routing.py", line 190, in run_endpoint_function
    return await dependant.call(**values)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/shuhulhandoo/MetaGeeks/PDF-QA/main.py", line 40, in extract_file
    response = qa_chaining(qabuild, "What is the document about?")
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/shuhulhandoo/MetaGeeks/PDF-QA/_functions.py", line 71, in qa_chaining
    result = qa({"question": user_question}, return_only_outputs=True)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/shuhulhandoo/MetaGeeks/.venv/lib/python3.11/site-packages/langchain/chains/base.py", line 118, in __call__
    return self.prep_outputs(inputs, outputs, return_only_outputs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/shuhulhandoo/MetaGeeks/.venv/lib/python3.11/site-packages/langchain/chains/base.py", line 170, in prep_outputs
    self.memory.save_context(inputs, outputs)
  File "/Users/shuhulhandoo/MetaGeeks/.venv/lib/python3.11/site-packages/langchain/memory/chat_memory.py", line 34, in save_context
    input_str, output_str = self._get_input_output(inputs, outputs)
                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/shuhulhandoo/MetaGeeks/.venv/lib/python3.11/site-packages/langchain/memory/chat_memory.py", line 26, in _get_input_output
    raise ValueError(f"One output key expected, got {outputs.keys()}")
ValueError: One output key expected, got dict_keys(['answer', 'sources', 'source_documents'])

What should be done in this case?

@User2345678910

I'm having the same problem trying to use RetrievalQAWithSourcesChain with memory. Does anyone know a way to make it work?

@farbodnowzad

Do the following (a full sketch follows the list):

  1. Create the memory with input_key and output_key: memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True, input_key="question", output_key="answer")
  2. Initialize ConversationalRetrievalChain with the memory: qa = ConversationalRetrievalChain.from_llm(ChatOpenAI(max_tokens=512, model="gpt-3.5-turbo"), retriever=retriever, return_source_documents=True, memory=memory)
  3. Make a query using the input_key: qa({"question": prompt})
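Put together, a runnable sketch of the steps above (assuming a pre-built retriever and an OpenAI API key; the question text is illustrative):

from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain

# 1. Pin input_key/output_key so the memory knows which keys to save.
memory = ConversationBufferMemory(
    memory_key="chat_history",
    return_messages=True,
    input_key="question",
    output_key="answer",
)

# 2. Build the chain with the memory attached.
qa = ConversationalRetrievalChain.from_llm(
    ChatOpenAI(max_tokens=512, model="gpt-3.5-turbo"),
    retriever=retriever,
    return_source_documents=True,
    memory=memory,
)

# 3. Query using the input_key; 'source_documents' remains in the result.
result = qa({"question": "What is the document about?"})
print(result["answer"])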

@JonaTri commented Aug 4, 2023

For my part, I was trying to keep both return_source_documents=True and return_generated_question=True.
I've found a solution that works for me: in the BaseChatMemory source code, I deleted the two lines that raise:

if len(outputs) != 1:
    raise ValueError(f"One output key expected, got {outputs.keys()}")

This allows me to keep "source_documents" and "generated_question" in the output without breaking the code.
Rather than editing the installed package, you can apply the same change at runtime by running the code below.

import langchain
from typing import Dict, Any, Tuple
from langchain.memory.utils import get_prompt_input_key

def _get_input_output(
    self, inputs: Dict[str, Any], outputs: Dict[str, str]
) -> Tuple[str, str]:
    if self.input_key is None:
        prompt_input_key = get_prompt_input_key(inputs, self.memory_variables)
    else:
        prompt_input_key = self.input_key
    if self.output_key is None:
        # Relaxed behavior: instead of raising when there are multiple output
        # keys, just take the first one (typically 'answer' for these chains).
        output_key = list(outputs.keys())[0]
    else:
        output_key = self.output_key
    return inputs[prompt_input_key], outputs[output_key]

# Monkey-patch so every BaseChatMemory subclass uses the relaxed version.
langchain.memory.chat_memory.BaseChatMemory._get_input_output = _get_input_output

Here is the original method: https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/memory/chat_memory.py#L11
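After applying the patch, a chain with both extra outputs enabled should run (a sketch, assuming a pre-built llm and retriever):

from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory

# The monkey-patch above must run before the chain is invoked.
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
qa = ConversationalRetrievalChain.from_llm(
    llm,
    retriever=retriever,
    return_source_documents=True,
    return_generated_question=True,
    memory=memory,
)
result = qa({"question": "What is the document about?"})
print(result["source_documents"])    # preserved in the output
print(result["generated_question"])  # preserved in the output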

@antonkulaga

@JonaTri thank you very much, it works for me! I think the fix should be merged into langchain.

@bhaktatejas922

Does anyone know how to get this to work with an Agent? I got it to work as a standalone chain, but I still get:

  File "lib/python3.9/site-packages/langchain/chains/base.py", line 133, in _chain_type
    raise NotImplementedError("Saving not supported for this chain type.")
NotImplementedError: Saving not supported for this chain type.

@faisal-saddique

+1

@reddiamond1234 commented Sep 4, 2023

With RetrievalQA.from_chain_type() you can use memory. To avoid ValueError: One output key expected, got dict_keys(['answer', 'sources']), you need to specify the key values in the memory constructor (e.g. ConversationBufferMemory(memory_key="chat_history", return_messages=True, input_key='query', output_key='result')). It would be nice to add this to the official documentation, because as it stands it looks like it's either not possible or only works with ConversationalRetrievalChain.from_llm().
This issue can now be closed @hwchase17
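A sketch of that variant (assuming a pre-built llm and retriever; note that RetrievalQA uses the keys 'query' and 'result' rather than 'question' and 'answer'):

from langchain.chains import RetrievalQA
from langchain.memory import ConversationBufferMemory

# RetrievalQA's input key is 'query' and its output key is 'result',
# so the memory must be pointed at those names.
memory = ConversationBufferMemory(
    memory_key="chat_history",
    return_messages=True,
    input_key="query",
    output_key="result",
)
qa = RetrievalQA.from_chain_type(
    llm=llm,
    retriever=retriever,
    return_source_documents=True,
    memory=memory,
)
result = qa({"query": "What is the document about?"})
print(result["result"])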

@YamonBot

I propose a solution.

"langchain/agents/agent.py" defines the class from which all of the extension chains mentioned above are derived.

    @property
    @abstractmethod
    def input_keys(self) -> List[str]:
        """Return the input keys.

        :meta private:
        """
    @property
    def output_keys(self) -> List[str]:
        """Return the singular output key.

        :meta private:
        """
        if self.return_intermediate_steps:
            return self.agent.return_values + ["intermediate_steps"]
        else:
            return self.agent.return_values

All memory-related objects resolve their keys through the methods above, but when those keys are handed to the output parser, the memory key is the only one not passed along, so keys that a given agent's implementation does not need must be excluded, depending on the purpose.

For example:

    @property
    def input_keys(self) -> List[str]:
        """Return the input keys.

        :meta private:
        """
        return list(set(self.llm_chain.input_keys) - {"agent_scratchpad"})

The snippet above is from the definition of Agent(BaseSingleActionAgent).

Since the keys to be excluded by the methods above are also accepted as arguments, a clear, unified handling of input_key and output_key is needed to prevent divergent behavior in each chain. The same method is already implemented differently across many chains, which keeps producing errors in the related chains.


dosubot bot commented Dec 22, 2023

Hi, @jordanparker6

I'm helping the LangChain team manage their backlog and am marking this issue as stale. The issue you reported is related to memory not being supported when using the 'sources' chains, causing errors with writing multiple output keys. There have been discussions and suggestions in the comments regarding workarounds, modifying the source code, specifying key values in the memory function, and potential changes to the official documentation. However, the issue remains unresolved.

Could you please confirm if this issue is still relevant to the latest version of the LangChain repository? If it is, please let the LangChain team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days. Thank you!

dosubot added the "stale" label Dec 22, 2023
dosubot closed this as not planned (won't fix, can't repro, duplicate, stale) Dec 30, 2023
dosubot removed the "stale" label Dec 30, 2023
@Ojasmodi commented Mar 1, 2024

Adding the output_key as above worked for me also.

Actually it would work for every type of memory object.
