
Error can only concatenate str (not "tuple") to str when using ConversationBufferWindowMemory #3077

Closed
homanp opened this issue Apr 18, 2023 · 11 comments

Comments

@homanp
Contributor

homanp commented Apr 18, 2023

I'm facing a weird issue with the ConversationBufferWindowMemory

Running memory.load_memory_variables({}) prints:

{'chat_history': [HumanMessage(content='Hi my name is Ismail', additional_kwargs={}), AIMessage(content='Hello Ismail! How can I assist you today?', additional_kwargs={})]}

The error I get after sending a second message to the chain is:

> Entering new ConversationalRetrievalChain chain...
[2023-04-18 10:34:52,512] ERROR in app: Exception on /api/v1/chat [POST]
Traceback (most recent call last):
  File "/Users/homanp/Projects/ADGPT_ENV/lib/python3.9/site-packages/flask/app.py", line 2528, in wsgi_app
    response = self.full_dispatch_request()
  File "/Users/homanp/Projects/ADGPT_ENV/lib/python3.9/site-packages/flask/app.py", line 1825, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/Users/homanp/Projects/ADGPT_ENV/lib/python3.9/site-packages/flask/app.py", line 1823, in full_dispatch_request
    rv = self.dispatch_request()
  File "/Users/homanp/Projects/ADGPT_ENV/lib/python3.9/site-packages/flask/app.py", line 1799, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "/Users/homanp/Projects/ad-gpt/app.py", line 46, in chat
    result = chain({"question": message, "chat_history": []})
  File "/Users/homanp/Projects/ADGPT_ENV/lib/python3.9/site-packages/langchain/chains/base.py", line 116, in __call__
    raise e
  File "/Users/homanp/Projects/ADGPT_ENV/lib/python3.9/site-packages/langchain/chains/base.py", line 113, in __call__
    outputs = self._call(inputs)
  File "/Users/homanp/Projects/ADGPT_ENV/lib/python3.9/site-packages/langchain/chains/conversational_retrieval/base.py", line 71, in _call
    chat_history_str = get_chat_history(inputs["chat_history"])
  File "/Users/homanp/Projects/ADGPT_ENV/lib/python3.9/site-packages/langchain/chains/conversational_retrieval/base.py", line 25, in _get_chat_history
    human = "Human: " + human_s
TypeError: can only concatenate str (not "tuple") to str

Current implementation:

memory = ConversationBufferWindowMemory(memory_key='chat_history', k=2, return_messages=True)

chain = ConversationalRetrievalChain.from_llm(
    model,
    memory=memory,
    verbose=True,
    retriever=retriever,
    qa_prompt=QA_PROMPT,
    condense_question_prompt=CONDENSE_QUESTION_PROMPT,
)
@kanukolluGVT

Hi, please assign this to me; I will solve it.

@homanp
Contributor Author

homanp commented Apr 18, 2023

> Hi, please assign this to me; I will solve it.

I don't have privileges to assign issues.

@kanukolluGVT

Ok, thanks, homanp.

@homanp
Contributor Author

homanp commented Apr 18, 2023

So _get_chat_history wants a string, but the memory returns tuples. This should be handled without the user having to pass their own get_chat_history.
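
For context, the failing helper in the traceback above looks roughly like this (a sketch reconstructed from the traceback, not the exact library source); it assumes chat_history is a list of (human, ai) string tuples and breaks on anything else:

def _get_chat_history(chat_history) -> str:
    buffer = ""
    for human_s, ai_s in chat_history:
        human = "Human: " + human_s   # raises TypeError when human_s is not a str
        ai = "Assistant: " + ai_s     # the "Assistant:" prefix is an assumption here
        buffer += "\n" + "\n".join([human, ai])
    return buffer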

@homanp
Copy link
Contributor Author

homanp commented Apr 18, 2023

I managed to fix this using:

# Convert the (human, ai) tuples from memory into the plain-text transcript the chain expects.
def get_chat_history(inputs) -> str:
    res = []
    for human, ai in inputs:
        res.append(f"Human:{human}\nAI:{ai}")
    return "\n".join(res)
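
For anyone wondering how to wire the workaround in: the chain accepts a get_chat_history callable, so passing it through from_llm along these lines should work (a sketch, reusing the same model, retriever, and prompt variables from the snippet above):

chain = ConversationalRetrievalChain.from_llm(
    model,
    memory=memory,
    retriever=retriever,
    qa_prompt=QA_PROMPT,
    condense_question_prompt=CONDENSE_QUESTION_PROMPT,
    get_chat_history=get_chat_history,  # override the default tuple-only helper
)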

@kanukolluGVT

kanukolluGVT commented Apr 18, 2023

Got it, thanks. Looking forward to learning more by doing, Ismail Pelaseyed.

hwchase17 pushed a commit that referenced this issue Apr 20, 2023
Add a Pipeline example and add other models in the hub notebook

To close issue
[#3077](#3099)
vowelparrot added a commit that referenced this issue Apr 20, 2023
While we work on solidifying the memory interfaces, handle common chat
history formats.

This may break linting for anyone who has been passing in `get_chat_history`.

Somewhat handles #3077

Alternative to #3078 that updates the typing
@tarek-kerbedj

tarek-kerbedj commented May 1, 2023

Is this resolved? I'm facing the same thing and I didn't know how to use the workaround @homanp

@homanp
Contributor Author

homanp commented May 1, 2023

@ambikaiyer29

When will this be fixed? I am facing the same issue

@waiyong

waiyong commented Aug 7, 2023

Me too. Looking forward to the fix or any guidance


dosubot bot commented Nov 6, 2023

Hi, @homanp! I'm Dosu, and I'm helping the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.

From what I understand, the issue you reported is related to an error that occurs when using ConversationBufferWindowMemory and trying to concatenate a tuple to a string. You mentioned that you were able to fix this by modifying the get_chat_history function. Other users like "kanukolluGVT", "tarek-kerbedj", "ambikaiyer29", and "waiyong" are also facing the same issue and are looking for a fix or guidance.

The good news is that you have already provided a solution by modifying the get_chat_history function. If you believe this issue is still relevant to the latest version of the LangChain repository, please let the LangChain team know by commenting on this issue. Otherwise, feel free to close the issue yourself. If no further action is taken, the issue will be automatically closed in 7 days.

Thank you for your contribution and for helping us improve LangChain! Let me know if you have any questions or need further assistance.

@dosubot dosubot bot added the "stale" label (issue has not had recent activity or appears to be solved; stale issues will be automatically closed) Nov 6, 2023
@dosubot dosubot bot closed this as not planned (won't fix, can't repro, duplicate, stale) Nov 13, 2023
@dosubot dosubot bot removed the "stale" label Nov 13, 2023