
Issues with ConversationalRetrievalQA chain #1697

Closed
EmilioJD opened this issue Jun 19, 2023 · 3 comments

Comments

@EmilioJD

EmilioJD commented Jun 19, 2023

Hello, I'm working on implementing a website using the ConversationalRetrievalQA chain, but I keep running into errors with it. It works for retrieving documents from the database (I am using Supabase for the VectorStore), but it doesn't seem to load the chat history: it can't reference earlier parts of the conversation, even though I am successfully passing a chat history into the buffer memory. I was able to use the standard ConversationChain with no problem. I wanted to know if I am approaching this incorrectly, but I'm starting to believe it's a ConversationalRetrievalQA chain issue. Code snippet below; let me know if you would like more context:

```js
const memory = new BufferMemory({
  memoryKey: "chat_history", // Must be set to "chat_history"
  chatHistory: chatHistory,
  returnMessages: true,
  inputKey: "question", // The key for the input to the chain
  outputKey: "text", // The key for the final conversational output of the chain
});

const chain = ConversationalRetrievalQAChain.fromLLM(
  model,
  vectorStore.asRetriever(),
  {
    memory: memory,
  },
);

let question = userQuery;
const response = await chain.call({ question });
```
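For anyone puzzling over what those keys do: the sketch below is a toy stand-in (not LangChain's actual `BufferMemory` implementation; the class and method names here are made up for illustration) showing how `memoryKey`, `inputKey`, and `outputKey` route values between a chain and the stored history.

```typescript
// Toy illustration of BufferMemory-style key routing (NOT LangChain's real code):
// a chain reads prior turns from `memoryKey`, takes the new input from `inputKey`,
// and saves the output found under `outputKey` back into the buffer.
type Message = { role: "human" | "ai"; content: string };

class ToyBufferMemory {
  private history: Message[] = [];
  constructor(
    private memoryKey: string,
    private inputKey: string,
    private outputKey: string,
  ) {}

  // What the chain sees when it loads memory before a call.
  loadMemoryVariables(): Record<string, Message[]> {
    return { [this.memoryKey]: [...this.history] };
  }

  // Called after the chain runs: persist this turn's input and output.
  saveContext(
    inputs: Record<string, string>,
    outputs: Record<string, string>,
  ): void {
    this.history.push({ role: "human", content: inputs[this.inputKey] });
    this.history.push({ role: "ai", content: outputs[this.outputKey] });
  }
}

const memory = new ToyBufferMemory("chat_history", "question", "text");
memory.saveContext({ question: "What is foo?" }, { text: "Foo is red." });
const vars = memory.loadMemoryVariables();
console.log(vars.chat_history.length); // 2: one human turn, one AI turn
```

The point is that if the chain writes its answer under a key other than the configured `outputKey`, nothing gets saved, which is one way memory can silently stay empty.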

@anthonycoded

Did you get a TypeScript error when adding the memory property? I'm running into some weird issues with ConversationalRetrievalQAChain as well.

code:

```js
const chain = ConversationalRetrievalQAChain.fromLLM(model, retriever, {
  memory: new BufferMemory({
    memoryKey: "chat_history", // Must be set to "chat_history"
  }),
});
```

Error:

```
Argument of type '{ memory: BufferMemory; }' is not assignable to parameter of type 'Partial<Omit<RetrievalQAChainInput, "combineDocumentsChain" | "index">> & StuffQAChainParams'.
```

@brianyun

I am having this issue as well.
I implemented my memory in ConversationalRetrievalQA as shown in https://js.langchain.com/docs/modules/chains/index_related_chains/conversational_retrieval,
but my chat interface shows no awareness of the memory.

```js
const chain = ConversationalRetrievalQAChain.fromLLM(
  streamingModel,
  vectorstore.asRetriever(),
  {
    qaTemplate: QA_PROMPT,
    questionGeneratorTemplate: CONDENSE_PROMPT,
    returnSourceDocuments: false,
    memory: new BufferMemory({
      memoryKey: 'chat_history',
    }),
  },
);
```

@EmilioJD
Author

EmilioJD commented Jun 20, 2023

> Did you get a TypeScript error when adding the memory property? I'm running into some weird issues with ConversationalRetrievalQAChain as well.
>
> code:
>
> ```js
> const chain = ConversationalRetrievalQAChain.fromLLM(model, retriever, {
>   memory: new BufferMemory({
>     memoryKey: "chat_history", // Must be set to "chat_history"
>   }),
> });
> ```
>
> Error: `Argument of type '{ memory: BufferMemory; }' is not assignable to parameter of type 'Partial<Omit<RetrievalQAChainInput, "combineDocumentsChain" | "index">> & StuffQAChainParams'.`

I did not receive this error; perhaps make sure you are on the latest version of LangChain, since memory support was only recently introduced to this chain.
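A quick way to check and upgrade (assuming an npm project with the package installed as `langchain`):

```shell
# Show the currently installed version of the package
npm ls langchain

# Upgrade to the latest published release
npm install langchain@latest
```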

> I am having this issue as well. I implemented my memory in ConversationalRetrievalQA as shown in https://js.langchain.com/docs/modules/chains/index_related_chains/conversational_retrieval, but my chat interface shows no awareness of the memory.
>
> ```js
> const chain = ConversationalRetrievalQAChain.fromLLM(
>   streamingModel,
>   vectorstore.asRetriever(),
>   {
>     qaTemplate: QA_PROMPT,
>     questionGeneratorTemplate: CONDENSE_PROMPT,
>     returnSourceDocuments: false,
>     memory: new BufferMemory({
>       memoryKey: 'chat_history',
>     }),
>   },
> );
> ```

On that note, the way I fixed my problem was by looking at the PR in which built-in memory was introduced to the ConversationalRetrievalQAChain and realizing that the mock dataset there is much richer than the one given in the Supabase Vector Store docs.

```js
[
  "Mitochondria are the powerhouse of the cell",
  "Foo is red",
  "Bar is red",
  "Buildings are made out of brick",
  "Mitochondria are made of lipids",
],
[{ id: 2 }, { id: 1 }, { id: 3 }, { id: 4 }, { id: 5 }],
```

vs

```js
["Hello world", "Bye bye", "What's this?"],
[{ id: 2 }, { id: 1 }, { id: 3 }],
```

Maybe a more salient dataset is necessary for the RetrievalQA chain to work well. Best of luck.
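To see why the richer dataset matters, here is a toy word-overlap retriever, a crude stand-in for real embedding similarity (the function names and scoring are made up for illustration): a query about mitochondria has plenty of terms to match in the rich dataset and almost nothing in the thin one.

```typescript
// Toy retriever: score documents by how many query words they share.
// Real vector stores use embedding similarity, but the intuition carries over:
// documents with distinctive content give the retriever something to match.
function score(query: string, doc: string): number {
  const queryWords = new Set(query.toLowerCase().split(/\W+/));
  return doc.toLowerCase().split(/\W+/).filter((w) => queryWords.has(w)).length;
}

function retrieve(query: string, docs: string[]): string {
  return docs.reduce((best, d) => (score(query, d) > score(query, best) ? d : best));
}

const richDocs = [
  "Mitochondria are the powerhouse of the cell",
  "Foo is red",
  "Buildings are made out of brick",
  "Mitochondria are made of lipids",
];
const thinDocs = ["Hello world", "Bye bye", "What's this?"];

const query = "what are mitochondria made of";
console.log(retrieve(query, richDocs)); // "Mitochondria are made of lipids"
console.log(score(query, thinDocs[0])); // 0 — nothing to match against
```

With the thin dataset, every document scores near zero, so whatever gets retrieved contributes little useful context to the chain's answer.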
