How can I configure the final prompt of ConversationalRetrievalChain? #9818
-
I am using a ConversationalRetrievalChain and would like to change the final prompt of the chain. How would I go about that?

I understand that the ConversationalRetrievalChain calls the StuffDocumentsChain at some point, which collates documents from the retriever. This context is then passed to an LLMChain for generating the final answer. It is the prompt of this last step that I want to change. I found out that the prompt for that step is located here: langchain/libs/langchain/langchain/chains/question_answering/stuff_prompt.py

If I change that prompt in the source code, I get exactly what I want. I am trying to understand, however, how I can pass the prompt I need as an argument to ConversationalRetrievalChain in my Python code without changing the source code of LangChain. Thank you very much for any help with this!
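For context, the data flow described above can be sketched in plain Python. This is only an illustration of where the "final prompt" sits in the pipeline; none of these helper names are LangChain APIs:

```python
# Sketch of the ConversationalRetrievalChain data flow:
# retriever -> stuff documents into one context string -> final prompt -> LLM.
# FINAL_PROMPT stands in for the template defined in stuff_prompt.py.

FINAL_PROMPT = (
    "Use the following pieces of context to answer the question.\n"
    "{context}\n"
    "Question: {question}\n"
    "Helpful Answer:"
)

def stuff_documents(docs):
    """Collate the retrieved documents into a single context string."""
    return "\n\n".join(docs)

def build_final_prompt(docs, question, template=FINAL_PROMPT):
    """This is the step whose template the question wants to override."""
    return template.format(context=stuff_documents(docs), question=question)

prompt = build_final_prompt(["doc one", "doc two"], "What is X?")
print(prompt)
```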
Replies: 2 comments 8 replies
-
🤖 Hello,

Based on the information you provided and the context from the LangChain repository, there are a couple of ways you can change the final prompt of the `ConversationalRetrievalChain` without modifying the LangChain source code.

1. You can pass your custom prompt to the `ConversationalRetrievalChain` via the `combine_docs_chain_kwargs` argument when you instantiate the chain with `from_llm`:

```python
ConversationalRetrievalChain.from_llm(llm, retriever, combine_docs_chain_kwargs={"prompt": your_prompt})
```

2. Alternatively, you can use `load_qa_chain` with memory and a custom prompt. Here is an example:

```python
from langchain.chains.question_answering import load_qa_chain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate

# {Your_Prompt} is a placeholder: replace it with your own instructions.
template = """
{Your_Prompt}

CONTEXT:
{context}

QUESTION:
{query}

CHAT HISTORY:
{chat_history}

ANSWER:
"""

prompt = PromptTemplate(input_variables=["chat_history", "query", "context"], template=template)
memory = ConversationBufferMemory(memory_key="chat_history", input_key="query")
chain = load_qa_chain(ChatOpenAI(temperature=0), chain_type="stuff", memory=memory, prompt=prompt)
```
A typical `from_llm` instantiation of the retrieval chain looks like:

```python
qa = ConversationalRetrievalChain.from_llm(
    llm,
    retriever=vectorstore.as_retriever(search_kwargs={"k": 3}),
    return_source_documents=True,
    verbose=True,
    condense_question_llm=llm,
    chain_type="stuff",
    get_chat_history=lambda h: h,
)
```
3. After the chain has been created, you can also define your own system prompt and replace the first message of the combine-documents prompt directly:

```python
qa.combine_docs_chain.llm_chain.prompt.messages[0] = SystemMessagePromptTemplate.from_template(sys_prompt)
```

Please note that these are general examples and might not work as is, because the actual implementation of the chain can differ between LangChain versions.

I hope this helps! If you have any other questions, feel free to ask.

Best,
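To see why the `combine_docs_chain_kwargs` route works, here is a minimal plain-Python sketch of the forwarding pattern (hypothetical stand-in classes, not LangChain's real code): the factory simply unpacks the dict into the inner chain's constructor, so `{"prompt": your_prompt}` overrides the default stuff prompt.

```python
DEFAULT_PROMPT = "Answer using the context:\n{context}\nQuestion: {question}"

class StuffDocumentsChainSketch:
    """Stand-in for the inner combine-documents chain."""
    def __init__(self, prompt=DEFAULT_PROMPT):
        self.prompt = prompt

class RetrievalChainSketch:
    """Stand-in for ConversationalRetrievalChain."""
    def __init__(self, combine_docs_chain):
        self.combine_docs_chain = combine_docs_chain

    @classmethod
    def from_llm(cls, combine_docs_chain_kwargs=None):
        # The kwargs dict is forwarded verbatim to the inner chain's
        # constructor -- this is the hook that lets callers override
        # the final prompt without touching the library source.
        kwargs = combine_docs_chain_kwargs or {}
        return cls(StuffDocumentsChainSketch(**kwargs))

qa = RetrievalChainSketch.from_llm(
    combine_docs_chain_kwargs={"prompt": "MY PROMPT:\n{context}\n{question}"}
)
print(qa.combine_docs_chain.prompt)  # → MY PROMPT: ...
```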
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
-
@mdsaifk I've tried your solution. It works fine, but there is a bit of a problem: it tends to loop back on itself, producing multiple questions and answers in a single response. Refer to the picture below. Do you know why this is happening?
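One hedged workaround for this looping behavior (an editorial suggestion, not an official LangChain fix): because the template embeds the chat history and ends with `ANSWER:`, the model may keep writing the transcript on its own. You can post-process the output and truncate it at the first point where the model starts a new question:

```python
def truncate_at_markers(text, markers=("\nQUESTION:", "\nCHAT HISTORY:", "\nHuman:")):
    """Keep only the first answer: cut the output at the earliest marker
    where the model starts continuing the transcript by itself."""
    cut = len(text)
    for marker in markers:
        i = text.find(marker)
        if i != -1:
            cut = min(cut, i)
    return text[:cut].strip()

raw = "Paris is the capital of France.\nQUESTION: What about Spain?\nANSWER: Madrid."
print(truncate_at_markers(raw))  # → Paris is the capital of France.
```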