
Issue: Validation errors for ConversationalRetrievalChain (combine_docs_chain) #13607

Closed
0ENZO opened this issue Nov 20, 2023 · 3 comments
Labels
🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature Ɑ: models Related to LLMs or chat model modules

Comments


0ENZO commented Nov 20, 2023

Issue you'd like to raise.

Hello, I am working on a conversational chatbot; here is a snippet of the code:

# imports assumed for the snippet below (LangChain, Nov 2023 era)
from queue import Queue

from langchain.chains import ConversationalRetrievalChain, LLMChain
from langchain.chains.qa_with_sources import load_qa_with_sources_chain
from langchain.chat_models import ChatVertexAI
from langchain.llms import VertexAI
from langchain.prompts import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    PromptTemplate,
    SystemMessagePromptTemplate,
)

general_system_template = """You are a chatbot...

---
{summaries}"""

general_user_template = "Question: {question}"

messages = [
    SystemMessagePromptTemplate.from_template(general_system_template),
    HumanMessagePromptTemplate.from_template(general_user_template)
]

qa_prompt = ChatPromptTemplate(
    messages=messages,
    input_variables=['question', 'summaries']
)

q = Queue()

# QueueCallback is a custom streaming callback handler defined elsewhere
llm_chat = ChatVertexAI(
    temperature=0,
    model_name="chat-bison",
    streaming=True,
    callbacks=[QueueCallback(q)],
    verbose=False
)

# docsearch (a vector store) and sources are defined elsewhere
retriever = docsearch.as_retriever(
    search_type="similarity",
    search_kwargs={
        'k': 2,
        'filter': {'source': {'$in': sources}}
    }
)

llm_text = VertexAI(
    temperature=0,
    model_name="text-bison"
)

combine_docs_chain = load_qa_with_sources_chain(
    llm=llm_chat,
    chain_type="stuff",
    prompt=qa_prompt
)

condense_question_template = (
    """Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.
    Chat History: {chat_history}
    Follow Up Input: {question}"""
)

condense_question_prompt = PromptTemplate.from_template(condense_question_template)
condense_chain = LLMChain(
    llm=llm_text,
    prompt=condense_question_prompt,
    verbose=True
)

chain = ConversationalRetrievalChain(
    combine_docs_chain_=combine_docs_chain,
    retriever=retriever,
    question_generator=condense_chain
)

When running the code, I get the following error:

pydantic.error_wrappers.ValidationError: 2 validation errors for ConversationalRetrievalChain
combine_docs_chain
  field required (type=value_error.missing)
combine_docs_chain_
  extra fields not permitted (type=value_error.extra)

How can I solve this? Is there any way to get a more detailed error message?

Suggestion:

No response


dosubot bot commented Nov 20, 2023

🤖

Hello @0ENZO,

It's nice to meet you! I'm a bot here to assist you with your issue while we wait for a human maintainer to step in. I can help you solve bugs, answer questions, and guide you on your journey to becoming a contributor. I see you've encountered an issue with the LangChain repository; I'm currently reviewing it and will provide a more detailed response very soon. Hang tight!

@dosubot dosubot bot added Ɑ: models Related to LLMs or chat model modules 🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature labels Nov 20, 2023

dosubot bot commented Nov 20, 2023

🤖

Hello @0ENZO,

I'm a bot designed to assist you with issues related to the LangChain repository. I can help you debug problems, answer your queries, and guide you on how to contribute to this project. While we wait for a human maintainer to join us, I'm here to offer my assistance.

The error you're encountering is due to incorrect usage of the ConversationalRetrievalChain class. The combine_docs_chain is a required field in the ConversationalRetrievalChain class, and it seems like you're trying to pass combine_docs_chain_ as an argument, which is not a valid field for this class.
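For context, both messages come straight from pydantic's field validation (LangChain used pydantic v1 at the time). A minimal standalone sketch, unrelated to LangChain, that reproduces the same shape of error:

from pydantic import BaseModel, Extra

class Demo(BaseModel):
    combine_docs_chain: str

    class Config:
        extra = Extra.forbid  # LangChain chains forbid unknown fields

# Raises a ValidationError reporting both problems at once:
#   combine_docs_chain    field required (type=value_error.missing)
#   combine_docs_chain_   extra fields not permitted (type=value_error.extra)
Demo(combine_docs_chain_="x")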

To resolve these errors, you should provide a combine_docs_chain argument when creating an instance of ConversationalRetrievalChain, and ensure you're not providing a combine_docs_chain_ argument. Here's an example of how to correctly create an instance of ConversationalRetrievalChain:

combine_docs_chain = StuffDocumentsChain(...)  # any documents chain works here
retriever = ...  # any retriever
question_generator_chain = LLMChain(...)  # condenses follow-ups into standalone questions
chain = ConversationalRetrievalChain(
    combine_docs_chain=combine_docs_chain,  # note: no trailing underscore
    retriever=retriever,
    question_generator=question_generator_chain,
)
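Applied to the snippet in the question, the fix is a one-keyword rename, reusing the objects you already defined:

chain = ConversationalRetrievalChain(
    combine_docs_chain=combine_docs_chain,  # the chain built by load_qa_with_sources_chain
    retriever=retriever,
    question_generator=condense_chain,
)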

In your case, you're building a custom combine_docs_chain with load_qa_with_sources_chain, so the direct constructor above with the corrected keyword is the simplest fix. Alternatively, if you prefer the from_llm convenience constructor, note that combine_docs_chain_kwargs is forwarded as keyword arguments to load_qa_chain, so you would pass your custom prompt there rather than a prebuilt chain. Here's how you can do it:

qa = ConversationalRetrievalChain.from_llm(
    llm,
    retriever=vectordb.as_retriever(),
    return_source_documents=True,
    memory=memory,
    verbose=True,
    combine_docs_chain_kwargs={"prompt": qa_prompt},
)

Please note that you need to define your own llm, vectordb, memory, and qa_prompt.
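Once built, the chain is called with a question; with a memory object attached, the chat history is tracked automatically. A minimal, hypothetical invocation (the question text is made up, and since return_source_documents=True produces two outputs, the memory should be created with output_key="answer" so it knows which one to store):

# Hypothetical call; "question" is the chain's default input key
result = qa({"question": "What does the document say about pricing?"})
print(result["answer"])
for doc in result["source_documents"]:
    print(doc.metadata.get("source"))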

For more information, you can refer to similar solved issues in the LangChain repository.

I hope this helps! Let me know if you have any other questions.

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.


dosubot bot commented Feb 19, 2024

Hi, @0ENZO,

I'm helping the LangChain team manage their backlog and am marking this issue as stale. From what I understand, you encountered validation errors for the ConversationalRetrievalChain in the provided code, and Dosubot provided a detailed response explaining the incorrect usage of the ConversationalRetrievalChain class and offered guidance on resolving the errors. Additionally, Dosubot referenced similar solved issues in the LangChain repository for further information.

Could you please confirm if this issue is still relevant to the latest version of the LangChain repository? If it is, please let the LangChain team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.

Thank you!

@dosubot dosubot bot added the stale Issue has not had recent activity or appears to be solved. Stale issues will be automatically closed label Feb 19, 2024
@dosubot dosubot bot closed this as not planned (won't fix, can't repro, duplicate, stale) Feb 26, 2024
@dosubot dosubot bot removed the stale Issue has not had recent activity or appears to be solved. Stale issues will be automatically closed label Feb 26, 2024