
Issue: <Trouble with langchain Library: Error in LLMChain Validation> #14401

Closed
GRPUI opened this issue Dec 7, 2023 · 1 comment
Labels
🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature Ɑ: models Related to LLMs or chat model modules 🤖:question A specific question about the codebase, product, project, or how to use a feature

Comments


GRPUI commented Dec 7, 2023

Issue you'd like to raise.

Hello community,

I'm currently working on a project that involves using the langchain library for natural language processing. I'm encountering an issue with the LLMChain class, and I'm hoping someone can help me troubleshoot.

I've initialized a Hugging Face pipeline and constructed a prompt using PromptTemplate. However, when I attempt to load a QA chain using the load_qa_chain function, I get a ValidationError related to the Runnable type. The error suggests that an instance of Runnable is expected, but it seems there's a mismatch.

Here's a simplified version of my code:

```python
from langchain.chains.question_answering import load_qa_chain
from langchain.llms import HuggingFacePipeline
from langchain.prompts import PromptTemplate

hf = HuggingFacePipeline.from_model_id(
    model_id="ai-forever/rugpt3large_based_on_gpt2",
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 10},
)

prompt = """Question: {question}

Answer: {text}"""

# The next line is where the error occurs
chain = load_qa_chain(hf(prompt=prompt), chain_type="stuff")
```

which raises:

```
ValidationError: 2 validation errors for LLMChain
llm
  instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)
llm
  instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)
```
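To show what the validator seems to be objecting to, here is a stdlib-only sketch. The class names are stand-ins, not the real langchain types: calling the LLM runs generation and returns a plain string, and a string is not a Runnable.

```python
# Stand-ins for the real langchain types, to illustrate the failure mode only.
class Runnable:
    """Placeholder for langchain's Runnable base class."""

class FakeLLM(Runnable):
    def __call__(self, prompt):
        # Calling the LLM *executes* it and returns generated text (a str).
        return f"generated: {prompt[:20]}"

def load_qa_chain(llm, chain_type="stuff"):
    # Mimics the pydantic arbitrary-type check behind the ValidationError.
    if not isinstance(llm, Runnable):
        raise TypeError("instance of Runnable expected")
    return {"llm": llm, "chain_type": chain_type}

hf = FakeLLM()
output = hf(prompt="Question: ...")   # a plain str, not a Runnable
try:
    load_qa_chain(output)             # same mistake as hf(prompt=prompt)
except TypeError as exc:
    print(exc)                        # -> instance of Runnable expected

chain = load_qa_chain(hf)             # passing the instance itself works
```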

I have checked the documentation and versions of the libraries, but I'm still having trouble understanding and resolving the issue. Could someone please provide guidance on what might be causing this ValidationError and how I can address it?

Thank you in advance for your help!

Suggestion:

No response


dosubot bot commented Dec 7, 2023

🤖

Hello @GRPUI,

I'm Dosu, a friendly bot here to assist you while we're waiting for a human maintainer. I can help you troubleshoot bugs, answer questions about the project, and guide you toward becoming a contributor. Don't hesitate to reach out if you need anything; I'm here to help!

Based on the error message you're receiving, it seems like the load_qa_chain function is expecting an instance of Runnable, but it's not getting one. This could be due to the way you're passing the HuggingFacePipeline instance to the function.

In your code, you're passing hf(prompt=prompt) to load_qa_chain, which is likely not returning a Runnable instance. Instead, you should be passing the HuggingFacePipeline instance directly. Here's how you can modify your code:

```python
# Note: load_qa_chain lives in langchain.chains.question_answering, and
# HuggingFacePipeline comes from langchain.llms (not from transformers).
from langchain.chains.question_answering import load_qa_chain
from langchain.llms import HuggingFacePipeline
from langchain.prompts import PromptTemplate

hf = HuggingFacePipeline.from_model_id(
    model_id="ai-forever/rugpt3large_based_on_gpt2",
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 10},
)

prompt = """Question: {question}

Answer: {text}"""

# Pass the HuggingFacePipeline instance directly
chain = load_qa_chain(hf, chain_type="stuff")
```

Also, please ensure that the chain_type you pass to load_qa_chain is one of the supported values: "stuff", "map_reduce", "refine", or "map_rerank". The "stuff" type you're using is valid; it simply concatenates all input documents into a single prompt.
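A small sketch of the kind of chain_type validation load_qa_chain performs; the function name here is an illustrative stand-in, but the four chain_type strings are the documented ones.

```python
# The documented chain_type values accepted by load_qa_chain.
VALID_CHAIN_TYPES = ("stuff", "map_reduce", "refine", "map_rerank")

def validate_chain_type(chain_type):
    """Raise ValueError for an unsupported chain_type, mirroring langchain's check."""
    if chain_type not in VALID_CHAIN_TYPES:
        raise ValueError(
            f"Got unsupported chain type: {chain_type}. "
            f"Should be one of {VALID_CHAIN_TYPES}"
        )
    return chain_type

print(validate_chain_type("stuff"))   # -> stuff
```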

If you're still encountering issues, it would be helpful to know more about the load_qa_chain function and its expected inputs. Unfortunately, I wasn't able to find detailed information about this function in the LangChain repository. If you could provide more context about this function and how you're using it, I might be able to provide more specific guidance.

Lastly, I found a similar issue in the LangChain repository: OpenAI instance of Runnable expected. The solution in this case was to import the correct package. While this might not directly solve your issue, it could provide some insight into how the Runnable type is used in LangChain.
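One quick way to catch a wrong-package import like that is to print where the class of the object you're holding actually comes from. This is a generic stdlib diagnostic, not langchain-specific; the module path shown in the comment is illustrative.

```python
def qualified_name(obj):
    """Return the module-qualified class name of obj, e.g. 'builtins.str'."""
    cls = type(obj)
    return f"{cls.__module__}.{cls.__qualname__}"

# With the real objects you would expect something like (illustrative path):
#   qualified_name(hf)  ->  "langchain.llms...HuggingFacePipeline"
# If it reports a transformers.* class instead, the import came from the
# wrong package and the Runnable check will fail.
print(qualified_name("hello"))        # -> builtins.str
```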

I hope this helps! If you have any more questions or need further clarification, feel free to ask.

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

@dosubot dosubot bot added Ɑ: models Related to LLMs or chat model modules 🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature 🤖:question A specific question about the codebase, product, project, or how to use a feature labels Dec 7, 2023
@dosubot dosubot bot added the stale Issue has not had recent activity or appears to be solved. Stale issues will be automatically closed label Mar 10, 2024
@dosubot dosubot bot closed this as not planned Won't fix, can't repro, duplicate, stale Mar 17, 2024
@dosubot dosubot bot removed the stale Issue has not had recent activity or appears to be solved. Stale issues will be automatically closed label Mar 17, 2024