
Iterating over LLM models does not work in LangChain #28

Open

yogeshhk opened this issue Apr 27, 2023 · 3 comments

@yogeshhk commented Apr 27, 2023

Can LLMChain objects be stored and iterated over?

from langchain.chains import LLMChain
from langchain.llms import HuggingFaceHub, OpenAI

llms = [{'name': 'OpenAI', 'model': OpenAI(temperature=0)},
        {'name': 'Flan', 'model': HuggingFaceHub(repo_id="google/flan-t5-xl", model_kwargs={"temperature": 1e-10})}]

for llm_dict in llms:
    llm_name = llm_dict['name']
    llm_model = llm_dict['model']
    chain = LLMChain(llm=llm_model, prompt=prompt)

The first LLM model runs fine, but the second iteration raises the following error:

    chain = LLMChain(llm=llm_model, prompt=prompt)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for LLMChain
prompt
  value is not a valid dict (type=type_error.dict)

Am I missing something in the dictionary declarations?

More details at https://stackoverflow.com/questions/76110329/iterating-over-llm-models-does-not-work-in-langchain
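For context, LLMChain appears to validate its fields with pydantic, and its prompt field expects a prompt template object rather than a plain string, which would explain the "value is not a valid dict" coercion error. A minimal sketch of what I believe LLMChain expects (the template below is a placeholder, not my real prompt):

from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

# Placeholder template: LLMChain's `prompt` must be a PromptTemplate
# (or another BasePromptTemplate), not a str, or pydantic rejects it.
prompt = PromptTemplate(
    input_variables=["question"],
    template="Q: {question}\nA:",
)

chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
print(chain.run(question="What is LangChain?"))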

@TechnoRahmon commented May 7, 2023

I have a similar situation. Here is where I store my llm object in a separate file:

from langchain.llms import OpenAI

# Create an instance of OpenAI LLM with the desired configuration
llm_davinci = OpenAI(
    model_name=models_names["completions-davinci"],
    temperature=0,
    max_tokens=256,
    top_p=1.0,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    n=1,
    best_of=1,
    request_timeout=None
)

Then I use the llm_davinci instance in another function like this:

def ask_llm(query: str, filename: str):

    # prepare the prompt (note: .format() returns a plain string here)
    prompt = code_assistance.format(context="this is a test", command=query)
    tokens = tiktoken_len(prompt)
    print(f"prompt  : {prompt}")
    print(f"prompt tokens : {tokens}")

    # connect to the LLM
    llm_chain = LLMChain(prompt=prompt, llm=llm_davinci)

    # run the LLM
    with get_openai_callback() as cb:
        response = llm_chain.run()

    return jsonify({'query': query,
                    'response': str(response),
                    'usage': cb})

The issue is with this line:

    # connect to the LLM
    llm_chain = LLMChain(prompt=prompt, llm=llm_davinci)

Error:

    llm_chain = LLMChain(prompt=prompt, llm=llm_davinci)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for LLMChain
prompt
  value is not a valid dict (type=type_error.dict)

Any idea how to solve this?

@yogeshhk (Author) commented May 8, 2023

@TechnoRahmon in my case the problem was confusion around the "prompt" variable... try renaming "prompt" inside ask_llm() to something else, like "llm_prompt".

@TechnoRahmon commented May 9, 2023

@yogeshhk Thank you for replying.

Actually, it has been solved by feeding the prompt to the LLMChain as a PromptTemplate. My issue was that I passed the prompt as a string to the LLMChain; once I changed it to a PromptTemplate, it worked:

    # prepare the prompt
    prompt = PromptTemplate(
        input_variables=give_assistance_input_variables,
        template=give_assistance_prompt
    )
    # connect to the LLM
    llm_chain = LLMChain(prompt=prompt, llm=llm_davinci)
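For anyone hitting the same error, here is a self-contained sketch of the working pattern; the template text and variable names below are placeholders standing in for my real give_assistance_prompt and give_assistance_input_variables:

from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

# Placeholder prompt pieces; the real ones are application-specific.
give_assistance_input_variables = ["context", "command"]
give_assistance_prompt = "Context: {context}\n\nTask: {command}\n\nAnswer:"

# prepare the prompt as a PromptTemplate (not a plain string)
prompt = PromptTemplate(
    input_variables=give_assistance_input_variables,
    template=give_assistance_prompt,
)

# connect to the LLM
llm_chain = LLMChain(prompt=prompt, llm=OpenAI(temperature=0))

# run the LLM; the template's input variables are passed as keyword arguments
print(llm_chain.run(context="this is a test", command="explain the error"))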
