
llama Embeddings error #6980

Closed
sirrrik opened this issue Jun 30, 2023 · 3 comments
Labels
🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature

Comments

sirrrik commented Jun 30, 2023

System Info

Traceback (most recent call last):
File "/Users/apple/Desktop/LLM/gpt4all_langchain_chatbots/teacher/lib/python3.10/site-packages/langchain/embeddings/llamacpp.py", line 85, in validate_environment
from llama_cpp import Llama
ModuleNotFoundError: No module named 'llama_cpp'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/Users/apple/Desktop/LLM/gpt4all_langchain_chatbots/mine.py", line 3, in <module>
llama = LlamaCppEmbeddings(model_path="./models/ggml-gpt4all-j.bin")
File "pydantic/main.py", line 339, in pydantic.main.BaseModel.__init__
File "pydantic/main.py", line 1102, in pydantic.main.validate_model
File "/Users/apple/Desktop/LLM/gpt4all_langchain_chatbots/teacher/lib/python3.10/site-packages/langchain/embeddings/llamacpp.py", line 89, in validate_environment
raise ModuleNotFoundError(
ModuleNotFoundError: Could not import llama-cpp-python library. Please install the llama-cpp-python library to use this embedding model: pip install llama-cpp-python
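A quick way to tell whether the package is genuinely missing or merely failing to import for some other reason (a diagnostic sketch, not part of the original report) is to probe for it without triggering its import:

```python
import importlib.util

# Probe for llama_cpp without running its (possibly failing) import.
spec = importlib.util.find_spec("llama_cpp")
if spec is None:
    print("llama_cpp is not installed; try: pip install llama-cpp-python")
else:
    # The package is present, so a later ImportError has another cause
    # (e.g. a broken native dependency) rather than a missing install.
    print(f"llama_cpp found at {spec.origin}")
```

If `find_spec` returns a spec but `LlamaCppEmbeddings` still raises, the ImportError is coming from inside the package rather than from a missing install.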

Who can help?

@sirrrik

Information

  • The official example notebooks/scripts
  • My own modified scripts

Related Components

  • LLMs/Chat Models
  • Embedding Models
  • Prompts / Prompt Templates / Prompt Selectors
  • Output Parsers
  • Document Loaders
  • Vector Stores / Retrievers
  • Memory
  • Agents / Agent Executors
  • Tools / Toolkits
  • Chains
  • Callbacks/Tracing
  • Async

Reproduction

pip install the latest LangChain package from PyPI on a Mac

Expected behavior


@dosubot dosubot bot added the 🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature label Jun 30, 2023

dosubot bot commented Jun 30, 2023

Answer generated by a 🤖

Answer

The error message indicates that the 'llama_cpp' module is not found. This is likely because the 'llama-cpp-python' library is not installed in your environment.

You can install it using pip:

!pip install llama-cpp-python

After installing the library, you should be able to import the 'LlamaCpp' class from the 'langchain.llms' module and use it as shown in the context shared:

from langchain.llms import LlamaCpp

# Example usage (replace the path with your local GGML model file)
llm = LlamaCpp(model_path="./models/ggml-gpt4all-j.bin")
output = llm("Say foo:")

If you want to use the library with a BLAS backend for faster processing, you can install it with the following command:

!CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python

If you have already installed a CPU-only version of the package, you need to reinstall it from scratch:

!CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python

Please ensure that the path to the model file is correct. If the problem persists, it might be due to an issue with the installation of the 'llama-cpp-python' library or a compatibility issue with your Python version.
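As a sketch of the path check mentioned above (using the model path from the traceback; adjust it to your setup):

```python
from pathlib import Path

# Path taken from the traceback above; adjust to your local model file.
model_path = Path("./models/ggml-gpt4all-j.bin")
if not model_path.is_file():
    print(f"Model file not found: {model_path.resolve()}")
```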

In the context shared, it seems you are trying to use a model from HuggingFace. If you want to use a model from HuggingFace, you can use the 'HuggingFaceHub' class from the 'langchain' module:

from langchain import HuggingFaceHub

repo_id = "google/flan-t5-xl"  # See https://huggingface.co/models?pipeline_tag=text-generation&sort=downloads for some other options

llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={"temperature": 0, "max_length": 64})

And then you can use this model in a 'LLMChain':

from langchain import PromptTemplate, LLMChain

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "Who won the FIFA World Cup in the year 1994? "

print(llm_chain.run(question))

This response is meant to be useful, save you time, and share context. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.


rhubarb commented Aug 25, 2023

I get this too... and the llama-cpp-python package is installed.
The issue turned out to be with sqlite on Apple silicon (MacBook Pro M2). The solution was found here:
https://stackoverflow.com/a/75821180/20479

TL;DR:
conda install libsqlite --force-reinstall -y

The way LangChain hides this exception is a bug IMO. I was only able to fix this by reading the source code and seeing that it tries to import from llama_cpp here in llamacpp.py:

        try:
            from llama_cpp import Llama

            values["client"] = Llama(model_path, **model_params)
        except ImportError:
            raise ModuleNotFoundError(
                "Could not import llama-cpp-python library. "
                "Please install the llama-cpp-python library to "
                "use this embedding model: pip install llama-cpp-python"
            )

and raises that exception no matter what the cause of the ImportError was.
It should surface the original cause, and soften the instruction to say that you might need to install llama-cpp-python.
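A minimal sketch of that suggestion, using a hypothetical helper name and Python's `raise ... from` chaining so the real failure survives in the traceback:

```python
import importlib


def import_or_explain(module_name: str):
    """Import module_name, preserving the original failure as __cause__.

    Hypothetical helper sketching the fix suggested above: the re-raised
    exception keeps the real ImportError attached instead of hiding it.
    """
    try:
        return importlib.import_module(module_name)
    except ImportError as e:
        raise ModuleNotFoundError(
            f"Could not import {module_name!r}. If it is not installed, "
            f"you might need: pip install {module_name}. "
            f"Original error: {e}"
        ) from e
```

With `from e`, the traceback prints the underlying cause (e.g. the sqlite failure) above the re-raised error instead of swallowing it.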


dosubot bot commented Nov 24, 2023

Hi, @sirrrik! I'm Dosu, and I'm helping the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.

Based on the information provided, it seems that you encountered a ModuleNotFoundError when trying to import the llama_cpp module. The suggested solution is to install the llama-cpp-python library using pip install llama-cpp-python. Another user also faced a similar issue on Apple silicon and resolved it by running conda install libsqlite --force-reinstall -y. They also mentioned that improving the exception handling in the code could provide more helpful error messages.

Before we close this issue, we wanted to check if it is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on this issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.

Thank you for your understanding and contribution to the LangChain project! Let us know if you have any further questions or concerns.

@dosubot dosubot bot added the stale Issue has not had recent activity or appears to be solved. Stale issues will be automatically closed label Nov 24, 2023
@dosubot dosubot bot closed this as not planned Won't fix, can't repro, duplicate, stale Dec 1, 2023
@dosubot dosubot bot removed the stale Issue has not had recent activity or appears to be solved. Stale issues will be automatically closed label Dec 1, 2023