Llama Embeddings error #6980
Comments
Answer generated by a 🤖

Answer

The error message indicates that the `llama_cpp` module is not found. This is likely because the `llama-cpp-python` library is not installed in your environment. You can install it using pip:

```shell
pip install llama-cpp-python
```

After installing the library, you should be able to import the `LlamaCpp` class from the `langchain.llms` module and use it as shown in the context shared:

```python
from langchain.llms import LlamaCpp

# Example usage
model_path = get_model()
llm = LlamaCpp(model_path=model_path)
output = llm("Say foo:")
```

If you want to use the library with a BLAS backend for faster processing, you can install it with the following command:

```shell
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python
```

If you have already installed a CPU-only version of the package, you need to reinstall it from scratch:

```shell
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python
```

Please ensure that the path to the model file is correct. If the problem persists, it might be due to an issue with the installation of the `llama-cpp-python` library or a compatibility issue with your Python version.

In the context shared, it seems you are trying to use a model from Hugging Face. If you want to use a model from Hugging Face, you can use the `HuggingFaceHub` class from the `langchain` module:

```python
from langchain import HuggingFaceHub

repo_id = "google/flan-t5-xl"  # See https://huggingface.co/models?pipeline_tag=text-generation&sort=downloads for some other options
llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={"temperature": 0, "max_length": 64})
```

And then you can use this model in an `LLMChain`:

```python
from langchain import PromptTemplate, LLMChain

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "Who won the FIFA World Cup in the year 1994?"
print(llm_chain.run(question))
```

Sources

This response is meant to be useful, save you time, and share context. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
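For the embeddings case this issue is actually about, the same fix applies to `LlamaCppEmbeddings`. A minimal sketch once `llama-cpp-python` is installed; the model path is a placeholder and must point to a model file that your installed `llama-cpp-python` build can load:

```python
from langchain.embeddings import LlamaCppEmbeddings

# Placeholder path -- substitute a local model compatible with your
# installed llama-cpp-python version.
embeddings = LlamaCppEmbeddings(model_path="./models/ggml-model-q4_0.bin")

# Embed a single query string and a small batch of documents.
query_vector = embeddings.embed_query("What is the capital of France?")
doc_vectors = embeddings.embed_documents(["Paris is the capital of France."])
print(len(query_vector), len(doc_vectors))
```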
I get this too... and the llama-cpp-python package is installed.

TL;DR: the way LangChain hides this exception is a bug IMO. I was only able to fix this by reading the source code and seeing that it tries to import from `llama_cpp` here in `llamacpp.py`, and it throws that generic exception no matter what the actual cause of the `ImportError` was.
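To make the comment above concrete, the import guard in `validate_environment` has roughly the shape sketched below. This is a paraphrase of the pattern described in the comment and visible in the traceback, not the exact LangChain source:

```python
# Rough shape of the guard in langchain/embeddings/llamacpp.py (paraphrased).
try:
    from llama_cpp import Llama
except ImportError:
    # Any ImportError ends up here -- including failures raised *inside*
    # llama_cpp, e.g. a broken native build -- so the original cause is
    # replaced by a generic "please install" message.
    raise ModuleNotFoundError(
        "Could not import llama-cpp-python library. "
        "Please install the llama-cpp-python library to use this embedding "
        "model: pip install llama-cpp-python"
    )
```

Importing the module directly in the same interpreter (`python -c "import llama_cpp"`) surfaces the real error instead of the masked one.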
Hi, @sirrrik! I'm Dosu, and I'm helping the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.

Based on the information provided, it seems that you encountered a `ModuleNotFoundError` for the `llama_cpp` module when using `LlamaCppEmbeddings`.

Before we close this issue, we wanted to check if it is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on this issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.

Thank you for your understanding and contribution to the LangChain project! Let us know if you have any further questions or concerns.
System Info
```
Traceback (most recent call last):
  File "/Users/apple/Desktop/LLM/gpt4all_langchain_chatbots/teacher/lib/python3.10/site-packages/langchain/embeddings/llamacpp.py", line 85, in validate_environment
    from llama_cpp import Llama
ModuleNotFoundError: No module named 'llama_cpp'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/apple/Desktop/LLM/gpt4all_langchain_chatbots/mine.py", line 3, in <module>
    llama = LlamaCppEmbeddings(model_path="./models/ggml-gpt4all-j.bin")
  File "pydantic/main.py", line 339, in pydantic.main.BaseModel.__init__
  File "pydantic/main.py", line 1102, in pydantic.main.validate_model
  File "/Users/apple/Desktop/LLM/gpt4all_langchain_chatbots/teacher/lib/python3.10/site-packages/langchain/embeddings/llamacpp.py", line 89, in validate_environment
    raise ModuleNotFoundError(
ModuleNotFoundError: Could not import llama-cpp-python library. Please install the llama-cpp-python library to use this embedding model: pip install llama-cpp-python
```
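One common cause of this exact symptom when `llama-cpp-python` appears to be installed is an environment mismatch: `pip` installed the package into one interpreter while the script runs under another (the traceback shows a virtualenv named `teacher`). A quick check using only the standard library:

```python
import importlib.util
import sys

# Which interpreter is actually running this script?
print(sys.executable)

# Is llama_cpp visible from this interpreter? None means it is not
# installed in this environment.
print(importlib.util.find_spec("llama_cpp"))
```

If `find_spec` prints `None`, installing with the same interpreter (`python -m pip install llama-cpp-python`) avoids the mismatch.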
Who can help?
@sirrrik
Information
Related Components
Reproduction
pip install the latest LangChain package from PyPI on macOS; a minimal script that triggers the error is sketched below.
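Based on the traceback above, this two-line script reproduces the failure. The model path is the one from the traceback and is only an example; any path reproduces the error, since the import check in `validate_environment` runs before the model file is opened:

```python
from langchain.embeddings import LlamaCppEmbeddings

# Raises ModuleNotFoundError during validation when llama-cpp-python is
# missing from the active environment.
llama = LlamaCppEmbeddings(model_path="./models/ggml-gpt4all-j.bin")
```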
Expected behavior
```
Traceback (most recent call last):
  File "/Users/apple/Desktop/LLM/gpt4all_langchain_chatbots/teacher/lib/python3.10/site-packages/langchain/embeddings/llamacpp.py", line 85, in validate_environment
    from llama_cpp import Llama
ModuleNotFoundError: No module named 'llama_cpp'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/apple/Desktop/LLM/gpt4all_langchain_chatbots/mine.py", line 3, in <module>
    llama = LlamaCppEmbeddings(model_path="./models/ggml-gpt4all-j.bin")
  File "pydantic/main.py", line 339, in pydantic.main.BaseModel.__init__
  File "pydantic/main.py", line 1102, in pydantic.main.validate_model
  File "/Users/apple/Desktop/LLM/gpt4all_langchain_chatbots/teacher/lib/python3.10/site-packages/langchain/embeddings/llamacpp.py", line 89, in validate_environment
    raise ModuleNotFoundError(
ModuleNotFoundError: Could not import llama-cpp-python library. Please install the llama-cpp-python library to use this embedding model: pip install llama-cpp-python
```