
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__ pydantic.error_wrappers.ValidationError: 1 validation error for LlamaCppEmbeddings #461

Closed
sandyrs9421 opened this issue May 24, 2023 · 13 comments
Labels
bug Something isn't working primordial Related to the primordial version of PrivateGPT, which is now frozen in favour of the new PrivateGPT

Comments

@sandyrs9421

I am seeing the error below when I run ingest.py. Any thoughts on how I can resolve it? Kindly advise.

Error -
error loading model: this format is no longer supported (see ggerganov/llama.cpp#1305)
llama_init_from_file: failed to load model
Traceback (most recent call last):
  File "/Users/FBT/Desktop/Projects/privategpt/privateGPT/ingest.py", line 39, in <module>
    main()
  File "/Users/FBT/Desktop/Projects/privategpt/privateGPT/ingest.py", line 30, in main
    llama = LlamaCppEmbeddings(model_path="./models/ggml-model-q4_0.bin")
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for LlamaCppEmbeddings
__root__
  Could not load Llama model from path: ./models/ggml-model-q4_0.bin. Received error (type=value_error)

My ENV file -
PERSIST_DIRECTORY=db
MODEL_TYPE=GPT4All
MODEL_PATH=models/ggml-gpt4all-j-v1.3-groovy.bin
EMBEDDINGS_MODEL_NAME=/Users/FBT/Desktop/Projects/privategpt/privateGPT/models/ggml-model-q4_0.bin
MODEL_N_CTX=1000
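For context, pydantic reports "Could not load Llama model from path" whenever the underlying llama.cpp load fails, whether the file is missing or (as the "format is no longer supported" line above suggests) in an outdated ggml format. A quick way to rule out a bad path before running ingest.py (check_model_path is a hypothetical helper, not part of privateGPT):

```python
import os

def check_model_path(path):
    """Return the absolute path if the model file exists; raise otherwise.

    This only catches missing files; an existing file in an unsupported
    ggml format will still fail inside llama.cpp at load time.
    """
    abs_path = os.path.abspath(path)
    if not os.path.isfile(abs_path):
        raise FileNotFoundError(
            "Could not load model from path: %s (file not found)" % path
        )
    return abs_path
```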

@sandyrs9421 sandyrs9421 added the bug Something isn't working label May 24, 2023
@christopherpickering

Did you download and add a model?

@gvilarino

@sandyRS: as stated in the README.md, you should first download the model file into a models directory in the project root. I encountered the same issue, and this worked for me.

@sandyrs9421
Author

sandyrs9421 commented May 26, 2023

Yes, I have downloaded the models and set the same path in the .env file, but I am still seeing the issue.
Screenshot 2023-05-26 at 12 07 20 PM

My ENV file -
PERSIST_DIRECTORY=db
MODEL_TYPE=GPT4All
MODEL_PATH=models/ggml-gpt4all-j-v1.3-groovy.bin
EMBEDDINGS_MODEL_NAME='sentence-transformers/all-MiniLM-L6-v2'
MODEL_N_CTX=1000

Can you please help me with how I can resolve this error?
@christopherpickering / @gvilarino

@albertas

Had the same issue. Moving the downloaded models into the models directory resolved it.

I see that you have MODEL_PATH=models/ggml-gpt4all-j-v1.3-groovy.bin in your .env file, so it's likely that the issue is the same for you. Just create a models directory and move your downloaded models into it.

@sandyrs9421
Author

sandyrs9421 commented May 29, 2023

@albertas - thanks for the reply. I tried the recommended steps and am seeing a similar error.
I am seeing this error when I run privateGPT.py; data ingestion was successful.
I'd appreciate it if you could guide me toward a possible resolution.

My ENV File -
PERSIST_DIRECTORY=db
MODEL_TYPE=GPT4All
MODEL_PATH=models/ggml-gpt4all-j-v1.3-groovy.bin
EMBEDDINGS_MODEL_NAME=all-MiniLM-L6-v2
MODEL_N_CTX=1000
TARGET_SOURCE_CHUNKS=4

Screenshot 2023-05-29 at 12 25 06 PM

@agentmith

Similar issue. I tried both putting the model in the .\models subfolder and giving it its own folder inside .\models. The only way I can get it to work is by using the originally listed model, which I'd rather not do since I have a 3090. It's most likely a configuration issue in the .env file, but I'm not 100% sure what to change when switching to a different model.

@ayteakkaya536

I encountered the same problem. Using an absolute path for the model path resolved the issue.

@Rasmus-Riis

I encountered the same problem. Using an absolute path for the model path resolved the issue.

Thanks. That fixed it for me :-)

@Yousef-Mush

For me, using an absolute path in "privateGPT.py" worked.
Previously it was
model_path = os.environ.get('MODEL_PATH')
and I changed it to
model_path = "C:/Users/YM/Desktop/PrivateGPT/privateGPT/models/ggml-gpt4all-j-v1.3-groovy.bin"

in .env here is my config:

PERSIST_DIRECTORY=db
MODEL_TYPE=GPT4All
MODEL_PATH=models/ggml-gpt4all-j-v1.3-groovy.bin
EMBEDDINGS_MODEL_NAME=all-MiniLM-L6-v2
MODEL_N_CTX=1000
TARGET_SOURCE_CHUNKS=4
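The hardcoded-path workaround above can be made portable by resolving the relative MODEL_PATH from .env against the project directory instead of the current working directory. A sketch with a hypothetical helper (resolve_model_path is not privateGPT's code; in privateGPT.py, base_dir would typically be os.path.dirname(os.path.abspath(__file__))):

```python
import os

def resolve_model_path(env_value, base_dir):
    """Turn a relative MODEL_PATH from .env into an absolute path.

    Absolute paths pass through unchanged; relative paths are joined
    to base_dir, so the script works from any working directory.
    """
    if os.path.isabs(env_value):
        return env_value
    return os.path.normpath(os.path.join(base_dir, env_value))
```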

@tosundar40

Changing the path in privateGPT.py also does not fix the issue. Please help.

@ciathyza

Any solution found for this yet?
Using an absolute path (on macOS) does not fix it for me.

@abcnow

abcnow commented Jul 22, 2023

Same here on Mac; using the absolute path didn't fix the problem.

@danielmiranda

If you set the model path correctly in the .env file and remove the extra argument n_ctx=1000, it works as expected.
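A generic way to guard against this kind of breakage (a wrapper dropping a constructor argument such as n_ctx) is to filter keyword arguments against the constructor's signature before calling it. This is a sketch using a stand-in class, not privateGPT's or LangChain's actual code:

```python
import inspect

def supported_kwargs(cls, **kwargs):
    """Keep only the keyword arguments that cls.__init__ accepts."""
    params = inspect.signature(cls.__init__).parameters
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return kwargs  # the constructor takes **kwargs: pass everything through
    return {k: v for k, v in kwargs.items() if k in params}

class FakeGPT4All:
    """Stand-in for a model wrapper whose constructor no longer accepts n_ctx."""
    def __init__(self, model, verbose=False):
        self.model = model
        self.verbose = verbose
```

With this, passing the now-removed n_ctx=1000 no longer raises a TypeError; the unsupported argument is silently dropped instead.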
