Error on privateGPT.py execution #459
Comments
I see the same error too. Any luck on how we can resolve this? My env file:
Did you download the model to this path?
Yes, it is in that path. I copied the relative path to the file and inserted it in the .env file, as described above.
Does the groovy model work? I removed the "backend=gptj" in the privateGPT.py file to get the model to load.
I removed backend=gptj now. This is my error message: Traceback (most recent call last): These two files are in my models folder:
Make sure the LLM is inside the models folder.
You are speaking of: right? They are both in the models folder, in the real file system (C:\privateGPT-main\models) and inside Visual Studio Code (models\ggml-gpt4all-j-v1.3-groovy.bin) as well.
I guess the error might be caused by "model", which should be "model_path". The error shows: I checked the privateGPT.py file. It only defines: model_path = os.environ.get('MODEL_PATH') llm = GPT4All(model_path=model_path, n_ctx=model_n_ctx, callbacks=callbacks, verbose=False) Please feel free to correct any of my mistakes. Thanks.
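To illustrate why the keyword name matters, here is a minimal sketch. DemoLLM is a made-up stand-in for illustration only, not the real GPT4All class; the point is that a constructor rejects a value passed under a parameter name it does not declare, which is the kind of failure the traceback above reports.

```python
# DemoLLM is a hypothetical stand-in for the real loader class.
class DemoLLM:
    def __init__(self, model_path, n_ctx=1000):
        self.model_path = model_path
        self.n_ctx = n_ctx

# Passing the value under the wrong keyword fails at construction time:
try:
    DemoLLM(model="models/ggml-gpt4all-j-v1.3-groovy.bin")
except TypeError as err:
    print("rejected:", err)

# Matching the declared parameter name succeeds:
llm = DemoLLM(model_path="models/ggml-gpt4all-j-v1.3-groovy.bin")
print("loaded path:", llm.model_path)
```

Note that which keyword the real class expects (model vs. model_path) depends on the installed library version, so check the signature in your own environment rather than copying either spelling blindly.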
I replaced my line 35 with your code, but I still get this error: Using embedded DuckDB with persistence: data will be stored in: db
Same here. Changing "model" to "model_path" fixed the privateGPT file error, but now I'm stuck with the gpt4all file error.
When I try to run privateGPT: I have the latest version of Python... how do I run it in compatible-version mode?
python3 privateGPT.py
python3 --version
uname -an
I had the same error with this .env file: PERSIST_DIRECTORY=db I just replaced the models in MODEL_TYPE with source_documents. My MODEL_TYPE looks like this now:
This worked perfectly! I also commented out the lines under "Print the relevant sources used for the answer" to shorten the text. Is there a way to speed up the response time?
Found it. Don't use a \ in the path. So replace
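This fix matches how Windows paths behave: forward slashes are accepted everywhere, and Python's pathlib normalizes them to backslashes under Windows path rules, so writing MODEL_PATH with forward slashes in the .env is safe on every platform. A small sketch:

```python
from pathlib import PureWindowsPath

# A .env value written with forward slashes...
value = "models/ggml-gpt4all-j-v1.3-groovy.bin"

# ...denotes the same file under Windows rules; pathlib
# normalizes the separator when rendering the path:
print(PureWindowsPath(value))  # models\ggml-gpt4all-j-v1.3-groovy.bin
```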
Solved: |
Hi, please help me.
@m3hrdadpourkheiri |
I managed to use ingest.py and load the txt files.
Afterwards I try to execute privateGPT.py and get the following error message (every time):
Using embedded DuckDB with persistence: data will be stored in: db
Traceback (most recent call last):
File "c:\privateGPT-main\privateGPT.py", line 75, in
main()
File "c:\privateGPT-main\privateGPT.py", line 35, in main
llm = GPT4All(model=model_path, n_ctx=model_n_ctx, backend='gptj', callbacks=callbacks, verbose=False)
File "pydantic\main.py", line 341, in pydantic.main.BaseModel.init
pydantic.error_wrappers.ValidationError: 1 validation error for GPT4All
root
Failed to retrieve model (type=value_error)
my env file looks like this:
MODEL_TYPE=GPT4All
PERSIST_DIRECTORY=db
MODEL_PATH=models\ggml-gpt4all-j-v1.3-groovy.bin
MODEL_N_CTX=1000
EMBEDDINGS_MODEL_NAME=distiluse-base-multilingual-cased-v2
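As a sanity check before launching, settings like the .env above can be validated with a short stdlib-only sketch. The variable names mirror this .env; check_env is a hypothetical helper written for this thread, not part of privateGPT.

```python
import os

# Keys this .env is expected to define.
REQUIRED = ("MODEL_TYPE", "PERSIST_DIRECTORY", "MODEL_PATH",
            "MODEL_N_CTX", "EMBEDDINGS_MODEL_NAME")

def check_env(env):
    """Return a list of problems found in a settings mapping."""
    problems = [f"missing: {key}" for key in REQUIRED if not env.get(key)]
    model_path = env.get("MODEL_PATH")
    # The "Failed to retrieve model" error above is consistent with the
    # model file not being found, so verify the path points at a real file.
    if model_path and not os.path.isfile(model_path):
        problems.append(f"MODEL_PATH is not a file: {model_path}")
    return problems

if __name__ == "__main__":
    # os.environ holds these values once python-dotenv (or the shell)
    # has loaded the .env file.
    for problem in check_env(os.environ):
        print(problem)
```

Running this from the project root before privateGPT.py should surface a missing or mistyped MODEL_PATH immediately, instead of a pydantic ValidationError deep inside the loader.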