
UnboundLocalError: cannot access local variable 'llm' where it is not associated with a value #394

Closed
QuantumPickleJar opened this issue May 22, 2023 · 5 comments
Labels
bug (Something isn't working) · primordial (Related to the primordial version of PrivateGPT, which is now frozen in favour of the new PrivateGPT)

Comments

@QuantumPickleJar commented May 22, 2023

Describe the bug and how to reproduce it
I was able to reproduce this bug by trying to load the Vicuna LLaMA model from Hugging Face. Here's the link to the model I attempted to use: https://huggingface.co/eachadea/ggml-vicuna-13b-1.1

Expected behavior
Upon executing python privateGPT.py, I should be prompted to enter a query, but instead the run fails with the following console output:

Using embedded DuckDB with persistence: data will be stored in: db
Model llama not supported!
Traceback (most recent call last):
File "X:\git\privategpt\privateGPT\privateGPT.py", line 75, in
main()
File "X:\git\privategpt\privateGPT\privateGPT.py", line 39, in main
qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever, return_source_documents= not args.hide_source)
^^^
UnboundLocalError: cannot access local variable 'llm' where it is not associated with a value
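
For what it's worth, the UnboundLocalError itself looks like a plain scoping issue: if no branch of the model-type dispatch in main() assigns llm, the later RetrievalQA call reads a never-assigned local. A minimal standalone repro of that failure mode (my guess at the structure; the names are illustrative):

def main(model_type: str) -> None:
    match model_type:
        case "LlamaCpp":
            llm = "build the LlamaCpp wrapper here"
        case "GPT4All":
            llm = "build the GPT4All wrapper here"
        case _:
            print(f"Model {model_type} not supported!")
            # No raise/return here, so execution continues with `llm` unset
    print(llm)  # UnboundLocalError when model_type matched neither arm

main("llama")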

Environment (please complete the following information):

  • OS / hardware: Win10; 2017 Asus ROG laptop
  • Python version: 3.11.3

I know that Vicuna is a LLaMA-based model, so I'm not sure why the interpreter is saying it's an unsupported model type.
Let me know if there's anything else I can include to help resolve this: this is the first issue I've posted on GitHub. Thanks for your patience!

@QuantumPickleJar added the bug label on May 22, 2023
@PulpCattel

Model llama not supported!

This is because of a small bug in the code: we don't correctly terminate when an invalid model_type is found.

Check your .env config file: you somehow have MODEL_TYPE=llama, but it should be MODEL_TYPE=LlamaCpp or MODEL_TYPE=GPT4All.
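
In code terms, the fallback branch just needs to bail out instead of falling through. A sketch of a fail-fast version (not the actual patch; the LlamaCpp call is copied from the traceback, the GPT4All arguments are approximate):

from langchain.llms import GPT4All, LlamaCpp

def build_llm(model_type, model_path, model_n_ctx, callbacks):
    match model_type:
        case "LlamaCpp":
            return LlamaCpp(model_path=model_path, n_ctx=model_n_ctx, callbacks=callbacks, verbose=False)
        case "GPT4All":
            return GPT4All(model=model_path, callbacks=callbacks, verbose=False)
        case _:
            # Fail fast so `llm` can never be read while unassigned
            raise ValueError(f"Model type {model_type} is not supported. "
                             "Set MODEL_TYPE=LlamaCpp or MODEL_TYPE=GPT4All in .env.")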

@QuantumPickleJar (Author) commented May 22, 2023

Ah, okay: I hadn't realized that LlamaCpp was the explicit model type. Thanks!
Adjusting the MODEL_TYPE to LlamaCpp results in the following new validation error:
Using embedded DuckDB with persistence: data will be stored in: db
llama.cpp: loading model from X:\models\ggml-vicuna-13b-1.1-q4_2.bin
error loading model: unrecognized tensor type 4

llama_init_from_file: failed to load model
Traceback (most recent call last):
File "X:\git\privategpt\privateGPT\privateGPT.py", line 75, in
main()
File "X:\git\privategpt\privateGPT\privateGPT.py", line 33, in main
llm = LlamaCpp(model_path=model_path, n_ctx=model_n_ctx, callbacks=callbacks, verbose=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pydantic\main.py", line 341, in pydantic.main.BaseModel.init
pydantic.error_wrappers.ValidationError: 1 validation error for LlamaCpp
root
Could not load Llama model from path: X:\models\ggml-vicuna-13b-1.1-q4_2.bin. Received error (type=value_error)

I've also tried reversing the slash direction in the MODEL_PATH, but that didn't seem to alter the error in any way: not even the line numbers were different. Hoping it's just more user error obstructing expected functionality!

@maozdemir (Contributor)

Use Q4_0 models.

@QuantumPickleJar (Author)

Downloading one now. I'm curious what difference this makes: is there some kind of naming convention being followed that I'm simply not aware of yet? I'd love to know what this is, or if there's further reading I could look into.
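
After some digging, it looks like these suffixes name ggml quantization formats, and q4_2 was an experimental one that llama.cpp has since removed. If the ggml enum of that era is as I remember it (treat the exact numbers as an assumption), tensor type 4 maps to Q4_2, which would explain the earlier "unrecognized tensor type 4" message:

# ggml tensor-type ids as of llama.cpp circa May 2023 (values are my reconstruction)
GGML_TYPE_NAMES = {0: "F32", 1: "F16", 2: "Q4_0", 3: "Q4_1", 4: "Q4_2", 5: "Q4_3"}

print(GGML_TYPE_NAMES[4])  # -> "Q4_2": dropped upstream, hence "unrecognized tensor type 4"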

@QuantumPickleJar (Author)

It got a little further using a model ending in q4_0; it looks like it was at least able to partially read the model's metadata. Here's the output and error received after updating the MODEL_PATH to point at the newly downloaded q4_0 model:

$ python privateGPT.py
Using embedded DuckDB with persistence: data will be stored in: db
llama.cpp: loading model from X:/models/Vicuna/ggml-old-vic13b-q4_0.bin
llama_model_load_internal: format = ggjt v1 (pre #1405)
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx = 1000
llama_model_load_internal: n_embd = 5120
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 40
llama_model_load_internal: n_layer = 40
llama_model_load_internal: n_rot = 128
llama_model_load_internal: ftype = 4 (mostly Q4_1, some F16)
llama_model_load_internal: n_ff = 13824
llama_model_load_internal: n_parts = 1
llama_model_load_internal: model size = 13B
error loading model: this format is no longer supported (see ggerganov/llama.cpp#1305)
llama_init_from_file: failed to load model
Traceback (most recent call last):
File "X:\git\privategpt\privateGPT\privateGPT.py", line 75, in
main()
File "X:\git\privategpt\privateGPT\privateGPT.py", line 33, in main
llm = LlamaCpp(model_path=model_path, n_ctx=model_n_ctx, callbacks=callbacks, verbose=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pydantic\main.py", line 341, in pydantic.main.BaseModel.init
pydantic.error_wrappers.ValidationError: 1 validation error for LlamaCpp
root
Could not load Llama model from path: X:/models/Vicuna/ggml-old-vic13b-q4_0.bin. Received error (type=value_error)
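
The link in the error message explains it: llama.cpp changed its quantized-file format in ggerganov/llama.cpp#1305, so older ggjt v1 files like this one need to be re-downloaded or regenerated in the new format. To check which era a .bin file is from without loading it, peeking at the header seems to be enough; a sketch assuming the ggjt header layout of the time (magic bytes followed by a uint32 version, both assumptions on my part):

import struct

# Peek at a ggml/ggjt model header without loading the weights
# (layout assumed from llama.cpp circa mid-2023)
with open("X:/models/Vicuna/ggml-old-vic13b-q4_0.bin", "rb") as f:
    magic = f.read(4)
    if magic == b"tjgg":  # 'ggjt' magic stored as a little-endian uint32
        (version,) = struct.unpack("<I", f.read(4))
        print(f"ggjt v{version}")  # v1 here: the pre-change format the loader rejects
    else:
        print(f"older/unversioned container, magic={magic!r}")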

@imartinez added the primordial label on Oct 19, 2023