UnboundLocalError: cannot access local variable 'llm' where it is not associated with a value #394
Comments
This is because of a small bug in the code: we don't terminate correctly when an invalid model type is given. Check your MODEL_TYPE setting.
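Concretely, llm only gets assigned in the branches for supported model types, and the unsupported path prints a message without actually terminating, so execution falls through to the line that uses llm. A minimal sketch of that pattern and a fix (illustrative placeholder code, not the repo's exact source):

```python
import sys

def build_llm(model_type: str):
    # BUG: llm is assigned only in the matching branches, and the bare
    # `exit` below is never actually called, so an unsupported type
    # falls through to the return with llm unbound.
    match model_type:
        case "LlamaCpp":
            llm = "llama-cpp backend"   # stands in for LlamaCpp(...)
        case "GPT4All":
            llm = "gpt4all backend"     # stands in for GPT4All(...)
        case _:
            print(f"Model {model_type} not supported!")
            exit  # missing (): names the builtin without calling it
    return llm      # UnboundLocalError for e.g. model_type == "llama"

def build_llm_fixed(model_type: str):
    # Fix: terminate (or raise) on the unsupported path so llm is
    # never referenced without a value.
    match model_type:
        case "LlamaCpp":
            return "llama-cpp backend"
        case "GPT4All":
            return "gpt4all backend"
        case _:
            sys.exit(f"Model {model_type} not supported!")
```

Until that's patched, the workaround is to make sure MODEL_TYPE in your .env is one of the exact strings the code matches on (LlamaCpp or GPT4All), not the model family name like llama.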
Ah, okay: I hadn't realized that. Now I'm getting "llama_init_from_file: failed to load model" instead. I've also tried reversing slash directions in the MODEL_PATH, but that didn't alter the error received in any way: not even the line numbers were different. Hoping it's just more user error obstructing expected functionality!
Use Q4_0 models.
Downloading one now. I'm curious what difference this makes: is there some kind of naming convention being followed that I'm simply not aware of yet? I'd love to know what this is, or if there's further reading I could look into.
It got a little further using a model ending in q4_0; it looks like it was at least able to partially load some of the model functions. Here's the output and error received upon updating the MODEL_PATH to reflect the newly downloaded q4_0 model:
$ python privateGPT.py
Describe the bug and how to reproduce it
I was able to reproduce this bug by trying to load the Vicuna LLaMA model from Hugging Face. Here's the link to the model I attempted to use: https://huggingface.co/eachadea/ggml-vicuna-13b-1.1
Expected behavior
Upon executing python privateGPT.py, I should be prompted to enter a query, but instead a failure with the following console output is received:

Using embedded DuckDB with persistence: data will be stored in: db
Model llama not supported!
Traceback (most recent call last):
File "X:\git\privategpt\privateGPT\privateGPT.py", line 75, in
main()
File "X:\git\privategpt\privateGPT\privateGPT.py", line 39, in main
qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever, return_source_documents= not args.hide_source)
^^^
UnboundLocalError: cannot access local variable 'llm' where it is not associated with a value
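For what it's worth, the UnboundLocalError itself is ordinary Python behavior whenever a local name is assigned on only some branches; a standalone example, unrelated to this repo's code:

```python
def pick_backend(model_type: str):
    if model_type == "GPT4All":
        llm = "gpt4all"
    elif model_type == "LlamaCpp":
        llm = "llama-cpp"
    # no else branch: any other value leaves llm unassigned
    return llm

pick_backend("llama")  # UnboundLocalError: cannot access local variable 'llm' ...
```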
Environment (please complete the following information):
I know that Vicuna is a LLaMA-based model, so I'm not sure why the interpreter is saying it's an unsupported model type.
Let me know if there's anything else I can include to help resolve this: this is the first issue I've posted on GitHub. Thanks for your patience!