unable to start interpreter after updating API #48
Could you try one of these two things?

I am not very sure, but maybe these steps could help.

I got the C++ error, so I'm installing all 19 GB of the Visual Studio stuff.
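For anyone in the same spot, a minimal sketch of retrying the build once the "Desktop development with C++" workload has finished installing. Run it from a fresh terminal so the updated compiler PATH is picked up; the pip flags just force a clean rebuild rather than reusing a cached, broken wheel:

```
pip install --no-cache-dir --force-reinstall --verbose llama-cpp-python
```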
I downloaded all of the C++ stuff and reinstalled llama-cpp-python. This is my terminal info when I run it:

```
Open Interpreter will use Code Llama for local execution. Use your arrow keys to set up the model.

[?] Parameter count (smaller is faster, larger is more capable): 7B
[?] Quality (lower is faster, higher is more capable): Low | Size: 3.01 GB, RAM usage: 5.51 GB
[?] Use GPU? (Large models might crash on GPU, but will run more quickly) (Y/n): N
[?] Code-Llama interface package not found. Install llama-cpp-python? (Y/n): y

Fatal Python error: _Py_HashRandomization_Init: failed to get random numbers to initialize Python

Error during installation with OpenBLAS: Command
'['C:\Users\15702\AppData\Local\Programs\Python\Python310\python.exe', '-m', 'pip', 'install',
'llama-cpp-python']' returned non-zero exit status 1.

We have likely not built the proper Code-Llama support for your system.

(Running language models locally is a difficult task! If you have insight into the best way to implement this across
platforms/architectures, please join the Open Interpreter community Discord and consider contributing the project's
development.)

Please press enter to switch to GPT-4 (recommended).
```

So it's acting like there was no llama-cpp-python installed.
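One way to confirm whether the package really is missing is to query the exact Python that Open Interpreter invokes (the interpreter path below is copied from the error output; adjust it for your install):

```
C:\Users\15702\AppData\Local\Programs\Python\Python310\python.exe -m pip show llama-cpp-python
C:\Users\15702\AppData\Local\Programs\Python\Python310\python.exe -c "import llama_cpp; print(llama_cpp.__version__)"
```

If `pip show` finds the package but the import fails, the build produced a broken wheel; if neither finds it, the install never completed.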
This is now a duplicate of #167. If you still need help, please leave a comment on that issue.
Stock Windows 10 on a Hyundai laptop with an Intel chip.

I installed it and tried to download Code-Llama, which downloaded successfully, but now I get this:
```
Open Interpreter will use Code Llama for local execution. Use your arrow keys to set up the model.

[?] Parameter count (smaller is faster, larger is more capable): 7B
[?] Quality (lower is faster, higher is more capable): Low | Size: 3.01 GB, RAM usage: 5.51 GB
[?] Use GPU? (Large models might crash on GPU, but will run more quickly) (Y/n): n
[?] Code-Llama interface package not found. Install llama-cpp-python? (Y/n): y

Fatal Python error: _Py_HashRandomization_Init: failed to get random numbers to initialize Python
Python runtime state: preinitialized

Error during installation with OpenBLAS: Command
'['C:\Users\15702\AppData\Local\Programs\Python\Python310\python.exe', '-m', 'pip', 'install',
'llama-cpp-python']' returned non-zero exit status 1.

We have likely not built the proper Code-Llama support for your system.

(Running language models locally is a difficult task! If you have insight into the best way to implement this across
platforms/architectures, please join the Open Interpreter community Discord and consider contributing the project's
development.)

Please press enter to switch to GPT-4 (recommended).
```

So I try to use GPT-4, using an API key from OpenAI, as I have an account to use 3.5 online.
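For reference, a minimal sketch of supplying that key in PowerShell before launching (this assumes Open Interpreter reads the standard `OPENAI_API_KEY` environment variable; the key value below is a hypothetical placeholder):

```
PS C:\Users\15702> $env:OPENAI_API_KEY = "sk-..."   # hypothetical placeholder key
PS C:\Users\15702> interpreter --fast
```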
Using `--fast` gets me this:

```
PS C:\Users\15702> interpreter --fast

▌ Model set to GPT-3.5-TURBO

Tip: To run locally, use interpreter --local

Open Interpreter will require approval before running code. Use interpreter -y to bypass this.

Press CTRL-C to exit.
```
Any suggestions?