
unable to start interpreter after updating API #48

Closed
Bobpick opened this issue Sep 5, 2023 · 5 comments
Bobpick commented Sep 5, 2023

Stock Windows 10 on a Hyundai laptop with an Intel chip.
I installed it and it downloaded Code Llama successfully, but now I get this:
```
Open Interpreter will use Code Llama for local execution. Use your arrow keys to set up the model.

[?] Parameter count (smaller is faster, larger is more capable): 7B

7B
16B
34B

[?] Quality (lower is faster, higher is more capable): Low | Size: 3.01 GB, RAM usage: 5.51 GB

Low | Size: 3.01 GB, RAM usage: 5.51 GB
Medium | Size: 4.24 GB, RAM usage: 6.74 GB
High | Size: 7.16 GB, RAM usage: 9.66 GB

[?] Use GPU? (Large models might crash on GPU, but will run more quickly) (Y/n): n

[?] Code-Llama interface package not found. Install llama-cpp-python? (Y/n): y

Fatal Python error: _Py_HashRandomization_Init: failed to get random numbers to initialize Python
Python runtime state: preinitialized

Error during installation with OpenBLAS: Command
'['C:\Users\15702\AppData\Local\Programs\Python\Python310\python.exe', '-m', 'pip', 'install',
'llama-cpp-python']' returned non-zero exit status 1.

Failed to install Code-LLama.

We have likely not built the proper Code-Llama support for your system.

(Running language models locally is a difficult task! If you have insight into the best way to implement this across
platforms/architectures, please join the Open Interpreter community Discord and consider contributing the project's
development.)

Please press enter to switch to GPT-4 (recommended).
```

So I tried to use GPT-4 with an API key from OpenAI, since I have an account for GPT-3.5 online.
Using --fast gets me this:
```
PS C:\Users\15702> interpreter --fast

▌ Model set to GPT-3.5-TURBO

Tip: To run locally, use interpreter --local

Open Interpreter will require approval before running code. Use interpreter -y to bypass this.

Press CTRL-C to exit.
```

Any suggestions?


extending commented Sep 5, 2023

pip 23.2.1 from /opt/homebrew/lib/python3.11/site-packages/pip (python 3.11)

Same question here.

(screenshot attached)

TanmayDoesAI (Collaborator)

Could you try one of these two things?

  • Try installing llama-cpp-python separately; I have personally faced a lot of issues installing it before, so it could be one of those. Use pip install llama-cpp-python (the usual error is that it asks you to install the C++ build tools, I think).
  • Create a separate environment and try installing it again, in case there is some conflict between the packages (a minimal sketch follows below).

I'm not entirely sure, but maybe these steps will help.
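
For the second suggestion, a minimal sketch of retrying the install inside a fresh virtual environment (the environment name llama-env is arbitrary; open-interpreter is the package's PyPI name):

```
# Create a fresh virtual environment to rule out package conflicts.
python -m venv llama-env

# Activate it (Windows PowerShell):
.\llama-env\Scripts\Activate.ps1
# ...or on Linux/macOS:
# source llama-env/bin/activate

# Install the Code Llama interface package and Open Interpreter in isolation.
pip install llama-cpp-python
pip install open-interpreter

# Retry local mode from inside the environment.
interpreter --local
```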


Bobpick commented Sep 5, 2023

I got the C++ error, so I'm installing all 19 GB of the Visual Studio stuff.
I'm also installing on my Linux box, and after the install it can't find interpreter.
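
If a Linux pip install succeeds but the shell can't find interpreter, one common cause (an assumption here, not something confirmed in this thread) is that pip placed the entry point in ~/.local/bin, which isn't always on PATH. A quick check:

```
# See where pip installed the open-interpreter files, including the entry point.
python3 -m pip show -f open-interpreter | grep -i interpreter

# If the script landed under ~/.local/bin, add it to PATH for this session
# (append the export line to ~/.bashrc to make it permanent).
export PATH="$HOME/.local/bin:$PATH"
interpreter --help
```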


Bobpick commented Sep 5, 2023

I downloaded all of the C++ stuff and reinstalled llama-cpp-python. This is my terminal output:
```
PS C:\Users\15702> pip install llama-cpp-python
Collecting llama-cpp-python
  Using cached llama_cpp_python-0.1.83.tar.gz (1.8 MB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Collecting typing-extensions>=4.5.0 (from llama-cpp-python)
  Obtaining dependency information for typing-extensions>=4.5.0 from https://files.pythonhosted.org/packages/ec/6b/63cc3df74987c36fe26157ee12e09e8f9db4de771e0f3404263117e75b95/typing_extensions-4.7.1-py3-none-any.whl.metadata
  Using cached typing_extensions-4.7.1-py3-none-any.whl.metadata (3.1 kB)
Collecting numpy>=1.20.0 (from llama-cpp-python)
  Obtaining dependency information for numpy>=1.20.0 from https://files.pythonhosted.org/packages/b7/db/4d37359e2c9cf8bf071c08b8a6f7374648a5ab2e76e2e22e3b808f81d507/numpy-1.25.2-cp310-cp310-win_amd64.whl.metadata
  Using cached numpy-1.25.2-cp310-cp310-win_amd64.whl.metadata (5.7 kB)
Collecting diskcache>=5.6.1 (from llama-cpp-python)
  Obtaining dependency information for diskcache>=5.6.1 from https://files.pythonhosted.org/packages/3f/27/4570e78fc0bf5ea0ca45eb1de3818a23787af9b390c0b0a0033a1b8236f9/diskcache-5.6.3-py3-none-any.whl.metadata
  Using cached diskcache-5.6.3-py3-none-any.whl.metadata (20 kB)
Using cached diskcache-5.6.3-py3-none-any.whl (45 kB)
Using cached numpy-1.25.2-cp310-cp310-win_amd64.whl (15.6 MB)
Using cached typing_extensions-4.7.1-py3-none-any.whl (33 kB)
Building wheels for collected packages: llama-cpp-python
  Building wheel for llama-cpp-python (pyproject.toml) ... done
  Created wheel for llama-cpp-python: filename=llama_cpp_python-0.1.83-cp310-cp310-win_amd64.whl size=1641494 sha256=976a07263f0e0b93c2d00ce826774d551cae0be8757b07adcaac62039619beca
  Stored in directory: c:\users\15702\appdata\local\pip\cache\wheels\3f\39\6f\3e75230ce84bb465df194bca6c0c7b936dc4b0b3c83389688d
Successfully built llama-cpp-python
Installing collected packages: typing-extensions, numpy, diskcache, llama-cpp-python
Successfully installed diskcache-5.6.3 llama-cpp-python-0.1.83 numpy-1.25.2 typing-extensions-4.7.1
```

But, when I run it:
```
PS C:\Users\15702> interpreter --local

Open Interpreter will use Code Llama for local execution. Use your arrow keys to set up the model.

[?] Parameter count (smaller is faster, larger is more capable): 7B

7B
16B
34B

[?] Quality (lower is faster, higher is more capable): Low | Size: 3.01 GB, RAM usage: 5.51 GB

Low | Size: 3.01 GB, RAM usage: 5.51 GB
Medium | Size: 4.24 GB, RAM usage: 6.74 GB
High | Size: 7.16 GB, RAM usage: 9.66 GB

[?] Use GPU? (Large models might crash on GPU, but will run more quickly) (Y/n): N

[?] Code-Llama interface package not found. Install llama-cpp-python? (Y/n): Y

Fatal Python error: _Py_HashRandomization_Init: failed to get random numbers to initialize Python
Python runtime state: preinitialized

Error during installation with OpenBLAS: Command
'['C:\Users\15702\AppData\Local\Programs\Python\Python310\python.exe', '-m', 'pip', 'install',
'llama-cpp-python']' returned non-zero exit status 1.

Failed to install Code-LLama.

We have likely not built the proper Code-Llama support for your system.

(Running language models locally is a difficult task! If you have insight into the best way to implement this across
platforms/architectures, please join the Open Interpreter community Discord and consider contributing the project's
development.)

Please press enter to switch to GPT-4 (recommended).
```

So it's acting like there was no llama-cpp-python installed.
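
One possible explanation (an assumption, not confirmed in this thread): the installer shells out to the specific executable C:\Users\15702\AppData\Local\Programs\Python\Python310\python.exe, which may not be the same Python that the pip on your PATH installed into. Checking whether that exact interpreter can see the package would narrow this down:

```
# Which Python does the pip on PATH belong to?
pip --version

# Can the exact interpreter the installer invokes see llama-cpp-python?
C:\Users\15702\AppData\Local\Programs\Python\Python310\python.exe -m pip show llama-cpp-python

# Does the module import cleanly under that interpreter?
C:\Users\15702\AppData\Local\Programs\Python\Python310\python.exe -c "import llama_cpp; print('ok')"
```

The Fatal Python error: _Py_HashRandomization_Init line also hints that the spawned Python process failed to start at all (on Windows this can happen when a child process is launched with a stripped-down environment), so the failure may not be about the package being missing.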

jordanbtucker (Collaborator)

This is now a duplicate of #167. If you still need help, please leave a comment on that issue.
