
Cannot download Code-Llama #39

Closed
AlvinCZ opened this issue Sep 4, 2023 · 14 comments

@AlvinCZ
AlvinCZ commented Sep 4, 2023

Hi! This seems to be an amazing project and I am eager to have a try. However, I had this problem when I tried to run it locally:

>Failed to install Code-LLama.

**We have likely not built the proper `Code-Llama` support for your system.**

(Running language models locally is a difficult task! If you have insight into the best way 
to implement this across platforms/architectures, please join the Open Interpreter community
Discord and consider contributing the project's development.)

Since I am using an M2 Mac and it's unlikely that there is no Code Llama build for it, I tried running the function get_llama_2_instance() in the source file, and I got these exceptions:

Traceback (most recent call last):
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 1348, in do_open
    h.request(req.get_method(), req.selector, req.data, headers,
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/http/client.py", line 1286, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/http/client.py", line 1332, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/http/client.py", line 1281, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/http/client.py", line 1041, in _send_output
    self.send(msg)
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/http/client.py", line 979, in send
    self.connect()
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/http/client.py", line 1458, in connect
    self.sock = self._context.wrap_socket(self.sock,
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py", line 517, in wrap_socket
    return self.sslsocket_class._create(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py", line 1108, in _create
    self.do_handshake()
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py", line 1379, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Volumes/Ext SD/llama_2.py", line 86, in get_llama_2_instance
    wget.download(url, download_path)
  File "/opt/homebrew/lib/python3.11/site-packages/wget.py", line 526, in download
    (tmpfile, headers) = ulib.urlretrieve(binurl, tmpfile, callback)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 241, in urlretrieve
    with contextlib.closing(urlopen(url, data)) as fp:
                            ^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 216, in urlopen
    return opener.open(url, data, timeout)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 519, in open
    response = self._open(req, data)
               ^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 536, in _open
    result = self._call_chain(self.handle_open, protocol, protocol +
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 496, in _call_chain
    result = func(*args)
             ^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 1391, in https_open
    return self.do_open(http.client.HTTPSConnection, req,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.11/3.11.5/Frameworks/Python.framework/Versions/3.11/lib/python3.11/urllib/request.py", line 1351, in do_open
    raise URLError(err)
urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)>

It seems that there's something wrong with SSL? Or is it that Python 3.11 isn't supported? (I noticed that "3.11" was removed from python-package.yml.)
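As a quick sanity check, the same download can be attempted with certifi's CA bundle passed in explicitly; if that succeeds, the problem is the missing local issuer certificate rather than the URL itself. A rough sketch (the URL below is just a placeholder, not the real model URL):

import shutil
import ssl
import urllib.request

import certifi  # assumes certifi is installed: pip install certifi

url = "https://example.com/model.bin"  # placeholder, not the actual Code Llama URL
ctx = ssl.create_default_context(cafile=certifi.where())

# Stream the response to disk using certifi's CA bundle for verification
with urllib.request.urlopen(url, context=ctx) as resp, open("model.bin", "wb") as f:
    shutil.copyfileobj(resp, f)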

@KillianLucas
Collaborator

Hi @AlvinCZ! Thanks for trying this out + for the kind words about the project.

Does this error happen for all models, like if you try to download the low-quality 7B vs. the medium-quality 7B, etc.? I'm also on an M2 Mac with Python 3.11, so it should work. Maybe one of the download URLs has some weird SSL certificate issue and we just need to find a new URL.

@Antwa-sensei253

I am having the same issue on Linux. Here is my log:

interpreter --local

Open Interpreter will use Code Llama for local execution. Use your arrow keys to set up the model.

[?] Parameter count (smaller is faster, larger is more capable): 7B
 > 7B
   13B
   34B

[?] Quality (lower is faster, higher is more capable): Low | Size: 3.01 GB, RAM usage: 5.51 GB
 > Low | Size: 3.01 GB, RAM usage: 5.51 GB
   Medium | Size: 4.24 GB, RAM usage: 6.74 GB
   High | Size: 7.16 GB, RAM usage: 9.66 GB

[?] Use GPU? (Large models might crash on GPU, but will run more quickly) (Y/n): y

[?] `Code-Llama` interface package not found. Install `llama-cpp-python`? (Y/n): y

Collecting llama-cpp-python
  Using cached llama_cpp_python-0.1.83.tar.gz (1.8 MB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: typing-extensions>=4.5.0 in ./miniconda3/lib/python3.11/site-packages (from llama-cpp-python) (4.7.1)
Requirement already satisfied: numpy>=1.20.0 in ./miniconda3/lib/python3.11/site-packages (from llama-cpp-python) (1.24.4)
Collecting diskcache>=5.6.1 (from llama-cpp-python)
  Using cached diskcache-5.6.3-py3-none-any.whl (45 kB)
Building wheels for collected packages: llama-cpp-python
  Building wheel for llama-cpp-python (pyproject.toml) ... error
  error: subprocess-exited-with-error

  × Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [112 lines of output]


      --------------------------------------------------------------------------------
      -- Trying 'Ninja' generator
      --------------------------------
      ---------------------------
      ----------------------
      -----------------
      ------------
      -------
      --
      CMake Deprecation Warning at CMakeLists.txt:1 (cmake_minimum_required):
        Compatibility with CMake < 3.5 will be removed from a future version of
        CMake.

        Update the VERSION argument <min> value or use a ...<max> suffix to tell
        CMake that the project does not need compatibility with older versions.

      Not searching for unused variables given on the command line.

      -- The C compiler identification is unknown
      -- Detecting C compiler ABI info
      -- Detecting C compiler ABI info - failed
      -- Check for working C compiler: /usr/bin/cc
      -- Check for working C compiler: /usr/bin/cc - broken
      CMake Error at /tmp/pip-build-env-76a7bker/overlay/lib/python3.11/site-packages/cmake/data/share/cmake-3.27/Modules/CMakeTestCCompiler.cmake:67 (message):
        The C compiler

          "/usr/bin/cc"

        is not able to compile a simple test program.

        It fails with the following output:

          Change Dir: '/tmp/pip-install-f91tf6td/llama-cpp-python_c2997b3bb92647b7838d93aba07777dc/_cmake_test_compile/build/CMakeFiles/CMakeScratch/TryCompile-Ex1lli'

          Run Build Command(s): /tmp/pip-build-env-76a7bker/overlay/lib/python3.11/site-packages/ninja/data/bin/ninja -v cmTC_66c96
          [1/2] /usr/bin/cc    -o CMakeFiles/cmTC_66c96.dir/testCCompiler.c.o -c /tmp/pip-install-f91tf6td/llama-cpp-python_c2997b3bb92647b7838d93aba07777dc/_cmake_test_compile/build/CMakeFiles/CMakeScratch/TryCompile-Ex1lli/testCCompiler.c
          FAILED: CMakeFiles/cmTC_66c96.dir/testCCompiler.c.o
          /usr/bin/cc    -o CMakeFiles/cmTC_66c96.dir/testCCompiler.c.o -c /tmp/pip-install-f91tf6td/llama-cpp-python_c2997b3bb92647b7838d93aba07777dc/_cmake_test_compile/build/CMakeFiles/CMakeScratch/TryCompile-Ex1lli/testCCompiler.c
          cc: fatal error: cannot execute 'as': execvp: No such file or directory
          compilation terminated.
          ninja: build stopped: subcommand failed.





        CMake will not be able to correctly generate this project.
      Call Stack (most recent call first):
        CMakeLists.txt:3 (ENABLE_LANGUAGE)


      -- Configuring incomplete, errors occurred!
      --
      -------
      ------------
      -----------------
      ----------------------
      ---------------------------
      --------------------------------
      -- Trying 'Ninja' generator - failure
      --------------------------------------------------------------------------------



      --------------------------------------------------------------------------------
      -- Trying 'Unix Makefiles' generator
      --------------------------------
      ---------------------------
      ----------------------
      -----------------
      ------------
      -------
      --
      CMake Deprecation Warning at CMakeLists.txt:1 (cmake_minimum_required):
        Compatibility with CMake < 3.5 will be removed from a future version of
        CMake.

        Update the VERSION argument <min> value or use a ...<max> suffix to tell
        CMake that the project does not need compatibility with older versions.

      Not searching for unused variables given on the command line.

      CMake Error: CMake was unable to find a build program corresponding to "Unix Makefiles".  CMAKE_MAKE_PROGRAM is not set.  You probably need to select a different build tool.
      -- Configuring incomplete, errors occurred!
      --
      -------
      ------------
      -----------------
      ----------------------
      ---------------------------
      --------------------------------
      -- Trying 'Unix Makefiles' generator - failure
      --------------------------------------------------------------------------------

                      ********************************************************************************
                      scikit-build could not get a working generator for your system. Aborting build.

                      Building Linux wheels for Python 3.11 requires a compiler (e.g gcc).
      But scikit-build does *NOT* know how to install it on arch

      To build compliant wheels, consider using the manylinux system described in PEP-513.
      Get it with "dockcross/manylinux-x64" docker image:

        https://github.com/dockcross/dockcross#readme

      For more details, please refer to scikit-build documentation:

        http://scikit-build.readthedocs.io/en/latest/generators.html#linux

                      ********************************************************************************
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects
Error during installation with OpenBLAS: Command '['/home/antwa/miniconda3/bin/python', '-m', 'pip', 'install',
'llama-cpp-python']' returned non-zero exit status 1.
>Failed to install Code-LLama.

**We have likely not built the proper `Code-Llama` support for your system.**

(Running language models locally is a difficult task! If you have insight into the best way to implement this
across platforms/architectures, please join the Open Interpreter community Discord and consider contributing
the project's development.)

Please press enter to switch to `GPT-4` (recommended).

@CyberTea0X

CyberTea0X commented Sep 5, 2023

@Antwa-sensei253
Had the same issue on Ubuntu 22.04.3. I manually installed llvm, cmake, gcc, and clang. Then I manually installed llama-cpp-python and it worked.

sudo apt install gcc cmake llvm clang
pip3 install llama-cpp-python
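Once the build finishes, a bare import is a quick way to confirm the wheel actually built before re-running interpreter --local:

python3 -c "import llama_cpp; print('ok')"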

@Cafezinho

Error for me too...

Defaulting to user installation because normal site-packages is not writeable
Collecting llama-cpp-python
Using cached llama_cpp_python-0.1.83.tar.gz (1.8 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting typing-extensions>=4.5.0
Using cached typing_extensions-4.7.1-py3-none-any.whl (33 kB)
Collecting numpy>=1.20.0
Using cached numpy-1.25.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (18.2 MB)
Collecting diskcache>=5.6.1
Using cached diskcache-5.6.3-py3-none-any.whl (45 kB)
Building wheels for collected packages: llama-cpp-python
Building wheel for llama-cpp-python (pyproject.toml) ... error
error: subprocess-exited-with-error

× Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [184 lines of output]

  --------------------------------------------------------------------------------
  -- Trying 'Ninja' generator
  --------------------------------
  ---------------------------
  ----------------------
  -----------------
  ------------
  -------
  --
  CMake Deprecation Warning at CMakeLists.txt:1 (cmake_minimum_required):
    Compatibility with CMake < 3.5 will be removed from a future version of
    CMake.

    Update the VERSION argument <min> value or use a ...<max> suffix to tell
    CMake that the project does not need compatibility with older versions.

  Not searching for unused variables given on the command line.

  -- The C compiler identification is GNU 11.4.0
  -- Detecting C compiler ABI info
  -- Detecting C compiler ABI info - done
  -- Check for working C compiler: /usr/bin/cc - skipped
  -- Detecting C compile features
  -- Detecting C compile features - done
  -- The CXX compiler identification is GNU 11.4.0
  -- Detecting CXX compiler ABI info
  -- Detecting CXX compiler ABI info - done
  -- Check for working CXX compiler: /usr/bin/c++ - skipped
  -- Detecting CXX compile features
  -- Detecting CXX compile features - done
  -- Configuring done (4.9s)
  -- Generating done (0.0s)
  -- Build files have been written to: /tmp/pip-install-zb5bu9st/llama-cpp-python_24b7e33fea9a4111bd59bb4482c69f2e/_cmake_test_compile/build
  --
  -------
  ------------
  -----------------
  ----------------------
  ---------------------------
  --------------------------------
  -- Trying 'Ninja' generator - success
  --------------------------------------------------------------------------------

  Configuring Project
    Working directory:
      /tmp/pip-install-zb5bu9st/llama-cpp-python_24b7e33fea9a4111bd59bb4482c69f2e/_skbuild/linux-x86_64-3.11/cmake-build
    Command:
      /tmp/pip-build-env-8te9r42p/overlay/local/lib/python3.11/dist-packages/cmake/data/bin/cmake /tmp/pip-install-zb5bu9st/llama-cpp-python_24b7e33fea9a4111bd59bb4482c69f2e -G Ninja -DCMAKE_MAKE_PROGRAM:FILEPATH=/tmp/pip-build-env-8te9r42p/overlay/local/lib/python3.11/dist-packages/ninja/data/bin/ninja --no-warn-unused-cli -DCMAKE_INSTALL_PREFIX:PATH=/tmp/pip-install-zb5bu9st/llama-cpp-python_24b7e33fea9a4111bd59bb4482c69f2e/_skbuild/linux-x86_64-3.11/cmake-install -DPYTHON_VERSION_STRING:STRING=3.11.4 -DSKBUILD:INTERNAL=TRUE -DCMAKE_MODULE_PATH:PATH=/tmp/pip-build-env-8te9r42p/overlay/local/lib/python3.11/dist-packages/skbuild/resources/cmake -DPYTHON_EXECUTABLE:PATH=/usr/bin/python3 -DPYTHON_INCLUDE_DIR:PATH=/usr/include/python3.11 -DPYTHON_LIBRARY:PATH=/usr/lib/x86_64-linux-gnu/libpython3.11.so -DPython_EXECUTABLE:PATH=/usr/bin/python3 -DPython_ROOT_DIR:PATH=/usr -DPython_FIND_REGISTRY:STRING=NEVER -DPython_INCLUDE_DIR:PATH=/usr/include/python3.11 -DPython3_EXECUTABLE:PATH=/usr/bin/python3 -DPython3_ROOT_DIR:PATH=/usr -DPython3_FIND_REGISTRY:STRING=NEVER -DPython3_INCLUDE_DIR:PATH=/usr/include/python3.11 -DCMAKE_MAKE_PROGRAM:FILEPATH=/tmp/pip-build-env-8te9r42p/overlay/local/lib/python3.11/dist-packages/ninja/data/bin/ninja -DCMAKE_BUILD_TYPE:STRING=Release

  Not searching for unused variables given on the command line.
  -- The C compiler identification is GNU 11.4.0
  -- The CXX compiler identification is GNU 11.4.0
  -- Detecting C compiler ABI info
  -- Detecting C compiler ABI info - done
  -- Check for working C compiler: /usr/bin/cc - skipped
  -- Detecting C compile features
  -- Detecting C compile features - done
  -- Detecting CXX compiler ABI info
  -- Detecting CXX compiler ABI info - done
  -- Check for working CXX compiler: /usr/bin/c++ - skipped
  -- Detecting CXX compile features
  -- Detecting CXX compile features - done
  -- Configuring done (3.9s)
  -- Generating done (0.0s)
  -- Build files have been written to: /tmp/pip-install-zb5bu9st/llama-cpp-python_24b7e33fea9a4111bd59bb4482c69f2e/_skbuild/linux-x86_64-3.11/cmake-build
  [1/2] Generating /tmp/pip-install-zb5bu9st/llama-cpp-python_24b7e33fea9a4111bd59bb4482c69f2e/vendor/llama.cpp/libllama.so
  I llama.cpp build info:
  I UNAME_S:  Linux
  I UNAME_P:  x86_64
  I UNAME_M:  x86_64
  I CFLAGS:   -I.            -O3 -std=c11   -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS
  I CXXFLAGS: -I. -I./common -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS
  I LDFLAGS:
  I CC:       cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
  I CXX:      g++ (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0

  g++ -I. -I./common -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -c llama.cpp -o llama.o
  cc  -I.            -O3 -std=c11   -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS   -c ggml.c -o ggml.o
  cc -I.            -O3 -std=c11   -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS   -c -o k_quants.o k_quants.c
  k_quants.c:182:14: warning: ‘make_qkx1_quants’ defined but not used [-Wunused-function]
    182 | static float make_qkx1_quants(int n, int nmax, const float * restrict x, uint8_t * restrict L, float * restrict the_min,
        |              ^~~~~~~~~~~~~~~~
  cc  -I.            -O3 -std=c11   -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS   -c ggml-alloc.c -o ggml-alloc.o
  g++ -I. -I./common -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -shared -fPIC -o libllama.so llama.o ggml.o k_quants.o ggml-alloc.o
  [1/2] Install the project...
  -- Install configuration: "Release"
  -- Installing: /tmp/pip-install-zb5bu9st/llama-cpp-python_24b7e33fea9a4111bd59bb4482c69f2e/_skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/libllama.so

  copying llama_cpp/utils.py -> _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/utils.py
  copying llama_cpp/llama_grammar.py -> _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/llama_grammar.py
  copying llama_cpp/llama_cpp.py -> _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/llama_cpp.py
  copying llama_cpp/llama_types.py -> _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/llama_types.py
  copying llama_cpp/__init__.py -> _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/__init__.py
  copying llama_cpp/llama.py -> _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/llama.py
  creating directory _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/server
  copying llama_cpp/server/__main__.py -> _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/server/__main__.py
  copying llama_cpp/server/app.py -> _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/server/app.py
  copying llama_cpp/server/__init__.py -> _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/server/__init__.py
  copying /tmp/pip-install-zb5bu9st/llama-cpp-python_24b7e33fea9a4111bd59bb4482c69f2e/llama_cpp/py.typed -> _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/py.typed

  running bdist_wheel
  running build
  running build_py
  creating _skbuild/linux-x86_64-3.11/setuptools/lib.linux-x86_64-3.11
  creating _skbuild/linux-x86_64-3.11/setuptools/lib.linux-x86_64-3.11/llama_cpp
  copying _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/utils.py -> _skbuild/linux-x86_64-3.11/setuptools/lib.linux-x86_64-3.11/llama_cpp
  copying _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/llama_grammar.py -> _skbuild/linux-x86_64-3.11/setuptools/lib.linux-x86_64-3.11/llama_cpp
  copying _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/llama_cpp.py -> _skbuild/linux-x86_64-3.11/setuptools/lib.linux-x86_64-3.11/llama_cpp
  copying _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/llama_types.py -> _skbuild/linux-x86_64-3.11/setuptools/lib.linux-x86_64-3.11/llama_cpp
  copying _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/__init__.py -> _skbuild/linux-x86_64-3.11/setuptools/lib.linux-x86_64-3.11/llama_cpp
  copying _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/llama.py -> _skbuild/linux-x86_64-3.11/setuptools/lib.linux-x86_64-3.11/llama_cpp
  creating _skbuild/linux-x86_64-3.11/setuptools/lib.linux-x86_64-3.11/llama_cpp/server
  copying _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/server/__main__.py -> _skbuild/linux-x86_64-3.11/setuptools/lib.linux-x86_64-3.11/llama_cpp/server
  copying _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/server/app.py -> _skbuild/linux-x86_64-3.11/setuptools/lib.linux-x86_64-3.11/llama_cpp/server
  copying _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/server/__init__.py -> _skbuild/linux-x86_64-3.11/setuptools/lib.linux-x86_64-3.11/llama_cpp/server
  copying _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/py.typed -> _skbuild/linux-x86_64-3.11/setuptools/lib.linux-x86_64-3.11/llama_cpp
  copying _skbuild/linux-x86_64-3.11/cmake-install/llama_cpp/libllama.so -> _skbuild/linux-x86_64-3.11/setuptools/lib.linux-x86_64-3.11/llama_cpp
  running build_ext
  running install
  running install_lib
  Traceback (most recent call last):
    File "/usr/lib/python3/dist-packages/pip/_vendor/pep517/in_process/_in_process.py", line 363, in <module>
      main()
    File "/usr/lib/python3/dist-packages/pip/_vendor/pep517/in_process/_in_process.py", line 345, in main
      json_out['return_val'] = hook(**hook_input['kwargs'])
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "/usr/lib/python3/dist-packages/pip/_vendor/pep517/in_process/_in_process.py", line 261, in build_wheel
      return _build_backend().build_wheel(wheel_directory, config_settings,
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "/usr/lib/python3/dist-packages/setuptools/build_meta.py", line 230, in build_wheel
      return self._build_with_temp_dir(['bdist_wheel'], '.whl',
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "/usr/lib/python3/dist-packages/setuptools/build_meta.py", line 215, in _build_with_temp_dir
      self.run_setup()
    File "/usr/lib/python3/dist-packages/setuptools/build_meta.py", line 158, in run_setup
      exec(compile(code, __file__, 'exec'), locals())
    File "setup.py", line 8, in <module>
      setup(
    File "/tmp/pip-build-env-8te9r42p/overlay/local/lib/python3.11/dist-packages/skbuild/setuptools_wrap.py", line 781, in setup
      return setuptools.setup(**kw)  # type: ignore[no-any-return, func-returns-value]
             ^^^^^^^^^^^^^^^^^^^^^^
    File "/usr/lib/python3/dist-packages/setuptools/__init__.py", line 153, in setup
      return distutils.core.setup(**attrs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "/usr/lib/python3/dist-packages/setuptools/_distutils/core.py", line 148, in setup
      return run_commands(dist)
             ^^^^^^^^^^^^^^^^^^
    File "/usr/lib/python3/dist-packages/setuptools/_distutils/core.py", line 163, in run_commands
      dist.run_commands()
    File "/usr/lib/python3/dist-packages/setuptools/_distutils/dist.py", line 967, in run_commands
      self.run_command(cmd)
    File "/usr/lib/python3/dist-packages/setuptools/_distutils/dist.py", line 986, in run_command
      cmd_obj.run()
    File "/tmp/pip-build-env-8te9r42p/overlay/local/lib/python3.11/dist-packages/skbuild/command/bdist_wheel.py", line 33, in run
      super().run(*args, **kwargs)
    File "/usr/lib/python3/dist-packages/wheel/bdist_wheel.py", line 335, in run
      self.run_command('install')
    File "/usr/lib/python3/dist-packages/setuptools/_distutils/cmd.py", line 313, in run_command
      self.distribution.run_command(command)
    File "/usr/lib/python3/dist-packages/setuptools/_distutils/dist.py", line 986, in run_command
      cmd_obj.run()
    File "/usr/lib/python3/dist-packages/setuptools/command/install.py", line 68, in run
      return orig.install.run(self)
             ^^^^^^^^^^^^^^^^^^^^^^
    File "/usr/lib/python3/dist-packages/setuptools/_distutils/command/install.py", line 622, in run
      self.run_command(cmd_name)
    File "/usr/lib/python3/dist-packages/setuptools/_distutils/cmd.py", line 313, in run_command
      self.distribution.run_command(command)
    File "/usr/lib/python3/dist-packages/setuptools/_distutils/dist.py", line 985, in run_command
      cmd_obj.ensure_finalized()
    File "/usr/lib/python3/dist-packages/setuptools/_distutils/cmd.py", line 107, in ensure_finalized
      self.finalize_options()
    File "/tmp/pip-build-env-8te9r42p/overlay/local/lib/python3.11/dist-packages/skbuild/command/__init__.py", line 34, in finalize_options
      super().finalize_options(*args, **kwargs)
    File "/usr/lib/python3/dist-packages/setuptools/command/install_lib.py", line 17, in finalize_options
      self.set_undefined_options('install',('install_layout','install_layout'))
    File "/usr/lib/python3/dist-packages/setuptools/_distutils/cmd.py", line 290, in set_undefined_options
      setattr(self, dst_option, getattr(src_cmd_obj, src_option))
                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "/usr/lib/python3/dist-packages/setuptools/_distutils/cmd.py", line 103, in __getattr__
      raise AttributeError(attr)
  AttributeError: install_layout. Did you mean: 'install_platlib'?
  [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects


@hcymysql

hcymysql commented Sep 6, 2023

Open Interpreter will use Code Llama for local execution. Use your arrow keys to set up the model.                                                                     

[?] Parameter count (smaller is faster, larger is more capable): 7B
 > 7B
   13B
   34B

[?] Quality (lower is faster, higher is more capable): Low | Size: 3.01 GB, RAM usage: 5.51 GB
 > Low | Size: 3.01 GB, RAM usage: 5.51 GB
   Medium | Size: 4.24 GB, RAM usage: 6.74 GB
   High | Size: 7.16 GB, RAM usage: 9.66 GB

[?] Use GPU? (Large models might crash on GPU, but will run more quickly) (Y/n): y

[?] This instance of `Code-Llama` was not found. Would you like to download it? (Y/n): y


▌ Failed to install Code-LLama.                                                                                                                                      

We have likely not built the proper Code-Llama support for your system.                                                                                                

( Running language models locally is a difficult task! If you have insight into the best way to implement this across platforms/architectures, please join the Open    
Interpreter community Discord and consider contributing the project's development. )                                                                                   

Please press enter to switch to GPT-4 (recommended).   

@alexgit2k

Had the same problem. As said in #39 (comment), llama-cpp-python has to be installed first. Got it working in the Python Docker image (docker run -it --rm python /bin/bash) with the following commands:

pip install llama-cpp-python

pip install open-interpreter
interpreter --local

@Cafezinho

Does not work.

@AlvinCZ
Author

AlvinCZ commented Sep 6, 2023

> Hi @AlvinCZ! Thanks for trying this out + for the kind words about the project.
>
> Does this error happen for all models, like if you try to download the low-quality 7B vs. the medium-quality 7B, etc.? I'm also on an M2 Mac with Python 3.11, so it should work. Maybe one of the download URLs has some weird SSL certificate issue and we just need to find a new URL.

This error happens for all models, and I still cannot download any model on my Mac. However, I managed to download the models on my Windows machine (which used to have this same issue) by closing my proxy app (yes, Windows can download the models under the same network conditions as my Mac?!). By the way, if anyone has trouble installing llama-cpp-python on Windows, try installing Microsoft Visual Studio first (this module needs a C++ compiler).
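If it helps, the Build Tools can also be installed from the command line before retrying the install; something like the following should work, though I haven't verified the exact winget package id:

winget install Microsoft.VisualStudio.2022.BuildTools
pip install llama-cpp-python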

@Silversith

Well, that took a second to figure out. You need to use Python x64, not x86.

@robert-pattern

I'm running into the same issues here. On a PC I was able to get it to download the packages, but then it still wouldn't work. On my M2 Mac I can't even get it to download the models.

@iplayfast

On Linux it installs the interpreter in ~/.local/bin/interpreter, with ~/.local/share/Open Interpreter/models/ being the location of the models.
I keep my models in ~/ai/data/models, so I'm going to move things around and provide links from the old model location.
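For anyone doing the same, something along these lines should do it (MODEL_FILE here is just a stand-in for the actual model filename):

mv ~/.local/share/"Open Interpreter"/models/MODEL_FILE ~/ai/data/models/
ln -s ~/ai/data/models/MODEL_FILE ~/.local/share/"Open Interpreter"/models/MODEL_FILE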

@jordanbtucker
Collaborator

@AlvinCZ Try running the following command on your Mac.

/Applications/Python\ 3.11/Install\ Certificates.command

Source

@Antwa-sensei253 @Cafezinho Ensure you have the proper build tools like cmake and build-essential or build_devel depending on your distribution. For Ubuntu, run this.

sudo apt update
sudo apt install build-essential cmake

@jordanbtucker
Collaborator

I'm closing this issue. Feel free to open it back up if the issue is not resolved.

@Emojigit

Emojigit commented Sep 9, 2023

Installing the packages did not work for me, but I was able to install the package manually by running pip directly in Open Interpreter's virtual environment created by pipx.

$HOME/.local/pipx/venvs/open-interpreter/bin/python -m pip install llama-cpp-python
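Depending on your pipx version, pipx runpip should do the same thing without spelling out the venv path:

pipx runpip open-interpreter install llama-cpp-python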
