
Commit

docs: Update README.md to fix pip install llama cpp server (#1187)
Without the single quotes, running the command in zsh prints an error saying no matches were found, because zsh interprets the square brackets as a glob pattern instead of passing them to pip. Adding the quotes fixes it:

```bash
$ pip install llama-cpp-python[server]
zsh: no matches found: llama-cpp-python[server]
```
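For context: zsh treats an unquoted `[server]` as a character-class glob and aborts with `no matches found` when no file matches, whereas bash passes an unmatched pattern through unchanged, which is why the unquoted form happens to work there. Any quoting style that keeps the brackets literal is equivalent. A quick sketch of the options (shell quoting only; nothing is installed here):

```shell
# Each of these delivers the identical literal string to pip's argv;
# unquoted, zsh would instead try (and fail) to glob-match the brackets.
printf '%s\n' 'llama-cpp-python[server]'      # single quotes
printf '%s\n' "llama-cpp-python[server]"      # double quotes
printf '%s\n' llama-cpp-python\[server\]      # backslash escapes
```

The README uses single quotes since they also protect any other shell-special characters an extras spec might contain.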

Co-authored-by: Andrei <abetlen@gmail.com>
audip and abetlen committed Feb 23, 2024
1 parent 251a8a2 commit 52d9d70
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -505,14 +505,14 @@ This allows you to use llama.cpp compatible models with any OpenAI compatible cl
To install the server package and get started:

```bash
-pip install llama-cpp-python[server]
+pip install 'llama-cpp-python[server]'
python3 -m llama_cpp.server --model models/7B/llama-model.gguf
```

Similar to Hardware Acceleration section above, you can also install with GPU (cuBLAS) support like this:

```bash
-CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python[server]
+CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install 'llama-cpp-python[server]'
python3 -m llama_cpp.server --model models/7B/llama-model.gguf --n_gpu_layers 35
```


