From 339453e2f9288ddb4552261cbc20e94f82aba44f Mon Sep 17 00:00:00 2001
From: Radoslav Gerganov
Date: Mon, 6 Oct 2025 11:46:25 +0300
Subject: [PATCH 1/4] rpc : update documentation

Update the README file to match the newly added functionality of
exposing multiple devices from a single server.
---
 tools/rpc/README.md | 60 +++++++++++++++++++++++++++++----------------
 1 file changed, 39 insertions(+), 21 deletions(-)

diff --git a/tools/rpc/README.md b/tools/rpc/README.md
index 561f19fda6b06..bc52ec1a4e5b0 100644
--- a/tools/rpc/README.md
+++ b/tools/rpc/README.md
@@ -4,7 +4,7 @@
 > This example and the RPC backend are currently in a proof-of-concept development stage. As such, the functionality is fragile and
 > insecure. **Never run the RPC server on an open network or in a sensitive environment!**
 
-The `rpc-server` allows running `ggml` backend on a remote host.
+The `rpc-server` allows exposing `ggml` devices on a remote host.
 The RPC backend communicates with one or several instances of `rpc-server` and offloads computations to them.
 This can be used for distributed LLM inference with `llama.cpp` in the following way:
 
@@ -14,28 +14,34 @@ flowchart TD
     rpcb<-->|TCP|srvb
     rpcb<-.->|TCP|srvn
     subgraph hostn[Host N]
-        srvn[rpc-server]<-.->backend3["Backend (CUDA,Metal,etc.)"]
+        srvn[rpc-server]<-.->dev4["CUDA0"]
+        srvn[rpc-server]<-.->dev5["CPU"]
     end
     subgraph hostb[Host B]
-        srvb[rpc-server]<-->backend2["Backend (CUDA,Metal,etc.)"]
+        srvb[rpc-server]<-->dev3["Metal"]
     end
     subgraph hosta[Host A]
-        srva[rpc-server]<-->backend["Backend (CUDA,Metal,etc.)"]
+        srva[rpc-server]<-->dev["CUDA0"]
+        srva[rpc-server]<-->dev2["CUDA1"]
     end
     subgraph host[Main Host]
-        local["Backend (CUDA,Metal,etc.)"]<-->ggml[llama-cli]
+        local["Local devices"]<-->ggml[llama-cli]
         ggml[llama-cli]<-->rpcb[RPC backend]
     end
     style hostn stroke:#66,stroke-width:2px,stroke-dasharray: 5 5
+    classDef devcls fill:cyan
+    class local,dev,dev2,dev3,dev4,dev5 devcls
 ```
 
-Each host can run a different backend, e.g. 
one with CUDA and another with Metal. -You can also run multiple `rpc-server` instances on the same host, each with a different backend. +By default, `rpc-server` exposes all available accelerator devices on the host. +If there are no accelerators, it exposes a single `CPU` device. ## Usage -On each host, build the corresponding backend with `cmake` and add `-DGGML_RPC=ON` to the build options. -For example, to build the CUDA backend with RPC support: +### Remote hosts + +On each remote host, build the backends for each accelerator by adding `-DGGML_RPC=ON` to the build options. +For example, to build the `rpc-server` with support for CUDA accelerators: ```bash mkdir build-rpc-cuda @@ -44,30 +50,34 @@ cmake .. -DGGML_CUDA=ON -DGGML_RPC=ON cmake --build . --config Release ``` -Then, start the `rpc-server` with the backend: +When started, the `rpc-server` will detect and expose all available `CUDA` devices: ```bash -$ bin/rpc-server -p 50052 -create_backend: using CUDA backend -ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no -ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes +$ bin/rpc-server +ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no +ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no ggml_cuda_init: found 1 CUDA devices: - Device 0: NVIDIA T1200 Laptop GPU, compute capability 7.5, VMM: yes -Starting RPC server on 0.0.0.0:50052 + Device 0: NVIDIA GeForce RTX 5090, compute capability 12.0, VMM: yes +Starting RPC server v3.0.0 + endpoint : 127.0.0.1:50052 + local cache : n/a +Devices: + CUDA0: NVIDIA GeForce RTX 5090 (32109 MiB, 31588 MiB free) ``` -When using the CUDA backend, you can specify the device with the `CUDA_VISIBLE_DEVICES` environment variable, e.g.: +You can control the set of exposed CUDA devices with the `CUDA_VISIBLE_DEVICES` environment variable or the `--device` command line option. 
The following two commands have the same effect:
 
 ```bash
 $ CUDA_VISIBLE_DEVICES=0 bin/rpc-server -p 50052
+$ bin/rpc-server --device CUDA0 -p 50052
 ```
 
-This way you can run multiple `rpc-server` instances on the same host, each with a different CUDA device.
+### Main host
 
-On the main host build `llama.cpp` for the local backend and add `-DGGML_RPC=ON` to the build options.
-Finally, when running `llama-cli`, use the `--rpc` option to specify the host and port of each `rpc-server`:
+On the main host, build `llama.cpp` with the backends for the local devices and add `-DGGML_RPC=ON` to the build options.
+Finally, when running `llama-cli` or `llama-server`, use the `--rpc` option to specify the host and port of each `rpc-server`:
 
 ```bash
-$ bin/llama-cli -m ../models/tinyllama-1b/ggml-model-f16.gguf -p "Hello, my name is" --repeat-penalty 1.0 -n 64 --rpc 192.168.88.10:50052,192.168.88.11:50052 -ngl 99
+$ llama-cli -hf ggml-org/gemma-3-1b-it-GGUF -ngl 99 --rpc 192.168.88.10:50052,192.168.88.11:50052
 ```
 
 This way you can offload model layers to both local and remote devices.
@@ -83,3 +93,11 @@ $ bin/rpc-server -c
 ```
 
 By default, the cache is stored in the `$HOME/.cache/llama.cpp/rpc` directory and can be controlled via the `LLAMA_CACHE` environment variable. 
+
+### Troubleshooting
+
+Use the `GGML_RPC_DEBUG` environment variable to enable debug messages from `rpc-server`:
+```bash
+$ GGML_RPC_DEBUG=1 bin/rpc-server
+```
+

From 6f90443ef87177b66f5f22035340f60cdaab7c2e Mon Sep 17 00:00:00 2001
From: Radoslav Gerganov
Date: Mon, 6 Oct 2025 15:13:46 +0300
Subject: [PATCH 2/4] change device color in diagram

---
 tools/rpc/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/rpc/README.md b/tools/rpc/README.md
index bc52ec1a4e5b0..8898f82b147bf 100644
--- a/tools/rpc/README.md
+++ b/tools/rpc/README.md
@@ -29,7 +29,7 @@ flowchart TD
         ggml[llama-cli]<-->rpcb[RPC backend]
     end
     style hostn stroke:#66,stroke-width:2px,stroke-dasharray: 5 5
-    classDef devcls fill:cyan
+    classDef devcls fill:#5B9BD5
     class local,dev,dev2,dev3,dev4,dev5 devcls
 ```

From 0ccf0ba7e74ebfc04643055d70e0592083e63ca5 Mon Sep 17 00:00:00 2001
From: Radoslav Gerganov
Date: Mon, 6 Oct 2025 16:18:47 +0300
Subject: [PATCH 3/4] add note about --tensor-split

---
 tools/rpc/README.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/tools/rpc/README.md b/tools/rpc/README.md
index 8898f82b147bf..282669d7ed542 100644
--- a/tools/rpc/README.md
+++ b/tools/rpc/README.md
@@ -80,7 +80,8 @@ Finally, when running `llama-cli` or `llama-server`, use the `--rpc` option to s
 $ llama-cli -hf ggml-org/gemma-3-1b-it-GGUF -ngl 99 --rpc 192.168.88.10:50052,192.168.88.11:50052
 ```
 
-This way you can offload model layers to both local and remote devices.
+By default, the ggml scheduler distributes model weights across all available devices -- both local and remote -- in proportion to each device's available memory.
+You can override this behavior with the `--tensor-split` option and set custom proportions when splitting tensor data across devices. 
### Local cache

From 55231c60495a1b3524a85253dd7260eaa7e059c7 Mon Sep 17 00:00:00 2001
From: Radoslav Gerganov
Date: Tue, 7 Oct 2025 06:50:37 +0000
Subject: [PATCH 4/4] Update tools/rpc/README.md

Co-authored-by: Diego Devesa
---
 tools/rpc/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/rpc/README.md b/tools/rpc/README.md
index 282669d7ed542..afbb302f4b46d 100644
--- a/tools/rpc/README.md
+++ b/tools/rpc/README.md
@@ -80,7 +80,7 @@ Finally, when running `llama-cli` or `llama-server`, use the `--rpc` option to s
 $ llama-cli -hf ggml-org/gemma-3-1b-it-GGUF -ngl 99 --rpc 192.168.88.10:50052,192.168.88.11:50052
 ```
 
-By default, the ggml scheduler distributes model weights across all available devices -- both local and remote -- in proportion to each device's available memory.
+By default, llama.cpp distributes model weights and the KV cache across all available devices -- both local and remote -- in proportion to each device's available memory.
 You can override this behavior with the `--tensor-split` option and set custom proportions when splitting tensor data across devices.
 
 ### Local cache
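
Editor's note on the `--tensor-split` wording added in patches 3 and 4: the proportional split can be sketched in a few lines. This is an illustrative sketch only, not llama.cpp source code; `split_layers` is a hypothetical helper that divides `n_layers` among devices in proportion to the given weights (per-device free memory by default, or the explicit `--tensor-split` proportions), and the memory sizes below are made up.

```python
# Illustrative sketch (not llama.cpp source): assign n_layers across devices
# in proportion to per-device weights, as the --tensor-split note describes.
def split_layers(n_layers: int, weights: list) -> list:
    total = sum(weights)
    counts, assigned, acc = [], 0, 0.0
    for w in weights:
        acc += w
        # Cumulative rounding guarantees the counts sum to exactly n_layers.
        target = round(n_layers * acc / total)
        counts.append(target - assigned)
        assigned = target
    return counts

# Default behavior: proportions follow available memory per device,
# e.g. a 32 GiB local GPU and a 24 GiB remote one (sizes made up here).
print(split_layers(32, [32768, 24576]))
# Explicit override, analogous to --tensor-split 3,1:
print(split_layers(32, [3, 1]))  # [24, 8]
```

Cumulative rounding (rounding the running total rather than each share independently) is the standard way to apportion integer items by ratio without the per-device counts drifting from the total.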