From 45621316c76b18902ed9731701cf743b6b8df286 Mon Sep 17 00:00:00 2001
From: ArthurFlag
Date: Mon, 2 Jun 2025 12:43:46 +0200
Subject: [PATCH] clarify instruction for windows

---
 content/manuals/ai/model-runner.md | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/content/manuals/ai/model-runner.md b/content/manuals/ai/model-runner.md
index db643bcab13d..7f9a94b186c9 100644
--- a/content/manuals/ai/model-runner.md
+++ b/content/manuals/ai/model-runner.md
@@ -143,6 +143,10 @@ To call the `chat/completions` OpenAI endpoint from the host via TCP:
 1. Enable the host-side TCP support from the Docker Desktop GUI, or via the
    [Docker Desktop CLI](/manuals/desktop/features/desktop-cli.md). For example:
    `docker desktop enable model-runner --tcp `.
+
+   If you are running on Windows, also enable GPU-backed inference.
+   See [Enable Docker Model Runner](#enable-dmr-in-docker-desktop).
+
 2. Interact with it as documented in the previous section using `localhost` and the correct port.

 ```bash
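For context on the hunk above: once host-side TCP support is enabled, step 2's call to the `chat/completions` endpoint can be sketched as below. The diff is truncated before the original `bash` example, so the port `12434`, the `/engines/v1/` path prefix, and the `ai/smollm2` model name are assumptions, not content from this patch.

```shell
# Assumption: TCP support was enabled with a port, e.g.:
#   docker desktop enable model-runner --tcp 12434
# and the model was pulled beforehand with `docker model pull ai/smollm2`.
curl http://localhost:12434/engines/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "ai/smollm2",
        "messages": [
          {"role": "user", "content": "Say hello."}
        ]
      }'
```

The request body follows the OpenAI chat-completions shape, which is why the section calls this "the `chat/completions` OpenAI endpoint"; only the host, port, and path prefix are Docker-specific.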