From 4f93d31fb622f1fd27377b7d30653772bf7e398a Mon Sep 17 00:00:00 2001 From: Arthur Date: Wed, 13 Aug 2025 10:54:56 +0200 Subject: [PATCH] various touch ups --- assets/icons/models.svg | 3 +++ content/manuals/_index.md | 8 +++--- .../manuals/ai/model-runner/api-reference.md | 2 +- .../manuals/ai/model-runner/get-started.md | 25 ++++++++++--------- content/manuals/ai/model-runner/setup.md | 0 5 files changed, 21 insertions(+), 17 deletions(-) create mode 100644 assets/icons/models.svg delete mode 100644 content/manuals/ai/model-runner/setup.md diff --git a/assets/icons/models.svg b/assets/icons/models.svg new file mode 100644 index 000000000000..581f3621afb2 --- /dev/null +++ b/assets/icons/models.svg @@ -0,0 +1,3 @@ + + + diff --git a/content/manuals/_index.md b/content/manuals/_index.md index a0a8ea05ea78..58b911701cfe 100644 --- a/content/manuals/_index.md +++ b/content/manuals/_index.md @@ -36,7 +36,7 @@ params: description: Manage and secure your AI tools with a single gateway. icon: /icons/toolkit.svg link: /ai/mcp-gateway/ - + ai: - title: Ask Gordon description: Streamline your workflow and get the most out of the Docker ecosystem with your personal AI assistant. @@ -44,7 +44,7 @@ params: link: /ai/gordon/ - title: Docker Model Runner description: View and manage your local models. - icon: view_in_ar + icon: /icons/models.svg link: /ai/model-runner/ - title: MCP Catalog and Toolkit description: Augment your AI workflow with MCP servers. @@ -126,7 +126,7 @@ Open source development and containerization technologies. ## AI -All the Docker AI tools in one easy-to-access location. +All the Docker AI tools in one easy-to-access location. {{< grid items=ai >}} @@ -145,6 +145,6 @@ subscription management. ## Enterprise -Targeted at IT administrators with help on deploying Docker Desktop at scale with configuration guidance on security related features. 
+Targeted at IT administrators, with guidance on deploying Docker Desktop at scale and configuring security-related features.
 
 {{< grid items=enterprise >}}
\ No newline at end of file
diff --git a/content/manuals/ai/model-runner/api-reference.md b/content/manuals/ai/model-runner/api-reference.md
index 6a05c7c82893..3d6d81422d57 100644
--- a/content/manuals/ai/model-runner/api-reference.md
+++ b/content/manuals/ai/model-runner/api-reference.md
@@ -144,7 +144,7 @@ To call the `chat/completions` OpenAI endpoint from the host via TCP:
    If you are running on Windows, also enable GPU-backed inference. See
    [Enable Docker Model Runner](get-started.md#enable-docker-model-runner-in-docker-desktop).
 
-2. Interact with it as documented in the previous section using `localhost` and the correct port.
+1. Interact with it as documented in the previous section using `localhost` and the correct port.
 
    ```bash
    #!/bin/sh
diff --git a/content/manuals/ai/model-runner/get-started.md b/content/manuals/ai/model-runner/get-started.md
index e4a66594bce6..727e63d53530 100644
--- a/content/manuals/ai/model-runner/get-started.md
+++ b/content/manuals/ai/model-runner/get-started.md
@@ -30,7 +30,7 @@ with your local models in the **Models** tab in the Docker Desktop Dashboard.
 > For Docker Desktop versions 4.41 and earlier, this setting was under the
 > **Experimental features** tab on the **Features in development** page.
 
-### Enable Docker Model Runner in Docker Engine
+### Enable DMR in Docker Engine
 
 1. Ensure you have installed [Docker Engine](/engine/install/).
 1. Docker Model Runner is available as a package. To install it, run:
@@ -38,7 +38,7 @@ with your local models in the **Models** tab in the Docker Desktop Dashboard.
    {{< tabs >}}
    {{< tab name="Ubuntu/Debian">}}
 
-   ```console
+   ```bash
    $ sudo apt-get update
    $ sudo apt-get install docker-model-plugin
    ```
@@ -46,7 +46,7 @@ with your local models in the **Models** tab in the Docker Desktop Dashboard.
   {{< /tab >}}
   {{< tab name="RPM-base distributions">}}
 
-   ```console
+   ```bash
    $ sudo dnf update
    $ sudo dnf install docker-model-plugin
    ```
@@ -56,7 +56,7 @@ with your local models in the **Models** tab in the Docker Desktop Dashboard.
 
 1. Test the installation:
 
-   ```console
+   ```bash
    $ docker model version
    $ docker model run ai/smollm2
    ```
@@ -64,13 +64,13 @@ with your local models in the **Models** tab in the Docker Desktop Dashboard.
 > [!NOTE]
 > TCP support is enabled by default for Docker Engine on port `12434`.
 
-### Update Docker Model Runner in Docker Engine
+### Update DMR in Docker Engine
 
 To update Docker Model Runner in Docker Engine, uninstall it with
 [`docker model uninstall-runner`](/reference/cli/docker/model/uninstall-runner/)
 then reinstall it:
 
-```console
+```bash
 docker model uninstall-runner --images && docker model install-runner
 ```
 
@@ -133,8 +133,9 @@ Use the [`docker model run` command](/reference/cli/docker/model/run/).
 
 ## Configure a model
 
-You can configure a model, such as the its maximum token limit and more,
-use Docker Compose. See [Models and Compose - Model configuration options](../compose/models-and-compose.md#model-configuration-options).
+You can configure a model's settings, such as its maximum token limit,
+using Docker Compose.
+See [Models and Compose - Model configuration options](../compose/models-and-compose.md#model-configuration-options).
 
 ## Publish a model
 
@@ -146,7 +147,7 @@ use Docker Compose. See [Models and Compose - Model configuration options](../co
 
 You can tag existing models with a new name and publish them under a
 different namespace and repository:
 
-```console
+```bash
 # Tag a pulled model under a new name
 $ docker model tag ai/smollm2 myorg/smollm2
@@ -161,7 +162,7 @@ documentation.
 
 You can also package a model file in GGUF format as an OCI Artifact and
 publish it to Docker Hub.
 
-```console +```bash # Download a model file in GGUF format, for example from HuggingFace $ curl -L -o model.gguf https://huggingface.co/TheBloke/Mistral-7B-v0.1-GGUF/resolve/main/mistral-7b-v0.1.Q4_K_M.gguf @@ -209,7 +210,7 @@ In Docker Desktop, to inspect the requests and responses for each model: - The prompt/request - The context usage - The time it took for the response to be generated. -2. Select one of the requests to display further details: +1. Select one of the requests to display further details: - In the **Overview** tab, view the token usage, response metadata and generation speed, and the actual prompt and response. - In the **Request** and **Response** tabs, view the full JSON payload of the request and the response. @@ -220,4 +221,4 @@ In Docker Desktop, to inspect the requests and responses for each model: - [Interact with your model programmatically](./api-reference.md) - [Models and Compose](../compose/models-and-compose.md) -- [Docker Model Runner cli reference documentation](/reference/cli/docker/model) \ No newline at end of file +- [Docker Model Runner CLI reference documentation](/reference/cli/docker/model) \ No newline at end of file diff --git a/content/manuals/ai/model-runner/setup.md b/content/manuals/ai/model-runner/setup.md deleted file mode 100644 index e69de29bb2d1..000000000000
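The api-reference hunk above tells readers to call the `chat/completions` OpenAI endpoint from the host over TCP on `localhost`. As a rough companion sketch, not part of the patch, this is what such a request could look like: the port `12434` comes from the Engine default named in the get-started changes, while the `/engines/v1/chat/completions` path is an assumption based on OpenAI-style APIs, so verify it against the API reference page.

```shell
# Sketch of a host-side request to Docker Model Runner's OpenAI-compatible API.
# Port 12434 is the Docker Engine default mentioned in the patch; the
# /engines/v1/chat/completions path is an assumption to be checked against
# the API reference.
BODY='{"model": "ai/smollm2", "messages": [{"role": "user", "content": "Say hello."}]}'

# Print the payload so the sketch can be sanity-checked without a running server.
echo "$BODY"

# With Model Runner listening on TCP, the call would look like:
# curl -s http://localhost:12434/engines/v1/chat/completions \
#   -H "Content-Type: application/json" \
#   -d "$BODY"
```

The payload shape (a `model` name plus a `messages` array) follows the OpenAI chat convention; `ai/smollm2` is the same model the patch uses in its examples.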
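The publish section's tag-then-push flow can be wrapped in a small helper when you retag several models into one namespace. This dry-run sketch is not part of the patch: it echoes the `docker model tag` and `docker model push` commands shown in the diff instead of executing them, so it runs safely without a Docker daemon, and `myorg` remains the patch's placeholder namespace.

```shell
# Dry-run helper around the tag-and-push flow from the patch. It prints the
# docker model commands rather than running them; remove the echo prefixes
# to execute for real.
retag_and_push() {
    src="$1"
    dst="$2"
    echo "docker model tag $src $dst"
    echo "docker model push $dst"
}

# "myorg" is the same placeholder namespace the patch uses.
retag_and_push ai/smollm2 myorg/smollm2
```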