Merged
3 changes: 3 additions & 0 deletions assets/icons/models.svg
8 changes: 4 additions & 4 deletions content/manuals/_index.md
@@ -36,15 +36,15 @@ params:
description: Manage and secure your AI tools with a single gateway.
icon: /icons/toolkit.svg
link: /ai/mcp-gateway/

ai:
- title: Ask Gordon
description: Streamline your workflow and get the most out of the Docker ecosystem with your personal AI assistant.
icon: note_add
link: /ai/gordon/
- title: Docker Model Runner
description: View and manage your local models.
icon: view_in_ar
icon: /icons/models.svg
link: /ai/model-runner/
- title: MCP Catalog and Toolkit
description: Augment your AI workflow with MCP servers.
@@ -126,7 +126,7 @@ Open source development and containerization technologies.

## AI

All the Docker AI tools in one easy-to-access location.

{{< grid items=ai >}}

@@ -145,6 +145,6 @@ subscription management.

## Enterprise

Targeted at IT administrators, with guidance on deploying Docker Desktop at scale and on configuring security-related features.

{{< grid items=enterprise >}}
2 changes: 1 addition & 1 deletion content/manuals/ai/model-runner/api-reference.md
@@ -144,7 +144,7 @@ To call the `chat/completions` OpenAI endpoint from the host via TCP:
If you are running on Windows, also enable GPU-backed inference.
See [Enable Docker Model Runner](get-started.md#enable-docker-model-runner-in-docker-desktop).

2. Interact with it as documented in the previous section using `localhost` and the correct port.
1. Interact with it as documented in the previous section using `localhost` and the correct port.

```bash
#!/bin/sh
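The step above can be fleshed out as follows. This is a minimal sketch, not the page's own script: the default TCP port `12434` and the OpenAI-compatible `engines/v1/chat/completions` path are assumptions taken from the API reference, and it assumes the `ai/smollm2` model has already been pulled. Adjust `PORT` and `MODEL` to match your setup.

```shell
#!/bin/sh
# Sketch: call the chat/completions endpoint from the host over TCP.
# PORT and MODEL are assumptions; adjust them to your configuration.
PORT=12434
MODEL="ai/smollm2"

# Build an OpenAI-compatible request body.
BODY='{
  "model": "'"${MODEL}"'",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Say hello."}
  ]
}'

# Send the request (requires the model runner to be listening on the port).
curl -s "http://localhost:${PORT}/engines/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -d "${BODY}"
```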
25 changes: 13 additions & 12 deletions content/manuals/ai/model-runner/get-started.md
@@ -30,23 +30,23 @@ with your local models in the **Models** tab in the Docker Desktop Dashboard.
> For Docker Desktop versions 4.41 and earlier, this setting was under the
> **Experimental features** tab on the **Features in development** page.

### Enable Docker Model Runner in Docker Engine
### Enable DMR in Docker Engine

1. Ensure you have installed [Docker Engine](/engine/install/).
1. Docker Model Runner is available as a package. To install it, run:

{{< tabs >}}
{{< tab name="Ubuntu/Debian">}}

```console
```bash
$ sudo apt-get update
$ sudo apt-get install docker-model-plugin
```

{{< /tab >}}
{{< tab name="RPM-based distributions">}}

```console
```bash
$ sudo dnf update
$ sudo dnf install docker-model-plugin
```
@@ -56,21 +56,21 @@ with your local models in the **Models** tab in the Docker Desktop Dashboard.

1. Test the installation:

```console
```bash
$ docker model version
$ docker model run ai/smollm2
```

> [!NOTE]
> TCP support is enabled by default for Docker Engine on port `12434`.
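As a quick check that the TCP endpoint is up, you can list the models the runner currently serves. This is a sketch: the default port `12434` comes from the note above, while the OpenAI-compatible `engines/v1/models` path is an assumption from the API reference.

```shell
#!/bin/sh
# Assumed default TCP port for Docker Engine installs of the model runner.
PORT=12434
BASE_URL="http://localhost:${PORT}/engines/v1"

# List the models the runner currently serves (requires the runner to be up).
curl -s "${BASE_URL}/models"
```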

### Update Docker Model Runner in Docker Engine
### Update DMR in Docker Engine

To update Docker Model Runner in Docker Engine, uninstall it with
[`docker model uninstall-runner`](/reference/cli/docker/model/uninstall-runner/)
then reinstall it:

```console
```bash
docker model uninstall-runner --images && docker model install-runner
```

@@ -133,8 +133,9 @@ Use the [`docker model run` command](/reference/cli/docker/model/run/).

## Configure a model

You can configure a model, such as the its maximum token limit and more,
use Docker Compose. See [Models and Compose - Model configuration options](../compose/models-and-compose.md#model-configuration-options).
To configure a model, such as its maximum token limit, use Docker Compose.
See [Models and Compose - Model configuration options](../compose/models-and-compose.md#model-configuration-options).
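A sketch of what such a configuration can look like in a `compose.yaml`. The option names `context_size` and `runtime_flags` are taken from the Models and Compose page; the service name, image, and values below are placeholders.

```yaml
services:
  app:
    image: my-app          # placeholder application image
    models:
      - llm                # attach the model defined below
models:
  llm:
    model: ai/smollm2
    context_size: 4096     # maximum token limit for the context window
    runtime_flags:         # extra flags passed to the inference engine
      - "--temp"
      - "0.2"
```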

## Publish a model

@@ -146,7 +147,7 @@ use Docker Compose. See [Models and Compose - Model configuration options](../compose/models-and-compose.md#model-configuration-options).
You can tag existing models with a new name and publish them under a different
namespace and repository:

```console
```bash
# Tag a pulled model under a new name
$ docker model tag ai/smollm2 myorg/smollm2

@@ -161,7 +162,7 @@ documentation.
You can also package a model file in GGUF format as an OCI Artifact and publish
it to Docker Hub.

```console
```bash
# Download a model file in GGUF format, for example from HuggingFace
$ curl -L -o model.gguf https://huggingface.co/TheBloke/Mistral-7B-v0.1-GGUF/resolve/main/mistral-7b-v0.1.Q4_K_M.gguf

@@ -209,7 +210,7 @@ In Docker Desktop, to inspect the requests and responses for each model:
- The prompt/request
- The context usage
- The time it took for the response to be generated.
2. Select one of the requests to display further details:
1. Select one of the requests to display further details:
- In the **Overview** tab, view the token usage, response metadata and generation speed, and the actual prompt and response.
- In the **Request** and **Response** tabs, view the full JSON payload of the request and the response.

@@ -220,4 +221,4 @@ In Docker Desktop, to inspect the requests and responses for each model:

- [Interact with your model programmatically](./api-reference.md)
- [Models and Compose](../compose/models-and-compose.md)
- [Docker Model Runner cli reference documentation](/reference/cli/docker/model)
- [Docker Model Runner CLI reference documentation](/reference/cli/docker/model)
Empty file.