139 changes: 139 additions & 0 deletions content/manuals/compose/how-tos/depent-images.md
@@ -0,0 +1,139 @@
---
description: Build images for services with shared definition
keywords: compose, build
title: Build dependent images
weight: 50
---

To reduce push/pull time and image size, a common practice for Compose applications is to have services
share base layers as much as possible. You typically select the same operating system base image for
all services, but you can go one step further and share image layers when your services install the same
system packages. The challenge is then to avoid repeating the exact same Dockerfile instructions
in every service.

For illustration, this page assumes you want all your services to be built from an `alpine` base
image and to install the `openssl` system package.

## Multi-stage Dockerfile

The recommended approach is to group the shared declarations in a single Dockerfile and use multi-stage build features
so that each service image is built from these shared declarations.


Dockerfile:

```dockerfile
FROM alpine AS base
RUN apk add --update --no-cache openssl

FROM base AS service_a
# build service a
...

FROM base AS service_b
# build service b
...
```

Compose file:

```yaml
services:
  a:
    build:
      target: service_a
  b:
    build:
      target: service_b
```
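
With this layout, a single build command covers both services, and the layers of the shared `base` stage are
built once and reused from the build cache. A minimal check, assuming the Dockerfile above sits next to the
Compose file:

```console
$ docker compose build
```

You can also rebuild a single service, for example with `docker compose build a`, without losing the shared cached layers.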

## Use another service's image as the base image

A popular pattern is to reuse a service's image as the base image of another service.
Because Compose does not parse the Dockerfile, it can't automatically detect this dependency
between services and order the build execution accordingly.


a.Dockerfile:

```dockerfile
FROM alpine
RUN apk add --update --no-cache openssl
```

b.Dockerfile:

```dockerfile
FROM service_a
# build service b
```

Compose file:

```yaml
services:
  a:
    image: service_a
    build:
      dockerfile: a.Dockerfile
  b:
    image: service_b
    build:
      dockerfile: b.Dockerfile
```


Legacy Docker Compose v1 built images sequentially, which made this pattern usable
out of the box. Compose v2 uses BuildKit to optimise builds and build images in parallel,
so the dependency must be declared explicitly.

The recommended approach is to declare the dependent base image as an additional build context:

Compose file:

```yaml
services:
  a:
    image: service_a
    build:
      dockerfile: a.Dockerfile
  b:
    image: service_b
    build:
      context: b/
      dockerfile: b.Dockerfile
      additional_contexts:
        # `FROM service_a` will be resolved as a dependency on service a, which has to be built first
        service_a: "service:a"
```
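
With the additional build context declared, asking Compose to build `b` should also trigger the build of
service `a` first, so that `FROM service_a` resolves against the freshly built image. A minimal sketch,
assuming the Compose file and Dockerfiles shown above:

```console
$ docker compose build b
```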


## Build with Bake

Using [Bake](/manuals/build/bake/_index.md) lets you pass the complete build definition for all services
and orchestrate build execution in the most efficient way.

To enable this feature, run Compose with the `COMPOSE_BAKE=true` variable set in your environment.

```console
$ COMPOSE_BAKE=true docker compose build
[+] Building 0.0s (0/1)
=> [internal] load local bake definitions 0.0s
...
[+] Building 2/2 manifest list sha256:4bd2e88a262a02ddef525c381a5bdb08c83 0.0s
✔ service_b Built 0.7s
✔ service_a Built
```

Bake can also be selected as the default builder by editing your `$HOME/.docker/config.json` file:
```json
{
  ...
  "plugins": {
    "compose": {
      "build": "bake"
    }
  }
  ...
}
```
2 changes: 2 additions & 0 deletions content/manuals/desktop/features/gordon/_index.md
@@ -7,6 +7,8 @@ params:
badge:
color: blue
text: Beta
aliases:
- /desktop/features/gordon/
---

{{< summary-bar feature_name="Ask Gordon" >}}
22 changes: 12 additions & 10 deletions content/manuals/desktop/features/kubernetes.md
@@ -10,12 +10,14 @@ aliases:
weight: 60
---

Docker Desktop includes a standalone Kubernetes server and client, as well as Docker CLI integration, enabling local Kubernetes development and testing directly on your machine.

The Kubernetes server runs as a single-node or multi-node cluster within one or more Docker containers. This lightweight setup helps you explore Kubernetes features, test workloads, and work with container orchestration in parallel with other Docker functionalities.

Kubernetes on Docker Desktop runs alongside other workloads, including Swarm services and standalone containers.

![k8s settings](../images/k8s-settings.png)

## What happens when I enable Kubernetes in Docker Desktop?

When you enable Kubernetes in Docker Desktop, the following actions are triggered in the Docker Desktop backend and VM:
@@ -30,18 +32,18 @@ Turning the Kubernetes server on or off in Docker Desktop does not affect your o
## Install and turn on Kubernetes

1. Open the Docker Desktop Dashboard and navigate to **Settings**.
2. Select the **Kubernetes** tab.
3. Toggle on **Enable Kubernetes**.
4. Choose your cluster provisioning method. You can choose either **Kubeadm** or **kind** if you are signed in and are using Docker Desktop version 4.38 or later.

   If you select **kind**, you can also choose the Kubernetes version and the number of nodes.
5. Select **Apply & Restart** to save the settings. This sets up the images required to run the Kubernetes server as containers, and installs the `kubectl` command-line tool on your system at `/usr/local/bin/kubectl` (Mac) or `C:\Program Files\Docker\Docker\Resources\bin\kubectl.exe` (Windows).

> [!NOTE]
>
> Docker Desktop for Linux does not include `kubectl` by default. You can install it separately by following the [Kubernetes installation guide](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/). Ensure the `kubectl` binary is installed at `/usr/local/bin/kubectl`.

When Kubernetes is enabled, its status is displayed in the Docker Desktop Dashboard footer and the Docker menu.

You can check which version of Kubernetes you're on with:

@@ -53,7 +55,7 @@ $ kubectl version

#### Kubernetes dashboard

Once Kubernetes is installed and set up, you can select the **Deploy the Kubernetes Dashboard into cluster** setting so you can manage and monitor your Kubernetes clusters and applications more easily.

#### Viewing system containers

@@ -79,7 +81,7 @@ $ kubectl config use-context docker-desktop
> [!TIP]
>
> If the `kubectl config get-contexts` command returns an empty result, try:
>
> - Running the command in the Command Prompt or PowerShell.
> - Setting the `KUBECONFIG` environment variable to point to your `.kube/config` file.

@@ -111,13 +113,13 @@ Kubernetes clusters are not automatically upgraded with Docker Desktop updates.
$ kubectl config use-context docker-desktop
```
You can then try checking the logs of the [Kubernetes system containers](#viewing-system-containers) if you have enabled that setting.
- If you're experiencing cluster issues after updating, reset your Kubernetes cluster. Resetting a Kubernetes cluster can help resolve issues by essentially reverting the cluster to a clean state, and clearing out misconfigurations, corrupted data, or stuck resources that may be causing problems. If the issue still persists, you may need to clean and purge data, and then restart Docker Desktop.

## Turn off and uninstall Kubernetes

To turn off Kubernetes in Docker Desktop:

1. From the Docker Desktop Dashboard, select the **Settings** icon.
2. Select the **Kubernetes** tab.
3. Deselect the **Enable Kubernetes** checkbox.
4. Select **Apply & Restart** to save the settings. This stops and removes Kubernetes containers, and also removes the `/usr/local/bin/kubectl` command.
1 change: 1 addition & 0 deletions content/manuals/desktop/features/wsl/_index.md
@@ -109,3 +109,4 @@ Docker Desktop does not require any particular Linux distributions to be install
- [Explore best practices](best-practices.md)
- [Understand how to develop with Docker and WSL 2](use-wsl.md)
- [Learn about GPU support with WSL 2](/manuals/desktop/features/gpu.md)
- [Custom kernels on WSL](custom-kernels.md)
32 changes: 32 additions & 0 deletions content/manuals/desktop/features/wsl/custom-kernels.md
@@ -0,0 +1,32 @@
---
title: Custom kernels on WSL
description: Using custom kernels with Docker Desktop on WSL 2
keywords: wsl, docker desktop, custom kernel
tags: [Best practices, troubleshooting]
---

Docker Desktop depends on several kernel features built into the default
WSL 2 Linux kernel distributed by Microsoft. Consequently, using a
custom kernel with Docker Desktop on WSL 2 is not officially supported
and may cause issues with Docker Desktop startup or operation.

However, in some cases it may be necessary
to run custom kernels; Docker Desktop does not block their use, and
some users have reported success using them.

If you choose to use a custom kernel, it is recommended you start
from the kernel tree distributed by Microsoft from their [official
repository](https://github.com/microsoft/WSL2-Linux-Kernel) and then add
the features you need on top of that.

It's also recommended that you:
- Use the same kernel version as the one distributed by the latest WSL 2
  release. You can find the version by running `wsl.exe --system uname -r`
  in a terminal.
- Start from the default kernel configuration as provided by Microsoft
from their [repository](https://github.com/microsoft/WSL2-Linux-Kernel)
and add the features you need on top of that.
- Make sure that your kernel build environment includes `pahole` and
its version is properly reflected in the corresponding kernel config
(`CONFIG_PAHOLE_VERSION`).
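
For illustration, here is a minimal sketch of that workflow; the branch placeholder and the build output path are assumptions you need to adapt to your WSL kernel version and architecture:

```console
$ git clone https://github.com/microsoft/WSL2-Linux-Kernel.git
$ cd WSL2-Linux-Kernel
$ git checkout <branch-matching-your-wsl-kernel-version>  # compare with `wsl.exe --system uname -r`
$ make -j$(nproc) KCONFIG_CONFIG=Microsoft/config-wsl     # default Microsoft kernel configuration
```

The resulting kernel image (typically `arch/x86/boot/bzImage` on x86-64) can then be referenced from the `kernel` setting in your `%UserProfile%\.wslconfig`, and `wsl --shutdown` restarts WSL 2 with the custom kernel.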

@@ -69,22 +69,42 @@ See [ECI Docker socket mount permissions](config.md#docker-socket-mount-permissi
Not yet. It protects all containers launched by users via `docker create` and
`docker run`.

For containers implicitly created by `docker build` as well as Docker
Desktop's integrated Kubernetes, protection varies depending on the Docker
Desktop version (see the following two FAQs).

ECI does not yet protect Docker Desktop Extension containers and
[Dev Environments containers](/manuals/desktop/features/dev-environments/_index.md).

### Does ECI protect containers implicitly used by `docker build`?

Prior to Docker Desktop 4.19, ECI did not protect containers used implicitly
by `docker build` during the build process.

Since Docker Desktop 4.19, ECI protects containers used by `docker build`
when using the [Docker container build driver](/manuals/build/builders/drivers/_index.md).

In addition, since Docker Desktop 4.30, ECI also protects containers used by
`docker build` when using the default "docker" build driver, on all
platforms supported by Docker Desktop except Windows with WSL 2.
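
If you need protected build containers on a configuration the default driver does not cover (for example Windows with WSL 2, or a Docker Desktop version older than 4.30), one option is to switch to the `docker-container` driver. A minimal sketch, with `mybuilder` as an arbitrary builder name:

```console
$ docker buildx create --name mybuilder --driver docker-container --use
$ docker buildx build .
```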

### Does ECI protect Kubernetes in Docker Desktop?

Prior to Docker Desktop 4.38, ECI did not protect the Kubernetes cluster
integrated in Docker Desktop.

Since Docker Desktop 4.38, ECI protects the integrated Kubernetes cluster
when using the new **kind** provisioner (see [Deploy on Kubernetes](/manuals/desktop/features/kubernetes.md)).
In this case, each node in the multi-node Kubernetes cluster is actually an ECI-protected
container. With ECI disabled, each node in the Kubernetes cluster is a less secure,
fully privileged container.

ECI does not protect the integrated Kubernetes cluster when using the
older **Kubeadm** single-node cluster provisioner.

### Does ECI protect containers launched prior to enabling ECI?

No. Containers created prior to switching on ECI are not protected. Therefore, it is
No. Containers created prior to switching on ECI are not protected. Therefore, it is

### Does ECI affect the performance of containers?