- Setting up
- Which Docker packages are supported?
- What is the minimum supported Docker version?
- How do I install the NVIDIA driver?
- I'm getting The following signatures were invalid: EXPKEYSIG while trying to install the packages, what do I do?
- Platform support
- Do you support Jetson platforms (AArch64)?
- Is macOS supported?
- Is Microsoft Windows supported?
- Do you support Microsoft native container technologies (e.g. Windows Server, Hyper-v)?
- Do you support Optimus (i.e. NVIDIA dGPU + Intel iGPU)?
- What distributions are officially supported?
- Do you support PowerPC64 (ppc64le)?
- Container Runtime
- Does it have a performance impact on my GPU workload?
- Is OpenGL supported?
- How do I fix unsatisfied condition: cuda >= X.Y?
- Do you support CUDA Multi Process Service (a.k.a. MPS)?
- Do you support running a GPU-accelerated X server inside the container?
- I have multiple GPU devices, how can I isolate them between my containers?
- Why is nvidia-smi inside the container not listing the running processes?
- Can I share a GPU between multiple containers?
- Can I limit the GPU resources (e.g. bandwidth, memory, CUDA cores) taken by a container?
- Can I enforce exclusive access for a GPU?
- Why is my container slow to start?
- Can I use it with Docker-in-Docker (a.k.a. DinD)?
- Why is my application inside the container slow to initialize?
- Is the JIT cache shared between containers?
- What is causing the CUDA invalid device function error?
- Why do I get Insufficient Permissions for some nvidia-smi operations?
- Can I profile and debug my GPU code inside a container?
- Is OpenCL supported?
- Is Vulkan supported?
- Container images
- What do I have to install in my container images?
- Do you provide official Docker images?
- Can I use the GPU during a container build (i.e. docker build)?
- Are my container images built for version 1.0 compatible with 2.0 and 3.0?
- How do I link against driver APIs at build time (e.g. libcuda.so or libnvidia-ml.so)?
- The official CUDA images are too big, what do I do?
- Why aren't CUDA 10 images working with nvidia-docker v1?
- Ecosystem enablement
- Do you support Docker Swarm mode?
- Do you support Docker Compose?
- Do you support Kubernetes?
- All stable releases of `docker-ce` installed from https://docs.docker.com/install/, starting from Docker 19.03.
- The package provided by Canonical: `docker.io`, starting from Docker 19.03.
- The package provided by Red Hat: `docker`, starting from Docker 19.03.

Note that Edge, Test and Nightly releases are not officially supported, but we will provide best-effort support.
Docker 19.03, which adds native support for the `--gpus` option.
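As a quick sanity check after installing, a minimal invocation along these lines (assuming the `nvidia/cuda:9.0-base` image used elsewhere on this page) should print your GPU inventory:

```shell
# Verify that Docker 19.03+ can expose the GPUs to a container
$ docker run --rm --gpus all nvidia/cuda:9.0-base nvidia-smi
```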
Install the driver on your host using the official NVIDIA driver packages for your distribution. Alternatively, the NVIDIA driver can be deployed through a container; refer to the documentation for more information.
The following signatures were invalid: EXPKEYSIG while trying to install the packages, what do I do?
Make sure you fetched the latest GPG key from the repositories. Refer to the repository instructions for your distribution.
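As an illustration, on an apt-based distribution the key can typically be re-fetched with a command along these lines (the exact URL comes from the repository setup instructions for your distribution):

```shell
# Re-fetch and install the repository GPG key (apt-based example)
$ curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
```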
Yes - beta support of the NVIDIA Container Runtime is now available on Jetson platforms (AGX, TX2 and Nano). See this link for more information on getting started.
No, we do not support macOS (regardless of the version), however you can use the native macOS Docker client to deploy your containers remotely (refer to the dockerd documentation).
No, we do not support Microsoft Windows (regardless of the version), however you can use the native Microsoft Windows Docker client to deploy your containers remotely (refer to the dockerd documentation). We also support running Linux containers in Microsoft Windows Subsystem for Linux (WSL 2). Visit the user guide for getting started with WSL 2.
No, we do not yet support native Microsoft container technologies.
Yes, from the CUDA perspective there is no difference as long as your dGPU is powered-on and you are following the official driver instructions.
Yes, little-endian only.
No, the impact should usually be on the order of less than 1% and hardly noticeable.
However, be aware of the following (non-exhaustive) list of potential sources of overhead:

- GPU topology and CPU affinity: you can query the topology using `nvidia-smi topo` and use Docker CPU sets to pin CPU cores (see the sketch after this list).
- Compiling your code for your device architecture: your container might be compiled for the wrong architecture and could fall back to JIT compilation of PTX code (refer to the official documentation for more information). Note that you can express these constraints in your container image.
- Container I/O overhead: by default, Docker containers rely on an overlay filesystem and bridged/NATed networking. Depending on your workload this can be a bottleneck; we recommend using Docker volumes and experimenting with different Docker networks.
- Linux kernel accounting and security overhead: in rare cases, you may notice that some kernel subsystems induce overhead. This will likely depend on your kernel version and can include things like cgroups, LSMs, seccomp filters, netfilter, etc.
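For the CPU affinity point, a minimal sketch (the core range below is a hypothetical affinity; use the values reported on your own host):

```shell
# Show the GPU/CPU topology matrix, including the CPU affinity of each GPU
$ nvidia-smi topo -m

# Hypothetical example: GPU 0 reports affinity for cores 0-9,
# so pin the container to those cores
$ docker run --rm --gpus device=0 --cpuset-cpus=0-9 nvidia/cuda:9.0-base nvidia-smi
```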
Yes, EGL is supported for headless rendering, but this is a beta feature. There is no plan to support GLX in the near future.
Images are available at `nvidia/opengl`. If you need CUDA+OpenGL, use `nvidia/cudagl`.
If you are an NGC subscriber and require GLX for your workflow, please fill out a feature request for support consideration.
No, MPS is not supported at the moment. However we plan on supporting this feature in the future, and this issue will be updated accordingly.
No, running an X server inside the container is not supported at the moment, and there is no plan to support it in the near future (see also OpenGL support).
GPU isolation is achieved through the `--gpus` CLI option. Devices can be referenced by index (following the PCI bus order) or by UUID. See the user guide for more information on these options.
```shell
# If you have 4 GPUs, to isolate GPUs 3 and 4 (/dev/nvidia2 and /dev/nvidia3);
# note the nested quoting required when listing multiple devices
$ docker run --gpus '"device=2,3"' nvidia/cuda:9.0-base nvidia-smi
```
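Devices can also be selected by UUID; a short sketch (the UUID is a placeholder, list yours with `nvidia-smi -L`):

```shell
# List the UUID of every GPU on the host
$ nvidia-smi -L

# Isolate a single GPU by its UUID (placeholder value)
$ docker run --rm --gpus device=GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx nvidia/cuda:9.0-base nvidia-smi
```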
nvidia-smi and NVML are not compatible with PID namespaces. We recommend monitoring your processes on the host, or inside a container started with `--pid=host`.
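For instance, a sketch of running `nvidia-smi` from a container that shares the host PID namespace, so that processes are visible:

```shell
# Share the host PID namespace so nvidia-smi can list running processes
$ docker run --rm --gpus all --pid=host nvidia/cuda:9.0-base nvidia-smi
```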
Yes. This is no different than sharing a GPU between multiple processes outside of containers.
Scheduling and compute preemption vary from one GPU architecture to another (e.g. CTA-level, instruction-level).
No. Your only option is to set the GPU clocks at a lower frequency before starting the container.
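A sketch of doing so on the host (the clock values are placeholders; query the frequencies supported by your GPU first):

```shell
# List the clock frequencies supported by the GPU
$ nvidia-smi -q -d SUPPORTED_CLOCKS

# Set lower application clocks (memory,graphics in MHz; placeholder values)
$ sudo nvidia-smi -ac 2505,875
```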
This is not currently supported but you can enforce it:
- At the container orchestration layer (Kubernetes, Swarm, Mesos, Slurm…) since this is tied to resource allocation.
- At the driver level by setting the compute mode of the GPU.
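For the driver-level option, a minimal sketch on the host (requires administrative privileges):

```shell
# Allow only one process at a time to use GPU 0
$ sudo nvidia-smi -i 0 -c EXCLUSIVE_PROCESS

# Revert to the default (shared) compute mode
$ sudo nvidia-smi -i 0 -c DEFAULT
```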
You probably need to enable persistence mode to keep the kernel modules loaded and the GPUs initialized.
The recommended way is to start the `nvidia-persistenced` daemon on your host.
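A sketch, assuming your driver packages ship the usual systemd unit (otherwise the daemon can be launched directly, or legacy persistence mode can be toggled through `nvidia-smi`):

```shell
# Preferred: enable and start the persistence daemon
$ sudo systemctl enable --now nvidia-persistenced

# Legacy fallback: enable persistence mode directly
$ sudo nvidia-smi -pm 1
```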
If you are running a Docker client inside a container: simply mount the Docker socket and proceed as usual.
If you are running a Docker daemon inside a container: this case is untested.
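For the client-in-a-container case, a minimal sketch (the `docker:cli` image tag is illustrative):

```shell
# Give the inner Docker client access to the host daemon through its socket
$ docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:cli \
    docker run --rm --gpus all nvidia/cuda:9.0-base nvidia-smi
```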
Your application was probably not compiled for the compute architecture of your GPU and thus the driver must JIT all the CUDA kernels from PTX. In addition to a slow start, the JIT compiler might generate less efficient code than directly targeting your compute architecture (see also performance impact).
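A sketch of explicitly targeting a compute architecture at build time (the `sm_70`/`compute_70` values are illustrative; pick the ones matching your GPUs):

```shell
# Generate native code for sm_70 plus PTX for forward compatibility
$ nvcc -gencode arch=compute_70,code=sm_70 \
       -gencode arch=compute_70,code=compute_70 \
       -o app app.cu
```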
No. You would have to handle this manually with Docker volumes.
Your application was not compiled for the compute architecture of your GPU, and no PTX was generated during build time. Thus, JIT compiling is impossible (see also slow to initialize).
Some device management operations require extra privileges (e.g. setting clock frequencies). After learning about the security implications of doing so, you can add extra capabilities to your container using `--cap-add` on the command line (`--cap-add=SYS_ADMIN` will allow most operations).
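For example, a sketch of running a privileged management operation from inside a container (image tag as used elsewhere on this page):

```shell
# SYS_ADMIN allows most nvidia-smi management operations
$ docker run --rm --gpus all --cap-add=SYS_ADMIN nvidia/cuda:9.0-base nvidia-smi -pm 1
```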
Yes, we now provide images on Docker Hub.
No, Vulkan is not supported at the moment. However we plan on supporting this feature in the future.
Library dependencies vary from one application to another. In order to make things easier for developers, we provide a set of official images to base your images on.
Yes, as long as you configure your Docker daemon to use the `nvidia` runtime as the default, you will be able to have build-time GPU support. However, be aware that this can render your images non-portable (see also invalid device function).
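A sketch of the corresponding daemon configuration, typically in `/etc/docker/daemon.json` (the runtime path may differ on your system; restart the Docker daemon after editing):

```json
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
```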
Yes, for most cases. The main difference is that we don't mount all driver libraries by default in 2.0 and 3.0. You might need to set the `NVIDIA_DRIVER_CAPABILITIES` environment variable in your Dockerfile or when starting the container. Check the documentation of nvidia-container-runtime.
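For instance, a sketch of requesting additional driver capabilities from a Dockerfile (the capability list is illustrative):

```dockerfile
FROM nvidia/cuda:9.0-base
# Request the video driver libraries in addition to the compute/utility defaults
ENV NVIDIA_DRIVER_CAPABILITIES compute,utility,video
```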
Use the library stubs provided in `/usr/local/cuda/lib64/stubs/`. Our official images already take care of setting `LIBRARY_PATH` to this location. However, do not set `LD_LIBRARY_PATH` to this folder: the stubs must not be used at runtime.
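As an illustration, a sketch of linking a small tool against NVML using the stubs at build time (file names are hypothetical; paths assume the standard CUDA toolkit layout):

```shell
# Link against the NVML stub at build time; the real libnvidia-ml.so
# is injected by the container runtime when the image is run
$ gcc -I/usr/local/cuda/include -o gpu_info gpu_info.c \
      -L/usr/local/cuda/lib64/stubs -lnvidia-ml
```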
The `devel` image tags are large since the CUDA toolkit ships with many libraries, a compiler and various command-line tools.
As a general rule of thumb, you shouldn't ship your application with its build-time dependencies. We recommend using multi-stage builds for this purpose. Your final container image should use our `runtime` or `base` images.
As of CUDA 9.0, we now ship a `base` image tag which bundles the strict minimum of dependencies.
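A minimal multi-stage sketch (image tags and file names are illustrative):

```dockerfile
# Build stage: full CUDA toolkit (compiler, headers, libraries)
FROM nvidia/cuda:9.0-devel AS build
COPY app.cu /src/app.cu
RUN nvcc -o /src/app /src/app.cu

# Final stage: only the minimal runtime dependencies
FROM nvidia/cuda:9.0-base
COPY --from=build /src/app /usr/local/bin/app
CMD ["app"]
```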
Starting from CUDA 10.0, the CUDA images require using nvidia-docker v2 and won't trigger the GPU enablement path from nvidia-docker v1.
Not currently, support for Swarmkit is still being worked on in the upstream Moby project. You can track our progress here.
Yes, use Compose file format 2.3 and add `runtime: nvidia` to your GPU service. Docker Compose must be version 1.19.0 or higher. You can find an example here.
Note that you'll have to install the old `nvidia-docker2` package, so that the `nvidia` runtime is registered with the Docker daemon.
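A sketch of a matching `docker-compose.yml` (service name and image are illustrative):

```yaml
version: "2.3"
services:
  gpu-test:
    image: nvidia/cuda:9.0-base
    runtime: nvidia
    command: nvidia-smi
```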
Since Kubernetes 1.8, the recommended way is to use our official device plugin.
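Once the device plugin is deployed, GPUs are requested through the `nvidia.com/gpu` extended resource; a sketch of a pod spec (names and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
spec:
  restartPolicy: Never
  containers:
    - name: cuda
      image: nvidia/cuda:9.0-base
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1   # request one GPU
```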