nvidia-docker is a thin wrapper on top of docker and acts as a drop-in replacement for the docker command line interface. This binary is provided as a convenience to automatically detect and set up GPU containers leveraging NVIDIA hardware. Refer to the internals section if you don't intend to use it.
Internally, nvidia-docker calls docker and relies on the NVIDIA Docker plugin to discover driver files and GPU devices. The command used by nvidia-docker can be overridden using the environment variable NV_DOCKER:

```
# Running nvidia-docker with a custom docker command
NV_DOCKER='sudo docker -D' nvidia-docker <docker-options> <docker-command> <docker-args>
```
nvidia-docker only modifies the behavior of the run and create Docker commands. All the other commands are passed through to the docker command line interface unchanged. As a result, you can't execute GPU code when building a Docker image.
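As a hedged sketch (not the actual nvidia-docker implementation; the injected `--device` flag is a placeholder), the pass-through behavior amounts to intercepting only these two subcommands:

```python
# Sketch of the wrapper's dispatch: only "run" and "create" are
# modified; every other docker subcommand is forwarded unchanged.
GPU_COMMANDS = {"run", "create"}

def build_command(argv):
    """Return the docker argv, injecting GPU flags only for run/create."""
    # the first non-flag token is the docker subcommand
    cmd = next((a for a in argv if not a.startswith("-")), None)
    if cmd in GPU_COMMANDS:
        # placeholder for the device/volume flags the plugin would supply
        extra = ["--device=/dev/nvidiactl"]
        i = argv.index(cmd)
        return ["docker"] + argv[:i + 1] + extra + argv[i + 1:]
    return ["docker"] + argv  # pass-through, e.g. for "build"

print(build_command(["build", "."]))  # → ['docker', 'build', '.']
print(build_command(["run", "cuda"]))
```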
GPUs are exported through a list of comma-separated IDs using the environment variable NV_GPU. An ID is either the index or the UUID of a given device. Device indexes are similar to the ones reported by the nvidia-docker-plugin REST interface, by nvidia-smi, or when running CUDA code with CUDA_DEVICE_ORDER=PCI_BUS_ID; this is, however, different from the default CUDA ordering. By default, all GPUs are exported.
```
# Running nvidia-docker isolating specific GPUs by index
NV_GPU='0,1' nvidia-docker <docker-options> <docker-command> <docker-args>

# Running nvidia-docker isolating specific GPUs by UUID
NV_GPU='GPU-836c0c09,GPU-b78a60a' nvidia-docker <docker-options> <docker-command> <docker-args>
```
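The accepted ID forms can be illustrated with a small sketch (this is not nvidia-docker's own validation code; the `GPU-` prefix check is an assumption based on the UUIDs shown above):

```python
def parse_nv_gpu(value):
    """Split an NV_GPU value into device IDs; each entry is either a
    device index or a device UUID. An unset/empty value means all GPUs."""
    if not value:
        return None  # default: export all GPUs
    ids = [v.strip() for v in value.split(",") if v.strip()]
    for dev in ids:
        if not (dev.isdigit() or dev.startswith("GPU-")):
            raise ValueError("invalid NV_GPU entry: %r" % dev)
    return ids

print(parse_nv_gpu("0,1"))                       # → ['0', '1']
print(parse_nv_gpu("GPU-836c0c09,GPU-b78a60a"))  # → ['GPU-836c0c09', 'GPU-b78a60a']
```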
Running it locally
If nvidia-docker-plugin is installed on your host and running locally, no additional step is needed. nvidia-docker will perform what is necessary by querying the plugin when containers using NVIDIA GPUs need to be launched.
Running it remotely
Using nvidia-docker remotely requires nvidia-docker-plugin to be running on the remote host machine. The remote host target can be set using the environment variable NV_HOST.
The rules are as follows:

- If NV_HOST is set, then it is used for contacting the plugin.
- If NV_HOST is not set but DOCKER_HOST is set, then NV_HOST defaults to the DOCKER_HOST location using the http protocol on the plugin's default port.
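These defaulting rules can be sketched as a small function (illustrative only; the port constant is an assumption about the plugin's default, and the real nvidia-docker parses DOCKER_HOST more carefully):

```python
DEFAULT_PLUGIN_PORT = 3476  # assumption: the plugin's default HTTP port

def resolve_nv_host(env):
    """Apply the defaulting rules: an explicit NV_HOST wins; otherwise
    fall back to the DOCKER_HOST location over http."""
    if env.get("NV_HOST"):
        return env["NV_HOST"]
    docker_host = env.get("DOCKER_HOST")
    if docker_host:
        # strip any scheme and port from DOCKER_HOST, keep the host part
        host = docker_host.split("://")[-1].split(":")[0]
        return "http://%s:%d" % (host, DEFAULT_PLUGIN_PORT)
    return None  # no remote target: talk to the local plugin

print(resolve_nv_host({"NV_HOST": "ssh://10.0.0.1"}))           # → ssh://10.0.0.1
print(resolve_nv_host({"DOCKER_HOST": "tcp://10.0.0.1:2375"}))  # → http://10.0.0.1:3476
```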
The specification of
NV_HOST is defined as:
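A sketch of the accepted format, inferred from the usage examples on this page (the optional ssh-port and http-port fields are assumptions):

```
NV_HOST := [(http|ssh)://][<user>@]<host>[:<ssh-port>][:<http-port>]
```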
Using the http protocol requires nvidia-docker-plugin to be listening on a reachable interface (by default, nvidia-docker-plugin only listens on localhost). Opting for ssh, however, only requires valid SSH credentials (either a password or a private key in your ssh-agent).
```
# Run CUDA on the remote host 10.0.0.1 using HTTP
DOCKER_HOST='10.0.0.1:' nvidia-docker run cuda

# Run CUDA on the remote host 10.0.0.1 using SSH
NV_HOST='ssh://10.0.0.1:' nvidia-docker -H 10.0.0.1: run cuda

# Run CUDA on the remote host 10.0.0.1 using SSH with custom user and ports
DOCKER_HOST='10.0.0.1:' NV_HOST='ssh://firstname.lastname@example.org:22:80' nvidia-docker run cuda
```
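The NV_HOST values used above follow a protocol://user@host:ssh-port:http-port shape. A hedged sketch of a parser for that shape (the field names are illustrative, not nvidia-docker's own terminology):

```python
import re

# Pattern for NV_HOST-style strings, e.g. 'ssh://user@host:22:80'.
# Every component except the host is optional.
NV_HOST_RE = re.compile(
    r"^(?:(?P<proto>https?|ssh)://)?"   # optional protocol
    r"(?:(?P<user>[^@]+)@)?"            # optional user
    r"(?P<host>[^:]+)"                  # host name or IP
    r"(?::(?P<ssh_port>\d*))?"          # optional first port
    r"(?::(?P<http_port>\d*))?$"        # optional second port
)

def parse_nv_host(value):
    m = NV_HOST_RE.match(value)
    if m is None:
        raise ValueError("malformed NV_HOST: %r" % value)
    return m.groupdict()

print(parse_nv_host("ssh://firstname.lastname@example.org:22:80"))
```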