PyTorch Docker image


Ubuntu + PyTorch + CUDA (optional)


In order to use this image you must have Docker Engine installed. Instructions for setting up Docker Engine are available on the Docker website.

CUDA requirements

If you have a CUDA-compatible NVIDIA graphics card, you can use a CUDA-enabled version of the PyTorch image to enable hardware acceleration. I have only tested this on Ubuntu Linux.

First, ensure that you install the appropriate NVIDIA drivers. On Ubuntu, I've found that the easiest way of making sure you have the right driver version is to install a version of CUDA at least as new as the image you intend to use, via the official NVIDIA CUDA download page. For example, if you intend to use the cuda-10.1 image, then installing CUDA 10.1 or CUDA 10.2 should ensure that you have the correct graphics drivers.
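Once the drivers are installed, you can verify them from the host with nvidia-smi (a quick sanity check; recent driver versions also report the maximum CUDA version they support in the header of the output):

```shell
# Should print a table listing your GPU(s), the driver version and,
# on recent drivers, the maximum supported CUDA version.
nvidia-smi
```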

You will also need to install nvidia-docker2 to enable GPU device access within Docker containers. This can be found at NVIDIA/nvidia-docker.
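After installing nvidia-docker2, you can confirm that containers have GPU access by running NVIDIA's base CUDA image (a check along the lines of the one suggested in the nvidia-docker documentation; the exact image tag here is illustrative):

```shell
# Should print the same nvidia-smi table as running it directly on the host.
docker run --rm --runtime=nvidia nvidia/cuda:10.1-base nvidia-smi
```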

Prebuilt images

Prebuilt images are available on Docker Hub under the name anibali/pytorch. For example, you can pull the CUDA 10.1 version with:

$ docker pull anibali/pytorch:cuda-10.1
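To check that the pulled image works, you can print the bundled PyTorch version (a quick sanity check sketch; this assumes Docker is installed, and nvidia-docker2 for CUDA-enabled images):

```shell
# Prints the PyTorch version baked into the image (1.4.0 for cuda-10.1).
docker run --rm anibali/pytorch:cuda-10.1 python3 -c "import torch; print(torch.__version__)"
```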

The table below lists software versions for each of the currently supported Docker image tags available for anibali/pytorch.

Image tag   CUDA   PyTorch
no-cuda     None   1.4.0
cuda-10.1   10.1   1.4.0
cuda-9.2    9.2    1.4.0

The following images are also available, but are deprecated.

Image tag   CUDA   PyTorch
cuda-10.0   10.0   1.2.0
cuda-9.1    9.1    0.4.0
cuda-9.0    9.0    1.0.0
cuda-8.0    8.0    1.0.0
cuda-7.5    7.5    0.3.0


Running PyTorch scripts

It is possible to run PyTorch programs inside a container using the python3 command. For example, from within a directory containing a PyTorch project, you could start an interactive Python session with the following command:

docker run --rm -it --init \
  --runtime=nvidia \
  --ipc=host \
  --user="$(id -u):$(id -g)" \
  --volume="$PWD:/app" \
  anibali/pytorch python3

Here's a description of the Docker command-line options shown above:

  • --runtime=nvidia: Required if using CUDA, optional otherwise. Passes the graphics card from the host to the container.
  • --ipc=host: Required if using multiprocessing, since PyTorch uses shared memory to share data between processes (as explained in the PyTorch README).
  • --user="$(id -u):$(id -g)": Sets the user inside the container to match your user and group ID. Optional, but is useful for writing files with correct ownership.
  • --volume="$PWD:/app": Mounts the current working directory into the container. The default working directory inside the container is /app. Optional.
  • -e NVIDIA_VISIBLE_DEVICES=0: Sets an environment variable to restrict which graphics cards are seen by programs running inside the container. Set to all to enable all cards. Optional, defaults to all.

You may wish to consider using Docker Compose to make running containers with many options easier. At the time of writing, only version 2.3 of Docker Compose configuration files supports the runtime option.
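For example, the docker run command above could be expressed as a docker-compose.yml file along these lines (a sketch; the service name is illustrative, and your project is assumed to live in the same directory as the file):

```yaml
version: "2.3"

services:
  pytorch:
    image: anibali/pytorch
    runtime: nvidia     # equivalent to --runtime=nvidia
    init: true          # equivalent to --init
    ipc: host           # equivalent to --ipc=host
    volumes:
      - .:/app          # equivalent to --volume="$PWD:/app"
    command: python3
```

You could then start the container with docker-compose run --rm pytorch.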

Running graphical applications

If you are running on a Linux host, you can get code running inside the Docker container to display graphics using the host X server (this allows you to use OpenCV's imshow, for example). Described below is a quick-and-dirty (but INSECURE) way of doing this; more comprehensive guides to running GUI applications in Docker are available elsewhere.

On the host run:

sudo xhost +local:root

You can revoke these access permissions later with sudo xhost -local:root. Now when you run a container make sure you add the options -e "DISPLAY" and --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw". This will provide the container with your X11 socket for communication and your display ID. Here's an example:

docker run --rm -it --init \
  --runtime=nvidia \
  -e "DISPLAY" --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
  anibali/pytorch python3 -c "import tkinter; tkinter.Tk().mainloop()"