These are some simple benchmarks intended to discover performance differences between Docker storage drivers.
Tests are in the `tests` directory. Each test is spawned inside one container
several times (depending on the parallelization being tested). There are other
potential tests (like running lots of containers), but I didn't find many
differences between the storage drivers doing this, and I was more interested
in parallel IO inside individual containers.
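Conceptually, a run looks something like the sketch below: start one container and launch several copies of a test script inside it. The image name, script name, and worker count here are all made up for illustration; the real harness's invocation may differ.

```shell
# Hypothetical shape of a single benchmark run: one container, N parallel
# copies of one test script. Requires a running Docker daemon.
docker run --rm -v "$PWD/tests:/tests:ro" debian:stretch \
    sh -c 'for i in $(seq 1 8); do /tests/some-test.sh & done; wait'
```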
## Results (as of 2017-10-28)
I ran some early tests against devicemapper. I also ran the tests against
tmpfs without Docker, as a baseline. The raw data is in results. The full
details (Docker version, machine specs, etc.) are at the bottom of this README.
For all of these graphs, smaller bars are better. Some of the results are pretty open to interpretation, so I'm mostly just providing the raw graphs, in rough order of most-to-least interesting:
### Appending to files
Appending to files is interesting because it typically requires copying the entire file from a lower layer to the top before appending. Appending even one byte to large files can be very expensive.
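A test of this shape might look something like the following sketch. The directory, file count, and file sizes are illustrative, not the benchmark's actual parameters; the real scripts live in the `tests` directory.

```shell
#!/bin/sh
# Sketch of a parallel append benchmark (assumed shape and parameters).
# Appending even one byte forces copy-on-write drivers to copy each whole
# file from the lower layer up to the writable layer first.
set -e

DIR=${DIR:-/tmp/append-demo}
NFILES=${NFILES:-8}
NWORKERS=${NWORKERS:-4}

mkdir -p "$DIR"

# Create the files (in the real benchmark these would sit in a lower image
# layer; here they are just created up front).
i=0
while [ "$i" -lt "$NFILES" ]; do
    dd if=/dev/zero of="$DIR/file$i" bs=1M count=1 2>/dev/null
    i=$((i + 1))
done

# Append one byte to every file, NWORKERS appends at a time.
seq 0 $((NFILES - 1)) | xargs -P "$NWORKERS" -I{} \
    sh -c "printf x >> $DIR/file{}"
```

Timing that last `xargs` line across drivers is where the differences show up.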
With aufs, appending to files in parallel quickly becomes extremely expensive with lots of files:
A similar test which involves appending to files in a binary tree (described better below) was much more drastic:
This might suggest that aufs suffers with some kinds of large directory structures?
The graph below is from the only test that reads files from a directory tree
on ext4 mounted into the container with `-v`. Strangely, the storage driver
still makes a large difference in performance.
The test works by doing a DFS on a deep binary tree (directory structure)
until hitting a leaf node, then reading the file. It does `readlink` on the
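The shape of that test might look something like this sketch. The layout (children named `0/` and `1/`, one `leaf` file per leaf directory) and the depth are assumptions for illustration, not the benchmark's actual parameters.

```shell
#!/bin/sh
# Sketch of the binary-tree read test (assumed layout and depth).
# Each internal directory has children 0/ and 1/; each leaf directory
# holds one file named "leaf".
set -e

ROOT=${ROOT:-/tmp/tree-demo}
DEPTH=${DEPTH:-6}

build() {   # build <dir> <levels-remaining>
    if [ "$2" -eq 0 ]; then
        echo data > "$1/leaf"
    else
        for c in 0 1; do
            mkdir -p "$1/$c"
            build "$1/$c" "$(($2 - 1))"
        done
    fi
}
mkdir -p "$ROOT"
build "$ROOT" "$DEPTH"

# Walk from the root down to one leaf, then read the file there (the real
# test picks paths pseudo-randomly and runs many walks in parallel).
p=$ROOT
while [ ! -f "$p/leaf" ]; do
    p="$p/0"
done
cat "$p/leaf"
```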
Here is the exact same test, but with the binary tree in the Docker filesystem
(not mounted with `-v`):
Reading lots of small files in parallel:
### Some not very interesting results
Reading a small number of large files:
Appending a line to a small number of large files:
### All test graphs by name
## Test machine specs
All the tests were run on a c4.8xlarge EC2 instance.
- Docker 17.09.0-ce
- Debian stretch
- Kernel 4.9.51 (stock Debian, except for the aufs tests, which used a custom-built 4.9.51 kernel with the aufs patches applied)
- 36 "vCPUs", 60 GiB RAM
- tmpfs as the backing filesystem for all tests except devicemapper
- For devicemapper, LVM was configured as recommended by the Docker docs, using a ramdisk as the backing physical volume

Doing all writes against a tmpfs/ramdisk was an attempt to avoid variance in the underlying IO speed of EBS or instance storage.
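For reference, a devicemapper setup along those lines (direct-lvm on a ramdisk) might look like the sketch below. The mount point, sizes, and volume names are illustrative, not the exact ones used; the LVM commands follow the pattern in the Docker devicemapper docs.

```shell
# Sketch: build an LVM thin pool for devicemapper on top of a ramdisk,
# so all writes land in RAM rather than EBS/instance storage.
mount -t tmpfs -o size=40g tmpfs /mnt/ramdisk
truncate -s 38g /mnt/ramdisk/pv.img
LOOPDEV=$(losetup --find --show /mnt/ramdisk/pv.img)

pvcreate "$LOOPDEV"
vgcreate docker "$LOOPDEV"
lvcreate --wipesignatures y -n thinpool docker -l 95%VG
lvcreate --wipesignatures y -n thinpoolmeta docker -l 1%VG
lvconvert -y --zero n -c 512K --thinpool docker/thinpool \
    --poolmetadata docker/thinpoolmeta

# dockerd is then pointed at the pool, e.g.:
#   dockerd --storage-driver devicemapper \
#       --storage-opt dm.thinpooldev=/dev/mapper/docker-thinpool
```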