docker pull / push slow #7291

Closed
sammcj opened this Issue Jul 29, 2014 · 23 comments

@sammcj

sammcj commented Jul 29, 2014

This issue is related to #1888, which was never resolved.

As Ross said here: #1888 (comment)

We're seeing this.

Steps to reproduce:
Run latest (or any version) of docker-registry.

Run Docker 1.1.1 (or any version) on another machine on the local network. "Local network" in our case is a 10GbE connection with ample available bandwidth.

Push and pull any Docker image between the Docker host and the docker-registry. Observe the speeds.

For reference, here's a concrete example (names changed, everything else as per output).

root@int-build-systems:~  # time docker pull our-registry.example.com/ourapp:v0.90.19
Pulling repository our-registry.example.com/ourapp
f53e24aff1ea: Download complete
1db82e0c82b2: Download complete
5ad7cde6f934: Download complete
1b63e01a661e: Download complete
76bf39c7538d: Download complete
52490f4c441f: Download complete
4711ab69462c: Download complete
a3298b1086dd: Download complete
6ec7f7a84999: Download complete
12286e1683de: Download complete
e65d790b892e: Download complete
68a857ae02fd: Download complete
19fa32328999: Download complete
35dc94353234: Download complete
9bfba14d5960: Download complete
3ca1dfe05755: Download complete
3e836ac53ec4: Download complete
e940f2494e01: Download complete
59a6acada74e: Download complete
17bc599649b4: Download complete
451cb4e63973: Download complete
29a0981d8008: Download complete
205a854f59a5: Download complete
3471990c34d0: Download complete

real    2m6.888s
user    0m0.148s
sys    0m0.108s

OK, so 2 minutes 6 seconds to pull this image. Now, I took the access log from the registry server (which shows every file Docker downloaded from the registry) and turned it into a Bash script that curls each of those files:

#!/bin/bash
curl -O https://our-registry.example.com/v1/images/746c70792250bddb7ca5ab4ec0cc85ac2858dcc32a7f44fd11e562733b3f484d/ancestry
curl -O https://our-registry.example.com/v1/images/f53e24aff1ead3dce35711567f26b3ad01624f168aebc53d04f9dd7bd16e14d0/json
curl -O https://our-registry.example.com/v1/images/f53e24aff1ead3dce35711567f26b3ad01624f168aebc53d04f9dd7bd16e14d0/layer
curl -O https://our-registry.example.com/v1/images/f53e24aff1ead3dce35711567f26b3ad01624f168aebc53d04f9dd7bd16e14d0/layer
curl -O https://our-registry.example.com/v1/images/1db82e0c82b29793aec6a14f88ec92f7062e9f54e9aea8a112b8721eaf800110/json
curl -O https://our-registry.example.com/v1/images/1db82e0c82b29793aec6a14f88ec92f7062e9f54e9aea8a112b8721eaf800110/layer
curl -O https://our-registry.example.com/v1/images/1db82e0c82b29793aec6a14f88ec92f7062e9f54e9aea8a112b8721eaf800110/layer
curl -O https://our-registry.example.com/v1/images/5ad7cde6f934f1be64ec3589093f08f666af17bc70391400cdfdd21d0ad7f0bb/json
curl -O https://our-registry.example.com/v1/images/5ad7cde6f934f1be64ec3589093f08f666af17bc70391400cdfdd21d0ad7f0bb/layer
curl -O https://our-registry.example.com/v1/images/5ad7cde6f934f1be64ec3589093f08f666af17bc70391400cdfdd21d0ad7f0bb/layer
curl -O https://our-registry.example.com/v1/images/1b63e01a661e2f8dfb8019b8028e1b88481cc00fa3f18b876f68aecace8dd883/json
curl -O https://our-registry.example.com/v1/images/1b63e01a661e2f8dfb8019b8028e1b88481cc00fa3f18b876f68aecace8dd883/layer
curl -O https://our-registry.example.com/v1/images/1b63e01a661e2f8dfb8019b8028e1b88481cc00fa3f18b876f68aecace8dd883/layer
curl -O https://our-registry.example.com/v1/images/76bf39c7538d3ec1f6509cceb220dc648873fcd6db35b1fe6cdced69fbe04634/json
curl -O https://our-registry.example.com/v1/images/76bf39c7538d3ec1f6509cceb220dc648873fcd6db35b1fe6cdced69fbe04634/layer
curl -O https://our-registry.example.com/v1/images/76bf39c7538d3ec1f6509cceb220dc648873fcd6db35b1fe6cdced69fbe04634/layer
curl -O https://our-registry.example.com/v1/images/52490f4c441fc183eb6940bb87fb5a6c54d2784e171dc8c3692261ffa0c0cdf0/json
curl -O https://our-registry.example.com/v1/images/52490f4c441fc183eb6940bb87fb5a6c54d2784e171dc8c3692261ffa0c0cdf0/layer
curl -O https://our-registry.example.com/v1/images/52490f4c441fc183eb6940bb87fb5a6c54d2784e171dc8c3692261ffa0c0cdf0/layer
curl -O https://our-registry.example.com/v1/images/4711ab69462c595b12ca725a2a0ae3d95a77a5b11f435e4cfbb2da2c643e60ba/json
curl -O https://our-registry.example.com/v1/images/4711ab69462c595b12ca725a2a0ae3d95a77a5b11f435e4cfbb2da2c643e60ba/layer
curl -O https://our-registry.example.com/v1/images/4711ab69462c595b12ca725a2a0ae3d95a77a5b11f435e4cfbb2da2c643e60ba/layer
curl -O https://our-registry.example.com/v1/images/a3298b1086ddb50efab152693b82f87354bc179cea45a95329783f8e3170d6e8/json
curl -O https://our-registry.example.com/v1/images/a3298b1086ddb50efab152693b82f87354bc179cea45a95329783f8e3170d6e8/layer
curl -O https://our-registry.example.com/v1/images/a3298b1086ddb50efab152693b82f87354bc179cea45a95329783f8e3170d6e8/layer
curl -O https://our-registry.example.com/v1/images/6ec7f7a849995b715f6ed616d2daffc48eb2cc3b57dc04fda1733a5f0a504717/json
curl -O https://our-registry.example.com/v1/images/6ec7f7a849995b715f6ed616d2daffc48eb2cc3b57dc04fda1733a5f0a504717/layer
curl -O https://our-registry.example.com/v1/images/6ec7f7a849995b715f6ed616d2daffc48eb2cc3b57dc04fda1733a5f0a504717/layer
curl -O https://our-registry.example.com/v1/images/12286e1683de9c8165dc0a916d1227b2a22a880620c31c0bcc359b75a75affed/json
curl -O https://our-registry.example.com/v1/images/12286e1683de9c8165dc0a916d1227b2a22a880620c31c0bcc359b75a75affed/layer
curl -O https://our-registry.example.com/v1/images/12286e1683de9c8165dc0a916d1227b2a22a880620c31c0bcc359b75a75affed/layer
curl -O https://our-registry.example.com/v1/images/e65d790b892e9041963f20c24c2224a738650fdabc4fa70b3870edc44ec59156/json
curl -O https://our-registry.example.com/v1/images/e65d790b892e9041963f20c24c2224a738650fdabc4fa70b3870edc44ec59156/layer
curl -O https://our-registry.example.com/v1/images/e65d790b892e9041963f20c24c2224a738650fdabc4fa70b3870edc44ec59156/layer
curl -O https://our-registry.example.com/v1/images/68a857ae02fd4d97ddb43ba21dc3241adf718b1140a348ae064853177180bf2a/json
curl -O https://our-registry.example.com/v1/images/68a857ae02fd4d97ddb43ba21dc3241adf718b1140a348ae064853177180bf2a/layer
curl -O https://our-registry.example.com/v1/images/68a857ae02fd4d97ddb43ba21dc3241adf718b1140a348ae064853177180bf2a/layer
curl -O https://our-registry.example.com/v1/images/19fa323289990f5685a78bf523a4946aea93ec5dfb0fa4b7a2edf26ac42412aa/json
curl -O https://our-registry.example.com/v1/images/19fa323289990f5685a78bf523a4946aea93ec5dfb0fa4b7a2edf26ac42412aa/layer
curl -O https://our-registry.example.com/v1/images/19fa323289990f5685a78bf523a4946aea93ec5dfb0fa4b7a2edf26ac42412aa/layer
curl -O https://our-registry.example.com/v1/images/35dc9435323495e6dc048556128f5075b5869aede8f57af235a601b3f38ea353/json
curl -O https://our-registry.example.com/v1/images/35dc9435323495e6dc048556128f5075b5869aede8f57af235a601b3f38ea353/layer
curl -O https://our-registry.example.com/v1/images/35dc9435323495e6dc048556128f5075b5869aede8f57af235a601b3f38ea353/layer
curl -O https://our-registry.example.com/v1/images/9bfba14d5960242ff7acbc31e3229ebe3bdedc471d1ecf144807017c557108a3/json
curl -O https://our-registry.example.com/v1/images/9bfba14d5960242ff7acbc31e3229ebe3bdedc471d1ecf144807017c557108a3/layer
curl -O https://our-registry.example.com/v1/images/9bfba14d5960242ff7acbc31e3229ebe3bdedc471d1ecf144807017c557108a3/layer
curl -O https://our-registry.example.com/v1/images/3ca1dfe05755b7c1af9b7264f59ef442dd5451cce0ca403e7b0549f07db37af4/json
curl -O https://our-registry.example.com/v1/images/3ca1dfe05755b7c1af9b7264f59ef442dd5451cce0ca403e7b0549f07db37af4/layer
curl -O https://our-registry.example.com/v1/images/3ca1dfe05755b7c1af9b7264f59ef442dd5451cce0ca403e7b0549f07db37af4/layer
curl -O https://our-registry.example.com/v1/images/3e836ac53ec401cb88ca784045953929cfc619aac37bd8b669d7929ff371357e/json
curl -O https://our-registry.example.com/v1/images/3e836ac53ec401cb88ca784045953929cfc619aac37bd8b669d7929ff371357e/layer
curl -O https://our-registry.example.com/v1/images/3e836ac53ec401cb88ca784045953929cfc619aac37bd8b669d7929ff371357e/layer
curl -O https://our-registry.example.com/v1/images/e940f2494e01939e28b555a3ff6894754c1e79b2da3a06e06cad5c68ae66e94f/json
curl -O https://our-registry.example.com/v1/images/e940f2494e01939e28b555a3ff6894754c1e79b2da3a06e06cad5c68ae66e94f/layer
curl -O https://our-registry.example.com/v1/images/e940f2494e01939e28b555a3ff6894754c1e79b2da3a06e06cad5c68ae66e94f/layer
curl -O https://our-registry.example.com/v1/images/59a6acada74e3522a78270c1eb3de11964bd03168131b8bd9a52ea812c7c49ef/json
curl -O https://our-registry.example.com/v1/images/59a6acada74e3522a78270c1eb3de11964bd03168131b8bd9a52ea812c7c49ef/layer
curl -O https://our-registry.example.com/v1/images/59a6acada74e3522a78270c1eb3de11964bd03168131b8bd9a52ea812c7c49ef/layer
curl -O https://our-registry.example.com/v1/images/17bc599649b4a9c340b14320b1525b42c9def351813c230c9dcad0ce4f2edac4/json
curl -O https://our-registry.example.com/v1/images/17bc599649b4a9c340b14320b1525b42c9def351813c230c9dcad0ce4f2edac4/layer
curl -O https://our-registry.example.com/v1/images/17bc599649b4a9c340b14320b1525b42c9def351813c230c9dcad0ce4f2edac4/layer
curl -O https://our-registry.example.com/v1/images/451cb4e639736cadcf7a52111cf4483ee7211e06fdb4c6df882ab6ec1f0d7be2/json
curl -O https://our-registry.example.com/v1/images/451cb4e639736cadcf7a52111cf4483ee7211e06fdb4c6df882ab6ec1f0d7be2/layer
curl -O https://our-registry.example.com/v1/images/451cb4e639736cadcf7a52111cf4483ee7211e06fdb4c6df882ab6ec1f0d7be2/layer
curl -O https://our-registry.example.com/v1/images/29a0981d8008246a329fd1b7d19270011aaa78ce274d5751246078b24b6bbb5c/json
curl -O https://our-registry.example.com/v1/images/29a0981d8008246a329fd1b7d19270011aaa78ce274d5751246078b24b6bbb5c/layer
curl -O https://our-registry.example.com/v1/images/29a0981d8008246a329fd1b7d19270011aaa78ce274d5751246078b24b6bbb5c/layer
curl -O https://our-registry.example.com/v1/images/205a854f59a549f8b4edc0c14ec9f565415592a36e3a8b6fd7836bd07a98d904/json
curl -O https://our-registry.example.com/v1/images/205a854f59a549f8b4edc0c14ec9f565415592a36e3a8b6fd7836bd07a98d904/layer
curl -O https://our-registry.example.com/v1/images/205a854f59a549f8b4edc0c14ec9f565415592a36e3a8b6fd7836bd07a98d904/layer
curl -O https://our-registry.example.com/v1/images/3471990c34d0572d28bbf9a14c189900fb81d5a010ce4c4da2146224edaddfe3/json
curl -O https://our-registry.example.com/v1/images/3471990c34d0572d28bbf9a14c189900fb81d5a010ce4c4da2146224edaddfe3/layer
curl -O https://our-registry.example.com/v1/images/3471990c34d0572d28bbf9a14c189900fb81d5a010ce4c4da2146224edaddfe3/layer
curl -O https://our-registry.example.com/v1/images/746c70792250bddb7ca5ab4ec0cc85ac2858dcc32a7f44fd11e562733b3f484d/json
curl -O https://our-registry.example.com/v1/images/746c70792250bddb7ca5ab4ec0cc85ac2858dcc32a7f44fd11e562733b3f484d/layer
curl -O https://our-registry.example.com/v1/images/746c70792250bddb7ca5ab4ec0cc85ac2858dcc32a7f44fd11e562733b3f484d/layer
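A script like the one above can be reconstructed from the registry's access log with a one-liner. This is a sketch, not the exact command used: the field numbers assume nginx's default "combined" log format, and the log entries below are fabricated samples.

```shell
# Hypothetical reconstruction: turn access-log GET requests into curl
# commands, preserving request order. Field positions ($6 = method,
# $7 = path) assume the nginx "combined" log format; the sample log
# written here is made up for illustration.
cat > /tmp/access.log <<'EOF'
10.0.0.5 - - [29/Jul/2014:17:49:00 +1000] "GET /v1/images/f53e24aff1ea/json HTTP/1.1" 200 1437
10.0.0.5 - - [29/Jul/2014:17:49:01 +1000] "GET /v1/images/f53e24aff1ea/layer HTTP/1.1" 200 8852480
EOF
awk '$6 == "\"GET" && $7 ~ /^\/v1\/images\// {
  print "curl -O https://our-registry.example.com" $7
}' /tmp/access.log > /tmp/test.sh
cat /tmp/test.sh
```

Against the real access log this emits one curl line per request Docker made, which is how the script above preserves the pull's exact request sequence.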

Now, how long does this script take?

root@int-build-systems:~  # time ./test.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2380  100  2380    0     0   126k      0 --:--:-- --:--:-- --:--:--  145k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1437  100  1437    0     0  74340      0 --:--:-- --:--:-- --:--:-- 84529
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    23  100    23    0     0   1665      0 --:--:-- --:--:-- --:--:--  1916
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    23  100    23    0     0   2091      0 --:--:-- --:--:-- --:--:--  3285
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1466  100  1466    0     0  80456      0 --:--:-- --:--:-- --:--:-- 91625
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    23  100    23    0     0   2175      0 --:--:-- --:--:-- --:--:--  2875
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    23  100    23    0     0   2867      0 --:--:-- --:--:-- --:--:--  3833
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1544  100  1544    0     0   131k      0 --:--:-- --:--:-- --:--:--  150k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    23  100    23    0     0   2327      0 --:--:-- --:--:-- --:--:--  3285
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    23  100    23    0     0   2852      0 --:--:-- --:--:-- --:--:--  3833
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1529  100  1529    0     0   112k      0 --:--:-- --:--:-- --:--:--  135k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 8645k  100 8645k    0     0  87.1M      0 --:--:-- --:--:-- --:--:-- 89.8M
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 8645k  100 8645k    0     0  92.8M      0 --:--:-- --:--:-- --:--:-- 94.8M
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1665  100  1665    0     0   154k      0 --:--:-- --:--:-- --:--:--  203k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  162M  100  162M    0     0   131M      0  0:00:01  0:00:01 --:--:--  133M
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  162M  100  162M    0     0   159M      0  0:00:01  0:00:01 --:--:--  159M
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1540  100  1540    0     0   138k      0 --:--:-- --:--:-- --:--:--  167k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 4833k  100 4833k    0     0  74.3M      0 --:--:-- --:--:-- --:--:-- 77.3M
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 4833k  100 4833k    0     0   103M      0 --:--:-- --:--:-- --:--:--  109M
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1552  100  1552    0     0   161k      0 --:--:-- --:--:-- --:--:--  216k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   164  100   164    0     0  18935      0 --:--:-- --:--:-- --:--:-- 23428
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   164  100   164    0     0  22422      0 --:--:-- --:--:-- --:--:-- 27333
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1555  100  1555    0     0   158k      0 --:--:-- --:--:-- --:--:--  189k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   173  100   173    0     0  22709      0 --:--:-- --:--:-- --:--:-- 28833
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   173  100   173    0     0  22552      0 --:--:-- --:--:-- --:--:-- 24714
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1519  100  1519    0     0   169k      0 --:--:-- --:--:-- --:--:--  211k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  292k  100  292k    0     0  28.7M      0 --:--:-- --:--:-- --:--:-- 35.6M
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  292k  100  292k    0     0  28.9M      0 --:--:-- --:--:-- --:--:-- 35.6M
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1531  100  1531    0     0   174k      0 --:--:-- --:--:-- --:--:--  213k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1889  100  1889    0     0   239k      0 --:--:-- --:--:-- --:--:--  307k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1889  100  1889    0     0   248k      0 --:--:-- --:--:-- --:--:--  307k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1531  100  1531    0     0   175k      0 --:--:-- --:--:-- --:--:--  213k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    23  100    23    0     0   3052      0 --:--:-- --:--:-- --:--:--  3833
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    23  100    23    0     0   3115      0 --:--:-- --:--:-- --:--:--  4600
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1620  100  1620    0     0   183k      0 --:--:-- --:--:-- --:--:--  226k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1432  100  1432    0     0   185k      0 --:--:-- --:--:-- --:--:--  233k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1432  100  1432    0     0   171k      0 --:--:-- --:--:-- --:--:--  233k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1615  100  1615    0     0   168k      0 --:--:-- --:--:-- --:--:--  225k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   337  100   337    0     0  26731      0 --:--:-- --:--:-- --:--:-- 56166
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   337  100   337    0     0  36112      0 --:--:-- --:--:-- --:--:-- 56166
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1612  100  1612    0     0   170k      0 --:--:-- --:--:-- --:--:--  224k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   214  100   214    0     0  25295      0 --:--:-- --:--:-- --:--:-- 35666
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   214  100   214    0     0  27860      0 --:--:-- --:--:-- --:--:-- 35666
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1613  100  1613    0     0   184k      0 --:--:-- --:--:-- --:--:--  225k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   333  100   333    0     0  43218      0 --:--:-- --:--:-- --:--:-- 55500
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   333  100   333    0     0  45164      0 --:--:-- --:--:-- --:--:-- 55500
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1610  100  1610    0     0   176k      0 --:--:-- --:--:-- --:--:--  224k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   342  100   342    0     0  46035      0 --:--:-- --:--:-- --:--:-- 68400
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   342  100   342    0     0  22643      0 --:--:-- --:--:-- --:--:-- 26307
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1574  100  1574    0     0  96280      0 --:--:-- --:--:-- --:--:--  102k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  158M  100  158M    0     0   136M      0  0:00:01  0:00:01 --:--:--  136M
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  158M  100  158M    0     0   150M      0  0:00:01  0:00:01 --:--:--  150M
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1608  100  1608    0     0  69634      0 --:--:-- --:--:-- --:--:-- 80400
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  142M  100  142M    0     0   126M      0  0:00:01  0:00:01 --:--:--  127M
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  142M  100  142M    0     0   155M      0 --:--:-- --:--:-- --:--:--  156M
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1690  100  1690    0     0   120k      0 --:--:-- --:--:-- --:--:--  137k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 19.5M  100 19.5M    0     0   100M      0 --:--:-- --:--:-- --:--:--  101M
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 19.5M  100 19.5M    0     0   117M      0 --:--:-- --:--:-- --:--:--  118M
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1561  100  1561    0     0  86204      0 --:--:-- --:--:-- --:--:--  101k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    23  100    23    0     0   1330      0 --:--:-- --:--:-- --:--:--  1533
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    23  100    23    0     0   2674      0 --:--:-- --:--:-- --:--:--  3285
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1564  100  1564    0     0  94885      0 --:--:-- --:--:-- --:--:--  101k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    23  100    23    0     0   1776      0 --:--:-- --:--:-- --:--:--  2090
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    23  100    23    0     0   1940      0 --:--:-- --:--:-- --:--:--  2300
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1534  100  1534    0     0   146k      0 --:--:-- --:--:-- --:--:--  187k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 1929k  100 1929k    0     0  81.5M      0 --:--:-- --:--:-- --:--:-- 89.7M
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 1929k  100 1929k    0     0  86.4M      0 --:--:-- --:--:-- --:--:-- 99.1M
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1551  100  1551    0     0  87301      0 --:--:-- --:--:-- --:--:--  126k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   184  100   184    0     0  15011      0 --:--:-- --:--:-- --:--:-- 18400
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   184  100   184    0     0  23535      0 --:--:-- --:--:-- --:--:-- 30666
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1556  100  1556    0     0   107k      0 --:--:-- --:--:-- --:--:--  126k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    23  100    23    0     0   2873      0 --:--:-- --:--:-- --:--:--  3833
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    23  100    23    0     0   1782      0 --:--:-- --:--:-- --:--:--  2090
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1580  100  1580    0     0   109k      0 --:--:-- --:--:-- --:--:--  128k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    23  100    23    0     0   1971      0 --:--:-- --:--:-- --:--:--  2300
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    23  100    23    0     0   1713      0 --:--:-- --:--:-- --:--:--  1916

real    0m10.708s
user    0m4.032s
sys    0m3.560s

10 seconds.

I realise that pull is doing other work besides downloading the layers (untarring?), but should 'docker pull' really take over 12 times as long as downloading the same files? It seems to me something is slowing the process down considerably.
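One way to bound how much of that gap plain extraction could account for is to time untarring a layer on its own, separate from any network transfer. The sketch below uses a locally created tarball as a stand-in; a real test would use one of the layer files fetched by the curl script.

```shell
# Stand-in layer: ~1 MiB of random data packed into a tar archive
# (substitute a real downloaded layer file for a meaningful number).
mkdir -p /tmp/layer-src /tmp/layer-dst
dd if=/dev/urandom of=/tmp/layer-src/blob bs=1024 count=1024 2>/dev/null
tar -cf /tmp/layer.tar -C /tmp/layer-src .

# Time extraction alone; if this is fast, untar cost by itself cannot
# explain the 12x gap, pointing instead at how pull interleaves the two.
time tar -xf /tmp/layer.tar -C /tmp/layer-dst
```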

The problem is especially noticeable with large images: pulling them from a docker-registry to hosts takes a considerable length of time despite ample free bandwidth on the local network.

Edit: Replaced semicolons with newlines in bash script to enhance readability

@RickyCook

RickyCook commented Jul 29, 2014

+1 to this! It's really slowing down our deployments considerably

dmp42 added the Distribution label Jul 29, 2014

@dmp42

Contributor

dmp42 commented Jul 29, 2014

@shin- ping

@dmp42

Contributor

dmp42 commented Jul 30, 2014

@sammcj @RickyCook this is being looked at (cc @unclejack)

It's possible that TCP congestion control kicks in because we stall the TCP connection while performing the on-the-fly untar. Increasing buffer sizes might help, and @unclejack is testing that.

@sammcj you seem to have a very relevant test setup that demonstrates the problem. Would you mind helping test a custom build with a tentative fix on your side? If so, ping me on IRC.

sammcj commented Jul 30, 2014

Not a problem, I can pop on IRC tomorrow (it's 6PM here in Aus) - if you want to flick me a link to the Deb I'll install it tomorrow and let you know how it goes.


stevenschlansker Jul 30, 2014

Another thought here is that push and pull operations should really batch their requests. Especially given that metadata updates each generate their own tiny (<1KB) layer, a pull of 10 layers should grab all 10 in one request rather than sending off 10 serial requests. I filed this separately as #7335.


wyaeld commented Jul 31, 2014

@stevenschlansker I looked into this briefly a while back, and wondered why the client and server can't just sync a list of the layers they each have or need.

It's infuriating seeing this:

511136ea3c5a: Image already pushed, skipping 
8a1d8569bf87: Image already pushed, skipping 
2be841034d7d: Image already pushed, skipping 
99e40d806d07: Image already pushed, skipping 
ef83896b7fb9: Image already pushed, skipping 
cac543e58470: Image already pushed, skipping 
782ae334b98c: Image already pushed, skipping 
507b6e24989f: Image already pushed, skipping 
4d83bf01f90d: Image already pushed, skipping 
d4acdc069bc9: Image already pushed, skipping 
08ab262dccbb: Image already pushed, skipping 
11c1aa2e9c3d: Image already pushed, skipping 
828cae0f87ee: Image already pushed, skipping 
d980fd2f6b81: Image already pushed, skipping 
cbbfaad874b2: Image already pushed, skipping 
25ad02a7d969: Image already pushed, skipping 
7f8858e852f9: Image already pushed, skipping 
50b804cf7c78: Image already pushed, skipping 
d3851d098af9: Image already pushed, skipping 
ed4fafb8ae0c: Image already pushed, skipping 
6fb517d6c91d: Image already pushed, skipping 

take well over a minute.

Why can't the client just say "my push is going to be these layers", and the server say "I have a,b,c, only send me x,y,z"
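The negotiation described above could look roughly like this (a hypothetical sketch, not the registry API; `missingLayers` is an invented helper):

```go
package main

import "fmt"

// missingLayers models one round trip of the proposed protocol: the
// client announces every layer ID it intends to push, and the server
// replies with only the IDs it does not already have. One exchange
// replaces a per-layer "already pushed, skipping" check.
func missingLayers(announced []string, serverHas map[string]bool) []string {
	var need []string
	for _, id := range announced {
		if !serverHas[id] {
			need = append(need, id)
		}
	}
	return need
}

func main() {
	serverHas := map[string]bool{"511136ea3c5a": true, "8a1d8569bf87": true}
	fmt.Println(missingLayers([]string{"511136ea3c5a", "8a1d8569bf87", "d3851d098af9"}, serverHas))
	// prints: [d3851d098af9]
}
```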

Contributor

kuon commented Jul 31, 2014

I agree with @wyaeld that something must be done to speed things up. In addition to the HTTP batching issue, there are also #332 and #2333, which would help a lot with those "gazillion layers" images.

Contributor

timbot commented Aug 2, 2014

FWIW, I've noticed that when path bandwidth is high (for example, over a 1Gbit/s or 10Gbit/s local network), pull times are limited by the uncompress/untar steps. Some mechanism for solving #332 would go a long way there.

sammcj commented Aug 6, 2014

So as a summary so far we have a list of things to investigate:

  • Parallelisation of push/pull tasks
  • Investigate TCP congestion control
  • More efficient handling of already up-to-date layers
  • Batching operations

This is really impacting our ability to use Docker across our environments and it's currently giving Docker a bit of a bad name around the business due to the increase in time to release applications.

jwilder commented Aug 8, 2014

I looked into the push side of things by running a registry on the same host and benchmarking pushes to the same host to eliminate any remote network issues. I'm using docker 1.1.2 and latest master for benchmarking.

There appears to be a bottleneck when pushing a layer and calculating a tarsum.

The problem line is: https://github.com/docker/docker/blob/master/registry/registry.go#L643 and the two main issues are:

  1. tarsum.go uses sha256 for checksum which is pretty slow compared to md5.
  2. Doing gzip compression while calculating the tarsum is very slow. tarsum.go flushes the gzip writer very frequently, which might be limiting throughput. Perhaps it would be better to gzip the whole layer on the way out with HTTP content encoding instead.

Running a patched version of docker that disables compression and uses md5 for tarsum improved my docker push times from 41s to 7s (~6x improvement).

jwilder@jwilder-laptop:~/docker$ time docker push localhost:8080/whoami
The push refers to a repository [localhost:8080/whoami] (len: 1)
Sending image list
Pushing repository localhost:8080/whoami (1 tags)
511136ea3c5a: Image successfully pushed 
9bad880da3d2: Image successfully pushed 
25f11f5fb0cb: Image successfully pushed 
ebc34468f71d: Image successfully pushed 
2318d26665ef: Image successfully pushed 
ba5877dc9bec: Image successfully pushed 
09626508f783: Image successfully pushed 
f6821c648739: Image successfully pushed 
e86e7049f4a6: Image successfully pushed 
c8b49f867469: Image successfully pushed 
5856309b3a95: Image successfully pushed 
53b1c773d321: Image successfully pushed 
a41e1b029a65: Image successfully pushed 
Pushing tag for rev [a41e1b029a65] on {http://localhost:8080/v1/repositories/whoami/tags/latest}
2014/08/08 14:29:35 
real    0m40.811s
user    0m0.128s
sys 0m0.098s

to

jwilder@jwilder-laptop:~/docker$ time docker push localhost:8080/whoami
The push refers to a repository [localhost:8080/whoami] (len: 1)
Sending image list
Pushing repository localhost:8080/whoami (1 tags)
511136ea3c5a: Image successfully pushed 
9bad880da3d2: Image successfully pushed 
25f11f5fb0cb: Image successfully pushed 
ebc34468f71d: Image successfully pushed 
2318d26665ef: Image successfully pushed 
ba5877dc9bec: Image successfully pushed 
09626508f783: Image successfully pushed 
f6821c648739: Image successfully pushed 
e86e7049f4a6: Image successfully pushed 
c8b49f867469: Image successfully pushed 
5856309b3a95: Image successfully pushed 
53b1c773d321: Image successfully pushed 
a41e1b029a65: Image successfully pushed 
Pushing tag for rev [a41e1b029a65] on {http://localhost:8080/v1/repositories/whoami/tags/latest}
2014/08/08 14:27:28 
real    0m6.946s
user    0m0.124s
sys 0m0.090s

The docker image I'm pushing is about 391MB.

#5956 would allow md5 to be used for registry checksums and the diff below disables compression.

diff --git a/registry/registry.go b/registry/registry.go
index a590bb5..00ac3ce 100644
--- a/registry/registry.go
+++ b/registry/registry.go
@@ -640,7 +640,7 @@ func (r *Registry) PushImageLayerRegistry(imgID string, layer io.Reader, registr

        utils.Debugf("[registry] Calling PUT %s", registry+"images/"+imgID+"/layer")

-       tarsumLayer := &tarsum.TarSum{Reader: layer}
+       tarsumLayer := &tarsum.TarSum{Reader: layer, DisableCompression: true}
        h := sha256.New()
        h.Write(jsonRaw)
        h.Write([]byte{'\n'})
Member

thaJeztah commented Aug 8, 2014

@jwilder thank you!

Couple of questions:

  • I'd be interested to see the difference with only MD5 used and compression enabled vs. SHA256 without compression, to narrow down the biggest bottleneck.
  • In general, is there a requirement to use sha256 (which, afaik, is designed for cryptography, not for speed/checksums)? Git uses SHA1 and has a proven track record; would this be faster and still be sufficient?
  • MD5 might be faster, but has a bigger chance of collisions (is that a problem in this context?)
  • Would SHA1 be an option, and would it be possible to benchmark that?
jwilder commented Aug 8, 2014

Here are the times for SHA256, SHA1, and MD5, with and without Gzip.

Hash    Gzip  Time    Improvement
SHA256  Y     39.2s   -
SHA256  N     8.0s    4.9
SHA1    Y     37.5s   0.04
SHA1    N     6.7s    5.6
MD5     Y     39.1s   0.002
MD5     N     6.6s    5.9

Looks like Gzip (while tarsumming) is the main bottleneck. Interestingly, SHA1 was slightly faster than the other two.

SHA256 w/ Gzip (baseline)

jwilder@jwilder-laptop:~/docker$ time docker push localhost:8080/whoami:latest
The push refers to a repository [localhost:8080/whoami] (len: 1)
Sending image list
Pushing repository localhost:8080/whoami (1 tags)
511136ea3c5a: Image successfully pushed 
9bad880da3d2: Image successfully pushed 
25f11f5fb0cb: Image successfully pushed 
ebc34468f71d: Image successfully pushed 
2318d26665ef: Image successfully pushed 
ba5877dc9bec: Image successfully pushed 
09626508f783: Image successfully pushed 
f6821c648739: Image successfully pushed 
e86e7049f4a6: Image successfully pushed 
c8b49f867469: Image successfully pushed 
5856309b3a95: Image successfully pushed 
53b1c773d321: Image successfully pushed 
a41e1b029a65: Image successfully pushed 
Pushing tag for rev [a41e1b029a65] on {http://localhost:8080/v1/repositories/whoami/tags/latest}
2014/08/08 16:28:16 
real    0m39.162s
user    0m0.147s
sys 0m0.102s

MD5 w/ Gzip

jwilder@jwilder-laptop:~/docker$ time docker push localhost:8080/whoami:latest
The push refers to a repository [localhost:8080/whoami] (len: 1)
Sending image list
Pushing repository localhost:8080/whoami (1 tags)
511136ea3c5a: Image successfully pushed 
9bad880da3d2: Image successfully pushed 
25f11f5fb0cb: Image successfully pushed 
ebc34468f71d: Image successfully pushed 
2318d26665ef: Image successfully pushed 
ba5877dc9bec: Image successfully pushed 
09626508f783: Image successfully pushed 
f6821c648739: Image successfully pushed 
e86e7049f4a6: Image successfully pushed 
c8b49f867469: Image successfully pushed 
5856309b3a95: Image successfully pushed 
53b1c773d321: Image successfully pushed 
a41e1b029a65: Image successfully pushed 
Pushing tag for rev [a41e1b029a65] on {http://localhost:8080/v1/repositories/whoami/tags/latest}
2014/08/08 16:20:50 
real    0m39.104s
user    0m0.161s
sys 0m0.105s

SHA1 w/ Gzip

jwilder@jwilder-laptop:~/docker$ time docker push localhost:8080/whoami:latest
The push refers to a repository [localhost:8080/whoami] (len: 1)
Sending image list
Pushing repository localhost:8080/whoami (1 tags)
511136ea3c5a: Image successfully pushed 
9bad880da3d2: Image successfully pushed 
25f11f5fb0cb: Image successfully pushed 
ebc34468f71d: Image successfully pushed 
2318d26665ef: Image successfully pushed 
ba5877dc9bec: Image successfully pushed 
09626508f783: Image successfully pushed 
f6821c648739: Image successfully pushed 
e86e7049f4a6: Image successfully pushed 
c8b49f867469: Image successfully pushed 
5856309b3a95: Image successfully pushed 
53b1c773d321: Image successfully pushed 
a41e1b029a65: Image successfully pushed 
Pushing tag for rev [a41e1b029a65] on {http://localhost:8080/v1/repositories/whoami/tags/latest}
2014/08/08 16:23:30 
real    0m37.486s
user    0m0.151s
sys 0m0.107s

SHA256 w/o Gzip

jwilder@jwilder-laptop:~/docker$ time docker push localhost:8080/whoami:latest
The push refers to a repository [localhost:8080/whoami] (len: 1)
Sending image list
Pushing repository localhost:8080/whoami (1 tags)
511136ea3c5a: Image successfully pushed 
9bad880da3d2: Image successfully pushed 
25f11f5fb0cb: Image successfully pushed 
ebc34468f71d: Image successfully pushed 
2318d26665ef: Image successfully pushed 
ba5877dc9bec: Image successfully pushed 
09626508f783: Image successfully pushed 
f6821c648739: Image successfully pushed 
e86e7049f4a6: Image successfully pushed 
c8b49f867469: Image successfully pushed 
5856309b3a95: Image successfully pushed 
53b1c773d321: Image successfully pushed 
a41e1b029a65: Image successfully pushed 
Pushing tag for rev [a41e1b029a65] on {http://localhost:8080/v1/repositories/whoami/tags/latest}
2014/08/08 16:29:30 
real    0m8.015s
user    0m0.124s
sys 0m0.091s

SHA1 w/o Gzip

jwilder@jwilder-laptop:~/docker$ time docker push localhost:8080/whoami:latest
The push refers to a repository [localhost:8080/whoami] (len: 1)
Sending image list
Pushing repository localhost:8080/whoami (1 tags)
511136ea3c5a: Image successfully pushed 
9bad880da3d2: Image successfully pushed 
25f11f5fb0cb: Image successfully pushed 
ebc34468f71d: Image successfully pushed 
2318d26665ef: Image successfully pushed 
ba5877dc9bec: Image successfully pushed 
09626508f783: Image successfully pushed 
f6821c648739: Image successfully pushed 
e86e7049f4a6: Image successfully pushed 
c8b49f867469: Image successfully pushed 
5856309b3a95: Image successfully pushed 
53b1c773d321: Image successfully pushed 
a41e1b029a65: Image successfully pushed 
Pushing tag for rev [a41e1b029a65] on {http://localhost:8080/v1/repositories/whoami/tags/latest}
2014/08/08 16:31:19 
real    0m6.685s
user    0m0.122s
sys 0m0.088s

MD5 w/o Gzip

jwilder@jwilder-laptop:~/docker$ time docker push localhost:8080/whoami:latest
The push refers to a repository [localhost:8080/whoami] (len: 1)
Sending image list
Pushing repository localhost:8080/whoami (1 tags)
511136ea3c5a: Image successfully pushed 
9bad880da3d2: Image successfully pushed 
25f11f5fb0cb: Image successfully pushed 
ebc34468f71d: Image successfully pushed 
2318d26665ef: Image successfully pushed 
ba5877dc9bec: Image successfully pushed 
09626508f783: Image successfully pushed 
f6821c648739: Image successfully pushed 
e86e7049f4a6: Image successfully pushed 
c8b49f867469: Image successfully pushed 
5856309b3a95: Image successfully pushed 
53b1c773d321: Image successfully pushed 
a41e1b029a65: Image successfully pushed 
Pushing tag for rev [a41e1b029a65] on {http://localhost:8080/v1/repositories/whoami/tags/latest}
2014/08/08 16:32:36 
real    0m6.627s
user    0m0.123s
sys 0m0.088s
Member

thaJeztah commented Aug 8, 2014

Awesome! Sounds to me like very useful information on what to concentrate on. Not really sure who the maintainer is for this issue, @shin @unclejack ?

Contributor

unclejack commented Aug 8, 2014

@thaJeztah @jwilder This issue is being worked on and it's well known. I've already tried a few things, but it requires further investigation for the final fix.

Rather than coming up with a solution which actually fixes this problem in 20-50% of the cases and makes other problems worse (or introduces new ones), I'd like to have a proper fix. This takes a bit more time and a bit more work, but that will avoid regressions.

stevenschlansker Aug 9, 2014

For what it's worth, we found our application bound by GZIP time, and found LZ4 superior in both compression ratio and CPU usage.


Member

thaJeztah commented Aug 9, 2014

@unclejack 100% clear. I thought these benchmarks could be useful to narrow down the (biggest?) bottleneck.

@moby moby locked and limited conversation to collaborators Aug 11, 2014

@unclejack unclejack self-assigned this Aug 11, 2014

Contributor

dmp42 commented Aug 13, 2014

Heads up everyone.
This here #7542 has @unclejack 's first round of optimizations.
Any numeric feedback comparing docker mainline with this patch is welcome.
Please keep your comments there short and to the point - and link your benchmarks on gist/pastebin.

Thanks a lot!

Contributor

vbatts commented Aug 19, 2014

Also: while the sha256 hash will still be used for the deterministic IDs of layers, allowing faster hashes for transactions like push/pull is one of the use cases for #5956, the PR that allows generic hashes for the TarSum.

For the sake of your benchmarking, you ought to check on the blake2b hash as well.

Contributor

ewindisch commented Aug 20, 2014

I do not agree with supporting the weaker algorithms (particularly MD5) as it will not be clear to users that the images they've pulled may be compromised.

@LK4D4 LK4D4 closed this in #7542 Sep 3, 2014

@unclejack unclejack reopened this Sep 3, 2014

Contributor

LK4D4 commented Dec 18, 2014

Probably fixed by #9720

Contributor

jessfraz commented Feb 26, 2015

related #8188

Contributor

aaronlehmann commented Sep 2, 2015

With #15493 merged, I think we can finally consider this fixed.

Contributor

unclejack commented Jan 24, 2016

The code has been optimized through a few changes. I'll close this now: further performance improvements can still be made, but performance has improved enough to make closing this issue reasonable.

@unclejack unclejack closed this Jan 24, 2016

@thaJeztah thaJeztah added this to the 1.9.0 milestone Jan 24, 2016
