
Make compression when pushing to a private registry optional #1266

Open
JeremyGrosser opened this Issue Jul 22, 2013 · 31 comments

@JeremyGrosser
Contributor

JeremyGrosser commented Jul 22, 2013

When I "docker push" to a private repository hosted on the local network or host, the upload process is entirely bound by the speed of compressing each layer with xz, using a single CPU.

I'd like an option to disable compression when pushing, or to specify an alternate compression command (pigz http://zlib.net/pigz/ for example).

@shykes

Collaborator

shykes commented Jul 22, 2013

That makes sense. It will be possible once the image checksum is image-independent (which is scheduled for 0.6). 'docker push' already auto-detects compression, which will make this easier.

@shykes

Collaborator

shykes commented Aug 24, 2013

As promised the checksum system has been changed in 0.6. We can now schedule this for 0.7.

@crosbymichael

Member

crosbymichael commented Oct 9, 2013

Tagged as an easy fix, help wanted.

@mrallen1

Contributor

mrallen1 commented Nov 12, 2013

I'll take this issue

@mrallen1

Contributor

mrallen1 commented Nov 13, 2013

Having gone through the code last night, I'm confused: it looks like compression is already disabled on push operations. @shykes @crosbymichael -- is the idea on this issue to add a CLI flag to the push command to specify a compression level when pushing layers?

@mrallen1

Contributor

mrallen1 commented Nov 22, 2013

ping @shykes and/or @crosbymichael

@crosbymichael

Member

crosbymichael commented Nov 22, 2013

Yes, we use uncompressed tars when creating layers. I'd like to keep this consistent and use the same compression throughout, because the Go tar libraries have issues with not supporting all compressors.

@crosbymichael

Member

crosbymichael commented Dec 14, 2013

I think we can close this now because we do not compress layers anymore on push.

@crosbymichael

Member

crosbymichael commented Dec 14, 2013

Reopening. I guess I was wrong. We work with uncompressed layers locally, but the final push to the registry is a compressed tar.

@crosbymichael crosbymichael reopened this Dec 14, 2013

@vgeta

Contributor

vgeta commented Jan 7, 2014

@crosbymichael ,

Should we provide flags for picking one of the compression modes "gzip/lzw/zlib/none" (http://golang.org/pkg/compress/)?

@vgeta

Contributor

vgeta commented Jan 10, 2014

@crosbymichael crosbymichael removed this from the 0.7.1 milestone May 15, 2014

@crosbymichael

Member

crosbymichael commented May 15, 2014

ping @unclejack, did you do any work on this? Just checking.

@shin- shin- added the Distribution label Jul 1, 2014

@bobrik

Contributor

bobrik commented Jan 19, 2015

Link to discussion in another issue: #9060 (comment)

Proposed change:

docker push --compression-algo=gzip --compression-level=3 my-image
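As a rough sketch of how the proposed (hypothetical, not yet existing) flags could be parsed and validated, using only the standard flag package:

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

// parsePushFlags handles the hypothetical --compression-algo and
// --compression-level options from the proposal above.
func parsePushFlags(args []string) (algo string, level int, err error) {
	fs := flag.NewFlagSet("push", flag.ContinueOnError)
	fs.StringVar(&algo, "compression-algo", "gzip", "compression algorithm: gzip or none")
	fs.IntVar(&level, "compression-level", 6, "compression level 0-9")
	if err = fs.Parse(args); err != nil {
		return
	}
	if algo != "gzip" && algo != "none" {
		err = fmt.Errorf("unsupported compression algorithm %q", algo)
		return
	}
	if level < 0 || level > 9 {
		err = fmt.Errorf("compression level %d out of range 0-9", level)
	}
	return
}

func main() {
	algo, level, err := parsePushFlags([]string{"--compression-algo=gzip", "--compression-level=3"})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("algo=%s level=%d\n", algo, level)
}
```

The daemon side would still need API changes to carry these values through; this only shows the client-side validation.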
@shouze

Contributor

shouze commented Apr 10, 2015

👍 for compression

@nicocesar


nicocesar commented May 12, 2015

+1 for @bobrik's extra params

@thaJeztah

Member

thaJeztah commented Jun 7, 2015

There's also this PR: #10181

@bobrik

Contributor

bobrik commented Jun 8, 2015

@thaJeztah you linked to my issue. What PR are you talking about?

@thaJeztah

Member

thaJeztah commented Jun 8, 2015

@bobrik oh, shucks you're right, I thought there was a PR linked from that, but looks like I'm mistaken :D

@andrask


andrask commented Aug 3, 2015

I see gzipping is hard-coded and there is no way to turn it off: https://github.com/docker/docker/blob/master/graph/graph.go#L337
Actually, this is ruining performance on local networks. We have hosts with gigabit Ethernet and SSDs, yet we are bound by a single core of an eight-core CPU? Copying the 1 GiB layer files from one host to the other would take about 10 seconds; now we are waiting more than 2.5 minutes. I hope it will get better. I don't mind using more bandwidth and disk space: the push speed to private registries should be the user's decision.

@shouze

Contributor

shouze commented Aug 4, 2015

@andrask what you're asking for is a way to set the compression level, between 0 (none) and 9 (highest).

@unclejack

Contributor

unclejack commented Dec 7, 2015

Would it not be better to use different compression altogether, rather than remove it completely?

@bobrik

Contributor

bobrik commented Dec 7, 2015

--compression-algo=none :)

@andrask


andrask commented Dec 8, 2015

@shouze as far as I understood from examining the Docker code, the client already adapts to the compression of the incoming data automatically, even when it's uncompressed. So why is it impossible to create uncompressed layers/images and upload those to the registry? I care less about the network overhead than about the time it takes to compute the compression. In a CI system, the most crucial thing is to have images ready ASAP.

@bobrik This would be great!

@ghost


ghost commented Jan 26, 2016

Any update on this, guys? We have a use case where speed is crucial, and we want to disable compression because it takes a long time compared to the download. Would really appreciate it if this could be released soon. I like the idea proposed by @bobrik: --compression-algo=none :)

@ghost


ghost commented Jan 26, 2016

@andrask I am in the same boat as you. Were you able to find a workaround for this?

@andrask


andrask commented Jan 26, 2016

@dupperinc I tried to modify the corresponding code in the Docker source to not compress anything, but I'm not a Go programmer and couldn't get it working in a day.

The only workaround currently is to minimize the image sizes. It's still significant overhead and I'd love this feature to be implemented.

I have also thought about leaving out the registry completely and implementing my own image distribution mechanism, but the image size reduction worked well enough that I don't feel the urge to plunge into this mud just now.


@ghost


ghost commented Jan 27, 2016

@andrask how did you do image size reduction?

@andrask


andrask commented Jan 27, 2016

@aidansteele


aidansteele commented Oct 21, 2016

This is especially relevant with the introduction of Windows Server Core images and Windows Server 2016 being available on AWS. Running docker pull microsoft/windowsservercore takes 12 minutes. Approximately one minute is spent downloading at ~900 Mbit/s; the rest of the time is spent decompressing the images.

justincormack added a commit to justincormack/docker that referenced this issue Apr 18, 2017

Move Go code to `src/cmd`
This does not get everything where we want it finally, see moby#1266
nor the optimal way of building, but it gets it out of top level.

Added instructions to build if you have a Go installation.

Not moving `vendor` yet.

Signed-off-by: Justin Cormack <justin.cormack@docker.com>

@GaretJax

Contributor

GaretJax commented Aug 23, 2017

I'd like to work on this as we really need it to get some better performance out of our deployment pipeline.
I have a working fork which simply disables compression by setting the gzip compression level to gzip.NoCompression, but I have some questions:

  1. Do we want to handle just the levels which can be passed to the gzip writer (https://golang.org/pkg/compress/gzip/#pkg-constants) or do we want to support other compression algorithms altogether?
  2. Where should the level/algorithm be specified? Via the CLI (which then also requires changes to the API) or in the configuration? In the latter case, would it be per daemon or per registry (more appropriate)?

/cc @crosbymichael @shykes

@GaretJax

Contributor

GaretJax commented Aug 23, 2017

#dibs
