
Make compression when pushing to a private registry optional #1266

Open
JeremyGrosser opened this issue Jul 22, 2013 · 47 comments

Comments


@JeremyGrosser JeremyGrosser commented Jul 22, 2013

When I "docker push" to a private repository hosted on the local network or host, the upload process is entirely bound by the speed of compressing each layer with xz, using a single CPU.

I'd like an option to disable compression when pushing, or to specify an alternate compression command (pigz http://zlib.net/pigz/ for example).


@shykes shykes commented Jul 22, 2013

That makes sense. It will be possible once the image checksum is image-independent (which is scheduled for 0.6). 'docker push' already auto-detects compression, which will make this easier.


@shykes shykes commented Aug 24, 2013

As promised the checksum system has been changed in 0.6. We can now schedule this for 0.7.


@crosbymichael crosbymichael commented Oct 9, 2013

Tagged as an easy fix, help wanted.


@jadeallenx jadeallenx commented Nov 12, 2013

I'll take this issue


@jadeallenx jadeallenx commented Nov 13, 2013

Having gone through the code last night, I am confused - it looks like compression is disabled on push operations already. @shykes @crosbymichael -- is the idea on this issue to add a CLI flag to the push command to specify a compression level when pushing layers?


@jadeallenx jadeallenx commented Nov 22, 2013

ping @shykes and/or @crosbymichael


@crosbymichael crosbymichael commented Nov 22, 2013

Yes, we use uncompressed tars when tarring layers. I would like to keep this consistent and use the same compression throughout, because there are issues with the Go tar libs not supporting all compressors.


@crosbymichael crosbymichael commented Dec 14, 2013

I think we can close this now because we do not compress layers anymore on push.


@crosbymichael crosbymichael commented Dec 14, 2013

Reopening. I guess I was wrong. We work with uncompressed layers, but the final push to the registry is a compressed tar.

@crosbymichael crosbymichael reopened this Dec 14, 2013

@vgeta vgeta commented Jan 7, 2014

@crosbymichael ,

Should we provide flags for picking one of the compression modes "gzip/lzw/zlib/none" (http://golang.org/pkg/compress/)?


@vgeta vgeta commented Jan 10, 2014

@crosbymichael crosbymichael removed this from the 0.7.1 milestone May 15, 2014

@crosbymichael crosbymichael commented May 15, 2014

ping @unclejack did you do any work on this? Just checking


@bobrik bobrik commented Jan 19, 2015

Link to discussion in another issue: #9060 (comment)

Proposed change:

docker push --compression-algo=gzip --compression-level=3 my-image


@shouze shouze commented Apr 10, 2015

👍 for compression


@nicocesar nicocesar commented May 12, 2015

+1 for @bobrik's extra params


@thaJeztah thaJeztah commented Jun 7, 2015

There's also this PR: #10181


@bobrik bobrik commented Jun 8, 2015

@thaJeztah you linked to my issue. What PR are you talking about?


@thaJeztah thaJeztah commented Jun 8, 2015

@bobrik oh, shucks you're right, I thought there was a PR linked from that, but looks like I'm mistaken :D


@andrask andrask commented Aug 3, 2015

I see gzipping is burned into the code and there is no way to turn it off: https://github.com/docker/docker/blob/master/graph/graph.go#L337
Actually, this is ruining performance on local networks. We have hosts with gigabit Ethernet and SSDs, and we are bound by a single core of an octa-core CPU? Copying the 1 GB layer files from one host to the other would take about 10 seconds. Now we are waiting more than 2.5 minutes. I hope it will get better. I don't mind using more bandwidth and disk space. Pushing speed to private repos should depend on the user's decisions.


@shouze shouze commented Aug 4, 2015

@andrask what you're asking for is a way to set the compression level, between 0 (none) and 9 (highest)


@microstacks microstacks commented Jan 27, 2016

@andrask how did you do image size reduction?


@andrask andrask commented Jan 27, 2016


@aidansteele aidansteele commented Oct 21, 2016

This is especially relevant with the introduction of Windows Server Core images and Windows 2016 being available on AWS. Running docker pull microsoft/windowsservercore takes 12 minutes. Approximately one minute is spent downloading at ~900 Mbit/s; the rest of the time goes to decompressing the images.

justincormack added a commit to justincormack/docker that referenced this issue Apr 18, 2017
This does not get everything where we want it finally, see moby#1266
nor the optimal way of building, but it gets it out of top level.

Added instructions to build if you have a Go installation.

Not moving `vendor` yet.

Signed-off-by: Justin Cormack <justin.cormack@docker.com>

@GaretJax GaretJax commented Aug 23, 2017

I'd like to work on this as we really need it to get some better performance out of our deployment pipeline.
I have a working fork which simply disables compression by setting the gzip compression level to gzip.NoCompression, but I have some questions:

  1. Do we want to handle just the levels which can be passed to the gzip writer (https://golang.org/pkg/compress/gzip/#pkg-constants) or do we want to support other compression algorithms altogether?
  2. Where should the level/algorithm be specified? Via the CLI (which then also requires changes to the API) or in the configuration? In the latter case, would it be per daemon or per registry (more appropriate)?

/cc @crosbymichael @shykes


@GaretJax GaretJax commented Aug 23, 2017

#dibs


@rushtehrani rushtehrani commented Feb 26, 2019

Is there any work being done on this or is there already a flag for disabling compression that I'm missing?


@gcs278 gcs278 commented Mar 25, 2019

I'm also interested in the status of this change. Any update?


@Smashmint Smashmint commented Mar 26, 2019

Same for me


@coding-horror coding-horror commented Jun 28, 2019

Seriously, guys and gals, there's STILL no way to say "hey, I don't care about compression, give me max speed when pushing images to my local registry"? 🤦‍♀️


@aidansteele aidansteele commented Jun 28, 2019

@coding-horror It’s especially bad when I can pull enormous Windows images from the registry at >2 Gbps and then spend 90% of the time unzipping the layers.


@thaJeztah thaJeztah commented Jun 28, 2019


@YorickPeterse YorickPeterse commented Aug 2, 2019

At least on Windows, the performance (or lack thereof) of compression is extremely noticeable. For example, on both Windows Server 2016 and 2019 it can easily take hours for compression to run. On my Windows 2016 server I was building a Docker image at least 1 GB in size. One of the layers, which added maybe 600 MB or so, took well over 24 hours to compress, after which I just terminated it since that was way too long.

I have been experimenting with different approaches to get layer sizes smaller, but it seems that Docker even struggles to compress a few hundred megabytes in a reasonable time frame.

All of this is especially wasteful in multistage builds, as Docker will spend time compressing layers of one stage, only to throw them away later, then compress that layer again. Worse, all of this is done using just a single CPU core, instead of making use of multiple cores (if present).


@bduclaux bduclaux commented Sep 19, 2019

Any update on this one? The gzip performance impact is very annoying when you have to deal with large containers.


@programster programster commented Nov 26, 2019

bump


@olafik olafik commented Apr 14, 2020

+1

DISABLING compression would provide a huge performance boost to our CI pipeline.
I imagine it could be beneficial to anyone dealing with larger Docker images and a local registry (e.g. self-hosted GitLab users).


@1-1is0 1-1is0 commented Jun 21, 2020

I'm having the same problem; disabling compression would be a huge speed boost for CI.
When most of the images are hosted locally, bandwidth is not a big problem, but many tasks are CPU-bound.


@tuxity tuxity commented Aug 20, 2020

Bump!

I would also like to disable compression for my CI.

Does anyone have a custom build of Docker with #34610 in the meantime?


@alitoufighi alitoufighi commented Nov 21, 2020

@thaJeztah Is this still looking for a contributor, or is somebody already working on it?


@YanDavKMS YanDavKMS commented Jan 17, 2021

I also would like to disable compression


@haampie haampie commented Feb 5, 2021

I'm interested in this too


@ipsingh06 ipsingh06 commented Mar 20, 2021

+1 This would be a huge boost to CI/CD pipelines.
