Docker pull/push with max concurrency limits. #22445
Conversation
Thanks! ping @aaronlehmann wdyt?

Thanks for working on this. I wasn't sure whether this should be a daemon-level flag or a flag that can be passed to the client and transmitted through the API. Now that I think about it some more, I think a client flag would be very unfriendly, because it would have to be specified on every pull. So I think the approach here makes sense. I'd be interested to hear opinions on this, though.
```diff
@@ -18,6 +18,11 @@ import (
 )

+const (
+	defaultMaxDownloadConcurrency = 5
+	defaultMaxUploadConcurrency   = 3
```
I noticed that this reverses the old defaults. The old code was:

```go
// maxDownloadConcurrency is the maximum number of downloads that
// may take place at a time for each pull.
maxDownloadConcurrency = 3

// maxUploadConcurrency is the maximum number of uploads that
// may take place at a time for each push.
maxUploadConcurrency = 5
```

I was a little surprised to see that there are more simultaneous uploads allowed than downloads, but I think I remember why it's like this. Registries often have high latencies to a storage backend, such as S3. At the end of a layer upload, there are multiple steps that happen to commit that upload, and because of the latency, this can take several seconds. If a push is uploading many small layers, it's faster overall to upload more at the same time, so less time is wasted between uploads waiting for the registry to do these commits.
Thanks @aaronlehmann, that was a typo. I must have been confused about which one is which when I was working on it.

I'm okay with flags. However, now we need validation and probably a way to specify "unlimited".
@aaronlehmann There is a client-side config file that could be used for this flag in the client. I like 'max' something as the flag name; that is the word I would first look for.
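The validation mentioned above could take several shapes. As a hypothetical sketch (this helper is not code from the PR), one way a daemon could validate a `--max-concurrent-*` value, rejecting anything below 1:

```go
package main

import "fmt"

// validateConcurrency is a hypothetical helper showing one way to
// validate a --max-concurrent-* flag: values below 1 are rejected.
// The "unlimited" case raised in review could instead be expressed
// with a sentinel value such as 0, if that design were chosen.
func validateConcurrency(name string, value int) error {
	if value < 1 {
		return fmt.Errorf("invalid value for %s: %d, must be a positive integer", name, value)
	}
	return nil
}

func main() {
	for _, v := range []int{3, 0} {
		if err := validateConcurrency("--max-concurrent-downloads", v); err != nil {
			fmt.Println("rejected:", err)
		} else {
			fmt.Println("accepted:", v)
		}
	}
}
```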
(force-pushed 194f9ae to 7a18b4e)
Thanks @LK4D4 I just updated the validation for the added flags. Please let me know if there are any issues.

Yes, was also thinking along the lines of
(force-pushed 7a18b4e to b9bd316)
Thanks @LK4D4 @aaronlehmann @thaJeztah I just updated the pull request to change the names and make
(force-pushed b9bd316 to 811ea10)
LGTM
hm, before it's merged; should this setting be "reloadable"? https://docs.docker.com/engine/reference/commandline/daemon/#configuration-reloading, so that it can be re-configured without restarting the daemon?

@thaJeztah: It already is; see the changes to the

Looks like this doc needs to be updated though - is that what you mean?

Oh! Missed that that was implemented; didn't look at the code. But yes, it should be mentioned in the docs, below this header https://github.com/docker/docker/blob/master/docs/reference/commandline/dockerd.md#configuration-reloading
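The reloadable settings being discussed can also be supplied through the daemon configuration file (conventionally `/etc/docker/daemon.json`), which the daemon re-reads on `SIGHUP`. A minimal sketch using the keys corresponding to the new flags:

```json
{
    "max-concurrent-downloads": 3,
    "max-concurrent-uploads": 5
}
```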
(force-pushed 811ea10 to 720595a)
Thanks @aaronlehmann @thaJeztah I just updated the docs. Please let me know if there are any other issues.

Haha, so no day of waiting is needed. LGTM

Sorry, but I really dislike the names. Explicit is better than short; can we please rename to
This fix tries to address issues raised in moby#20936 and moby#22443 where `docker pull` or `docker push` fails because of concurrent connection failures. Currently, the maximum number of concurrent connections is controlled by `maxDownloadConcurrency` and `maxUploadConcurrency`, which are hardcoded to 3 and 5 respectively. Therefore, in situations where network connections don't support multiple downloads/uploads, `docker push` or `docker pull` may fail. This fix changes `maxDownloadConcurrency` and `maxUploadConcurrency` to be adjustable by passing `--max-concurrent-uploads` and `--max-concurrent-downloads` to the `docker daemon` command. The documentation related to the docker daemon has been updated. Additional test cases have been added to cover the changes in this fix. This fix fixes moby#20936. This fix fixes moby#22443. Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
(force-pushed 113b3af to 7368e41)
Thanks @icecrime just updated the PR. Please let me know if there are any other issues.

re-LGTM

LGTM (

@yongtang Thanks a lot for the rapid update!

ping @albers @sdurrheimer think this needs changes to the completion scripts. Sorry, forgot to ping earlier ❤️

I'll add it to bash completion. Thanks for the ping.
This fix tries to address a separate issue raised during the review of another PR, moby#22445. The issue was raised because currently, daemon reload will skip the action if the field (e.g., `debug`, `labels`, etc.) is not specified. This potentially could cause some confusion: 1. Users will need to explicitly set the field if they want to unset it, and they cannot assume the `default` value any more. 2. Users have to check the previous state of the daemon in order to figure out the expected behavior of a reload, instead of relying on the config file they intend to reload. Without knowing the previous state of the daemon, the behavior will be unpredictable. In this fix, we always unset the value if it is not specified, so that users know what to expect based on the config file to be loaded, instead of the previous state of the daemon (before reload). Additional tests have been added to cover the changes in this fix. This fix is related to moby#22445. Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
- What I did

This fix tries to address issues raised in #20936 and #22443 where `docker pull` or `docker push` fails because of concurrent connection failures. Currently, the maximum number of concurrent connections is controlled by `maxDownloadConcurrency` and `maxUploadConcurrency`, which are set to 3 and 5 respectively. Therefore, in situations where network connections don't support multiple downloads/uploads, `docker push` or `docker pull` may fail.

- How I did it

This fix changes `maxDownloadConcurrency` and `maxUploadConcurrency` to be adjustable by passing `--max-concurrent-uploads` and `--max-concurrent-downloads` to the `docker daemon` command. The documentation related to the docker daemon has been updated.

- How to verify it

Additional test cases have been added to cover the changes in this fix.

- Description for the changelog

Add `--max-concurrent-uploads` and `--max-concurrent-downloads` to the `docker daemon` command so that `docker pull` and `docker push` can control the max number of concurrent connections during uploads or downloads.

- A picture of a cute animal (not mandatory but encouraged)
This fix fixes #20936. This fix fixes #22443.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>