
150 parallel pull #352

Closed
wants to merge 7 commits into from

Conversation

@lfn3 (Author) commented Apr 8, 2013

First half of #150

It works as expected with the 'docker-ut' and 'base' images.
I'd like to test it against something that will stress it a little more, but I can't think of an easy way of doing this at present.

byAtlas added 2 commits April 8, 2013 21:28
Should now continue pulling the rest of the image even if one part fails.
@cespare (Contributor) commented Apr 8, 2013

You should probably remove the relevant // FIXME as part of the change.

errChan := make(chan error)
maxWorkers := 4

for i := 0; i < maxWorkers; i++ {
cespare (Contributor):

What's the purpose of limiting the number of workers to 4? Spawning a whole bunch of goroutines is fine. Maybe you're trying to limit the number of concurrent HTTP requests?

lfn3 (Author):

Yeah, it's to limit the number of concurrent requests.
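(For illustration, here is a minimal sketch of the bounded-concurrency pattern being discussed: a counting semaphore built from a buffered channel caps the number of in-flight downloads at 4 while still spawning one goroutine per layer. `fetchLayer` is a hypothetical stand-in for the real per-layer HTTP request, not the PR's actual function.)

```go
package main

import (
	"fmt"
	"sync"
)

// fetchLayer is a hypothetical stand-in for the real per-layer download.
func fetchLayer(id string) error {
	fmt.Println("pulling", id)
	return nil
}

func main() {
	layers := []string{"a", "b", "c", "d", "e"}
	maxWorkers := 4 // cap on concurrent requests, as in the patch

	sem := make(chan struct{}, maxWorkers) // counting semaphore
	var wg sync.WaitGroup

	for _, id := range layers {
		wg.Add(1)
		go func(id string) {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot (blocks when 4 are busy)
			defer func() { <-sem }() // release the slot when done
			fetchLayer(id)
		}(id)
	}
	wg.Wait()
}
```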


var errMsg error

for i := 0; i < expectedResponseCount; i++ {
cespare (Contributor):
Can't you range over errChan instead of doing this loop? (And then you can get rid of expectedResponseCount.)

lfn3 (Author):

I don't think so. I'd have to close the channel from one of the goroutines, and I can't tell if another is still in progress.

cespare (Contributor):

Ah sorry, I didn't read this closely enough. My mistake.

@cespare (Contributor) commented Apr 8, 2013

OK, I'm done :)

@lfn3 (Author) commented Apr 8, 2013

Lol. Thanks. Appreciate the help.

}
errMsg = fmt.Errorf("%s\n%s", errMsg, err)
}
cespare (Contributor):

Maybe a simple []string and strings.Join would be nicer here? You wouldn't allocate as many new strings, you wouldn't have the overhead of reflection and the code would be more straightforward to understand, by not having the "is it initialized yet?" check on every iteration.

errors := []string{"The following errors..."}
for ... {
  // append if error
}

if len(errors) > 1 {
  return errors.New(strings.Join(errors, "\n"))
}

lfn3 (Author):

Good call. Updated

Should make it easier to read
@shykes (Contributor) commented Apr 11, 2013

Hey @byatlas, thanks for the contribution. It seems to work well, and definitely speeds up downloads :)

I found one problem: the progress bars write over each other. Not sure what's the simplest way to solve that: maybe add an optional "position" parameter to ProgressReader, which would add N newlines before writing? This is how pv(1) does it for multiple streams (see pv -c -N).

@shykes (Contributor) commented Apr 17, 2013

Ping :)

@lfn3 (Author) commented Apr 26, 2013

Sorry for the radio silence, I just got a new job so that's been keeping me busy.

Anyway, I took a swing at this, wrapping an io.Writer both with newlines and with console escape codes. Due to the whole async thing, this can cause the cursor to jump around like crazy and overwrite random lines.
Not particularly useful.

Maybe we just ignore percentage counters and go 'downloading layer %d of %d'?

@ghost ghost assigned samalba Jun 1, 2013
@shykes (Contributor) commented Jun 1, 2013

So, there is now a new HTTP API, and it streams progress using JSON. We can use the JSON stream to interleave updates about multiple downloads. This will need a small change to the API (specifically, using "progress_FOO" and "progress_BAR" as JSON keys instead of just "progress").

@byatlas, if you're still motivated to do this, let me know and I will keep this open for you to update it. If you don't have the time, or will get around to it later, let's close it for now and re-open it when it makes sense.

Thanks for the contribution and sorry we couldn't take it in as is.

@lfn3 (Author) commented Jun 2, 2013

I'll close it for now. I might come up with something in a bit, but we'll see.

And no worries, maybe next time.

@lfn3 lfn3 closed this Jun 2, 2013
TomSweeneyRedHat pushed a commit to TomSweeneyRedHat/moby that referenced this pull request Aug 17, 2019
thaJeztah pushed a commit to thaJeztah/docker that referenced this pull request Sep 20, 2019
…lid_linked_container

[19.03 backport] Return "invalid parameter" when linking to non-existing container