Conversation
    if err != nil {
        return nil, fmt.Errorf("failed to resolve blob: %w", err)
    }
    var chunkSize int64 = 1024 * 1024 * 100 // 100MB
maybe make configurable?
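A configurable chunk size could be done with a functional option so the 100MB default stays the common case. This is just a sketch; `uploaderOpts` and `WithChunkSize` are made-up names, not from this PR:

```go
package main

import "fmt"

const defaultChunkSize int64 = 1024 * 1024 * 100 // 100MB, as in the PR

// uploaderOpts and Option are hypothetical names for illustration.
type uploaderOpts struct {
	chunkSize int64
}

type Option func(*uploaderOpts)

// WithChunkSize overrides the default chunk size.
func WithChunkSize(n int64) Option {
	return func(o *uploaderOpts) { o.chunkSize = n }
}

func newOpts(opts ...Option) uploaderOpts {
	o := uploaderOpts{chunkSize: defaultChunkSize}
	for _, opt := range opts {
		opt(&o)
	}
	return o
}

func main() {
	fmt.Println(newOpts().chunkSize)                       // 104857600
	fmt.Println(newOpts(WithChunkSize(1 << 20)).chunkSize) // 1048576
}
```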
    }, nil
    }

    type blobReader struct {
Since I copied this from the client package, maybe it's worth moving to a common place?
    var chunkSize int64 = 1024 * 1024 * 100 // 100MB
    buf := make([]byte, chunkSize)
    pr, pw := io.Pipe()
    go func() {
A goroutine here honestly feels kind of wrong, especially since cancellation then becomes more complicated 🤔
Shouldn't this be wired up directly into the Read method of the returned BlobReader instead? Especially since you've had to define your own BlobReader type anyways. 🤔
and because this is an io.Pipe, it'll naturally back up and block when consumers aren't pulling fast enough anyways, which is exactly the behavior you'd have with this as part of the Read method instead (and the "leftover buffer" being explicit on that end instead of implicit in the hung pipe)
    dgstr.Hash().Write(buf[:n])

    for i := 0; i < 3; i++ { // try writing each chunk three times
        _, err = bw.Write(buf[:n])
It's too bad the BlobWriter interface doesn't have a way for you to be explicit about the offset, etc here so that the interfaces could make sure nothing happens out of order 😞 (so you don't have to get all the way to the end of a 100GiB blob before it tells you that something had a hiccup in the middle)
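To make the idea concrete, an offset-aware writer could reject out-of-order chunks immediately, so each retry restates where it thinks it is. `OffsetBlobWriter` and `memWriter` are hypothetical here; the actual BlobWriter interface has no such method:

```go
package main

import "fmt"

// OffsetBlobWriter is a hypothetical variant of BlobWriter where the
// caller states the offset explicitly on every write.
type OffsetBlobWriter interface {
	WriteAtOffset(p []byte, off int64) (int, error)
}

// memWriter is an in-memory stand-in that enforces ordering.
type memWriter struct {
	data []byte
}

func (w *memWriter) WriteAtOffset(p []byte, off int64) (int, error) {
	if off != int64(len(w.data)) {
		// Out-of-order write: fail now, not at the end of a 100GiB blob.
		return 0, fmt.Errorf("write at offset %d, expected %d", off, len(w.data))
	}
	w.data = append(w.data, p...)
	return len(p), nil
}

// writeChunk retries a single chunk, restating the offset each attempt,
// mirroring the PR's three-attempt loop.
func writeChunk(w OffsetBlobWriter, p []byte, off int64) error {
	var err error
	for i := 0; i < 3; i++ {
		if _, err = w.WriteAtOffset(p, off); err == nil {
			return nil
		}
	}
	return err
}

func main() {
	w := &memWriter{}
	if err := writeChunk(w, []byte("abc"), 0); err != nil {
		panic(err)
	}
	if err := writeChunk(w, []byte("def"), 3); err != nil {
		panic(err)
	}
	// A stale or wrong offset surfaces immediately as an error.
	err := writeChunk(w, []byte("xyz"), 99)
	fmt.Println(err != nil)     // true
	fmt.Println(string(w.data)) // abcdef
}
```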