Problem: client potentially loses data #22
Comments
Are we agreed that the API should deliver a stream, not chunks? On Tue, Mar 25, 2014 at 11:53 PM, Ron Pedde notifications@github.com wrote:
I think from the client perspective, it should be a stream, yes.
OK, that's clear, thanks.
Closing this and replacing with #24.
If the buffer provided by a client is smaller than the size of the pending data chunk, the excess data in the chunk is thrown away.
The client read function really needs to keep an offset of the position of the current chunk, and continue to serve from it until the chunk is exhausted.
In a perfect world, we'd never return short reads, either. Reads shorter than the requested size happen, but are surprising anywhere except at end of file, I think. So a single read in excess of the pending chunk size should probably continue to pull chunks until the client-supplied buffer is full.
This might be something better deferred to after the raw client re-write, though. Just noting this for posterity.
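The fix described above can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the names (`chunk_reader_t`, `next_chunk`, `client_read`) and the mock in-memory chunk source are assumptions standing in for the real network-facing client. The reader keeps an offset into the current chunk, serves from it until the chunk is exhausted, and keeps pulling chunks until the caller's buffer is full, so excess chunk data is never discarded and short reads only occur at end of stream.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical reader state: the key addition is `off`, the read
   offset within the current chunk, so a partially consumed chunk
   is retained rather than thrown away. */
typedef struct {
    const char *data;   /* current chunk */
    size_t len;         /* length of current chunk */
    size_t off;         /* how much of it the client has consumed */
    int idx;            /* index into the mock chunk list below */
} chunk_reader_t;

/* Mock chunk source for illustration; real code would pull the next
   chunk from the server. Returns 0 when no more chunks exist. */
static int next_chunk(chunk_reader_t *r) {
    static const char *chunks[] = { "hello ", "world", "!" };
    if (r->idx >= 3)
        return 0;
    r->data = chunks[r->idx++];
    r->len = strlen(r->data);
    r->off = 0;
    return 1;
}

/* Copy up to buflen bytes into buf, spanning chunk boundaries.
   Returns the number of bytes copied; a short count means end of
   stream, matching the "short reads only at EOF" expectation. */
static size_t client_read(chunk_reader_t *r, char *buf, size_t buflen) {
    size_t copied = 0;
    while (copied < buflen) {
        if (r->off == r->len) {          /* current chunk exhausted */
            if (!next_chunk(r))
                break;                   /* end of stream */
        }
        size_t avail = r->len - r->off;
        size_t want  = buflen - copied;
        size_t n     = avail < want ? avail : want;
        memcpy(buf + copied, r->data + r->off, n);
        r->off  += n;
        copied  += n;
    }
    return copied;
}
```

With a 4-byte buffer against the mock chunks ("hello ", "world", "!"), successive calls yield "hell", "o wo", "rld!", then 0: no bytes are dropped at chunk boundaries, and the only short read is the final one.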