I had a brief look, but couldn't find a discussion of downloading a large file in parallel, resumable chunks.
I have a use case that may involve passing large confidential blobs back and forth between the client and server. Before I found tus, I was considering using https://github.com/feross/webtorrent .
Upload bandwidth does seem more precious than download bandwidth, but I wonder if there wouldn't be some benefit to addressing the same concerns for both directions.
I imagine getting downloads to work would require any client-side solution to define a storage adapter, so that an official, storage-agnostic download algorithm could identify missing or corrupted chunks, etc.
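For illustration only, here is a rough sketch of the shape such a storage adapter might take; every name in it is hypothetical, and nothing like it exists in tus today:

```typescript
// Hypothetical sketch of a client-side storage adapter for resumable,
// chunked downloads. None of these names exist in tus; they only
// illustrate the shape such an interface might take.
interface ChunkStorageAdapter {
  // Persist one downloaded chunk at its byte offset within the file.
  writeChunk(offset: number, data: Uint8Array): Promise<void>;

  // Report which byte ranges are already stored, so a download
  // algorithm can request only the missing ones.
  storedRanges(): Promise<Array<{ start: number; end: number }>>;

  // Verify a stored chunk against an expected checksum, so corrupted
  // chunks can be detected and re-fetched.
  verifyChunk(offset: number, length: number, expectedSha256: string): Promise<boolean>;
}
```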
Thanks, that's an interesting question. So far, tus is focused on solving the problem of unreliable file uploads. Upload bandwidth is, as you said, scarcer, and if an upload breaks there's a risk the content never makes it to "the internet" at all. Downloads, by contrast, typically benefit from higher bandwidth and can be retried at any time, because the content already lives on highly available server infrastructure.
HTTP/1.1 should already let you retry where you left off using the Range header, and I believe the same mechanism can be used to download chunks in parallel. Programs such as GetRight 😱 or aria2 have leveraged this since the very early days.
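As a minimal sketch, assuming a server that advertises `Accept-Ranges: bytes` (the URL and file paths are placeholders), resuming a download with a Range request in Node.js could look like this:

```typescript
import { createWriteStream, existsSync, statSync } from "node:fs";
import { Readable } from "node:stream";
import { pipeline } from "node:stream/promises";

// Resume a download by asking the server for only the bytes we are
// still missing, via an HTTP Range request.
async function resumeDownload(url: string, dest: string): Promise<void> {
  // Start from however many bytes are already on disk.
  const offset = existsSync(dest) ? statSync(dest).size : 0;
  const res = await fetch(url, {
    headers: offset > 0 ? { Range: `bytes=${offset}-` } : {},
  });

  // 206 Partial Content means the server honored the Range header;
  // a plain 200 means it ignored it and is resending the whole file.
  if (offset > 0 && res.status !== 206) {
    throw new Error(`server did not honor the Range request (got ${res.status})`);
  }
  if (!res.ok || !res.body) {
    throw new Error(`download failed with status ${res.status}`);
  }

  // Append to the partial file rather than truncating it.
  await pipeline(
    Readable.fromWeb(res.body as import("node:stream/web").ReadableStream),
    createWriteStream(dest, { flags: offset > 0 ? "a" : "w" })
  );
}
```

The same mechanism covers parallel chunks: issue several requests with disjoint `Range: bytes=start-end` headers and write each response at its own offset.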
While I agree that this leaves something to be desired regarding, e.g., checksums and standardization, I also think the status quo is much less problematic here, and probably not worth complicating our protocol and all of its implementations over. With a dash more certainty I can say: at least not for 1.0.
We had this discussion previously and came to the conclusion that tus (currently) focuses on providing a solution for uploading, not downloading, content. I can't add a lot to @kvz's response since he mentioned all the relevant points. :)
You can read more at the end of the thread in #13.