Try to resume interrupted downloads in FileDownloader by using HTTP range requests #6791
Conversation
This is looking good; it is implemented elegantly and easy to understand with that recursion.
I would say it is missing coverage for the case where downloads go directly to memory, not to a file (the `path` argument is None). A unit test checking that will probably reveal a bug there (in this case the return value of the method is the actual contents of the transferred file). This is typically done for small Conan files only, but better to be complete, just in case. Could you please have a look?
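The in-memory resume case discussed above can be sketched as follows. This is only an illustration of the idea, not Conan's actual FileDownloader code: `download_with_resume`, `read_chunk`, and the flaky fake server are hypothetical names standing in for a ranged HTTP GET that may be cut short.

```python
import io


def download_with_resume(read_chunk, total_size, chunk_size=4):
    """Accumulate a download in memory, resuming from the last byte received.

    ``read_chunk(offset, size)`` stands in for an HTTP GET with a
    "Range: bytes=<offset>-" header; it may return fewer bytes than
    requested, simulating an interrupted connection.
    """
    buf = io.BytesIO()
    while buf.tell() < total_size:
        previous = buf.tell()
        buf.write(read_chunk(buf.tell(), chunk_size))
        if buf.tell() == previous:
            # No progress on this attempt: fall back to the usual retry/error path
            raise ConnectionError("no progress, giving up")
    return buf.getvalue()


# A flaky fake server: hands out at most 3 bytes per call.
DATA = b"hello world"

def flaky_read(offset, size):
    return DATA[offset:offset + min(size, 3)]


print(download_with_resume(flaky_read, len(DATA)))  # b'hello world'
```

The key point for the memory case is that the partial content has to be carried across attempts (here in the `BytesIO` buffer), which is what the extra parameter mentioned above would do for the recursive version.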
Thanks for your feedback. For the memory case I would need to carry the content over to the next iteration/recursion. Do you want me to introduce an additional parameter?
Actually, on second thought, I think it is good enough to avoid the recursion with resumable download when …
I see. I'll add one or two more test cases to cover the false path of the following condition:
try to resume at the last position and append to written file
retry as usual if not making any progress
…on if the filedownload doesn't make any progress.
The server sometimes doesn't respond with the "Content-Range" header; in that case we throw an error instead.
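That validation can be sketched like this (the function name and error messages here are invented for illustration and do not match the real FileDownloader): a server that honors a Range request answers 206 Partial Content and echoes the requested start offset in the Content-Range header (RFC 7233), so anything else means the resume attempt must fail rather than silently corrupt the file.

```python
def check_resume_supported(status_code, headers, offset):
    """Validate that a server actually honored our Range request.

    Raises ConnectionError if the server ignored the range (plain 200)
    or reported a Content-Range that does not start at our offset.
    """
    if status_code != 206:
        raise ConnectionError("server ignored the Range request "
                              "(status %s)" % status_code)
    content_range = headers.get("Content-Range", "")
    # Expected shape per RFC 7233: "bytes <start>-<end>/<total>"
    if not content_range.startswith("bytes %d-" % offset):
        raise ConnectionError("unexpected Content-Range: %r" % content_range)


# Accepted: a 206 whose Content-Range starts at the requested offset.
check_resume_supported(206, {"Content-Range": "bytes 1024-2047/2048"}, 1024)
```

Appending a 200 response's body to a partial file would duplicate the bytes already written, which is why a hard error is the safer choice here.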
Hi @flashdagger
Very nice work here, I think we are almost there. I had a couple of concerns/questions about the content length, so I have branched from your work here: https://github.com/memsharded/conan/tree/feature/download-resume-review
Please have a look at the comments and that branch (I tried to open a PR for review against your fork, but apparently I can't).
That branch also includes the minor changes to the Progress class to avoid private access.
Please let me know if that makes sense or not. Thanks!
Hello @memsharded
…roperly support content ranges
Thanks @flashdagger !
It is looking good to me. I will ask for another review to make sure; let's try to include this in the next 1.25.
Thanks for all the efforts.
Some feedback from my company: we tried a huge package download from our Artifactory Pro server (v6.18) over VPN. Although the connection reliably drops at 1 GB, Conan was able to continue from there.
I figured out that if you set the try_resume parameter to True by default, it even works within the retry loop (e.g. when there is a network error). But then you have to make sure the file does not exist initially...
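The interaction between resuming and the retry loop described above can be sketched as follows. This is a minimal, hypothetical model, not Conan's implementation: `download`, `fetch`, and the flaky fake transfer are invented names. It shows why the target file must not pre-exist (a leftover from an earlier run may be corrupt), while retries within one call can safely append.

```python
import os
import tempfile


def download(path, fetch, retries=3):
    """Retry loop where every attempt after the first resumes from the
    partially written file instead of starting over.

    ``fetch(path, append)`` stands in for the real network transfer; it
    raises ConnectionError on a dropped connection, after writing
    whatever it managed to receive.
    """
    if os.path.exists(path):
        os.remove(path)  # a leftover file from a previous run may be corrupt
    for attempt in range(retries):
        try:
            fetch(path, append=attempt > 0)  # resume on every retry
            return
        except ConnectionError:
            if attempt == retries - 1:
                raise


# A fake transfer that drops the connection halfway through, once.
DATA = b"0123456789"
calls = []

def flaky_fetch(path, append):
    mode = "ab" if append else "wb"
    start = os.path.getsize(path) if append else 0
    with open(path, mode) as f:
        if not calls:           # first attempt: write half, then "drop"
            calls.append(1)
            f.write(DATA[start:5])
            raise ConnectionError("connection dropped")
        f.write(DATA[start:])   # retry: append only the missing tail


with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "pkg.tgz")
    download(target, flaky_fetch)
    with open(target, "rb") as f:
        print(f.read())  # b'0123456789'
```

Deleting any pre-existing file up front is what makes `append=True` safe inside the loop; in real Conan the dirty-flag and file-removal mechanisms mentioned below play that role.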
Excellent, great news!
Oh yes, that would make sense. It would be a problem if the file was already there (and maybe corrupted) from previous Conan runs (not the retry loop), but in theory the mechanisms (the dirty flag, and removing files) should take care of this. Do you think then it is better to activate …
I'm fine with the implementation so far. If users need more robust solutions, they can provide more feedback and we can work on it. What do you think?
Yeah, agreed. Let's wait for feedback then.
I've been trying to find a flaw in the implementation, but it looks like any scenario I can think of will result in the same behavior as before, so …
Changelog: Feature: Resume interrupted file downloads if the server supports it.
Docs: Omit
Close #5708
Close #3610
…develop branch, documenting this one.