
Retry on ERROR #506

Closed
Sepero opened this issue Oct 31, 2012 · 6 comments

Comments

Sepero commented Oct 31, 2012

I have an imperfect internet connection, so I set the options -c (continue) and -R 100 (retry 100 times). Unfortunately, if the program hits an ERROR, it decides it's not supposed to retry any more and quits.

[youtube] Pq_JFaqsNYI: Downloading video webpage
[youtube] Pq_JFaqsNYI: Downloading video info webpage
[youtube] Pq_JFaqsNYI: Extracting video information
[download] Destination: Bioshock_Playthrough_pt_7-Pq_JFaqsNYI.flv
[download] 59.5% of 115.43M at 88.34k/s ETA 09:01
ERROR: unable to download video

It would be great if youtube-dl didn't give up at 59.5% of the download, and instead exhausted its retries first (possibly with a short two-second pause between attempts).

Sepero (Author) commented Oct 31, 2012

I tested using a timeout in FileDownloader.py on line 589:
data = urllib2.urlopen(request, timeout=10)

Then while downloading a video, I temporarily disconnected my internet to test it, which gave me this:
$ ./main.py -R0 http://www.youtube.com/watch?v=mqT82oGeax0
[youtube] Setting language
[youtube] mqT82oGeax0: Downloading video webpage
[youtube] mqT82oGeax0: Downloading video info webpage
[youtube] mqT82oGeax0: Extracting video information
[download] Resuming download at byte 2349649
[download] Destination: mqT82oGeax0.mp4
[download] 0.3% of 991.83M at 207.13k/s ETA 81:31
ERROR: unable to download video data: timed out

With the way things currently are, it looks like it would be a little messy to get the program to use retries in the downloading section of the code. Doing it properly may require dividing the code into smaller functions: one function for initializing the connection, another for downloading the data.

Those two functions could then be wrapped in a while loop that does the overall retry counting, as in the sketch below.

FiloSottile (Collaborator) commented Oct 31, 2012

I'm not really into the downloading routines, but I'm +1 on this, and I'm OK with it being assigned to me (I'll figure it out) if @phihag doesn't have time.

phihag closed this Oct 31, 2012
phihag reopened this Oct 31, 2012
phihag (Contributor) commented Oct 31, 2012

Oops, I see this issue is potentially unrelated to #507, isn't it?

Sepero (Author) commented Oct 31, 2012

Yes, this is a separate issue from #507 and #505.

weedy commented Mar 16, 2016

I would really like you guys to set urllib2.urlopen(request, timeout=XX) to something shorter than the current default, which seems to be tens of minutes.

jaimeMF (Collaborator) commented Mar 19, 2016

@weedy Have you tried the --socket-timeout option? Open a new issue if it doesn't work.

dstftw closed this in a3c3a1e Aug 26, 2017