youtube-dl autorestart on "content too short" exception #809
Comments
|
I got this error sometimes from YouTube too.
|
Actually, it doesn't matter what speed you have.
|
If this is true, a "retry" parameter would be good: when we get a "content too short" error, the program retries (resumes) the download.
|
Still getting this error in the latest version :((
|
Does the retry counter reset after every "partial success", or is it an absolute number? By this I mean: even a partial ("content too short") download should not decrement the --retries counter if some progress is made. Alternatively, there could be a separate retry counter for "content too short" errors.
|
Does --retries work with "content too short" errors? With -R 10 it did not completely download an FLV last night.
No, that's the point of this issue, to make it work with these errors. |
|
As a workaround, I run youtube-dl with:
This re-runs youtube-dl until it succeeds in downloading all videos.
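A minimal sketch of that kind of wrapper, assuming a shell loop around youtube-dl (the playlist URL and options here are placeholders, not the exact command from the comment above):

```bash
#!/bin/bash
# Keep re-running youtube-dl until it exits with status 0, i.e. until
# every video in the list has downloaded successfully.
# PLAYLIST_URL is a placeholder for whatever is being fetched.
PLAYLIST_URL="https://www.youtube.com/playlist?list=..."

until youtube-dl --continue "$PLAYLIST_URL"; do
    echo "youtube-dl failed, retrying..." >&2
    sleep 5
done
```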
|
That gives me:
|
Isn't it possible for youtube-dl to recognize the error and quit with a specific error code, so that wrappers know what to do? It currently quits with error code 1 for "Content too short", when it cannot encode the final movie for some reason, and, for example, when interrupted with Ctrl-C. I haven't tried to simulate other errors, so I do not know whether there are specially designated exit codes for them. It would be good to have different exit codes for different errors; it would make the task of writing wrappers a lot easier. Thanks.
Requested in #882. |
|
Machine-readable errors (#2913) are also somewhat related.
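Until distinct exit codes exist, a wrapper has to inspect the error text itself. A rough sketch under that assumption (the URL argument and the exact message matched are placeholders):

```bash
#!/bin/bash
# Retry only on "content too short" and give up on anything else, since
# youtube-dl currently exits with status 1 for many unrelated failures.
URL="$1"   # video or playlist address passed to this script

while true; do
    youtube-dl --continue "$URL" 2> /tmp/ytdl-stderr.log && break
    cat /tmp/ytdl-stderr.log >&2                       # show what went wrong
    grep -qi "content too short" /tmp/ytdl-stderr.log || exit 1
    echo "Content too short, resuming the download..." >&2
done
```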
|
This is especially troubling for playlists. It happens nearly every time for me in the middle of a 200-entry (the maximum count) playlist. My while true; do ...; done loop is slowly getting there, but each failure means I hit YouTube again for every preceding video.
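One way to avoid re-hitting YouTube for every earlier video on each pass is youtube-dl's --download-archive option, which records finished video IDs in a file and skips them on later runs. A sketch, assuming a placeholder playlist URL and that youtube-dl still exits non-zero when some entries fail:

```bash
#!/bin/bash
# Loop until the whole playlist is done. Finished video IDs are recorded
# in archive.txt, so each retry skips them and only touches the videos
# that are still missing or partially downloaded.
PLAYLIST_URL="https://www.youtube.com/playlist?list=..."

until youtube-dl --continue --ignore-errors \
      --download-archive archive.txt "$PLAYLIST_URL"; do
    echo "Some videos failed, retrying the remainder..." >&2
    sleep 10
done
```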
|
youtube-dl uses .part files for incomplete files. Can you use them to resume?
|
@RonJohn2 I'm trying out -i to skip errors, and then I'll have it loop and try to re-download all of the errored, incomplete videos over time. Hopefully this will lessen the ridiculous burden.
|
@LeviSchuck Do you have any syntax available that you created for this?
|
@anon54 I believe it was replicated by using youtube-dl on a big playlist like https://www.youtube.com/playlist?list=PL3BDB159758919C4D
|
@koenvanderdrift I think in that script |
|
The bug(?) still exists; you can reproduce it with the newest version (2017.01.08) all day long on Dailymotion. I made a batch file to get these two videos; Dailymotion throttles to about 70 KiB/s. They work well as an example, as they are fairly long (17 min) and will most likely fail.
Now and then the download stalls and kicks back something like this (note that the ERROR does not have a carriage return in front of it; this is how it appears on screen).
My workaround is a bit convoluted but does eventually get the result I want. I basically just re-run the batch file over and over until all the episodes are complete, now and then removing completed entries from the batch file.
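A rough shell equivalent of re-running that batch file by hand, using youtube-dl's -a/--batch-file option; the list file name here is hypothetical:

```bash
#!/bin/bash
# dailymotion-list.txt (hypothetical name) holds one video URL per line.
# --continue resumes the existing .part files instead of starting over,
# so each pass only has to finish what the previous one left incomplete.
until youtube-dl --continue --ignore-errors -a dailymotion-list.txt; do
    echo "Some downloads are still incomplete, re-running the list..." >&2
    sleep 30
done
```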
|
|
Auto-retries might be nice, but couldn't we just run the command with --continue, then hit Up and Enter to re-run it, and it will restart where it snapped off?
|
Yes. This issue is not a bug report that manual retrying is broken; it's a feature request for auto-retry and a discussion of workarounds for when manual retrying is basically unworkable. We've had unreliable networks or overloaded servers that cause hundreds of disconnects in a single download. That's a long, tedious process to babysit manually.
|
Seems like a decent approach would be building in support for This should deal with
One edge case I would add
|
|
For example, on the Day 5 video (talking about the RISC-V hartid) of this list: https://www.youtube.com/watch?v=YDjYsYDBtXs&list=PLExHkW81k6eboVT3nkOvmSF6LLWFWwWdv I get this error pretty often; wouldn't a default retry be more desirable? On a slightly related topic, should an error like this stop the download of the whole list? Wouldn't it be better to continue with the others in the list? That probably depends on the type of error: if there is no space on the HDD there is no point, but if it's just a server error for that one video, and not all of YouTube being down, then continuing would be better.
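For the "continue with the rest of the list" part, -i/--ignore-errors should already cover this: youtube-dl logs the per-video error and moves on rather than aborting the whole playlist. For example:

```bash
# Log the per-video error and carry on with the remaining playlist
# entries; the failed ones can be retried on a later run.
youtube-dl --ignore-errors --continue \
    "https://www.youtube.com/watch?v=YDjYsYDBtXs&list=PLExHkW81k6eboVT3nkOvmSF6LLWFWwWdv"
```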
|
I kind of found that this happens when I'm opening many connections to YouTube or have batched a bunch of downloads; after getting a lot of things it starts to fail. If I run a batch, get some failures, and try again a couple of hours later, I don't get the error any more. I wonder if it is a rate-limiting "feature" on YouTube's side to slow down automated requests.
|
This is only after 5. Probably the infrastructure is just not perfect. The Google folks were saying, at an open day I was visiting, that they are not aiming for 100% reliability because the cost would be too high, and this could be the case here. In any case, some retrying by default (maybe wait 30 seconds and do 2-3 retries) and continuing with the list in case of a server error would help (if it's a local error, e.g. no space left, then there is no point in continuing). If a list has one faulty video and you don't own the list, but you are still interested in getting as much of it as possible, you don't want to stop in the middle of the list. So I think continuing by default should be preferred.
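The suggested policy (a 30-second wait, two or three retries, keep going through the list) can be approximated externally until something like it is built in. A sketch with a placeholder URL:

```bash
#!/bin/bash
# Up to 3 attempts, 30 seconds apart. --continue resumes the .part file
# on each attempt and --ignore-errors keeps going through the rest of
# the list instead of stopping at the first bad video.
URL="$1"   # placeholder: video or playlist address

for attempt in 1 2 3; do
    youtube-dl --continue --ignore-errors "$URL" && break
    echo "Attempt $attempt failed, waiting 30 seconds before retrying..." >&2
    sleep 30
done
```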
Can I somehow make the downloader just restart when errors occur?
Or can you add this function to the program?
Because, you know, seeing at every single percent of the download a "content too short" or "did not receive any data of block" error or something like that, and then manually restarting the program hundreds of times for a single file, is boooooring, really boring.
Thank you for your support and for making this program. :)