convert urls to the format that the option --download-archive archiveFile.txt converts them to #32730
The format is `extractor id` per line, with the extractor name lowercased (e.g. `youtube dQw4w9WgXcQ`). If you still have the info-json files for the archived items you can use a jq command to extract and format these values, or a Python (etc) script.

In general the only way to generate the archive entry is to process the URL with yt-dl, but that won't help with items that are no longer available (though the effect for such items is as if they were in the archive anyway). Subject to that, I think that the only simple way to regenerate the archive is to re-download the items to a junk location.

If you can reliably work out the archive index values for some site, then it could be easy to make a script to write a fake archive file. Related: #13687.
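As a sketch of the info-json route: assuming the saved files are named `*.info.json` and contain the `extractor_key` and `id` fields that youtube-dl writes (the function name and file layout here are hypothetical), the archive could be rebuilt like this:

```python
import glob
import json
import os


def regenerate_archive(info_dir, archive_path):
    """Rebuild a --download-archive file from saved .info.json files.

    Each archive line has the form '<extractor> <id>', with the
    extractor name lowercased, matching what youtube-dl records.
    """
    with open(archive_path, "w") as archive:
        for path in sorted(glob.glob(os.path.join(info_dir, "*.info.json"))):
            with open(path) as f:
                info = json.load(f)
            # Fall back to 'extractor' if 'extractor_key' is absent
            extractor = info.get("extractor_key") or info["extractor"]
            archive.write("%s %s\n" % (extractor.lower(), info["id"]))
```

The jq equivalent would be something along the lines of `jq -r '(.extractor_key | ascii_downcase) + " " + .id' *.info.json > archive.txt` (untested, and `ascii_downcase` needs jq 1.5+).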
Many thanks @dirkf, I was trying to avoid redownloading all the files. I have a bash script which uses GNU tools. You might think "that's what a playlist is for", but creating playlists is time-consuming: you have to select each video and then add it to the playlist, one by one. What I do is:
Would you accept a PR for that?
Isn't this just:

```shell
<whatever xargs --max-procs=1 --max-args=1 --delimiter=' ' youtube-dl args...
```

If the URL list contains items whose generated filename happens to be the same, those downloads could interfere with each other. Ideally two yt-dl instances running at the same time should have different current directories, and the output templates should be relative to those directories.
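The "different current directories" idea can be sketched in Python (the helper names `work_dir_for` and `download_in_own_dir` are hypothetical, and the second function assumes `youtube-dl` is on `PATH`):

```python
import hashlib
import os
import subprocess


def work_dir_for(url, base_dir):
    """Map a URL to a stable per-URL working directory path.

    The subdirectory name is a truncated hash of the URL, so the same
    URL always maps to the same directory across runs.
    """
    digest = hashlib.sha256(url.encode("utf-8")).hexdigest()[:16]
    return os.path.join(base_dir, digest)


def download_in_own_dir(url, base_dir):
    """Run one youtube-dl instance inside its own subdirectory.

    Because cwd differs per URL, concurrent instances cannot clobber
    each other's partial files even if two URLs would otherwise
    produce the same output filename.
    """
    subdir = work_dir_for(url, base_dir)
    os.makedirs(subdir, exist_ok=True)
    return subprocess.run(["youtube-dl", url], cwd=subdir)
```

Relative output templates then stay confined to each instance's own directory.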
Having written this, I changed my logic. My script has no issues with downloads of the same URL.
Say I do. Also, you'd have to create a dir and cd to it, a manual step; this script is just a command run. Also, why do you extract things like view count etc.? I wanted to add some sites but can't be bothered with that kind of stuff.
solution given
Checklist
Question
Is there a way I can convert URLs I have into the format that the option `--download-archive archiveFile.txt` converts them to? I deleted my archive file by accident, but still have the URLs; I'd like to convert them back into the format they'd be in `archiveFile.txt`, so I don't duplicate downloads.
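For YouTube specifically, the video id is usually visible in the URL itself, so an archive line of the form `youtube <id>` can be rebuilt without re-downloading. A minimal sketch (the function name `youtube_archive_line` is hypothetical, and only `watch?v=...` and `youtu.be/...` forms are handled; other sites or URL forms would still need yt-dl):

```python
from urllib.parse import urlparse, parse_qs


def youtube_archive_line(url):
    """Turn a YouTube watch/short URL into a --download-archive line.

    Only works where the video id appears literally in the URL;
    raises ValueError otherwise.
    """
    parsed = urlparse(url)
    if parsed.hostname == "youtu.be":
        video_id = parsed.path.lstrip("/")
    else:
        # e.g. https://www.youtube.com/watch?v=<id>
        video_id = parse_qs(parsed.query).get("v", [None])[0]
    if not video_id:
        raise ValueError("no video id found in %s" % url)
    return "youtube %s" % video_id
```

Writing one such line per URL into `archiveFile.txt` would recreate the entries youtube-dl checks against before downloading.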