Question
I'm writing a script that passes metadata and file locations for videos, thumbnails, subtitles, etc. downloaded by youtube-dl into a database. The script is run via the `--exec` option in youtube-dl and is passed the name of the downloaded video file as an argument. The video name is then used to find other files related to the video (for example, by removing the video file extension and replacing it with `.info.json` to find the file created by `--write-info-json`).

However, I'm running into trouble when it comes to indexing thumbnails and subtitles. For example, thumbnails across many different extractors don't have predictable file extensions or names (when there are multiple thumbnails), making it difficult to identify which files are which type and what order they go in.
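For context, this is a minimal sketch of how the script derives the metadata path from the filename youtube-dl hands it. It assumes an invocation like `youtube-dl --write-info-json --exec 'python3 index_video.py {}' URL` (where `{}` is replaced with the downloaded file's path); the script name and the database step are placeholders.

```python
#!/usr/bin/env python3
"""Sketch of the indexing script described above (index_video.py is a
hypothetical name). The database insert is omitted; this only shows how the
.info.json path is derived from the video path passed in by --exec."""
import json
import sys
from pathlib import Path


def main() -> None:
    video_path = Path(sys.argv[1])

    # Swap the video extension for .info.json, matching what
    # --write-info-json writes alongside the video file.
    info_path = video_path.with_suffix(".info.json")

    metadata = {}
    if info_path.exists():
        with open(info_path, encoding="utf-8") as f:
            metadata = json.load(f)

    # A real database insert would go here; printing stands in for it.
    print("video:", video_path)
    print("info:", info_path if info_path.exists() else "not found")
    print("title:", metadata.get("title"))


if __name__ == "__main__":
    main()
```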
In the case of my script, I believe it may be possible to guess the thumbnail name with a high degree of accuracy using information from `.info.json` (check `"thumbnails"` for the thumbnail extension in the `"url"`, check the `"url"` response headers for the extension, add the `"id"` to the guessed file name if there is more than one thumbnail, etc.). However, this approach could sometimes be wrong, and I hope there is a better way to do it.

I'm curious to see if there is another way of getting all of the files related to the video downloaded by youtube-dl and telling what type of files they are (thumbnails, subtitles, metadata, etc.). I did not see a way to do this in the README or FAQ.
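Here is a rough sketch of that guessing heuristic, assuming the parsed `.info.json` dict is already available. The keys `"thumbnails"`, `"url"`, and `"id"` are as they appear in youtube-dl's info dict, but the naming scheme itself (base name plus `_id` plus extension) is an assumption and is exactly the part that can go wrong across extractors.

```python
import os
from urllib.parse import urlparse


def guess_thumbnail_paths(video_path: str, info: dict) -> list:
    """Guess thumbnail filenames from a parsed .info.json dict.

    The naming scheme used here is an assumption and may not match what
    every extractor / youtube-dl version actually writes to disk.
    """
    base, _ = os.path.splitext(video_path)
    thumbnails = info.get("thumbnails", [])
    guesses = []
    for thumb in thumbnails:
        # Extension guessed from the URL path; a HEAD request on the URL's
        # Content-Type header could serve as a fallback (not shown here).
        url_path = urlparse(thumb.get("url", "")).path
        ext = os.path.splitext(url_path)[1] or ".jpg"
        if len(thumbnails) > 1 and "id" in thumb:
            guesses.append(f"{base}_{thumb['id']}{ext}")
        else:
            guesses.append(f"{base}{ext}")
    return guesses
```

The guesses could be filtered with `os.path.exists()` before indexing, but as noted above the scheme is still fragile, which is why I'm hoping there is a supported way to do this.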
If there is no way of doing this, a possible solution could be adding a parameter like `--write-files-list`, which would write a list of all files downloaded by youtube-dl for the video to `videoname.files.json`.
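To illustrate the proposal, here is one possible shape for that file. This is purely hypothetical: neither `--write-files-list` nor this format exists in youtube-dl today, and the grouping by file type is just one way it could be laid out.

```python
import json

# Hypothetical contents of "videoname.files.json" as proposed above; neither
# --write-files-list nor this exact format exists in youtube-dl today.
example_files_list = {
    "video": ["videoname.mp4"],
    "metadata": ["videoname.info.json"],
    "thumbnails": ["videoname_0.jpg", "videoname_1.webp"],
    "subtitles": ["videoname.en.vtt", "videoname.de.vtt"],
}

# With such a file, an indexing script could iterate over every related file
# by type instead of guessing names.
print(json.dumps(example_files_list, indent=2))
```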