Feature request: Download all videos referenced in online page #7341
Comments
It's already implemented via the generic extractor.
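For context, a minimal sketch of what "via the generic extractor" looks like from the Python API, assuming the force_generic_extractor option of the YoutubeDL class; the page URL is a placeholder, not one from this issue:

```python
# Minimal sketch: run an arbitrary page through youtube-dl's generic
# extractor. The page URL is a placeholder.
import youtube_dl

page_url = 'https://example.com/page-with-a-video'  # hypothetical

ydl_opts = {
    # Skip the site-specific extractors and go straight to the generic one.
    'force_generic_extractor': True,
}

with youtube_dl.YoutubeDL(ydl_opts) as ydl:
    # The generic extractor looks for a video embedded in the page itself;
    # it does not follow ordinary links to other video pages.
    ydl.download([page_url])
```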
Doesn't seem to work for me; there are plenty of links to YouTube videos on that page.
Those are video links but not actual videos.
Yes, I KNOW they are video links! That's what I asked for in the feature request! That's why this is a feature request and not a bug report. Can you please reopen it?
In my view, youtube-dl is a tool for collecting videos and/or audio that play directly on web pages. I'm not sure whether other developers have a different expectation of youtube-dl; if so, just reopen it.
Please, leave the issue closed. Besides, I don't understand what you are trying to solve here, because you can do the following: it should not be hard to get the links yourself, and when you have them, just paste them into a text file and hand that text file over to youtube-dl with the -a option.
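For what it's worth, a rough sketch of that workaround in Python, assuming a hypothetical page URL and output filename; it just collects the href of every anchor on the page into a text file that can be fed to youtube-dl with -a:

```python
# Sketch of the suggested workaround: dump the links from a page into a
# text file for youtube-dl's -a/--batch-file option. URL and filename
# are placeholders.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkCollector(HTMLParser):
    """Collects the href attribute of every <a> tag on a page."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            href = dict(attrs).get('href')
            if href:
                # Resolve relative links against the page URL.
                self.links.append(urljoin(self.base_url, href))


page_url = 'https://example.com/page-full-of-videos'  # hypothetical
collector = LinkCollector(page_url)
collector.feed(urlopen(page_url).read().decode('utf-8', 'replace'))

with open('links.txt', 'w') as f:
    f.write('\n'.join(collector.links) + '\n')
```

The resulting links.txt can then be passed to youtube-dl -a links.txt, which is the hand-over step described above.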
It's just a timesaver. I have a page full of videos I want to watch; they play badly in my browser, so I play them with an external player. There are lots of ways to do this, sure, including your suggestions. It was just a way of quickly grabbing a whole page full of videos, that's all.
Yes, nothing wrong with the idea, I would like to use it myself. But I think this is not so trivial to implement reliably. Parsing a single HTML page for exactly one target (a link, or in yt-dl's case, a video) is one thing; parsing HTML, following the links there to other pages (more HTML), i.e. doing this recursively, is an entirely different beast. That is, for example, why recursive download options in other tools are so hard to get right.
The idea is only to look for URLs on the page for sites/url patterns that youtube-dl already supports, i.e. for each URL found on the page, check whether it matches a known extractor and, if so, download it.
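A sketch of that matching step, using youtube-dl's own extractor list; the URLs below are only examples, and the catch-all generic extractor is skipped because it claims almost any URL:

```python
# Sketch: test whether a URL found on a page matches one of youtube-dl's
# site-specific extractors.
from youtube_dl.extractor import gen_extractors

# Instantiate every extractor once, leaving out the generic one.
EXTRACTORS = [ie for ie in gen_extractors() if ie.IE_NAME != 'generic']


def is_supported(url):
    """True if some site-specific extractor claims this URL."""
    return any(ie.suitable(url) for ie in EXTRACTORS)


# Example URLs (placeholders):
print(is_supported('https://www.youtube.com/watch?v=dQw4w9WgXcQ'))  # True
print(is_supported('https://example.com/about'))  # most likely False
```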
The problem with extracting all URLs from a page is that with the most naive approach (extracting all the anchor elements) you would also pick up plenty of links that have nothing to do with the videos you actually want. As @yan12125 has pointed out, we support videos that are played in the webpage, so if there's some embedded video that youtube-dl doesn't extract, feel free to open a new issue.
That's why I said to only extract videos if the URL matches a known pattern. |
Even if we only matched URLs supported by youtube-dl, on any post at https://www.reddit.com/r/dota2, for example, there are links in the sidebar to Twitch livestreams (which are supported by youtube-dl), so they would also be downloaded, which is not the expected behaviour. That's why I'd suggest you use a different tool that is able to ignore the sidebars and only look at the main content, which is probably not trivial to implement in a generic way and has to be tuned for each website.
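To make the "tuned for each website" point concrete, here is a sketch that only collects links from a page's main content container; the URL and the CSS selector are made up and would have to be chosen per site, and it uses BeautifulSoup, a separate third-party library, not anything in youtube-dl:

```python
# Sketch of per-site tuning: only look at links inside the main content
# area and ignore sidebars. URL and selector are placeholders.
from urllib.request import urlopen

from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4

page_url = 'https://example.com/forum/thread/123'  # hypothetical
content_selector = 'div.main-content'              # hypothetical, per site

html = urlopen(page_url).read().decode('utf-8', 'replace')
soup = BeautifulSoup(html, 'html.parser')

main = soup.select_one(content_selector)
links = [a['href'] for a in main.find_all('a', href=True)] if main else []
print(links)
```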
You make a good point about the sidebar, but can I request the feature anyway (reopen it)? After all, it's on me to use it right, and there are times it will be the right tool for the job. There's no reason to avoid the feature just because it sometimes isn't the right tool.
Extend the -a option to allow passing in a URL, and youtube-dl will fetch that page, parse it for links to sites it knows about and any embedded videos (again from sites it knows about), and download all the referenced videos.
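For illustration only, a rough end-to-end sketch of that behaviour built on youtube-dl's Python API rather than on the -a option itself; the page URL and the helper names are made up, and the URL filtering is the same known-extractor check as above:

```python
# Rough sketch of the requested feature: fetch a page, collect anchor and
# iframe URLs, keep those matching a known (non-generic) extractor, and
# hand them to youtube-dl. Names and the page URL are placeholders.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

import youtube_dl
from youtube_dl.extractor import gen_extractors

EXTRACTORS = [ie for ie in gen_extractors() if ie.IE_NAME != 'generic']


class CandidateCollector(HTMLParser):
    """Collects href/src URLs from <a> and <iframe> tags."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.urls = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        target = attrs.get('href') if tag == 'a' else (
            attrs.get('src') if tag == 'iframe' else None)
        if target:
            self.urls.append(urljoin(self.base_url, target))


def download_videos_referenced_by(page_url):
    page = urlopen(page_url).read().decode('utf-8', 'replace')
    collector = CandidateCollector(page_url)
    collector.feed(page)

    supported = [u for u in collector.urls
                 if any(ie.suitable(u) for ie in EXTRACTORS)]

    # ignoreerrors keeps going when one of the collected URLs fails.
    with youtube_dl.YoutubeDL({'ignoreerrors': True}) as ydl:
        ydl.download(supported)


download_videos_referenced_by('https://example.com/page-full-of-videos')
```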