[discovery] "GO" Channels have been discontinued #14954

Closed
StevenDTX opened this issue Dec 11, 2017 · 104 comments

@StevenDTX

Please follow the guide below

  • You will be asked some questions and requested to provide some information; please read them carefully and answer honestly
  • Put an x into all the boxes [ ] relevant to your issue (like this: [x])
  • Use the Preview tab to see what your issue will actually look like

Make sure you are using the latest version: run youtube-dl --version and ensure your version is 2017.12.10. If it's not, read this FAQ entry and update. Issues with an outdated version will be rejected.

  • I've verified and I assure that I'm running youtube-dl 2017.12.10

Before submitting an issue make sure you have:

  • At least skimmed through the README, most notably the FAQ and BUGS sections
  • Searched the bugtracker for similar issues including closed ones

What is the purpose of your issue?

  • Bug report (encountered problems with youtube-dl)
  • Site support request (request for adding support for a new site)
  • Feature request (request for a new functionality)
  • Question
  • Other

The following sections ask for details specific to each purpose; you can erase any section (the contents between triple ---) that is not applicable to your issue


If the purpose of this issue is a bug report, site support request, or you are not completely sure, provide the full verbose output as follows:

Add the -v flag to the command line you run youtube-dl with (youtube-dl -v <your command line>), copy the whole output and insert it here. It should look similar to the one below (replace it with your log inserted between triple ```):

E:\>youtube-dl https://www.discovery.com/tv-shows/gold-rush/full-episodes/gold-bars-and-hail-marys --verbose
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['https://www.discovery.com/tv-shows/gold-rush/full-episodes/gold-bars-and-hail-marys', '--verbose']
[debug] Encodings: locale cp1252, fs mbcs, out cp437, pref cp1252
[debug] youtube-dl version 2017.12.10
[debug] Python version 3.4.4 - Windows-10-10.0.14393
[debug] exe versions: ffmpeg N-89395-g71421f382f, ffprobe N-72383-g7206b94, rtmpdump 2.4
[debug] Proxy map: {}
[Discovery] gold-bars-and-hail-marys: Downloading JSON metadata
ERROR: gold-bars-and-hail-marys: Failed to parse JSON  (caused by ValueError('Expecting value: line 1 column 1 (char 0)',)); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type  youtube-dl -U  to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
Traceback (most recent call last):
  File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp9arpqqmf\build\youtube_dl\extractor\common.py", line 686, in _parse_json
  File "C:\Python\Python34\lib\json\__init__.py", line 318, in loads
  File "C:\Python\Python34\lib\json\decoder.py", line 343, in decode
  File "C:\Python\Python34\lib\json\decoder.py", line 361, in raw_decode
ValueError: Expecting value: line 1 column 1 (char 0)
Traceback (most recent call last):
  File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp9arpqqmf\build\youtube_dl\extractor\common.py", line 686, in _parse_json
  File "C:\Python\Python34\lib\json\__init__.py", line 318, in loads
  File "C:\Python\Python34\lib\json\decoder.py", line 343, in decode
  File "C:\Python\Python34\lib\json\decoder.py", line 361, in raw_decode
ValueError: Expecting value: line 1 column 1 (char 0)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp9arpqqmf\build\youtube_dl\YoutubeDL.py", line 784, in extract_info
  File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp9arpqqmf\build\youtube_dl\extractor\common.py", line 437, in extract
  File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp9arpqqmf\build\youtube_dl\extractor\discovery.py", line 67, in _real_extract
  File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp9arpqqmf\build\youtube_dl\extractor\common.py", line 680, in _download_json
  File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp9arpqqmf\build\youtube_dl\extractor\common.py", line 690, in _parse_json
youtube_dl.utils.ExtractorError: gold-bars-and-hail-marys: Failed to parse JSON  (caused by ValueError('Expecting value: line 1 column 1 (char 0)',)); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type  youtube-dl -U  to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
...
<end of log>

If the purpose of this issue is a site support request, please provide all kinds of example URLs for which support should be included (replace the following example URLs with yours):


Description of your issue, suggested solution and other information

All of the Discovery "GO" channels (discoverygo.com, tlcgo.com, animalplanetgo.com, etc) are being discontinued. They have moved all of the Full Episodes to the non-GO channels (discovery.com, tlc.com, animalplanet.com, etc).

The current [discovery] extractor does not work on these sites.

They also have lowered the quality of the videos on the GO channels to 720p. It appears that the 1080p videos are available on the non-GO channels.

Thanks!

@StevenDTX
Author

Thanks @remitamine! Free videos are working great.

The *GO channels all redirect to the "regular" sites now.

@StevenDTX
Author

Would it be possible to use a --ap-mso login instead of cookies for the Discovery sites? I was constantly having issues keeping my cookies up to date.

@StevenDTX changed the title from [discovery] "GO" Channels are being discontinued to [discovery] "GO" Channels have been discontinued on Dec 28, 2017
@StevenDTX
Author

Can I provide someone with a cookies file to work on getting the non-free episodes?

@cookieguru

@StevenDTX I have found that the name of the cookie varies, even when logged in. So far I have only seen eosAf and eosAn. This is a doubly URL-encoded JSON string. The access token is stored in a JSON key named either access_token or a. Note that multiple permutations can be seen in the same session! I have a working (though not the most efficient) solution in a browser userscript. It iterates over all the browser's cookies and looks for one that starts with a %, as it will be the only one containing the aforementioned JSON string. From there, I decode that string and iterate over all its keys to find the longest value, as that is always the string that needs to be sent in the Authorization header.

// Build a { name: value } map of all cookies on the page
let cookies = document.cookie.split(';').map(function(x) {
	return x.trim().split(/(=)/);
}).reduce(function(a, b) {
	a[b[0]] = a[b[0]] ? a[b[0]] + ', ' + b.slice(2).join('') :
	b.slice(2).join('');
	return a;
}, {});
let token;
for(let i in cookies) {
	// The auth cookie is the only one whose value starts with '%'
	// (a doubly URL-encoded JSON string)
	if(cookies[i].substr(0, 1) == '%') {
		let temp = JSON.parse(decodeURIComponent(decodeURIComponent(cookies[i])));
		let longest = 0;
		// The longest value in the decoded JSON is the bearer token
		for(let j in temp) {
			if(temp[j].length > longest) {
				token = temp[j];
				longest = temp[j].length;
			}
		}
		break;
	}
}

From there I fetch the m3u8 link:

fetch('https://api.discovery.com/v1/streaming/video/' + video.id, {
	headers: {
		'authorization': 'Bearer ' + token,
	},
}).then(function(result) {
	return result.json();
}).then(function(json) {
	//json.streamUrl is the episode's master m3u8
});

This method works fine even on the free episodes.

I would say the current method (grabbing an anonymous token on every download) from cb0c231 is equally suited to the free videos.
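
For anyone who wants to experiment with this in Python before it lands in the extractor, the same token/stream lookup looks roughly like this (only a sketch: the requests usage and the cookies dict are my assumptions; the endpoint and the streamUrl key are the same ones used above):

# Rough Python equivalent of the userscript logic above -- not youtube-dl code.
# Assumes you already have the site's cookies as a name -> value dict
# (e.g. exported from the browser) and the video id from the page data.
import json
import requests
from urllib.parse import unquote

def find_token(cookies):
    """Return the longest value from the doubly URL-encoded auth cookie."""
    for value in cookies.values():
        if not value.startswith('%'):
            continue
        data = json.loads(unquote(unquote(value)))
        # The longest string in the decoded JSON is the bearer token
        return max((v for v in data.values() if isinstance(v, str)), key=len)
    return None

def get_stream_url(video_id, cookies):
    token = find_token(cookies)
    resp = requests.get(
        'https://api.discovery.com/v1/streaming/video/%s' % video_id,
        headers={'Authorization': 'Bearer ' + token})
    resp.raise_for_status()
    return resp.json()['streamUrl']  # master m3u8 listing all formats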

@StevenDTX
Author

@cookieguru

I apologize, as a lot of what you said is a bit over my head. Is the first section of code the userscript you run? In, I assume, Tampermonkey or something?

@cookieguru

@StevenDTX Exactly

@StevenDTX
Author

Thanks a lot @cookieguru !!

I was able to get the script installed and in the Firefox console I get a link that points to https://content-ausc4.uplynk.com/444fe784b93347829dce878e052b952d/i.m3u8. If I expand that, I get the full link with authorization and stuff. I am actually able to download the video, in 1080p!! The free videos are only getting downloaded in 720p.

It's not an automated process, but I only need a few shows a week.

@cookieguru

@StevenDTX We have the same process ;)

I'm hoping someone more fluent in Python can help integrate that methodology into the existing extractor. AFAIK the extractor as it is will work fine; it just doesn't know how to get to the playlist file containing all the formats.

@Allmight3

@cookieguru I understand your first code snippet is a userscript. I put that into a script with @grant none. However, I don't understand what to do with your second code snippet, do you mind expanding?

Pasting that second snippet into the script along with the first snippet causes an execution error. I noticed @StevenDTX mention getting a link in the console, but I see no console output code and pasting the second snippet of code into the console directly yields a similar execution error about video not being defined.

I see there's some sort of working method here, but it's just a little beyond my understanding. Hoping you'll be willing to help. I miss being able to grab shows from Discovery! I used to use the cookies/DirecTV command but it stopped working a while ago.

@cookieguru

@Allmight3 The first script iterates over the browser's cookies and extracts the necessary authorization token that is needed to perform a request to Discovery's API to get the link to the video playlist (with multiple formats).

The second script is missing context and wasn't meant to be copy/pastable; rather, it's just an example of how to use the authorization token. Since there's obviously a desire for others to use this before the changes can be worked into youtube-dl, I'll post the full userscript here:

// ==UserScript==
// @name         Science Channel Go/Discovery Go
// @namespace    https://github.com/violentmonkey/violentmonkey
// @version      1.0
// @author       https://github.com/cookieguru
// @match        https://www.discovery.com/*
// @match        https://www.sciencechannel.com/*
// @grant        none
// ==/UserScript==

(function() {
	'use strict';

	let video;
	__reactTransmitPacket.layout[window.location.pathname].contentBlocks.forEach((block) => {
		if(block.type === 'video') {
			video = block.content.items[0];
		}
	});

	let cookies = document.cookie.split(';').map(function(x) {
		return x.trim().split(/(=)/);
	}).reduce(function(a, b) {
		a[b[0]] = a[b[0]] ? a[b[0]] + ', ' + b.slice(2).join('') :
		b.slice(2).join('');
		return a;
	}, {});
	let token;
	for(let i in cookies) {
		if(cookies[i].substr(0, 1) == '%') {
			let temp = JSON.parse(decodeURIComponent(decodeURIComponent(cookies[i])));
			let longest = 0;
			for(let j in temp) {
				if(temp[j].length > longest) {
					token = temp[j];
					longest = temp[j].length;
				}
			}
			break;
		}
	}

	let style = document.createElement('style');
	style.innerHTML = '#react-tooltip-lite-instace-3, #react-tooltip-lite-instace-4, #react-tooltip-lite-instace-5 { display:none; }';
	document.head.appendChild(style);

	fetch('https://api.discovery.com/v1/streaming/video/' + video.id, {
		headers: {
			'authorization': 'Bearer ' + token,
		},
	}).then(function(result) {
		return result.json();
	}).then(function(json) {
		document.body.innerHTML = "'S" + ('0' + video.season.number).slice(-2) + 'E' + ('0' + video.episodeNumber).slice(-2) + ' ' + video.name.replace(/'/g, '') + "' => '" + json.streamUrl + "',";
	});
})();

Note that this just sends you to an m3u8 which contains links to the other m3u8s for the different formats. You'll have to visit the linked file in the browser and figure out which format you want to download. I paste this line into another (non-browser-based) script that does it in batches. The TL;DR of that script is to grab the m3u8 link for the resolution you want and pass it to ffmpeg:

ffmpeg -i "http://www.example.com/1920x1080.m3u8" -acodec copy -bsf:a aac_adtstoasc -vcodec copy "filename.mkv"
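
If you don't want to pick the rendition out of the master playlist by hand, something like the following could do it (an illustration only, not part of my script; it just assumes the variant URI follows each #EXT-X-STREAM-INF line, per the HLS spec):

# Sketch: pick the highest-resolution variant URL from a master m3u8.
import re
import urllib.request
from urllib.parse import urljoin

def best_variant(master_url):
    text = urllib.request.urlopen(master_url).read().decode('utf-8')
    lines = text.splitlines()
    best_pixels, best_url = 0, None
    for i, line in enumerate(lines):
        m = re.search(r'RESOLUTION=(\d+)x(\d+)', line)
        if m and i + 1 < len(lines):
            pixels = int(m.group(1)) * int(m.group(2))
            if pixels > best_pixels:
                # The variant URI is on the line after #EXT-X-STREAM-INF
                best_pixels = pixels
                best_url = urljoin(master_url, lines[i + 1].strip())
    return best_url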

@Allmight3

@cookieguru Thank you! That worked perfectly and was easy to follow. I have successfully downloaded my show in 1080p. My family will enjoy this. I appreciate your time and effort.

@Mr-Jake

Mr-Jake commented Jan 31, 2018

@cookieguru

Thanks so much for working on this. The userscript you posted works with Greasemonkey and I am able to get the video.

I compiled youtube-dl with your discovery.py commit. But when I try to get a video from the Discovery site I get an error, both for free videos and videos that require a login cookie.

C:\youtube-dl\youtube-dl.exe "https://www.discovery.com/tv-shows/mythbusters/full-episodes/heads-will-roll" --cookies C:\youtube-dl\cookies.txt -F -v

[Discovery] heads-will-roll: Downloading webpage
Traceback (most recent call last):
  File "__main__.py", line 19, in <module>
  File "youtube_dl\__init__.pyo", line 465, in main
  File "youtube_dl\__init__.pyo", line 455, in _real_main
  File "youtube_dl\YoutubeDL.pyo", line 1988, in download
  File "youtube_dl\YoutubeDL.pyo", line 784, in extract_info
  File "youtube_dl\extractor\common.pyo", line 438, in extract
  File "youtube_dl\extractor\discovery.pyo", line 64, in _real_extract
AttributeError: 'module' object has no attribute 'parse'

I compiled it multiple times to make sure I didn't make a mistake, but still no luck. I will wait for the commit to be merged; perhaps the precompiled youtube-dl will work for me.

Until then I will use the userscript. Thanks again.

@cookieguru

@Mr-Jake That indicates that something from urllib is missing. I developed against 3.6.4. Which version did you compile against?

@Mr-Jake

Mr-Jake commented Jan 31, 2018

@cookieguru
I compiled with 2.7.12.

The reason I use an older version is because I had a conflict getting py2exe working with 3.x. I didn't think it would be an issue since the youtube-dl documentation says 2.6, 2.7, or 3.2+ can be used.

@Nii-90
Contributor

Nii-90 commented Feb 1, 2018

Semi-related question: does youtube-dl generate the requisite json that gets output by the --write-info-json option, or is that json info transmitted as-is by the player interface?

I ask because, while the browser userscript @cookieguru posted works swimmingly to get the m3u8 link, it's obviously missing both the metadata (which can be reconstructed via the page dump, thankfully) and the link to the SCC and XML/TTML subtitles (which can't, unfortunately; those get served by a completely different URL). If the contents of the file output by --write-info-json are transmitted by the website, all the right data is there and the subtitles would still be grabbable with only minimal tweaking to the userscript, right?

@cookieguru

@Mr-Jake I just pushed a new commit that should work with 2.6+. Could you try it again? It seems to (still) work OK for free videos, but I'm seeing some HTTP 403 errors when I log in with ap-mso.

@Nii-90 The video metadata comes from the page itself, that is, the URL that you pass to youtube-dl to initiate the download. The subtitles come from the stream metadata; IIRC they will be near the top of the m3u8 file of your chosen format.

@Nii-90
Contributor

Nii-90 commented Feb 1, 2018

They aren't. Neither the 6 KB preplay playlist nor the large segment playlist for a particular resolution has the link to the VTT or XML/TTML file (my thinking it was SCC came from confusing Science Channel with Fox, since Fox also uses Uplynk and I have to use similar script manipulation to restore the chapter marks there too). grep didn't find it, I couldn't see it when checking visually in a text editor, and the fusionddmcdn domain that the subtitles come from does not appear in the m3u8. The m3u8 only has the Uplynk URLs the video data is served from.

Using --write-info-json on one of the free videos on Science Channel, and then parsing the result (the actual URLs redacted here for paranoia):

$ sed 's/", "/",\n"/g' "HTUW - S06E02.info.json" | grep vtt
"subtitles": {"en": [{"url": "[VTT URL]",
"ext": "vtt"}, {"url": "[XML/TTML URL]",

Running a grep for fusionddmcdn (or vtt or ttml) on either the preplay or segment/resolution-specific m3u8 yields nothing.
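
The same extraction in Python, for anyone who prefers it over sed/grep (the filename is just the example above):

# Pull the subtitle URLs out of the .info.json written by --write-info-json
import json

with open('HTUW - S06E02.info.json', encoding='utf-8') as f:
    info = json.load(f)

for lang, subs in info.get('subtitles', {}).items():
    for sub in subs:
        print(lang, sub.get('ext'), sub.get('url'))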

@Mr-Jake

Mr-Jake commented Feb 1, 2018

@cookieguru
Compiled without error with 2.7.
Works with free videos.

But when I include --cookies for a login video, I get:

[Discovery] heads-will-roll: Downloading webpage
ERROR: An extractor error has occurred. (caused by KeyError(u'access_token',));
please report this issue on https://yt-dl.org/bug . Make sure you are using the
latest version; type  youtube-dl -U  to update. Be sure to call youtube-dl with
the --verbose flag and include its complete output.
Traceback (most recent call last):
  File "youtube_dl\extractor\common.pyo", line 438, in extract
  File "youtube_dl\extractor\discovery.pyo", line 65, in _real_extract
KeyError: u'access_token'
Traceback (most recent call last):
  File "youtube_dl\YoutubeDL.pyo", line 784, in extract_info
  File "youtube_dl\extractor\common.pyo", line 451, in extract
ExtractorError: An extractor error has occurred. (caused by KeyError(u'access_to
ken',)); please report this issue on https://yt-dl.org/bug . Make sure you are u
sing the latest version; type  youtube-dl -U  to update. Be sure to call youtube
-dl with the --verbose flag and include its complete output.

EDIT: In your commit description I see you mentioned eosAf and eosAn. Not sure exactly what that is, but when I looked at my cookie file I have eosAd and eosAf.

@cookieguru

@Nii-90 According to discoverygo.py#L69 that's where they come from. I don't use subs so I can't speak to when that last worked. Maybe things have changed since the switchover to Uplynk and/or the switch to Oauth for getting the stream URLs.

If you paste the six lines starting with (and including) let video into your browser's console, and then run a line that's just video, you will get an object that you can examine for the links to the subs. That object encapsulates everything the webpage knows about the video. I've never known chapter markers to work on videos ripped from Discovery, even back in the Akamai days.


@Mr-Jake The point of the commit was to eliminate the need for --cookies. All the necessary information to get the stream URL is sent with the initial page. eosAf and eosAn are the cookies that contain the authentication token needed to get the stream URLs. I don't think I've ever seen both at the same time though, so I may have to revise my code. Whichever one is longer is going to be the cookie that contains the token. Unlike my userscript, the code I committed checks eosAn first, and if that cookie exists then it won't even bother to check eosAf. But if both are defined and the token is in eosAf, it's going to fail, and that's on me. I'll have to improve that.
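
Roughly, the improved check should look like this (a sketch only, not the committed code; the helper name is made up):

# Sketch: instead of preferring eosAn unconditionally, take whichever of
# eosAf/eosAn is present and longer, since the longer one carries the token.
import json
from urllib.parse import unquote

def pick_auth_token(cookies):
    """cookies: dict of cookie name -> raw (still URL-encoded) value."""
    candidates = [c for c in (cookies.get('eosAf'), cookies.get('eosAn')) if c]
    if not candidates:
        return None
    raw = max(candidates, key=len)            # the longer cookie holds the token
    data = json.loads(unquote(unquote(raw)))  # doubly URL-encoded JSON
    # The token is stored under either 'access_token' or 'a'
    return data.get('access_token') or data.get('a')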

Regardless though I can't get this to work on authenticated videos. I think what is happening is that it's not logging in before getting the token.


If anyone can point me in the direction of an extractor that won't even run -F without logging in, that will help. I'll make the changes next time I have some free time.

@Nii-90
Contributor

Nii-90 commented Feb 2, 2018

youtube-dl can get the subs from the free videos, so I think it's just that the authentication is getting in the way. Speaking of, shouldn't discovery.py (or discoverygo.py) be importing the adobepass module to streamline handling the auth stuff? I didn't think ap-mso/ap-username/ap-password parameters would work for a particular site without the extractor for that site using AdobePassIE.
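
For reference, extractors that support the ap-* switches generally follow this shape (just a sketch of the pattern; the 'dscp' requestor id, the metadata fields, and the _get_video_info helper are placeholders, not Discovery's real values):

# Sketch of the usual Adobe Pass wiring in an extractor -- not a working patch.
from .adobepass import AdobePassIE

class DiscoveryAdobePassSketchIE(AdobePassIE):
    _VALID_URL = r'https?://(?:www\.)?discovery\.com/tv-shows/.+/full-episodes/(?P<id>[^/?#&]+)'

    def _real_extract(self, url):
        display_id = self._match_id(url)
        video = self._get_video_info(display_id)  # hypothetical helper
        auth = None
        if video.get('authenticated'):
            resource = self._get_mvpd_resource(
                'dscp',                           # placeholder requestor id
                video['name'], video['id'], video.get('rating'))
            auth = self._extract_mvpd_auth(url, video['id'], 'dscp', resource)
        # 'auth' would then accompany the request for the streaming JSON,
        # e.g. https://api.discovery.com/v1/streaming/video/<id> shown earlier
        return self._extract_video_formats(video, auth)  # hypothetical helper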

I've never known chapter markers to work on videos ripped from Discovery; even back in the Akamai days.

The chapter marks for Uplynk-based sites don't actually exist in a form that youtube-dl is set up to parse, but they can be re-derived from scratch by parsing the m3u8. Every time #UPLYNK-SEGMENT or #EXT-X-DISCONTINUITY appears in the resolution-specific m3u8 file, that's a break in the video stream, usually for ad insertion, and those breaks fall on the same boundaries as the natural chapter segments. I whipped up a bash script that automates splitting the big m3u8 apart into child m3u8s (a rough sketch of the idea is at the end of this comment); I then download the individual segments in a for loop, and mkvtoolnix can generate chapters at the append boundaries (for speed/size purposes I only append the audio track back together in mkvtoolnix, then dump the chapter info from it with ffmpeg).

The regular metadata and the chapter info can then be merged into a single ffmetadata file and used when the individual segments get concatenated by ffmpeg (in two steps, as opposed to youtube-dl taking three steps to do the same things*).

*Youtube-DL currently:

  1. download and concatenate in one step
  2. fix the AAC stream
  3. add the metadata

vs.

  1. download the segments and fix the AAC streams in each segment at the same time
  2. concatenate and add metadata at the same time for the final output.
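
A rough Python sketch of the splitting idea (illustration only; my actual script is bash, and the tag names are the ones Uplynk emits):

# Derive chapter start times from a resolution-specific Uplynk m3u8: every
# #UPLYNK-SEGMENT / #EXT-X-DISCONTINUITY tag is treated as a chapter break,
# and #EXTINF durations are summed to get the running timestamp.
def chapters_from_m3u8(m3u8_text):
    chapters = []    # chapter start times in seconds
    position = 0.0
    for line in m3u8_text.splitlines():
        line = line.strip()
        if line.startswith('#UPLYNK-SEGMENT') or line == '#EXT-X-DISCONTINUITY':
            if not chapters or position > chapters[-1]:
                chapters.append(position)
        elif line.startswith('#EXTINF:'):
            # "#EXTINF:<duration>,<title>" -- accumulate the running time
            position += float(line[len('#EXTINF:'):].split(',')[0])
    return chapters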

@cookieguru

@Nii-90 This is just what I was looking for. I thought youtube-dl would automatically do the login stuff when the various ap switches were passed. If you have some additions to my PR to make this happen, I'm all ears; otherwise I'll look into it when I have some free time in the next few days.

@halolordkiller3

Has there been any update on this? I too am passing cookies.txt, but it just complains with "you should use --cookies". Thanks

@cookieguru

@halolordkiller3 Cookies won't work as they aren't used to grab videos any more. #15455 still needs the adobepass module integrated into it.

@ghost

ghost commented Mar 24, 2018

@Mr-Jake All tabs work for me on Investigation Discovery - Official Site. I tried it on both Google Chrome 65.0.3325.181 (Official Build) (64-bit) and Firefox 59.0.1 (64-bit).

@StevenDTX - Did you download from Investigation Discovery - Official Site?

@Mr-Jake

Mr-Jake commented Mar 25, 2018

@cookieguru
Thanks for the tip. From the console, I determined the tracker protection in Firefox was breaking the site. Once I disabled it, the navigation works.

Both Discovery and History appear to be using the same cable provider login authentication for Comcast. From the code changes I compiled, I got youtube-dl to authenticate and download videos from History. My plan is to debug the authentication of both sites to determine why Discovery fails and why History succeeds.

The pull request that I compiled, which fixed authentication with History, is #11080 by raleeper.

@cookieguru

@Mr-Jake If I'm not mistaken, Discovery fails because of this.

Not sure if you know about them already but the --write-pages and --print-traffic switches should prove useful.

@ghost

ghost commented Apr 1, 2018

I was just wondering how youtube-dl is coming along in supporting the Discovery sites?
Since I am apparently the only person who cannot download using FFmpeg with the m3u8 file (I keep getting errors), could someone be kind enough to record your screen with a step-by-step tutorial and upload that video to YouTube or another platform so I can see what I am doing wrong? Perhaps I am doing everything correctly and it's still not working? - Happy Easter!

@Mr-Jake

Mr-Jake commented Apr 2, 2018

I tried to fix the authentication for Discovery, but in the end did not have any luck. While I was attempting to fix it, I found a solution to authenticate A&E Networks (History Channel, etc.) with a Comcast login. I was also successful at restoring HTTP 720p downloads for A&E Networks. I'm not sure why the extractor was recently changed to HLS since HTTP is still available; HTTP is better since no remux is needed.

--cookies does not work for any site I try, though I used to use it all the time in the past. Since it is broken for all the sites I tried, I wonder if a commit broke it. When I have time I will look back to see if a commit is to blame.

@ghost

ghost commented Apr 2, 2018

@Mr-Jake Thank you for your effort. --cookies doesn't work for me either; however, it did in the past. I use the DirecTV login, so I don't know if youtube-dl will work for me with A&E Networks (History Channel, etc.) or TLC.

@cookieguru

--cookies does not work for any site I try, though I used to use it all the time in the past. Since it is broken for all the sites I tried, I wonder if a commit broke it. When I have time I will look back to see if a commit is to blame.

I can only speak for the Discovery sites, but I can say with 100% certainty that the reason it no longer works has nothing to do with youtube-dl and everything to do with Discovery's site. Previously, Discovery sent the URL to the playlist file with the webpage on all authenticated requests. Now the URL to the playlist is hidden behind a separate endpoint that the browser requests when the page is first loaded. This requires a separate set of cookies that is unrelated to the Adobe Pass cookies, and you have to be authenticated to get the cookies that the endpoint needs.

@ghost

ghost commented Apr 24, 2018

Anyone know how to get ffmpeg to download from links stored in a txt file?

@cookieguru

@hemps37 http://lmgtfy.com/?q=ffmpeg+download+from+links+stored+in+a+txt+file

This issue is only for discussion related to fixing youtube-dl. It is not an ffmpeg support forum.

@lemstress

Bumping this. I wish I could help. I'm having the exact same problem with some TLC shows, since TLC is a Discovery site. What's odd is that some older episodes work fine, but it seems to be newer episodes of shows that have this issue where youtube-dl asks for cookies and Adobe Pass isn't working.

@StevenDTX
Author

Thanks, @remitamine. It works great with cookies.

@dstftw added the fixed label May 13, 2018
@beren12

beren12 commented Jun 7, 2018

@cookieguru Hmm, I think Discovery changed the site again; I can't seem to get the playlist... Maybe I'm doing it wrong?

@cookieguru

@beren12 I checked a couple of episodes and they still load fine for me

@besweeet

besweeet commented Nov 3, 2018

I used the Tampermonkey script to get the full URL for the main manifest M3U8. I then replaced the beginning portion with i.m3u8 (1080p). When I take the full i.m3u8 URL, the download fails in both FFmpeg and Streamlink due to a lack of authorization (403) for check2, just like #14954 (comment).

Has Discovery defeated us?

@cookieguru

@besweeet youtube-dl has been updated; why are you still using the userscript?

@besweeet

besweeet commented Nov 3, 2018

@cookieguru youtube-dl, when just providing a URL (example: https://www.sciencechannel.com/tv-shows/outrageous-acts-of-science/full-episodes/savage-skills), throws the following error:
ERROR: This video is only available via cable service provider subscription that is not currently supported. You may want to use --cookies

It was said here that the --cookies method does not work.

@cookieguru

#14954 (comment)

@besweeet

besweeet commented Nov 3, 2018

@cookieguru All good now. I was using an incognito tab when exporting cookies with the cookies.txt Chrome extension, which may not have included everything.

@nairobi1982

@cookieguru the script is broken, kindly fix it. sent you an e-mail of the script. Thanks.

@StevenDTX
Author

@cookieguru the script is broken, kindly fix it. sent you an e-mail of the script. Thanks.

You shouldn't be using @cookieguru's userscript. All of the Discovery channels work just fine with youtube-dl if you use cookies.

@nairobi1982

@StevenDTX, how do I use the cookies with youtube-dl? Do I use the youtube-dl GUI or the command line? I'm not very savvy with these technical things.

@nairobi1982

@StevenDTX and @cookieguru, below is where I'm now stuck. How do I choose the best quality (1080p) to download?

C:\Users\user>C:\youtube-dl\youtube-dl.exe "https://www.sciencechannel.com/tv-shows/monster-black-hole-the-first-image/full-episodes/monster-black-hole-the-first-image" --cookies C:\youtube-dl\cookies.txt -F -v
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['https://www.sciencechannel.com/tv-shows/monster-black-hole-the-first-image/full-episodes/monster-black-hole-the-first-image', '--cookies', 'C:\youtube-dl\cookies.txt', '-F', '-v']
[debug] Encodings: locale cp1252, fs mbcs, out cp437, pref cp1252
[debug] youtube-dl version 2019.08.13
[debug] Python version 3.4.4 (CPython) - Windows-10-10.0.17134
[debug] exe versions: ffmpeg N-90480-ge5819fa629, ffprobe N-90480-ge5819fa629
[debug] Proxy map: {}
[Discovery] monster-black-hole-the-first-image: Downloading content JSON metadata
[Discovery] monster-black-hole-the-first-image: Downloading streaming JSON metadata
[Discovery] monster-black-hole-the-first-image: Downloading m3u8 information
[info] Available formats for 5cf6893b85aeee22b7423725:
format code extension resolution note
hls-62 mp4 96x54 62k , avc1.42000a, 6.0fps, mp4a.40.5
hls-134 mp4 192x108 134k , avc1.42000b, 15.0fps, mp4a.40.5
hls-243 mp4 288x162 243k , avc1.42000c, 15.0fps, mp4a.40.5
hls-449 mp4 448x252 449k , avc1.420015, 30.0fps, mp4a.40.5
hls-758 mp4 768x432 758k , avc1.4d001e, 30.0fps, mp4a.40.5
hls-1204 mp4 992x558 1204k , avc1.4d001f, 30.0fps, mp4a.40.5
hls-1874 mp4 1088x612 1874k , avc1.4d001f, 30.0fps, mp4a.40.5
hls-3265 mp4 1280x720 3265k , avc1.64001f, 30.0fps, mp4a.40.5
hls-5163 mp4 1920x1080 5163k , avc1.640028, 30.0fps, mp4a.40.5 (best)

C:\Users\user>

@besweeet

@nairobi1982: Replace "-F -v" with "-f best". To choose a specific quality, replace "-f best" with "-f hls-449". In that example, it will download the 448x252 version. So, the format is "-f format" where you replace "format" with the code that you see at the start of each line. Hope that makes sense.

@dare2

dare2 commented Aug 15, 2019

I use this Chrome extension:

cookies.txt Offered by: Genuinous

https://chrome.google.com/webstore/detail/cookiestxt/njabckikapfpffapmjgojcnbfjonfjfg?hl=en

Copy the cookies from the extension into a text file (say, cookies.txt) and then pass that file name to the youtube-dl script with --cookies cookies.txt

...on a completely unrelated note, the latest episode of BattleBots from the Discovery site does not download; instead it downloads a 2-minute ad for a TLC show. Playing the episode on the site works fine. Is anyone else seeing this issue?

https://go.discovery.com/tv-shows/battlebots/full-episodes/eyes-on-the-prize

EDIT: oops, I'm a little late to the party...but could someone check on my Battlebots issue?

@nairobi1982

@besweeet, thank you very much. It worked.

@dare2

dare2 commented Aug 15, 2019

...on a completely unrelated note, the latest episode of BattleBots from the Discovery site does not download; instead it downloads a 2-minute ad for a TLC show. Playing the episode on the site works fine. Is anyone else seeing this issue?

https://go.discovery.com/tv-shows/battlebots/full-episodes/eyes-on-the-prize

EDIT: oops, I'm a little late to the party...but could someone check on my Battlebots issue?

Never mind. Problem solved by just updating to the latest version (2019.08.13)

@cookieguru

cookieguru commented Aug 15, 2019

the script is broken, kindly fix it. sent you an e-mail

Bombarding me with demands is not a way to get a resolution. And as previously mentioned, this issue is for fixing youtube-dl, not the script I previously used to triage the issue. Glad someone else was able to spoonfeed you answers.

@nairobi1982

@cookieguru, it's all good, mate. Everything is now sorted. Thanks.
