Crunchyroll Login page throwing 503 #11572
Comments
|
I noticed this too, and when I looked more at it, it seems like cloudflare is at fault here. I suggest using cookies for now. |
|
Glad it's not just me then. |
|
Same issue here. |
|
Including cookies isn't allowing me to pass the cloudflare check. (The |
|
@braydenm303 Last time I checked, cloudflare uses more than just one cookie. These are the cookies that I have and it works for me: |
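For anyone comparing, here is a quick way to sanity-check which cookies made it into an exported cookies.txt (the Netscape format that youtube-dl's `--cookies` expects). The cookie names below are only examples of the kind Cloudflare sets; compare against whatever your own browser actually stores:

```shell
# Build a tiny example cookies.txt in Netscape format.
# Fields are: domain, include-subdomains, path, secure, expiry, name, value.
# Cookie names/values here are illustrative, not the real Cloudflare set.
printf '# Netscape HTTP Cookie File\n' > cookies.txt
printf '.crunchyroll.com\tTRUE\t/\tFALSE\t2147483647\t%s\texample-value\n' \
  __cfduid cf_clearance >> cookies.txt

# Check that the cookies you expect are present before blaming youtube-dl.
for name in __cfduid cf_clearance; do
  grep -q "$name" cookies.txt && echo "found $name" || echo "missing $name"
done
# prints:
# found __cfduid
# found cf_clearance
```

If a cookie you can see in your browser's storage is missing from the export, re-export it or try a different exporter extension.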
|
@Starsam80 Hmm. Right now, I have |
|
Add all of them and maybe also make your user agent string match your browser’s. |
|
Setting the user agent string worked. (FYI, cookies and agent string were from a Chromium-based browser (Opera beta 43.0) in case anyone else has a similar problem.) |
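To make the fix above concrete, this is roughly what the working invocation looks like, using youtube-dl's `--cookies` and `--user-agent` options. The UA value is an example Chromium-style string (substitute the exact string your own browser reports), and the URL is the example used later in this thread:

```shell
# Sketch: pass both the exported cookie file and the matching user agent.
# The UA below is an example Chromium-style string; use your browser's own.
UA='Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36'
if command -v youtube-dl >/dev/null 2>&1; then
  youtube-dl --cookies cookies.txt --user-agent "$UA" \
    'http://www.crunchyroll.com/monster-strike/episode-2-the-karmic-fist-of-wrath-734365'
else
  echo 'youtube-dl not on PATH; the command above shows the shape to use'
fi
```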
|
How are you all extracting cookies? Plugins? |
|
Maybe it’s time to integrate cf-specific workarounds because cf decided to treat users like crap? |
|
cfscrape is indeed an awesome and handy tool. I've replaced requests in some of my projects with cfscrape. But, cfscrape only works for those simple browser checks. If there's a captcha, it'll fail. |
|
Could someone please show me a working command? It seems I'm too stupid to get the cookie file / user agent thing working :/ Thanks in advance!! |
|
Find your user agent: http://www.esolutions.se/whatsmyinfo Pass that file with the argument |
|
Thx! Sadly it doesn't work for me: |
|
@Crank666 Hi there, make sure you browse to crunchyroll after you install the plugin. After you do that, the top entries should be crunchyroll cookies in either the .txt file or the plugin. |
|
# HTTP Cookie File for domains related to crunchyroll.com. Only the above-mentioned "bm_last_load_status" is missing... -> After I run the youtube-dl command, the cookies.txt is replaced with this: .crunchyroll.com TRUE / FALSE __cfduid |
|
@Crank666 If you still haven't got it working, remove the username/password parameters and it should work (it will auth with the cookie file) |
|
Hi, you have to check these things:
Also, what's your error? |
|
This unfortunately does not work for me either. The command + output is the following:
My Cookies File looks like this:
|
|
@Ugrend worked perfectly! THX! |
|
@Crank666 Without the useragent it is working, thanks! |
|
It seems the whole crunchyroll page is now protected with cloudflare and access is not possible at all anymore. |
Not that bad :)
|
|
It's still working for me to download the .flv, but now I get error 500 when downloading subtitles. I think it's a new issue that needs to be fixed in a new version of youtube-dl.
I'm gonna look into it tomorrow. EDIT: Here is the error
|
|
Logging out using your browser (thereby refreshing the Cloudflare token) and
then logging back in and updating the cookie file helps in some cases.
|
|
@lachs0r Gonna try to refresh the cookie and also test it under windows for the subs. |
|
FWIW, CR and several other entertainment services are under DDoS attack right
now, so that’s the cause of this Cloudflare behavior and occasional error
pages.
|
|
Of course it makes sense... if you're using the site as originally intended, you HAVE to access an HTML page before the API is accessed. |
|
APIs can be accessed from apps on other devices and environments (even from the CLI, as in the case of Amazon AWS), and it doesn't make sense to include or redirect the user to a browser just to pass the cf anti-bot page. |
|
Still does not work:
|
|
See #11730 (comment) for my previous comment about cfscrape. Or we can add an option --use-insecure-features. Not sure if it's a good idea. |
|
Oh, cfscrape now uses Node.js's vm feature. That's much better and we can revisit that :) |
|
That's old... cfscrape doesn't use js2py anymore, as he ported it to NodeJS. Check the Dependencies section of cfscrape. |
|
Yep. Although, people will need to have NodeJS installed. So maybe you can pass an option like "--cf-bypass", which would enable cfscrape and hence NodeJS? Or you could keep it on by default. |
|
Note that now there is only one request that hits the cf anti-bot page. I think that the extraction of the needed |
|
Imposing restrictions on the mobile APIs is simpler. They can write complex HMAC codes in C, obfuscate it and strip all the useful debug info from shared libraries. That's a common practice in Chinese video sharing websites. Is there a crunchyroll staff watching this ticket? :) |
There are similar ways to achieve what the Android NDK can do for apps on the web:
but it's not unbreakable, it just needs more work; even some of the common DRM schemes are already broken. |
|
I still think utilizing cfscrape would be easiest and fastest way to solve this xD! |
|
@remitamine ffs, no
This is not a public API. It's private and undocumented, intended for the purpose of funimation's site only. Thus your argument doesn't apply. And in general, see #11572 (comment) for the method I've been using to keep CR extractor working for some time now. |
|
Even with I did a Find+Replace for |
|
I'm having trouble with this too as of late. It had been working fine until recently, just passing the cookies file in. I've tried with and without a user agent, with and without username/password. I've used the new cookies exporter with Firefox's update and removed the #HttpOnly_ sections. Also tried the cfscrape method. Nothing is logging me in properly. I do have a premium account. |
|
It seems to get me past Cloudflare, but authenticating beyond that fails. |
|
Could you paste verbose logs? |
|
Quick example of a premium-only video. Fresh cookies.txt with me logged in. Lack of a 503 leads me to believe it's getting past the Cloudflare layer.
|
|
@kueller: How about using --username and --password? |
|
"youtube-dl http://www.crunchyroll.com/monster-strike/episode-2-the-karmic-fist-of-wrath-734365 -f best --write-sub --sub-lang deDE --sub-format ass --cookies cookies.txt --download-archive archive.txt" |
|
Huh, ok. So I tried that Chrome extension and it works. I've been using a new cookie exporter from Firefox (after the recent update) and despite the cookies.txt looking valid it doesn't work. Whatever the reason at least I know it's not a problem here. Thank you for the help. |
|
You're welcome =) (Had the same problems with FF; now it's only my second-choice browser :( ) |

Please follow the guide below

- Put x into all the boxes [ ] relevant to your issue (like that [x])
- Make sure you are using the latest version: run youtube-dl --version and ensure your version is 2016.12.22. If it's not, read this FAQ entry and update. Issues with an outdated version will be rejected.
- Before submitting an issue make sure you have:

What is the purpose of your issue?

The following sections concretize particular purposed issues; you can erase any section (the contents between triple ---) not applicable to your issue.

If the purpose of this issue is a bug report, site support request or you are not completely sure, provide the full verbose output as follows: add the -v flag to the command line you run youtube-dl with, copy the whole output and insert it here. It should look similar to the one below (replace it with your log inserted between triple ```).

Description of your issue, suggested solution and other information

If you provide login credentials for Crunchyroll, the script shows an "HTTP Error 503: Service Temporarily Unavailable (caused by HTTPError())" error. But if you remove the login credentials, it'll download the video/playlist just fine. Above, I have provided a log with the login credentials. I can open the login page just fine in all browsers and I'm not behind any firewall. I have tried this on 2 different systems, in different locations, and got the same result. I checked the login page and it was just a Cloudflare browser check; there was no captcha to solve or anything, as mentioned in this issue.
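For reference, the reported behavior boils down to the following two invocations, with the -v flag the template asks for. The credentials here are placeholders, and the URL is the example used earlier in the thread:

```shell
URL='http://www.crunchyroll.com/monster-strike/episode-2-the-karmic-fist-of-wrath-734365'
if command -v youtube-dl >/dev/null 2>&1; then
  # With login credentials: fails with HTTP Error 503 (this report).
  youtube-dl -v --username 'user@example.com' --password 'placeholder' "$URL"
  # Without credentials: the same video/playlist downloads fine.
  youtube-dl -v "$URL"
else
  echo 'youtube-dl not on PATH; install it to reproduce'
fi
```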