WIP: Port calibre to python 3 #870

Closed
wants to merge 64 commits

Commits (64)
442fa94
Un-overwrite print function
flaviut Aug 22, 2018
387f551
Apply libmodernize.fixes.fix_print
flaviut Aug 22, 2018
20ef4a0
Add IntelliJ & venv to gitignore
flaviut Aug 22, 2018
c80a4ba
python3-ize builtins
flaviut Aug 23, 2018
3bfd909
Update octal & long literals to py3
flaviut Aug 23, 2018
87803df
Convert future_builtins to six.moves
flaviut Aug 23, 2018
35bc745
Apply fix_{zip,filter,map}
flaviut Aug 23, 2018
13894a3
Convert to getcwd
flaviut Aug 23, 2018
d1abb20
Setup unicode() builtin for python2/3 compat
flaviut Aug 23, 2018
66b2738
Use string for CALIBRE_DEBUG envvar
flaviut Aug 23, 2018
806d6ea
Fix import compatibility
flaviut Aug 23, 2018
2c8e4ec
Eliminate cStringIO
flaviut Aug 23, 2018
4da5cee
Apply fix_exec
flaviut Aug 23, 2018
a1e964b
Replace unambigious ur'' with u''
flaviut Aug 23, 2018
7af7bc0
Replace non-unicode ur'' with r''
flaviut Aug 23, 2018
124066e
Convert exceptions to python3
flaviut Aug 23, 2018
12c46ed
Set error text before erroring
flaviut Aug 23, 2018
26f2bee
Use python-modernize for setup.py
flaviut Aug 23, 2018
f7cf4d6
Manual fixes to setup.py for py3-compat
flaviut Aug 24, 2018
1f0dfe8
Build hunspell in py3
flaviut Aug 24, 2018
ce2e8ae
Build monotonic in py3
flaviut Aug 24, 2018
a659c90
Build unicode_names in py3
flaviut Aug 24, 2018
8284d89
Build speedups for py3
flaviut Aug 24, 2018
37baa20
Some progress on setup.py test in py3
flaviut Aug 24, 2018
affc076
Build html in py3
flaviut Aug 24, 2018
439ec81
Build tinycss.tokenizer in py3
flaviut Aug 24, 2018
6c2e72c
Build _patiencediff_c in py3
flaviut Aug 24, 2018
00ed3a0
Build icu in py3
flaviut Aug 24, 2018
7d7a812
A little further in py3
flaviut Aug 24, 2018
f57e249
Replace all ob_type with macro for py3
flaviut Aug 24, 2018
2312f16
Update PyInt_ & PyObject_HEAD_INIT for py3
flaviut Aug 24, 2018
7c7e3b1
Use more readable syntax for PyTypeObjects
flaviut Aug 24, 2018
a081aa3
Build zlib2 in py3
flaviut Aug 24, 2018
6cb3d30
Build certgen in py3
flaviut Aug 24, 2018
fd7c31d
Build matcher in py3
flaviut Aug 24, 2018
b1e7fbe
Build sqlite_custom in py3
flaviut Aug 24, 2018
70b38b3
Import chm from upstream
flaviut Aug 24, 2018
01da3a0
Build freetype in py3
flaviut Aug 24, 2018
8013632
Build msdes in py3
flaviut Aug 24, 2018
468055c
Build podofo in py3
flaviut Aug 24, 2018
d5933e8
Build qt_hack in py3
flaviut Aug 24, 2018
4c60a6d
Build libmtp in py3
flaviut Aug 24, 2018
2703ed3
Fix most of py3 localization
flaviut Aug 24, 2018
1d26b77
Apply fix_dict_six to eliminate iter*()
flaviut Aug 24, 2018
d61f295
Build lmza_binding in py3
flaviut Aug 24, 2018
8ce73a7
Update utils/terminal to py3
flaviut Aug 24, 2018
5e61be4
Use six.string_types instead of basestr
flaviut Aug 24, 2018
b9a4808
More py3 fixes
flaviut Aug 25, 2018
ef9b4eb
Remove tab indentation
flaviut Aug 25, 2018
545e04b
More manual py3 fixes
flaviut Aug 25, 2018
fcf51c5
Eliminate xrange for py3
flaviut Aug 25, 2018
4c3094a
Expose SIP Py3 module init
flaviut Aug 25, 2018
d532b7b
Some more manual py3 fixes
flaviut Aug 25, 2018
8bc5001
Fix imap, izip, iteritems
flaviut Aug 25, 2018
f0e38b0
Use key= for sorting instead of cmp=
flaviut Aug 25, 2018
c84fecf
fu bed6da6
flaviut Aug 25, 2018
b37b0e4
Modernize recipies
flaviut Aug 25, 2018
93724e8
Fix ur'' strings
flaviut Aug 25, 2018
81daa64
Convert recipies to py3 print
flaviut Aug 25, 2018
23ac522
...
flaviut Aug 25, 2018
7768b04
More python3 work
flaviut Aug 30, 2018
d2288ab
GUI starts in py3!
flaviut Aug 30, 2018
916029d
More work
flaviut Aug 30, 2018
498e6dd
More manual python3 work
flaviut Sep 3, 2018
2 changes: 2 additions & 0 deletions .gitignore
Original file line number Diff line number Diff line change
@@ -13,6 +13,8 @@ manual/locale
build
dist
docs
env*
.idea
resources/localization
resources/scripts.pickle
resources/ebook-convert-complete.pickle
4 changes: 2 additions & 2 deletions manual/custom.py
@@ -236,7 +236,7 @@ def render_options(cmd, groups, options_header=True, add_program=True, header_le


def mark_options(raw):
raw = re.sub(r'(\s+)--(\s+)', ur'\1``--``\2', raw)
raw = re.sub(r'(\s+)--(\s+)', r'\1``--``\2', raw)

def sub(m):
opt = m.group()
@@ -274,7 +274,7 @@ def cli_docs(app):
info(bold('creating CLI documentation...'))
documented_cmds, undocumented_cmds = get_cli_docs()

documented_cmds.sort(cmp=lambda x, y: cmp(x[0], y[0]))
documented_cmds.sort(key=lambda x: x[0])
undocumented_cmds.sort()

documented = [' '*4 + c[0] for c in documented_cmds]
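The `custom.py` hunk above swaps Python 2's `cmp=` comparator for a `key=` function, since `list.sort(cmp=...)` and the `cmp()` builtin were removed in Python 3. A minimal sketch of the equivalent behavior (command names are hypothetical, not from the diff):

```python
# Python 2 style (no longer valid on Python 3):
#   documented_cmds.sort(cmp=lambda x, y: cmp(x[0], y[0]))
# Python 3 style: sort by the first tuple element directly.
documented_cmds = [('ebook-convert', 'docs'), ('calibre', 'docs'), ('lrf2lrs', 'docs')]
documented_cmds.sort(key=lambda x: x[0])
print(documented_cmds[0][0])  # prints: calibre
```

A `key=` function is also faster than `cmp=` was, since it is called once per element rather than once per comparison.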
7 changes: 4 additions & 3 deletions recipes/DrawAndCook.recipe
@@ -1,3 +1,4 @@
from __future__ import print_function
from calibre.web.feeds.news import BasicNewsRecipe
import re

@@ -30,7 +31,7 @@ class DrawAndCook(BasicNewsRecipe):
articles = self.make_links(url)
if articles:
feeds.append((title, articles))
print 'feeds are: ', feeds
print('feeds are: ', feeds)
return feeds

def make_links(self, url):
@@ -45,9 +46,9 @@ class DrawAndCook(BasicNewsRecipe):
'li', attrs={'data-id': re.compile(r'artwork_entry_\d+', re.DOTALL)})
for recipe in recipes:
page_url = self.INDEX + recipe.a['href']
print 'page_url is: ', page_url
print('page_url is: ', page_url)
title = recipe.find('strong').string
print 'title is: ', title
print('title is: ', title)
current_articles.append(
{'title': title, 'url': page_url, 'description': '', 'date': date})
return current_articles
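This recipe shows the `print`-statement conversion applied across the tree: with `from __future__ import print_function` at the top of the file, the function-call syntax works identically on Python 2 and 3. A small illustration of why the import matters (feed values are hypothetical):

```python
from __future__ import print_function  # no-op on py3; makes print a function on py2

feeds = [('DrawAndCook', [{'title': 'Recipe', 'url': 'http://example.com'}])]
# Without the future import, Python 2 would parse print('feeds are: ', feeds)
# as the print *statement* applied to a tuple, printing ('feeds are: ', ...)
# instead of two space-separated values.
print('feeds are: ', feeds)
```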
26 changes: 13 additions & 13 deletions recipes/air_force_times.recipe
@@ -26,17 +26,17 @@ class AirForceTimes(BasicNewsRecipe):
auto_cleanup = True

feeds = [
('Home','http://feeds.feedburner.com/rss/category/air-home?format=xml'),
('Health Benefits','http://feeds.feedburner.com/rss/category/air-healthbenefits?format=xml'),
('Retirement Benefits','http://feeds.feedburner.com/rss/category/air-retirementbenefits?format=xml'),
('Veterans Benefits','http://feeds.feedburner.com/rss/category/air-VeteransBenefits?format=xml'),
('Education Benefits','http://feeds.feedburner.com/rss/category/air-educationbenefits?format=xml'),
('Adventure','http://feeds.feedburner.com/rss/category/air-adventure?format=xml'),
('Entertainment','http://feeds.feedburner.com/rss/category/air-Entertainment?format=xml'),
('Careers','http://feeds.feedburner.com/rss/category/air-careers?format=xml'),
('Technology','http://feeds.feedburner.com/rss/category/air-technology?format=xml'),
('Opinion','http://feeds.feedburner.com/rss/category/air-opinion?format=xml'),
('Pay','http://feeds.feedburner.com/rss/category/air-pay?format=xml'),
('Guard','http://feeds.feedburner.com/rss/category/air-guard?format=xml'),
('Your Air Force','http://feeds.feedburner.com/rss/category/air-yourairforce?format=xml'),
('Home', 'http://feeds.feedburner.com/rss/category/air-home?format=xml'),
('Health Benefits', 'http://feeds.feedburner.com/rss/category/air-healthbenefits?format=xml'),
('Retirement Benefits', 'http://feeds.feedburner.com/rss/category/air-retirementbenefits?format=xml'),
('Veterans Benefits', 'http://feeds.feedburner.com/rss/category/air-VeteransBenefits?format=xml'),
('Education Benefits', 'http://feeds.feedburner.com/rss/category/air-educationbenefits?format=xml'),
('Adventure', 'http://feeds.feedburner.com/rss/category/air-adventure?format=xml'),
('Entertainment', 'http://feeds.feedburner.com/rss/category/air-Entertainment?format=xml'),
('Careers', 'http://feeds.feedburner.com/rss/category/air-careers?format=xml'),
('Technology', 'http://feeds.feedburner.com/rss/category/air-technology?format=xml'),
('Opinion', 'http://feeds.feedburner.com/rss/category/air-opinion?format=xml'),
('Pay', 'http://feeds.feedburner.com/rss/category/air-pay?format=xml'),
('Guard', 'http://feeds.feedburner.com/rss/category/air-guard?format=xml'),
('Your Air Force', 'http://feeds.feedburner.com/rss/category/air-yourairforce?format=xml'),
]
3 changes: 2 additions & 1 deletion recipes/al_monitor.recipe
@@ -1,5 +1,6 @@
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
from __future__ import print_function
__license__ = 'GPL v3'
__copyright__ = '2014, spswerling'
'''
@@ -210,4 +211,4 @@ class AlMonitor(BasicNewsRecipe):
curframe = inspect.currentframe()
calframe = inspect.getouterframes(curframe, 2)
calname = calframe[1][3].upper()
print('[' + calname + '] ' + msg[0:100])
print(('[' + calname + '] ' + msg[0:100]))
2 changes: 1 addition & 1 deletion recipes/alejakomiksu_com.recipe
@@ -21,7 +21,7 @@ class AlejaKomiksu(BasicNewsRecipe):
remove_attributes = ['style', 'font']
ignore_duplicate_articles = {'title', 'url'}

keep_only_tags = dict(attrs={'class': ['akNews__header','akNews__body']})
keep_only_tags = dict(attrs={'class': ['akNews__header', 'akNews__body']})

feeds = [(u'Wiadomości', 'http://www.alejakomiksu.com/rss.php5')]

9 changes: 5 additions & 4 deletions recipes/am730.recipe
@@ -1,5 +1,6 @@
# vim:fileencoding=UTF-8
from __future__ import unicode_literals
from __future__ import print_function
__license__ = 'GPL v3'
__copyright__ = '2013, Eddie Lau'
__Date__ = ''
@@ -60,7 +61,7 @@ class AM730(BasicNewsRecipe):
title = href.split('/')[-1].split('-')[0]
title = urllib.unquote(title.encode('ASCII')) # .decode('utf-8')
if self.debug:
print title
print(title)
try:
if articles.index({'title':title,'url':href})>=0:
# print 'already added'
@@ -73,7 +74,7 @@
if (len(articles) >= self.max_articles_per_feed):
break
if self.debug:
print articles
print(articles)
return (sectionName,articles)

def parse_index(self):
@@ -89,8 +90,8 @@
SectionsArticles=[]
for (title, url) in Sections:
if self.debug:
print title
print url
print(title)
print(url)
SectionsArticles.append(self.getAMSectionArticles(title,url))
# feeds.append(articles[0]['url'])
return SectionsArticles
28 changes: 14 additions & 14 deletions recipes/ambito.recipe
@@ -37,22 +37,22 @@ class Ambito(BasicNewsRecipe):

keep_only_tags = [
dict(name='h6', attrs={'class': lambda x: x and 'bajada' in x.split()})
,dict(name='span', attrs={'class': lambda x: x and 'dia' in x.split()})
,dict(attrs={'class': lambda x: x and 'titulo-noticia' in x.split()})
,dict(attrs={'class': lambda x: x and 'foto-perfil-columnista' in x.split()})
,dict(attrs={'class': lambda x: x and 'despliegue-noticia' in x.split()})
, dict(name='span', attrs={'class': lambda x: x and 'dia' in x.split()})
, dict(attrs={'class': lambda x: x and 'titulo-noticia' in x.split()})
, dict(attrs={'class': lambda x: x and 'foto-perfil-columnista' in x.split()})
, dict(attrs={'class': lambda x: x and 'despliegue-noticia' in x.split()})
]
remove_tags = [dict(name=['object','link','embed','iframe','meta','link'])]
remove_tags = [dict(name=['object', 'link', 'embed', 'iframe', 'meta', 'link'])]

feeds = [
(u'Principales Noticias', u'http://www.ambito.com/rss/noticiasp.asp')
,(u'Economia' , u'http://www.ambito.com/rss/noticias.asp?S=Econom%EDa')
,(u'Politica' , u'http://www.ambito.com/rss/noticias.asp?S=Pol%EDtica')
,(u'Informacion General' , u'http://www.ambito.com/rss/noticias.asp?S=Informaci%F3n%20General')
,(u'Campo' , u'http://www.ambito.com/rss/noticias.asp?S=Agro')
,(u'Internacionales' , u'http://www.ambito.com/rss/noticias.asp?S=Internacionales')
,(u'Deportes' , u'http://www.ambito.com/rss/noticias.asp?S=Deportes')
,(u'Espectaculos' , u'http://www.ambito.com/rss/noticias.asp?S=Espect%E1culos')
,(u'Tecnologia' , u'http://www.ambito.com/rss/noticias.asp?S=Tecnolog%EDa')
,(u'Ambito Nacional' , u'http://www.ambito.com/rss/noticias.asp?S=Ambito%20Nacional')
, (u'Economia', u'http://www.ambito.com/rss/noticias.asp?S=Econom%EDa')
, (u'Politica', u'http://www.ambito.com/rss/noticias.asp?S=Pol%EDtica')
, (u'Informacion General', u'http://www.ambito.com/rss/noticias.asp?S=Informaci%F3n%20General')
, (u'Campo', u'http://www.ambito.com/rss/noticias.asp?S=Agro')
, (u'Internacionales', u'http://www.ambito.com/rss/noticias.asp?S=Internacionales')
, (u'Deportes', u'http://www.ambito.com/rss/noticias.asp?S=Deportes')
, (u'Espectaculos', u'http://www.ambito.com/rss/noticias.asp?S=Espect%E1culos')
, (u'Tecnologia', u'http://www.ambito.com/rss/noticias.asp?S=Tecnolog%EDa')
, (u'Ambito Nacional', u'http://www.ambito.com/rss/noticias.asp?S=Ambito%20Nacional')
]
11 changes: 6 additions & 5 deletions recipes/ambito_financiero.recipe
@@ -9,11 +9,12 @@ ambito.com/diario
'''

import time
import urllib
import six.moves.urllib.request, six.moves.urllib.parse, six.moves.urllib.error
import re
from calibre import strftime
from calibre.web.feeds.news import BasicNewsRecipe
from calibre.ebooks.BeautifulSoup import BeautifulSoup
import six


class Ambito_Financiero(BasicNewsRecipe):
@@ -58,7 +59,7 @@ class Ambito_Financiero(BasicNewsRecipe):
br = BasicNewsRecipe.get_browser(self)
br.open(self.INDEX)
if self.username is not None and self.password is not None:
data = urllib.urlencode({
data = six.moves.urllib.parse.urlencode({
'txtUser': self.username,
'txtPassword': self.password
})
@@ -98,7 +99,7 @@ class Ambito_Financiero(BasicNewsRecipe):
if self.session_id:
l, s, r = url.rpartition('/')
artid, s1, r1 = r.partition('-')
data = urllib.urlencode({'id': artid, 'id_session': self.session_id})
data = six.moves.urllib.parse.urlencode({'id': artid, 'id_session': self.session_id})
response = self.browser.open(
'http://data.ambito.com/diario/cuerpo_noticia.asp', data
)
@@ -109,12 +110,12 @@
cfind = smallsoup.find('div', id="contenido_data")
if cfind:
p.append(cfind)
return unicode(soup)
return six.text_type(soup)
return raw_html

def cleanup(self):
if self.session_id is not None:
data = urllib.urlencode({'session_id': self.session_id})
data = six.moves.urllib.parse.urlencode({'session_id': self.session_id})
self.browser.open(
'http://www.ambito.com/diario/no-cache/login/x_logout.asp', data
)
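This recipe hits the `urllib` reorganization: Python 3 moved `urllib.urlencode` to `urllib.parse.urlencode`, and `six.moves` papers over the split so one import line works on both interpreters. A minimal sketch of the pattern (field names are taken from the diff; the values and the `six`-less fallback are illustrative):

```python
try:
    # six maps this to urllib.urlencode on py2 and urllib.parse.urlencode on py3
    from six.moves.urllib.parse import urlencode
except ImportError:
    # fallback when six is unavailable: the py3 stdlib location
    from urllib.parse import urlencode

data = urlencode({'txtUser': 'reader', 'txtPassword': 'secret'})
print(data)  # e.g. txtUser=reader&txtPassword=secret
```

Importing the single name (`urlencode`) rather than the full `six.moves.urllib.request, six.moves.urllib.parse, ...` chain also keeps call sites short.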
3 changes: 2 additions & 1 deletion recipes/american_thinker.recipe
@@ -7,6 +7,7 @@ import html5lib
from calibre.web.feeds.news import BasicNewsRecipe
from calibre.utils.cleantext import clean_xml_chars
from lxml import etree
import six


class AmericanThinker(BasicNewsRecipe):
@@ -34,7 +35,7 @@ class AmericanThinker(BasicNewsRecipe):
namespaceHTMLElements=False)
for x in root.xpath('''descendant-or-self::*[@class and contains(concat(' ', normalize-space(@class), ' '), ' article_body ') and (@class and contains(concat(' ', normalize-space(@class), ' '), ' bottom '))]'''): # noqa
x.getparent().remove(x)
return etree.tostring(root, encoding=unicode)
return etree.tostring(root, encoding=six.text_type)

feeds = [(u'http://feeds.feedburner.com/americanthinker'),
(u'http://feeds.feedburner.com/AmericanThinkerBlog')
2 changes: 1 addition & 1 deletion recipes/android_com_pl.recipe
@@ -15,5 +15,5 @@ class Android_com_pl(BasicNewsRecipe):
remove_tags_after = [{'class': 'post-content'}]
remove_tags = [dict(name='ul', attrs={'class': 'tags small-tags'}), dict(name='a', attrs={'onclick': 'return ss_plugin_loadpopup_js(this);'})]
preprocess_regexps = [
(re.compile(ur'<p>.{,1}</p>', re.DOTALL), lambda match: '')]
(re.compile(u'<p>.{,1}</p>', re.DOTALL), lambda match: '')]
feeds = [(u'Android', u'http://android.com.pl/feed/')]
16 changes: 9 additions & 7 deletions recipes/apple_daily.recipe
@@ -1,5 +1,7 @@
# vim:fileencoding=UTF-8
from __future__ import unicode_literals
import six
from six.moves import range
__license__ = 'GPL v3'
__copyright__ = '2013-2015, Eddie Lau'
__Date__ = ''
@@ -162,7 +164,7 @@ class AppleDaily(BasicNewsRecipe):
article_titles.append(force_unicode(a.title, 'utf-8'))

mi.comments = self.description
if not isinstance(mi.comments, unicode):
if not isinstance(mi.comments, six.text_type):
mi.comments = mi.comments.decode('utf-8', 'replace')
mi.comments += ('\n\n' + _('Articles in this issue: ') + '\n' +
'\n\n'.join(article_titles))
@@ -256,24 +258,24 @@ class AppleDaily(BasicNewsRecipe):
if os.path.exists(last):
with open(last, 'rb') as fi:
src = fi.read().decode('utf-8')
src = src.replace('height:260px !important;','') # fix flow-player div tag parent
src = src.replace('height:260px !important;', '') # fix flow-player div tag parent
soup = BeautifulSoup(src)
body = soup.find('body')
if body is not None:
prefix = '/'.join('..'for i in range(2 *
len(re.findall(r'link\d+', last))))
prefix = '/'.join('..'for i in list(range(2 *
len(re.findall(r'link\d+', last)))))
templ = self.navbar.generate(True, num, j, len(f),
not self.has_single_feed,
a.orig_url, __appname__, prefix=prefix,
center=self.center_navbar)
translatedTempl =re.sub(
'<hr.*<br','<hr>本篇由 '+__appname__+
'<hr.*<br', '<hr>本篇由 '+__appname__+
' 快取自 <a href="http://hkm.appledaily.com/" >蘋果日報</a> ; <a href="'+a.orig_url+'">本篇來源位置</a>。'+
'<br',templ.render(doctype='xhtml').decode('utf-8'),flags=re.S)
'<br', templ.render(doctype='xhtml').decode('utf-8'), flags=re.S)
elem = BeautifulSoup(translatedTempl).find('div')
body.insert(len(body.contents), elem)
with open(last, 'wb') as fi:
fi.write(unicode(soup).encode('utf-8'))
fi.write(six.text_type(soup).encode('utf-8'))
if len(feeds) == 0:
raise Exception('All feeds are empty, aborting.')
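The `unicode` builtin does not exist in Python 3, so the `isinstance(..., unicode)` checks and `unicode(soup)` calls above become `six.text_type`, which names `unicode` on py2 and `str` on py3. A sketch of the idea (the sample text and the `six`-less fallback are illustrative):

```python
try:
    from six import text_type  # unicode on py2, str on py3
except ImportError:
    text_type = str  # py3-only fallback

comments = b'Articles in this issue'.decode('utf-8')
assert isinstance(comments, text_type)  # the check the recipe performs
# Serializing back to bytes for file output, as fi.write(...) requires:
payload = text_type(comments).encode('utf-8')
print(type(payload).__name__)  # prints: bytes
```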

4 changes: 2 additions & 2 deletions recipes/appledaily_tw.recipe
@@ -104,7 +104,7 @@ class AppledailyTW(BasicNewsRecipe):
]

def preprocess_raw_html(self, raw_html, url):
raw_html = re.sub(ur'<a href=".*?<br><br>.*?<\/a>', '', raw_html)
raw_html = re.sub(r'<a href=".*?<br><br>.*?<\/a>', '', raw_html)
raw_html = re.sub(
ur'<title>(.*?)[\s]+\|.*<\/title>', '<title>\1<\/title>', raw_html)
r'<title>(.*?)[\s]+\|.*<\/title>', '<title>\1<\/title>', raw_html)
return raw_html
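The `ur''` prefix is a syntax error on Python 3, so these regexes drop either the `u` or the `r`. Dropping the `u` (keeping the raw string) is generally the safer direction for patterns containing backslashes. A quick check of the converted first pattern (the sample HTML is hypothetical):

```python
import re

# ur'...' is invalid syntax on py3; r'...' preserves the raw-string
# behavior the escape sequences in the pattern rely on.
pattern = re.compile(r'<a href=".*?<br><br>.*?<\/a>')
sample = '<a href="x">spam<br><br>link text</a>'
print(bool(pattern.search(sample)))  # prints: True
```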
3 changes: 2 additions & 1 deletion recipes/auto_prove.recipe
@@ -1,4 +1,5 @@
#!/usr/bin/env python2
from __future__ import print_function
__license__ = 'GPL v3'
__author__ = 'GabrieleMarini, based on Darko Miletic'
__copyright__ = '2009, Darko Miletic <darko.miletic at gmail.com>, Gabriele Marini'
@@ -56,7 +57,7 @@ class AutoPR(BasicNewsRecipe):
]:
soup = self.index_to_soup(url)
soup = soup.find('channel')
print soup
print(soup)

for article in soup.findAllNext('item'):
title = self.tag_to_string(article.title)
4 changes: 2 additions & 2 deletions recipes/azstarnet.recipe
@@ -4,7 +4,7 @@ __copyright__ = '2009-2010, Darko Miletic <darko.miletic at gmail.com>'
'''
azstarnet.com
'''
import urllib
import six.moves.urllib.request, six.moves.urllib.parse, six.moves.urllib.error
from calibre.web.feeds.news import BasicNewsRecipe


@@ -31,7 +31,7 @@ class Azstarnet(BasicNewsRecipe):
br = BasicNewsRecipe.get_browser(self)
br.open('http://azstarnet.com/')
if self.username is not None and self.password is not None:
data = urllib.urlencode({'m': 'login', 'u': self.username, 'p': self.password, 'z': 'http://azstarnet.com/'
data = six.moves.urllib.parse.urlencode({'m': 'login', 'u': self.username, 'p': self.password, 'z': 'http://azstarnet.com/'
})
br.open('http://azstarnet.com/app/registration/proxy.php', data)
return br
12 changes: 6 additions & 6 deletions recipes/banat_news.recipe
@@ -52,12 +52,12 @@ class BanatNews(BasicNewsRecipe):

feeds = [

('Balita' , 'http://rss.philstar.com/Rss.aspx?publicationSubCategoryId=101'),
('Opinyon' , 'http://rss.philstar.com/Rss.aspx?publicationSubCategoryId=102'),
('Kalingawan' , 'http://rss.philstar.com/Rss.aspx?publicationSubCategoryId=104'),
('Showbiz' , 'http://rss.philstar.com/Rss.aspx?publicationSubCategoryId=62'),
('Palaro' , 'http://rss.philstar.com/Rss.aspx?publicationSubCategoryId=103'),
('Imong Kapalaran' , 'http://rss.philstar.com/Rss.aspx?publicationSubCategoryId=105')
('Balita', 'http://rss.philstar.com/Rss.aspx?publicationSubCategoryId=101'),
('Opinyon', 'http://rss.philstar.com/Rss.aspx?publicationSubCategoryId=102'),
('Kalingawan', 'http://rss.philstar.com/Rss.aspx?publicationSubCategoryId=104'),
('Showbiz', 'http://rss.philstar.com/Rss.aspx?publicationSubCategoryId=62'),
('Palaro', 'http://rss.philstar.com/Rss.aspx?publicationSubCategoryId=103'),
('Imong Kapalaran', 'http://rss.philstar.com/Rss.aspx?publicationSubCategoryId=105')
]

# process the printer friendly version of article
2 changes: 1 addition & 1 deletion recipes/barrons.recipe
@@ -7,7 +7,7 @@ from __future__ import (unicode_literals, division, absolute_import,

import json
from mechanize import Request
from urllib import quote
from six.moves.urllib.parse import quote

from calibre.web.feeds.news import BasicNewsRecipe

1 change: 1 addition & 0 deletions recipes/bash_org_pl.recipe
@@ -1,4 +1,5 @@
from calibre.web.feeds.news import BasicNewsRecipe
from six.moves import range


class Bash_org_pl(BasicNewsRecipe):
4 changes: 2 additions & 2 deletions recipes/benchmark_pl.recipe
@@ -16,8 +16,8 @@ class BenchmarkPl(BasicNewsRecipe):
extra_css = 'ul {list-style-type: none;}'
no_stylesheets = True
use_embedded_content = False
preprocess_regexps = [(re.compile(ur'<h3><span style="font-size: small;">&nbsp;Zobacz poprzednie <a href="http://www.benchmark.pl/news/zestawienie/grupa_id/135">Opinie dnia:</a></span>.*</body>', # noqa
re.DOTALL | re.IGNORECASE), lambda match: '</body>'), (re.compile(ur'Więcej o .*?</ul>', re.DOTALL | re.IGNORECASE), lambda match: '')] # noqa
preprocess_regexps = [(re.compile(u'<h3><span style="font-size: small;">&nbsp;Zobacz poprzednie <a href="http://www.benchmark.pl/news/zestawienie/grupa_id/135">Opinie dnia:</a></span>.*</body>', # noqa
re.DOTALL | re.IGNORECASE), lambda match: '</body>'), (re.compile(u'Więcej o .*?</ul>', re.DOTALL | re.IGNORECASE), lambda match: '')] # noqa

keep_only_tags = [dict(id=['articleHeader', 'articleGallery']), dict(
name='div', attrs={'class': ['m_zwykly', 'gallery']}), dict(id='article')]