
Standardize providers #665

Merged: p0psicles merged 59 commits into move-seeder-sorting from stand-providers on Jun 18, 2016

Conversation

@medariox (Contributor) commented Jun 4, 2016

The goal of this PR is to bring the existing providers into a more standardized form, without rewriting any of them.

Planned changes (a short sketch of some of them follows below):

  • Use unicode_literals
  • Add tracebacks
  • Use format()
  • Rename variables to snake case
  • Clean up imports
  • Change migration leftovers

This will have to be merged after #658
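For illustration, here is a minimal sketch of what this kind of cleanup looks like on a provider method; the function and variable names are hypothetical stand-ins, not code from this PR:

from __future__ import unicode_literals

import logging
import traceback

logger = logging.getLogger(__name__)


def parse_results(raw_items):
    """Parse raw provider items into (title, download_url) tuples (illustrative only)."""
    items = []
    try:
        for raw_item in raw_items:
            # snake_case names, no str()/unicode() juggling thanks to unicode_literals
            title = raw_item.get('title')
            download_url = raw_item.get('link')
            if not all([title, download_url]):
                continue
            # str.format() instead of %-style formatting or string concatenation
            logger.debug('Found result: {0}'.format(title))
            items.append((title, download_url))
    except Exception:
        # "Add tracebacks": log the full traceback instead of a bare error message
        logger.error('Failed parsing provider. Traceback: {0}'.format(traceback.format_exc()))
    return items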

@medariox medariox added this to the 0.1.2 milestone Jun 4, 2016
@p0psicles p0psicles force-pushed the move-seeder-sorting branch 2 times, most recently from 094ec1f to f998d60 on June 4, 2016 20:14
@fernandog (Contributor) commented Jun 7, 2016

@medariox docstrings, if you are interested:

    def search(self, search_strings, age=0, ep_obj=None):  # pylint: disable=too-many-locals, too-many-branches
+        """
+        Searches the indexer using the params in search_strings, either for the latest releases or a string/id search
+
+        :param search_strings: Search to perform
+        :param age: Not used for this provider
+        :param ep_obj: Not used for this provider / episode object (kat, newznab, rarbg, etc.)
+
+        :return: A list of items found
+        """

torrentday:

-    def search(self, search_params, age=0, ep_obj=None):  # pylint: disable=too-many-locals
+    def search(self, search_strings, age=0, ep_obj=None):  # pylint: disable=too-many-locals
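Putting the two suggestions together (the search_strings rename plus the docstring), a provider's search() would end up looking roughly like this; the body is an illustrative stub, not the actual torrentday implementation:

def search(self, search_strings, age=0, ep_obj=None):  # pylint: disable=too-many-locals
    """
    Searches the indexer using the params in search_strings, either for the latest releases or a string/id search

    :param search_strings: Search to perform
    :param age: Not used for this provider
    :param ep_obj: Not used for this provider

    :return: A list of items found
    """
    results = []
    # provider-specific request/parse logic goes here
    return results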

from bs4 import BeautifulSoup


class Anizb(NZBProvider):  # pylint: disable=too-many-instance-attributes
-    """Nzb Provider using the open api of anizb.org for daily (rss) and backlog/forced searches"""
+    '''Nzb Provider using the open api of anizb.org for daily (rss) and backlog/forced searches'''
Aah, @medariox docstrings are always in double quotes.

@medariox (Contributor, Author) commented Jun 9, 2016

@fernandog @p0psicles
Thanks for the feedback. Will fix/add your suggestions in the coming commits.

* Added unicode_literals to GenericProvider

* Also adapted all providers, to make use of the future import unicode_literals
* Removed the decode()/encode()
* Cleaned up some double to single quotes
* Added proper exceptions for the provider results items
* Some logging cleanup using format()

* Now Really remove the .decodes()

* Also removed the encodes.

* Fixed after a search/replace

* Fixed docstrings
@p0psicles (Contributor) commented Jun 10, 2016

With the new exception handling, this one pops up:

2016-06-10 11:13:16 SEARCHQUEUE-DAILY-SEARCH :: [TokyoToshokan] :: [7f7ad0c] Failed parsing provider. Traceback:
Traceback (most recent call last):
  File "/home/opt/medusa/sickbeard/providers/tokyotoshokan.py", line 89, in search
    title = desc_top.get_text(strip=True)
AttributeError: 'NoneType' object has no attribute 'get_text'
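One way to guard against that is to check for None before calling get_text. A sketch of the parsing step around tokyotoshokan.py line 89; the 'desc-top' selector and helper name are assumptions, the real markup may differ:

from bs4 import BeautifulSoup


def extract_title(row_html):
    """Return the release title from a result row, or None if the row is malformed."""
    row = BeautifulSoup(row_html, 'html.parser')
    desc_top = row.find('td', class_='desc-top')
    if desc_top is None:
        # Avoids the AttributeError from the traceback above when the cell is missing
        return None
    return desc_top.get_text(strip=True)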

@medariox medariox closed this Jun 10, 2016
@medariox medariox reopened this Jun 10, 2016
@medariox medariox added the Concluded, Needs review, and Needs testing labels and removed the In progress label Jun 10, 2016
for search_string in search_strings[mode]:

    if mode != 'RSS':
-        logger.log(u"Search string: {}".format(search_string.decode("utf-8")),
+        logger.log('Search string: {0}'.format(search_string.decode('utf-8')),
                    logger.DEBUG)

    try:
        search_url = (self.urls['rss'], self.urls['search'] + search_string + '/s/d/1/?fmt=rss')[mode != 'RSS']
shouldn't we do this using urljoin by default now?

@medariox (Author) replied:
Yes, we should, but since there are so many occurrences that would need urljoin and I can't test most of them, I decided to leave urljoin for another PR (also, this PR is getting way too huge!).
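For reference, a sketch of what the urljoin version of the snippet above could look like; the urls keys and query suffix are taken from that snippet, while the helper name is hypothetical:

try:
    from urllib.parse import urljoin  # Python 3
except ImportError:
    from urlparse import urljoin  # Python 2


def build_search_url(urls, mode, search_string):
    """Return the request URL for the given mode instead of indexing a tuple with a bool."""
    if mode == 'RSS':
        return urls['rss']
    # Note: urljoin only appends if the base ends with a trailing slash;
    # otherwise it replaces the last path segment.
    return urljoin(urls['search'], '{0}/s/d/1/?fmt=rss'.format(search_string))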

medariox and others added 4 commits June 18, 2016 16:20
* Remove redundant character escapes

* Use augmented assignment

* Fix indentation

* Use six.iteritems for py3 compatibility
@p0psicles p0psicles merged commit 4994e6b into move-seeder-sorting Jun 18, 2016
@labrys labrys deleted the stand-providers branch June 19, 2016 01:44
p0psicles added a commit that referenced this pull request Jun 19, 2016
#658)

* Moved seeders sorting out of providers code and into the sickbeard/search.py searchProviders() code.

* Removed the lambda sort from all providers
* Corrected bug introduced in TVCache
* Removed bogus condition from GenericProvider. That condition can never be true.

* Standardize providers (#665)

* Standardize first 10 providers

* Small anizb update

* Small bluetigers update

* Next 10 providers

* Added unicode_literals to GenericProvider (#677)

* Added unicode_literals to GenericProvider

* Also adapted all providers, to make use of the future import unicode_literals
* Removed the decode()/encode()
* Cleaned up some double to single quotes
* Added proper exceptions for the provider results items
* Some logging cleanup using format()

* Now Really remove the .decodes()

* Also removed the encodes.

* Fixed after a search/replace

* Fixed docstrings

* Next 11 providers

* Next 11 providers, removed sceneelite

* Last 9 providers

* Remove sceneelite from init

* Renamed all search_params to search_strings

* Fix for GFTracker

* Fix TNTVillage

* Fix HDTorrents

* Fix Extratorrent

* Fix HDSpace

* Use string in SQL with unicode_literals in GenericProvider

* Fix BITHDTV

* Fix TVChaosUK

* Added flag to newznab, for torznab providers. If it's torznab then results are sorted by seeders in search.py.

* Improve BitSnoop

* Improve Anizb

* Improve Bluetigers

* Cleanup BTdigg

* Improve Hounddawgs

* Improve FreshOn

* More improvements and cleanups

* Fix ThePirateBay

* Fix for omgwtfnzb, needed a default value, because getattr doesn't provide one by default.

* Add size to freshon, cleanup, fix for tvchaosuk

* Improve size parsing code Freshon

* Fixes for ExtraTorrent and HDTorrents

* Fixed bithdtv

* For when it's not getting back the 750px tables.

* Fix tokyotoshokan provider errors

* Fixed properSearch.

* listPropers does an SQL query, but accessing the row requires it to use b''

* Added newznab search by search_query fallback, when search by tvdbid does not give back results.

* Fix HDTorrents, use urljoin, partial rewrite

* Fix rare Zooqle error

* Improve HDTorrents, bring back ugly hack

* Improve TNTVillage, fix daily search, much more

* Fix BIT-HDTV

* More standardization

* More standardization

* Bring back eng releases only option

* small fixup

* Small tnt change

* Update daily search url

* Remove freeleech option for MTV

* Remove TypeError from connection time out

* Fix repeated keyword in dict

* More standardization

* Standardize method names and order

* Fix missed URL join

* Standardize string formatting

* Last small changes

* Change TPB url, update cache to 20 min

* More providers (#698)

* Remove redundant character escapes

* Use augmented assignment

* Fix indentation

* Use six.iteritems for py3 compatibility

* Store hash for torrentproject
@labrys labrys restored the stand-providers branch June 19, 2016 14:44
@fernandog fernandog removed the Needs testing Requires testing to make sure it's working as intended label Feb 9, 2017