From c68798b3d5af417a1a4c30317280674c5eecc167 Mon Sep 17 00:00:00 2001
From: Edwin O Marshall
Date: Fri, 7 Mar 2014 10:38:18 -0500
Subject: [PATCH 1/7] - converted sep 18

---
 sep/sep-018.rst  | 662 +++++++++++++++++++++++++++++++++++++++++++++++
 sep/sep-018.trac | 551 ---------------------------------------
 2 files changed, 662 insertions(+), 551 deletions(-)
 create mode 100644 sep/sep-018.rst
 delete mode 100644 sep/sep-018.trac

diff --git a/sep/sep-018.rst b/sep/sep-018.rst
new file mode 100644
index 00000000000..e3082191770
--- /dev/null
+++ b/sep/sep-018.rst
@@ -0,0 +1,662 @@
+======= ====================
+SEP     18
+Title   Spider Middleware v2
+Author  Insophia Team
+Created 2010-06-20
+Status  Draft (in progress)
+======= ====================
+
+=============================
+SEP-018: Spider middleware v2
+=============================
+
+This SEP introduces a new architecture for spider middlewares which provides a
+greater degree of modularity to combine functionality which can be plugged in
+from different (reusable) middlewares.
+
+The purpose of !SpiderMiddleware-v2 is to define an architecture that
+encourages more re-usability for building spiders based on smaller well-tested
+components. Those components can be global (similar to current spider
+middlewares) or per-spider, and can be combined to achieve the desired
+functionality. These reusable components will benefit all Scrapy users by
+building a repository of well-tested components that can be shared among
+different spiders and projects. Some of them will come bundled with Scrapy.
+
+Unless explicitly stated, in this document "spider middleware" refers to the
+**new** spider middleware v2, not the old one.
+
+This document is a work in progress, see `Pending Issues`_ below.
+
+New spider middleware API
+=========================
+
+A spider middleware can implement any of the following methods:
+
+- ``process_response(response, request, spider)``
+  - Process a (downloaded) response
+  - Receives: The ``response`` to process, the ``request`` used to download
+    the response (not necessarily the request sent from the spider), and the
+    ``spider`` that requested it.
+  - Returns: A list containing requests and/or items
+- ``process_error(error, request, spider)``
+  - Process an error raised while trying to download a request, such as DNS
+    errors, timeout errors, etc.
+  - Receives: The ``error`` raised, the ``request`` that caused it (not
+    necessarily the request sent from the spider), and the ``spider`` that
+    requested it.
+  - Returns: A list containing requests and/or items
+- ``process_request(request, response, spider)``
+  - Process a request after it has been extracted from the spider or previous
+    middleware ``process_response()`` methods.
+  - Receives: The ``request`` to process, the ``response`` where the request
+    was extracted from, and the ``spider`` that extracted it.
+  - Note: ``response`` is ``None`` for start requests, or requests injected
+    directly (through ``manager.scraper.process_request()``) without
+    specifying a response (see below).
+  - Returns: A ``Request`` object (not necessarily the same received), or
+    ``None`` in which case the request is dropped.
+- ``process_item(item, response, spider)``
+  - Process an item after it has been extracted from the spider or previous
+    middleware ``process_response()`` methods.
+  - Receives: The ``item`` to process, the ``response`` where the item was
+    extracted from, and the ``spider`` that extracted it.
+  - Returns: An ``Item`` object (not necessarily the same received), or
+    ``None`` in which case the item is dropped.
+- ``next_request(spider)``
+  - Returns the next request to crawl with this spider. This method is
+    called when the spider is opened, and when it gets idle.
+  - Receives: The ``spider`` to return the next request for.
+  - Returns: A ``Request`` object.
+- ``open_spider(spider)``
+  - This can be used to allocate resources when a spider is opened.
+  - Receives: The ``spider`` that has been opened.
+  - Returns: nothing
+- ``close_spider(spider)``
+  - This can be used to free resources when a spider is closed.
+  - Receives: The ``spider`` that has been closed.
+  - Returns: nothing
+
+Changes to core API
+===================
+
+Injecting requests to crawl
+---------------------------
+
+To inject start requests (or new requests without a response) to crawl, you
+previously used:
+
+- ``manager.engine.crawl(request, spider)``
+
+Now you'll use:
+
+- ``manager.scraper.process_request(request, spider, response=None)``
+
+Which, unlike the old ``engine.crawl()``, will make the requests pass through
+the spider middleware ``process_request()`` method.
+
+Scheduler middleware to be removed
+----------------------------------
+
+We're going to remove the Scheduler Middleware, and move the duplicates filter
+to a new spider middleware.
+
+Scraper high-level API
+======================
+
+There is a simpler high-level API - the Scraper API - which is the API used by
+the engine and other core components. This is also the API implemented by this
+new middleware, with its own internal architecture and hooks. Here is the
+Scraper API:
+
+- ``process_response(response, request, spider)``
+  - returns an iterable of items and requests
+- ``process_error(error, request, spider)``
+  - returns an iterable of items and requests
+- ``process_request(request, spider, response=None)``
+  - injects a request to crawl for the given spider
+- ``process_item(item, spider, response)``
+  - injects an item to process with the item processor (typically the item
+    pipeline)
+- ``next_request(spider)``
+  - returns the next request to process for the given spider
+- ``open_spider(spider)``
+  - opens a spider
+- ``close_spider(spider)``
+  - closes a spider
+
+How it works
+============
+
+The spider middlewares are defined in a certain order, with the top-most being
+the one closest to the engine, and the bottom-most being the one closest to the
+spider.
+
+Example:
+
+- Engine
+- Global spider Middleware 3
+- Global spider Middleware 2
+- Global spider Middleware 1
+- Spider-specific middlewares (defined in ``Spider.middlewares``)
+  - Spider-specific middleware 3
+  - Spider-specific middleware 2
+  - Spider-specific middleware 1
+- Spider
+
+The data flow with Spider Middleware v2 is as follows:
+
+1. When a response arrives from the engine, it is passed through all the spider
+   middlewares (in descending order). The result of each middleware
+   ``process_response`` is kept and then returned along with the spider
+   callback result.
+2. Each item of the aggregated result from the previous point is passed through
+   all middlewares (in ascending order), calling the ``process_request`` or
+   ``process_item`` method accordingly, and their results are kept for passing
+   to the following middlewares.
+
+One of the spider middlewares (typically, but not necessarily, the last spider
+middleware, closest to the spider, as shown in the example) will be a
+"spider-specific spider middleware" which would take care of calling the
+additional spider middlewares defined in the ``Spider.middlewares`` attribute,
+hence providing support for per-spider middlewares. If the middleware is well
+written, it should work both globally and per-spider.
+
+Spider-specific middlewares
+===========================
+
+You can define in the spider itself a list of additional middlewares that will
+be used for this spider, and only this spider. If the middleware is well
+written, it should work both globally and per spider.
+
+Here's an example that combines functionality from multiple middlewares into
+the same spider:
+
+::
+
+    #!python
+    class MySpider(BaseSpider):
+
+        middlewares = [RegexLinkExtractor(), CallbackRules(), CanonicalizeUrl(),
+                       ItemIdSetter(), OffsiteMiddleware()]
+
+        allowed_domains = ['example.com', 'sub.example.com']
+
+        url_regexes_to_follow = ['/product.php?.*']
+
+        callback_rules = {
+            '/product.php.*': 'parse_product',
+            '/category.php.*': 'parse_category',
+        }
+
+        canonicalization_rules = ['sort-query-args', 'normalize-percent-encoding', ...]
+
+        id_field = 'guid'
+        id_fields_to_hash = ['supplier_name', 'supplier_id']
+
+        def parse_product(self, response):
+            # extract item from response
+            return item
+
+        def parse_category(self, response):
+            # extract item from response
+            return item
+
+The Spider Middleware that implements spider code
+=================================================
+
+There's going to be one middleware that will take care of calling the proper
+spider methods on each event, such as:
+
+- call ``Request.callback`` (for 200 responses) or ``Request.errback`` for
+  non-200 responses and other errors. This behaviour can be changed through the
+  ``handle_httpstatus_list`` spider attribute.
+
+  - if ``Request.callback`` is not set it will use ``Spider.parse``
+  - if ``Request.errback`` is not set it will use ``Spider.errback``
+- call additional spider middlewares defined in the ``Spider.middlewares``
+  attribute
+- call ``Spider.next_request()`` and ``Spider.start_requests()`` on the
+  ``next_request()`` middleware method (this would implicitly support backwards
+  compatibility)
+
+Differences with Spider middleware v1
+=====================================
+
+- adds support for per-spider middlewares through the ``Spider.middlewares``
+  attribute
+- allows processing initial requests (those returned from
+  ``Spider.start_requests()``)
+
+Use cases and examples
+======================
+
+This section contains several examples and use cases for Spider Middlewares.
+Imports are intentionally removed for conciseness and clarity.
+
+Regex (HTML) Link Extractor
+---------------------------
+
+A typical application of spider middlewares could be to build Link Extractors.
+For example:
+
+::
+
+    #!python
+    class RegexHtmlLinkExtractor(object):
+
+        def process_response(self, response, request, spider):
+            if isinstance(response, HtmlResponse):
+                allowed_regexes = spider.url_regexes_to_follow
+                # extract urls to follow using allowed_regexes
+                return [Request(x) for x in urls_to_follow]
+
+    # Example spider using this middleware
+    class MySpider(BaseSpider):
+
+        middlewares = [RegexHtmlLinkExtractor()]
+        url_regexes_to_follow = ['/product.php?.*']
+
+        # parsing callbacks below
+
+RSS2 link extractor
+-------------------
+
+::
+
+    #!python
+    class Rss2LinkExtractor(object):
+
+        def process_response(self, response, request, spider):
+            if response.headers.get('Content-type') == 'application/rss+xml':
+                xs = XmlXPathSelector(response)
+                urls = xs.select("//item/link/text()").extract()
+                return [Request(x) for x in urls]
+
+Callback dispatcher based on rules
+----------------------------------
+
+Another example could be to build a callback dispatcher based on rules:
+
+::
+
+    #!python
+    class CallbackRules(object):
+
+        def __init__(self):
+            self.rules = {}
+            dispatcher.connect(self.spider_opened, signal=signals.spider_opened)
+            dispatcher.connect(self.spider_closed, signal=signals.spider_closed)
+
+        def spider_opened(self, spider):
+            self.rules[spider] = {}
+            for regex, method_name in spider.callback_rules.items():
+                r = re.compile(regex)
+                m = getattr(spider, method_name, None)
+                if m:
+                    self.rules[spider][r] = m
+
+        def spider_closed(self, spider):
+            del self.rules[spider]
+
+        def process_response(self, response, request, spider):
+            for regex, method in self.rules[spider].items():
+                m = regex.search(response.url)
+                if m:
+                    return method(response)
+            return []
+
+    # Example spider using this middleware
+    class MySpider(BaseSpider):
+
+        middlewares = [CallbackRules()]
+        callback_rules = {
+            '/product.php.*': 'parse_product',
+            '/category.php.*': 'parse_category',
+        }
+
+        def parse_product(self, response):
+            # parse response and populate item
+            return item
+
+URL Canonicalizers
+------------------
+
+Another example could be for building URL canonicalizers:
+
+::
+
+    #!python
+    class CanonicalizeUrl(object):
+
+        def process_request(self, request, response, spider):
+            curl = canonicalize_url(request.url,
+                                    rules=spider.canonicalization_rules)
+            return request.replace(url=curl)
+
+    # Example spider using this middleware
+    class MySpider(BaseSpider):
+
+        middlewares = [CanonicalizeUrl()]
+        canonicalization_rules = ['sort-query-args',
+                                  'normalize-percent-encoding', ...]
+
+        # ...
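+
+For illustration, here is a minimal sketch of what the middleware above would
+do to an outgoing request. The URL is made up, and the ``rules`` argument is
+part of this SEP's proposal rather than the existing ``canonicalize_url``
+utility's signature:
+
+::
+
+    #!python
+    request = Request('http://example.com/product.php?b=2&a=1')
+    # with the 'sort-query-args' rule, query arguments get sorted:
+    curl = canonicalize_url(request.url, rules=['sort-query-args'])
+    # curl == 'http://example.com/product.php?a=1&b=2'
+    request = request.replace(url=curl)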
+
+Setting item identifier
+-----------------------
+
+Another example could be for setting a unique identifier to items, based on
+certain fields:
+
+::
+
+    #!python
+    class ItemIdSetter(object):
+
+        def process_item(self, item, response, spider):
+            id_field = spider.id_field
+            id_fields_to_hash = spider.id_fields_to_hash
+            item[id_field] = make_hash_based_on_fields(item, id_fields_to_hash)
+            return item
+
+    # Example spider using this middleware
+    class MySpider(BaseSpider):
+
+        middlewares = [ItemIdSetter()]
+        id_field = 'guid'
+        id_fields_to_hash = ['supplier_name', 'supplier_id']
+
+        def parse(self, response):
+            # extract item from response
+            return item
+
+robots.txt exclusion
+--------------------
+
+A spider middleware to avoid visiting pages forbidden by robots.txt:
+
+::
+
+    #!python
+    class SpiderInfo(object):
+
+        def __init__(self, useragent):
+            self.useragent = useragent
+            self.parsers = {}
+            self.pending = defaultdict(list)
+
+
+    class AllowAllParser(object):
+
+        def can_fetch(self, useragent, url):
+            return True
+
+
+    class RobotsTxtMiddleware(object):
+
+        REQUEST_PRIORITY = 1000
+
+        def __init__(self):
+            self.spiders = {}
+            dispatcher.connect(self.spider_opened, signal=signals.spider_opened)
+            dispatcher.connect(self.spider_closed, signal=signals.spider_closed)
+
+        def process_request(self, request, response, spider):
+            return self.process_start_request(request, spider)
+
+        def process_start_request(self, request, spider):
+            info = self.spiders[spider]
+            url = urlparse_cached(request)
+            netloc = url.netloc
+            if netloc in info.parsers:
+                rp = info.parsers[netloc]
+                if rp.can_fetch(info.useragent, request.url):
+                    res = request
+                else:
+                    spider.log("Forbidden by robots.txt: %s" % request)
+                    res = None
+            else:
+                if netloc in info.pending:
+                    res = None
+                else:
+                    robotsurl = "%s://%s/robots.txt" % (url.scheme, netloc)
+                    meta = {'spider': spider,
+                            'handle_httpstatus_list': [403, 404, 500]}
+                    res = Request(robotsurl, callback=self.parse_robots,
+                                  meta=meta, priority=self.REQUEST_PRIORITY)
+                info.pending[netloc].append(request)
+            return res
+
+        def parse_robots(self, response):
+            spider = response.request.meta['spider']
+            netloc = urlparse_cached(response).netloc
+            info = self.spiders[spider]
+            if response.status == 200:
+                rp = robotparser.RobotFileParser(response.url)
+                rp.parse(response.body.splitlines())
+                info.parsers[netloc] = rp
+            else:
+                info.parsers[netloc] = AllowAllParser()
+            return info.pending[netloc]
+
+        def spider_opened(self, spider):
+            ua = getattr(spider, 'user_agent', None) or settings['USER_AGENT']
+            self.spiders[spider] = SpiderInfo(ua)
+
+        def spider_closed(self, spider):
+            del self.spiders[spider]
+
+Offsite middleware
+------------------
+
+This is a port of the Offsite middleware to the new spider middleware API:
+
+::
+
+    #!python
+    class SpiderInfo(object):
+
+        def __init__(self, host_regex):
+            self.host_regex = host_regex
+            self.hosts_seen = set()
+
+
+    class OffsiteMiddleware(object):
+
+        def __init__(self):
+            self.spiders = {}
+            dispatcher.connect(self.spider_opened, signal=signals.spider_opened)
+            dispatcher.connect(self.spider_closed, signal=signals.spider_closed)
+
+        def process_request(self, request, response, spider):
+            return self.process_start_request(request, spider)
+
+        def process_start_request(self, request, spider):
+            if self.should_follow(request, spider):
+                return request
+            else:
+                info = self.spiders[spider]
+                host = urlparse_cached(request).hostname
+                if host and host not in info.hosts_seen:
+                    spider.log("Filtered offsite request to %r: %s" % (host, request))
+                    info.hosts_seen.add(host)
+
+        def should_follow(self, request, spider):
+            info = self.spiders[spider]
+            # hostname can be None for wrong urls (like javascript links)
+            host = urlparse_cached(request).hostname or ''
+            return bool(info.host_regex.search(host))
+
+        def get_host_regex(self, spider):
+            """Override this method to implement a different offsite policy"""
+            domains = [d.replace('.', r'\.') for d in spider.allowed_domains]
+            regex = r'^(.*\.)?(%s)$' % '|'.join(domains)
+            return re.compile(regex)
+
+        def spider_opened(self, spider):
+            info = SpiderInfo(self.get_host_regex(spider))
+            self.spiders[spider] = info
+
+        def spider_closed(self, spider):
+            del self.spiders[spider]
+
+Limit URL length
+----------------
+
+A middleware to filter out requests with long urls:
+
+::
+
+    #!python
+    class LimitUrlLength(object):
+
+        def __init__(self):
+            self.maxlength = settings.getint('URLLENGTH_LIMIT')
+
+        def process_request(self, request, response, spider):
+            return self.process_start_request(request, spider)
+
+        def process_start_request(self, request, spider):
+            if len(request.url) <= self.maxlength:
+                return request
+            spider.log("Ignoring request (url length > %d): %s" % (self.maxlength, request.url))
+
+Set Referer
+-----------
+
+A middleware to set the Referer:
+
+::
+
+    #!python
+    class SetReferer(object):
+
+        def process_request(self, request, response, spider):
+            request.headers.setdefault('Referer', response.url)
+            return request
+
+Set and limit crawling depth
+----------------------------
+
+A middleware to set (and limit) the request/response depth, taken from the
+start requests:
+
+::
+
+    #!python
+    class SetLimitDepth(object):
+
+        def __init__(self, maxdepth=0):
+            self.maxdepth = maxdepth or settings.getint('DEPTH_LIMIT')
+
+        def process_request(self, request, response, spider):
+            depth = response.request.meta['depth'] + 1
+            request.meta['depth'] = depth
+            if not self.maxdepth or depth <= self.maxdepth:
+                return request
+            spider.log("Ignoring link (depth > %d): %s" % (self.maxdepth, request))
+
+        def process_start_request(self, request, spider):
+            request.meta['depth'] = 0
+            return request
+
+Filter duplicate requests
+-------------------------
+
+A middleware to filter out requests already seen:
+
+::
+
+    #!python
+    class FilterDuplicates(object):
+
+        def __init__(self):
+            clspath = settings.get('DUPEFILTER_CLASS')
+            self.dupefilter = load_object(clspath)()
+            dispatcher.connect(self.spider_opened, signal=signals.spider_opened)
+            dispatcher.connect(self.spider_closed, signal=signals.spider_closed)
+
+        def enqueue_request(self, spider, request):
+            seen = self.dupefilter.request_seen(spider, request)
+            if not seen or request.dont_filter:
+                return request
+
+        def spider_opened(self, spider):
+            self.dupefilter.open_spider(spider)
+
+        def spider_closed(self, spider):
+            self.dupefilter.close_spider(spider)
+
+Scrape data using Parsley
+-------------------------
+
+A middleware to scrape data using Parsley, as described in UsingParsley:
+
+::
+
+    #!python
+    from pyparsley import PyParsley
+
+    class ParsleyExtractor(object):
+
+        def __init__(self, parslet_json_code):
+            parslet = json.loads(parslet_json_code)
+
+            class ParsleyItem(Item):
+                def __init__(self, *a, **kw):
+                    for name in parslet.keys():
+                        self.fields[name] = Field()
+                    super(ParsleyItem, self).__init__(*a, **kw)
+
+            self.item_class = ParsleyItem
+            self.parsley = PyParsley(parslet, output='python')
+
+        def process_response(self, response, request, spider):
+            return self.item_class(self.parsley.parse(string=response.body))
+
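+A hypothetical usage sketch follows; the parslet and spider below are made up
+for illustration:
+
+::
+
+    #!python
+    # a parslet that extracts every <h1> text into a "title" field
+    parslet_json = '{"title": "h1"}'
+
+    class MySpider(BaseSpider):
+
+        middlewares = [ParsleyExtractor(parslet_json)]
+        # responses are turned into ParsleyItem instances by the
+        # middleware's process_response()
+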
+Pending issues +============== + +Resolved: + +- how to make ``start_requests()`` output pass through spider middleware + ``process_request()``? + + - Start requests will be injected through + ``manager.scraper.process_request()`` instead of + ``manager.engine.crawl()`` +- should we support adding additional start requests from a spider middleware? + - Yes - there is a spider middleware method (``start_requests``) for that +- should ``process_response()`` receive a ``request`` argument with the + ``request`` that originated it?. ``response.request`` is the latest request, + not the original one (think of redirections), but it does carry the ``meta`` + of the original one. The original one may not be available anymore (in + memory) if we're using a persistent scheduler., but in that case it would be + the deserialized request from the persistent scheduler queue. + + - No - this would make implementation more complex and we're not sure it's + really needed +- how to make sure ``Request.errback`` is always called if there is a problem + with the request?. Do we need to ensure that?. Requests filtered out (by + returning ``None``) in the ``process_request()`` method will never be + callback-ed or even errback-ed. this could be a problem for spiders that want + to be notified if their requests are dropped. should we support this + notification somehow or document (the lack of) it properly? + + - We won't support notifications of dropped requests, because: 1. it's hard + to implement and unreliable, 2. it's against not friendly with request + persistence, 3. we can't come up with a good api. +- should we make the list of default spider middlewares empty? (or the + "per-spider" spider middleware alone) + + - No - there are some useful spider middlewares that it's worth enabling by + default like referer, duplicates, robots2 +- should we allow returning deferreds in spider middleware methods? + - Yes - we should build a Deferred with the spider middleware methods as + callbacks and that would implicitly support returning Deferreds +- should we support processing responses before they're processed by the + spider, because ``process_response`` runs "in parallel" to the spider + callback, and can't stop from running it. + + - No - we haven't seen a practical use case for this, so we won't add an + additional hook. It should be trivial to add it later, if needed. +- should we make a spider middleware to handle calling the request and spider + callback, instead of letting the Scraper component do it? + + - Yes - there's gonna a spider middleware for execution spider-specific code + such as callbacks and also custom middlewares diff --git a/sep/sep-018.trac b/sep/sep-018.trac deleted file mode 100644 index e09356d4984..00000000000 --- a/sep/sep-018.trac +++ /dev/null @@ -1,551 +0,0 @@ -= SEP-018: Spider middleware v2 = - -[[PageOutline(2-5,Contents)]] - -||'''SEP:'''||18|| -||'''Title:'''||Spider Middleware v2|| -||'''Author:'''||Insophia Team|| -||'''Created:'''||2010-06-20|| -||'''Status'''||Draft (in progress)|| - -This SEP introduces a new architecture for spider middlewares which provides a greater degree of modularity to combine functionality which can be plugged in from different (reusable) middlewares. - -The purpose of !SpiderMiddleware-v2 is to define an architecture that encourages more re-usability for building spiders based on smaller well-tested components. 
Those components can be global (similar to current spider middlewares) or per-spider that can be combined to achieve the desired functionality. These reusable components will benefit all Scrapy users by building a repository of well-tested components that can be shared among different spiders and projects. Some of them will come bundled with Scrapy. - -Unless explicitly stated, in this document "spider middleware" refers to the '''new''' spider middleware v2, not the old one. - -This document is a work in progress, see [#Pendingissues Pending issues] below. - -== New spider middleware API == - -A spider middleware can implement any of the following methods: - - * `process_response(response, request, spider)` - * Process a (downloaded) response - * Receives: The `response` to process, the `request` used to download the response (not necessarily the request sent from the spider), and the `spider` that requested it. - * Returns: A list containing requests and/or items - * `process_error(error, request, spider)`: - * Process a error when trying to download a request, such as DNS errors, timeout errors, etc. - * Receives: The `error` caused, the `request` that caused it (not necessarily the request sent from the spider), and then `spider` that requested it. - * Returns: A list containing request and/or items - * `process_request(request, response, spider)` - * Process a request after it has been extracted from the spider or previous middleware `process_response()` methods. - * Receives: The `request` to process, the `response` where the request was extracted from, and the `spider` that extracted it. - * Note: `response` is `None` for start requests, or requests injected directly (through `manager.scraper.process_request()` without specifying a response (see below) - * Returns: A `Request` object (not necessarily the same received), or `None` in which case the request is dropped. - * `process_item(item, response, spider)` - * Process an item after it has been extracted from the spider or previous middleware `process_response()` methods. - * Receives: The `item` to process, the `response` where the item was extracted from, and the `spider` that extracted it. - * Returns: An `Item` object (not necessarily the same received), or `None` in which case the item is dropped. - * `next_request(spider)` - * Returns a the next request to crawl with this spider. This method is called when the spider is opened, and when it gets idle. - * Receives: The `spider` to return the next request for. - * Returns: A `Request` object. - * `open_spider(spider)` - * This can be used to allocate resources when a spider is opened. - * Receives: The `spider` that has been opened. - * Returns: nothing - * `close_spider(spider)` - * This can be used to free resources when a spider is closed. - * Receives: The `spider` that has been closed. - * Returns: nothing - -== Changes to core API == - -=== Injecting requests to crawl === - -To inject start requests (or new requests without a response) to crawl, you used before: - - * `manager.engine.crawl(request, spider)` - -Now you'll use: - - * `manager.scraper.process_request(request, spider, response=None)` - -Which (unlike the old `engine.crawl` will make the requests pass through the spider middleware `process_request()` method). - -=== Scheduler middleware to be removed === - -We're gonna remove the Scheduler Middleware, and move the duplicates filter to a new spider middleware. 
- -== Scraper high-level API == - -There is a simpler high-level API - the Scraper API - which is the API used by the engine and other core components. This is also the API implemented by this new middleware, with its own internal architecture and hooks. Here is the Scraper API: - - * `process_response(response, request, spider)` - * returns iterable of items and requests - * `process_error(error, request, spider)` - * returns iterable of items and requests - * `process_request(request, spider, response=None)` - * injects a request to crawl for the given spider - * `process_item(item, spider, response) - * injects a item to process with the item processor (typically the item pipeline) - * `next_request(spider)` - * returns the next request to process for the given spider - * `open_spider(spider)` - * opens a spider - * `close_spider(spider)` - * closes a spider - -== How it works == - -The spider middlewares are defined in certain order with the top-most being the one closer to the engine, and the bottom-most being the one closed to the spider. - -Example: - - * Engine - * Global spider Middleware 3 - * Global spider Middleware 2 - * Global spider Middleware 1 - * Spider-specific middlewares (defined in `Spider.middlewares`) - * Spider-specific middleware 3 - * Spider-specific middleware 2 - * Spider-specific middleware 1 - * Spider - -The data flow with Spider Middleware v2 is as follows: - - 1. When a response arrives from the engine, it it passed through all the spider middlewares (in descending order). The result of each middleware `process_response` is kept and then returned along with the spider callback result - 2. Each item of the aggregated result from previous point is passed through all middlewares (in ascending order) calling the `process_request` or `process_item` method accordingly, and their results are kept for passing to the following middlewares - -One of the spider middlewares (typically - but not necessarily - the last spider middleware closer to the spider, as shown in the example) will be a "spider-specific spider middleware" which would take care of calling the additional spider middlewares defined in the `Spider.middlewares` attribute, hence providing support for per-spider middlewares. If the middleware is well written, it should work both globally and per-spider. - -== Spider-specific middlewares == - -You can define in the spider itself a list of additional middlewares that will be used for this spider, and only this spider. If the middleware is well written, it should work both globally and per spider. - -Here's an example that combines functionality from multiple middlewares into the same spider: - -{{{ -#!python -class MySpider(BaseSpider): - - middlewares = [RegexLinkExtractor(), CallbackRules(), CanonicalizeUrl(), ItemIdSetter(), OffsiteMiddleware()] - - allowed_domains = ['example.com', 'sub.example.com'] - - url_regexes_to_follow = ['/product.php?.*'] - - callback_rules = { - '/product.php.*': 'parse_product', - '/category.php.*': 'parse_category', - } - - canonicalization_rules = ['sort-query-args', 'normalize-percent-encoding', ...] 
- - id_field = 'guid' - id_fields_to_hash = ['supplier_name', 'supplier_id'] - - def parse_product(self, item): - # extract item from response - return item - - def parse_category(self, item): - # extract item from response - return item -}}} - -== The Spider Middleware that implements spider code == - -There's gonna be one middleware that will take care of calling the proper spider methods on each event such as: - - * call `Request.callback` (for 200 responses) or `Request.errback` for non-200 responses and other errors. this behaviour can be changed through the `handle_httpstatus_list` spider attribute. - * if `Request.callback` is not set it will use `Spider.parse` - * if `Request.errback` is not set it will use `Spider.errback` - * call additional spider middlewares defined in the `Spider.middlewares` attribute - * call `Spider.next_request()` and `Spider.start_requests()` on `next_request()` middleware method (this would implicitly support backwards compatibility) - -== Differences with Spider middleware v1 == - - * adds support for per-spider middlewares through the `Spider.middlewares` attribute - * allows processing initial requests (those returned from `Spider.start_requests()`) - -== Use cases and examples == - -This section contains several examples and use cases for Spider Middlewares. Imports are intentionally removed for conciseness and clarity. - -=== Regex (HTML) Link Extractor === - -A typical application of spider middlewares could be to build Link Extractors. For example: - -{{{ -#!python -class RegexHtmlLinkExtractor(object): - - def process_response(self, response, request, spider): - if isinstance(response, HtmlResponse): - allowed_regexes = spider.url_regexes_to_follow - # extract urls to follow using allowed_regexes - return [Request(x) for x in urls_to_follow] - -# Example spider using this middleware -class MySpider(BaseSpider): - - middlewares = [RegexHtmlLinkExtractor()] - url_regexes_to_follow = ['/product.php?.*'] - - # parsing callbacks below -}}} - -=== RSS2 link extractor === - -{{{ -#!python -class Rss2LinkExtractor(object): - - def process_response(self, response, request, spider): - if response.headers.get('Content-type') == 'application/rss+xml': - xs = XmlXPathSelector(response) - urls = xs.select("//item/link/text()").extract() - return [Request(x) for x in urls] -}}} - -=== Callback dispatcher based on rules === - -Another example could be to build a callback dispatcher based on rules: - -{{{ -#!python -class CallbackRules(object): - - def __init__(self): - self.rules = {} - dispatcher.connect(signals.spider_opened, self.spider_opened) - dispatcher.connect(signals.spider_closed, self.spider_closed) - - def spider_opened(self, spider): - self.rules[spider] = {} - for regex, method_name in spider.callback_rules.items(): - r = re.compile(regex) - m = getattr(self.spider, method_name, None) - if m: - self.rules[spider][r] = m - - def spider_closed(self, spider): - del self.rules[spider] - - def process_response(self, response, request, spider): - for regex, method in self.rules[spider].items(): - m = regex.search(response.url) - if m: - return method(response) - return [] - -# Example spider using this middleware -class MySpider(BaseSpider): - - middlewares = [CallbackRules()] - callback_rules = { - '/product.php.*': 'parse_product', - '/category.php.*': 'parse_category', - } - - def parse_product(self, response): - # parse response and populate item - return item -}}} - -=== URL Canonicalizers === - -Another example could be for building URL 
canonicalizers: - -{{{ -#!python -class CanonializeUrl(object): - - def process_request(self, request, response, spider): - curl = canonicalize_url(request.url, rules=spider.canonicalization_rules) - return request.replace(url=curl) - -# Example spider using this middleware -class MySpider(BaseSpider): - - middlewares = [CanonicalizeUrl()] - canonicalization_rules = ['sort-query-args', 'normalize-percent-encoding', ...] - - # ... -}}} - -=== Setting item identifier === - -Another example could be for setting a unique identifier to items, based on certain fields: - -{{{ -#!python -class ItemIdSetter(object): - - def process_item(self, item, response, spider): - id_field = spider.id_field - id_fields_to_hash = spider.id_fields_to_hash - item[id_field] = make_hash_based_on_fields(item, id_fields_to_hash) - return item - -# Example spider using this middleware -class MySpider(BaseSpider): - - middlewares = [ItemIdSetter()] - id_field = 'guid' - id_fields_to_hash = ['supplier_name', 'supplier_id'] - - def parse(self, response): - # extract item from response - return item -}}} - -=== robots.txt exclusion === - -A spider middleware to avoid visiting pages forbidden by robots.txt: - -{{{ -#!python -class SpiderInfo(object): - - def __init__(self, useragent): - self.useragent = useragent - self.parsers = {} - self.pending = defaultdict(list) - - -class AllowAllParser(object): - - def can_fetch(useragent, url): - return True - - -class RobotsTxtMiddleware(object): - - REQUEST_PRIORITY = 1000 - - def __init__(self): - self.spiders = {} - dispatcher.connect(self.spider_opened, signal=signals.spider_opened) - dispatcher.connect(self.spider_closed, signal=signals.spider_closed) - - def process_request(self, request, response, spider): - return self.process_start_request(self, request) - - def process_start_request(self, request, spider): - info = self.spiders[spider] - url = urlparse_cached(request) - netloc = url.netloc - if netloc in info.parsers: - rp = info.parsers[netloc] - if rp.can_fetch(info.useragent, request.url): - res = request - else: - spider.log("Forbidden by robots.txt: %s" % request) - res = None - else: - if netloc in info.pending: - res = None - else: - robotsurl = "%s://%s/robots.txt" % (url.scheme, netloc) - meta = {'spider': spider, {'handle_httpstatus_list': [403, 404, 500]} - res = Request(robotsurl, callback=self.parse_robots, - meta=meta, priority=self.REQUEST_PRIORITY) - info.pending[netloc].append(request) - return res - - def parse_robots(self, response): - spider = response.request.meta['spider'] - netloc urlparse_cached(response).netloc - info = self.spiders[spider] - if response.status == 200; - rp = robotparser.RobotFileParser(response.url) - rp.parse(response.body.splitlines()) - info.parsers[netloc] = rp - else: - info.parsers[netloc] = AllowAllParser() - return info.pending[netloc] - - def spider_opened(self, spider): - ua = getattr(spider, 'user_agent', None) or settings['USER_AGENT'] - self.spiders[spider] = SpiderInfo(ua) - - def spider_closed(self, spider): - del self.spiders[spider] -}}} - -=== Offsite middleware === - -This is a port of the Offsite middleware to the new spider middleware API: - -{{{ -#!python -class SpiderInfo(object): - - def __init__(self, host_regex): - self.host_regex = host_regex - self.hosts_seen = set() - - -class OffsiteMiddleware(object): - - def __init__(self): - self.spiders = {} - dispatcher.connect(self.spider_opened, signal=signals.spider_opened) - dispatcher.connect(self.spider_closed, signal=signals.spider_closed) - - def 
process_request(self, request, response, spider): - return self.process_start_request(self, request) - - def process_start_request(self, request, spider): - if self.should_follow(request, spider): - return request - else: - info = self.spiders[spider] - host = urlparse_cached(x).hostname - if host and host not in info.hosts_seen: - spider.log("Filtered offsite request to %r: %s" % (host, request)) - info.hosts_seen.add(host) - - def should_follow(self, request, spider): - info = self.spiders[spider] - # hostanme can be None for wrong urls (like javascript links) - host = urlparse_cached(request).hostname or '' - return bool(info.regex.search(host)) - - def get_host_regex(self, spider): - """Override this method to implement a different offsite policy""" - domains = [d.replace('.', r'\.') for d in spider.allowed_domains] - regex = r'^(.*\.)?(%s)$' % '|'.join(domains) - return re.compile(regex) - - def spider_opened(self, spider): - info = SpiderInfo(self.get_host_regex(spider)) - self.spiders[spider] = info - - def spider_closed(self, spider): - del self.spiders[spider] - -}}} - -=== Limit URL length === - -A middleware to filter out requests with long urls: - -{{{ -#!python - -class LimitUrlLength(object): - - def __init__(self): - self.maxlength = settings.getint('URLLENGTH_LIMIT') - - def process_request(self, request, response, spider): - return self.process_start_request(self, request) - - def process_start_request(self, request, spider): - if len(request.url) <= self.maxlength: - return request - spider.log("Ignoring request (url length > %d): %s " % (self.maxlength, request.url)) -}}} - -=== Set Referer === - -A middleware to set the Referer: - -{{{ -#!python -class SetReferer(object): - - def process_request(self, request, response, spider): - request.headers.setdefault('Referer', response.url) - return request -}}} - -=== Set and limit crawling depth === - -A middleware to set (and limit) the request/response depth, taken from the start requests: - -{{{ -#!python -class SetLimitDepth(object): - - def __init__(self, maxdepth=0): - self.maxdepth = maxdepth or settings.getint('DEPTH_LIMIT') - - def process_request(self, request, response, spider): - depth = response.request.meta['depth'] + 1 - request.meta['depth'] = depth - if not self.maxdepth or depth <= self.maxdepth: - return request - spider.log("Ignoring link (depth > %d): %s " % (self.maxdepth, request) - - def process_start_request(self, request, spider): - request.meta['depth'] = 0 - return request -}}} - -=== Filter duplicate requests === - -A middleware to filter out requests already seen: - -{{{ -#!python -class FilterDuplicates(object): - - def __init__(self): - clspath = settings.get('DUPEFILTER_CLASS') - self.dupefilter = load_object(clspath)() - dispatcher.connect(self.spider_opened, signal=signals.spider_opened) - dispatcher.connect(self.spider_closed, signal=signals.spider_closed) - - def enqueue_request(self, spider, request): - seen = self.dupefilter.request_seen(spider, request) - if not seen or request.dont_filter: - return request - - def spider_opened(self, spider): - self.dupefilter.open_spider(spider) - - def spider_closed(self, spider): - self.dupefilter.close_spider(spider) -}}} - -=== Scrape data using Parsley === - -A middleware to Scrape data using Parsley as described in UsingParsley - -{{{ -#!python -from pyparsley import PyParsley - -class ParsleyExtractor(object): - - def __init__(self, parslet_json_code): - parslet = json.loads(parselet_json_code) - class ParsleyItem(Item): - def __init__(self, *a, 
**kw): - for name in parslet.keys(): - self.fields[name] = Field() - super(ParsleyItem, self).__init__(*a, **kw) - self.item_class = ParsleyItem - self.parsley = PyParsley(parslet, output='python') - - def process_response(self, response, request, spider): - return self.item_class(self.parsly.parse(string=response.body)) -}}} - - - -== Pending issues == - -Resolved: - - * how to make `start_requests()` output pass through spider middleware `process_request()`? - * Start requests will be injected through `manager.scraper.process_request()` instead of `manager.engine.crawl()` - * should we support adding additional start requests from a spider middleware? - * Yes - there is a spider middleware method (`start_requests`) for that - * should `process_response()` receive a `request` argument with the `request` that originated it?. `response.request` is the latest request, not the original one (think of redirections), but it does carry the `meta` of the original one. The original one may not be available anymore (in memory) if we're using a persistent scheduler., but in that case it would be the deserialized request from the persistent scheduler queue. - * No - this would make implementation more complex and we're not sure it's really needed - * how to make sure `Request.errback` is always called if there is a problem with the request?. Do we need to ensure that?. Requests filtered out (by returning `None`) in the `process_request()` method will never be callback-ed or even errback-ed. this could be a problem for spiders that want to be notified if their requests are dropped. should we support this notification somehow or document (the lack of) it properly? - * We won't support notifications of dropped requests, because: 1. it's hard to implement and unreliable, 2. it's against not friendly with request persistence, 3. we can't come up with a good api. - * should we make the list of default spider middlewares empty? (or the "per-spider" spider middleware alone) - * No - there are some useful spider middlewares that it's worth enabling by default like referer, duplicates, robots2 - * should we allow returning deferreds in spider middleware methods? - * Yes - we should build a Deferred with the spider middleware methods as callbacks and that would implicitly support returning Deferreds - * should we support processing responses before they're processed by the spider, because `process_response` runs "in parallel" to the spider callback, and can't stop from running it. - * No - we haven't seen a practical use case for this, so we won't add an additional hook. It should be trivial to add it later, if needed. - * should we make a spider middleware to handle calling the request and spider callback, instead of letting the Scraper component do it? 
- * Yes - there's gonna a spider middleware for execution spider-specific code such as callbacks and also custom middlewares From 05ac4112cdf405b8ca11ef0638c53d96722a39d8 Mon Sep 17 00:00:00 2001 From: Edwin O Marshall Date: Fri, 7 Mar 2014 10:47:54 -0500 Subject: [PATCH 2/7] converted sep 16 --- sep/sep-016.rst | 306 +++++++++++++++++++++++++++++++++++++++++++++++ sep/sep-016.trac | 265 ---------------------------------------- 2 files changed, 306 insertions(+), 265 deletions(-) create mode 100644 sep/sep-016.rst delete mode 100644 sep/sep-016.trac diff --git a/sep/sep-016.rst b/sep/sep-016.rst new file mode 100644 index 00000000000..335f09f450e --- /dev/null +++ b/sep/sep-016.rst @@ -0,0 +1,306 @@ +======= ============================= +SEP 16 +Title Leg Spider +Author Insophia Team +Created 2010-06-03 +Status Superseded by :doc:`sep-018` +======= ============================= + +=================== +SEP-016: Leg Spider +=================== + +This SEP introduces a new kind of Spider called ``LegSpider`` which provides +modular functionality which can be plugged to different spiders. + +Rationale +========= + +The purpose of Leg Spiders is to define an architecture for building spiders +based on smaller well-tested components (aka. Legs) that can be combined to +achieve the desired functionality. These reusable components will benefit all +Scrapy users by building a repository of well-tested components (legs) that can +be shared among different spiders and projects. Some of them will come bundled +with Scrapy. + +The Legs themselves can be also combined with sub-legs, in a hierarchical +fashion. Legs are also spiders themselves, hence the name "Leg Spider". + +``LegSpider`` API +================= + +A ``LegSpider`` is a ``BaseSpider`` subclass that adds the following attributes and methods: + +- ``legs`` + - legs composing this spider +- ``process_response(response)`` + - Process a (downloaded) response and return a list of requests and items +- ``process_request(request)`` + - Process a request after it has been extracted and before returning it from + the spider +- ``process_item(item)`` + - Process an item after it has been extracted and before returning it from + the spider +- ``set_spider()`` + - Defines the main spider associated with this Leg Spider, which is often + used to configure the Leg Spider behavior. + +How Leg Spiders work +==================== + +1. Each Leg Spider has zero or many Leg Spiders associated with it. When a + response arrives, the Leg Spider process it with its ``process_response`` + method and also the ``process_response`` method of all its "sub leg + spiders". Finally, the output of all of them is combined to produce the + final aggregated output. +2. Each element of the aggregated output of ``process_response`` is processed + with either ``process_item`` or ``process_request`` before being returned + from the spider. Similar to ``process_response``, each item/request is + processed with all ``process_{request,item``} of the leg spiders composing + the spider, and also with those of the spider itself. + +Leg Spider examples +=================== + +Regex (HTML) Link Extractor +--------------------------- + +A typical application of LegSpider's is to build Link Extractors. 
For example: + +:: + + #!python + class RegexHtmlLinkExtractor(LegSpider): + + def process_response(self, response): + if isinstance(response, HtmlResponse): + allowed_regexes = self.spider.url_regexes_to_follow + # extract urls to follow using allowed_regexes + return [Request(x) for x in urls_to_follow] + + class MySpider(LegSpider): + + legs = [RegexHtmlLinkExtractor()] + url_regexes_to_follow = ['/product.php?.*'] + + def parse_response(self, response): + # parse response and extract items + return items + +RSS2 link extractor +------------------- + +This is a Leg Spider that can be used for following links from RSS2 feeds. + +:: + + #!python + class Rss2LinkExtractor(LegSpider): + + def process_response(self, response): + if response.headers.get('Content-type') 'application/rss+xml': + xs = XmlXPathSelector(response) + urls = xs.select("//item/link/text()").extract() + return [Request(x) for x in urls] + +Callback dispatcher based on rules +---------------------------------- + +Another example could be to build a callback dispatcher based on rules: + +:: + + #!python + class CallbackRules(LegSpider): + + def __init__(self, *a, **kw): + super(CallbackRules, self).__init__(*a, **kw) + for regex, method_name in self.spider.callback_rules.items(): + r = re.compile(regex) + m = getattr(self.spider, method_name, None) + if m: + self._rules[r] = m + + def process_response(self, response): + for regex, method in self._rules.items(): + m = regex.search(response.url) + if m: + return method(response) + return [] + + class MySpider(LegSpider): + + legs = [CallbackRules()] + callback_rules = { + '/product.php.*': 'parse_product', + '/category.php.*': 'parse_category', + } + + def parse_product(self, response): + # parse response and populate item + return item + +URL Canonicalizers +------------------ + +Another example could be for building URL canonicalizers: + +:: + + #!python + class CanonializeUrl(LegSpider): + + def process_request(self, request): + curl = canonicalize_url(request.url, rules=self.spider.canonicalization_rules) + return request.replace(url=curl) + + class MySpider(LegSpider): + + legs = [CanonicalizeUrl()] + canonicalization_rules = ['sort-query-args', 'normalize-percent-encoding', ...] + + # ... + +Setting item identifier +----------------------- + +Another example could be for setting a unique identifier to items, based on +certain fields: + +:: + + #!python + class ItemIdSetter(LegSpider): + + def process_item(self, item): + id_field = self.spider.id_field + id_fields_to_hash = self.spider.id_fields_to_hash + item[id_field] = make_hash_based_on_fields(item, id_fields_to_hash) + return item + + class MySpider(LegSpider): + + legs = [ItemIdSetter()] + id_field = 'guid' + id_fields_to_hash = ['supplier_name', 'supplier_id'] + + def process_response(self, item): + # extract item from response + return item + +Combining multiple leg spiders +------------------------------ + +Here's an example that combines functionality from multiple leg spiders: + +:: + + #!python + class MySpider(LegSpider): + + legs = [RegexLinkExtractor(), ParseRules(), CanonicalizeUrl(), ItemIdSetter()] + + url_regexes_to_follow = ['/product.php?.*'] + + parse_rules = { + '/product.php.*': 'parse_product', + '/category.php.*': 'parse_category', + } + + canonicalization_rules = ['sort-query-args', 'normalize-percent-encoding', ...] 
+ + id_field = 'guid' + id_fields_to_hash = ['supplier_name', 'supplier_id'] + + def process_product(self, item): + # extract item from response + return item + + def process_category(self, item): + # extract item from response + return item + +Leg Spiders vs Spider middlewares +================================= + +A common question that would arise is when one should use Leg Spiders and when +to use Spider middlewares. Leg Spiders functionality is meant to implement +spider-specific functionality, like link extraction which has custom rules per +spider. Spider middlewares, on the other hand, are meant to implement global +functionality. + +When not to use Leg Spiders +=========================== + +Leg Spiders are not a silver bullet to implement all kinds of spiders, so it's +important to keep in mind their scope and limitations, such as: + +- Leg Spiders can't filter duplicate requests, since they don't have access to + all requests at the same time. This functionality should be done in a spider + or scheduler middleware. +- Leg Spiders are meant to be used for spiders whose behavior (requests & items + to extract) depends only on the current page and not previously crawled pages + (aka. "context-free spiders"). If your spider has some custom logic with + chained downloads (for example, multi-page items) then Leg Spiders may not be + a good fit. + +``LegSpider`` proof-of-concept implementation +============================================= + +Here's a proof-of-concept implementation of ``LegSpider``: + +:: + + #!python + from scrapy.http import Request + from scrapy.item import BaseItem + from scrapy.spider import BaseSpider + from scrapy.utils.spider import iterate_spider_output + + + class LegSpider(BaseSpider): + """A spider made of legs""" + + legs = [] + + def __init__(self, *args, **kwargs): + super(LegSpider, self).__init__(*args, **kwargs) + self._legs = [self] + self.legs[:] + for l in self._legs: + l.set_spider(self) + + def parse(self, response): + res = self._process_response(response) + for r in res: + if isinstance(r, BaseItem): + yield self._process_item(r) + else: + yield self._process_request(r) + + def process_response(self, response): + return [] + + def process_request(self, request): + return request + + def process_item(self, item): + return item + + def set_spider(self, spider): + self.spider = spider + + def _process_response(self, response): + res = [] + for l in self._legs: + res.extend(iterate_spider_output(l.process_response(response))) + return res + + def _process_request(self, request): + for l in self._legs: + request = l.process_request(request) + return request + + def _process_item(self, item): + for l in self._legs: + item = l.process_item(item) + return item diff --git a/sep/sep-016.trac b/sep/sep-016.trac deleted file mode 100644 index e9a0e5df0ce..00000000000 --- a/sep/sep-016.trac +++ /dev/null @@ -1,265 +0,0 @@ -= SEP-016: Leg Spider = - -[[PageOutline(2-5,Contents)]] - -||'''SEP:'''||16|| -||'''Title:'''||Leg Spider|| -||'''Author:'''||Insophia Team|| -||'''Created:'''||2010-06-03|| -||'''Status'''||Superseded by [wiki:SEP-018]|| - -== Introduction == - -This SEP introduces a new kind of Spider called {{{LegSpider}}} which provides modular functionality which can be plugged to different spiders. - -== Rationale == - -The purpose of Leg Spiders is to define an architecture for building spiders based on smaller well-tested components (aka. Legs) that can be combined to achieve the desired functionality. 
These reusable components will benefit all Scrapy users by building a repository of well-tested components (legs) that can be shared among different spiders and projects. Some of them will come bundled with Scrapy. - -The Legs themselves can be also combined with sub-legs, in a hierarchical fashion. Legs are also spiders themselves, hence the name "Leg Spider". - -== {{{LegSpider}}} API == - -A {{{LegSpider}}} is a {{{BaseSpider}}} subclass that adds the following attributes and methods: - - * {{{legs}}} - * legs composing this spider - * {{{process_response(response)}}} - * Process a (downloaded) response and return a list of requests and items - * {{{process_request(request)}}} - * Process a request after it has been extracted and before returning it from the spider - * {{{process_item(item)}}} - * Process an item after it has been extracted and before returning it from the spider - * {{{set_spider()}}} - * Defines the main spider associated with this Leg Spider, which is often used to configure the Leg Spider behavior. - -== How Leg Spiders work == - - 1. Each Leg Spider has zero or many Leg Spiders associated with it. When a response arrives, the Leg Spider process it with its {{{process_response}}} method and also the {{{process_response}}} method of all its "sub leg spiders". Finally, the output of all of them is combined to produce the final aggregated output. - 2. Each element of the aggregated output of {{{process_response}}} is processed with either {{{process_item}}} or {{{process_request}}} before being returned from the spider. Similar to {{{process_response}}}, each item/request is processed with all {{{process_{request,item}}}} of the leg spiders composing the spider, and also with those of the spider itself. - -== Leg Spider examples == - -=== Regex (HTML) Link Extractor === - -A typical application of LegSpider's is to build Link Extractors. For example: - -{{{ -#!python -class RegexHtmlLinkExtractor(LegSpider): - - def process_response(self, response): - if isinstance(response, HtmlResponse): - allowed_regexes = self.spider.url_regexes_to_follow - # extract urls to follow using allowed_regexes - return [Request(x) for x in urls_to_follow] - -class MySpider(LegSpider): - - legs = [RegexHtmlLinkExtractor()] - url_regexes_to_follow = ['/product.php?.*'] - - def parse_response(self, response): - # parse response and extract items - return items -}}} - -=== RSS2 link extractor === - -This is a Leg Spider that can be used for following links from RSS2 feeds. 
- -{{{ -#!python -class Rss2LinkExtractor(LegSpider): - - def process_response(self, response): - if response.headers.get('Content-type') == 'application/rss+xml': - xs = XmlXPathSelector(response) - urls = xs.select("//item/link/text()").extract() - return [Request(x) for x in urls] -}}} - -=== Callback dispatcher based on rules === - -Another example could be to build a callback dispatcher based on rules: - -{{{ -#!python -class CallbackRules(LegSpider): - - def __init__(self, *a, **kw): - super(CallbackRules, self).__init__(*a, **kw) - for regex, method_name in self.spider.callback_rules.items(): - r = re.compile(regex) - m = getattr(self.spider, method_name, None) - if m: - self._rules[r] = m - - def process_response(self, response): - for regex, method in self._rules.items(): - m = regex.search(response.url) - if m: - return method(response) - return [] - -class MySpider(LegSpider): - - legs = [CallbackRules()] - callback_rules = { - '/product.php.*': 'parse_product', - '/category.php.*': 'parse_category', - } - - def parse_product(self, response): - # parse response and populate item - return item -}}} - -=== URL Canonicalizers === - -Another example could be for building URL canonicalizers: - -{{{ -#!python -class CanonializeUrl(LegSpider): - - def process_request(self, request): - curl = canonicalize_url(request.url, rules=self.spider.canonicalization_rules) - return request.replace(url=curl) - -class MySpider(LegSpider): - - legs = [CanonicalizeUrl()] - canonicalization_rules = ['sort-query-args', 'normalize-percent-encoding', ...] - - # ... -}}} - -=== Setting item identifier === - -Another example could be for setting a unique identifier to items, based on certain fields: - -{{{ -#!python -class ItemIdSetter(LegSpider): - - def process_item(self, item): - id_field = self.spider.id_field - id_fields_to_hash = self.spider.id_fields_to_hash - item[id_field] = make_hash_based_on_fields(item, id_fields_to_hash) - return item - -class MySpider(LegSpider): - - legs = [ItemIdSetter()] - id_field = 'guid' - id_fields_to_hash = ['supplier_name', 'supplier_id'] - - def process_response(self, item): - # extract item from response - return item -}}} - -=== Combining multiple leg spiders === - -Here's an example that combines functionality from multiple leg spiders: - -{{{ -#!python -class MySpider(LegSpider): - - legs = [RegexLinkExtractor(), ParseRules(), CanonicalizeUrl(), ItemIdSetter()] - - url_regexes_to_follow = ['/product.php?.*'] - - parse_rules = { - '/product.php.*': 'parse_product', - '/category.php.*': 'parse_category', - } - - canonicalization_rules = ['sort-query-args', 'normalize-percent-encoding', ...] - - id_field = 'guid' - id_fields_to_hash = ['supplier_name', 'supplier_id'] - - def process_product(self, item): - # extract item from response - return item - - def process_category(self, item): - # extract item from response - return item -}}} - - - -== Leg Spiders vs Spider middlewares == - -A common question that would arise is when one should use Leg Spiders and when to use Spider middlewares. Leg Spiders functionality is meant to implement spider-specific functionality, like link extraction which has custom rules per spider. Spider middlewares, on the other hand, are meant to implement global functionality. 
- -== When not to use Leg Spiders == - -Leg Spiders are not a silver bullet to implement all kinds of spiders, so it's important to keep in mind their scope and limitations, such as: - - * Leg Spiders can't filter duplicate requests, since they don't have access to all requests at the same time. This functionality should be done in a spider or scheduler middleware. - * Leg Spiders are meant to be used for spiders whose behavior (requests & items to extract) depends only on the current page and not previously crawled pages (aka. "context-free spiders"). If your spider has some custom logic with chained downloads (for example, multi-page items) then Leg Spiders may not be a good fit. - -== {{{LegSpider}}} proof-of-concept implementation == - -Here's a proof-of-concept implementation of {{{LegSpider}}}: - -{{{ -#!python -from scrapy.http import Request -from scrapy.item import BaseItem -from scrapy.spider import BaseSpider -from scrapy.utils.spider import iterate_spider_output - - -class LegSpider(BaseSpider): - """A spider made of legs""" - - legs = [] - - def __init__(self, *args, **kwargs): - super(LegSpider, self).__init__(*args, **kwargs) - self._legs = [self] + self.legs[:] - for l in self._legs: - l.set_spider(self) - - def parse(self, response): - res = self._process_response(response) - for r in res: - if isinstance(r, BaseItem): - yield self._process_item(r) - else: - yield self._process_request(r) - - def process_response(self, response): - return [] - - def process_request(self, request): - return request - - def process_item(self, item): - return item - - def set_spider(self, spider): - self.spider = spider - - def _process_response(self, response): - res = [] - for l in self._legs: - res.extend(iterate_spider_output(l.process_response(response))) - return res - - def _process_request(self, request): - for l in self._legs: - request = l.process_request(request) - return request - - def _process_item(self, item): - for l in self._legs: - item = l.process_item(item) - return item -}}} \ No newline at end of file From 5312146ff7958804fa9519f1fed722aa9a79fbc9 Mon Sep 17 00:00:00 2001 From: Edwin O Marshall Date: Fri, 7 Mar 2014 11:07:57 -0500 Subject: [PATCH 3/7] converted sep 13 --- sep/sep-013.rst | 188 +++++++++++++++++++++++++++++++++++++++++++++++ sep/sep-013.trac | 129 -------------------------------- 2 files changed, 188 insertions(+), 129 deletions(-) create mode 100644 sep/sep-013.rst delete mode 100644 sep/sep-013.trac diff --git a/sep/sep-013.rst b/sep/sep-013.rst new file mode 100644 index 00000000000..4c11a0762ee --- /dev/null +++ b/sep/sep-013.rst @@ -0,0 +1,188 @@ +======= ==================================== +SEP 13 +Title Middlewares Refactoring +Author Pablo Hoffman +Created 2009-11-14 +Status Document in progress (being written) +======= ==================================== + +================================= +SEP-013 - Middlewares refactoring +================================= + +This SEP proposes a refactoring of Scrapy middlewares to remove some +inconsistencies and limitations. + +Current flaws and inconsistencies +================================== + +Even though the core works pretty well, it has some subtle inconsistencies that +don't manifest in the common uses, but arise (and are quite annoying) when you +try to fully exploit all Scrapy features. The currently identified flaws and +inconsistencies are: + +1. Request errback may not get called in all cases (more details needed on when + this happens) +2. 
Spider middleware has a ``process_spider_exception`` method which catches + exceptions coming out of the spider, but it doesn't have an analogous for + catching exceptions coming into the spider (for example, from other + downloader middlewares). This complicates supporting middlewares that extend + other middlewares. +3. Downloader middleware has a ``process_exception`` method which catches + exceptions coming out of the downloader, but it doesn't have an analogous + for catching exceptions coming into the downloader (for example, from other + downloader middlewares). This complicates supporting middlewares that extend + other middlewares. +4. Scheduler middleware has a ``enqueue_request`` method but doesn't have a + ``enqueue_request_exception`` nor ``dequeue_request`` nor + ``dequeue_request_exception`` methods. + +These flaws will be corrected by the changes proposed in this SEP. + +Overview of changes proposed +============================ + +Most of the inconsistencies come from the fact that middlewares don't follow +the typical +[http://twistedmatrix.com/projects/core/documentation/howto/defer.html +deferred] callback/errback chaining logic. Twisted logic is fine and quite +intuitive, and also fits middlewares very well. Due to some bad design choices +the integration between middleware calls and deferred is far from optional. So +the changes to middlewares involve mainly building deferred chains with the +middleware methods and adding the missing method to each callback/errback +chain. The proposed API for each middleware is described below. + +See |scrapy core v2| - a diagram draft for the process architecture. + +Global changes to all middlewares +================================= + +To be discussed: + +1. should we support returning deferreds (ie. ``maybeDeferred``) in middleware + methods? +2. should we pass Twisted Failures instead of exceptions to error methods? + +Spider middleware changes +========================= + +Current API +----------- + +- ``process_spider_input(response, spider)`` +- ``process_spider_output(response, result, spider)`` +- ``process_spider_exception(response, exception, spider=spider)`` + +Changes proposed +---------------- + +1. rename method: ``process_spider_exception`` to + ``process_spider_output_exception`` +2. add method" ``process_spider_input_exception`` + +New API +------- + +- ``SpiderInput`` deferred + - ``process_spider_input(response, spider)`` + - ``process_spider_input_exception(response, exception, spider=spider)`` +- ``SpiderOutput`` deferred + - ``process_spider_output(response, result, spider)`` + - ``process_spider_output_exception(response, exception, spider=spider)`` + +Downloader middleware changes +============================= + +Current API +----------- + +- ``process_request(request, spider)`` +- ``process_response(request, response, spider)`` +- ``process_exception(request, exception, spider)`` + +Changes proposed +---------------- + + 1. rename method: ``process_exception`` to ``process_response_exception`` + 2. 
add method: ``process_request_exception`` + +New API +------- + +- ``ProcessRequest`` deferred + - ``process_request(request, spider)`` + - ``process_request_exception(request, exception, response)`` +- ``ProcessResponse`` deferred + - ``process_response(request, spider, response)`` + - ``process_response_exception(request, exception, response)`` + +Scheduler middleware changes +============================ + +Current API +----------- + +- ``enqueue_request(spider, request)`` + - '''TBD:''' what does it mean to return a Response object here? (the current implementation allows it) +- ``open_spider(spider)`` +- ``close_spider(spider)`` + +Changes proposed +---------------- + +1. exchange order of method arguments '''(spider, request)''' to '''(request, + spider)''' for consistency with the other middlewares +2. add methods: ``dequeue_request``, ``enqueue_request_exception``, + ``dequeue_request_exception`` +3. remove methods: ``open_spider``, ``close_spider``. They should be + replaced by using the ``spider_opened``, ``spider_closed`` signals, but + they weren't before because of a chicken-egg problem when open spiders + (because of scheduler auto-open feature). + +- '''TBD:''' how to get rid of chicken-egg problem, perhaps refactoring scheduler auto-open? + +New API +------- + +- ``EnqueueRequest`` deferred + - ``enqueue_request(request, spider)`` + - Can return: + - return Request: which is passed to next mw component + - raise ``IgnoreRequest`` + - raise any other exception (errback chain called) + - ``enqueue_request_exception(request, exception, spider)`` + - Output and errors: + - The Request that gets returned by last enqueue_request() is the one + that gets scheduled + - If no request is returned but a Failure, the Request errback is called + with that failure + + - '''TBD''': do we want to call request errback if it fails + scheduling?0 +- ``DequeueRequest`` deferred + - ``dequeue_request(request, spider)`` + - ``dequeue_request_exception(exception, spider)`` + +Open issues (to resolve) +======================== + +1. how to avoid massive ``IgnoreRequest`` exceptions from propagating which + slows down the crawler +2. if requests change, how do we keep reference to the original one? do we need + to? + - opt 1: don't allow changing the original Request object - discarded + - opt 2: keep reference to the original request (how it's done now) + - opt 3: split SpiderRequest from DownloaderRequest + + - opt 5: keep reference only to original deferred and forget about the + original request +3. scheduler auto-open chicken-egg problem + + - opt 1: drop auto-open y forbid opening spiders if concurrent is full. use + SpiderScheduler instead. why is scheduler auto-open really needed? +4. call ``Request.errback`` if both schmw and dlmw fail? + - opt 1: ignore and just propagate the error as-is + - opt 2: call another method? like Request.schmw_errback / dlmw_errback? + - opt 3: use an exception wrapper? SchedulerError() DownloaderError()? + +.. 
|scrapy core v2| image:: scrapy_core_v2.jpg diff --git a/sep/sep-013.trac b/sep/sep-013.trac deleted file mode 100644 index 40784778aad..00000000000 --- a/sep/sep-013.trac +++ /dev/null @@ -1,129 +0,0 @@ -= SEP-013 - Middlewares refactoring = - -[[PageOutline(2-5,Contents)]] - -||'''SEP:'''||13|| -||'''Title:'''||Middlewares Refactoring|| -||'''Author:'''||Pablo Hoffman|| -||'''Created:'''||2009-11-14|| -||'''Status'''||Document in progress (being written)|| - -== Introduction == - -This SEP proposes a refactoring of Scrapy middlewares to remove some inconsistencies and limitations. - -== Current flaws and inconsistencies == - -Even though the core works pretty well, it has some subtle inconsistencies that don't manifest in the common uses, but arise (and are quite annoying) when you try to fully exploit all Scrapy features. The currently identified flaws and inconsistencies are: - - 1. Request errback may not get called in all cases (more details needed on when this happens) - 2. Spider middleware has a {{{process_spider_exception}}} method which catches exceptions coming out of the spider, but it doesn't have an analogous for catching exceptions coming into the spider (for example, from other downloader middlewares). This complicates supporting middlewares that extend other middlewares. - 3. Downloader middleware has a {{{process_exception}}} method which catches exceptions coming out of the downloader, but it doesn't have an analogous for catching exceptions coming into the downloader (for example, from other downloader middlewares). This complicates supporting middlewares that extend other middlewares. - 4. Scheduler middleware has a {{{enqueue_request}}} method but doesn't have a {{{enqueue_request_exception}}} nor {{{dequeue_request}}} nor {{{dequeue_request_exception}}} methods. - -These flaws will be corrected by the changes proposed in this SEP. - -== Overview of changes proposed == - -Most of the inconsistencies come from the fact that middlewares don't follow the typical [http://twistedmatrix.com/projects/core/documentation/howto/defer.html deferred] callback/errback chaining logic. Twisted logic is fine and quite intuitive, and also fits middlewares very well. Due to some bad design choices the integration between middleware calls and deferred is far from optional. So the changes to middlewares involve mainly building deferred chains with the middleware methods and adding the missing method to each callback/errback chain. The proposed API for each middleware is described below. - -See [attachment:scrapy_core_v2.jpg] - a diagram draft for the proposes architecture. - -== Global changes to all middlewares == - -To be discussed: - - 1. should we support returning deferreds (ie. {{{maybeDeferred}}}) in middleware methods? - 2. should we pass Twisted Failures instead of exceptions to error methods? - -== Spider middleware changes == - -=== Current API === - - * {{{process_spider_input(response, spider)}}} - * {{{process_spider_output(response, result, spider)}}} - * {{{process_spider_exception(response, exception, spider=spider)}}} - -=== Changes proposed === - - 1. rename method: {{{process_spider_exception}}} to {{{process_spider_output_exception}}} - 2. 
add method" {{{process_spider_input_exception}}} - -=== New API === - - * {{{SpiderInput}}} deferred - * {{{process_spider_input(response, spider)}}} - * {{{process_spider_input_exception(response, exception, spider=spider)}}} - * {{{SpiderOutput}}} deferred - * {{{process_spider_output(response, result, spider)}}} - * {{{process_spider_output_exception(response, exception, spider=spider)}}} - -== Downloader middleware changes == - -=== Current API === - - * {{{process_request(request, spider)}}} - * {{{process_response(request, response, spider)}}} - * {{{process_exception(request, exception, spider)}}} - -=== Changes proposed === - - 1. rename method: {{{process_exception}}} to {{{process_response_exception}}} - 2. add method: {{{process_request_exception}}} - -=== New API === - - * {{{ProcessRequest}}} deferred - * {{{process_request(request, spider)}}} - * {{{process_request_exception(request, exception, response)}}} - * {{{ProcessResponse}}} deferred - * {{{process_response(request, spider, response)}}} - * {{{process_response_exception(request, exception, response)}}} - -== Scheduler middleware changes == - -=== Current API === - - * {{{enqueue_request(spider, request)}}} - * '''TBD:''' what does it mean to return a Response object here? (the current implementation allows it) - * {{{open_spider(spider)}}} - * {{{close_spider(spider)}}} - -=== Changes proposed === - - 1. exchange order of method arguments '''(spider, request)''' to '''(request, spider)''' for consistency with the other middlewares - 2. add methods: {{{dequeue_request}}}, {{{enqueue_request_exception}}}, {{{dequeue_request_exception}}} - 3. remove methods: {{{open_spider}}}, {{{close_spider}}}. They should be replaced by using the {{{spider_opened}}}, {{{spider_closed}}} signals, but they weren't before because of a chicken-egg problem when open spiders (because of scheduler auto-open feature). - * '''TBD:''' how to get rid of chicken-egg problem, perhaps refactoring scheduler auto-open? - -=== New API === - - * {{{EnqueueRequest}}} deferred - * {{{enqueue_request(request, spider)}}} - * Can return: - * return Request: which is passed to next mw component - * raise {{{IgnoreRequest}}} - * raise any other exception (errback chain called) - * {{{enqueue_request_exception(request, exception, spider)}}} - * Output and errors: - * The Request that gets returned by last enqueue_request() is the one that gets scheduled - * If no request is returned but a Failure, the Request errback is called with that failure - * '''TBD''': do we want to call request errback if it fails scheduling?0 - * {{{DequeueRequest}}} deferred - * {{{dequeue_request(request, spider)}}} - * {{{dequeue_request_exception(exception, spider)}}} - -== Open issues (to resolve) == - - 1. how to avoid massive {{{IgnoreRequest}}} exceptions from propagating which slows down the crawler - 1. if requests change, how do we keep reference to the original one? do we need to? - * opt 1: don't allow changing the original Request object - discarded - * opt 2: keep reference to the original request (how it's done now) - * opt 3: split SpiderRequest from DownloaderRequest - * opt 5: keep reference only to original deferred and forget about the original request - 1. scheduler auto-open chicken-egg problem - * opt 1: drop auto-open y forbid opening spiders if concurrent is full. use SpiderScheduler instead. why is scheduler auto-open really needed? - 1. call {{{Request.errback}}} if both schmw and dlmw fail? 
- * opt 1: ignore and just propagate the error as-is - * opt 2: call another method? like Request.schmw_errback / dlmw_errback? - * opt 3: use an exception wrapper? SchedulerError() DownloaderError()? From f43c99f36006dd74d80d00145cdef02d7b65ad7b Mon Sep 17 00:00:00 2001 From: Edwin O Marshall Date: Fri, 7 Mar 2014 11:18:08 -0500 Subject: [PATCH 4/7] converted sep 5 --- sep/sep-005.rst | 142 +++++++++++++++++++++++++++++++++++++++++++++++ sep/sep-005.trac | 119 --------------------------------------- 2 files changed, 142 insertions(+), 119 deletions(-) create mode 100644 sep/sep-005.rst delete mode 100644 sep/sep-005.trac diff --git a/sep/sep-005.rst b/sep/sep-005.rst new file mode 100644 index 00000000000..e795838e492 --- /dev/null +++ b/sep/sep-005.rst @@ -0,0 +1,142 @@ +======= ============================== +SEP 5 +Title ItemBuilder API +Author Ismael Carnales, Pablo Hoffman +Created 2009-07-24 +Status Obsoleted by :doc:`sep-008` +======= ============================== + +========================================= +SEP-005: Detailed ``ItemBuilder`` API use +========================================= + +Item class for examples: + +:: + + #!python + class NewsItem(Item): + url = fields.TextField() + headline = fields.TextField() + content = fields.TextField() + published = fields.DateField() + + +gSetting expanders +================== + +:: + + #!python + class NewsItemBuilder(ItemBuilder): + item_class = NewsItem + + headline = reducers.Reducer(extract, remove_tags(), unquote(), strip) + + +This approach will override the Reducer class for ``BuilderFields`` depending +on their Item Field class: + + * ``MultivaluedField`` = ``PassValue`` + * ``TextField`` = ``JoinStrings`` + * other = ``TakeFirst`` + +gSetting reducers +================= + +:: + + #!python + class NewsItemBuilder(ItemBuilder): + item_class = NewsItem + + headline = reducers.TakeFirst(extract, remove_tags(), unquote(), strip) + published = reducers.Reducer(extract, remove_tags(), unquote(), strip) + + +As with the previous example this would select join_strings as the reducer for +content + +gSetting expanders/reducers new way +=================================== + +:: + + #!python + class NewsItemBuilder(ItemBuilder): + item_class = NewsItem + + headline = BuilderField(extract, remove_tags(), unquote(), strip) + content = BuilderField(extract, remove_tags(), unquote(), strip) + + class Reducer: + headline = TakeFirst + + +gExtending ``ItemBuilder`` +========================== + +:: + + #!python + class SiteNewsItemBuilder(NewsItemBuilder): + published = reducers.Reducer(extract, remove_tags(), unquote(), + strip, to_date('%d.%m.%Y')) + + +gExtending ``ItemBuilder`` using statich methods +================================================ + +:: + + #!python + class SiteNewsItemBuilder(NewsItemBuilder): + published = reducers.Reducer(NewsItemBuilder.published, to_date('%d.%m.%Y')) + + +gUsing default_builder +====================== + +:: + + #!python + class DefaultedNewsItemBuilder(ItemBuilder): + item_class = NewsItem + + default_builder = reducers.Reducer(extract, remove_tags(), unquote(), strip) + + +This will use default_builder as the builder for every field in the item class. +As a reducer is not set reducers will be set based on Item Field classess. 
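As an illustration only (the API proposed here was later obsoleted by
:doc:`sep-008` and the names below are taken from this SEP, not from a released
Scrapy version), the declaration above would behave roughly as if every field
had been spelled out with the same expander chain, with the reducer picked from
each Item field class as described earlier (``TextField`` = ``JoinStrings``,
other = ``TakeFirst``):

::

    #!python
    # Hypothetical expansion of DefaultedNewsItemBuilder: the default_builder
    # chain is applied to every NewsItem field, and the reducer is inferred
    # from the field type since none was set explicitly.
    class ExpandedNewsItemBuilder(ItemBuilder):
        item_class = NewsItem

        url = reducers.JoinStrings(extract, remove_tags(), unquote(), strip)       # TextField
        headline = reducers.JoinStrings(extract, remove_tags(), unquote(), strip)  # TextField
        content = reducers.JoinStrings(extract, remove_tags(), unquote(), strip)   # TextField
        published = reducers.TakeFirst(extract, remove_tags(), unquote(), strip)   # DateField ("other")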
+ +gReset default_builder for a field +================================== + +:: + + #!python + class DefaultedNewsItemBuilder(ItemBuilder): + item_class = NewsItem + + default_builder = reducers.Reducer(extract, remove_tags(), unquote(), strip) + url = BuilderField() + + +gExtending default ``ItemBuilder`` +================================== + +:: + + #!python + class SiteNewsItemBuilder(NewsItemBuilder): + published = reducers.Reducer(extract, remove_tags(), unquote(), strip, to_date('%d.%m.%Y')) + + +gExtending default ``ItemBuilder`` using static methods +======================================================= + +:: + + #!python + class SiteNewsItemBuilder(NewsItemBuilder): + published = reducers.Reducer(NewsItemBuilder.default_builder, to_date('%d.%m.%Y')) diff --git a/sep/sep-005.trac b/sep/sep-005.trac deleted file mode 100644 index a57bc080ce8..00000000000 --- a/sep/sep-005.trac +++ /dev/null @@ -1,119 +0,0 @@ -= SEP-005: Detailed !ItemBuilder API use = - -[[PageOutline(2-5,Contents)]] - -||'''SEP:'''||5|| -||'''Title:'''||!ItemBuilder API|| -||'''Author:'''||Ismael Carnales, Pablo Hoffman|| -||'''Created:'''||2009-07-24|| -||'''Status'''||Obsoleted by [wiki:SEP-008]|| - -Item class for examples: - -{{{ -#!python -class NewsItem(Item): - url = fields.TextField() - headline = fields.TextField() - content = fields.TextField() - published = fields.DateField() -}}} - -== Setting expanders == - -{{{ -#!python -class NewsItemBuilder(ItemBuilder): - item_class = NewsItem - - headline = reducers.Reducer(extract, remove_tags(), unquote(), strip) -}}} - -This approach will override the Reducer class for !BuilderFields depending on their Item Field class: - - * !MultivaluedField = PassValue - * !TextField = JoinStrings - * other = TakeFirst - -== Setting reducers == - -{{{ -#!python -class NewsItemBuilder(ItemBuilder): - item_class = NewsItem - - headline = reducers.TakeFirst(extract, remove_tags(), unquote(), strip) - published = reducers.Reducer(extract, remove_tags(), unquote(), strip) -}}} - -As with the previous example this would select join_strings as the reducer for content - -== Setting expanders/reducers new way == - -{{{ -#!python -class NewsItemBuilder(ItemBuilder): - item_class = NewsItem - - headline = BuilderField(extract, remove_tags(), unquote(), strip) - content = BuilderField(extract, remove_tags(), unquote(), strip) - - class Reducer: - headline = TakeFirst -}}} - -== Extending !ItemBuilder == - -{{{ -#!python -class SiteNewsItemBuilder(NewsItemBuilder): - published = reducers.Reducer(extract, remove_tags(), unquote(), strip, to_date('%d.%m.%Y')) -}}} - -== Extending !ItemBuilder using statich methods == - -{{{ -#!python -class SiteNewsItemBuilder(NewsItemBuilder): - published = reducers.Reducer(NewsItemBuilder.published, to_date('%d.%m.%Y')) -}}} - -== Using default_builder == - -{{{ -#!python -class DefaultedNewsItemBuilder(ItemBuilder): - item_class = NewsItem - - default_builder = reducers.Reducer(extract, remove_tags(), unquote(), strip) -}}} - -This will use default_builder as the builder for every field in the item class. -As a reducer is not set reducers will be set based on Item Field classess. 
- -== Reset default_builder for a field == - -{{{ -#!python -class DefaultedNewsItemBuilder(ItemBuilder): - item_class = NewsItem - - default_builder = reducers.Reducer(extract, remove_tags(), unquote(), strip) - url = BuilderField() -}}} - -== Extending default !ItemBuilder == - -{{{ -#!python -class SiteNewsItemBuilder(NewsItemBuilder): - published = reducers.Reducer(extract, remove_tags(), unquote(), strip, to_date('%d.%m.%Y')) -}}} - -== Extending default !ItemBuilder using static methods == - -{{{ -#!python -class SiteNewsItemBuilder(NewsItemBuilder): - published = reducers.Reducer(NewsItemBuilder.default_builder, to_date('%d.%m.%Y')) -}}} \ No newline at end of file From f1e0faacff9f582d823c5d2ac07ee258d31e8b01 Mon Sep 17 00:00:00 2001 From: Edwin O Marshall Date: Fri, 7 Mar 2014 11:28:37 -0500 Subject: [PATCH 5/7] - convertd sep 8 --- sep/sep-008.rst | 111 +++++++++++++++++++++++++++++++++++++++++++++++ sep/sep-008.trac | 102 ------------------------------------------- 2 files changed, 111 insertions(+), 102 deletions(-) create mode 100644 sep/sep-008.rst delete mode 100644 sep/sep-008.trac diff --git a/sep/sep-008.rst b/sep/sep-008.rst new file mode 100644 index 00000000000..b28bb548e3c --- /dev/null +++ b/sep/sep-008.rst @@ -0,0 +1,111 @@ +========= ============================================================== +SEP 8 +Title Item Parsers +Author Pablo Hoffman +Created 2009-08-11 +Status Final (implemented with variations) +Obsoletes :doc:`sep-001`, :doc:`sep-002`, :doc:`sep-003`, :doc:`sep-005` +========= ============================================================== + +====================== +SEP-008 - Item Loaders +====================== + +Item Parser is the final API proposed to implement Item Builders/Loader +proposed in :doc:`sep-001`. + +.. note:: This is the API that was finally implemented with the name "Item + Loaders", instead of "Item Parsers" along with some other minor fine + tuning to the API methods and semantics. + +Dataflow +======== + +1. ``ItemParser.add_value()`` + 1. **input_parser** + 2. store +2. ``ItemParser.add_xpath()`` *(only available in XPathItemLoader)* + 1. selector.extract() + 2. **input_parser** + 3. store +3. ``ItemParser.populate_item()`` *(ex. get_item)* + 1. **output_parser** + 2. assign field + +Modules and classes +=================== + +- ``scrapy.contrib.itemparser.ItemParser`` +- ``scrapy.contrib.itemparser.XPathItemParser`` +- ``scrapy.contrib.itemparser.parsers.``MapConcat`` *(ex. 
``TreeExpander``)* +- ``scrapy.contrib.itemparser.parsers.``TakeFirst`` +- ``scrapy.contrib.itemparser.parsers.Join`` +- ``scrapy.contrib.itemparser.parsers.Identity`` + +Public API +========== + +- ``ItemParser.add_value()`` +- ``ItemParser.replace_value()`` +- ``ItemParser.populate_item()`` *(returns item populated)* + +- ``ItemParser.get_collected_values()`` *(note the 's' in values)* +- ``ItemParser.parse_field()`` + +- ``ItemParser.get_input_parser()`` +- ``ItemParser.get_output_parser()`` + +- ``ItemParser.context`` + +- ``ItemParser.default_item_class`` +- ``ItemParser.default_input_parser`` +- ``ItemParser.default_output_parser`` +- ``ItemParser.*field*_in`` +- ``ItemParser.*field*_out`` + +Alternative Public API Proposal +=============================== + +- ``ItemLoader.add_value()`` +- ``ItemLoader.replace_value()`` +- ``ItemLoader.load_item()`` *(returns loaded item)* + +- ``ItemLoader.get_stored_values()`` or ``ItemLoader.get_values()`` *(returns the ``ItemLoader values)* +- ``ItemLoader.get_output_value()`` + +- ``ItemLoader.get_input_processor()`` or ``ItemLoader.get_in_processor()`` *(short version)* +- ``ItemLoader.get_output_processor()`` or ``ItemLoader.get_out_processor()`` *(short version)* + +- ``ItemLoader.context`` + +- ``ItemLoader.default_item_class`` +- ``ItemLoader.default_input_processor`` or ``ItemLoader.default_in_processor`` *(short version)* +- ``ItemLoader.default_output_processor`` or ``ItemLoader.default_out_processor`` *(short version)* +- ``ItemLoader.*field*_in`` +- ``ItemLoader.*field*_out`` + +Usage example: declaring Item Parsers +===================================== + +:: + + #!python + from scrapy.contrib.itemparser import XPathItemParser, parsers + + class ProductParser(XPathItemParser): + name_in = parsers.MapConcat(removetags, filterx) + price_in = parsers.MapConcat(...) + + price_out = parsers.TakeFirst() + +Usage example: declaring parsers in Fields +========================================== + +:: + + #!python + class Product(Item): + name = Field(output_parser=parsers.Join(), ...) + price = Field(output_parser=parsers.TakeFirst(), ...) + + description = Field(input_parser=parsers.MapConcat(removetags)) diff --git a/sep/sep-008.trac b/sep/sep-008.trac deleted file mode 100644 index 5d762cdaa8a..00000000000 --- a/sep/sep-008.trac +++ /dev/null @@ -1,102 +0,0 @@ -= SEP-008 - Item Loaders = - -[[PageOutline(2-5,Contents)]] - -||'''SEP:'''||8|| -||'''Title:'''||Item Parsers|| -||'''Author:'''||Pablo Hoffman|| -||'''Created:'''||2009-08-11|| -||'''Status'''||Final (implemented with variations)|| -||'''Obsoletes'''||[wiki:SEP-001], [wiki:SEP-002], [wiki:SEP-003], [wiki:SEP-005]|| - -== Introduction == - -Item Parser is the final API proposed to implement Item Builders/Loader proposed in [wiki:SEP-001]. - -'''NOTE:''' This is the API that was finally implemented with the name "Item Loaders", instead of "Item Parsers" along with some other minor fine tuning to the API methods and semantics. - -== Dataflow == - - 1. !ItemParser.add_value() - 1. '''input_parser''' - 2. store - 2. !ItemParser.add_xpath() ''(only available in XPathItemLoader)'' - 1. selector.extract() - 2. '''input_parser''' - 3. store - 3. !ItemParser.populate_item() ''(ex. get_item)'' - 1. '''output_parser''' - 2. assign field - -== Modules and classes == - - * scrapy.contrib.itemparser.!ItemParser - * scrapy.contrib.itemparser.XPathItemParser - * scrapy.contrib.itemparser.parsers.!MapConcat ''(ex. 
!TreeExpander)'' - * scrapy.contrib.itemparser.parsers.!TakeFirst - * scrapy.contrib.itemparser.parsers.Join - * scrapy.contrib.itemparser.parsers.Identity - -== Public API == - - * !ItemParser.add_value() - * !ItemParser.replace_value() - * !ItemParser.populate_item() ''(returns item populated)'' - - * !ItemParser.get_collected_values() ''(note the 's' in values)'' - * !ItemParser.parse_field() - - * !ItemParser.get_input_parser() - * !ItemParser.get_output_parser() - - * !ItemParser.context - - * !ItemParser.default_item_class - * !ItemParser.default_input_parser - * !ItemParser.default_output_parser - * !ItemParser.''field''_in - * !ItemParser.''field''_out - -== Alternative Public API Proposal == - - * !ItemLoader.add_value() - * !ItemLoader.replace_value() - * !ItemLoader.load_item() ''(returns loaded item)'' - - * !ItemLoader.get_stored_values() or !ItemLoader.get_values() ''(returns the !ItemLoader values)'' - * !ItemLoader.get_output_value() - - * !ItemLoader.get_input_processor() or !ItemLoader.get_in_processor() ''(short version)'' - * !ItemLoader.get_output_processor() or !ItemLoader.get_out_processor() ''(short version)'' - - * !ItemLoader.context - - * !ItemLoader.default_item_class - * !ItemLoader.default_input_processor or !ItemLoader.default_in_processor ''(short version)'' - * !ItemLoader.default_output_processor or !ItemLoader.default_out_processor ''(short version)'' - * !ItemLoader.''field''_in - * !ItemLoader.''field''_out - -== Usage example: declaring Item Parsers == - -{{{ -#!python -from scrapy.contrib.itemparser import XPathItemParser, parsers - -class ProductParser(XPathItemParser): - name_in = parsers.MapConcat(removetags, filterx) - price_in = parsers.MapConcat(...) - - price_out = parsers.TakeFirst() -}}} - -== Usage example: declaring parsers in Fields == - -{{{ -#!python -class Product(Item): - name = Field(output_parser=parsers.Join(), ...) - price = Field(output_parser=parsers.TakeFirst(), ...) - - description = Field(input_parser=parsers.MapConcat(removetags)) -}}} From 770e24ae5122ccaf36bffb0c96cd12a287a560eb Mon Sep 17 00:00:00 2001 From: Edwin O Marshall Date: Fri, 7 Mar 2014 11:44:41 -0500 Subject: [PATCH 6/7] converted sep 9 --- sep/sep-009.rst | 138 +++++++++++++++++++++++++++++++++++++++++++++++ sep/sep-009.trac | 102 ----------------------------------- 2 files changed, 138 insertions(+), 102 deletions(-) create mode 100644 sep/sep-009.rst delete mode 100644 sep/sep-009.trac diff --git a/sep/sep-009.rst b/sep/sep-009.rst new file mode 100644 index 00000000000..232a536a89f --- /dev/null +++ b/sep/sep-009.rst @@ -0,0 +1,138 @@ +======= ==================================== +SEP 9 +Title Singleton removal +Author Pablo Hoffman +Created 2009-11-14 +Status Document in progress (being written) +======= ==================================== + +============================ +SEP-009 - Singletons removal +============================ + +This SEP proposes a refactoring of the Scrapy to get ri of singletons, which +will result in a cleaner API and will allow us to implement the library API +proposed in :doc:`sep-004`. 
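For orientation, the kind of library usage this refactoring is meant to enable
would look roughly like the sketch below. It only uses the ``Settings`` and
``Crawler`` names proposed in this SEP (see the "Using Scrapy as a library"
section further down); it is an illustrative sketch of the proposal, not the
API that eventually shipped.

::

    #!python
    from scrapy.conf import Settings      # settings class referenced by this SEP
    from scrapy.crawler import Crawler    # proposed replacement for ExecutionManager

    settings = Settings()
    # ...override the desired settings on the Settings object...

    # The Crawler instantiates its components (engine, extensions, spiders,
    # stats, log, signals) based on the given settings.
    crawler = Crawler(settings)

    # Roughly what the command line would do with the URLs or domains given.
    crawler.crawl('example.com')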
+ +Current singletons +================== + +Scrapy 0.7 has the following singletons: + +- Execution engine (``scrapy.core.engine.scrapyengine``) +- Execution manager (``scrapy.core.manager.scrapymanager``) +- Extension manager (``scrapy.extension.extensions``) +- Spider manager (``scrapy.spider.spiders``) +- Stats collector (``scrapy.stats.stats``) +- Logging system (``scrapy.log``) +- Signals system (``scrapy.xlib.pydispatcher``) + +Proposed API +============ + +The proposed architecture is to have one "root" object called ``Crawler`` +(which will replace the current Execution Manager) and make all current +singletons members of that object, as explained below: + +- **crawler**: ``scrapy.crawler.Crawler`` instance (replaces current + ``scrapy.core.manager.ExecutionManager``) - instantiated with a ``Settings`` + object + + - **crawler.settings**: ``scrapy.conf.Settings`` instance (passed in the constructor) + - **crawler.extensions**: ``scrapy.extension.ExtensionManager`` instance + - **crawler.engine**: ``scrapy.core.engine.ExecutionEngine`` instance + - ``crawler.engine.scheduler`` + - ``crawler.engine.scheduler.middleware`` - to access scheduler + middleware + - ``crawler.engine.downloader`` + - ``crawler.engine.downloader.middleware`` - to access downloader + middleware + - ``crawler.engine.scraper`` + - ``crawler.engine.scraper.spidermw`` - to access spider middleware + - **crawler.spiders**: ``SpiderManager`` instance (concrete class given in + ``SPIDER_MANAGER_CLASS`` setting) + - **crawler.stats**: ``StatsCollector`` instance (concrete class given in + ``STATS_CLASS`` setting) + - **crawler.log**: Logger class with methods replacing the current + ``scrapy.log`` functions. Logging would be started (if enabled) on + ``Crawler`` constructor, so no log starting functions are required. + + - ``crawler.log.msg`` + - **crawler.signals**: signal handling + - ``crawler.signals.send()`` - same as ``pydispatch.dispatcher.send()`` + - ``crawler.signals.connect()`` - same as + ``pydispatch.dispatcher.connect()`` + - ``crawler.signals.disconnect()`` - same as + ``pydispatch.dispatcher.disconnect()`` + +Required code changes after singletons removal +============================================== + +All components (extensions, middlewares, etc) will receive this ``Crawler`` +object in their constructors, and this will be the only mechanism for accessing +any other components (as opposed to importing each singleton from their +respective module). This will also serve to stabilize the core API, something +which we haven't documented so far (partly because of this). + +So, for a typical middleware constructor code, instead of this: + +:: + + #!python + from scrapy.core.exceptions import NotConfigured + from scrapy.conf import settings + + class SomeMiddleware(object): + def __init__(self): + if not settings.getbool('SOMEMIDDLEWARE_ENABLED'): + raise NotConfigured + +We'd write this: + +:: + + #!python + from scrapy.core.exceptions import NotConfigured + + class SomeMiddleware(object): + def __init__(self, crawler): + if not crawler.settings.getbool('SOMEMIDDLEWARE_ENABLED'): + raise NotConfigured + +Running from command line +========================= + +When running from **command line** (the only mechanism supported so far) the +``scrapy.command.cmdline`` module will: + +1. instantiate a ``Settings`` object and populate it with the values in + SCRAPY_SETTINGS_MODULE, and per-command overrides +2. 
instantiate a ``Crawler`` object with the ``Settings`` object (the + ``Crawler`` instantiates all its components based on the given settings) +3. run ``Crawler.crawl()`` with the URLs or domains passed in the command line + +Using Scrapy as a library +========================= + +When using Scrapy with the **library API**, the programmer will: + +1. instantiate a ``Settings`` object (which only has the defaults settings, by + default) and override the desired settings +2. instantiate a ``Crawler`` object with the ``Settings`` object + +Open issues to resolve +====================== + +- Should we pass ``Settings`` object to ``ScrapyCommand.add_options()``? +- How should spiders access settings? + - Option 1. Pass ``Crawler`` object to spider constructors too + - pro: one way to access all components (settings and signals being the + most relevant to spiders) + - con?: spider code can access (and control) any crawler component - + since we don't want to support spiders messing with the crawler (write + an extension or spider middleware if you need that) + - Option 2. Pass ``Settings`` object to spider constructors, which would + then be accessed through ``self.settings``, like logging which is accessed + through ``self.log`` + + - con: would need a way to access stats too diff --git a/sep/sep-009.trac b/sep/sep-009.trac deleted file mode 100644 index ce229d77267..00000000000 --- a/sep/sep-009.trac +++ /dev/null @@ -1,102 +0,0 @@ -= SEP-009 - Singletons removal = - -[[PageOutline(2-5,Contents)]] - -||'''SEP:'''||9|| -||'''Title:'''||Singleton removal || -||'''Author:'''||Pablo Hoffman|| -||'''Created:'''||2009-11-14|| -||'''Status'''||Document in progress (being written)|| - -== Introduction == - -This SEP proposes a refactoring of the Scrapy to get ri of singletons, which will result in a cleaner API and will allow us to implement the library API proposed in [wiki:SEP-004]. 
- -== Current singletons == - -Scrapy 0.7 has the following singletons: - - * Execution engine ({{{scrapy.core.engine.scrapyengine}}}) - * Execution manager ({{{scrapy.core.manager.scrapymanager}}}) - * Extension manager ({{{scrapy.extension.extensions}}}) - * Spider manager ({{{scrapy.spider.spiders}}}) - * Stats collector ({{{scrapy.stats.stats}}}) - * Logging system ({{{scrapy.log}}}) - * Signals system ({{{scrapy.xlib.pydispatcher}}}) - -== Proposed API == - -The proposed architecture is to have one "root" object called {{{Crawler}}} (which will replace the current Execution Manager) and make all current singletons members of that object, as explained below: - - * '''crawler''': {{{scrapy.crawler.Crawler}}} instance (replaces current {{{scrapy.core.manager.ExecutionManager}}}) - instantiated with a {{{Settings}}} object - * '''crawler.settings''': {{{scrapy.conf.Settings}}} instance (passed in the constructor) - * '''crawler.extensions''': {{{scrapy.extension.ExtensionManager}}} instance - * '''crawler.engine''': {{{scrapy.core.engine.ExecutionEngine}}} instance - * {{{crawler.engine.scheduler}}} - * {{{crawler.engine.scheduler.middleware}}} - to access scheduler middleware - * {{{crawler.engine.downloader}}} - * {{{crawler.engine.downloader.middleware}}} - to access downloader middleware - * {{{crawler.engine.scraper}}} - * {{{crawler.engine.scraper.spidermw}}} - to access spider middleware - * '''crawler.spiders''': {{{SpiderManager}}} instance (concrete class given in {{{SPIDER_MANAGER_CLASS}}} setting) - * '''crawler.stats''': {{{StatsCollector}}} instance (concrete class given in {{{STATS_CLASS}}} setting) - * '''crawler.log''': Logger class with methods replacing the current {{{scrapy.log}}} functions. Logging would be started (if enabled) on {{{Crawler}}} constructor, so no log starting functions are required. - * {{{crawler.log.msg}}} - * '''crawler.signals''': signal handling - * {{{crawler.signals.send()}}} - same as {{{pydispatch.dispatcher.send()}}} - * {{{crawler.signals.connect()}}} - same as {{{pydispatch.dispatcher.connect()}}} - * {{{crawler.signals.disconnect()}}} - same as {{{pydispatch.dispatcher.disconnect()}}} - -== Required code changes after singletons removal == - -All components (extensions, middlewares, etc) will receive this {{{Crawler}}} object in their constructors, and this will be the only mechanism for accessing any other components (as opposed to importing each singleton from their respective module). This will also serve to stabilize the core API, something which we haven't documented so far (partly because of this). - -So, for a typical middleware constructor code, instead of this: - -{{{ -#!python -from scrapy.core.exceptions import NotConfigured -from scrapy.conf import settings - -class SomeMiddleware(object): - def __init__(self): - if not settings.getbool('SOMEMIDDLEWARE_ENABLED'): - raise NotConfigured -}}} - -We'd write this: - -{{{ -#!python -from scrapy.core.exceptions import NotConfigured - -class SomeMiddleware(object): - def __init__(self, crawler): - if not crawler.settings.getbool('SOMEMIDDLEWARE_ENABLED'): - raise NotConfigured -}}} - -== Running from command line == - -When running from '''command line''' (the only mechanism supported so far) the {{{scrapy.command.cmdline}}} module will: - - 1. instantiate a {{{Settings}}} object and populate it with the values in SCRAPY_SETTINGS_MODULE, and per-command overrides - 2. 
instantiate a {{{Crawler}}} object with the {{{Settings}}} object (the {{{Crawler}}} instantiates all its components based on the given settings) - 3. run {{{Crawler.crawl()}}} with the URLs or domains passed in the command line - -== Using Scrapy as a library == - -When using Scrapy with the '''library API''', the programmer will: - - 1. instantiate a {{{Settings}}} object (which only has the defaults settings, by default) and override the desired settings - 2. instantiate a {{{Crawler}}} object with the {{{Settings}}} object - -== Open issues to resolve == - - * Should we pass {{{Settings}}} object to {{{ScrapyCommand.add_options()}}}? - * How should spiders access settings? - * Option 1. Pass {{{Crawler}}} object to spider constructors too - * pro: one way to access all components (settings and signals being the most relevant to spiders) - * con?: spider code can access (and control) any crawler component - since we don't want to support spiders messing with the crawler (write an extension or spider middleware if you need that) - * Option 2. Pass {{{Settings}}} object to spider constructors, which would then be accessed through {{{self.settings}}}, like logging which is accessed through {{{self.log}}} - * con: would need a way to access stats too \ No newline at end of file From 138e534bc2661e86d10b42967fa896f0ee29efb4 Mon Sep 17 00:00:00 2001 From: Edwin O Marshall Date: Fri, 7 Mar 2014 11:52:12 -0500 Subject: [PATCH 7/7] converted sep017 --- sep/sep-017.rst | 111 +++++++++++++++++++++++++++++++++++++++++++++++ sep/sep-017.trac | 90 -------------------------------------- 2 files changed, 111 insertions(+), 90 deletions(-) create mode 100644 sep/sep-017.rst delete mode 100644 sep/sep-017.trac diff --git a/sep/sep-017.rst b/sep/sep-017.rst new file mode 100644 index 00000000000..7707a162219 --- /dev/null +++ b/sep/sep-017.rst @@ -0,0 +1,111 @@ +======= ================ +SEP 17 +Title Spider Contracts +Author Insophia Team +Created 2010-06-10 +Status Draft +======= ================ + +========================= +SEP-017: Spider Contracts +========================= + +The motivation for Spider Contracts is to build a lightweight mechanism for +testing your spiders, and be able to run the tests quickly without having to +wait for all the spider to run. It's partially based on the +[http://en.wikipedia.org/wiki/Design_by_contract Design by contract] approach +(hence its name) where you define certain conditions that spider callbacks must +met, and you give example testing pages. + +How it works +============ + +In the docstring of your spider callbacks, you write certain tags that define +the spider contract. For example, the URL of a sample page for that callback, +and what you expect to scrape from it. + +Then you can run a command to check that the spider contracts are met. + +Contract examples +================= + +gExample URL for simple callback +-------------------------------- + +The ``parse_product`` callback must return items containing the fields given in +``@scrapes``. + +:: + + #!python + class ProductSpider(BaseSpider): + + def parse_product(self, response): + """ + @url http://www.example.com/store/product.php?id=123 + @scrapes name, price, description + """" + +gChained callbacks +------------------ + +The following spider contains two callbacks, one for login to a site, and the +other for scraping user profile info. + +The contracts assert that the first callback returns a Request and the second +one scrape ``user, name, email`` fields. 
+ +:: + + #!python + class UserProfileSpider(BaseSpider): + + def parse_login_page(self, response): + """ + @url http://www.example.com/login.php + @returns_request + """ + # returns Request with callback=self.parse_profile_page + + def parse_profile_page(self, response): + """ + @after parse_login_page + @scrapes user, name, email + """" + # ... + +Tags reference +============== + +Note that tags can also be extended by users, meaning that you can have your +own custom contract tags in your Scrapy project. + +==================== ========================================================== +``@url`` url of a sample page parsed by the callback +``@after`` the callback is called with the response generated by the + specified callback +``@scrapes`` list of fields that must be present in the item(s) scraped + by the callback +``@returns_request`` the callback must return one (and only one) Request +==================== ========================================================== + +Some tag constraints: + + * a callback cannot contain ``@url`` and ``@after`` + +Checking spider contracts +========================= + +To check the contracts of a single spider: + +:: + + scrapy-ctl.py check example.com + +Or to check all spiders: + +:: + + scrapy-ctl.py check + +No need to wait for the whole spider to run. diff --git a/sep/sep-017.trac b/sep/sep-017.trac deleted file mode 100644 index 1e55b56f8dd..00000000000 --- a/sep/sep-017.trac +++ /dev/null @@ -1,90 +0,0 @@ -= SEP-017: Spider Contracts = - -[[PageOutline(2-5,Contents)]] - -||'''SEP:'''||17|| -||'''Title:'''||Spider Contracts|| -||'''Author:'''||Insophia Team|| -||'''Created:'''||2010-06-10|| -||'''Status'''||Draft|| - -== Introduction == - -The motivation for Spider Contracts is to build a lightweight mechanism for testing your spiders, and be able to run the tests quickly without having to wait for all the spider to run. It's partially based on the [http://en.wikipedia.org/wiki/Design_by_contract Design by contract] approach (hence its name) where you define certain conditions that spider callbacks must met, and you give example testing pages. - -== How it works == - -In the docstring of your spider callbacks, you write certain tags that define the spider contract. For example, the URL of a sample page for that callback, and what you expect to scrape from it. - -Then you can run a command to check that the spider contracts are met. - -== Contract examples == - -=== Example URL for simple callback === - -The {{{parse_product}}} callback must return items containing the fields given in {{{@scrapes}}}. - -{{{ -#!python -class ProductSpider(BaseSpider): - - def parse_product(self, response): - """ - @url http://www.example.com/store/product.php?id=123 - @scrapes name, price, description - """" -}}} - -=== Chained callbacks === - -The following spider contains two callbacks, one for login to a site, and the other for scraping user profile info. - -The contracts assert that the first callback returns a Request and the second one scrape {{{{user, name, email}}} fields. - -{{{ -#!python -class UserProfileSpider(BaseSpider): - - def parse_login_page(self, response): - """ - @url http://www.example.com/login.php - @returns_request - """ - # returns Request with callback=self.parse_profile_page - - def parse_profile_page(self, response): - """ - @after parse_login_page - @scrapes user, name, email - """" - # ... 
-}}} - -== Tags reference == - -Note that tags can also be extended by users, meaning that you can have your own custom contract tags in your Scrapy project. - -||{{{@url}}} || url of a sample page parsed by the callback || -||{{{@after}}} || the callback is called with the response generated by the specified callback || -||{{{@scrapes}}} || list of fields that must be present in the item(s) scraped by the callback || -||{{{@returns_request}}} || the callback must return one (and only one) Request || - -Some tag constraints: - - * a callback cannot contain {{{@url}}} and {{{@after}}} - -== Checking spider contracts == - -To check the contracts of a single spider: - -{{{ -scrapy-ctl.py check example.com -}}} - -Or to check all spiders: - -{{{ -scrapy-ctl.py check -}}} - -No need to wait for the whole spider to run.