Coroutines

New in version 2.0.

Scrapy has partial support for the coroutine syntax.

Warning

asyncio support in Scrapy is experimental. Future Scrapy versions may introduce related API and behavior changes without a deprecation period or warning.

Supported callables

The following callables may be defined as coroutines using async def, and hence use coroutine syntax (e.g. await, async for, async with):

  • Request callbacks (see the sketch after this list).

    The following are known caveats of the current implementation that we aim to address in future versions of Scrapy:

    • The callback output is not processed until the whole callback finishes.

      As a side effect, if the callback raises an exception, none of its output is processed.

    • Because asynchronous generators were introduced in Python 3.6, you can only use yield if you are using Python 3.6 or later.

      If you need to output multiple items or requests and you are using Python 3.5, return an iterable (e.g. a list) instead.

  • The process_item method of item pipelines.
  • The process_request, process_response, and process_exception methods of downloader middlewares.
  • Signal handlers that support deferreds.
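
For example, a Request callback declared with async def can await asynchronous code before producing its output. A minimal sketch, assuming a hypothetical awaitable helper fetch_extra_data that is not part of Scrapy:

from scrapy import Spider

class ExampleSpider(Spider):
    name = 'example'
    start_urls = ['https://example.com']

    async def parse(self, response):
        # The whole callback finishes before any of its output is processed
        # (see the caveats above), so it returns an iterable of items.
        extra = await fetch_extra_data(response.url)  # hypothetical helper
        return [{'url': response.url, 'extra': extra}]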

Usage

There are several use cases for coroutines in Scrapy. Code that would return Deferreds when written for previous Scrapy versions, such as downloader middlewares and signal handlers, can be rewritten to be shorter and cleaner:

class DbPipeline:
    def _update_item(self, data, item):
        item['field'] = data
        return item

    def process_item(self, item, spider):
        dfd = db.get_some_data(item['id'])
        dfd.addCallback(self._update_item, item)
        return dfd

becomes:

class DbPipeline:
    async def process_item(self, item, spider):
        item['field'] = await db.get_some_data(item['id'])
        return item
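
Downloader middlewares and signal handlers can be simplified in the same way. A minimal sketch of a downloader middleware whose process_request awaits a hypothetical asynchronous lookup (get_auth_token is not part of Scrapy):

class AuthHeaderMiddleware:
    async def process_request(self, request, spider):
        # get_auth_token is a hypothetical awaitable that fetches a token
        token = await get_auth_token(spider.name)
        request.headers['Authorization'] = 'Bearer ' + token
        # returning None lets Scrapy continue processing the request as usual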

Coroutines may be used to call asynchronous code. This includes other coroutines, functions that return Deferreds and functions that return awaitable objects such as asyncio.Future. This means you can use many useful Python libraries providing such code:

import aiohttp
import treq
from scrapy import Spider


class MySpider(Spider):
    # ...
    async def parse_with_deferred(self, response):
        additional_response = await treq.get('https://additional.url')
        additional_data = await treq.content(additional_response)
        # ... use response and additional_data to yield items and requests

    async def parse_with_asyncio(self, response):
        async with aiohttp.ClientSession() as session:
            async with session.get('https://additional.url') as additional_response:
                additional_data = await additional_response.text()
        # ... use response and additional_data to yield items and requests

Note

Many libraries that use coroutines, such as aio-libs, require the asyncio event loop; to use them you need to enable asyncio support in Scrapy.
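
At the time of writing, asyncio support is enabled by selecting the asyncio-based Twisted reactor in the project settings, a setting introduced in Scrapy 2.0:

# settings.py
TWISTED_REACTOR = 'twisted.internet.asyncioreactor.AsyncioSelectorReactor'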

Common use cases for asynchronous code include:

  • requesting data from websites, databases and other services (in callbacks, pipelines and middlewares);
  • storing data in databases (in pipelines and middlewares);
  • delaying the spider initialization until some external event (in the spider_opened handler; see the sketch after this list);
  • calling asynchronous Scrapy methods like ExecutionEngine.download (see the screenshot pipeline example).
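
For instance, delaying spider initialization can be done with a coroutine connected to the spider_opened signal. A minimal sketch, assuming a hypothetical awaitable wait_for_backend that is not part of Scrapy:

from scrapy import Spider, signals

class DelayedSpider(Spider):
    name = 'delayed'

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super().from_crawler(crawler, *args, **kwargs)
        crawler.signals.connect(spider.spider_opened, signal=signals.spider_opened)
        return spider

    async def spider_opened(self, spider):
        # the crawl does not proceed until this handler's awaitable resolves
        await wait_for_backend()  # hypothetical awaitable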