The extensions framework provides a mechanism for inserting your own custom functionality into Scrapy.
Extensions are just regular classes that are instantiated at Scrapy startup, when extensions are initialized.
Extensions use the Scrapy settings to manage their settings, just like any other Scrapy code.
It is customary for extensions to prefix their settings with their own name, to avoid collisions with existing (and future) extensions. For example, a hypothetical extension to handle Google Sitemaps would use settings like GOOGLESITEMAP_ENABLED, GOOGLESITEMAP_DEPTH, and so on.
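For instance, the settings for such a hypothetical extension could be declared in the project settings module like any other Scrapy setting (the names below are purely illustrative):
# settings.py -- illustrative settings for a hypothetical Google Sitemaps extension
GOOGLESITEMAP_ENABLED = True
GOOGLESITEMAP_DEPTH = 3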
Extensions are loaded and activated at startup by instantiating a single instance of the extension class. Therefore, all the extension initialization code must be performed in the class constructor (__init__ method).
To make an extension available, add it to the EXTENSIONS setting in your Scrapy settings. In EXTENSIONS, each extension is represented by a string: the full Python path to the extension's class name. For example:
EXTENSIONS = {
    'scrapy.contrib.corestats.CoreStats': 500,
    'scrapy.webservice.WebService': 500,
    'scrapy.telnet.TelnetConsole': 500,
}
As you can see, the EXTENSIONS setting is a dict where the keys are the extension paths and the values are the orders, which define the extension loading order. Extension orders are not as important as middleware orders, though, and they are typically irrelevant, i.e. it doesn't matter in which order the extensions are loaded because they don't depend on each other [1].
However, this feature can be exploited if you need to add an extension which depends on other extensions already loaded.
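For example, assuming lower order values are loaded first, a hypothetical extension that must come after another could be registered like this (both class paths are purely illustrative):
EXTENSIONS = {
    'myproject.extensions.BaseExtension': 100,       # hypothetical extension, loaded first
    'myproject.extensions.DependentExtension': 600,  # hypothetical extension, loaded after its dependency
}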
[1] This is why the EXTENSIONS_BASE setting in Scrapy (which contains all built-in extensions enabled by default) defines all the extensions with the same order (500).
Not all available extensions will be enabled. Some of them usually depend on a particular setting. For example, the HTTP Cache extension is available by default but disabled unless the HTTPCACHE_ENABLED
setting is set.
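For example, to turn on the HTTP Cache extension you would set, in your project settings:
HTTPCACHE_ENABLED = True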
Even though it's not usually needed, you can access extension objects through the Extension Manager, which is populated when extensions are loaded. For example, to access the WebService extension:
from scrapy.project import extensions
webservice_extension = extensions.enabled['WebService']
Writing your own extension is easy. Each extension is a single Python class which doesn't need to implement any particular method.
All extension initialization code must be performed in the class constructor (__init__ method). If that method raises the scrapy.exceptions.NotConfigured exception, the extension will be disabled. Otherwise, the extension will be enabled.
Let's take a look at the following example extension which just logs a message every time a domain/spider is opened and closed:
from scrapy.xlib.pydispatch import dispatcher
from scrapy import signals
from scrapy import log

class SpiderOpenCloseLogging(object):

    def __init__(self):
        # connect the extension's methods to the spider_opened/spider_closed signals
        dispatcher.connect(self.spider_opened, signal=signals.spider_opened)
        dispatcher.connect(self.spider_closed, signal=signals.spider_closed)

    def spider_opened(self, spider):
        log.msg("opened spider %s" % spider.name)

    def spider_closed(self, spider):
        log.msg("closed spider %s" % spider.name)
scrapy.extension
The Extension Manager is responsible for loading and keeping track of installed extensions, and it's configured through the EXTENSIONS setting, which contains a dictionary of all available extensions and their order, similar to how you configure the downloader middlewares.
The Extension Manager is a singleton object, which is instantiated at module loading time and can be accessed like this:
from scrapy.project import extensions
loaded
A boolean which is True if extensions are already loaded or False if they're not.
enabled
A dict with the enabled extensions. The keys are the extension class names, and the values are the extension objects. Example:
>>> from scrapy.project import extensions
>>> extensions.load()
>>> print extensions.enabled
{'CoreStats': <scrapy.contrib.corestats.CoreStats object at 0x9e272ac>,
 'WebService': <scrapy.webservice.WebService object at 0xa05670c>,
...
disabled
A dict with the disabled extensions. The keys are the extension class names, and the values are the extension class paths (because objects are never instantiated for disabled extensions). Example:
>>> from scrapy.project import extensions
>>> extensions.load()
>>> print extensions.disabled
{'MemoryDebugger': 'scrapy.contrib.memdebug.MemoryDebugger',
'MyExtension': 'myproject.extensions.MyExtension',
...
load()
Load the available extensions configured in the EXTENSIONS
setting. On a standard run, this method is usually called by the Execution Manager, but you may need to call it explicitly if you're dealing with code outside Scrapy.
reload()
Reload the available extensions. See load().
scrapy.contrib.logstats
Log basic stats like crawled pages and scraped items.
scrapy.contrib.corestats
Enable the collection of core statistics, provided the stats collection is enabled (see the Stats Collection documentation).
scrapy.webservice
See the Web Service documentation.
scrapy.telnet
Provides a telnet console for getting into a Python interpreter inside the currently running Scrapy process, which can be very useful for debugging.
The telnet console must be enabled by the TELNETCONSOLE_ENABLED setting, and the server will listen on the port specified in TELNETCONSOLE_PORT.
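For example:
TELNETCONSOLE_ENABLED = True
# TELNETCONSOLE_PORT can be set as well; its expected format (single port or range) depends on your Scrapy version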
scrapy.contrib.memusage
Note: This extension does not work on Windows.
Allows monitoring the memory used by a Scrapy process and:
1. send a notification e-mail when it exceeds a certain value
2. terminate the Scrapy process when it exceeds a certain value
The notification e-mails can be triggered when a certain warning value is reached (MEMUSAGE_WARNING_MB) and when the maximum value is reached (MEMUSAGE_LIMIT_MB), which will also cause the Scrapy process to be terminated.
This extension is enabled by the MEMUSAGE_ENABLED
setting and can be configured with the following settings:
MEMUSAGE_LIMIT_MB
MEMUSAGE_WARNING_MB
MEMUSAGE_NOTIFY_MAIL
MEMUSAGE_REPORT
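For example, a possible configuration (all values below are illustrative only):
MEMUSAGE_ENABLED = True
MEMUSAGE_LIMIT_MB = 2048          # terminate the process if it exceeds 2048 MB
MEMUSAGE_WARNING_MB = 1536        # send a warning e-mail above 1536 MB
MEMUSAGE_NOTIFY_MAIL = ['you@example.com']
MEMUSAGE_REPORT = True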
scrapy.contrib.memdebug
An extension for debugging memory usage. It collects information about:
- objects uncollected by the Python garbage collector
- libxml2 memory leaks
- objects left alive that shouldn't be (for more info, see Debugging memory leaks with trackref)
To enable this extension, turn on the MEMDEBUG_ENABLED
setting. The info will be stored in the stats.
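For example:
MEMDEBUG_ENABLED = True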
scrapy.contrib.closespider
Closes a spider automatically when some conditions are met, using a specific closing reason for each condition.
The conditions for closing a spider can be configured through the following settings:
CLOSESPIDER_TIMEOUT
CLOSESPIDER_ITEMCOUNT
CLOSESPIDER_PAGECOUNT
CLOSESPIDER_ERRORCOUNT
CLOSESPIDER_TIMEOUT
Default: 0
An integer which specifies a number of seconds. If the spider remains open for more than that number of seconds, it will be automatically closed with the reason closespider_timeout. If zero (or not set), spiders won't be closed by timeout.
CLOSESPIDER_ITEMCOUNT
Default: 0
An integer which specifies a number of items. If the spider scrapes more than that number of items and those items are passed by the item pipeline, the spider will be closed with the reason closespider_itemcount. If zero (or not set), spiders won't be closed by number of passed items.
CLOSESPIDER_PAGECOUNT
New in version 0.11.
Default: 0
An integer which specifies the maximum number of responses to crawl. If the spider crawls more than that, it will be closed with the reason closespider_pagecount. If zero (or not set), spiders won't be closed by number of crawled responses.
CLOSESPIDER_ERRORCOUNT
New in version 0.11.
Default: 0
An integer which specifies the maximum number of errors to receive before closing the spider. If the spider generates more than that number of errors, it will be closed with the reason closespider_errorcount. If zero (or not set), spiders won't be closed by number of errors.
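For example, a crawl could be limited with a combination of these settings (all values below are illustrative only):
CLOSESPIDER_TIMEOUT = 3600      # close the spider after one hour
CLOSESPIDER_ITEMCOUNT = 1000    # ...or after 1000 items have passed the item pipeline
CLOSESPIDER_PAGECOUNT = 5000    # ...or after 5000 responses have been crawled
CLOSESPIDER_ERRORCOUNT = 10     # ...or after 10 errors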
scrapy.contrib.statsmailer
This simple extension can be used to send a notification e-mail every time a spider has finished scraping, including the Scrapy stats collected. The e-mail will be sent to all recipients specified in the STATSMAILER_RCPTS setting.
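For example:
STATSMAILER_RCPTS = ['scraping-reports@example.com']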
scrapy.contrib.debug
Dumps information about the running process when a SIGQUIT or SIGUSR2 signal is received. The information dumped is the following:
- engine status (using scrapy.utils.engine.get_engine_status())
- live references (see Debugging memory leaks with trackref)
- stack trace of all threads
After the stack trace and engine status is dumped, the Scrapy process continues running normally.
This extension only works on POSIX-compliant platforms (i.e. not Windows), because the SIGQUIT and SIGUSR2 signals are not available on Windows.
There are at least two ways to send Scrapy the SIGQUIT signal:
- By pressing Ctrl-\ while a Scrapy process is running (Linux only?)
- By running this command (assuming <pid> is the process id of the Scrapy process):
  kill -QUIT <pid>
The Debugger extension invokes a Python debugger inside a running Scrapy process when a SIGUSR2 signal is received. After the debugger is exited, the Scrapy process continues running normally.
For more info see Debugging in Python.
This extension only works on POSIX-compliant platforms (i.e. not Windows).