.. module:: scrapy.http
   :synopsis: Request and Response classes
Scrapy uses :class:`Request` and :class:`Response` objects for crawling web sites.
Typically, :class:`Request` objects are generated in the spiders and pass across the system until they reach the Downloader, which executes the request and returns a :class:`Response` object which travels back to the spider that issued the request.
Both :class:`Request` and :class:`Response` classes have subclasses which add functionality not required in the base classes. These are described below in :ref:`topics-request-response-ref-request-subclasses` and :ref:`topics-request-response-ref-response-subclasses`.
A :class:`Request` object represents an HTTP request, which is usually generated in the Spider and executed by the Downloader, thus generating a :class:`Response`.
:param url: the URL of this request
:type url: string
:param callback: the function that will be called with the response of this
    request (once it's downloaded) as its first parameter. For more information
    see :ref:`topics-request-response-ref-request-callback-arguments` below.
    If a Request doesn't specify a callback, the spider's
    :meth:`~scrapy.spiders.Spider.parse` method will be used. Note that if
    exceptions are raised during processing, errback is called instead.
:type callback: callable
:param method: the HTTP method of this request. Defaults to ``'GET'``.
:type method: string
:param meta: the initial values for the :attr:`Request.meta` attribute. If
    given, the dict passed in this parameter will be shallow copied.
:type meta: dict
:param body: the request body. If a ``unicode`` is passed, then it's encoded
    to ``str`` using the ``encoding`` passed (which defaults to ``utf-8``). If
    ``body`` is not given, an empty string is stored. Regardless of the type
    of this argument, the final value stored will be a ``str`` (never
    ``unicode`` or ``None``).
:type body: str or unicode
:param headers: the headers of this request. The dict values can be strings
    (for single valued headers) or lists (for multi-valued headers). If
    ``None`` is passed as value, the HTTP header will not be sent at all.
:type headers: dict
:param cookies: the request cookies. These can be sent in two forms: as a
    ``dict`` of name/value pairs, or as a list of dicts. The latter form
    allows for customizing the ``domain`` and ``path`` attributes of each
    cookie.

    When some site returns cookies (in a response) those are stored in the
    cookies for that domain and will be sent again in future requests. That's
    the typical behaviour of any regular web browser. However, if, for some
    reason, you want to avoid merging with existing cookies you can instruct
    Scrapy to do so by setting the ``dont_merge_cookies`` key to ``True`` in
    :attr:`Request.meta`.

    Example of a request without merging cookies::

        request_with_cookies = Request(url="http://www.example.com",
                                       cookies={'currency': 'USD', 'country': 'UY'},
                                       meta={'dont_merge_cookies': True})

    For more info see :ref:`cookies-mw`.
:type cookies: dict or list
:param encoding: the encoding of this request (defaults to ``'utf-8'``).
:type encoding: string
:param priority: the priority of this request (defaults to ``0``).
:type priority: int
:param dont_filter: indicates that this request should not be filtered by
    the scheduler. This is used when you want to perform an identical
    request multiple times, to ignore the duplicates filter. Use it with
    care, or you will get into crawling loops. Defaults to ``False``.
:type dont_filter: boolean
:param errback: a function that will be called if any exception was raised
    while processing the request. This includes pages that failed with 404
    HTTP errors and such. It receives a Twisted Failure instance as first
    parameter. For more information, see
    :ref:`topics-request-response-ref-errbacks` below.
:type errback: callable
.. attribute:: Request.url

    A string containing the URL of this request. Keep in mind that this
    attribute contains the escaped URL, so it can differ from the URL passed
    in the constructor.

    This attribute is read-only. To change the URL of a Request use
    :meth:`replace`.

.. attribute:: Request.method

    A string representing the HTTP method in the request. This is guaranteed
    to be uppercase. Example: ``"GET"``, ``"POST"``, ``"PUT"``, etc.

.. attribute:: Request.headers

    A dictionary-like object which contains the request headers.

.. attribute:: Request.body

    A str that contains the request body.

    This attribute is read-only. To change the body of a Request use
    :meth:`replace`.

.. attribute:: Request.meta

    A dict that contains arbitrary metadata for this request. This dict is
    empty for new Requests, and is usually populated by different Scrapy
    components (extensions, middlewares, etc), so the data contained in this
    dict depends on the extensions you have enabled. See
    :ref:`topics-request-meta` for a list of special meta keys recognized by
    Scrapy.

    This dict is `shallow copied`_ when the request is cloned using the
    ``copy()`` or ``replace()`` methods, and can also be accessed, in your
    spider, from the ``response.meta`` attribute.

.. method:: Request.copy()

    Return a new Request which is a copy of this Request. See also:
    :ref:`topics-request-response-ref-request-callback-arguments`.

.. method:: Request.replace([url, method, headers, body, cookies, meta, encoding, dont_filter, callback, errback])

    Return a Request object with the same members, except for those members
    given new values by whichever keyword arguments are specified. The
    attribute :attr:`Request.meta` is copied by default (unless a new value
    is given in the ``meta`` argument). See also
    :ref:`topics-request-response-ref-request-callback-arguments`.
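    As a minimal sketch (the URLs are placeholders), cloning a request while
    overriding a couple of its members looks like this::

        import scrapy

        req = scrapy.Request("http://www.example.com/page1")

        # Clone the request, changing only the URL and the dont_filter flag;
        # all other members (headers, cookies, callback, ...) are carried
        # over, and req.meta is shallow copied into the new request.
        new_req = req.replace(url="http://www.example.com/page2",
                              dont_filter=True)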
The callback of a request is a function that will be called when the response of that request is downloaded. The callback function will be called with the downloaded :class:`Response` object as its first argument.
Example::

    def parse_page1(self, response):
        return scrapy.Request("http://www.example.com/some_page.html",
                              callback=self.parse_page2)

    def parse_page2(self, response):
        # this would log http://www.example.com/some_page.html
        self.logger.info("Visited %s", response.url)
In some cases you may be interested in passing arguments to those callback functions so you can receive the arguments later, in the second callback. You can use the :attr:`Request.meta` attribute for that.
Here's an example of how to pass an item using this mechanism, to populate different fields from different pages::

    def parse_page1(self, response):
        item = MyItem()
        item['main_url'] = response.url
        request = scrapy.Request("http://www.example.com/some_page.html",
                                 callback=self.parse_page2)
        request.meta['item'] = item
        return request

    def parse_page2(self, response):
        item = response.meta['item']
        item['other_url'] = response.url
        return item
The errback of a request is a function that will be called when an exception is raised while processing it.
It receives a Twisted Failure instance as its first parameter and can be used to track connection establishment timeouts, DNS errors, etc.
Here's an example spider logging all errors and catching some specific errors if needed::

    import scrapy

    from scrapy.spidermiddlewares.httperror import HttpError
    from twisted.internet.error import DNSLookupError
    from twisted.internet.error import TimeoutError, TCPTimedOutError

    class ErrbackSpider(scrapy.Spider):
        name = "errback_example"
        start_urls = [
            "http://www.httpbin.org/",              # HTTP 200 expected
            "http://www.httpbin.org/status/404",    # Not found error
            "http://www.httpbin.org/status/500",    # server issue
            "http://www.httpbin.org:12345/",        # non-responding host, timeout expected
            "http://www.httphttpbinbin.org/",       # DNS error expected
        ]

        def start_requests(self):
            for u in self.start_urls:
                yield scrapy.Request(u, callback=self.parse_httpbin,
                                        errback=self.errback_httpbin,
                                        dont_filter=True)

        def parse_httpbin(self, response):
            self.logger.info('Got successful response from {}'.format(response.url))
            # do something useful here...

        def errback_httpbin(self, failure):
            # log all failures
            self.logger.error(repr(failure))

            # in case you want to do something special for some errors,
            # you may need the failure's type:

            if failure.check(HttpError):
                # these exceptions come from HttpError spider middleware
                # you can get the non-200 response
                response = failure.value.response
                self.logger.error('HttpError on %s', response.url)

            elif failure.check(DNSLookupError):
                # this is the original request
                request = failure.request
                self.logger.error('DNSLookupError on %s', request.url)

            elif failure.check(TimeoutError, TCPTimedOutError):
                request = failure.request
                self.logger.error('TimeoutError on %s', request.url)
The :attr:`Request.meta` attribute can contain any arbitrary data, but there are some special keys recognized by Scrapy and its built-in extensions.
Those are:
- :reqmeta:`dont_redirect`
- :reqmeta:`dont_retry`
- :reqmeta:`handle_httpstatus_list`
- :reqmeta:`handle_httpstatus_all`
- ``dont_merge_cookies`` (see ``cookies`` parameter of the :class:`Request` constructor)
- :reqmeta:`cookiejar`
- :reqmeta:`dont_cache`
- :reqmeta:`redirect_urls`
- :reqmeta:`bindaddress`
- :reqmeta:`dont_obey_robotstxt`
- :reqmeta:`download_timeout`
- :reqmeta:`download_maxsize`
- :reqmeta:`proxy`
.. reqmeta:: bindaddress
The outgoing IP address to use for performing the request.
.. reqmeta:: download_timeout
The amount of time (in secs) that the downloader will wait before timing out. See also: :setting:`DOWNLOAD_TIMEOUT`.
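For instance, a minimal sketch (the URL and timeout value are arbitrary) of overriding the timeout for a single request through :attr:`Request.meta`::

    def start_requests(self):
        # This one request times out after 10 seconds, regardless of the
        # project-wide DOWNLOAD_TIMEOUT setting.
        yield scrapy.Request("http://www.example.com/slow-page",
                             meta={'download_timeout': 10})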
Here is the list of built-in :class:`Request` subclasses. You can also subclass it to implement your own custom functionality.
The FormRequest class extends the base :class:`Request` with functionality for dealing with HTML forms. It uses lxml.html forms to pre-populate form fields with form data from :class:`Response` objects.
If you want to simulate an HTML form POST in your spider and send a couple of key-value fields, you can return a :class:`FormRequest` object (from your spider) like this::

    return [FormRequest(url="http://www.example.com/post/action",
                        formdata={'name': 'John Doe', 'age': '27'},
                        callback=self.after_post)]
It is usual for web sites to provide pre-populated form fields through
``<input type="hidden">`` elements, such as session related data or
authentication tokens (for login pages). When scraping, you'll want these
fields to be automatically pre-populated and only override a couple of them,
such as the user name and password. You can use the
:meth:`FormRequest.from_response` method for this job. Here's an example
spider which uses it::

    import scrapy

    class LoginSpider(scrapy.Spider):
        name = 'example.com'
        start_urls = ['http://www.example.com/users/login.php']

        def parse(self, response):
            return scrapy.FormRequest.from_response(
                response,
                formdata={'username': 'john', 'password': 'secret'},
                callback=self.after_login
            )

        def after_login(self, response):
            # check login succeeded before going on
            if "authentication failed" in response.body:
                self.logger.error("Login failed")
                return

            # continue scraping with authenticated session...
A :class:`Response` object represents an HTTP response, which is usually downloaded (by the Downloader) and fed to the Spiders for processing.
:param url: the URL of this response
:type url: string
:param headers: the headers of this response. The dict values can be strings
    (for single valued headers) or lists (for multi-valued headers).
:type headers: dict
:param status: the HTTP status of the response. Defaults to ``200``.
:type status: integer
:param body: the response body. It must be str, not unicode, unless you're
    using an encoding-aware :ref:`Response subclass
    <topics-request-response-ref-response-subclasses>`, such as
    :class:`TextResponse`.
:type body: str
:param meta: the initial values for the :attr:`Response.meta` attribute. If
    given, the dict will be shallow copied.
:type meta: dict
:param flags: a list containing the initial values for the
    :attr:`Response.flags` attribute. If given, the list will be shallow
    copied.
:type flags: list
.. attribute:: Response.url

    A string containing the URL of the response.

    This attribute is read-only. To change the URL of a Response use
    :meth:`replace`.

.. attribute:: Response.status

    An integer representing the HTTP status of the response. Example: ``200``,
    ``404``.

.. attribute:: Response.headers

    A dictionary-like object which contains the response headers.

.. attribute:: Response.body

    The body of this Response. Keep in mind that Response.body is always a
    bytes object. If you want the unicode version use :attr:`TextResponse.text`
    (only available in :class:`TextResponse` and subclasses).

    This attribute is read-only. To change the body of a Response use
    :meth:`replace`.
.. attribute:: Response.request

    The :class:`Request` object that generated this response. This attribute
    is assigned in the Scrapy engine, after the response and the request have
    passed through all :ref:`Downloader Middlewares
    <topics-downloader-middleware>`. In particular, this means that:

    - HTTP redirections will cause the original request (to the URL before
      redirection) to be assigned to the redirected response (with the final
      URL after redirection).

    - Response.request.url doesn't always equal Response.url.

    - This attribute is only available in the spider code, and in the
      :ref:`Spider Middlewares <topics-spider-middleware>`, but not in
      Downloader Middlewares (although you have the Request available there
      by other means) and handlers of the :signal:`response_downloaded`
      signal.
.. attribute:: Response.meta

    A shortcut to the :attr:`Request.meta` attribute of the
    :attr:`Response.request` object (ie. ``self.request.meta``).

    Unlike the :attr:`Response.request` attribute, the :attr:`Response.meta`
    attribute is propagated along redirects and retries, so you will get the
    original :attr:`Request.meta` sent from your spider.

    .. seealso:: :attr:`Request.meta` attribute
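    For example, a minimal sketch (the ``page_num`` meta key is hypothetical)
    of reading that metadata from the response side::

        def parse_result(self, response):
            # Same dict as the originating request's meta, preserved even
            # if the request was redirected or retried along the way.
            page_num = response.meta.get('page_num', 0)
            self.logger.info("Parsed %s (page_num=%s)", response.url, page_num)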
.. attribute:: Response.flags

    A list that contains flags for this response. Flags are labels used for
    tagging Responses. For example: ``'cached'``, ``'redirected'``, etc. They
    are shown on the string representation of the Response (``__str__``
    method), which is used by the engine for logging.
.. method:: Response.copy()

    Returns a new Response which is a copy of this Response.
.. method:: Response.replace([url, status, headers, body, request, flags, cls])

    Returns a Response object with the same members, except for those members
    given new values by whichever keyword arguments are specified. The
    attribute :attr:`Response.meta` is copied by default.
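    As a minimal sketch (the status and body values are placeholders), e.g.
    patching a response from a downloader middleware::

        patched = response.replace(status=200, body='<html>patched</html>')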
.. method:: Response.urljoin(url)

    Constructs an absolute url by combining the Response's :attr:`url` with a
    possible relative url.

    This is a wrapper over `urlparse.urljoin`_, it's merely an alias for
    making this call::

        urlparse.urljoin(response.url, url)
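    For example, a minimal sketch (assuming a :class:`TextResponse` so that
    ``.css()`` is available; the selector is arbitrary) of resolving relative
    links before following them::

        def parse(self, response):
            # An href like "other.html" is resolved against response.url,
            # yielding e.g. "http://www.example.com/a/other.html"
            for href in response.css('a::attr(href)').extract():
                yield scrapy.Request(response.urljoin(href),
                                     callback=self.parse)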
Here is the list of available built-in Response subclasses. You can also subclass the Response class to implement your own functionality.