Releases: scrapy/scrapy
2.6.2
1.8.3
2.6.1
2.6.0
- Security fixes for cookie handling (see details below)
- Python 3.10 support
- asyncio support is no longer considered experimental, and works out-of-the-box on Windows regardless of your Python version
- Feed exports now support `pathlib.Path` output paths and per-feed item filtering and post-processing
Security bug fixes
- When a `Request` object with cookies defined gets a redirect response causing a new `Request` object to be scheduled, the cookies defined in the original `Request` object are no longer copied into the new `Request` object. If you manually set the `Cookie` header on a `Request` object and the domain name of the redirect URL is not an exact match for the domain of the URL of the original `Request` object, your `Cookie` header is now dropped from the new `Request` object.

  The old behavior could be exploited by an attacker to gain access to your cookies. Please see the cjvr-mfj7-j4j8 security advisory for more information.

  Note: It is still possible to enable the sharing of cookies between different domains with a shared domain suffix (e.g. `example.com` and any subdomain) by defining the shared domain suffix (e.g. `example.com`) as the cookie domain when defining your cookies. See the documentation of the `Request` class for more information.

- When the domain of a cookie, either received in the `Set-Cookie` header of a response or defined in a `Request` object, is set to a [public suffix](https://publicsuffix.org/), the cookie is now ignored unless the cookie domain is the same as the request domain.

  The old behavior could be exploited by an attacker to inject cookies from a controlled domain into your cookie jar that could be sent to other domains not controlled by the attacker. Please see the mfjm-vh54-3f96 security advisory for more information.
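The redirect rule described above can be sketched as a small check. This is an illustration of the documented behavior using a hypothetical helper name, not Scrapy's actual implementation:

```python
from urllib.parse import urlparse

def keep_cookie_header(original_url: str, redirect_url: str) -> bool:
    """Illustrative rule: a manually set Cookie header survives a
    redirect only when the redirect target's domain is an exact
    match for the original request's domain."""
    return urlparse(original_url).hostname == urlparse(redirect_url).hostname
```

Note that even a subdomain of the original domain fails the exact-match test, which is why the shared-suffix behavior must be opted into explicitly via the cookie domain.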
1.8.2
Security bug fixes
- When a `Request` object with cookies defined gets a redirect response causing a new `Request` object to be scheduled, the cookies defined in the original `Request` object are no longer copied into the new `Request` object. If you manually set the `Cookie` header on a `Request` object and the domain name of the redirect URL is not an exact match for the domain of the URL of the original `Request` object, your `Cookie` header is now dropped from the new `Request` object.

  The old behavior could be exploited by an attacker to gain access to your cookies. Please see the cjvr-mfj7-j4j8 security advisory for more information.

  Note: It is still possible to enable the sharing of cookies between different domains with a shared domain suffix (e.g. `example.com` and any subdomain) by defining the shared domain suffix (e.g. `example.com`) as the cookie domain when defining your cookies. See the documentation of the `Request` class for more information.

- When the domain of a cookie, either received in the `Set-Cookie` header of a response or defined in a `Request` object, is set to a [public suffix](https://publicsuffix.org/), the cookie is now ignored unless the cookie domain is the same as the request domain.

  The old behavior could be exploited by an attacker to inject cookies from a controlled domain into your cookie jar that could be sent to other domains not controlled by the attacker. Please see the mfjm-vh54-3f96 security advisory for more information.
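The public-suffix rule can be sketched as follows. The suffix set below is a tiny hardcoded stand-in for the real Public Suffix List, and the function name is hypothetical; a production check would consult the full, regularly updated list:

```python
from urllib.parse import urlparse

# Stand-in for the real Public Suffix List (https://publicsuffix.org/).
PUBLIC_SUFFIXES = {"com", "org", "co.uk", "github.io"}

def accept_cookie(cookie_domain: str, request_url: str) -> bool:
    """Illustrative rule: a cookie whose domain is a public suffix
    is ignored unless it exactly matches the request's domain."""
    if cookie_domain in PUBLIC_SUFFIXES:
        return urlparse(request_url).hostname == cookie_domain
    return True
```

Under this rule, a response from `evil.example.com` can no longer set a cookie for the bare suffix `com` and have it sent to unrelated `.com` sites.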
2.5.1
Security bug fix:

If you use `HttpAuthMiddleware` (i.e. the `http_user` and `http_pass` spider attributes) for HTTP authentication, any request exposes your credentials to the request target.

To prevent unintended exposure of authentication credentials to unintended domains, you must now also set a new spider attribute, `http_auth_domain`, and point it to the specific domain to which the authentication credentials must be sent.

If the `http_auth_domain` spider attribute is not set, the domain of the first request will be considered the HTTP authentication target, and authentication credentials will only be sent in requests targeting that domain.

If you need to send the same HTTP authentication credentials to multiple domains, you can use `w3lib.http.basic_auth_header` instead to set the value of the `Authorization` header of your requests.

If you really want your spider to send the same HTTP authentication credentials to any domain, set the `http_auth_domain` spider attribute to `None`.

Finally, if you are a user of scrapy-splash, know that this version of Scrapy breaks compatibility with scrapy-splash 0.7.2 and earlier. You will need to upgrade scrapy-splash to a newer version for it to continue to work.
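`w3lib.http.basic_auth_header` builds a standard Basic auth header value (RFC 7617: base64 of `username:password`). A minimal equivalent sketch using only the standard library:

```python
import base64

def basic_auth_header(username: str, password: str) -> bytes:
    """Build a Basic auth header value, as w3lib.http.basic_auth_header
    does: b"Basic " + base64("username:password")."""
    credentials = f"{username}:{password}".encode()
    return b"Basic " + base64.b64encode(credentials)
```

You would then set it per request, e.g. `Request(url, headers={"Authorization": basic_auth_header("user", "pass")})`, which keeps explicit control over which domains receive the credentials.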
1.8.1
Security bug fix:

If you use `HttpAuthMiddleware` (i.e. the `http_user` and `http_pass` spider attributes) for HTTP authentication, any request exposes your credentials to the request target.

To prevent unintended exposure of authentication credentials to unintended domains, you must now also set a new spider attribute, `http_auth_domain`, and point it to the specific domain to which the authentication credentials must be sent.

If the `http_auth_domain` spider attribute is not set, the domain of the first request will be considered the HTTP authentication target, and authentication credentials will only be sent in requests targeting that domain.

If you need to send the same HTTP authentication credentials to multiple domains, you can use `w3lib.http.basic_auth_header` instead to set the value of the `Authorization` header of your requests.

If you really want your spider to send the same HTTP authentication credentials to any domain, set the `http_auth_domain` spider attribute to `None`.

Finally, if you are a user of scrapy-splash, know that this version of Scrapy breaks compatibility with scrapy-splash 0.7.2 and earlier. You will need to upgrade scrapy-splash to a newer version for it to continue to work.
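In practice the fix boils down to declaring one extra class attribute on your spider. The spider name, credentials, and domain below are hypothetical, and the class is kept plain so the sketch runs standalone; in a real project it would subclass `scrapy.Spider`:

```python
class ExampleSpider:  # in a real project: class ExampleSpider(scrapy.Spider)
    name = "example"
    http_user = "bot"       # hypothetical credentials
    http_pass = "secret"
    # New in 1.8.1 / 2.5.1: restrict the credentials to a single domain.
    http_auth_domain = "intranet.example.com"
```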
2.5.0
- Official Python 3.9 support
- Experimental HTTP/2 support
- New `get_retry_request()` function to retry requests from spider callbacks
- New `headers_received` signal that allows stopping downloads early
- New `Response.protocol` attribute
2.4.1
- Fixed feed exports overwrite support
- Fixed the asyncio event loop handling, which could make code hang
- Fixed the IPv6-capable DNS resolver `CachingHostnameResolver` for download handlers that call `reactor.resolve`
- Fixed the output of the `genspider` command showing placeholders instead of the import path of the generated spider module (issue 4874)
2.4.0
Highlights:

- Python 3.5 support has been dropped.
- The `file_path` method of media pipelines can now access the source item. This allows you to set a download file path based on item data.
- The new `item_export_kwargs` key of the `FEEDS` setting allows defining keyword parameters to pass to item exporter classes.
- You can now choose whether feed exports overwrite or append to the output file. For example, when using the `crawl` or `runspider` commands, you can use the `-O` option instead of `-o` to overwrite the output file.
- Zstd-compressed responses are now supported if zstandard is installed.
- In settings where the import path of a class is required, it is now possible to pass a class object instead.
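A minimal sketch of a `FEEDS` setting combining the two feed-related highlights above. The output path is hypothetical, and `include_headers_line` is shown as an example of a `CsvItemExporter` keyword argument:

```python
# settings.py (sketch)
FEEDS = {
    "output/items.csv": {
        "format": "csv",
        # New in 2.4.0: keyword arguments forwarded to the item exporter.
        "item_export_kwargs": {"include_headers_line": False},
        # New in 2.4.0: overwrite the file instead of appending
        # (the -O command-line option sets this for you).
        "overwrite": True,
    },
}
```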