CLOSESPIDER_TIMEOUT problem. #5437
Comments
I was able to reproduce this issue using this command:
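(The exact command is not preserved in this page capture. A hypothetical equivalent, assuming a spider named `quotes`, is to enable the timeout from the command line via the `-s` setting override:)

```shell
# Hypothetical reproduction: force any spider to close after 10 seconds
# by overriding the CLOSESPIDER_TIMEOUT setting (spider name is an assumption).
scrapy crawl quotes -s CLOSESPIDER_TIMEOUT=10
```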
Maybe we should catch this exception instead of letting it propagate? @Gallaecio @wRAR I did a local fix for this one using this approach, but we probably have a better way to handle it.
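(The commenter's actual patch is not shown here. One hedged sketch of the general idea, using simplified stand-in classes rather than Scrapy's real engine, is to check that the engine slot still exists before scheduling the next call:)

```python
# Minimal sketch (hypothetical simplified classes, not Scrapy's real API)
# of guarding a deferred callback against the engine slot having been
# torn down during a CLOSESPIDER_TIMEOUT shutdown.

class NextCall:
    def __init__(self):
        self.scheduled = False

    def schedule(self):
        self.scheduled = True

class Slot:
    def __init__(self):
        self.nextcall = NextCall()

class Engine:
    def __init__(self):
        self.slot = Slot()

    def _schedule_next(self, result=None):
        # Guard: during shutdown the slot may already have been cleared;
        # without this check, self.slot.nextcall raises AttributeError on None.
        if self.slot is not None:
            self.slot.nextcall.schedule()
        return result

engine = Engine()
engine._schedule_next()       # schedules normally while the slot exists
engine.slot = None            # simulate shutdown clearing the slot
engine._schedule_next()       # no AttributeError thanks to the guard
```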
If it worked in 2.5, it's an accidental regression that should be fixed (we are already going to release 2.6.2 because of another problem, so the fix for this one should go there too).
I have identified 7e23677 as the commit that introduced the issue.
Any update? I'm getting the same error. How can I handle it manually? @Laerte, what was your local approach to circumvent this issue?
The fix is merged to the 2.6 branch; you can install it using the command below:
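(The command itself was lost in this page capture. A typical way to install Scrapy directly from its 2.6 branch, assuming pip and git are available, is:)

```shell
# Install Scrapy from the 2.6 maintenance branch (assumes pip with git support).
pip install git+https://github.com/scrapy/scrapy.git@2.6
```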
Lovely!
It will be included in 2.6.2, which should be released soon (no exact date yet).
Hello! Hope you are doing well! Just wondering if the latest version already includes these changes? Thank you.
Just noticed it has already been updated.
Description
After upgrading from version 2.5.1 to 2.6.1, the spider raises an error during shutdown when the close condition is CLOSESPIDER_TIMEOUT.
Steps to Reproduce
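(The reproduction code itself was not included in the report. A minimal setup, with a hypothetical 10-second value, is to enable the timeout in the project settings and run any spider; the engine then closes the spider with reason closespider_timeout, the code path that triggers the traceback below:)

```python
# settings.py fragment (the value is a hypothetical example):
# force the spider to close after 10 seconds of runtime.
CLOSESPIDER_TIMEOUT = 10
```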
Expected behavior: [What you expect to happen]
2022-03-03 16:09:30 [scrapy.core.engine] INFO: Spider closed (closespider_timeout)
Actual behavior: [What actually happens]
2022-03-03 16:09:30 [scrapy.core.engine] INFO: Spider closed (closespider_timeout)
2022-03-03 16:09:30 [scrapy.core.engine] INFO: Error while scheduling new request
Traceback (most recent call last):
File "c:\users\onedrive\python\scrapy_test\venv\lib\site-packages\twisted\internet\task.py", line 528, in _oneWorkUnit
result = next(self._iterator)
StopIteration
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\users\onedrive\python\scrapy_test\venv\lib\site-packages\twisted\internet\defer.py", line 858, in _runCallbacks
current.result = callback( # type: ignore[misc]
File "c:\users\onedrive\python\scrapy_test\venv\lib\site-packages\scrapy\core\engine.py", line 187, in
d.addBoth(lambda _: self.slot.nextcall.schedule())
AttributeError: 'NoneType' object has no attribute 'nextcall'
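(The failure pattern in the traceback can be sketched without Twisted. This is a simplified illustration with stand-in classes, not Scrapy's real engine: a callback created while the engine slot exists runs after shutdown has already set the slot to None.)

```python
# Simplified stand-ins (hypothetical classes) for the engine slot and its
# nextcall, to show why the callback in the traceback raises AttributeError.

class NextCall:
    def schedule(self):
        pass

class Slot:
    def __init__(self):
        self.nextcall = NextCall()

class Engine:
    def __init__(self):
        self.slot = Slot()

engine = Engine()
# Mirrors the callback in the traceback:
# d.addBoth(lambda _: self.slot.nextcall.schedule())
callback = lambda _: engine.slot.nextcall.schedule()

engine.slot = None  # the CLOSESPIDER_TIMEOUT shutdown clears the slot

try:
    callback(None)  # the late callback now dereferences None
except AttributeError as exc:
    print(exc)  # 'NoneType' object has no attribute 'nextcall'
```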
Reproduces how often: [What percentage of the time does it reproduce?]
It always reproduces (100% of the time).
Versions
Scrapy : 2.6.1
lxml : 4.7.1.0
libxml2 : 2.9.12
cssselect : 1.1.0
parsel : 1.6.0
w3lib : 1.22.0
Twisted : 21.7.0
Python : 3.9.6 (tags/v3.9.6:db3ff76, Jun 28 2021, 15:26:21) [MSC v.1929 64 bit (AMD64)]
pyOpenSSL : 22.0.0 (OpenSSL 1.1.1m 14 Dec 2021)
cryptography : 36.0.1
Platform : Windows-10-10.0.19044-SP0