Tried to stop a LoopingCall that was not running #2011
Comments
Hey, I had the same error log as yours, and my problem was that "connection.MySQLConnection()" did not connect successfully. Once I could connect to my MySQL server, the Scrapy crawl worked! Hope this helps you. |
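For what it's worth, a minimal sketch of verifying that the MySQL connection actually succeeds before relying on it in a pipeline; it assumes the mysql-connector-python package and uses hypothetical credentials:

```python
import mysql.connector
from mysql.connector import errorcode

try:
    # Hypothetical credentials -- replace with your own server details.
    conn = mysql.connector.connect(
        host='localhost', user='scrapy_user',
        password='secret', database='scrapy_db')
except mysql.connector.Error as err:
    if err.errno == errorcode.ER_ACCESS_DENIED_ERROR:
        print('Wrong user name or password')
    elif err.errno == errorcode.ER_BAD_DB_ERROR:
        print('Database does not exist')
    else:
        print(err)
else:
    # Connection works; the pipeline should be able to use the same settings.
    conn.close()
```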
The MySQL connect, execute, and close calls: why don't you just do them all at once inside process_item? |
If process_item keeps using MySQL, it would probably be better to open and close the connection only once, in open_spider and close_spider. |
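A minimal sketch of that pattern, assuming the mysql-connector-python package and hypothetical table/column names (this is not the pipeline from this issue):

```python
import mysql.connector

class MySQLStorePipeline(object):
    def open_spider(self, spider):
        # Open the connection once, when the spider starts.
        self.conn = mysql.connector.connect(
            host='localhost', user='scrapy_user',
            password='secret', database='scrapy_db')
        self.cursor = self.conn.cursor()

    def close_spider(self, spider):
        # Close the connection once, when the spider finishes.
        self.cursor.close()
        self.conn.close()

    def process_item(self, item, spider):
        # Reuse the already-open connection for every scraped item.
        self.cursor.execute(
            'INSERT INTO items (title, url) VALUES (%s, %s)',
            (item.get('title'), item.get('url')))
        self.conn.commit()
        return item
```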
I always use Django... I feel Scrapy + Django work really well together. |
I got the same error, and found it occurs because the settings file configuration is not right; you may want to check it. |
@shartoo can you give some hints about the settings configuration? |
like this
you should check if the path is right. |
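In case it helps, a hypothetical sketch of the kind of settings entry being described; 'myproject' and 'MySQLStorePipeline' are made-up names, not taken from this thread:

```python
# settings.py -- the dotted path must match the real module and class name
# of your pipeline, otherwise Scrapy fails while building the pipeline list.
ITEM_PIPELINES = {
    'myproject.pipelines.MySQLStorePipeline': 300,
}
```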
I have a working Scrapy project with two named spiders that were running successfully with Scrapy v1.0.1 and Python v2.7.9. Today I updated Python to 2.7.11+ as well as Scrapy to 1.1.0, and received a similar error:

Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/scrapy/commands/crawl.py", line 57, in run
self.crawler_process.crawl(spname, **opts.spargs)
File "/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py", line 163, in crawl
return self._crawl(crawler, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py", line 167, in _crawl
d = crawler.crawl(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 1274, in unwindGenerator
return _inlineCallbacks(None, gen, Deferred())
--- <exception caught here> ---
File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 1126, in _inlineCallbacks
result = result.throwExceptionIntoGenerator(g)
File "/usr/lib/python2.7/dist-packages/twisted/python/failure.py", line 389, in throwExceptionIntoGenerator
return g.throw(self.type, self.value, self.tb)
File "/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py", line 87, in crawl
yield self.engine.close()
File "/usr/local/lib/python2.7/dist-packages/scrapy/core/engine.py", line 100, in close
return self._close_all_spiders()
File "/usr/local/lib/python2.7/dist-packages/scrapy/core/engine.py", line 340, in _close_all_spiders
dfds = [self.close_spider(s, reason='shutdown') for s in self.open_spiders]
File "/usr/local/lib/python2.7/dist-packages/scrapy/core/engine.py", line 298, in close_spider
dfd = slot.close()
File "/usr/local/lib/python2.7/dist-packages/scrapy/core/engine.py", line 44, in close
self._maybe_fire_closing()
File "/usr/local/lib/python2.7/dist-packages/scrapy/core/engine.py", line 51, in _maybe_fire_closing
self.heartbeat.stop()
File "/usr/lib/python2.7/dist-packages/twisted/internet/task.py", line 202, in stop
assert self.running, ("Tried to stop a LoopingCall that was "
exceptions.AssertionError: Tried to stop a LoopingCall that was not running.

I have also rectified my ITEM_PIPELINES:

ITEM_PIPELINES = {
    'tttscraper.pipelines.MySQLStorePipeline': 300,
    'tttscraper.pipelines.CustomFilePipeline': 300,
}

For now, I've reverted Scrapy to version 1.0.1 and the code works again. (Python is still at v2.7.11+) |
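As a side note, a common convention (not confirmed to be related to the error above) is to give each pipeline a distinct order value, so their relative execution order is explicit:

```python
# Lower numbers run first; distinct values make the ordering unambiguous.
ITEM_PIPELINES = {
    'tttscraper.pipelines.MySQLStorePipeline': 300,
    'tttscraper.pipelines.CustomFilePipeline': 400,
}
```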
I met this error when my pipeline.py had a bug. |
Updating my custom scheduler fixed this issue. In my case, the hint came from running into: |
AssertionError: Tried to stop a LoopingCall that was not running. |
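For anyone in the same situation, this is roughly the interface Scrapy expects a custom scheduler to provide; the in-memory queue here is a simplified, hypothetical sketch, not the scheduler from this comment:

```python
class SimpleScheduler(object):
    """Hypothetical minimal scheduler sketch (plain in-memory FIFO queue)."""

    @classmethod
    def from_crawler(cls, crawler):
        return cls()

    def open(self, spider):
        # Called when the spider is opened.
        self.spider = spider
        self.queue = []

    def close(self, reason):
        # Called when the spider is closed; it should clean up without raising.
        self.queue = []

    def enqueue_request(self, request):
        self.queue.append(request)
        return True

    def next_request(self):
        return self.queue.pop(0) if self.queue else None

    def has_pending_requests(self):
        return len(self.queue) > 0

    def __len__(self):
        return len(self.queue)
```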
I was having this issue even though it had worked the day before. I was able to fix it by reinstalling pymongo. No version was bumped, but the problem was fixed. |
Also add a test on the state of the looping task in the LogStats extension. Fixes scrapy#2011 and scrapy#2362
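In general terms, the assertion can be avoided by checking the LoopingCall's running flag before stopping it; a minimal sketch of that guard (not the actual patch):

```python
from twisted.internet import task

def safe_stop(loop):
    # LoopingCall.stop() asserts that the loop is running, so check the flag
    # first to avoid "Tried to stop a LoopingCall that was not running".
    if isinstance(loop, task.LoopingCall) and loop.running:
        loop.stop()
```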
I'm using MySQL to store my spider data, but when I set up the pipelines to store data into the local MySQL server, it raises an error.
pipelines.py
the Traceback: