Just a small issue I've been having with scrapyd. Four of my jobs hit MySQL errors and then ground to a halt. I'm wondering how to handle this? The jobs had been running for over 2 days and were bottlenecking the queue.
Is it possible to set a timeout variable for jobs? Or alternatively, how would I exit the crawler on a MySQL error (from the pipeline)?
Please route questions like this to the Scrapy mailing list: http://groups.google.com/group/scrapy-users/