Commit

Added info on scrapyd being a separate project from Scrapy 0.18 onwards to documentation
holgerd77 authored and holgerd77 committed May 30, 2015
1 parent 859949f commit 4ea3909
Showing 2 changed files with 17 additions and 2 deletions.
11 changes: 9 additions & 2 deletions docs/advanced_topics.rst
@@ -267,15 +267,22 @@ If everything works well, you should now see the following line in your command
[2011-12-12 10:20:01,535: INFO/MainProcess] Celerybeat: Starting...

As a second daemon process we need the server that ships with Scrapy to actually crawl the different
websites targeted by our scrapers. For ``Scrapy`` up to version ``0.16`` you can use the built-in `Scrapy Server`_,
started with the following command from within the main directory of your project::

    scrapy server

You should get an output similar to the following:

.. image:: images/screenshot_shell_scrapy_server.png

Starting with ``Scrapy 0.18``, Scrapy is no longer shipped bundled with ``scrapyd``, which is now a separate
project. So you have to make sure you have deployed your Scrapy project (see: :ref:`setting_up_scrapy`) and
run the server with::

    scrapyd
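
If ``scrapyd`` is running, you can verify that it is reachable and that your project has been deployed by
querying its JSON API (a quick check, assuming the default port ``6800`` used in the deploy configuration
from :ref:`setting_up_scrapy`)::

    curl http://localhost:6800/listprojects.json

After a successful deploy, the response should list ``open_news`` among the projects.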


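If you prefer defining the lowered test interval for the next step in code rather than in the Django admin,
a minimal sketch of a Celery beat schedule entry could look like the following (the task path
``open_news.tasks.run_spiders`` is a hypothetical name, substitute the task you registered earlier)::

    # in your Django settings.py - a sketch only, the task path is hypothetical
    from datetime import timedelta

    CELERYBEAT_SCHEDULE = {
        'test-run-spiders-every-minute': {
            'task': 'open_news.tasks.run_spiders',  # hypothetical task name
            'schedule': timedelta(minutes=1),       # lowered interval for testing
        },
    }
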
For testing your scheduling system, you can temporarily set the time interval of your periodic task to
a lower value, e.g. 1 minute. Now you should see a new task coming in and being executed every minute::

8 changes: 8 additions & 0 deletions docs/getting_started.rst
@@ -392,6 +392,8 @@ To get this going, we have to create a new Scrapy project, adjust some settings
two short Python module files, one with a spider class inheriting from :ref:`django_spider`, and a finalising
pipeline for saving our scraped objects.

.. _setting_up_scrapy:

Setting up Scrapy
-----------------

@@ -423,10 +425,16 @@ settings file and the project name::
    [settings]
    default = open_news.scraper.settings
    #Scrapy till 0.16
    [deploy]
    #url = http://localhost:6800/
    project = open_news

    #Scrapy with separate scrapyd (0.18+)
    [deploy:scrapyd1]
    url = http://localhost:6800/
    project = open_news
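
With the ``scrapyd1`` target defined, deploying the project to a running ``scrapyd`` instance should then
be possible with Scrapy's deploy command (a sketch, assuming ``scrapyd`` is already listening on
``localhost:6800``)::

    scrapy deploy scrapyd1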


And this is your ``settings.py`` file::

