Release commit for v.0.9.3
holgerd77 authored and holgerd77 committed Jan 14, 2016
1 parent 741aed0 commit c02af44
Showing 3 changed files with 14 additions and 2 deletions.
6 changes: 6 additions & 0 deletions docs/development.rst
@@ -94,6 +94,12 @@ Docker container can be run with::

Release Notes
=============
**Changes in version 0.9.3-beta** (2016-01-14)

* New command line options ``output_num_mp_response_bodies`` and ``output_num_dp_response_bodies``
for logging the complete response bodies of the first {Int} main/detail page responses to the screen
for debugging (for the really hard cases :-)) (see: :ref:`running_scrapers`)

**Changes in version 0.9.2-beta** (2016-01-14)

* New processor ``remove_chars`` (see: :ref:`processors`) for removing one or several types of chars from
8 changes: 7 additions & 1 deletion docs/getting_started.rst
@@ -389,7 +389,8 @@ as following::
scrapy crawl [--output=FILE --output-format=FORMAT] SPIDERNAME -a id=REF_OBJECT_ID
[-a do_action=(yes|no) -a run_type=(TASK|SHELL)
-a max_items_read={Int} -a max_items_save={Int}
-   -a max_pages_read={Int}]
+   -a max_pages_read={Int}
+   -a output_num_mp_response_bodies={Int} -a output_num_dp_response_bodies={Int} ]
* With ``-a id=REF_OBJECT_ID`` you provide the ID of the reference object items should be scraped for,
in our example case that would be the Wikinews ``NewsWebsite`` object, probably with ID 1 if you haven't
@@ -406,6 +407,11 @@ as following::

* With ``-a max_pages_read={Int}`` you can limit the number of pages read when using pagination

* With ``-a output_num_mp_response_bodies={Int}`` and ``-a output_num_dp_response_bodies={Int}`` you can log
  the complete response body content of the first {Int} main/detail page responses to the screen for debugging
  (beginnings and endings are marked with a unique string of the form ``RP_MP_{num}_START``, so you can locate
  them quickly via full-text search)
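A debug run using the new options might look like the following sketch. The spider name ``article_spider`` and the reference object ID ``1`` are placeholders, not values from this commit:

```shell
# Hypothetical debug run (spider name and reference object ID are placeholders).
# Logs the first two main page and first two detail page response bodies to the
# screen, delimited by markers like RP_MP_1_START, without saving items to the DB.
scrapy crawl article_spider -a id=1 -a do_action=no -a run_type=SHELL \
    -a output_num_mp_response_bodies=2 -a output_num_dp_response_bodies=2
```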

* If you don't want your output saved to the Django DB but to a custom file you can use Scrapy's built-in
output options ``--output=FILE`` and ``--output-format=FORMAT`` to scrape items into a file. Use this without
setting the ``-a do_action=yes`` parameter!
2 changes: 1 addition & 1 deletion setup.py
@@ -4,7 +4,7 @@

setup(
name='django-dynamic-scraper',
-    version='0.9.2',
+    version='0.9.3',
description='Creating Scrapy scrapers via the Django admin interface',
author='Holger Drewes',
author_email='Holger.Drewes@gmail.com',
