diff --git a/docs/intro/tutorial.rst b/docs/intro/tutorial.rst
index 246db1a609f..f0f2b870792 100644
--- a/docs/intro/tutorial.rst
+++ b/docs/intro/tutorial.rst
@@ -147,15 +147,17 @@ To put our spider to work, go to the project's top level directory and run::
The ``crawl dmoz`` command runs the spider for the ``dmoz.org`` domain. You
will get output similar to this::
- 2008-08-20 03:51:13-0300 [scrapy] INFO: Started project: dmoz
- 2008-08-20 03:51:13-0300 [tutorial] INFO: Enabled extensions: ...
- 2008-08-20 03:51:13-0300 [tutorial] INFO: Enabled downloader middlewares: ...
- 2008-08-20 03:51:13-0300 [tutorial] INFO: Enabled spider middlewares: ...
- 2008-08-20 03:51:13-0300 [tutorial] INFO: Enabled item pipelines: ...
- 2008-08-20 03:51:14-0300 [dmoz] INFO: Spider opened
- 2008-08-20 03:51:14-0300 [dmoz] DEBUG: Crawled <http://www.dmoz.org/Computers/Programming/Languages/Python/Books/> (referer: <None>)
- 2008-08-20 03:51:14-0300 [dmoz] DEBUG: Crawled <http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/> (referer: <None>)
- 2008-08-20 03:51:14-0300 [dmoz] INFO: Spider closed (finished)
+ 2014-01-23 18:13:07-0400 [scrapy] INFO: Scrapy started (bot: tutorial)
+ 2014-01-23 18:13:07-0400 [scrapy] INFO: Optional features available: ...
+ 2014-01-23 18:13:07-0400 [scrapy] INFO: Overridden settings: {}
+ 2014-01-23 18:13:07-0400 [scrapy] INFO: Enabled extensions: ...
+ 2014-01-23 18:13:07-0400 [scrapy] INFO: Enabled downloader middlewares: ...
+ 2014-01-23 18:13:07-0400 [scrapy] INFO: Enabled spider middlewares: ...
+ 2014-01-23 18:13:07-0400 [scrapy] INFO: Enabled item pipelines: ...
+ 2014-01-23 18:13:07-0400 [dmoz] INFO: Spider opened
+ 2014-01-23 18:13:08-0400 [dmoz] DEBUG: Crawled (200) <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Books/> (referer: None)
+ 2014-01-23 18:13:09-0400 [dmoz] DEBUG: Crawled (200) <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/> (referer: None)
+ 2014-01-23 18:13:09-0400 [dmoz] INFO: Closing spider (finished)
Pay attention to the lines containing ``[dmoz]``, which correspond to our
spider. You can see a log line for each URL defined in ``start_urls``. Because
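For reference, the log above comes from the tutorial's ``DmozSpider``. A
minimal sketch of that spider, assuming the two dmoz ``start_urls`` used
throughout this page and the Scrapy 0.22-era import path (later releases
expose ``scrapy.Spider`` directly)::

    from scrapy.spider import Spider

    class DmozSpider(Spider):
        name = "dmoz"
        allowed_domains = ["dmoz.org"]
        start_urls = [
            "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
            "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/",
        ]

        def parse(self, response):
            # Save each page to a file named after the last URL path segment.
            filename = response.url.split("/")[-2]
            with open(filename, "wb") as f:
                f.write(response.body)

Each entry in ``start_urls`` yields one ``Crawled (200)`` line in the log,
which is why two ``[dmoz]`` DEBUG lines appear above.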
@@ -253,16 +255,18 @@ This is what the shell looks like::
[ ... Scrapy log here ... ]
+ 2014-01-23 17:11:42-0400 [default] DEBUG: Crawled (200) <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Books/> (referer: None)
[s] Available Scrapy objects:
- [s] 2010-08-19 21:45:59-0300 [default] INFO: Spider closed (finished)
- [s]   sel        <Selector ...>
- [s]   item       Item()
+ [s]   crawler    <scrapy.crawler.Crawler object at 0x...>
+ [s]   item       {}
[s]   request    <GET http://www.dmoz.org/Computers/Programming/Languages/Python/Books/>
[s]   response   <200 http://www.dmoz.org/Computers/Programming/Languages/Python/Books/>
- [s]   spider     <Spider 'default' at 0x...>
+ [s]   sel        <Selector xpath=None data=u'<html>\r\n\r\n...'>
+ [s]   settings   <CrawlerSettings module=None>
+ [s]   spider     <Spider 'default' at 0x...>
[s] Useful shortcuts:
- [s] shelp() Prints this help.
- [s] fetch(req_or_url) Fetch a new request or URL and update objects
+ [s] shelp() Shell help (print this help)
+ [s] fetch(req_or_url) Fetch request (or URL) and update local objects
[s] view(response) View response in a browser
>>>
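Once the shell is up (it is started with ``scrapy shell <url>``), the
objects listed above are ready to use. A short sketch of a first session;
the extracted values depend on the live page, so they are illustrative
only::

    >>> sel.xpath('//title/text()').extract()
    [u'Open Directory - Computers: Programming: Languages: Python: Books']
    >>> response.status
    200
    >>> view(response)   # opens the downloaded page in your default browser
    True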
diff --git a/docs/topics/shell.rst b/docs/topics/shell.rst
--- a/docs/topics/shell.rst
+++ b/docs/topics/shell.rst
@@ -131,24 +134,27 @@ After that, we can start playing with the objects::
>>> fetch("http://slashdot.org")
[s] Available Scrapy objects:
- [s]   sel        <Selector ...>
- [s]   item       JobItem()
+ [s]   crawler    <scrapy.crawler.Crawler object at 0x...>
+ [s]   item       {}
[s]   request    <GET http://slashdot.org>
[s]   response   <200 http://slashdot.org>
- [s]   settings   <CrawlerSettings module=None>
- [s]   spider     <Spider 'default' at 0x...>
+ [s]   sel        <Selector xpath=None data=u'<html>\n\n\n\n\n\n...'>
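After ``fetch()`` swaps in the new page, the same objects point at
slashdot.org, so selectors can be reused as-is. Another illustrative
sketch; the title text depends on the live site::

    >>> sel.xpath('//title/text()').extract()
    [u'Slashdot: News for nerds, stuff that matters']
    >>> request
    <GET http://slashdot.org>
    >>> fetch(request)   # re-issue the same request to refresh the objects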