
Updating documentation

1 parent f1410bd commit aa828636c671c50780ea9ecc420b63f73d17d86f @cgiffard committed Dec 20, 2011
Showing with 2 additions and 2 deletions.
  1. +2 −2 README.markdown
README.markdown
@@ -11,7 +11,7 @@ Simplecrawler is designed to provide the most basic possible API for crawling websites
* Provides basic statistics on network performance
* Uses buffers for fetching and managing data, preserving binary data (except when discovering links)
-#####Note
+####Note
You can't install simplecrawler via npm yet. I'll package it up once I'm happy the documentation is solid.
### Getting Started
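
A rough sketch of what a minimal crawl setup might look like, for context around the events discussed below. The Getting Started body isn't part of this diff, so the `host` property and the `fetchcomplete` event used here are assumptions, not the README's verbatim example:

```js
var Crawler = require("simplecrawler").Crawler;

var crawler = new Crawler();
crawler.host = "example.com"; // assumed: the site to crawl

// Assumption: a per-resource completion event carrying the fetched buffer.
crawler.on("fetchcomplete", function(queueItem, responseBuffer, response) {
    console.log("Downloaded %s (%d bytes)", queueItem.url, responseBuffer.length);
});

// The "complete" event is documented in the hunk below: fired once the
// queue is exhausted, with no arguments.
crawler.on("complete", function() {
    console.log("Crawl finished.");
});

crawler.start();
```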
@@ -96,7 +96,7 @@ Fired when a request dies locally for some reason. The error data is returned as
* `complete`
Fired when the crawler completes processing all the items in its queue, and does not find any more to add. This event returns no arguments.
-##### A note about HTTP error conditions
+####A note about HTTP error conditions
By default, simplecrawler does not download the response body when it encounters an HTTP error status in the response. If you need this information, you can listen to simplecrawler's error events, and through node's native `data` event (`response.on("data",function(chunk) {...})`) you can save the information yourself.
If this is annoying, and you'd really like to retain error pages by default, let me know. I didn't include it because I didn't need it - but if it's important to people I might put it back in. :)
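
As a sketch of saving an error body yourself along those lines: the `fetcherror` event name and its `(queueItem, response)` signature are assumptions here, so check the full event list for the exact names. The `data` and `end` listeners are node's native `http.ClientResponse` events.

```js
var Crawler = require("simplecrawler").Crawler;

var crawler = new Crawler();
crawler.host = "example.com"; // assumed setup; see Getting Started

// Assumption: an error event that hands over the raw node HTTP response.
crawler.on("fetcherror", function(queueItem, response) {
    var chunks = [];

    // simplecrawler discards error bodies by default, so collect the
    // data ourselves via node's native "data" event.
    response.on("data", function(chunk) {
        chunks.push(chunk);
    });

    response.on("end", function() {
        var body = Buffer.concat(chunks);
        console.log("Saved %d error bytes for %s", body.length, queueItem.url);
    });
});

crawler.start();
```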
