

A scrapy-based Hacker News crawler.


HNCrawl is a tiny, simple Scrapy-based crawler that grabs the HTML content of the pages linked from the front page of Hacker News.
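
For orientation, a minimal Scrapy spider following the same pattern might look like the sketch below. It is not the repo's actual code: the spider name matches the hnspider command used later, but the CSS selector and callbacks are assumptions.

import scrapy


class FrontPageSpider(scrapy.Spider):
    # Hypothetical sketch -- the real spider in this repo is the one run as "hnspider".
    name = "hnspider"
    start_urls = ["https://news.ycombinator.com/"]

    def parse(self, response):
        # The CSS selector is an assumption about HN's front-page markup.
        for link in response.css("span.titleline > a"):
            title = link.css("::text").get()
            yield response.follow(
                link.attrib["href"],
                callback=self.parse_story,
                cb_kwargs={"title": title},
            )

    def parse_story(self, response, title):
        # response.body holds the raw HTML that gets written out to disk.
        yield {"title": title, "url": response.url}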



$ pip install scrapy
$ git clone


Note: the Crawl-Delay value in the HN robots.txt file is 30 seconds, so please avoid running the crawler more than once every 30 seconds.
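
If you want Scrapy itself to enforce this, a minimal settings sketch (assuming a standard Scrapy project settings.py) could be:

# settings.py (sketch)
ROBOTSTXT_OBEY = True   # fetch and honour robots.txt allow/disallow rules
DOWNLOAD_DELAY = 30     # wait at least 30 seconds between requests to the same site

Note that it is DOWNLOAD_DELAY that actually spaces requests 30 seconds apart; ROBOTSTXT_OBEY only enforces allow/disallow rules.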

Scrape the links from the front page of HN

$ scrapy crawl hnspider

Scrape items and write a JSON summary of the scraped items to items.json

$ scrapy crawl alias_scrape -o items.json -t json
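
Scrapy's JSON exporter writes a single JSON array, so the summary can be inspected with a few lines of Python (the field names below match the example item shown further down):

import json

with open("items.json") as f:
    items = json.load(f)

for item in items:
    print(item["title"], item["url"])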


Here is an example file hierarchy. Each folder is named after the hex digest of the SHA1 hash of the Hacker News item URL.

 ├── out
 │   ├── 000f86c7547b47a700dee0879a0fe08b4597360f
 │   │   └── index.html
 │   ├── 0190cbad182ab3bc9a92482d169f38e363ca3c57
 │   │   └── index.html
 │   ├── 02bae9642c8dd4b75a593c1c42beff62824ee8fc
 │   │   └── index.html
 │   ├── 05c1460571f0ac45f77bf2ecbd3cba8b85c20621
 │   │   └── index.html
 │   ├── 0b1587a3dbe9996d10a0fd3250f75462ebd59a0b
 │   │   └── index.html
 │   ├── 0c5c67585004e03341e6a87d2db5257b93337b86
 │   │   └── index.html
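
As a sketch of how a URL maps to its folder, the name is simply the SHA1 hex digest of the item URL; the URL below is a made-up example:

import hashlib

url = "http://example.com/some-story"  # hypothetical item URL
folder = hashlib.sha1(url.encode("utf-8")).hexdigest()
# the page for this URL would be saved as out/<folder>/index.html
print(folder)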

The JSON summary of a news item looks like this:

{'title': u'EFF Wins Protection for Time Zone Database',
 'url': u''}



HNCrawl is MIT licensed.
