a library for scraping things




scrapelib is a library for making requests to less-than-reliable websites; it is implemented (as of 0.7) as a wrapper around requests.

scrapelib originated as part of the Open States project, which scrapes the websites of all 50 state legislatures, and was therefore designed with features useful for sites that have intermittent errors or require rate-limiting.

Advantages of using scrapelib over alternatives like httplib2 or simply using requests as-is:

  • all of the power of the superb requests library
  • HTTP, HTTPS, and FTP requests via an identical API
  • support for simple caching with pluggable cache backends
  • request throttling
  • configurable retries for non-permanent site failures
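To make the last two bullets concrete, here is a minimal sketch of the retry idea, not scrapelib's actual implementation: the `fetch_with_retries` helper and the `flaky` fetcher below are hypothetical names invented for illustration, and the real library handles this for you via `Scraper`'s `retry_attempts` and `retry_wait_seconds` options.

```python
import time

def fetch_with_retries(fetch, url, retries=3, wait=0.01):
    """Call fetch(url), retrying on transient errors with a growing wait."""
    for attempt in range(retries + 1):
        try:
            return fetch(url)
        except IOError:
            if attempt == retries:
                raise  # permanent failure: give up and re-raise
            time.sleep(wait * (attempt + 1))  # back off a little more each time

# Simulate a server that fails twice, then succeeds.
calls = {"n": 0}
def flaky(url):
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("temporary failure")
    return "<html>ok</html>"

result = fetch_with_retries(flaky, "http://example.com")
print(result)  # <html>ok</html>
```

The point is that transient failures are retried transparently, while a fetch that keeps failing still raises, so callers can distinguish flaky sites from dead ones.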

Written by James Turk <james.p.turk@gmail.com>, with thanks to Michael Stephens for the initial urllib2/httplib2 version.

See https://github.com/jamesturk/scrapelib/graphs/contributors for contributors.


Requirements

  • python 2.7, 3.3, 3.4
  • requests >= 2.0 (earlier versions may work but aren't tested)

Example Usage

Documentation: http://scrapelib.readthedocs.org/en/latest/

import scrapelib
s = scrapelib.Scraper(requests_per_minute=10)

# Grab Google front page
s.get('http://google.com')

# Will be throttled to 10 HTTP requests per minute
while True:
    s.get('http://example.com')
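The "pluggable cache backends" mentioned above can be illustrated without scrapelib itself. The `DictCache` and `cached_fetch` names below are hypothetical, invented for this sketch; the library ships its own backends, but any object offering get/set by key fits the pattern:

```python
class DictCache:
    """Minimal in-memory cache backend: maps a request key to a response."""
    def __init__(self):
        self._store = {}

    def get(self, key):
        return self._store.get(key)

    def set(self, key, value):
        self._store[key] = value

def cached_fetch(cache, fetch, url):
    """Return a cached response if present, otherwise fetch and store it."""
    resp = cache.get(url)
    if resp is None:
        resp = fetch(url)
        cache.set(url, resp)
    return resp

# A fake fetcher that counts how often the "network" is actually hit.
hits = {"n": 0}
def fake_fetch(url):
    hits["n"] += 1
    return "page for " + url

cache = DictCache()
first = cached_fetch(cache, fake_fetch, "http://example.com")
second = cached_fetch(cache, fake_fetch, "http://example.com")
print(hits["n"])  # 1 -- the second call was served from the cache
```

Swapping `DictCache` for a disk- or database-backed object with the same two methods changes where responses live without touching the fetching code, which is the appeal of keeping the backend pluggable.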