scrapelib

scrapelib is a library for making requests to less-than-reliable websites. As of version 0.7 it is implemented as a wrapper around requests.

scrapelib originated as part of the Open States project, which scrapes the websites of all 50 state legislatures, and was therefore designed with features desirable when dealing with sites that have intermittent errors or require rate-limiting.

Advantages of using scrapelib over alternatives like httplib2 or simply using requests as-is:

  • all of the power of the superb requests library
  • HTTP, HTTPS, and FTP requests via an identical API
  • support for simple caching with pluggable cache backends
  • request throttling
  • configurable retries for non-permanent site failures
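The retry behavior amounts to catching transient failures and re-requesting after a growing delay. Here is a minimal pure-Python sketch of that control flow; the `get_with_retries` helper is hypothetical, and the `retry_attempts` and `retry_wait_seconds` parameter names are borrowed from scrapelib's Scraper constructor for familiarity:

```python
import time

def get_with_retries(fetch, url, retry_attempts=3, retry_wait_seconds=1):
    """Retry a fetch callable on transient failures, doubling the wait
    between attempts (scrapelib applies a similar exponential backoff)."""
    last_exc = None
    wait = retry_wait_seconds
    for attempt in range(retry_attempts + 1):
        try:
            return fetch(url)
        except IOError as exc:  # stand-in for transient HTTP/network errors
            last_exc = exc
            if attempt < retry_attempts:
                time.sleep(wait)
                wait *= 2  # back off exponentially before the next attempt
    raise last_exc
```

scrapelib itself wires this behavior into its requests session; the sketch only shows the control flow, not the real implementation.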

Written by James Turk <>, with thanks to Michael Stephens for the initial urllib2/httplib2 version.

See for contributors.


Requirements

  • python 2.7, 3.3, 3.4
  • requests >= 2.0 (earlier versions may work but aren't tested)

Example Usage


import scrapelib
s = scrapelib.Scraper(requests_per_minute=10)

# Grab Google front page
s.get('http://google.com')

# Will be throttled to 10 HTTP requests per minute
while True:
    s.get('http://example.com')
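The "pluggable cache backends" mentioned above work by handing the scraper a backend object that can look up and store responses by URL; scrapelib ships a file-based backend for this. The in-memory sketch below illustrates the idea only; `MemoryCache` and `CachingFetcher` are illustrative names, not scrapelib's actual API:

```python
class MemoryCache:
    """Illustrative cache backend: a dict keyed by URL.
    A pluggable backend only needs this get/set shape."""
    def __init__(self):
        self._store = {}

    def get(self, key):
        return self._store.get(key)

    def set(self, key, response):
        self._store[key] = response

class CachingFetcher:
    """Check the cache before fetching; store responses on a miss."""
    def __init__(self, fetch, cache):
        self.fetch = fetch
        self.cache = cache

    def get(self, url):
        cached = self.cache.get(url)
        if cached is not None:
            return cached  # cache hit: no network request made
        response = self.fetch(url)
        self.cache.set(url, response)
        return response
```

Swapping in a different backend (say, one writing to disk) changes where responses live without touching the fetching logic, which is the point of keeping the cache pluggable.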