
scrapelib

(badges: Travis CI build status, Coveralls test coverage, Documentation Status)

scrapelib is a library for making requests to less-than-reliable websites; as of version 0.7 it is implemented as a wrapper around requests.

scrapelib originated as part of the Open States project, which scrapes the websites of all 50 state legislatures, and was therefore designed with features for dealing with sites that have intermittent errors or require rate-limiting.

Advantages of using scrapelib over alternatives like httplib2 or simply using requests as-is:

  • All of the power of the superb requests library.
  • HTTP, HTTPS, and FTP requests via an identical API.
  • Support for simple caching with pluggable cache backends.
  • Request throttling.
  • Configurable retries for non-permanent site failures (see the configuration sketch after this list).
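
A minimal sketch of how the caching and retry options can be combined, assuming the FileCache backend and the retry_attempts / retry_wait_seconds keyword arguments described in the scrapelib documentation (the cache directory name is illustrative):

import scrapelib
from scrapelib.cache import FileCache

# Throttle to 60 requests/minute and retry transient failures
# up to 3 times, waiting between attempts.
s = scrapelib.Scraper(requests_per_minute=60,
                      retry_attempts=3,
                      retry_wait_seconds=10)

# Store responses on disk; the cache is write-only by default,
# so enable reads to serve repeat requests from the cache.
s.cache_storage = FileCache('cache-directory')
s.cache_write_only = False

response = s.get('http://example.com')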

Written by James Turk <dev@jamesturk.net>; thanks to Michael Stephens for the initial urllib2/httplib2 version.

See https://github.com/jamesturk/scrapelib/graphs/contributors for contributors.

Requirements

  • Python 2.7 or >= 3.3
  • requests >= 2.0 (earlier versions may work but aren't tested)

Example Usage

Documentation: http://scrapelib.readthedocs.org/en/latest/

import scrapelib
s = scrapelib.Scraper(requests_per_minute=10)

# Grab Google front page
s.get('http://google.com')

# Will be throttled to 10 HTTP requests per minute
while True:
    s.get('http://example.com')
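
The same Scraper object can also fetch FTP URLs and save responses to disk; a brief sketch, assuming the urlretrieve helper described in the documentation (the FTP host below is a placeholder for illustration):

# FTP uses the identical API
s.get('ftp://ftp.example.com/pub/file.txt')

# urlretrieve writes the response body to a file and returns
# the filename along with the response object
filename, response = s.urlretrieve('http://example.com')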