scrapelib


scrapelib is a library for making requests to less-than-reliable websites. As of version 0.7, it is implemented as a wrapper around requests.

scrapelib originated as part of the Open States project to scrape the websites of all 50 state legislatures, and was therefore designed with features desirable when dealing with sites that have intermittent errors or require rate-limiting.

Advantages of using scrapelib over alternatives like httplib2 or simply using requests as-is:

  • All of the power of the superb requests library.
  • HTTP, HTTPS, and FTP requests via an identical API.
  • Support for simple caching with pluggable cache backends.
  • Request throttling.
  • Configurable retries for non-permanent site failures (see the configuration sketch after this list).
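The caching, throttling, and retry features are all configured on the Scraper object. A minimal sketch, assuming the constructor parameters and FileCache backend described in the scrapelib documentation (check the docs for your installed version):

import scrapelib

s = scrapelib.Scraper(requests_per_minute=60,  # throttle to at most 60 requests/minute
                      retry_attempts=3,        # retry transient failures up to 3 times
                      retry_wait_seconds=10)   # wait before the first retry

# simple caching with the pluggable file-based backend
s.cache_storage = scrapelib.FileCache('cache-dir')
s.cache_write_only = False  # read cached responses back instead of re-fetching

response = s.get('http://example.com')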

Written by James Turk <james.p.turk@gmail.com>; thanks to Michael Stephens for the initial urllib2/httplib2 version.

See https://github.com/jamesturk/scrapelib/graphs/contributors for contributors.

Requirements

  • Python 2.7, 3.3, or 3.4
  • requests >= 2.0 (earlier versions may work but aren't tested)

Example Usage

Documentation: http://scrapelib.readthedocs.org/en/latest/

import scrapelib
s = scrapelib.Scraper(requests_per_minute=10)

# Grab Google front page
s.get('http://google.com')

# Will be throttled to 10 HTTP requests per minute
while True:
    s.get('http://example.com')
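
Because the scraper above was constructed with requests_per_minute=10, the loop sleeps between requests rather than issuing them back-to-back. Per the identical-API bullet above, the same get() call also covers FTP; a one-line sketch (the URL below is a hypothetical placeholder):

# FTP via the identical API (hypothetical URL)
s.get('ftp://example.com/somefile.txt')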