Grab is a Python web scraping framework. It provides many helpful methods for scraping web sites and processing the scraped content:
- Automatic cookies (session) support
- HTTP and SOCKS proxies, with and without authorization
- Keep-Alive support
- IDN support
- Tools to work with web forms
- Easy multipart file uploading
- Flexible customization of HTTP requests
- Automatic charset detection
- Powerful API for extracting data from HTML documents with XPath queries
- Asynchronous API for making thousands of simultaneous requests. This part of the library is called Spider, and it is too big to even list its features in this README.
- Python 3 ready
Usage example:

```python
from grab import Grab

g = Grab()
g.go('https://github.com/login')
g.set_input('login', 'lorien')
g.set_input('password', '***')
g.submit()
for elem in g.doc.select('//ul[@id="repo_listing"]/li/a'):
    print('%s: %s' % (elem.text(), elem.attr('href')))
```
Example of Spider usage:

```python
import logging

from grab.spider import Spider, Task


class ExampleSpider(Spider):
    def task_generator(self):
        for lang in ('python', 'ruby', 'perl'):
            url = 'https://www.google.com/search?q=%s' % lang
            yield Task('search', url=url)

    def task_search(self, grab, task):
        print(grab.doc.select('//div[@class="s"]//cite').text())


logging.basicConfig(level=logging.DEBUG)
bot = ExampleSpider()
bot.run()
```
Pip is the recommended way to install Grab and its dependencies:

```shell
$ pip install -U grab
```
See installation details at http://docs.grablib.org/en/latest/usage/installation.html
Documentation: http://docs.grablib.org/en/latest/
English mailing list: http://groups.google.com/group/grab-users/
Russian mailing list: http://groups.google.com/group/python-grab/
To report a bug, please use the GitHub issue tracker: https://github.com/lorien/grab/issues
If you want to develop a new feature for Grab, please use the issue tracker to describe what you want to do, or contact me at lorien@lorien.name