Common Crawl Job Library

Build status: https://travis-ci.org/qadium-memex/CommonCrawlJob.svg?branch=master

CommonCrawlJob extracts data from Common Crawl using Amazon Elastic MapReduce. This work is supported by Qadium Inc. as a part of the DARPA Memex Program.

Installation

The easiest way to get started is to install the library with pip. This will install the latest stable version hosted on PyPI.

$ pip install CommonCrawlJob

Alternatively, you can install the bleeding-edge version directly from GitHub. You can still use pip for this by pointing it at the repository and specifying the protocol.

$ pip install git+https://github.com/qadium-memex/CommonCrawlJob.git
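
If you plan to modify the code itself, a common alternative (standard pip workflow, not specific to this project) is to clone the repository and do an editable install, so local changes take effect without reinstalling:

$ git clone https://github.com/qadium-memex/CommonCrawlJob.git
$ cd CommonCrawlJob
$ pip install -e .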

Compatibility

Unfortunately, this code is not yet compatible with Python 3; CPython 2.7 and PyPy 2.7 are the only implementations currently tested against. The library for encoding WARC (Web Archive) file formats will need to undergo a rewrite before it is possible to have deterministic IO behavior.
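
As a quick sanity check before installing, you can confirm that you are running a supported interpreter (the exact output will vary with your environment):

$ python --version  # expect a 2.7.x (CPython or PyPy) interpreter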