# Common Crawl Index Server
## Usage & Installation
To run locally, install the dependencies with:

```
pip install -r requirements.txt
```
Common Crawl stores its data on Amazon S3, and the data can be accessed via either S3 or HTTPS. Access to the data using the S3 API is restricted to authenticated AWS users.
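As an illustration, the same index file can be addressed over either protocol. The exact layout shown below (the `cc-index/collections/.../indexes/` prefix, the `data.commoncrawl.org` endpoint, and the sample file name) is an assumption for the sake of the example:

```shell
# Sketch: build the HTTPS and S3 locations of one secondary index file.
# The path layout and endpoints below are assumptions, not guarantees.
CRAWL="CC-MAIN-2015-06"
KEY="cc-index/collections/${CRAWL}/indexes/cluster.idx"

HTTPS_URL="https://data.commoncrawl.org/${KEY}"   # anonymous HTTPS access
S3_URL="s3://commoncrawl/${KEY}"                  # requires AWS credentials

echo "${HTTPS_URL}"
echo "${S3_URL}"
```

The S3 form can then be used with the AWS CLI (e.g. `aws s3 cp`), while the HTTPS form works with any plain HTTP client.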
Currently, an individual index is available for each crawl.
Most of the index is served from S3; however, a smaller secondary index must be installed locally for each collection. This can be done automatically by running:

```
install-collections.sh
```

which installs all available collections locally, using the AWS CLI tool to sync the index.
If successful, there should be a `collections` directory containing at least one index.
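Concretely, the install step and a quick sanity check might look like this; the collection name shown in the comment is just an example:

```shell
# Sync the secondary indexes locally (uses the AWS CLI under the hood)
./install-collections.sh

# Sanity check: there should now be a collections/ directory
# containing at least one collection, e.g. CC-MAIN-2015-06
ls collections/
```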
To start the index server, run `cdx-server`. Optionally, run `wayback` to start the pywb replay system along with the CDX server.
## Running with Docker
If you have Docker installed on your system, you can run the index server in a container:
```
git clone https://github.com/commoncrawl/cc-index-server.git
cd cc-index-server
docker build . -t cc-index
docker run --rm --publish 8080:8080 -ti cc-index
```
You can use `install-collections.sh` to download the indexes to your system and mount the resulting `collections` directory into the Docker container.
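A bind mount along these lines would make a locally installed index visible inside the container. The container-side path used here is a guess at the image's layout, not a documented location:

```shell
# Hypothetical mount point inside the image -- adjust the container-side
# path to wherever the server actually looks for its collections directory.
docker run --rm \
  -v "$(pwd)/collections:/src/cc-index-server/collections" \
  --publish 8080:8080 -ti cc-index
```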
## CDX Server API
The API endpoints correspond to the existing index collections in the `collections` directory.
For example, one currently available index is `CC-MAIN-2015-06`, which can be accessed via its corresponding endpoint.
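As a sketch, assuming the server is running locally on port 8080 and that each collection is exposed under a `<collection>-index` path (an assumption modeled on the public Common Crawl index server), a lookup might look like:

```shell
# Query the CDX API for captures of a URL, returned as JSON lines.
# The "-index" endpoint suffix is an assumption; url= and output=
# are standard CDX Server API query parameters.
curl "http://localhost:8080/CC-MAIN-2015-06-index?url=commoncrawl.org&output=json"
```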
Refer to the CDX Server API documentation for more detailed instructions on the API itself.
The pywb README provides additional information about pywb.
## Building the Index
Please see the webarchive-indexing repository for more info on how the index is built.