CommonCrawl WARC/WET/WAT examples and processing code for Java + Hadoop


Common Crawl WARC Examples

This repository contains wrappers for processing WARC files in Hadoop MapReduce jobs, along with example jobs to get you started.
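To give a feel for what the wrappers deal with: each record in a WARC file begins with a version line (e.g. `WARC/1.0`), followed by `Name: Value` header lines and a blank line before the payload. A minimal, self-contained sketch of parsing that header block (this is illustrative only, not the repository's actual wrapper classes; the class and method names are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch (not the repo's wrapper API): parse the header
// block of a single WARC record into a name -> value map.
public class WarcHeaderSketch {
    public static Map<String, String> parseHeaders(String rawRecordHead) {
        Map<String, String> headers = new HashMap<>();
        String[] lines = rawRecordHead.split("\r\n");
        // lines[0] is the version line, e.g. "WARC/1.0"
        for (int i = 1; i < lines.length; i++) {
            if (lines[i].isEmpty()) break;   // blank line ends the header block
            int colon = lines[i].indexOf(':');
            if (colon > 0) {
                headers.put(lines[i].substring(0, colon).trim(),
                            lines[i].substring(colon + 1).trim());
            }
        }
        return headers;
    }

    public static void main(String[] args) {
        String head = "WARC/1.0\r\n"
                + "WARC-Type: response\r\n"
                + "Content-Length: 1024\r\n"
                + "\r\n";
        System.out.println(parseHeaders(head).get("WARC-Type")); // prints "response"
    }
}
```

The repository's record readers handle the gzipped, concatenated-record layout of real WARC files; this sketch only shows the header shape.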

There are three examples for Hadoop processing:

  • [WARC files] HTML tag frequency counter using raw HTTP responses
  • [WAT files] Server response analysis using response metadata
  • [WET files] Classic word count example using extracted text

All three examples initially assume the files are stored locally, but they can be trivially modified to pull the data from Common Crawl's Amazon S3 bucket instead. To fetch the files, you can use s3cmd or a similar tool:

s3cmd get s3://aws-publicdatasets/common-crawl/crawl-data/CC-MAIN-2013-48/segments/1386163035819/warc/CC-MAIN-20131204131715-00000-ip-10-33-133-15.ec2.internal.warc.gz
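The WET word count example reduces to tokenizing the extracted plain text and tallying tokens. A minimal sketch of that core logic, stripped of the Hadoop Mapper/Reducer wiring (the class and method names here are hypothetical; in the real job each count would be emitted as a key/value pair to the reducer):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the tokenize-and-count core of a WET word
// count job, without Hadoop job wiring. WET payloads are plain text
// already extracted from HTML, so simple whitespace splitting suffices.
public class WordCountSketch {
    public static Map<String, Integer> countWords(String extractedText) {
        Map<String, Integer> counts = new HashMap<>();
        for (String token : extractedText.toLowerCase().split("\\s+")) {
            if (!token.isEmpty()) {
                counts.merge(token, 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        Map<String, Integer> c = countWords("the quick brown fox the");
        System.out.println(c.get("the")); // prints "2"
    }
}
```

In the MapReduce version, the mapper emits each token with a count of one and the framework's shuffle plus a summing reducer replace the in-memory map.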


MIT License, as per LICENSE