Hadoop-based Web Archive Record Processing
These tools are designed to serve the following use case: quality-assured ARC to WARC migration.
Examples of how these tools can be applied in context are available in the form of Taverna workflows published on myExperiment:
- ARC to WARC Migration with CDX Index and wayback rendering screenshot comparison
- ONB Web Archive Fits Characterisation using ToMaR
Simple usage examples of the standalone Java executables
Use arc2warc-migration-cli to migrate an ARC container file to a file in the new WARC format:
java -jar hawarp/arc2warc-migration-cli/target/arc2warc-migration-cli-1.0-jar-with-dependencies.jar
-i /local/input/directory/ -o /local/output/directory/
Run DROID identification using Apache Hadoop:
hadoop jar target/droid-identify-1.0-jar-with-dependencies.jar
-d /hdfs/path/to/textfiles/with/absolutefilepaths/ -n job_name
The input for this Hadoop job is a text file listing file paths, either local file system paths (accessible from each worker node) or Hadoop Distributed File System (HDFS) paths.
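For illustration, such a path list might look like this (the paths are hypothetical):

    /data/webarchive/files/document-0001.pdf
    /data/webarchive/files/image-0001.jpg
    /data/webarchive/files/page-0001.html

These jobs follow the usual per-line mapper pattern: Hadoop splits the input text file line by line, and each map call processes the file behind one path. The sketch below shows that pattern against the standard Hadoop MapReduce API, with the actual identification call stubbed out; class and method names are illustrative, not the module's real code:

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Each input line is one absolute file path; each map() call handles one file.
    public class PathListMapper extends Mapper<LongWritable, Text, Text, Text> {

        @Override
        protected void map(LongWritable offset, Text line, Context context)
                throws IOException, InterruptedException {
            String path = line.toString().trim();
            if (path.isEmpty()) {
                return; // skip blank lines in the path list
            }
            // Placeholder: a real job would run DROID (or Tika) on the file here.
            String mimeType = "application/octet-stream";
            context.write(new Text(path), new Text(mimeType));
        }
    }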
More usage examples for the other tools can be found in the documentation of the individual modules.
The following set of command line interface applications is included in this project as modules:
arc2warc-migration-cli
Command line application to convert ARC container files to WARC, the new ISO standard format.
CDX creator
A tool to create CDX index files, which are required when the Wayback software is used to display archived resources contained in ARC or WARC container files.
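A CDX file is a sorted, space-delimited plain text index: the first line declares the field layout, and each following line maps a captured URL to its location inside a container file. The record below is purely illustrative (hypothetical URL, digest, size, offset, and file name), and the exact field set depends on how the index is generated:

     CDX N b a m s k r M S V g
    org,example)/ 20130522085321 http://example.org/ text/html 200 EXAMPLEDIGESTAAAAAAAAAAAAAAAAAAA - - 2680 123 example.warc.gz

Here the fields are the canonicalized (SURT) URL, capture timestamp, original URL, MIME type, HTTP status, content digest, redirect, meta tags, compressed record size, offset, and container file name.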
droid-identify
Hadoop job for identifying files using DROID (Digital Record Object Identification), version 6.1; see http://digital-preservation.github.io/droid/.
unpack2temp-identify
A tool to identify and/or characterise files packaged in container files, available as a standalone Java application or as a Hadoop job.
tika-identify
Hadoop job for identifying files using Apache Tika, version 1.0.
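As a rough illustration of what this identification amounts to for a single file, the sketch below uses the Tika facade class to detect a MIME type. It is a minimal standalone example, not the module's actual code:

    import java.io.File;
    import java.io.IOException;
    import org.apache.tika.Tika;

    public class TikaDetectSketch {
        public static void main(String[] args) throws IOException {
            // The Tika facade detects a MIME type from the file name and magic bytes.
            Tika tika = new Tika();
            String mimeType = tika.detect(new File(args[0]));
            System.out.println(args[0] + "\t" + mimeType);
        }
    }

Compile it against tika-core and pass a sample file path to print the detected type.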
tomar-prepare-inputdata
A tool to prepare web archive container files in the ARC format which are stored in a Hadoop Distributed File System (HDFS), so that the individual files can be processed by the SCAPE Platform tool ToMaR.
Requirements
- Java >= 1.7
- Apache Hadoop (e.g. Cloudera CDH)