Add information on usage to the README
kevinweil committed Sep 21, 2009
1 parent a756a42 commit fe08063
Showing 1 changed file with 29 additions and 0 deletions.
29 changes: 29 additions & 0 deletions README
@@ -0,0 +1,29 @@
This project builds on the great work done at code.google.com/p/hadoop-gpl-compression. As of issue 41, the differences in this codebase are:
* it fixes a few bugs in hadoop-gpl-compression -- notably, it allows the decompressor to read small or uncompressible LZO files
* it adds the ability to work with Hadoop streaming via the com.hadoop.mapred.DeprecatedLzoTextInputFormat class
* it adds an easier way to index LZO files.

LZO is a wonderful compression scheme to use with Hadoop because it is incredibly fast and, with a bit of work, splittable. Gzip is decently fast, but it cannot take advantage of Hadoop's natural map splits because it is impossible to start decompressing a gzip stream at an arbitrary offset in the file. LZO's block format makes it possible to start decompressing at certain offsets in the file -- those where a new LZO block begins.

In addition to providing LZO decompression support, these classes provide an "indexer" (com.hadoop.compression.lzo.LzoIndexer) which reads a set of LZO files and outputs the offsets of the LZO block boundaries that occur near the natural Hadoop block boundaries. This enables a large LZO file to be split across multiple mappers and processed in parallel. Because the data is compressed, less of it is read off disk, minimizing the number of IOPS required. And LZO decompression is so fast that the CPU stays ahead of the disk read, so there is no performance penalty for decompressing the data as it is read off disk.
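
To make the splitting idea concrete, here is a rough sketch in the spirit of (but not taken from) the getSplits logic described above: given the sorted LZO block offsets from an index and a target split size, snap each split start forward to the next block boundary. All names below are illustrative only.

    import java.util.ArrayList;
    import java.util.List;

    public class SplitAlignmentSketch {
      /**
       * Return the byte offsets at which splits may safely begin: each naive
       * split boundary is snapped forward to the first LZO block that starts
       * at or after it, since decompression can only begin at a block start.
       */
      static List<Long> alignSplits(long[] blockOffsets, long fileLength, long splitSize) {
        List<Long> splitStarts = new ArrayList<Long>();
        long nextBoundary = 0;
        int i = 0;
        while (nextBoundary < fileLength && i < blockOffsets.length) {
          // Advance to the first block boundary at or past the desired split start.
          while (i < blockOffsets.length && blockOffsets[i] < nextBoundary) {
            i++;
          }
          if (i < blockOffsets.length) {
            splitStarts.add(blockOffsets[i]);
            nextBoundary = blockOffsets[i] + splitSize;
          }
        }
        return splitStarts;
      }

      public static void main(String[] args) {
        long[] offsets = {0L, 70000000L, 140000000L, 210000000L};
        // With a 128 MB target split size, splits begin at block offsets 0 and 140000000.
        System.out.println(alignSplits(offsets, 250000000L, 128L * 1024 * 1024));
      }
    }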

To get started, see http://code.google.com/p/hadoop-gpl-compression/wiki/FAQ. This project is built exactly the same way; please follow the answer to "How do I configure Hadoop to use these classes?" on that page.

1. Once the libs are built and installed, you may want to add them to the Hadoop classpath and library path. That is, in hadoop-env.sh, set

export HADOOP_CLASSPATH=/path/to/your/hadoop-gpl-compression.jar
export JAVA_LIBRARY_PATH=/path/to/hadoop-gpl-native-libs:/path/to/standard-hadoop-native-libs

Note that there seems to be a bug in /path/to/hadoop/bin/hadoop; comment out the line

JAVA_LIBRARY_PATH=''

because it clobbers the JAVA_LIBRARY_PATH setting you made above.

2. At this point, you should be able to use the indexer to index lzo files in Hadoop. If big_file.lzo is a 1 GB LZO file, index it via:

hadoop jar /path/to/your/hadoop-gpl-compression.jar com.hadoop.compression.lzo.LzoIndexer big_file.lzo

and after ~10 seconds there will be a file named big_file.lzo.index. This file tells the LzoTextInputFormat's getSplits function how to break the LZO file into splits that can be decompressed and processed in parallel. Alternatively, if you specify a directory instead of a filename, the indexer will recursively walk the directory structure looking for .lzo files, indexing any that do not already have corresponding .lzo.index files.
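
For illustration only, assuming the index file is simply a flat sequence of 8-byte, big-endian block offsets (one long per LZO block) and has been copied to the local filesystem (e.g. with hadoop fs -get), a small sketch like the following would dump the offsets it contains. This class is not part of the library; it is just a way to peek at an index.

    import java.io.DataInputStream;
    import java.io.EOFException;
    import java.io.FileInputStream;

    public class DumpLzoIndex {
      // Usage: java DumpLzoIndex big_file.lzo.index
      public static void main(String[] args) throws Exception {
        try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
          while (true) {
            try {
              // Assumed format: each entry is one 8-byte, big-endian byte offset
              // of the start of an LZO block in the corresponding .lzo file.
              long blockOffset = in.readLong();
              System.out.println(blockOffset);
            } catch (EOFException eof) {
              break; // no more entries
            }
          }
        }
      }
    }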

3. Now run any job, say wordcount, over the new file. In Java-based M/R jobs, just replace any uses of TextInputFormat with LzoTextInputFormat (a minimal driver sketch follows below). In streaming jobs, add "-inputformat com.hadoop.mapred.DeprecatedLzoTextInputFormat" (streaming still uses the old APIs, and needs a class that inherits from org.apache.hadoop.mapred.InputFormat). For Pig jobs, email me or check the Pig list -- I have custom LZO loader classes that work but have not (yet) been contributed back.
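
As an illustration, a minimal new-API driver might look like the sketch below. It assumes the new-API input format class is com.hadoop.mapreduce.LzoTextInputFormat (verify the package name in your build), and for brevity it uses the default identity mapper with zero reducers, so it simply re-emits the decompressed lines.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    import com.hadoop.mapreduce.LzoTextInputFormat; // package name assumed; check your build

    public class LzoPassThrough {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "lzo-pass-through");
        job.setJarByClass(LzoPassThrough.class);

        // The only LZO-specific change: use LzoTextInputFormat where
        // TextInputFormat would normally go.
        job.setInputFormatClass(LzoTextInputFormat.class);

        // Identity mapper, no reducers: re-emit decompressed lines with their byte offsets.
        job.setNumReduceTasks(0);
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. big_file.lzo
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // output directory
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }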

Note that if you forget to index an .lzo file, the job will work but will process the entire file in a single split, which will be less efficient.
