
Merge pull request #34 from ravi77o/master

Corrected typo
2 parents ab85f0b + aada895 · commit 551f0496d3465387d9653fe9115fa540cfbd48b6 · @julienledem committed Mar 13, 2013
Showing 1 addition and 1 deletion.

+1 −1 README.md
@@ -11,7 +11,7 @@ and writers for Parquet files.
We created Parquet to make the advantages of compressed, efficient columnar data representation available to any project in the Hadoop ecosystem.
-Parquet is built from the ground up with complex nested data structures in mind, and uses the r[record shredding and assembly algorithm](https://github.com/Parquet/parquet-mr/wiki/The-striping-and-assembly-algorithms-from-the-Dremel-paper) described in the Dremel paper. We believe this approach is superior to simple flattening of nested name spaces.
+Parquet is built from the ground up with complex nested data structures in mind, and uses the [record shredding and assembly algorithm](https://github.com/Parquet/parquet-mr/wiki/The-striping-and-assembly-algorithms-from-the-Dremel-paper) described in the Dremel paper. We believe this approach is superior to simple flattening of nested name spaces.
Parquet is built to support very efficient compression and encoding schemes. Multiple projects have demonstrated the performance impact of applying the right compression and encoding scheme to the data. Parquet allows compression schemes to be specified on a per-column level, and is future-proofed to allow adding more encodings as they are invented and implemented.
