diff --git a/README.md b/README.md
index 352253cbde..5b8a79ac98 100644
--- a/README.md
+++ b/README.md
@@ -13,18 +13,18 @@ You should follow the scalding project on twitter:
 ## Word Count
 
 Hadoop is a distributed system for counting words. Here is how it's done in scalding. You can find this in examples:
-    ```scala
-    package com.twitter.scalding.examples
-
-    import com.twitter.scalding._
-
-    class WordCountJob(args : Args) extends Job(args) {
-      TextLine( args("input") ).read.
-        flatMap('line -> 'word) { line : String => line.split("\\s+") }.
-        groupBy('word) { _.size }.
-        write( Tsv( args("output") ) )
-    }
-    ```
+```scala
+package com.twitter.scalding.examples
+
+import com.twitter.scalding._
+
+class WordCountJob(args : Args) extends Job(args) {
+  TextLine( args("input") ).read.
+    flatMap('line -> 'word) { line : String => line.split("\\s+") }.
+    groupBy('word) { _.size }.
+    write( Tsv( args("output") ) )
+}
+```
 ##Tutorial
 See tutorial/ for examples of how to use the DSL.
 See tutorial/CodeSnippets.md for some
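For context on what the `WordCountJob` in the diff computes, here is a plain-Scala sketch of the same word-count logic on an in-memory collection — no Hadoop, Cascading, or scalding involved, and the `WordCountSketch` object and its `wordCount` helper are names invented for this illustration, not part of the scalding API. The empty-token filter is a small addition for robustness against leading whitespace; the job in the diff does not include it.

```scala
// Illustration only: mirrors the flatMap/groupBy/size pipeline of the
// scalding WordCountJob on a local Seq[String] instead of a TextLine source.
object WordCountSketch {
  def wordCount(lines: Seq[String]): Map[String, Int] =
    lines
      .flatMap(_.split("\\s+"))      // same tokenization as the job's flatMap
      .filter(_.nonEmpty)            // drop empty tokens from leading whitespace
      .groupBy(identity)             // same grouping key as groupBy('word)
      .map { case (w, ws) => (w, ws.size) } // same aggregation as { _.size }

  def main(args: Array[String]): Unit = {
    val counts = wordCount(Seq("hello world", "hello scalding"))
    println(counts("hello")) // 2
  }
}
```

In the real job, `TextLine` and `Tsv` handle reading and writing on the cluster; only the middle three steps carry the counting logic sketched here.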