A Spark WordCountJob example as a standalone SBT project with Specs2 tests, runnable on Amazon EMR

Spark Example Project


This is a simple word count job written in Scala for the Spark cluster computing platform, with instructions for running on Amazon Elastic MapReduce in non-interactive mode. The code is ported directly from Twitter's WordCountJob for Scalding.
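The job's core logic has the classic word count shape sketched below. To keep the sketch dependency-free it runs on plain Scala collections; the real job applies the same flatMap / map / reduceByKey shape to Spark RDDs. All names here are illustrative, not the exact ones in this project's sources.

```scala
object WordCountSketch {

  // Pure tokenizer: lowercase the line and split on non-word characters
  def tokenize(line: String): Seq[String] =
    line.toLowerCase.split("\\W+").filter(_.nonEmpty).toSeq

  // Same shape as the Spark RDD pipeline:
  //   sc.textFile(in).flatMap(tokenize).map(w => (w, 1)).reduceByKey(_ + _)
  def wordCount(lines: Seq[String]): Map[String, Int] =
    lines
      .flatMap(tokenize)
      .map(word => (word, 1))
      .groupBy { case (word, _) => word }
      .map { case (word, pairs) => word -> pairs.map(_._2).sum }
}
```

For example, `WordCountSketch.wordCount(Seq("Hack hack hack and hack"))` returns `Map("hack" -> 4, "and" -> 1)`.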

This was built by the Data Science team at Snowplow Analytics, who use Spark in their data pipelines and algorithms projects.

See also: Spark Streaming Example Project | Scalding Example Project


Assuming git, Vagrant, and VirtualBox are installed:

 host> git clone
 host> cd spark-example-project
 host> vagrant up && vagrant ssh
guest> cd /vagrant
guest> sbt assembly

The 'fat jar' is now available under the project's target folder.


Unit testing

The assembly command above runs the test suite, but you can also run it manually with:

$ sbt test
[info] + A WordCount job should
[info]   + count words correctly
[info] Passed: : Total 1, Failed 0, Errors 0, Passed 1, Skipped 0
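The transcript above comes from a Specs2 specification. A minimal sketch of such a spec follows; it is illustrative only (the class name and `counts` helper are not the exact ones in this project), and it assumes specs2-core is on the test classpath.

```scala
import org.specs2.mutable.Specification

// Illustrative sketch, not the exact spec shipped with this project
class WordCountSpec extends Specification {

  // Stand-in for the job's counting logic
  def counts(words: Seq[String]): Map[String, Int] =
    words.groupBy(identity).map { case (w, ws) => w -> ws.size }

  "A WordCount job" should {
    "count words correctly" in {
      counts(Seq("hack", "and", "hack")) must_== Map("hack" -> 2, "and" -> 1)
    }
  }
}
```

Running `sbt test` picks up any such specification on the test classpath and prints a transcript like the one above.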

Running on Amazon EMR

To run this job on Elastic MapReduce, you will need:
  1. An AWS CLI profile, e.g. spark
  2. An Amazon S3 bucket, e.g. spark-example-project-your-name
  3. An EC2 keypair, e.g. spark-ec2-keypair
  4. A VPC public subnet, e.g. subnet-3dc2bd2a
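The AWS CLI profile from step 1 is a named profile in your AWS CLI configuration files, for example (all values below are placeholders):

```
# ~/.aws/credentials
[spark]
aws_access_key_id     = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>

# ~/.aws/config
[profile spark]
region = us-east-1
```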

Make sure you have assembled the jarfile (see above).

Upload and run

guest> inv upload spark spark-example-project-your-name
guest> inv run_emr spark spark-example-project-your-name spark-ec2-keypair subnet-3dc2bd2a

You can now monitor the running EMR jobflow in the AWS Elastic MapReduce UI.


Once the job has completed, you should see a folder structure like this in your output bucket:

 +- part-00000
 +- part-00001
 +- part-00002
 +- part-...

Download the part files and check that, between them, they contain the expected word counts, one (word, count) tuple per line.

Running on your own Spark cluster

If you have successfully run this on your own Spark cluster, we would welcome a pull-request updating the instructions in this section.

Next steps

Fork this project and adapt it into your own custom Spark job.

To invoke or schedule your Spark job on EMR, check out the EMR orchestration tool of your choice.

One planned improvement for this project:

  • Change output from tuples to TSV (issue #2)

Copyright and license

Copyright 2013-2015 Snowplow Analytics Ltd.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this software except in compliance with the License.

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
