Programmatic access to Amazon's Elastic Map Reduce service, driven by the Sharethrough team's requirements for belting out EMR jobs.

Elasticity provides programmatic access to Amazon's Elastic Map Reduce service. The aim is to conveniently abstract away the complex EMR REST API and make working with job flows more productive and more enjoyable.

Tested against REE and Ruby 1.8.7, 1.9.2 and 1.9.3.

Elasticity provides two ways to access EMR:

  • Indirectly, through a JobFlow-based API. This README discusses the Elasticity API.
  • Directly, through access to the EMR REST API. The less-discussed dark side... I use this to implement the Elasticity API. RubyDoc can be found at the RubyGems auto-generated documentation site. Be forewarned: making the calls directly requires that you understand how to structure EMR requests at the Amazon API level, and from experience I can tell you there are more fun things you could be doing :) Scroll to the end for more information on the Amazon API.


gem install elasticity

or in your Gemfile

gem 'elasticity', '~> 2.5'

Locking to the minor version protects you from breaking API changes, which are only made in major revisions.
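
For reference, the pessimistic constraint above can be checked with Ruby's built-in Gem::Requirement: '~> 2.5' admits any 2.x release at or above 2.5 but excludes 3.0.

```ruby
require 'rubygems'

# '~> 2.5' means ">= 2.5 and < 3.0": minor and patch updates are allowed,
# but a new major version (which may break the API) is not.
req = Gem::Requirement.new('~> 2.5')

req.satisfied_by?(Gem::Version.new('2.5.0'))  # => true
req.satisfied_by?(Gem::Version.new('2.9.1'))  # => true
req.satisfied_by?(Gem::Version.new('3.0.0'))  # => false
```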

Roughly, What Am I Getting Myself Into?

If you're familiar with the AWS EMR UI, you'll recall there are sample jobs Amazon supplies to help us get familiar with EMR. Here's how you'd kick off the "Cloudburst (Custom Jar)" sample job with Elasticity. You can run this code as-is (supplying your AWS credentials and an output location) and JobFlow#run will return the ID of the job flow.

require 'elasticity'

# Create a job flow with your AWS credentials
jobflow = Elasticity::JobFlow.new('AWS access key', 'AWS secret key')

# Omit credentials to use the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables
# jobflow = Elasticity::JobFlow.new

# This is the first step in the jobflow - running a custom jar
step = Elasticity::CustomJarStep.new('s3n://elasticmapreduce/samples/cloudburst/cloudburst.jar')

# Here are the arguments to pass to the jar
step.arguments = %w(s3n://elasticmapreduce/samples/cloudburst/input/ s3n://elasticmapreduce/samples/cloudburst/input/ s3n://OUTPUT_BUCKET/cloudburst/output/2012-06-22 36 3 0 1 240 48 24 24 128 16)

# Add the step to the jobflow
jobflow.add_step(step)

# Let's go!
jobflow.run
Note that this example is only for CustomJarStep. Other steps will have different means of passing parameters.

Working with Job Flows

Job flows are the center of the EMR universe. The general order of operations is:

  1. Create a job flow.
  2. Specify options.
  3. (optional) Configure instance groups.
  4. (optional) Add bootstrap actions.
  5. Add steps.
  6. (optional) Upload assets.
  7. Run the job flow.
  8. (optional) Add additional steps.
  9. (optional) Shutdown the job flow.

1 - Create a Job Flow

Only your AWS credentials are needed.

# Manually specify AWS credentials
jobflow = Elasticity::JobFlow.new('AWS access key', 'AWS secret key')

# Use the standard environment variables (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY)
jobflow = Elasticity::JobFlow.new

If you want to access a job flow that's already running:

# Manually specify AWS credentials
jobflow = Elasticity::JobFlow.from_jobflow_id('AWS access key', 'AWS secret key', 'jobflow ID', 'region')

# Use the standard environment variables (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY)
jobflow = Elasticity::JobFlow.from_jobflow_id(nil, nil, 'jobflow ID', 'region')

This is useful if you'd like to attach to a running job flow and add more steps, etc. The region parameter is necessary because job flows are only accessible from the API when you connect to the same endpoint that created them (e.g. us-west-1). If you don't specify the region parameter, us-east-1 is assumed.
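
To illustrate why the region matters (this helper is hypothetical, not part of Elasticity): each region exposes its own EMR endpoint, and requests about a job flow must go to the endpoint of the region that created it.

```ruby
# Hypothetical helper showing EMR's per-region endpoint scheme.
# A job flow created in us-west-1 is only visible via the us-west-1 endpoint.
def emr_endpoint(region = 'us-east-1')
  "https://elasticmapreduce.#{region}.amazonaws.com"
end

emr_endpoint               # => "https://elasticmapreduce.us-east-1.amazonaws.com"
emr_endpoint('us-west-1')  # => "https://elasticmapreduce.us-west-1.amazonaws.com"
```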

2 - Specifying Options

Configure job flow options, shown below with their default values. Note that these defaults are subject to change; they were reasonable defaults at the time(s) I worked on them (e.g. the latest version of Hadoop).

These options are sent up as part of job flow submission (i.e. JobFlow#run), so be sure to configure these before running the job.

jobflow.name                              = 'Elasticity Job Flow'

jobflow.action_on_failure                 = 'TERMINATE_JOB_FLOW'
jobflow.keep_job_flow_alive_when_no_steps = false
jobflow.ami_version                       = 'latest'
jobflow.hadoop_version                    = '1.0.3'
jobflow.log_uri                           = nil

jobflow.ec2_key_name                      = nil
jobflow.ec2_subnet_id                     = nil
jobflow.placement                         = 'us-east-1a'
jobflow.instance_count                    = 2
jobflow.master_instance_type              = 'm1.small'
jobflow.slave_instance_type               = 'm1.small'

3 - Configure Instance Groups (optional)

Technically this is optional since Elasticity creates MASTER and CORE instance groups for you (one m1.small instance in each). If you'd like your jobs to finish in an appreciable amount of time, you'll want to at least add a few instances to the CORE group :)

The Easy Way™

If all you'd like to do is change the type or number of instances, JobFlow provides a few shortcuts to do just that.

jobflow.instance_count       = 10
jobflow.master_instance_type = 'm1.small'
jobflow.slave_instance_type  = 'c1.medium'

This says "I want 10 instances from EMR: one m1.small MASTER instance and nine c1.medium CORE instances."
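
The breakdown can be sketched as follows (a hypothetical helper matching the description above, not Elasticity's own code): one MASTER instance of the master type, with the remaining count forming the CORE group.

```ruby
# Illustrative only: how the shortcut settings map onto instance groups.
# The MASTER group always gets one instance; the rest become CORE instances.
def group_breakdown(instance_count, master_type, slave_type)
  {
    master: { count: 1,                  type: master_type },
    core:   { count: instance_count - 1, type: slave_type  }
  }
end

groups = group_breakdown(10, 'm1.small', 'c1.medium')
groups[:master][:count]  # => 1
groups[:core][:count]    # => 9
```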

The Still-Easy Way™

Elasticity supports all EMR instance group types and all configuration options. The MASTER, CORE and TASK instance groups can be configured via JobFlow#set_master_instance_group, JobFlow#set_core_instance_group and JobFlow#set_task_instance_group respectively.

On-Demand Instance Groups

These instances will be available for the life of your EMR job, versus Spot instances which are transient depending on your bid price (see below).

ig = Elasticity::InstanceGroup.new
ig.count = 10                       # Provision 10 instances
ig.type  = 'c1.medium'              # See the EMR docs for a list of supported types
ig.set_on_demand_instances          # This is the default setting

jobflow.set_core_instance_group(ig)

Spot Instance Groups

When Amazon EC2 has unused capacity, it offers EC2 instances at a reduced cost, called the Spot Price. This price fluctuates based on availability and demand. You can purchase Spot Instances by placing a request that includes the highest bid price you are willing to pay for those instances. When the Spot Price is below your bid price, your Spot Instances are launched and you are billed the Spot Price. If the Spot Price rises above your bid price, Amazon EC2 terminates your Spot Instances. - EMR Developer Guide

ig = Elasticity::InstanceGroup.new
ig.count = 10                       # Provision 10 instances
ig.type  = 'c1.medium'              # See the EMR docs for a list of supported types
ig.set_spot_instances(0.25)         # Makes this a SPOT group with a $0.25 bid price

jobflow.set_core_instance_group(ig)

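A toy illustration of those billing semantics (nothing Elasticity-specific): you pay the fluctuating spot price while it stays at or below your bid, and lose the instances when it rises above.

```ruby
# While spot_price <= bid_price the instances run and you are billed the
# spot price (not your bid); above the bid they are terminated.
def spot_billing(spot_price, bid_price)
  spot_price <= bid_price ? spot_price : :terminated
end

spot_billing(0.18, 0.25)  # => 0.18 (billed at the spot price, not the bid)
spot_billing(0.30, 0.25)  # => :terminated
```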
4 - Add Bootstrap Actions (optional)

Bootstrap actions are run as part of setting up the job flow, so be sure to configure these before running the job.

Bootstrap Actions

With the basic BootstrapAction you specify everything about the action - the script, options and arguments.

action = Elasticity::BootstrapAction.new('s3n://my-bucket/my-script', '-g', '100')
jobflow.add_bootstrap_action(action)

Hadoop Bootstrap Actions

HadoopBootstrapAction handles passing Hadoop configuration options through.

[
  Elasticity::HadoopBootstrapAction.new('-m', ''),
  Elasticity::HadoopBootstrapAction.new('-m', ''),
  Elasticity::HadoopBootstrapAction.new('-m', '')
].each do |action|
  jobflow.add_bootstrap_action(action)
end

Hadoop File Bootstrap Actions

With EMR's current limit of 15 bootstrap actions, chances are you're going to create a configuration file full of your options and opt to use that instead of passing all the options individually. In that case, use the HadoopFileBootstrapAction, supplying the location of your configuration file.

action = Elasticity::HadoopFileBootstrapAction.new('s3n://my-bucket/job-config.xml')
jobflow.add_bootstrap_action(action)

5 - Add Steps

Each type of step has #name and #action_on_failure fields that can be overridden. Apart from that, steps are configured differently - exhaustively described below.

Adding a Pig Step

# Path to the Pig script
pig_step = Elasticity::PigStep.new('s3n://mybucket/script.pig')

# (optional) These variables are available during the execution of your script
pig_step.variables = {
  'VAR1' => 'VALUE1',
  'VAR2' => 'VALUE2'
}

jobflow.add_step(pig_step)

Given the importance of specifying a reasonable value for the number of parallel reducers (Pig's PARALLEL clause), Elasticity calculates and passes up a reasonable default with every invocation, in the form of a script variable called E_PARALLELS. This default value is based on the formula in the Pig Cookbook and the number of reducers AWS configures per instance.

For example, if you had 8 instances in total and your slaves were m1.xlarge, the value is 26 (as shown below).

        -p INPUT=s3n://elasticmapreduce/samples/pig-apache/input
        -p OUTPUT=s3n://slif-elasticity/pig-apache/output/2011-05-04
        -p E_PARALLELS=26
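
The calculation can be sketched as follows. The reduce-slot counts below are assumptions for illustration, based on AWS's per-instance-type Hadoop configuration; they are not taken from Elasticity itself.

```ruby
# Pig Cookbook heuristic: roughly 0.9 * the cluster's total reduce slots.
# Reduce slots per instance type are assumed here for illustration.
REDUCE_SLOTS = { 'm1.small' => 1, 'm1.xlarge' => 4 }

def e_parallels(instance_count, slave_type)
  # The MASTER instance doesn't run tasks, so only the slaves contribute slots.
  slots = (instance_count - 1) * REDUCE_SLOTS[slave_type]
  (0.9 * slots).ceil
end

e_parallels(8, 'm1.xlarge')  # => 26, matching the example above
```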

Use this as you would any other Pig variable.

  A = LOAD 'myfile' AS (t, u, v);
  B = GROUP A BY t PARALLEL $E_PARALLELS;

Adding a Hive Step

# Path to the Hive Script
hive_step = Elasticity::HiveStep.new('s3n://mybucket/script.hql')

# (optional) These variables are available during the execution of your script
hive_step.variables = {
  'VAR1' => 'VALUE1',
  'VAR2' => 'VALUE2'
}

jobflow.add_step(hive_step)

Adding a Streaming Step

# Input bucket, output bucket, mapper and reducer scripts
streaming_step = Elasticity::StreamingStep.new('s3n://elasticmapreduce/samples/wordcount/input', 's3n://elasticityoutput/wordcount/output/2012-07-23', 's3n://elasticmapreduce/samples/wordcount/', 'aggregate')

jobflow.add_step(streaming_step)

Adding a Custom Jar Step

# Path to your jar
jar_step = Elasticity::CustomJarStep.new('s3n://mybucket/my.jar')

# (optional) Arguments passed to the jar
jar_step.arguments = ['arg1', 'arg2']

jobflow.add_step(jar_step)

6 - Upload Assets (optional)

This isn't part of JobFlow; more of an aside. Elasticity provides a very basic means of uploading assets to S3 so that your EMR job has access to them. Most commonly this will be a set of resources to run the job (e.g. JAR files, streaming scripts, etc.) and a set of resources used by the job itself (e.g. a TSV file with a range of valid values, join tables, etc.).

# Specify the bucket name, AWS credentials and region
s3 = Elasticity::SyncToS3.new('my-bucket', 'access', 'secret', 'region')

# Alternatively, specify nothing :)
# - Use the standard environment variables (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY)
# - Use the 'us-east-1' region by default
# s3 = Elasticity::SyncToS3.new('my-bucket')

# Recursively sync the contents of '/foo' under the remote location 'remote-dir/this-job'
s3.sync('/foo', 'remote-dir/this-job')

# Sync a single file to a remote directory
s3.sync('/foo/this-job/tables/join.tsv', 'remote-dir/this-job/tables')

If the bucket doesn't exist, it will be created.

If a file already exists, its MD5 checksum is compared against that of the local file; when the checksums match, the file is skipped. You can then reference something like s3n://my-bucket/remote-dir/this-job/tables/join.tsv in your EMR jobs.
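
The skip check can be sketched like this. This is a simplified stand-in, not Elasticity's actual code; it assumes the remote side exposes a plain MD5 (as S3 does via the ETag for non-multipart uploads).

```ruby
require 'digest'
require 'tempfile'

# Upload only when the local file's MD5 differs from the remote checksum.
def needs_upload?(local_path, remote_md5)
  Digest::MD5.file(local_path).hexdigest != remote_md5
end

local = Tempfile.new('join.tsv')
local.write("a\t1\n")
local.flush

needs_upload?(local.path, Digest::MD5.hexdigest("a\t1\n"))  # => false (skip)
needs_upload?(local.path, 'someotherchecksum')              # => true  (upload)
```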

7 - Run the Job Flow

Submit the job flow to Amazon, storing the ID of the running job flow.

jobflow_id = jobflow.run

8 - Add Additional Steps (optional)

Steps can be added to a running job flow simply by calling #add_step on the job flow, exactly as you add them before submitting the job.

9 - Shut Down the Job Flow (optional)

By default, job flows are set to terminate when there are no more running steps. You can tell the job flow to stay alive when it has nothing left to do:

jobflow.keep_job_flow_alive_when_no_steps = true

If that's the case, or if you'd just like to terminate a running job flow before it finishes:

jobflow.shutdown
Amazon EMR Documentation

Elasticity wraps all of the EMR API calls. Please see the Amazon guide for details on these operations because the default values aren't obvious (e.g. the meaning of DescribeJobFlows without parameters).

You may opt for "direct" access to the API where you specify the params and Elasticity takes care of the signing for you, responding with the XML from Amazon.

In addition to the AWS EMR site, there are three primary reference resources for EMR: the EMR Developer Guide and the HTML and PDF versions of the API reference.

Unfortunately, the documentation is sometimes incorrect and sometimes missing. E.g. the allowable values for AddInstanceGroups are present in the PDF version of the API reference but not in the HTML version. Elasticity implements the API as specified in the PDF reference as that is the most complete description I could find.



  Copyright 2011-2012 Robert Slifka

  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.