```r
read_chunk('../pkg/tests/getting-data-in-and-out.R')
read_chunk('../pkg/tests/wordcount.R')
opts_chunk$set(echo=TRUE, eval=FALSE, cache=FALSE, tidy=FALSE)
```

```r
library(rmr2)
```
  • This document responds to several inquiries on data formats and how to get data in and out of the rmr system
  • Still more a collection of snippets than an organized document
  • Thanks to Damien and @ryangarner for the examples and to Koert for conversations on the subject

Internally rmr uses R's own serialization in most cases and its own typedbytes extension for some atomic vectors. The goal is to make you forget about representation issues most of the time. But what happens at the boundary of the system, when you need to get non-rmr data in and out of it? Of course rmr has to be able to read and write a variety of formats to be of any use. This is what is available and how to extend it.

Built in formats

The complete list is:

  1. text: for English text. The key is NULL and the value is a string, one per line. Please don't use it for anything else.
  2. json-ish: it is actually <JSON\tJSON\n>, so that streaming can tell key from value. This implies you have to escape all newlines and tabs in the JSON part. Your data may not be in this form, but almost any language has decent JSON libraries. It was the default in rmr 1.0, and we'll keep it because it is almost standard. Parsed in C for efficiency, it should handle large objects.
  3. csv: A family of concrete formats modeled after R's own read.table. See examples below.
  4. native: based on R's own serialization, it is the default and supports everything that R's serialize supports. If you want to know the gory details, it is implemented as an application-specific type for the typedbytes format, which is further encapsulated in the sequence file format when writing to HDFS, which ... Don't worry about it, it just works. Unfortunately, it is written and read by only one package, rmr itself.
  5. sequence.typedbytes: based on the spec in HADOOP-1722, it has emerged as the standard for non-Java Hadoop applications talking to the rest of Hadoop. Also implemented in C for efficiency; its underlying data model is different from R's, and we tried to map between the two systems as best we could.

The easy way

Specify one of these format strings directly as an argument to mapreduce, from.dfs, or to.dfs.

mapreduce(input, input.format = "json")

Custom formats light

Use make.input.format with a string format argument and additional arguments that select a variant of that format. A typical example is csv, which is actually a family of character-separated formats with lots of variation in the details. In this case you can call something like

mapreduce(input, input.format = make.input.format("csv", sep = "\t"))

which says to use a CSV format with a tab separator. For this format the additional arguments are, with few exceptions, the same as those of read.table. The same is true on the output side with make.output.format, where the model for the additional arguments is write.table.
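For instance, reading and writing tab separated files in the same job could look like this (a sketch; input stands for whatever HDFS path you are processing):

```r
mapreduce(input,
          input.format = make.input.format("csv", sep = "\t"),
          output.format = make.output.format("csv", sep = "\t"))
```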

Custom formats

A format is a triple. You can create one with make.input.format, for instance:
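A sketch (the reader here is purely illustrative and just treats every line as a value):

```r
make.input.format(
  mode = "text",
  format = function(con, nrows) {
    lines <- readLines(con, nrows)
    if(length(lines) == 0) NULL
    else keyval(NULL, lines)},
  streaming.format = NULL)  # NULL means the default Java side, TextInputFormat
```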

The mode element can be text or binary. The format element is a function that takes a connection, reads nrows records and creates a key-value object. The streaming.format element is a fully qualified Java class (as a string) that writes to the connection the format function reads from. The default is TextInputFormat; org.apache.hadoop.streaming.AutoInputFormat is also useful. Once you have these three elements you can pass them to make.input.format and get something that can be used as the input.format option to mapreduce and the format option to from.dfs. On the output side the situation is reversed, with the R function acting first and then the Java class doing its thing.

R data types work natively, without additional effort.

Put into HDFS:
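A sketch, with some made up data (my.data is just a list of numeric vectors of varying length):

```r
my.data <- lapply(1:100, function(i) rnorm(sample(1:10, 1)))
hdfs.data <- to.dfs(my.data)
```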

my.data needs to be one of: vector, data frame, list or matrix. We then compute a frequency count of object lengths; the job only requires an input, a mapper and a reducer. Note that my.data is passed into the mapper, record by record, as key = NULL, value = subrange.
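A sketch of that job, with our own minimal map and reduce functions:

```r
length.freq <- mapreduce(
  input = hdfs.data,
  # each value is an element of my.data, so emit its length as the key
  map = function(k, v) keyval(sapply(v, length), 1),
  # sum the ones emitted for each distinct length
  reduce = function(k, counts) keyval(k, sum(counts)))
from.dfs(length.freq)
```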

However, when using data that was not generated with rmr (txt, csv, tsv, JSON, log files, etc.), it is necessary to specify an input format.

To define your own input.format (e.g. to handle tsv):
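One way to sketch it (the names tsv.reader and tsv.format are ours; each line is split on tabs and the pieces are returned as a data frame):

```r
tsv.reader <- function(con, nrows) {
  lines <- readLines(con, nrows)
  if(length(lines) == 0)
    NULL
  else {
    # one row per line, one column per tab separated field
    fields <- do.call(rbind, strsplit(lines, split = "\t"))
    keyval(NULL, as.data.frame(fields, stringsAsFactors = FALSE))}}
tsv.format <- make.input.format(mode = "text", format = tsv.reader)
```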

Frequency count on column two of the tsv input; the data comes into the map function already delimited:
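A sketch of that count (tsv.data stands for the HDFS location of the tsv file and is an assumption here):

```r
col2.freq <- mapreduce(
  input = tsv.data,
  input.format = tsv.format,
  # the reader delivers a data frame, so column two is just v[[2]]
  map = function(k, v) keyval(v[[2]], 1),
  reduce = function(k, counts) keyval(k, sum(counts)))
```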

Or, if you want named columns (this part would be specific to your data file):
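For instance, assuming a three column file of dates, users and urls (the column names are made up; adapt them to your file):

```r
tsv.reader <- function(con, nrows) {
  lines <- readLines(con, nrows)
  if(length(lines) == 0)
    NULL
  else {
    fields <- do.call(rbind, strsplit(lines, split = "\t"))
    keyval(NULL,
           data.frame(date = fields[, 1],
                      user = fields[, 2],
                      url  = fields[, 3],
                      stringsAsFactors = FALSE))}}
tsv.format <- make.input.format(mode = "text", format = tsv.reader)
```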

You can then use those names to access your column of interest directly for manipulation:
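For example, a per-user count (a sketch built on the made up names above):

```r
user.freq <- mapreduce(
  input = tsv.data,
  input.format = tsv.format,
  map = function(k, v) keyval(v$user, 1),
  reduce = function(k, counts) keyval(k, sum(counts)))
```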

Another common input.format is fixed width formatted data:
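A sketch of such a reader (fwf.reader is our name; it relies on a global fields variable, defined a little further down, that maps column names to start and stop byte positions):

```r
fwf.reader <- function(con, nrows) {
  # read one line per call regardless of nrows, as described below
  lines <- readLines(con, 1)
  if(length(lines) == 0)
    NULL
  else {
    # cut each column out of the line using the byte positions in 'fields'
    columns <- lapply(fields, function(f) as.numeric(substr(lines, f[1], f[2])))
    keyval(NULL, as.data.frame(columns))}}
```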

Using the text output.format as a template, we modify it slightly to write fixed width data without tab separation:
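A sketch of the writer, padding every value to 6 characters (the width the next chunk assumes) instead of inserting tabs:

```r
fwf.writer <- function(kv, con) {
  lines <-
    apply(values(kv), 1,
          function(row) paste(formatC(row, width = 6), collapse = ""))
  writeLines(lines, con = con)}
fwf.output.format <- make.output.format(mode = "text", format = fwf.writer)
```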

Writing the mtcars dataset to a fixed width file with column widths of 6 bytes and putting it into HDFS:
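Something like this (a sketch; to.dfs takes a format argument, so we can hand it the fixed width writer just defined):

```r
fwf.data <- to.dfs(keyval(NULL, mtcars), format = fwf.output.format)
```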

The key thing to note about fwf.reader is the global variable fields. In fields, we define the start and end byte locations for each field in the data:
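With 6-byte columns, column i of mtcars spans bytes (i - 1) * 6 + 1 through i * 6, so fields can be built like this:

```r
fields <- setNames(
  lapply(seq_along(names(mtcars)),
         function(i) c((i - 1) * 6 + 1, i * 6)),
  names(mtcars))
```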

Sending 1 line at a time to the map function:
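Since fwf.reader reads a single line per call, all that is left is to wrap it up (a sketch):

```r
fwf.input.format <- make.input.format(mode = "text", format = fwf.reader)
```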

Frequency count on cyl:
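A sketch, reading the fixed width file back in with the format just defined:

```r
cyl.freq <- mapreduce(
  input = fwf.data,
  input.format = fwf.input.format,
  map = function(k, v) keyval(v$cyl, 1),
  reduce = function(k, counts) keyval(k, sum(counts)))
from.dfs(cyl.freq)
```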

To get your data out, say you read an input file, apply column transformations, add columns, and want to write out a new csv file. Just like with input.format, one must define an output format function:
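A sketch of such a function, following the write.table model the built-in csv format uses:

```r
csv.writer <- function(kv, con) {
  write.table(x = values(kv),
              file = con,
              sep = ",",
              quote = FALSE,
              row.names = FALSE,
              col.names = FALSE)}
```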

And then use that as an argument to make.output.format, but why sweat it since the devs have already done the work?
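Either route works; sketched side by side:

```r
# wrap the hand-rolled writer...
csv.output.format <- make.output.format(mode = "text", format = csv.writer)
# ...or just ask for the built-in csv family directly
csv.output.format <- make.output.format("csv", sep = ",")
```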

This time we provide the output argument so the result can be extracted from HDFS (one cannot hdfs.get from an RHadoop big data object):
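A sketch; the HDFS path is made up and the map just passes records through (your column transformations would go there):

```r
result <- mapreduce(
  input = fwf.data,
  input.format = fwf.input.format,
  output = "/tmp/mtcars-out",          # hypothetical HDFS path
  output.format = csv.output.format,
  map = function(k, v) keyval(NULL, v))
# the output can now be copied to the local filesystem with rhdfs, e.g.
# hdfs.get("/tmp/mtcars-out", "mtcars-out")
```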
