`r opts_chunk$set(echo=TRUE, eval=FALSE, cache=FALSE, tidy=FALSE)`
- This document responds to a number of inquiries about data formats and how to get data in and out of the `rmr` system
- It is still more a collection of snippets than anything organized
- Thanks to Damien and @ryangarner for the examples and to Koert for conversations on the subject
`rmr` uses R's own serialization in most cases and its own typedbytes extension for some atomic vectors. The goal is to make you forget about representation issues most of the time. But what happens at the boundary of the system, when you need to get non-`rmr` data in and out of it? Of course `rmr` has to be able to read and write a variety of formats to be of any use. This is what is available and how to extend it.
## Built-in formats
The complete list is:
- `text`: for English text. The key is `NULL` and the value is a string, one per line. Please don't use it for anything else.
- `json`-ish: it is actually `<JSON\tJSON\n>` so that streaming can tell key and value apart. This implies you have to escape all newlines and tabs in the JSON part. Your data may not be in this form, but almost any language has decent JSON libraries. It was the default in `rmr` 1.0, but we'll keep it because it is almost standard. Parsed in C for efficiency, it should handle large objects.
- `csv`: a family of concrete formats modeled after R's own `read.table`. See the examples below.
- `native`: based on R's own serialization, it is the default and supports everything that R's `serialize` supports. If you want to know the gory details, it is implemented as an application-specific type for the typedbytes format, which is further encapsulated in the sequence file format when writing to HDFS, which ... don't worry about it, it just works. Unfortunately, it is written and read by only one package, `rmr` itself.
- `sequence.typedbytes`: based on the spec in HADOOP-1722, it has emerged as the standard for non-Java Hadoop applications talking to the rest of Hadoop. Also implemented in C for efficiency; its underlying data model is different from R's, and we have mapped the two systems as best we could.
A format is a triple of `mode`, `format`, and `streaming.format` elements. You can create one with `make.input.format`.
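For instance, a minimal sketch (assuming the `rmr2` package; in the 1.x series the package is simply `rmr`):

```{r}
library(rmr2)
fmt = make.input.format("csv")
fmt$mode              # "text"
fmt$format            # an R function that reads records from a connection
fmt$streaming.format  # Java class name as a string; NULL means the default
```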
The `mode` element can be `"text"` or `"binary"`. The `format` element is a function that takes a connection, reads `nrows` records, and creates a key-value pair. The `streaming.format` element is a fully qualified Java class (as a string) that writes to the connection the `format` function reads from. The default is `TextInputFormat`, and `org.apache.hadoop.streaming.AutoInputFormat` is also useful. Once you have these three elements, you can pass them to `make.input.format` and get something that can be used as the `input.format` option to `mapreduce` and as the `format` option to `from.dfs`. On the output side the situation is reversed, with the R function acting first and then the Java class doing its thing.
Native R data types work without additional effort. Take a small sample object and put it into HDFS:
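A minimal sketch, with `my.data` as a hypothetical sample object:

```{r}
library(rmr2)  # or library(rmr) for the 1.x series
# a small ad hoc object; any R data type works
my.data = list(TRUE, 1, "a", list(1, 2))
# write it to HDFS in the default native format
hdfs.data = to.dfs(my.data)
```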
`my.data` is coerced to a list and each element of the list becomes a record.
Compute the frequency of the lengths of those objects. This requires only an input, a mapper, and a reducer. Note that `my.data` is passed into the mapper record by record, as `key = NULL, value = item`.
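A sketch of such a job, assuming the per-record map signature of the `rmr` 1.x series:

```{r}
result = mapreduce(
  input  = hdfs.data,
  map    = function(k, v) keyval(length(v), 1),
  reduce = function(k, vv) keyval(k, sum(unlist(vv))))
from.dfs(result)
```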
However, when using data that was not generated with `rmr` (txt, csv, tsv, JSON, log files, etc.), it is necessary to specify an input format.
There is a third option in between the simplicity of a string like `"csv"` and the full power of `make.input.format`: passing the format string to `make.input.format` together with additional arguments that specify the particular dialect of csv, as in `make.input.format("csv", sep = ';')`. `csv` is the only format offering this possibility, as the others are fully specified, and it takes the same options as `read.table`. The same holds on the output side, with `write.table` being the model.
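For example, a sketch of a semicolon-separated dialect on both sides (option names follow `read.table`/`write.table`):

```{r}
csv.semicolon.in  = make.input.format("csv", sep = ";")
csv.semicolon.out = make.output.format("csv", sep = ";")
```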
To define your own `input.format` (e.g. to handle tsv):
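A sketch of a line-at-a-time tsv reader (the names `tsv.reader` and `tsv.format` are illustrative):

```{r}
tsv.reader = function(con, nrows) {
  # read one line from the connection
  lines = readLines(con, 1)
  if (length(lines) == 0)
    NULL
  else {
    # split on tabs: the first field is the key, the rest the value
    fields = strsplit(lines, split = "\t")[[1]]
    keyval(fields[1], fields[-1])}}
tsv.format = make.input.format(mode = "text", format = tsv.reader)
```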
Frequency count on input column two of the tsv data; the data comes into the map function already delimited:
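A sketch, with `tsv.data` standing in for the HDFS location of the tsv input:

```{r}
freq.counts = mapreduce(
  input = tsv.data,
  input.format = tsv.format,
  map = function(k, v) keyval(v[[1]], 1),  # v[[1]] is input column two
  reduce = function(k, vv) keyval(k, sum(unlist(vv))))
```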
Or, if you want named columns (this would be specific to your data file):
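The same reader, returning the value fields as a named list (the column names here are made up for illustration):

```{r}
tsv.reader = function(con, nrows) {
  lines = readLines(con, 1)
  if (length(lines) == 0)
    NULL
  else {
    fields = strsplit(lines, split = "\t")[[1]]
    # hypothetical column names; adapt to your file
    keyval(fields[1],
           list(location = fields[2], name = fields[3], value = fields[4]))}}
tsv.format = make.input.format(mode = "text", format = tsv.reader)
```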
You can then use the list names to directly access your column of interest for manipulations:
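For example, counting by the hypothetical `name` column:

```{r}
freq.counts = mapreduce(
  input = tsv.data,
  input.format = tsv.format,
  map = function(k, v) keyval(v$name, 1),
  reduce = function(k, vv) keyval(k, sum(unlist(vv))))
```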
Another case where you need to define your own `input.format` is fixed-width formatted data:
Using the text `output.format` as a template, we modify it slightly to write fixed-width data without tab separation:
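A sketch, assuming the output `format` function receives a key, a value, and the connection to write to (mirroring the input side; check your `rmr` version for the exact signature):

```{r}
fwf.writer = function(k, v, con) {
  # pad every field to exactly 6 characters, no separator
  writeLines(paste(formatC(unlist(v), width = 6), collapse = ""), con = con)}
fwf.format = make.output.format(mode = "text", format = fwf.writer)
```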
Writing the `mtcars` dataset to a fixed-width file with column widths of 6 bytes and putting it into HDFS:
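A sketch; the output path is hypothetical:

```{r}
# one record per row of mtcars
fwf.data = to.dfs(lapply(1:nrow(mtcars), function(i) mtcars[i, ]))
mapreduce(
  input = fwf.data,
  output = "/tmp/mtcars.fwf",
  output.format = fwf.format,
  map = function(k, v) keyval(NULL, v))
```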
The key thing to note about `fwf.reader` is the global variable `fields`, in which we define the start and end byte locations for each field in the data:
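A sketch matching the 6-byte columns written above (`mtcars` has 11 columns, hence 11 fields):

```{r}
fields = data.frame(start = seq(1, 61, by = 6), end = seq(6, 66, by = 6))
fwf.reader = function(con, nrows) {
  lines = readLines(con, 1)
  if (length(lines) == 0)
    NULL
  else
    # substring is vectorized over start/end: one element per field
    keyval(NULL, substring(lines, fields$start, fields$end))}
```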
Sending one line at a time to the map function:
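Wiring the reader into an input format:

```{r}
fwf.input.format = make.input.format(mode = "text", format = fwf.reader)
```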
Frequency count on one of the columns (here the second field, which holds `cyl` in `mtcars`, chosen for illustration):
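A sketch; the fields come back as padded strings, so we convert before counting:

```{r}
freq.counts = mapreduce(
  input = "/tmp/mtcars.fwf",
  input.format = fwf.input.format,
  map = function(k, v) keyval(as.numeric(v[2]), 1),  # field 2 holds cyl
  reduce = function(k, vv) keyval(k, sum(unlist(vv))))
```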
To get your data out (say you read an input file, apply column transformations, add columns, and want to output a new csv file), one must define an output format function, just like with `input.format`:
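A sketch of a hand-rolled csv writer, under the same signature assumption as the fixed-width writer above:

```{r}
csv.writer = function(k, v, con) {
  writeLines(paste(c(k, unlist(v)), collapse = ","), con = con)}
```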
And then use that as an argument to `make.output.format`. But why sweat it, since the devs have already done the work?
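A sketch using the built-in format:

```{r}
# the built-in csv output format takes write.table-style options
csv.format = make.output.format("csv", sep = ",")
```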
This time we provide the `output` argument so that the result can be extracted from HDFS (one cannot `hdfs.get` an RHadoop big data object):
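A sketch with hypothetical paths, assuming the `rhdfs` package is loaded and initialized to provide `hdfs.get`:

```{r}
mapreduce(
  input = transformed.data,        # result of your transformation job (hypothetical)
  output = "/tmp/output.csv",
  output.format = csv.format,
  map = function(k, v) keyval(NULL, v))
hdfs.get("/tmp/output.csv", "/tmp/output-local.csv")
```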