diff --git a/rmr/docs/compatibility.Rmd b/rmr/docs/compatibility.Rmd
new file mode 100644
index 00000000..ccf7e8e2
--- /dev/null
+++ b/rmr/docs/compatibility.Rmd
@@ -0,0 +1,14 @@
+# Compatibility testing for rmr 1.3.x
+Please contribute additional reports. To claim compatibility, you need to run `R CMD check path-to-rmr` successfully.
+As with any new release, testing on additional platforms is under way. If you build your own Hadoop, see [Which Hadoop for rmr](https://github.com/RevolutionAnalytics/RHadoop/wiki/Which-Hadoop-for-rmr).
+
+
+
+rmr | Hadoop | R | OS | Compatibility | Reporter
+----|--------|---|----|---------------|----------
+1.3.1 | mr1-cdh4.0.0 | Revolution R Enterprise 6.0 | 64-bit CentOS 5.6 | only x86_64 and mr1 | Revolution Analytics
+1.3.1 | CDH3u4 | Revolution R Enterprise 6.0 | 64-bit CentOS 5.6 | only x86_64 | Revolution Analytics
+1.3.1 | Apache Hadoop 1.0.2 | Revolution R Enterprise 6.0 | 64-bit CentOS 5.6 | only x86_64 | Revolution Analytics
+
+
\ No newline at end of file
diff --git a/rmr/docs/compatibility.html b/rmr/docs/compatibility.html
new file mode 100644
index 00000000..d65ccdfc
--- /dev/null
+++ b/rmr/docs/compatibility.html
@@ -0,0 +1,165 @@
+
+
+
+
+
+
+Compatibility testing for rmr 1.3.x (Current stable)
+
+
+
+
+
+
+
+
+
+
+
+
+Compatibility testing for rmr 1.3.x (Current stable)
+
+Please contribute additional reports. To claim compatibility, you need to run R CMD check path-to-rmr successfully.
+As with any new release, testing on additional platforms is under way. If you build your own Hadoop, see Which Hadoop for rmr.
+
+
+
+rmr | Hadoop | R | OS | Compatibility | Reporter |
+
+
+1.3.1 | mr1-cdh4.0.0 | Revolution R Enterprise 6.0 | 64-bit CentOS 5.6 | only x86_64 and mr1 | Revolution Analytics |
+1.3.1 | CDH3u4 | Revolution R Enterprise 6.0 | 64-bit CentOS 5.6 | only x86_64 | Revolution Analytics |
+1.3.1 | Apache Hadoop 1.0.2 | Revolution R Enterprise 6.0 | 64-bit CentOS 5.6 | only x86_64 | Revolution Analytics |
+
+
+
+
+
+
+
diff --git a/rmr/docs/compatibility.pdf b/rmr/docs/compatibility.pdf
new file mode 100644
index 00000000..27d2760b
Binary files /dev/null and b/rmr/docs/compatibility.pdf differ
diff --git a/rmr/docs/tutorial.Rmd b/rmr/docs/tutorial.Rmd
index 36a521f0..9244c72f 100644
--- a/rmr/docs/tutorial.Rmd
+++ b/rmr/docs/tutorial.Rmd
@@ -1,9 +1,9 @@
-`r read_chunk('../tests/basic-examples.R')`
-`r read_chunk('../tests/wordcount.R')`
-`r read_chunk('../tests/logistic-regression.R')`
-`r read_chunk('../tests/linear-least-squares.R')`
-`r read_chunk('../tests/kmeans.R')`
+`r read_chunk('../pkg/tests/basic-examples.R')`
+`r read_chunk('../pkg/tests/wordcount.R')`
+`r read_chunk('../pkg/tests/logistic-regression.R')`
+`r read_chunk('../pkg/tests/linear-least-squares.R')`
+`r read_chunk('../pkg/tests/kmeans.R')`
`r opts_chunk$set(echo=TRUE, eval=FALSE, cache=FALSE, tidy=FALSE)`
# Mapreduce in R
@@ -34,7 +34,7 @@ function, which we are not using here, is a regular R function with a few constr
1. It returns a key value pair as returned by the helper function `keyval`, which takes any two R objects as arguments; you can also return a list of such objects, or `NULL`.
In this example, we are not using the key at all, only the value, but we still need both to support the general mapreduce case.
-The return value is an object, and you can pass it as input to other jobs or read it into memory (watch out, not good for big data) with `from.dfs`. `from.dfs` is complementary `to.dfs`. It returns a list of key-value pairs, which is the most general data type that mapreduce can handle. If you prefer data frames to lists, you can instruct `from.dfs` to perform a conversion to data frames, which will cover many many important use cases but is not as general as a list of pairs (structured vs. unstructured case). `from.dfs` is useful in defining map reduce algorithms whenever a mapreduce job produces something of reasonable size, like a summary, that can fit in memory and needs to be inspected to decide on the next steps, or to visualize it.
+The return value is an object that you can pass as input to other jobs, or read into memory (watch out, not good for big data) with `from.dfs`. `from.dfs` is complementary to `to.dfs`. It returns a list of key-value pairs, which is the most general data type that mapreduce can handle. `from.dfs` is useful in defining mapreduce algorithms whenever a job produces something of reasonable size, like a summary, that fits in memory and needs to be inspected to decide on the next steps, or to visualize it.
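The round trip described above can be sketched end to end. This is a minimal sketch, assuming a working rmr installation with Hadoop configured; it uses only the `to.dfs`, `mapreduce`, `keyval`, and `from.dfs` calls discussed in this tutorial:

```r
library(rmr)

# Write a small vector to the distributed file system; to.dfs returns
# an object identifying the data, usable as mapreduce input.
small.ints = to.dfs(1:10)

# A map-only job: each input value v becomes the pair (v, v^2).
# keyval takes any two R objects, per the constraints listed above.
squares = mapreduce(
  input = small.ints,
  map = function(k, v) keyval(v, v^2))

# Read the result back into memory as a list of key-value pairs --
# acceptable here only because the output is tiny.
from.dfs(squares)
```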
## My second mapreduce job