
Merge branch 'master' into td-rdd-save

Conflicts:
	core/src/main/scala/spark/SparkContext.scala
2 parents ad842ac + b187675; commit 3f08e1129f092cf80a136a1aa7d0134976c9e7fe, committed by @tdas on Jun 27, 2011
Showing with 3,606 additions and 2,331 deletions.
  1. +0 −48 README
  2. +63 −0 README.md
  3. +0 −1,236 core/src/main/scala/spark/BitTorrentBroadcast.scala
  4. +0 −140 core/src/main/scala/spark/Broadcast.scala
  5. +0 −873 core/src/main/scala/spark/ChainedBroadcast.scala
  6. +2 −0 core/src/main/scala/spark/Executor.scala
  7. +5 −2 core/src/main/scala/spark/LocalFileShuffle.scala
  8. +46 −0 core/src/main/scala/spark/RDD.scala
  9. +0 −15 core/src/main/scala/spark/Shuffle.scala
  10. +2 −0 core/src/main/scala/spark/SparkContext.scala
  11. +38 −6 core/src/main/scala/spark/Utils.scala
  12. +1,355 −0 core/src/main/scala/spark/broadcast/BitTorrentBroadcast.scala
  13. +228 −0 core/src/main/scala/spark/broadcast/Broadcast.scala
  14. +12 −0 core/src/main/scala/spark/broadcast/BroadcastFactory.scala
  15. +792 −0 core/src/main/scala/spark/broadcast/ChainedBroadcast.scala
  16. +4 −2 core/src/main/scala/spark/{ → broadcast}/DfsBroadcast.scala
  17. +41 −0 core/src/main/scala/spark/broadcast/SourceInfo.scala
  18. +807 −0 core/src/main/scala/spark/broadcast/TreeBroadcast.scala
  19. +49 −1 core/src/test/scala/spark/ShuffleSuite.scala
  20. +2 −7 examples/src/main/scala/spark/examples/BroadcastTest.scala
  21. +37 −0 examples/src/main/scala/spark/examples/GroupByTest.scala
  22. +30 −0 examples/src/main/scala/spark/examples/MultiBroadcastTest.scala
  23. +51 −0 examples/src/main/scala/spark/examples/SimpleSkewedGroupByTest.scala
  24. +41 −0 examples/src/main/scala/spark/examples/SkewedGroupByTest.scala
  25. +1 −1 repl/src/main/scala/spark/repl/SparkInterpreterLoop.scala
README
@@ -1,48 +0,0 @@
-ONLINE DOCUMENTATION
-
-You can find the latest Spark documentation, including a programming guide,
-on the project wiki at http://github.com/mesos/spark/wiki. This file only
-contains basic setup instructions.
-
-
-
-BUILDING
-
-Spark requires Scala 2.8. This version has been tested with 2.8.1.final.
-
-The project is built using Simple Build Tool (SBT), which is packaged with it.
-To build Spark and its example programs, run sbt/sbt update compile.
-
-To run Spark, you will need to have Scala's bin in your $PATH, or you
-will need to set the SCALA_HOME environment variable to point to where
-you've installed Scala. Scala must be accessible through one of these
-methods on Mesos slave nodes as well as on the master.
-
-To run one of the examples, use ./run <class> <params>. For example,
-./run spark.examples.SparkLR will run the Logistic Regression example.
-Each of the example programs prints usage help if no params are given.
-
-All of the Spark samples take a <host> parameter that is the Mesos master
-to connect to. This can be a Mesos URL, or "local" to run locally with one
-thread, or "local[N]" to run locally with N threads.
-
-
-
-CONFIGURATION
-
-Spark can be configured through two files: conf/java-opts and conf/spark-env.sh.
-
-In java-opts, you can add flags to be passed to the JVM when running Spark.
-
-In spark-env.sh, you can set any environment variables you wish to be available
-when running Spark programs, such as PATH, SCALA_HOME, etc. There are also
-several Spark-specific variables you can set:
-- SPARK_CLASSPATH: Extra entries to be added to the classpath, separated by ":".
-- SPARK_MEM: Memory for Spark to use, in the format used by java's -Xmx option
- (for example, 200m means 200 MB, 1g means 1 GB, etc).
-- SPARK_LIBRARY_PATH: Extra entries to add to java.library.path for locating
- shared libraries.
-- SPARK_JAVA_OPTS: Extra options to pass to JVM.
-
-Note that spark-env.sh must be a shell script (it must be executable and start
-with a #! header to specify the shell to use).
README.md
@@ -0,0 +1,63 @@
+# Spark
+
+Lightning-Fast Cluster Computing - <http://www.spark-project.org/>
+
+
+## Online Documentation
+
+You can find the latest Spark documentation, including a programming
+guide, on the project wiki at <http://github.com/mesos/spark/wiki>. This
+file only contains basic setup instructions.
+
+
+## Building
+
+Spark requires Scala 2.8. This version has been tested with 2.8.1.final.
+Experimental support for Scala 2.9 is available in the `scala-2.9` branch.
+
+The project is built using Simple Build Tool (SBT), which is packaged with it.
+To build Spark and its example programs, run:
+
+ sbt/sbt update compile
+
+To run Spark, you will need to have Scala's bin in your `PATH`, or you
+will need to set the `SCALA_HOME` environment variable to point to where
+you've installed Scala. Scala must be accessible through one of these
+methods on Mesos slave nodes as well as on the master.
+
+To run one of the examples, use `./run <class> <params>`. For example:
+
+ ./run spark.examples.SparkLR local[2]
+
+will run the Logistic Regression example locally on 2 CPUs.
+
+Each of the example programs prints usage help if no params are given.
+
+All of the Spark samples take a `<host>` parameter that is the Mesos master
+to connect to. This can be a Mesos URL, or "local" to run locally with one
+thread, or "local[N]" to run locally with N threads.
+
+
+## Configuration
+
+Spark can be configured through two files: `conf/java-opts` and
+`conf/spark-env.sh`.
+
+In `java-opts`, you can add flags to be passed to the JVM when running Spark.
+
+In `spark-env.sh`, you can set any environment variables you wish to be available
+when running Spark programs, such as `PATH`, `SCALA_HOME`, etc. There are also
+several Spark-specific variables you can set:
+
+- `SPARK_CLASSPATH`: Extra entries to be added to the classpath, separated by ":".
+
+- `SPARK_MEM`: Memory for Spark to use, in the format used by java's `-Xmx`
+ option (for example, `-Xmx200m` means 200 MB, `-Xmx1g` means 1 GB, etc).
+
+- `SPARK_LIBRARY_PATH`: Extra entries to add to `java.library.path` for locating
+ shared libraries.
+
+- `SPARK_JAVA_OPTS`: Extra options to pass to JVM.
+
+Note that `spark-env.sh` must be a shell script (it must be executable and start
+with a `#!` header to specify the shell to use).
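Taken together, the variables described in the new README's configuration section could be combined into a `conf/spark-env.sh` along the following lines. This is an illustrative sketch only: every path and value below is an assumption for a hypothetical local setup, not a default shipped with Spark.

```shell
#!/usr/bin/env bash
# Illustrative conf/spark-env.sh sketch. All paths and values are
# assumptions for a hypothetical local install, not Spark defaults.

# Where Scala 2.8 is installed (used if scala is not already on PATH).
export SCALA_HOME=/usr/local/scala-2.8.1.final

# Heap size for Spark, in the format of java's -Xmx option (1g = 1 GB).
export SPARK_MEM=1g

# Extra classpath entries, separated by ":" (hypothetical jar path).
export SPARK_CLASSPATH=/opt/myapp/lib/myapp.jar

# Extra entries for java.library.path, e.g. for native shared libraries.
export SPARK_LIBRARY_PATH=/usr/local/lib

# Extra options to pass to the JVM.
export SPARK_JAVA_OPTS="-verbose:gc"
```

As the README notes, the file must be executable and begin with a `#!` header, since Spark runs it as a shell script rather than merely reading it.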