Commit
Merge branch 'master' into mos-bt-dev
Mosharaf Chowdhury committed Feb 9, 2011
2 parents 591749f + 62f1c6f commit b70d7e5
Showing 268 changed files with 200 additions and 117 deletions.
20 changes: 18 additions & 2 deletions .gitignore
@@ -1,10 +1,26 @@
*~
*.swp
build
work
*.iml
.idea/
/build/
work/
out/
.DS_Store
third_party/libmesos.so
third_party/libmesos.dylib
conf/java-opts
conf/spark-env.sh
conf/log4j.properties
target/
reports/
.project
.classpath
.scala_dependencies
lib_managed/
src_managed/
project/boot/
project/plugins/project/build.properties
project/build/target/
project/plugins/target/
project/plugins/lib_managed/
project/plugins/src_managed/
73 changes: 0 additions & 73 deletions Makefile

This file was deleted.

28 changes: 18 additions & 10 deletions README
@@ -1,24 +1,32 @@
ONLINE DOCUMENTATION

You can find the latest Spark documentation, including a programming guide,
on the project wiki at http://github.com/mesos/spark/wiki. This file only
contains basic setup instructions.



BUILDING

Spark requires Scala 2.8. This version has been tested with 2.8.0.final.
Spark requires Scala 2.8. This version has been tested with 2.8.1.final.

To build and run Spark, you will need to have Scala's bin in your $PATH,
or you will need to set the SCALA_HOME environment variable to point
to where you've installed Scala. Scala must be accessible through one
of these methods on Mesos slave nodes as well as on the master.
The project is built using Simple Build Tool (SBT), which is packaged with it.
To build Spark and its example programs, run sbt/sbt compile.

To build Spark and the example programs, run make.
To run Spark, you will need to have Scala's bin in your $PATH, or you
will need to set the SCALA_HOME environment variable to point to where
you've installed Scala. Scala must be accessible through one of these
methods on Mesos slave nodes as well as on the master.

To run one of the examples, use ./run <class> <params>. For example,
./run SparkLR will run the Logistic Regression example. Each of the
example programs prints usage help if no params are given.
./run spark.examples.SparkLR will run the Logistic Regression example.
Each of the example programs prints usage help if no params are given.

All of the Spark samples take a <host> parameter that is the Mesos master
to connect to. This can be a Mesos URL, or "local" to run locally with one
thread, or "local[N]" to run locally with N threads.

Tip: If you are building Spark and examples repeatedly, export USE_FSC=1
to have the Makefile use the fsc compiler daemon instead of scalac.


CONFIGURATION

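The <host> argument described in the README above is what each example passes to SparkContext as its master string. A minimal sketch, assuming the two-argument SparkContext(master, jobName) constructor and the parallelize/filter/count operations available in Spark at this point; the class and its job are hypothetical:

import spark.SparkContext

object HostParamExample {
  def main(args: Array[String]) {
    // The first argument is the <host> described in the README:
    // "local" for one thread, "local[N]" for N threads, or a Mesos master.
    val master = if (args.length > 0) args(0) else "local[2]"
    val sc = new SparkContext(master, "HostParamExample")

    // A tiny job so the example does something observable.
    val evens = sc.parallelize(1 to 1000).filter(_ % 2 == 0).count()
    println("Even numbers: " + evens)
  }
}

If compiled onto Spark's classpath, this sketch could presumably be launched the same way as the bundled examples, e.g. ./run HostParamExample local[4].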
11 changes: 0 additions & 11 deletions alltests

This file was deleted.

59 files renamed without changes.
Binary file added core/lib/jline.jar
46 files renamed without changes.
@@ -359,7 +359,7 @@ extends RDD[Pair[T, U]](sc) {
: RDD[(K, C)] =
{
val shufClass = Class.forName(System.getProperty(
"spark.shuffle.class", "spark.DfsShuffle"))
"spark.shuffle.class", "spark.LocalFileShuffle"))
val shuf = shufClass.newInstance().asInstanceOf[Shuffle[K, V, C]]
shuf.compute(self, numSplits, createCombiner, mergeValue, mergeCombiners)
}
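The one-line change above only swaps the default shuffle implementation from spark.DfsShuffle to spark.LocalFileShuffle; the spark.shuffle.class system property still selects the class reflectively at run time. A minimal, self-contained sketch of that property-driven lookup pattern follows; the Greeter names and the demo.greeter.class property are hypothetical stand-ins for the Shuffle hierarchy:

trait Greeter {
  def greet(name: String): String
}

class EnglishGreeter extends Greeter {
  def greet(name: String) = "Hello, " + name
}

object PluginLookupExample {
  def main(args: Array[String]) {
    // Resolve the implementation class named by a system property,
    // falling back to a default, then instantiate it reflectively.
    val cls = Class.forName(System.getProperty("demo.greeter.class", "EnglishGreeter"))
    val greeter = cls.newInstance().asInstanceOf[Greeter]
    println(greeter.greet("Spark"))
  }
}

Passing -Ddemo.greeter.class=SomeOtherGreeter (any class implementing Greeter) switches implementations without recompiling, which is how spark.shuffle.class lets a job choose its shuffle strategy.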
14 files renamed without changes.
@@ -310,7 +310,7 @@ class SparkCompletion(val repl: SparkInterpreter) extends SparkCompletionOutput
else xs.reduceLeft(_ zip _ takeWhile (x => x._1 == x._2) map (_._1) mkString)

// This is jline's entry point for completion.
override def complete(_buf: String, cursor: Int, candidates: JList[String]): Int = {
override def complete(_buf: String, cursor: Int, candidates: java.util.List[java.lang.String]): Int = {
val buf = onull(_buf)
verbosity = if (isConsecutiveTabs(buf, cursor)) verbosity + 1 else 0
DBG("complete(%s, %d) last = (%s, %d), verbosity: %s".format(buf, cursor, lastBuf, lastCursor, verbosity))
@@ -321,7 +321,7 @@ class SparkCompletion(val repl: SparkInterpreter) extends SparkCompletionOutput
case Nil => None
case xs =>
// modify in place and return the position
xs foreach (candidates add _)
xs.foreach(x => candidates.add(x))

// update the last buffer unless this is an alternatives list
if (xs contains "") Some(p.cursor)
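The two edits above spell out jline's callback types as java.util.List[java.lang.String] and replace the point-free candidates add _ call with an explicit closure. A small hypothetical sketch of the same pattern, copying Scala-side completion results into a mutable Java list:

import java.util.{ArrayList, List => JList}

object CandidateListExample {
  // Hypothetical helper: append Scala results to a Java list, the way
  // SparkCompletion.complete fills jline's candidates parameter.
  def fill(results: Seq[String], candidates: JList[String]) {
    results.foreach(x => candidates.add(x))
  }

  def main(args: Array[String]) {
    val candidates = new ArrayList[String]()
    fill(Seq("sparkContext", "sparkHome"), candidates)
    println(candidates) // prints [sparkContext, sparkHome]
  }
}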
File renamed without changes.
@@ -129,7 +129,8 @@ extends InterpreterControl {
settings.classpath append addedClasspath

interpreter = new SparkInterpreter(settings, out) {
override protected def parentClassLoader = classOf[SparkInterpreterLoop].getClassLoader
override protected def parentClassLoader =
classOf[SparkInterpreterLoop].getClassLoader
}
interpreter.setContextClassLoader()
// interpreter.quietBind("settings", "spark.repl.SparkInterpreterSettings", interpreter.isettings)
2 files renamed without changes.
@@ -1,16 +1,31 @@
package spark.repl

import java.io._
import java.net.URLClassLoader

import scala.collection.mutable.ArrayBuffer
import scala.collection.JavaConversions._

import org.scalatest.FunSuite

class ReplSuite extends FunSuite {
def runInterpreter(master: String, input: String): String = {
val in = new BufferedReader(new StringReader(input + "\n"))
val out = new StringWriter()
val cl = getClass.getClassLoader
var paths = new ArrayBuffer[String]
if (cl.isInstanceOf[URLClassLoader]) {
val urlLoader = cl.asInstanceOf[URLClassLoader]
for (url <- urlLoader.getURLs) {
if (url.getProtocol == "file") {
paths += url.getFile
}
}
}
val interp = new SparkInterpreterLoop(in, new PrintWriter(out), master)
spark.repl.Main.interp = interp
interp.main(new Array[String](0))
val separator = System.getProperty("path.separator")
interp.main(Array("-classpath", paths.mkString(separator)))
spark.repl.Main.interp = null
return out.toString
}
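The new ReplSuite code above harvests the test JVM's classpath from its URLClassLoader and hands it to the interpreter via -classpath, presumably so classes on the test classpath are visible inside the REPL under test. A standalone sketch of that harvesting pattern, with hypothetical names; class loaders that are not URLClassLoaders simply yield an empty list:

import java.net.URLClassLoader
import scala.collection.mutable.ArrayBuffer

object ClasspathFromLoaderExample {
  def main(args: Array[String]) {
    val paths = new ArrayBuffer[String]
    getClass.getClassLoader match {
      // Collect every file:// URL visible to the current class loader.
      case loader: URLClassLoader =>
        for (url <- loader.getURLs if url.getProtocol == "file") {
          paths += url.getFile
        }
      case _ => // not a URLClassLoader; leave the list empty
    }
    // Join with the platform-specific separator (":" on Unix, ";" on Windows).
    println(paths.mkString(System.getProperty("path.separator")))
  }
}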
@@ -1,3 +1,5 @@
package spark.examples

import spark.SparkContext

object BroadcastTest {
@@ -1,3 +1,5 @@
package spark.examples

import spark._

object CpuHog {
@@ -1,3 +1,5 @@
package spark.examples

import spark._

object HdfsTest {
@@ -1,3 +1,5 @@
package spark.examples

import java.util.Random
import scala.math.sqrt
import cern.jet.math._
@@ -1,3 +1,5 @@
package spark.examples

import java.util.Random
import Vector._

@@ -1,3 +1,5 @@
package spark.examples

import java.util.Random
import Vector._

@@ -1,3 +1,5 @@
package spark.examples

import scala.math.random
import spark._
import SparkContext._
@@ -1,3 +1,5 @@
package spark.examples

import spark._

object SleepJob {
@@ -1,3 +1,5 @@
package spark.examples

import java.io.Serializable
import java.util.Random
import scala.math.sqrt
@@ -1,3 +1,5 @@
package spark.examples

import java.util.Random
import scala.math.exp
import Vector._
@@ -1,3 +1,5 @@
package spark.examples

import java.util.Random
import scala.math.exp
import Vector._
@@ -1,3 +1,5 @@
package spark.examples

import scala.math.random
import spark._
import SparkContext._
@@ -1,3 +1,5 @@
package spark.examples

@serializable class Vector(val elements: Array[Double]) {
def length = elements.length

8 changes: 8 additions & 0 deletions project/build.properties
@@ -0,0 +1,8 @@
#Project properties
#Sat Nov 13 21:57:32 PST 2010
project.organization=UC Berkeley
project.name=Spark
sbt.version=0.7.5.RC0
project.version=0.0.0
build.scala.versions=2.8.1
project.initialize=false
76 changes: 76 additions & 0 deletions project/build/SparkProject.scala
@@ -0,0 +1,76 @@
import sbt._
import sbt.Process._

import assembly._

import de.element34.sbteclipsify._


class SparkProject(info: ProjectInfo)
extends ParentProject(info) with IdeaProject
{
lazy val core = project("core", "Spark Core", new CoreProject(_))

lazy val examples =
project("examples", "Spark Examples", new ExamplesProject(_), core)

class CoreProject(info: ProjectInfo)
extends DefaultProject(info) with Eclipsify with IdeaProject with DepJar with XmlTestReport
{}

class ExamplesProject(info: ProjectInfo)
extends DefaultProject(info) with Eclipsify with IdeaProject
{}
}


// Project mixin for an XML-based ScalaTest report. Unfortunately
// there is currently no way to call this directly from SBT without
// executing a subprocess.
trait XmlTestReport extends BasicScalaProject {
def testReportDir = outputPath / "test-report"

lazy val testReport = task {
log.info("Creating " + testReportDir + "...")
if (!testReportDir.exists) {
testReportDir.asFile.mkdirs()
}
log.info("Executing org.scalatest.tools.Runner...")
val command = ("scala -classpath " + testClasspath.absString +
" org.scalatest.tools.Runner -o " +
" -u " + testReportDir.absolutePath +
" -p " + (outputPath / "test-classes").absolutePath)
Process(command, path("."), "JAVA_OPTS" -> "-Xmx500m") !

None
}.dependsOn(compile, testCompile).describedAs("Generate XML test report.")
}


// Project mixin for creating a JAR with a project's dependencies. This is based
// on the AssemblyBuilder plugin, but because this plugin attempts to package Scala
// and our project too, we leave that out using our own exclude filter (depJarExclude).
trait DepJar extends AssemblyBuilder {
def depJarExclude(base: PathFinder) = {
(base / "scala" ** "*") +++ // exclude scala library
(base / "spark" ** "*") +++ // exclude Spark classes
((base / "META-INF" ** "*") --- // generally ignore the hell out of META-INF
(base / "META-INF" / "services" ** "*") --- // include all service providers
(base / "META-INF" / "maven" ** "*")) // include all Maven POMs and such
}

def depJarTempDir = outputPath / "dep-classes"

def depJarOutputPath =
outputPath / (name.toLowerCase.replace(" ", "-") + "-dep-" + version.toString + ".jar")

lazy val depJar = {
packageTask(
Path.lazyPathFinder(assemblyPaths(depJarTempDir,
assemblyClasspath,
assemblyExtraJars,
depJarExclude)),
depJarOutputPath,
packageOptions)
}.dependsOn(compile).describedAs("Bundle project's dependencies into a JAR.")
}
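Two hedged usage notes on the mixins above. First, as the XmlTestReport comment says, the report is produced by shelling out to org.scalatest.tools.Runner; the following standalone sketch runs an equivalent command using scala.sys.process (available from Scala 2.9 on) rather than SBT 0.7's bundled Process, and every path in it is hypothetical:

import java.io.File
import scala.sys.process.Process

object TestReportExample {
  def main(args: Array[String]) {
    // Hypothetical classpath and output locations.
    val classpath = "core/target/classes" + File.pathSeparator + "lib/scalatest.jar"
    val command = Seq(
      "scala", "-classpath", classpath,
      "org.scalatest.tools.Runner",
      "-o",                             // standard-output reporter
      "-u", "core/target/test-report",  // directory for the XML report
      "-p", "core/target/test-classes") // runpath holding compiled tests
    // Run with a larger heap, mirroring the JAVA_OPTS setting in testReport.
    val exit = Process(command, new File("."), "JAVA_OPTS" -> "-Xmx500m").!
    println("Runner exited with " + exit)
  }
}

Second, depJarOutputPath derives the bundle's file name from the project name and version; with the "Spark Core" name defined above and the 0.0.0 version from project/build.properties, the naming works out as in this small check:

object DepJarNameExample {
  def main(args: Array[String]) {
    // Mirrors the expression in depJarOutputPath above.
    val name = "Spark Core"
    val version = "0.0.0"
    println(name.toLowerCase.replace(" ", "-") + "-dep-" + version + ".jar")
    // prints spark-core-dep-0.0.0.jar
  }
}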
