Spark jobserver giving error while running LinearRegressionWithSGD #341

Closed
nareshbab opened this issue Dec 9, 2015 · 8 comments

@nareshbab

Hi,
First, here are the contents of my project files.
basisStatistics/project/plugins.sbt

addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.13.0")

basisStatistics/src/main/scala/linearRegression.scala

import _root_.spark.jobserver.{SparkJobValid, SparkJobInvalid, SparkJobValidation, SparkJob}
import org.apache.spark.{SparkContext, SparkConf}
import com.typesafe.config.{Config, ConfigFactory}
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.{LabeledPoint, LinearRegressionModel, LinearRegressionWithSGD}
import scala.util.Try

// all Jobserver jobs must implement the SparkJob trait
object linearRegression extends SparkJob {

  def main(args: Array[String]) {
    val conf = new SparkConf().setMaster("local[4]").setAppName("linearRegression")
    val sc = new SparkContext(conf)
    val config = ConfigFactory.parseString("")
    val results = runJob(sc, config)
    println("Result is " + results)
  }
  //validate method can be used to check that required information is supplied in the config, context is of the right type, etc
  //to keep things simple, we will just return valid
  override def validate(sc: SparkContext, config: Config): SparkJobValidation = {
    SparkJobValid
  }

  //runJob is where the actual work of the job goes
  override def runJob(sc: SparkContext, config: Config): Any = {
    // Load and parse the data
    val data = sc.textFile("/home/vagrant/spark/data/mllib/ridge-data/lpsa.data")
    val parsedData = data.map { line =>
      val parts = line.split(",")
      LabeledPoint(parts(0).toDouble, Vectors.dense(parts(1).split(" ").map(_.toDouble)))
    }.cache()
    val numIterations = 10
    val model = LinearRegressionWithSGD.train(parsedData, numIterations)

    val valuesAndPreds = parsedData.map { point =>
      val predictions = model.predict(point.features)
      (point.label, predictions)
    }

    valuesAndPreds.map { case (v, p) => math.pow(v - p, 2) }.mean()

  }

}

basisStatistics/build.sbt

name := "basisStatistics"

version := "1.0"

scalaVersion := "2.11.7"

resolvers += "Job Server Bintray" at "https://dl.bintray.com/spark-jobserver/maven"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.5.2",
  "org.apache.spark" %% "spark-mllib" % "1.5.2",
  "spark.jobserver" %% "job-server-api" % "0.6.1" % "provided"
  //"org.scalanlp" % "breeze_2.10" % "0.11.2"
)

assemblyJarName in assembly := "linearRegressionFatJar.jar"

assemblyMergeStrategy in assembly := {
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case x => MergeStrategy.first
}

I created a fat JAR by running sbt assembly.
The logs from creating the fat jar are here:
https://gist.github.com/nareshbab/03cee05520e28b0140e6

Logs for uploading jar to jobserver are as below:

curl --data-binary @/home/vagrant/basisStatistics/target/scala-2.11/linearRegressionFatJar.jar localhost:8090/jars/fatJar3
OK
vagrant@precise64:~/basisStatistics$ curl -d "" 'localhost:8090/jobs?appName=fatJar3&classPath=linearRegression'
{
  "status": "STARTED",
  "result": {
    "jobId": "ab8511b7-f3ab-4513-8969-191578f0a422",
    "context": "13556ed6-linearRegression"
  }
}

But I always get an error while running the model:

job-server[ERROR] Exception in thread "pool-27-thread-1" java.lang.NoSuchMethodError: scala.Predef$.ArrowAssoc(Ljava/lang/Object;)Ljava/lang/Object;
job-server[ERROR]       at breeze.generic.MMRegistry2$class.register(Multimethod.scala:188)
job-server[ERROR]       at breeze.linalg.VectorOps$$anon$1.breeze$linalg$operators$BinaryRegistry$$super$register(Vector.scala:307)
job-server[ERROR]       at breeze.linalg.operators.BinaryRegistry$class.register(BinaryOp.scala:87)
job-server[ERROR]       at breeze.linalg.VectorOps$$anon$1.register(Vector.scala:307)
job-server[ERROR]       at breeze.linalg.operators.DenseVectorOps$$anon$1.<init>(DenseVectorOps.scala:38)
job-server[ERROR]       at breeze.linalg.operators.DenseVectorOps$class.$init$(DenseVectorOps.scala:22)
job-server[ERROR]       at breeze.linalg.DenseVector$.<init>(DenseVector.scala:226)
job-server[ERROR]       at breeze.linalg.DenseVector$.<clinit>(DenseVector.scala)
job-server[ERROR]       at breeze.linalg.DenseVector.<init>(DenseVector.scala:64)
job-server[ERROR]       at breeze.linalg.DenseVector$mcD$sp.<init>(DenseVector.scala:50)
job-server[ERROR]       at breeze.linalg.DenseVector$mcD$sp.<init>(DenseVector.scala:55)
job-server[ERROR]       at org.apache.spark.mllib.linalg.DenseVector.toBreeze(Vectors.scala:557)
job-server[ERROR]       at org.apache.spark.mllib.optimization.SimpleUpdater.compute(Updater.scala:78)
job-server[ERROR]       at org.apache.spark.mllib.optimization.GradientDescent$.runMiniBatchSGD(GradientDescent.scala:215)
job-server[ERROR]       at org.apache.spark.mllib.optimization.GradientDescent.optimize(GradientDescent.scala:126)
job-server[ERROR]       at org.apache.spark.mllib.regression.GeneralizedLinearAlgorithm.run(GeneralizedLinearAlgorithm.scala:308)
job-server[ERROR]       at org.apache.spark.mllib.regression.GeneralizedLinearAlgorithm.run(GeneralizedLinearAlgorithm.scala:229)
job-server[ERROR]       at org.apache.spark.mllib.regression.LinearRegressionWithSGD$.train(LinearRegression.scala:166)
job-server[ERROR]       at org.apache.spark.mllib.regression.LinearRegressionWithSGD$.train(LinearRegression.scala:204)
job-server[ERROR]       at linearRegression$.runJob(linearRegression.scala:37)
job-server[ERROR]       at linearRegression$.runJob(linearRegression.scala:9)
job-server[ERROR]       at spark.jobserver.JobManagerActor$$anonfun$spark$jobserver$JobManagerActor$$getJobFuture$4.apply(JobManagerActor.scala:254)
job-server[ERROR]       at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
job-server[ERROR]       at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
job-server[ERROR]       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
job-server[ERROR]       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
job-server[ERROR]       at java.lang.Thread.run(Thread.java:745)

Can someone help me figure out the issue here?
Thanks in advance.

@yanji84

yanji84 commented Dec 13, 2015

It looks like you are either missing Breeze as a dependency or have a version mismatch. Do you need to pack spark-core and spark-mllib into the fat jar? I would suggest marking them as "provided".
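
Something like this in your build.sbt, keeping your versions — a minimal sketch, not tested against your setup:

libraryDependencies ++= Seq(
  // Spark is already on the job server's runtime classpath,
  // so it should not be packed into the fat jar
  "org.apache.spark" %% "spark-core" % "1.5.2" % "provided",
  "org.apache.spark" %% "spark-mllib" % "1.5.2" % "provided",
  "spark.jobserver" %% "job-server-api" % "0.6.1" % "provided"
)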

@velvia
Contributor

velvia commented Dec 13, 2015

I see you compiled your project as 2.11. Did you compile and deploy job server as 2.11? Does your Spark build also match at 2.11?

I would also mark the Spark dependencies as "provided", as Ji Yan suggested.
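
One quick way to check which Scala version your Spark runtime was built with (a sketch — the spark-shell banner also shows it; note that the pre-built Spark 1.5.x downloads are Scala 2.10 unless you built Spark for 2.11 yourself):

scala> scala.util.Properties.versionString
res0: String = version 2.10.4   // a 2.10.x result here means a 2.11-built job jar will not link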

@nareship

@velvia @yanji84 Thanks for the quick turnaround.
I will make the relevant changes and let you know.

Thanks

@nareshbab
Author

@velvia & @yanji84 The error still persists after the changes; the weird thing is that it isn't consistent. I have now compiled both my project and jobserver against Scala 2.10, but I still get the same kind of error, just with a different dependency.

package Classification

import com.typesafe.config.{Config, ConfigFactory}
import org.apache.spark.SparkContext
import org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.evaluation.MulticlassMetrics
import org.apache.spark.mllib.util.MLUtils
import spark.jobserver.{SparkJob, SparkJobValid, SparkJobValidation}

/**
  * Created by inno on 12/14/2015.
  */
object logisticRegression extends SparkJob {

  def main(args: Array[String]) {
    val sc = new SparkContext("local[*]", "logisticRegression")
    val config = ConfigFactory.parseString("")
    val results = runJob(sc, config)
    println("Result is" + results)
  }

  override def validate(sc: SparkContext, config: Config): SparkJobValidation = {
    SparkJobValid
  }

  override def runJob(sc: SparkContext, config: Config): Any = {
    val data = MLUtils.loadLibSVMFile(sc, "/home/ubuntu/spark/data/mllib/sample_libsvm_data.txt")

    // Split data into training (60%) and test (40%).
    val splits = data.randomSplit(Array(0.6, 0.4), seed = 11L)
    val training = splits(0).cache()
    val test = splits(1)

    // Run training algorithm to build the model
    val model = new LogisticRegressionWithLBFGS()
      .setNumClasses(10)
      .run(training)

    // Compute raw scores on the test set.
    val predictionAndLabels = test.map { case LabeledPoint(label, features) =>
      val prediction = model.predict(features)
      (prediction, label)
    }

    // Get evaluation metrics.
    val metrics = new MulticlassMetrics(predictionAndLabels)
    metrics.precision
  }
}

name := "Classification"

version := "1.0"

scalaVersion := "2.10.5"

resolvers += "Job Server Bintray" at "https://dl.bintray.com/spark-jobserver/maven"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.5.2" % "provided",
  "org.apache.spark" %% "spark-mllib" % "1.5.2" % "provided",
  "spark.jobserver" %% "job-server-api" % "0.6.1" % "provided"
  //"org.scalanlp" % "breeze_2.10" % "0.11.2"
)

Logs:

job-server-extras[ERROR] log4j:WARN No appenders could be found for logger (spark.jobserver.JobServer$).
job-server-extras[ERROR] log4j:WARN Please initialize the log4j system properly.
job-server-extras[ERROR] log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
job-server[ERROR] Exception in thread "pool-1-thread-1" java.lang.NoClassDefFoundError: org/apache/spark/mllib/util/MLUtils$
job-server[ERROR]       at Classification.logisticRegression$.runJob(logisticRegression.scala:28)
job-server[ERROR]       at Classification.logisticRegression$.runJob(logisticRegression.scala:14)
job-server[ERROR]       at spark.jobserver.JobManagerActor$$anonfun$spark$jobserver$JobManagerActor$$getJobFuture$4.apply(JobManagerActor.scala:254)
job-server[ERROR]       at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
job-server[ERROR]       at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
job-server[ERROR]       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
job-server[ERROR]       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
job-server[ERROR]       at java.lang.Thread.run(Thread.java:745)
job-server[ERROR] Caused by: java.lang.ClassNotFoundException: org.apache.spark.mllib.util.MLUtils$
job-server[ERROR]       at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
job-server[ERROR]       at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
job-server[ERROR]       at java.security.AccessController.doPrivileged(Native Method)
job-server[ERROR]       at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
job-server[ERROR]       at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
job-server[ERROR]       at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
job-server[ERROR]       ... 8 more

I have tested this with various dependencies that are already available in Spark. The error always shows up with the LabeledPoint class.

Thanks

@velvia
Contributor

velvia commented Dec 15, 2015

What is your Spark distribution and how are you running job server?
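
Also, as a quick sanity check (a sketch — the exact assembly jar name depends on your Spark build):

$ jar tf $SPARK_HOME/lib/spark-assembly-*.jar | grep 'mllib/util/MLUtils'
# no output would mean mllib is not on the classpath the server sees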

@nareshbab
Author

@velvia my Spark distribution is 1.5.2.
The steps I use to build spark-jobserver are as below:

spark-jobserver$ sbt
>re-start

@velvia
Contributor

velvia commented Dec 15, 2015

Naresh,

You need to do a local deploy. If you run reStart, then SJS uses job server's own dependencies, which don't include mllib at the moment. Alternatively, you can modify package/Dependencies.scala and add the mllib dependency there.
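
For reference, the local deploy route looks roughly like this (a sketch based on the scripts shipped in the spark-jobserver repo — check its README for your version, as names may differ):

spark-jobserver$ cp config/local.sh.template config/local.sh   # set SPARK_HOME, deploy dir, etc.
spark-jobserver$ bin/server_deploy.sh local                    # assembles the server and copies it to the deploy dir
deploy-dir$ ./server_start.sh                                  # runs against your real Spark distribution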

-Evan

On Dec 15, 2015, at 1:03 AM, Naresh Kumar notifications@github.com wrote:

@velvia https://github.com/velvia my spark distribution is 1.5.2
Steps to build sparkJobserver are as below

spark-jobserver$ sbt

re-start

Reply to this email directly or view it on GitHub #341 (comment).

@nareshbab
Author

@velvia Thanks a ton! It's working. We can close this thread now.

hntd187 closed this as completed Jun 3, 2016