mllib test code - RAM / AUC improvements needed #5

mengxr opened this Issue May 5, 2015 · 6 comments



mengxr commented May 5, 2015

@szilard For MLlib, you should repartition the data to match the number of cores. For example, try train.repartition(32).cache(). Otherwise, you may not use all the cores. Also, if the data is sparse, you should create sparse vectors instead of dense vectors.
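The sparse-vector advice above is language-neutral: a sparse vector stores only the indices and values of nonzero entries, which is what MLlib's `Vectors.sparse` does under the hood. A minimal Python sketch of the idea (hypothetical `to_sparse` helper; the thread's actual code is Scala/Spark):

```python
def to_sparse(dense):
    """Convert a dense list of floats to a (size, indices, values) triple,
    keeping only the nonzero entries -- the same layout MLlib sparse vectors use."""
    indices = [i for i, v in enumerate(dense) if v != 0.0]
    values = [dense[i] for i in indices]
    return len(dense), indices, values

dense = [0.0, 0.0, 1.0, 0.0, 3.5, 0.0, 0.0, 0.0]
print(to_sparse(dense))  # (8, [2, 4], [1.0, 3.5])
```

For mostly-zero rows (as with 1-hot encoded categoricals), storing two short lists instead of the full array is where the memory savings come from.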

szilard commented May 5, 2015

It was using all 32 cores (I think it made 41 partitions for n=1M). How about we do some work together to make sure I'm not missing obvious things?

Here is a script to generate the n=1M dataset and also the 1-hot encoding and the categoricalFeaturesInfo style-encoding:

Let's try to train a random forest with 10 trees first on r3.8xlarge (32 cores, 250G RAM), then maybe on a cluster. I'm starting with this:

mengxr commented May 5, 2015

41 partitions on 32 cores is not the best setting because you are measuring the wall-clock time. For sparse input, you need to create an RDD of sparse vectors (by removing all the zeros) instead of dense ones to take advantage of sparsity.
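The reason 41 partitions on 32 cores hurts wall-clock time: tasks run in "waves" of at most one task per core, so a 41-partition stage needs two waves, and the second wave keeps only 9 of the 32 cores busy. A rough sketch of that arithmetic (hypothetical helper names, not Spark API):

```python
import math

def waves(partitions, cores):
    # Number of scheduling waves: each wave runs at most one task per core.
    return math.ceil(partitions / cores)

def last_wave_tasks(partitions, cores):
    # How many cores are actually busy during the final wave.
    rem = partitions % cores
    return cores if rem == 0 else rem

print(waves(41, 32), last_wave_tasks(41, 32))  # 2 9  -> second wave wastes 23 cores
print(waves(32, 32), last_wave_tasks(32, 32))  # 1 32 -> one fully utilized wave
```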

We have the official performance tests implemented elsewhere. It would be great if you could add a test there with generated data (so we can change the scaling factors easily).

szilard commented May 7, 2015

Thanks. Using sparse vectors and 32 partitions made things a bit faster.

However, memory footprint is still pretty large and the AUC is lower than with any of the other methods (see writeup in the main README).

At this stage I think it's easier to experiment interactively on the command line and figure out what's going on rather than spending time writing a rigorous performance test in Scala (not my strength anyway). Any chance you can take a look?

@szilard szilard changed the title from mllib test code to mllib test code - RAM / AUC improvements needed May 17, 2015
mengxr commented Jun 28, 2015

@szilard I have been busy with the 1.4 release. I understand that it is easy to compare implementations with their default settings, but the implementations were made under different assumptions. You should understand how to tune each implementation before publishing benchmark results.

For example, I tried your dataset using logistic regression on an 8x 4-core EC2 cluster, which may match your setting. Here is my code:

import org.apache.spark.sql.DataFrame
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
import sqlContext.implicits._  // in spark-shell; needed for .toDF

def load(filename: String): DataFrame = {
  sc.textFile(filename)  // reconstructed: this line was missing from the original comment
    .map { line =>
      val vv = line.split(',').map(_.toDouble)
      val label = vv(0)
      // store only nonzero entries to take advantage of sparsity
      val features = Vectors.dense(vv.slice(1, vv.length)).toSparse
      (label, features)
    }.toDF("label", "features")
}

val train = load("spark-train-1m.csv").repartition(32).cache()
val test = load("spark-test-1m.csv").repartition(32).cache()

val lr = new LogisticRegression()
val model = lr.fit(train)  // reconstructed: the right-hand side was truncated in the original comment

val evaluator = new BinaryClassificationEvaluator()

train uses 216.6MB of memory, while test uses 21.6MB (see attached screenshot). Training took less than 3 seconds to finish. The area under ROC on train is 0.7139922364513273 and on test is 0.7088271083682873.

[screenshot, 2015-06-28: cached RDD memory sizes for train and test]
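For reference, an AUC like the ones quoted above can be computed from model scores with the rank-sum (Mann-Whitney) formulation: AUC is the probability that a randomly chosen positive example is scored above a randomly chosen negative one. A minimal sketch (hypothetical `auc` helper; ignores score ties, unlike production evaluators):

```python
def auc(labels, scores):
    """Rank-sum AUC for binary labels (1 = positive, 0 = negative).
    Naive: does not average ranks over tied scores."""
    pairs = sorted(zip(scores, labels))          # ascending by score
    pos = sum(1 for l in labels if l == 1)
    neg = len(labels) - pos
    # sum of 1-based ranks of the positive examples
    rank_sum = sum(rank for rank, (s, l) in enumerate(pairs, start=1) if l == 1)
    return (rank_sum - pos * (pos + 1) / 2) / (pos * neg)

print(auc([1, 0, 1, 0], [0.9, 0.2, 0.6, 0.4]))  # 1.0: every positive outscores every negative
print(auc([1, 0], [0.3, 0.7]))                   # 0.0: the positive is ranked below the negative
```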

Please check your test code and update the result, especially on the memory requirement.

szilard commented Jun 30, 2015

Thanks @mengxr. This GitHub issue was related mostly to random forests, while your help is for logistic regression, so I'm splitting the issue into two: the random forest issue continues in #16, and logistic regression in #17.

szilard commented Jul 3, 2015

Closing this after splitting it into the two new issues above.

@szilard szilard closed this Jul 3, 2015