An implementation of DBSCAN running on top of Apache Spark

DBSCAN on Spark


This is an implementation of the DBSCAN clustering algorithm on top of Apache Spark. It is loosely based on the paper by He, Yaobin, et al., "MR-DBSCAN: a scalable MapReduce-based DBSCAN algorithm for heavily skewed data".

I have also created a visual guide that explains how the algorithm works.
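Conceptually, DBSCAN groups together points that are densely packed (a core point has at least minPoints neighbors within distance eps, and clusters grow outward from core points) while isolated points are marked as noise. As a rough, self-contained illustration of that idea only (this is a naive single-machine sketch, not the distributed implementation in this library; the object and method names are hypothetical):

```scala
// Naive single-machine DBSCAN sketch, for illustration only.
// Not the distributed algorithm implemented by this library.
object SimpleDBSCAN {
  val Noise = -1      // label for points with too few neighbors
  val Unvisited = 0   // initial label

  // Returns one label per point: a 1-based cluster id, or Noise (-1).
  def cluster(points: Array[(Double, Double)], eps: Double, minPoints: Int): Array[Int] = {
    val labels = Array.fill(points.length)(Unvisited)
    var clusterId = 0

    // indices of all points within distance eps of point i (includes i itself)
    def neighbors(i: Int): Seq[Int] =
      points.indices.filter { j =>
        val dx = points(i)._1 - points(j)._1
        val dy = points(i)._2 - points(j)._2
        math.sqrt(dx * dx + dy * dy) <= eps
      }

    for (i <- points.indices if labels(i) == Unvisited) {
      val seeds = neighbors(i)
      if (seeds.length < minPoints) {
        labels(i) = Noise // not a core point; may be relabeled as a border point later
      } else {
        clusterId += 1
        labels(i) = clusterId
        // grow the cluster by expanding the frontier from core points
        var frontier = seeds.toList
        while (frontier.nonEmpty) {
          val j = frontier.head
          frontier = frontier.tail
          if (labels(j) == Noise) labels(j) = clusterId // border point of this cluster
          if (labels(j) == Unvisited) {
            labels(j) = clusterId
            val jn = neighbors(j)
            if (jn.length >= minPoints) frontier = jn.toList ++ frontier
          }
        }
      }
    }
    labels
  }
}
```

Running this on two tight groups of points plus one far-away outlier assigns each group its own cluster id and labels the outlier as noise. The O(n^2) neighbor search is exactly what the distributed, partition-based approach described in MR-DBSCAN is designed to avoid.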

Getting DBSCAN on Spark

DBSCAN on Spark is published to bintray. If you use SBT, you can include DBSCAN on Spark in your application by adding the following to your build.sbt:

resolvers += "bintray/irvingc" at ""

libraryDependencies += "com.irvingc.spark" %% "dbscan" % "0.1.0"

If you use Maven or Ivy you can use a similar resolver, but you just need to account for the Scala version (the example is for Scala 2.10):


<repository>
    <id>dbscan-on-spark-repo</id> <!-- the repository id is arbitrary -->
    <name>Repo for DBSCAN on Spark</name>
    <url><!-- bintray/irvingc repository URL --></url>
</repository>

<dependency>
    <groupId>com.irvingc.spark</groupId>
    <artifactId>dbscan_2.10</artifactId>
    <version>0.1.0</version>
</dependency>


DBSCAN on Spark is built against Scala 2.10.

Example usage

I have created a sample project showing how DBSCAN on Spark can be used. The following, however, should give you a good idea of how to include it in your application.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.clustering.dbscan.DBSCAN

object DBSCANSample {

  def main(args: Array[String]) {

    val conf = new SparkConf().setAppName("DBSCAN Sample")
    val sc = new SparkContext(conf)

    // input/output paths and clustering parameters, e.g. taken from args
    val src = args(0)
    val dest = args(1)
    val eps = args(2).toDouble
    val minPoints = args(3).toInt
    val maxPointsPerPartition = args(4).toInt

    // each input line is expected to be a comma-separated list of coordinates
    val data = sc.textFile(src)
    val parsedData = data.map(s => Vectors.dense(s.split(',').map(_.toDouble))).cache()

    println(s"EPS: $eps minPoints: $minPoints")

    val model = DBSCAN.train(
      parsedData,
      eps = eps,
      minPoints = minPoints,
      maxPointsPerPartition = maxPointsPerPartition)

    // write each point together with its assigned cluster id
    model.labeledPoints
      .map(p => s"${p.x},${p.y},${p.cluster}")
      .saveAsTextFile(dest)

    sc.stop()
  }
}



License

DBSCAN on Spark is available under the Apache 2.0 license. See the LICENSE file for details.


Credits

DBSCAN on Spark is maintained by Irving Cordova (