Spark-Scala Implementation of XMeans
This is a clustering library that estimates the number of centroids from the data, rather than requiring a fixed cluster count up front as many classical clustering algorithms do.
This is my attempt at implementing Dan Pelleg and Andrew Moore's X-means paper. This implementation does not use the k-d tree discussed in the paper, and it uses Spark's RDD to store the data points.
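The split decisions in X-means are driven by the Bayesian Information Criterion (BIC) described in the paper. As an illustration only (this is not this library's internal code), here is a minimal sketch of the paper's BIC score for 1-D data under its identical spherical Gaussian assumption:

```scala
object BicSketch {
  // BIC score for a clustering, following the X-means paper's
  // identical spherical Gaussian model (1-D data for brevity).
  // Each inner Seq holds the points assigned to one centroid.
  def bic(clusters: Seq[Seq[Double]]): Double = {
    val r = clusters.map(_.size).sum.toDouble // total points R
    val k = clusters.size.toDouble            // number of centroids K
    val m = 1.0                               // dimensionality M
    val means = clusters.map(c => c.sum / c.size)

    // Maximum-likelihood variance, pooled over all clusters
    val variance = clusters.zip(means).map { case (c, mu) =>
      c.map(x => (x - mu) * (x - mu)).sum
    }.sum / (r - k)

    // Log-likelihood of the data under the fitted model,
    // summed cluster by cluster as in the paper
    val logLik = clusters.map { c =>
      val rn = c.size.toDouble
      -rn / 2 * math.log(2 * math.Pi) -
        rn * m / 2 * math.log(variance) -
        (rn - k) / 2 +
        rn * math.log(rn / r)
    }.sum

    // Free parameters: (K - 1) class probabilities, M * K centroid
    // coordinates, and one shared variance estimate
    val p = (k - 1) + m * k + 1
    logLik - p / 2 * math.log(r)
  }
}
```

A model that splits well-separated data into its natural groups scores a higher BIC than one that lumps them together, which is how X-means decides whether a centroid should be split.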
This package uses Scala 2.12 and Spark 2.4.5. To add this package to your sbt project, add the following two lines to your build.sbt:

```
externalResolvers += "XMeans package" at "https://maven.pkg.github.com/mfleming99/XMeans"

libraryDependencies += "org.mf" %% "XMeans" % "1.2"
```
The class functions similarly to Apache Spark's KMeans class, except that instead of specifying the number of clusters, you specify the maximum number of centroids you are willing to compute (note: the number of centroids found is nearly always lower than kMax). Example usage:
```scala
val centroids = new XMeans().setKMax(12).run(dataset)
```
centroids will contain all the centroids that XMeans computed.
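For context, here is a hypothetical end-to-end sketch of how this might look in a small Spark application. It assumes run accepts an RDD of org.apache.spark.mllib.linalg.Vector (mirroring the API of Spark's own KMeans, which this README says the class resembles); only setKMax and run come from the example above. It requires a Spark runtime plus the XMeans dependency, so it is not runnable standalone:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.linalg.Vectors

object XMeansExample {
  def main(args: Array[String]): Unit = {
    // Local Spark context for experimentation
    val sc = new SparkContext(
      new SparkConf().setAppName("XMeansExample").setMaster("local[*]"))

    // Toy 2-D data with three obvious groups; X-means should
    // discover the cluster count on its own, bounded by kMax.
    // (Assumes the data points are stored as an RDD of MLlib Vectors.)
    val dataset = sc.parallelize(Seq(
      Vectors.dense(0.0, 0.1), Vectors.dense(0.2, 0.0),
      Vectors.dense(9.0, 9.1), Vectors.dense(9.2, 9.0),
      Vectors.dense(0.0, 9.0), Vectors.dense(0.1, 9.2)
    ))

    // kMax is an upper bound, not the number of clusters to find
    val centroids = new XMeans().setKMax(12).run(dataset)
    centroids.foreach(println)

    sc.stop()
  }
}
```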