XMeans

Spark-Scala Implementation of XMeans

This is a clustering library that estimates the number of centroids on its own, instead of requiring a fixed number of clusters up front like many classical clustering algorithms.

This is my attempt at implementing Dan Pelleg and Andrew Moore's X-Means paper. This implementation does not use the k-d tree described in the paper, and it stores the data points in a Spark RDD.
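The core decision in X-Means is whether splitting a centroid improves the Bayesian Information Criterion (BIC) of the model. The following is a minimal plain-Scala sketch of that BIC score, restricted to 1-D data for brevity; it is an illustration of the criterion from the paper, not code from this library (which operates on Spark RDDs of vectors).

```scala
object BicSketch {
  // BIC score of a clustering under the identical spherical-Gaussian
  // model used in Pelleg & Moore's X-Means paper, restricted to 1-D
  // data. Each cluster is (centroid, points assigned to it).
  def bic(clusters: Seq[(Double, Seq[Double])]): Double = {
    val r = clusters.map(_._2.size).sum.toDouble // total number of points R
    val k = clusters.size                        // number of clusters K
    // Maximum-likelihood estimate of the variance shared by all clusters.
    val variance =
      clusters.map { case (c, pts) => pts.map(p => (p - c) * (p - c)).sum }.sum / (r - k)
    // Log-likelihood of the data, summed cluster by cluster.
    val logLik = clusters.map { case (_, pts) =>
      val rn = pts.size.toDouble
      rn * math.log(rn) - rn * math.log(r) -
        rn / 2.0 * math.log(2.0 * math.Pi * variance) -
        (rn - k) / 2.0
    }.sum
    // Free parameters: K - 1 cluster probabilities, K centroids (1-D), one variance.
    val p = (k - 1) + k + 1
    logLik - p / 2.0 * math.log(r)
  }
}
```

A candidate split is kept when the two-cluster model scores higher than the one-cluster model on the same points, i.e. when `bic(twoClusters) > bic(oneCluster)`.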

Install

This package uses Scala 2.12 and Spark 2.4.5. To add this package to your sbt project, add the following two lines to your build.sbt file.

externalResolvers += "XMeans package" at "https://maven.pkg.github.com/mfleming99/XMeans"
libraryDependencies += "org.mf" %% "XMeans" % "1.2"

Use

The class functions similarly to Apache Spark's KMeans class, except there is no need to specify the number of clusters. Instead, you specify the maximum number of centroids you are willing to compute (note: the number of centroids found is nearly always lower than kMax). An example of use follows.

val centroids = new XMeans().setKMax(12).run(dataset)

Now centroids will contain all the centroids that XMeans computed.
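A typical next step is to assign each point to its nearest returned centroid. The helper below is a plain-Scala sketch of that step using arrays of doubles; it is an illustration only, since in a real Spark application you would work with Spark's vector types and distribute the assignment over the RDD.

```scala
object Assign {
  type Vec = Array[Double]

  // Squared Euclidean distance between two points.
  def sqDist(a: Vec, b: Vec): Double =
    a.zip(b).map { case (x, y) => (x - y) * (x - y) }.sum

  // For each point, the index of the closest centroid.
  def assign(points: Seq[Vec], centroids: Seq[Vec]): Seq[Int] =
    points.map(p => centroids.zipWithIndex.minBy { case (c, _) => sqDist(p, c) }._2)
}
```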
