kmeans


k-means clustering algorithm implementation written in Go

What It Does

k-means clustering partitions a multi-dimensional data set into k clusters, where each data point belongs to the cluster with the nearest mean, serving as a prototype of the cluster.

kmeans animation
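To make the "nearest mean" rule concrete, here is an illustrative sketch of the assignment step the algorithm repeats each iteration. The nearest helper below is not part of this package's API (the library handles assignment internally); it only demonstrates the idea:

import "math"

// nearest returns the index of the centroid closest to point, using squared
// Euclidean distance. Each k-means iteration assigns every observation to its
// nearest centroid, then recomputes each centroid as the mean of its points.
func nearest(point []float64, centroids [][]float64) int {
	best, bestDist := 0, math.MaxFloat64
	for i, c := range centroids {
		var dist float64
		for j := range point {
			diff := point[j] - c[j]
			dist += diff * diff
		}
		if dist < bestDist {
			best, bestDist = i, dist
		}
	}
	return best
}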

When Should I Use It?

  • You have numeric, multi-dimensional data sets
  • Your data doesn't have labels
  • You know exactly how many clusters you want to partition your data into

Example

import (
	"fmt"
	"log"
	"math/rand"

	"github.com/muesli/clusters"
	"github.com/muesli/kmeans"
)

// set up a random two-dimensional data set (float64 values between 0.0 and 1.0)
var d clusters.Observations
for x := 0; x < 1024; x++ {
	d = append(d, clusters.Coordinates{
		rand.Float64(),
		rand.Float64(),
	})
}

// Partition the data points into 16 clusters
km := kmeans.New()
clusters, err := km.Partition(d, 16)
if err != nil {
	log.Fatal(err)
}

for _, c := range clusters {
	fmt.Printf("Centered at x: %.2f y: %.2f\n", c.Center[0], c.Center[1])
	fmt.Printf("Matching data points: %+v\n\n", c.Observations)
}

Complexity

If k (the number of clusters) and d (the number of dimensions) are fixed, the problem can be solved exactly in time O(n^(dk+1)), where n is the number of entities to be clustered.

The running time of the algorithm is O(nkdi), where n is the number of d-dimensional vectors, k the number of clusters and i the number of iterations needed until convergence. On data that does have a clustering structure, the number of iterations until convergence is often small, and results only improve slightly after the first dozen iterations. The algorithm is therefore often considered to be of "linear" complexity in practice, although it is in the worst case superpolynomial when performed until convergence.

Options

You can greatly reduce the running time by adjusting the required delta threshold. With the following options, the algorithm finishes when less than 5% of the data points shift their cluster assignment between iterations:

km, err := kmeans.NewWithOptions(0.05, nil)

The default setting for the delta threshold is 0.01 (1%).
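Putting it together, a run with the relaxed threshold might look like the following sketch (it reuses the random data set d from the example above; the 5% threshold, the 16 clusters, and the log-based error handling are illustrative choices, not requirements):

// Finish once fewer than 5% of the observations change their cluster
// assignment in a single iteration.
km, err := kmeans.NewWithOptions(0.05, nil)
if err != nil {
	log.Fatal(err)
}

// Partition the same data set d into 16 clusters as before.
clusters, err := km.Partition(d, 16)
if err != nil {
	log.Fatal(err)
}

fmt.Printf("found %d clusters\n", len(clusters))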

If you are working with two-dimensional data sets, kmeans can generate beautiful graphs (like the one above) for each iteration of the algorithm:

// requires the plotter subpackage: "github.com/muesli/kmeans/plotter"
km, err := kmeans.NewWithOptions(0.01, plotter.SimplePlotter{})

Careful: this will generate PNGs in your current working directory.

You can write your own plotters by implementing the kmeans.Plotter interface.
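As a rough sketch, a custom plotter could simply log each iteration instead of drawing a graph. The single Plot(clusters, iteration) method shown below is an assumption about the kmeans.Plotter interface; check the package's GoDoc for the exact signature before implementing it:

import (
	"fmt"

	"github.com/muesli/clusters"
)

// logPlotter is a hypothetical plotter that prints a line per iteration
// instead of rendering a graph. The Plot signature assumes kmeans.Plotter
// requires a single Plot(clusters, iteration) method; verify against GoDoc.
type logPlotter struct{}

func (p logPlotter) Plot(cc clusters.Clusters, iteration int) error {
	fmt.Printf("iteration %d: %d clusters\n", iteration, len(cc))
	return nil
}

It could then be passed to the constructor in place of plotter.SimplePlotter{}, e.g. kmeans.NewWithOptions(0.01, logPlotter{}).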
