
K Means Tutorial

This tutorial describes how to perform a K-Means analysis. By the end of this tutorial the user should know how to specify, run, and interpret a K-means model in H2O using Flow.

Those who have never used H2O before should refer to Getting Started for additional instructions on how to run H2O Flow.

In the latest version of H2O, the K-means algorithm includes a "k-modes" function that supports mixed categorical and real-valued data. K-modes handles categorical columns with dissimilarity measures instead of distances, replaces cluster means with cluster modes, and uses a frequency-based method to update the modes during clustering to minimize the clustering cost. As a result, the algorithm is scalable in both the number of clusters and the number of records. The k-modes method is used whenever categorical data is present.

For more information, refer to "A Fast Clustering Algorithm to Cluster Very Large Categorical Data Sets in Data Mining" and "Extensions to the k-Means Algorithm for Clustering Large Data Sets with Categorical Values" by Zhexue Huang.
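The two building blocks of k-modes can be illustrated with a small, self-contained Python sketch (an illustration of the idea only, not H2O's implementation): the simple-matching dissimilarity between categorical rows, and the per-column mode that replaces the cluster mean.

```python
from collections import Counter

def matching_dissimilarity(a, b):
    # Simple matching dissimilarity: the number of positions where
    # the two categorical rows disagree.
    return sum(x != y for x, y in zip(a, b))

def cluster_mode(rows):
    # The "mode" of a cluster: the most frequent category in each column,
    # playing the role the mean plays for numeric data.
    return tuple(Counter(col).most_common(1)[0][0] for col in zip(*rows))

rows = [("red", "small"), ("red", "large"), ("blue", "small")]
print(matching_dissimilarity(rows[0], rows[1]))  # 1
print(cluster_mode(rows))                        # ('red', 'small')
```

Minimizing total matching dissimilarity to the cluster modes is the categorical analogue of minimizing within-cluster sum of squares.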

Getting Started

First, get the tutorial's data set from here (on the next page, select File > Save to download the text file). This data originally comes from the publicly available seeds data set at http://archive.ics.uci.edu/ml/datasets/seeds; in the tutorial's link above, we've removed the irregular spacing that made the data set appear to have 9 columns instead of 8.

The data are composed of 210 observations, 7 attributes, and an a priori grouping assignment. All data are positively valued and continuous.

If you don't have any data of your own to work with, you can find some example datasets at http://data.h2o.ai.

Importing Data

Before creating a model, import data into H2O:

  1. Click the Assist Me! button (the last button in the row of buttons below the menus).

Assist Me button

  2. Click the importFiles link and enter the file path to the dataset in the Search entry field.
  3. Click the Add all link to add the file to the import queue, then click the Import button.

Importing Files

Parsing Data

Now, parse the imported data:

  1. Click the Parse these files... button.

    Note: The default options typically do not need to be changed unless the data does not parse correctly.

  2. From the drop-down Parser list, select the file type of the data set (Auto, XLS, CSV, or SVMLight).

  3. If the data uses a separator, select it from the drop-down Separator list.

  4. If the data uses a column header as the first row, select the First row contains column names radio button. If the first row contains data, select the First row contains data radio button. You can also select the Auto radio button to have H2O automatically determine if the first row of the dataset contains the column names or data.

  5. If the data uses apostrophes ( ' - also known as single quotes), check the Enable single quotes as a field quotation character checkbox.

  6. Review the data in the Edit Column Names and Types section, then click the Parse button.

Parsing Data

NOTE: Make sure the parse is complete by confirming progress is 100% before continuing to the next step, model building. For small datasets, this should only take a few seconds, but larger datasets take longer to parse.
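Flow's Auto settings above guess the separator and whether the first row is a header. As a rough illustration of how such guesses can work (using Python's standard library, not H2O's parser):

```python
import csv

sample = "area,perimeter,compactness\n15.26,14.84,0.871\n14.88,14.57,0.8811\n"

sniffer = csv.Sniffer()
dialect = sniffer.sniff(sample)          # guesses the separator
has_header = sniffer.has_header(sample)  # guesses whether row 1 is a header

print(dialect.delimiter)  # ','
print(has_header)         # True
```

The header guess works much like Flow's Auto radio button: if the first row's values differ in type from the rows below (text above numbers, for example), the first row is treated as column names.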

Building a Model

  1. Once data are parsed, click the View button, then click the Build Model button.

  2. Select K-means from the drop-down Select an algorithm menu, then click the Build model button.

  3. If the parsed seeds_dataset.hex file is not already listed in the Training_frame drop-down list, select it. Otherwise, continue to the next step.

  4. In the Ignored_columns section, select columns in the Available area to move them to the Selected area; the selected columns are excluded from model building. For this example, select column 8 (the a priori known clusters for this dataset).

  5. In the K field, specify the number of clusters. For this example, enter 3.

  6. In the Max_iterations field, specify the maximum number of iterations. For this example, enter 100.

  7. From the drop-down Init menu, select the initialization mode. For this example, select PlusPlus.

    • Random initialization randomly samples k rows of the training data as the initial cluster centers.

    • PlusPlus initialization chooses one initial center at random and weights the random selection of subsequent centers so that points furthest from the first center are more likely to be chosen.

    • Furthest initialization chooses one initial center at random and then chooses the next center to be the point furthest away in terms of Euclidean distance.

    • User initialization requires the corresponding User_points parameter. To define a specific initialization point, select the imported dataset .hex file from the drop-down User_points list, then select User from the drop-down Init list.

      Note: The user-specified points dataset must have the same number of columns as the training dataset.

  8. Uncheck the Standardize checkbox to disable column standardization.

  9. Click the Build Model button.

K-Means Model Builder cell
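The initialization modes described in step 7 can be sketched in plain Python (a 1-D illustration of the idea, not H2O's implementation; `plusplus_init` and `furthest_init` are hypothetical helper names):

```python
import random

def plusplus_init(points, k, seed=42):
    # PlusPlus (k-means++): the first center is sampled uniformly; each
    # subsequent center is sampled with probability proportional to its
    # squared distance from the nearest center chosen so far.
    rng = random.Random(seed)
    centers = [rng.choice(points)]
    while len(centers) < k:
        d2 = [min((p - c) ** 2 for c in centers) for p in points]
        centers.append(rng.choices(points, weights=d2, k=1)[0])
    return centers

def furthest_init(points, k, seed=42):
    # Furthest: after a random first pick, each new center is the point
    # with the greatest distance to its nearest existing center.
    rng = random.Random(seed)
    centers = [rng.choice(points)]
    while len(centers) < k:
        centers.append(max(points, key=lambda p: min(abs(p - c) for c in centers)))
    return centers

pts = [0.0, 0.1, 0.2, 5.0, 5.1, 10.0]
print(sorted(furthest_init(pts, 3)))
print(sorted(plusplus_init(pts, 3)))
```

Both strategies tend to spread the initial centers out, which is why they usually converge in fewer iterations than purely random initialization.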

K-Means Output

K-Means chooses starting points (according to the selected initialization mode) and iteratively converges on locally optimal centroids. The cluster number is an arbitrary label and should be treated as a factor.
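The iterate-until-convergence behavior can be sketched with a minimal 1-D Lloyd's-algorithm loop (an illustration of the idea, not H2O's code):

```python
def kmeans(points, centers, max_iterations=100):
    # Lloyd's algorithm: alternate an assignment step (each point joins
    # its nearest centroid) and an update step (each centroid moves to
    # the mean of its members) until the centroids stop moving.
    for _ in range(max_iterations):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda j: (p - centers[j]) ** 2)
            clusters[nearest].append(p)
        updated = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
        if updated == centers:
            break  # converged: no centroid moved
        centers = updated
    return centers, clusters

centers, clusters = kmeans([1.0, 1.2, 0.8, 8.0, 8.2, 7.8], [0.0, 5.0])
print([round(c, 6) for c in centers])  # [1.0, 8.0]
```

This is also why the scoring-history graph shows the average within-cluster sum of squares dropping and then flattening as the iterations converge.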

The output for K-Means includes the following:

  • Model parameters (hidden)
  • A graph of the scoring history (number of iterations vs. average within-cluster sum of squares)
  • Output (model category, validation metrics if applicable, and centers std)
  • Model Summary (number of clusters, number of categorical columns, number of iterations, avg. within-cluster sum of squares, avg. sum of squares, avg. between-cluster sum of squares)
  • Scoring history (number of iterations, avg. change of standardized centroids, avg. within-cluster sum of squares)
  • Training metrics (model name, checksum name, frame name, frame checksum name, description if applicable, model category, duration in ms, scoring time, predictions, MSE, avg. within-cluster sum of squares, avg. between-cluster sum of squares)
  • Centroid statistics (centroid number, size, within-cluster sum of squares)
  • Cluster means (centroid number, column)
  • Preview POJO
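The within-cluster sum of squares reported in the centroid statistics can be illustrated in a few lines of Python (a 1-D sketch, not H2O's code):

```python
def within_ss(cluster, center):
    # Within-cluster sum of squares: the total squared distance from
    # each cluster member to its centroid.
    return sum((p - center) ** 2 for p in cluster)

clusters = {0: [1.0, 2.0, 3.0], 1: [10.0, 12.0]}
centers = {i: sum(c) / len(c) for i, c in clusters.items()}  # cluster means
stats = {i: within_ss(c, centers[i]) for i, c in clusters.items()}

print(centers)  # {0: 2.0, 1: 11.0}
print(stats)    # {0: 2.0, 1: 2.0}
```

A lower within-cluster sum of squares means tighter clusters, which is why K-Means treats it as the quantity to minimize.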