
Deep Learning Tutorial

The purpose of this tutorial is to walk new users through Deep Learning using H2O Flow.

Those who have never used H2O before should refer to Getting Started for additional instructions on how to run H2O Flow.

For tips on improving the performance and results of your Deep Learning model, refer to our Definitive Performance Tuning Guide for Deep Learning.

Using Deep Learning

H2O’s Deep Learning functionalities include:

  • purely supervised training protocol for regression and classification tasks
  • fast and memory-efficient Java implementations based on columnar compression and fine-grain Map/Reduce
  • multi-threaded and distributed parallel computation to be run on either a single node or a multi-node cluster
  • fully automatic per-neuron adaptive learning rate for fast convergence
  • optional specification of learning rate, annealing and momentum options
  • regularization options include L1, L2, dropout, Hogwild! and model averaging to prevent model overfitting
  • elegant web interface or fully scriptable R API from H2O CRAN package
  • grid search for hyperparameter optimization and model selection
  • model checkpointing for reduced run times and model tuning
  • automatic pre- and post-processing for categorical and numerical data
  • automatic imputation of missing values
  • automatic tuning of communication vs computation for best performance
  • model export in plain java code for deployment in production environments
  • additional expert parameters for model tuning
  • deep autoencoders for unsupervised feature learning and anomaly detection capabilities

Getting Started

This tutorial uses the publicly available MNIST data set of hand-written digits, where each row contains the 28 × 28 = 784 raw, gray-scale pixel values from 0 to 255 of the digitized digits (0 to 9).
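The row layout described above can be sketched in a few lines: the first 784 values of each row are pixels, and the final column (C785) holds the digit label. This is an illustrative sketch only, not part of the tutorial's workflow; the helper name `split_mnist_row` is hypothetical.

```python
# Sketch: how one MNIST CSV row maps onto a 28x28 pixel grid plus a label,
# assuming the label is the last column (C785), as in this tutorial's data.
ROWS, COLS = 28, 28

def split_mnist_row(row):
    """Split one 785-value row into a 28x28 pixel grid and its digit label."""
    assert len(row) == ROWS * COLS + 1
    # Pixels are stored row-major: indices 0..783 cover the 28x28 image.
    pixels = [row[r * COLS:(r + 1) * COLS] for r in range(ROWS)]
    label = row[-1]
    return pixels, label

# A dummy row: 784 zero-valued pixels followed by the label 7.
pixels, label = split_mnist_row([0] * 784 + [7])
print(len(pixels), len(pixels[0]), label)  # 28 28 7
```

This also shows why C785 must later be converted to an enum: it is a class label, not a numeric pixel value.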

If you don't have any data of your own to work with, you can find some example datasets at http://data.h2o.ai.

Importing Data

Before creating a model, import the data into H2O:

  1. Click the Assist Me! button (the last button in the row of buttons below the menus).

Assist Me button

  2. Click the importFiles link and enter the file path to the training dataset in the Search entry field. For this example, the following datasets are used:

Importing Testing Data

  3. Click the Add all link to add the file to the import queue, then click the Import button.

Importing Training Data

Parsing Data

Now, parse the imported data:

  1. Click the Parse these files... button.

Note: The default options typically do not need to be changed unless the data does not parse correctly.

  2. From the drop-down Parser list, select the file type of the data set (Auto, XLS, CSV, or SVMLight).

  3. If the data uses a separator, select it from the drop-down Separator list.

  4. If the data uses a column header as the first row, select the First row contains column names radio button. If the first row contains data, select the First row contains data radio button. You can also select the Auto radio button to have H2O automatically determine if the first row of the dataset contains the column names or data.

  5. If the data uses apostrophes ( ' - also known as single quotes), check the Enable single quotes as a field quotation character checkbox.

  6. Review the data in the Edit Column Names and Types section. The last column, C785, must be changed to an enum for a classification model.

  7. Enter C785 in the Search by column name entry field at the top.

  8. Click the drop-down column heading menu for C785 and select Enum.

    Selecting Enum

  9. Click the Parse button.

Parsing Data

NOTE: Make sure the parse is complete by confirming that progress has reached 100% before continuing to the next step, model building. For small datasets, this should take only a few seconds; larger datasets take longer to parse.

Building a Model

  1. Once data are parsed, click the View button, then click the Build Model button.
  2. Select Deep Learning from the drop-down Select an algorithm menu, then click the Build model button.
  3. If the parsed training data is not already listed in the Training_frame drop-down list, select it.

Note: If the Ignore_const_col checkbox is checked, a list of the excluded columns displays below the Training_frame drop-down list.

  4. From the drop-down Validation_frame list, select the parsed testing (validation) data.
  5. From the Ignored_columns section, select the columns to ignore in the Available area to move them to the Selected area. For this example, do not select any columns.
  6. From the drop-down Response list, select the last column (C785).
  7. From the drop-down Activation list, select the activation function (for this example, select Tanh).
  8. In the Hidden field, specify the hidden layer sizes (for this example, enter 50,50).
  9. In the Epochs field, enter the number of passes over the training data; fractional values are allowed (for this example, enter 0.1).
  10. Click the Build Model button.

Building Models
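Note that the Epochs value of 0.1 used above is fractional: it means the network trains on roughly a tenth of the rows before scoring, which keeps this demo fast. A rough sketch of that arithmetic (the exact sample count H2O uses is an assumption here, since rows are distributed across nodes and threads):

```python
# Sketch: what a fractional epoch count means in terms of training samples.
# With epochs = 0.1 and the 60,000-row MNIST training set, the network sees
# roughly 6,000 samples before the first scoring pass.
def samples_seen(epochs, n_rows):
    """Approximate number of training samples processed."""
    return int(epochs * n_rows)

print(samples_seen(0.1, 60000))  # 6000
```

Increasing Epochs (e.g., to 10) trades longer run time for a more thoroughly trained model.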

Results

To view the results, click the View button. The output for the Deep Learning model includes the following information for both the training and testing sets:

  • Model parameters (hidden)

  • A chart of the variable importances

  • A graph of the scoring history (training MSE and validation MSE vs epochs)

  • Training and validation confusion matrices

  • Output (model category, weights, biases)

  • Status of neuron layers (layer number, units, type, dropout, L1, L2, mean rate, rate RMS, momentum, mean weight, weight RMS, mean bias, bias RMS)

  • Scoring history in tabular format

  • Training and validation metrics (model name, model checksum name, frame name, frame checksum name, description, model category, duration in ms, scoring time, predictions, MSE, R2, logloss)

  • Top-10 Hit Ratios for training and validation

  • Preview POJO

    Viewing Model Results
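The Top-10 Hit Ratios reported above can be read as follows: a prediction counts as a "hit" at k if the true class is among the k highest-probability classes. A minimal sketch of that computation (illustrative only; H2O computes this internally, and the helper name `hit_ratio` is hypothetical):

```python
# Sketch: computing a top-k hit ratio from per-class probabilities.
def hit_ratio(probs, truths, k):
    """Fraction of rows whose true class is among the k most probable classes."""
    hits = 0
    for p, t in zip(probs, truths):
        # Indices of the k highest-probability classes for this row.
        topk = sorted(range(len(p)), key=lambda c: p[c], reverse=True)[:k]
        if t in topk:
            hits += 1
    return hits / len(truths)

probs = [[0.1, 0.7, 0.2],   # most probable class: 1
         [0.5, 0.3, 0.2]]   # most probable class: 0
print(hit_ratio(probs, [1, 2], 1))  # 0.5 -> only the first row is a top-1 hit
print(hit_ratio(probs, [1, 2], 3))  # 1.0 -> top-3 always covers all 3 classes
```

For the 10-class MNIST problem, the top-10 hit ratio is always 1.0 by construction; the lower-k entries are the informative ones.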

For more details, click the Inspect button.

Inspecting Results

Select the appropriate link to view details for:

  • Parameters
  • Output
  • Neuron layer status
  • Scoring history
  • Training metrics
  • Training metrics - Top-10 Hit Ratios
  • Training metrics confusion matrix
  • Validation metrics
  • Validation metrics confusion matrix
  • Variable importances

The scoring history graph, training metrics confusion matrix, and validation metrics confusion matrix are shown below.

Training Metrics Confusion Matrix
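The confusion matrices shown in Flow tabulate actual classes against predicted classes: each row is an actual class, each column a predicted class, and off-diagonal entries are errors. A minimal sketch of how such a matrix is assembled (illustrative only, not H2O's implementation):

```python
# Sketch: building a confusion matrix from actual and predicted class labels.
def confusion_matrix(actual, predicted, n_classes):
    """Return an n_classes x n_classes matrix; rows = actual, cols = predicted."""
    m = [[0] * n_classes for _ in range(n_classes)]
    for a, p in zip(actual, predicted):
        m[a][p] += 1
    return m

cm = confusion_matrix([0, 1, 1, 2], [0, 1, 2, 2], 3)
for row in cm:
    print(row)
# [1, 0, 0]
# [0, 1, 1]
# [0, 0, 1]
```

Here the single off-diagonal entry (actual 1, predicted 2) is the one misclassified row; the diagonal holds the correct predictions.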