
Table of Contents

  1. Automated ML Introduction
  2. Running samples in Azure Notebooks
  3. Running samples in a Local Conda environment
  4. Automated ML SDK Sample Notebooks
  5. Documentation
  6. Running using python command
  7. Troubleshooting

Automated ML introduction

Automated machine learning (automated ML) builds high-quality machine learning models for you by automating model and hyperparameter selection. Bring a labeled dataset that you want to build a model for, and automated ML will give you a high-quality machine learning model that you can use for predictions.

If you are new to data science, AutoML helps you get started by simplifying machine learning model building. It abstracts away model selection and hyperparameter selection, creating a high-quality trained model for you in a single step.

If you are an experienced data scientist, AutoML increases your productivity by intelligently performing model and hyperparameter selection for your training runs, generating high-quality models much faster than manually specifying and training many parameter combinations. AutoML provides visibility into all the training jobs and the performance characteristics of the resulting models, so you can tune the pipeline further if you wish.

Running samples in Azure Notebooks - Jupyter based notebooks in the Azure cloud

  1. Import the sample notebooks into Azure Notebooks.

  2. Follow the instructions in the ../00.configuration notebook to create and connect to a workspace.

  3. Open one of the sample notebooks.

    Make sure the Azure Notebook kernel is set to Python 3.6 when you open a notebook.


Running samples in a Local Conda environment

To run these notebooks on your own notebook server, use the installation instructions below.

The instructions below will install everything you need and then start a Jupyter notebook. To start your Jupyter notebook manually, use:

```
conda activate azure_automl
jupyter notebook
```

or on Mac:

```
source activate azure_automl
jupyter notebook
```

1. Install Miniconda from here; choose Python 3.7 or higher.

  • Note: if you already have conda installed, you can keep using it, but it should be version 4.4.10 or later (check with: conda -V). If you have an older version, update it with: conda update conda. There's no need to install Miniconda specifically.

2. Downloading the sample notebooks

  • Download the sample notebooks from GitHub as zip and extract the contents to a local directory. The AutoML sample notebooks are in the "automl" folder.

3. Setup a new conda environment

The automl/automl_setup script creates a new conda environment, installs the necessary packages, configures the widget, and starts a Jupyter notebook. It takes the conda environment name as an optional parameter; the default is azure_automl. The exact command depends on the operating system. It can take about 10 minutes to run.

Windows

Start a conda command window, cd to the automl folder where the sample notebooks were extracted, and then run:

```
automl_setup
```

Mac

Install the "Command line developer tools" if they are not already installed (you can use the command: xcode-select --install).

Start a Terminal window, cd to the automl folder where the sample notebooks were extracted, and then run:

```
bash automl_setup_mac.sh
```

Linux

cd to the automl folder where the sample notebooks were extracted and then run:

```
bash automl_setup_linux.sh
```

4. Running configuration.ipynb

  • Before running any samples, you first need to run the configuration notebook. Open the 00.configuration.ipynb notebook.
  • Execute the cells in the notebook to register the Machine Learning Services resource provider and create a workspace (instructions are in the notebook).

5. Running Samples

  • Please make sure you use the Python [conda env:azure_automl] kernel when running the sample notebooks.
  • Follow the instructions in the individual notebooks to explore the various features in AutoML.

Automated ML SDK Sample Notebooks

Documentation

Table of Contents

  1. Automated ML Settings
  2. Cross validation split options
  3. Get Data Syntax
  4. Data pre-processing and featurization

Automated ML Settings

| Property | Description | Default |
|---|---|---|
| primary_metric | The metric that you want to optimize.<br><br>Classification supports the following primary metrics:<br>accuracy<br>AUC_weighted<br>balanced_accuracy<br>average_precision_score_weighted<br>precision_score_weighted<br><br>Regression supports the following primary metrics:<br>spearman_correlation<br>normalized_root_mean_squared_error<br>r2_score<br>normalized_mean_absolute_error<br>normalized_root_mean_squared_log_error | Classification: accuracy<br><br>Regression: spearman_correlation |
| max_time_sec | Time limit in seconds for each iteration | None |
| iterations | Number of iterations. Each iteration trains the data with a specific pipeline. To get the best result, use at least 100. | 100 |
| n_cross_validations | Number of cross validation splits | None |
| validation_size | Size of the validation set as a percentage of all training samples | None |
| concurrent_iterations | Maximum number of iterations to execute in parallel | 1 |
| preprocess | True/False<br>Setting this to True enables preprocessing of the input to handle missing data and perform some common feature extraction.<br>Note: if the input data is sparse, you cannot use preprocess=True. | False |
| max_cores_per_iteration | How many cores on the compute target are used to train a single pipeline. You can set it to -1 to use all cores. | 1 |
| exit_score | Double value indicating the target for primary_metric. Once the target is surpassed, the run terminates. | None |
| blacklist_algos | Array of strings indicating pipelines that AutoML should ignore.<br><br>Allowed values for classification:<br>LogisticRegression<br>SGDClassifierWrapper<br>NBWrapper<br>BernoulliNB<br>SVCWrapper<br>LinearSVMWrapper<br>KNeighborsClassifier<br>DecisionTreeClassifier<br>RandomForestClassifier<br>ExtraTreesClassifier<br>LightGBMClassifier<br><br>Allowed values for regression:<br>ElasticNet<br>GradientBoostingRegressor<br>DecisionTreeRegressor<br>KNeighborsRegressor<br>LassoLars<br>SGDRegressor<br>RandomForestRegressor<br>ExtraTreesRegressor | None |
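To make the table above concrete, here is an illustrative settings dictionary. The values chosen (metric, iteration count, blacklisted algorithm) are examples only, not recommendations; in your own code you would pass settings like these to your AutoML configuration as keyword arguments.

```python
# Example AutoML settings mirroring the table above. Values here are
# illustrative assumptions, not defaults or recommendations.
automl_settings = {
    "primary_metric": "AUC_weighted",    # classification metric to optimize
    "max_time_sec": 3600,                # per-iteration time limit in seconds
    "iterations": 100,                   # number of pipelines to try
    "n_cross_validations": 5,            # k-fold cross validation splits
    "concurrent_iterations": 1,          # run iterations serially
    "preprocess": True,                  # only valid for dense (non-sparse) input
    "max_cores_per_iteration": -1,       # use all cores for each pipeline
    "exit_score": None,                  # no early-exit target
    "blacklist_algos": ["KNeighborsClassifier"],  # skip this pipeline
}
print(automl_settings["primary_metric"])
```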

Cross validation split options

K-Folds Cross Validation

Use the n_cross_validations setting to specify the number of cross validations. The training data set will be randomly split into n_cross_validations folds of equal size. During each cross validation round, one of the folds is used for validation of the model trained on the remaining folds. This process repeats for n_cross_validations rounds until each fold has been used once as the validation set. Finally, the average scores across all n_cross_validations rounds are reported, and the corresponding model is retrained on the whole training data set.
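The scheme above can be sketched in a few lines of NumPy (a minimal illustration of the fold logic, not the SDK's actual implementation):

```python
import numpy as np

# Minimal sketch of k-fold splitting: shuffle the sample indices once,
# split them into n equal folds, and let each fold serve as the
# validation set exactly once.
def k_fold_indices(n_samples, n_cross_validations, seed=0):
    rng = np.random.default_rng(seed)
    indices = rng.permutation(n_samples)
    folds = np.array_split(indices, n_cross_validations)
    for i, valid_idx in enumerate(folds):
        train_idx = np.concatenate(folds[:i] + folds[i + 1:])
        yield train_idx, valid_idx

splits = list(k_fold_indices(100, 5))
print([len(valid) for _, valid in splits])  # each round validates on a disjoint fifth
```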

Monte Carlo Cross Validation (a.k.a. Repeated Random Sub-Sampling)

Use validation_size to specify the percentage of the training data set that should be used for validation, and use n_cross_validations to specify the number of cross validations. During each cross validation round, a subset of size validation_size is randomly selected for validation of the model trained on the remaining data. Finally, the average scores across all n_cross_validations rounds are reported, and the corresponding model is retrained on the whole training data set.
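The key difference from k-folds is that each round draws a fresh random validation subset, so a sample can appear in the validation set in more than one round. A minimal sketch (again illustrative, not the SDK's implementation):

```python
import numpy as np

# Monte Carlo (repeated random sub-sampling) cross validation: every
# round reshuffles and takes validation_size of the data for validation.
def monte_carlo_splits(n_samples, validation_size, n_cross_validations, seed=0):
    rng = np.random.default_rng(seed)
    n_valid = int(n_samples * validation_size)
    for _ in range(n_cross_validations):
        indices = rng.permutation(n_samples)
        yield indices[n_valid:], indices[:n_valid]  # (train, validation)

mc_splits = list(monte_carlo_splits(100, 0.2, 3))
print([(len(t), len(v)) for t, v in mc_splits])  # 80 train / 20 validation per round
```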

Custom train and validation set

You can specify separate train and validation sets either through get_data() or by passing them directly to the fit method.

get_data() syntax

The get_data() function can be used to return a dictionary with these values:

| Key | Type | Dependency | Mutually exclusive with | Description |
|---|---|---|---|---|
| X | Pandas DataFrame or Numpy array | y | data_train, label, columns | All features to train with |
| y | Pandas DataFrame or Numpy array | X | label | Label data to train with. For classification, this should be an array of integers. |
| X_valid | Pandas DataFrame or Numpy array | X, y, y_valid | data_train, label | Optional. All features to validate with. If this is not specified, X is split between train and validation. |
| y_valid | Pandas DataFrame or Numpy array | X, y, X_valid | data_train, label | Optional. The label data to validate with. If this is not specified, y is split between train and validation. |
| sample_weight | Pandas DataFrame or Numpy array | y | data_train, label, columns | Optional. A weight value for each label. Higher values indicate that the sample is more important. |
| sample_weight_valid | Pandas DataFrame or Numpy array | y_valid | data_train, label, columns | Optional. A weight value for each validation label. Higher values indicate that the sample is more important. If this is not specified, sample_weight is split between train and validation. |
| data_train | Pandas DataFrame | label | X, y, X_valid, y_valid | All data (features + label) to train with |
| label | string | data_train | X, y, X_valid, y_valid | Which column in data_train represents the label |
| columns | Array of strings | data_train |  | Optional. Whitelist of columns to use as features |
| cv_splits_indices | Array of integers | data_train |  | Optional. List of indices to split the data for cross validation |
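A hypothetical get_data() using the X/y form from the table above might look like this (the random arrays stand in for a real dataset; AutoML would call this function to obtain training data):

```python
import numpy as np

# Hypothetical get_data() returning features, integer class labels,
# and per-sample weights as described in the table above.
def get_data():
    rng = np.random.default_rng(0)
    X = rng.random((100, 4))          # 100 samples, 4 features
    y = rng.integers(0, 2, 100)       # integer class labels for classification
    sample_weight = np.ones(100)      # equal importance for every sample
    return {"X": X, "y": y, "sample_weight": sample_weight}

data = get_data()
print(data["X"].shape, data["y"].shape)
```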

Data pre-processing and featurization

If you use preprocess=True, the following data preprocessing steps are performed automatically for you:

  1. Dropping high cardinality or no variance features
    • Features with no useful information are dropped from training and validation sets. These include features with all values missing, same value across all rows or with extremely high cardinality (e.g., hashes, IDs or GUIDs).
  2. Missing value imputation
    • For numerical features, missing values are imputed with the average of the values in the column.
    • For categorical features, missing values are imputed with the most frequent value.
  3. Generating additional features
    • For DateTime features: Year, Month, Day, Day of week, Day of year, Quarter, Week of the year, Hour, Minute, Second.
    • For Text features: Term frequency based on bi-grams and tri-grams, Count vectorizer.
  4. Transformations and encodings
    • Numeric features with very few unique values are transformed into categorical features.
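Step 2 above can be illustrated with a small sketch of the two imputation strategies (a simplified stand-in for the actual preprocessing, written with plain NumPy):

```python
import numpy as np

# Mean imputation for a numeric column: replace NaN with the column mean.
def impute_numeric(col):
    col = np.array(col, dtype=float)
    col[np.isnan(col)] = np.nanmean(col)
    return col

# Most-frequent-value imputation for a categorical column:
# replace None with the mode of the observed values.
def impute_categorical(col):
    values = [v for v in col if v is not None]
    most_frequent = max(set(values), key=values.count)
    return [most_frequent if v is None else v for v in col]

print(impute_numeric([1.0, float("nan"), 3.0]))       # NaN becomes the mean, 2.0
print(impute_categorical(["a", None, "a", "b"]))      # None becomes "a"
```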

Running using python command

Jupyter notebook provides a File / Download as / Python (.py) option for saving the notebook as a Python file. You can then run this file using the python command. However, on Windows the file needs to be modified before it can be run. The following condition must be added to the main code in the file:

```python
if __name__ == "__main__":
```

The main code of the file must be indented so that it is under this condition.
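A minimal sketch of the modified file (the function name and body are placeholders for your exported notebook code):

```python
# script.py -- a notebook exported via File / Download as / Python (.py),
# with the main code moved under the __main__ guard. On Windows this
# prevents child processes spawned during training from re-running the
# main code when they import the module.
def run_experiment():
    # ... exported notebook cells go here ...
    return "done"

if __name__ == "__main__":
    result = run_experiment()
    print(result)
```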

Troubleshooting

Iterations fail and the log contains "MemoryError"

This can be caused by insufficient memory on the DSVM. AutoML loads all training data into memory, so the available memory must exceed the training data size. If you are using a remote DSVM, memory is needed for each concurrent iteration; the concurrent_iterations setting specifies the maximum number of concurrent iterations. For example, if the training data size is 8 GB and concurrent_iterations is set to 10, at least 80 GB of memory is required. To resolve this issue, allocate a DSVM with more memory or reduce the value specified for concurrent_iterations.
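The rule of thumb above is simply that required memory scales linearly with concurrent_iterations (a back-of-the-envelope helper, not an SDK function):

```python
# Every concurrent iteration loads the full training data, so the
# minimum memory is the data size times the number of concurrent runs.
def min_memory_gb(training_data_gb, concurrent_iterations):
    return training_data_gb * concurrent_iterations

print(min_memory_gb(8, 10))  # 80, matching the 8 GB x 10 iterations example
```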

Iterations show as "Not Responding" in the RunDetails widget.

This can be caused by too many concurrent iterations for a remote DSVM. Each concurrent iteration usually takes 100% of a core while it is running, and some iterations can use multiple cores. So the concurrent_iterations setting should always be less than the number of cores on the DSVM. To resolve this issue, try reducing the value specified for the concurrent_iterations setting.