Tuning and Optimizing Neural Networks - Lab

Introduction

Now that you've seen some regularization, initialization, and optimization techniques, it's time to synthesize those concepts into a cohesive modeling pipeline.

With this pipeline, you will not only fit an initial model but also tune various hyperparameters for the regularization techniques. Your final model selection will be based on the validation metrics across these models. This more naturally simulates a problem you might face in practice, along with the various modeling decisions you are apt to encounter along the way.

Recall that the end objective is to achieve a balance between overfitting and underfitting. You've seen the bias-variance trade-off and the role of regularization in reducing overfitting on training data and improving generalization to new cases. Common frameworks for such a procedure include a train/validate/test methodology when data is plentiful, and k-fold cross-validation for smaller, more limited datasets. In this lab, you'll perform the latter, as the dataset in question is fairly limited.

Objectives

You will be able to:

  • Implement a k-fold cross-validation modeling pipeline
  • Apply normalization as a preprocessing technique
  • Apply regularization techniques to improve your model's generalization
  • Choose an appropriate optimization strategy

Loading the Data

Load and preview the dataset below.

# Your code here; load and preview the dataset
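For instance, a minimal sketch with pandas, assuming the Lending Club data lives in a local CSV (the filename below is a placeholder; adjust it to your copy of the dataset):

import pandas as pd

# Load the dataset; 'loan_data.csv' is a placeholder filename
data = pd.read_csv('loan_data.csv')
print(data.shape)
data.head()  # preview the first few rows (displays in a notebook)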

Defining the Problem

Set up the problem by defining X and y.

For this problem, use the following variables for X:

  • loan_amnt
  • home_ownership
  • funded_amnt_inv
  • verification_status
  • emp_length
  • installment
  • annual_inc

Our target variable y will be total_pymnt.

# Your code here; appropriately define X and y and apply a train-test split
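One possible sketch, assuming the DataFrame from the previous step is named data and uses the standard Lending Club column names:

# Select the features listed above and the target variable
features = ['loan_amnt', 'home_ownership', 'funded_amnt_inv',
            'verification_status', 'emp_length', 'installment', 'annual_inc']
X = data[features]
y = data['total_pymnt']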

Generating a Hold-Out Test Set

While we will be using k-fold cross-validation to select an optimal model, we still want a final hold-out test set that is completely independent of any modeling decisions. As such, pull out a sample of 10% of the total available data. For consistency of results, use random seed 123.

# Your code here; generate a hold out test set for final model evaluation. Use random seed 123.
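A sketch using scikit-learn's train_test_split, where test_size=0.1 reserves the required 10% and random_state=123 matches the seed above:

from sklearn.model_selection import train_test_split

# Reserve 10% of the data as a final hold-out test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1,
                                                    random_state=123)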

Preprocessing Steps

  • Fill in missing values with SimpleImputer
  • Standardize continuous features with StandardScaler()
  • One-hot encode categorical features with OneHotEncoder()
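A minimal sketch of these three steps on the training set; the split of columns into continuous versus categorical is an assumption based on the feature list above:

import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler, OneHotEncoder

cont_cols = ['loan_amnt', 'funded_amnt_inv', 'installment', 'annual_inc']
cat_cols = ['home_ownership', 'verification_status', 'emp_length']

# Fill missing values: median for continuous, most frequent for categorical
imp_cont = SimpleImputer(strategy='median')
imp_cat = SimpleImputer(strategy='most_frequent')
X_train_cont = imp_cont.fit_transform(X_train[cont_cols])
X_train_cat = imp_cat.fit_transform(X_train[cat_cols])

# Standardize the continuous features
scaler = StandardScaler()
X_train_cont = scaler.fit_transform(X_train_cont)

# One-hot encode the categorical features
ohe = OneHotEncoder(handle_unknown='ignore')
X_train_cat = ohe.fit_transform(X_train_cat).toarray()

# Recombine everything into a single feature matrix
X_train_processed = np.concatenate([X_train_cont, X_train_cat], axis=1)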

Preprocess Your Hold-Out Set

Make sure to use the SimpleImputer, StandardScaler, and OneHotEncoder instances that you already fit on the training set to transform your test set; refitting them on the test data would leak information.
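For example, continuing the sketch above (transform only, never fit):

# Reuse the transformers fit on the training set
X_test_cont = scaler.transform(imp_cont.transform(X_test[cont_cols]))
X_test_cat = ohe.transform(imp_cat.transform(X_test[cat_cols])).toarray()
X_test_processed = np.concatenate([X_test_cont, X_test_cat], axis=1)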

Defining a K-Fold Cross-Validation Methodology

Now that you have a complete hold-out test set, write a function that takes in the remaining data and performs k-fold cross-validation given a model object.

Note: Think about how you will analyze the output of your models in order to select an optimal model. This may involve graphs, although alternative approaches are certainly feasible.

# Your code here; define a function to evaluate a model object using k-fold cross-validation.

def k_folds(features_train, labels_train, model_obj, k=10, n_epochs=100):
    pass
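One possible sketch of this function, assuming NumPy arrays for the features and labels and a compiled Keras model for model_obj. The model is cloned each fold so that every fold trains from freshly initialized weights; the optimizer and loss used to recompile each clone are illustrative choices:

import numpy as np
from sklearn.model_selection import KFold
from tensorflow.keras.models import clone_model

def k_folds(features_train, labels_train, model_obj, k=10, n_epochs=100):
    """Train a fresh clone of model_obj on each fold; return the fold histories."""
    histories = []
    kf = KFold(n_splits=k, shuffle=True, random_state=123)
    for train_idx, val_idx in kf.split(features_train):
        # clone_model copies the architecture with newly initialized weights
        fold_model = clone_model(model_obj)
        fold_model.compile(optimizer='adam', loss='mse')
        history = fold_model.fit(features_train[train_idx],
                                 labels_train[train_idx],
                                 validation_data=(features_train[val_idx],
                                                  labels_train[val_idx]),
                                 epochs=n_epochs, batch_size=32, verbose=0)
        histories.append(history)
    return histories

The returned History objects hold per-epoch training and validation losses, which you can plot to compare models across folds.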

Building a Baseline Model

Here, it is also important to define the evaluation metric that you will look to optimize while tuning the model. Training to optimize this metric may use a validation and test set if data is plentiful, or k-fold cross-validation if data is limited. Since this dataset is not overly large, k-fold cross-validation is the most appropriate setup.

# Your code here; define and compile an initial model as described
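As a starting point, a small fully connected regression network might look like the sketch below; the layer sizes are arbitrary initial guesses, not tuned values:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# A deliberately modest baseline: two small hidden layers, linear output
baseline_model = Sequential([
    Dense(10, activation='relu', input_shape=(X_train_processed.shape[1],)),
    Dense(10, activation='relu'),
    Dense(1, activation='linear')
])
baseline_model.compile(optimizer='adam', loss='mse')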

Evaluating the Baseline Model with K-Fold Cross-Validation

Use your k-folds function to evaluate the baseline model.

Note: This code block is likely to take 10-20 minutes to run depending on your computer's specs. Given these run times, it is worth timing such operations for future reference.

Here's a simple little recipe to achieve this:

import datetime

start = datetime.datetime.now()
# ... run the operation you want to time here ...
end = datetime.datetime.now()
elapsed = end - start
print('Time Elapsed:', elapsed)
# Your code here; use your k-folds function to evaluate the baseline model.
# ⏰ This cell may take several minutes to run
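Putting the two together, a sketch of a timed evaluation run (y_train.values assumes the labels are still a pandas Series):

import datetime

start = datetime.datetime.now()
# Evaluate the baseline with the k_folds function defined earlier
baseline_histories = k_folds(X_train_processed, y_train.values,
                             baseline_model, k=10, n_epochs=100)
end = datetime.datetime.now()
print('Time Elapsed:', end - start)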

Intentionally Overfitting a Model

Now that you've developed a baseline model, it's time to intentionally overfit one. To overfit a model, you can:

  • Add layers
  • Make the layers bigger
  • Increase the number of training epochs

Again, be careful here. Think about the limitations of your resources, both in terms of your computer's specs and how much time and patience you have to let the process run. Also keep in mind that you will then be regularizing these overfit models, meaning another round of experiments and more time and resources.

# Your code here; try some methods to overfit your network
# ⏰ This cell may take several minutes to run

# Your code here; try some methods to overfit your network
# ⏰ This cell may take several minutes to run

# Your code here; try some methods to overfit your network
# ⏰ This cell may take several minutes to run
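For instance, one way to push the network toward overfitting is simply to make it much deeper and wider than the baseline and train it longer; the layer sizes and epoch count below are illustrative guesses:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# A deliberately oversized network relative to the baseline
big_model = Sequential([
    Dense(100, activation='relu', input_shape=(X_train_processed.shape[1],)),
    Dense(100, activation='relu'),
    Dense(100, activation='relu'),
    Dense(100, activation='relu'),
    Dense(1, activation='linear')
])
big_model.compile(optimizer='adam', loss='mse')
big_histories = k_folds(X_train_processed, y_train.values,
                        big_model, k=10, n_epochs=150)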

Regularizing the Model to Achieve Balance

Now that you have a powerful (albeit overfit) model, you can improve its generalization by applying some of the regularization techniques we discussed. Options to try include:

  • Adding dropout
  • Adding L1/L2 regularization
  • Altering the layer architecture (add or remove layers similar to above)

This process will be constrained by time and resources. Be sure to test at least two different methodologies, such as dropout and L2 regularization. If you have the time, feel free to continue experimenting.

Notes: record the validation results of each experiment so you can compare approaches when selecting your final model.

# Your code here; try some regularization or other methods to tune your network
# ⏰ This cell may take several minutes to run

# Your code here; try some regularization or other methods to tune your network
# ⏰ This cell may take several minutes to run

# Your code here; try some regularization or other methods to tune your network
# ⏰ This cell may take several minutes to run

# Your code here; try some regularization or other methods to tune your network
# ⏰ This cell may take several minutes to run
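As an illustration, here are two regularized variants of the oversized network above, one with dropout and one with L2 weight penalties; the dropout rate and penalty strength are starting points to experiment with, not tuned values:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.regularizers import l2

input_dim = X_train_processed.shape[1]

# Variant 1: dropout between the hidden layers
dropout_model = Sequential([
    Dense(100, activation='relu', input_shape=(input_dim,)),
    Dropout(0.3),
    Dense(100, activation='relu'),
    Dropout(0.3),
    Dense(1, activation='linear')
])
dropout_model.compile(optimizer='adam', loss='mse')

# Variant 2: L2 penalties on the hidden-layer weights
l2_model = Sequential([
    Dense(100, activation='relu', kernel_regularizer=l2(0.005),
          input_shape=(input_dim,)),
    Dense(100, activation='relu', kernel_regularizer=l2(0.005)),
    Dense(1, activation='linear')
])
l2_model.compile(optimizer='adam', loss='mse')

Evaluating each variant with the same k_folds routine keeps the comparison apples-to-apples.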

Final Evaluation

Now that you have selected a network architecture, tested various regularization procedures, and tuned hyperparameters via a validation methodology, it is time to evaluate your final model once and for all. Fit the model on all of the training and validation data using the architecture and hyperparameters that were most effective in your experiments above. Afterward, measure the overall performance on the hold-out test data, which has been left untouched (and hasn't leaked any data into the modeling process)!

# Your code here; final model training on entire training set followed by evaluation on hold-out data
# ⏰ This cell may take several minutes to run
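A sketch of this final step; final_model here is a placeholder for a freshly compiled network using whichever architecture and hyperparameters won out above:

# 'final_model' is a placeholder: rebuild and compile your best architecture first
final_model.fit(X_train_processed, y_train.values,
                epochs=100, batch_size=32, verbose=0)
test_mse = final_model.evaluate(X_test_processed, y_test.values, verbose=0)
print('Hold-out test MSE:', test_mse)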

Summary

In this lab, you investigated data from the Lending Club through a complete data science pipeline for neural networks. You began by reserving a hold-out set for testing that was never touched during the modeling phase. From there, you implemented a k-fold cross-validation methodology to assess an initial baseline model and various regularization methods. Next, you'll begin to investigate other neural network architectures, such as CNNs.
