Section Recap

Introduction

This short lesson summarizes the key takeaways from Section 42.

Objectives

You will be able to:

  • Understand and explain what was covered in this section
  • Understand and explain why this section will help you become a data scientist

Key Takeaways

The key takeaways from this section include:

  • In deep learning, training, validation, and test sets are used when iteratively building the right deep network (a typical three-way split is sketched after this list)
  • Like traditional machine learning models, deep learning models are subject to the bias-variance trade-off
  • Several regularization techniques can help limit overfitting: L1 regularization, L2 regularization, dropout regularization, ... (combined in the second sketch below)
  • Deep network training can be sped up by using normalized inputs
  • Normalized inputs also help mitigate the common problem of vanishing or exploding gradients (the third sketch below shows a standard approach)
  • You learned about gradient descent, but deep learning also introduces other optimization algorithms that converge faster than plain gradient descent
  • Examples of alternatives to gradient descent are RMSprop, Adam, and gradient descent with momentum (see the fourth sketch below)
  • Hyperparameter tuning is of crucial importance when working with deep learning models, as setting the hyperparameters right can lead to substantial model improvements (a minimal grid search closes out the sketches)
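
As a minimal sketch of the first point, here is one common way to carve a dataset into training, validation, and test sets. The 80/20 and 75/25 proportions, the fabricated data, and the use of scikit-learn's train_test_split are illustrative assumptions, not the only valid choices:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical data: 1000 samples with 20 features and a binary label
X = np.random.rand(1000, 20)
y = np.random.randint(0, 2, size=1000)

# Carve off a held-out test set (20%) first ...
X_temp, X_test, y_temp, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# ... then split the remainder into training (75%) and validation (25%)
X_train, X_val, y_train, y_val = train_test_split(
    X_temp, y_temp, test_size=0.25, random_state=42)
```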
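The regularization techniques listed above can be combined in a single network. The sketch below assumes Keras (via TensorFlow); the layer sizes and penalty strengths are arbitrary examples, not recommended values:

```python
from tensorflow.keras import models, layers, regularizers

model = models.Sequential([
    # L2 (ridge) penalty on the first hidden layer's weights
    layers.Dense(64, activation='relu', input_shape=(20,),
                 kernel_regularizer=regularizers.l2(0.005)),
    # Dropout randomly zeroes 30% of the activations during training
    layers.Dropout(0.3),
    # L1 (lasso) penalty on the second hidden layer's weights
    layers.Dense(32, activation='relu',
                 kernel_regularizer=regularizers.l1(0.005)),
    layers.Dense(1, activation='sigmoid')
])
```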
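Input normalization is usually done by standardizing each feature with statistics computed on the training set only, so no information leaks from the validation or test data. Continuing with the arrays from the first sketch:

```python
# Standardize each feature to mean 0 and standard deviation 1,
# using statistics from the training set only
mu = X_train.mean(axis=0)
sigma = X_train.std(axis=0)

X_train_norm = (X_train - mu) / sigma
X_val_norm = (X_val - mu) / sigma
X_test_norm = (X_test - mu) / sigma
```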
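In Keras, the faster optimizers mentioned above are drop-in replacements for plain gradient descent at compile time. This continues the model from the second sketch; the learning rates shown are common defaults, not prescriptions:

```python
from tensorflow.keras import optimizers

# Gradient descent with momentum
sgd_momentum = optimizers.SGD(learning_rate=0.01, momentum=0.9)
# RMSprop
rmsprop = optimizers.RMSprop(learning_rate=0.001)
# Adam
adam = optimizers.Adam(learning_rate=0.001)

# Any of these can be passed to compile(); here we pick Adam
model.compile(optimizer=adam, loss='binary_crossentropy',
              metrics=['accuracy'])
```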
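Finally, a minimal sketch of hyperparameter tuning: score each candidate on the validation set and leave the test set untouched until the very end. The build_model helper and the learning-rate grid are hypothetical, and real searches usually cover several hyperparameters at once:

```python
from tensorflow.keras import models, layers, optimizers

def build_model(learning_rate):
    """Hypothetical helper: a fresh small network compiled with the given learning rate."""
    net = models.Sequential([
        layers.Dense(64, activation='relu', input_shape=(20,)),
        layers.Dense(1, activation='sigmoid')
    ])
    net.compile(optimizer=optimizers.Adam(learning_rate=learning_rate),
                loss='binary_crossentropy', metrics=['accuracy'])
    return net

best_lr, best_acc = None, 0.0
for lr in [0.1, 0.01, 0.001]:
    net = build_model(lr)
    net.fit(X_train_norm, y_train, epochs=10, batch_size=32, verbose=0)
    _, acc = net.evaluate(X_val_norm, y_val, verbose=0)  # score on validation data
    if acc > best_acc:
        best_lr, best_acc = lr, acc
```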
