
Deep-Learning-Specialization-Coursera

Course 1 - Neural Networks and Deep Learning

Course link: Neural Networks and Deep Learning

1. Week 1: Analyze the major trends driving the rise of deep learning, and give examples of where and how it is applied today.

  • Learn the basics of neural networks

2. Week 2: Set up a machine learning problem with a neural network mindset and use vectorization to speed up your models.

  • Learn about logistic regression viewed as a small neural network
  • Compute the cost function, its derivatives, and the gradient descent update (a vectorized sketch follows this list)
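
A minimal sketch of those ideas, assuming NumPy and toy data; the names `propagate`, `fit`, and `learning_rate` are illustrative and not the assignment's API:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def propagate(w, b, X, Y):
    """One vectorized forward/backward pass; X is (n_features, m), Y is (1, m)."""
    m = X.shape[1]
    A = sigmoid(w.T @ X + b)                               # forward: predicted probabilities
    cost = -np.mean(Y * np.log(A + 1e-12) + (1 - Y) * np.log(1 - A + 1e-12))  # cross-entropy
    dw = (X @ (A - Y).T) / m                               # backward: gradient w.r.t. weights
    db = np.sum(A - Y) / m                                 # gradient w.r.t. bias
    return cost, dw, db

def fit(X, Y, num_iterations=1000, learning_rate=0.1):
    w = np.zeros((X.shape[0], 1))
    b = 0.0
    for _ in range(num_iterations):
        cost, dw, db = propagate(w, b, X, Y)
        w -= learning_rate * dw                            # gradient descent update
        b -= learning_rate * db
    return w, b, cost

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(2, 200))                          # 2 features, 200 examples
    Y = (X[0:1, :] + X[1:2, :] > 0).astype(float)          # linearly separable toy labels
    w, b, cost = fit(X, Y)
    print("final cost:", cost)
```

Because every example is handled by matrix operations rather than an explicit Python loop over examples, the same code scales to large datasets, which is the point of the vectorization objective above.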

3. Week 3: Build a neural network with one hidden layer, using forward propagation and backpropagation.

  • Describe hidden units and hidden layers
  • Use units with a non-linear activation function, such as tanh
  • Implement forward and backward propagation
  • Apply random initialization to your neural network
  • Increase fluency in Deep Learning notations and Neural Network Representations
  • Implement a 2-class classification neural network with a single hidden layer
  • Compute the cross-entropy loss (a short sketch follows this list)
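
A rough sketch of how those steps could be wired together in NumPy (shapes follow the course convention of one example per column; parameter names such as `n_h` are assumptions, not the assignment code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def init_params(n_x, n_h, n_y, seed=1):
    """Small random weights break symmetry between hidden units; biases start at zero."""
    rng = np.random.default_rng(seed)
    return {
        "W1": rng.normal(size=(n_h, n_x)) * 0.01,
        "b1": np.zeros((n_h, 1)),
        "W2": rng.normal(size=(n_y, n_h)) * 0.01,
        "b2": np.zeros((n_y, 1)),
    }

def forward(params, X):
    Z1 = params["W1"] @ X + params["b1"]
    A1 = np.tanh(Z1)                                   # non-linear hidden activation
    Z2 = params["W2"] @ A1 + params["b2"]
    A2 = sigmoid(Z2)                                   # output probability for class 1
    return A1, A2

def cross_entropy(A2, Y):
    m = Y.shape[1]
    return -np.sum(Y * np.log(A2) + (1 - Y) * np.log(1 - A2)) / m

def backward(params, X, Y, A1, A2):
    m = X.shape[1]
    dZ2 = A2 - Y                                       # derivative of the loss w.r.t. Z2
    dW2 = dZ2 @ A1.T / m
    db2 = np.sum(dZ2, axis=1, keepdims=True) / m
    dZ1 = (params["W2"].T @ dZ2) * (1 - A1 ** 2)       # tanh'(z) = 1 - tanh(z)^2
    dW1 = dZ1 @ X.T / m
    db1 = np.sum(dZ1, axis=1, keepdims=True) / m
    return {"W1": dW1, "b1": db1, "W2": dW2, "b2": db2}
```

A training loop would then repeat forward, cross_entropy, backward, and a gradient descent update, exactly as in the Week 2 sketch.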

4. Week 4: Analyze the key computations underlying deep learning, then use them to build and train deep neural networks for computer vision tasks.

  • Describe the successive block structure of a deep neural network
  • Build a deep L-layer neural network
  • Analyze matrix and vector dimensions to check neural network implementations
  • Use a cache to pass information from forward to back propagation
  • Explain the role of hyperparameters in deep learning
  • Build a 2-layer neural network (a sketch of the L-layer pattern follows this list)
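
One possible sketch of the L-layer pattern (layer sizes and helper names are assumptions): repeated linear → ReLU blocks with a sigmoid output, a shape assertion at initialization, and a per-layer cache saved for backpropagation:

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

def init_deep(layer_dims, seed=2):
    """layer_dims like [n_x, 20, 7, 1]; W of layer l has shape (n_l, n_{l-1})."""
    rng = np.random.default_rng(seed)
    params = {}
    for l in range(1, len(layer_dims)):
        params["W" + str(l)] = rng.normal(size=(layer_dims[l], layer_dims[l - 1])) * 0.01
        params["b" + str(l)] = np.zeros((layer_dims[l], 1))
        # dimension check: catches mismatched layer sizes early
        assert params["W" + str(l)].shape == (layer_dims[l], layer_dims[l - 1])
    return params

def forward_deep(X, params):
    L = len(params) // 2                               # number of layers
    caches = []
    A = X
    for l in range(1, L + 1):
        W, b = params["W" + str(l)], params["b" + str(l)]
        Z = W @ A + b
        caches.append((A, W, b, Z))                    # cache inputs for backpropagation
        A = relu(Z) if l < L else 1.0 / (1.0 + np.exp(-Z))  # ReLU hidden layers, sigmoid output
    return A, caches

if __name__ == "__main__":
    X = np.random.default_rng(0).normal(size=(5, 10))  # 5 features, 10 examples
    AL, caches = forward_deep(X, init_deep([5, 4, 3, 1]))
    print(AL.shape)                                    # (1, 10)
```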

Course 2 - Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization

Course link: Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization

Week 1: Discover and experiment with a variety of different initialization methods, apply L2 regularization and dropout to avoid model overfitting, then apply gradient checking to identify errors in a fraud detection model.

  • Give examples of how different types of initializations can lead to different results
  • Examine the importance of initialization in complex neural networks
  • Explain the difference between train/dev/test sets
  • Diagnose the bias and variance issues in your model
  • Assess the right time and place for using regularization methods such as dropout or L2 regularization
  • Explain Vanishing and Exploding gradients and how to deal with them
  • Use gradient checking to verify the accuracy of your backpropagation implementation
  • Apply zeros initialization, random initialization, and He initialization
  • Apply regularization to a deep learning model (illustrative snippets follow this list)
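
Illustrative snippets for these techniques, assuming NumPy; names such as `lambd`, `keep_prob`, and `grad_check` are placeholders rather than the assignment's interface:

```python
import numpy as np

rng = np.random.default_rng(3)

def init_layer(n_out, n_in, method="he"):
    if method == "zeros":
        W = np.zeros((n_out, n_in))                    # symmetric: every unit learns the same thing
    elif method == "random":
        W = rng.normal(size=(n_out, n_in)) * 10        # overly large weights slow or break learning
    else:                                              # He initialization, suited to ReLU layers
        W = rng.normal(size=(n_out, n_in)) * np.sqrt(2.0 / n_in)
    return W, np.zeros((n_out, 1))

def l2_cost(cross_entropy_cost, weights, lambd, m):
    """Add (lambda / 2m) * sum of squared weights to the unregularized cost."""
    return cross_entropy_cost + (lambd / (2 * m)) * sum(np.sum(W ** 2) for W in weights)

def dropout_forward(A, keep_prob=0.8):
    """Inverted dropout: zero out units at random, then rescale to keep the expected activation."""
    mask = (rng.random(A.shape) < keep_prob).astype(float)
    return (A * mask) / keep_prob, mask

def grad_check(f, theta, analytic_grad, eps=1e-7):
    """Relative difference between the analytic gradient and a two-sided numerical estimate."""
    numeric = (f(theta + eps) - f(theta - eps)) / (2 * eps)
    return abs(numeric - analytic_grad) / max(abs(numeric) + abs(analytic_grad), 1e-12)
```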

Course 3:
