Info: This course will teach you the "magic" of getting deep learning to work well. Rather than treating the deep learning process as a black box, you will understand what drives performance and be able to get good results more systematically. You will also learn TensorFlow.
After 3 weeks, you will:
- Understand industry best practices for building deep learning applications.
- Be able to effectively use the common neural network "tricks", including initialization, L2 and dropout regularization, batch normalization, and gradient checking.
- Be able to implement and apply a variety of optimization algorithms, such as mini-batch gradient descent, momentum, RMSprop, and Adam, and check them for convergence.
- Understand new best practices for the deep learning era of how to set up train/dev/test sets and analyze bias/variance (see the split sketch below).
- Be able to implement a neural network in TensorFlow.
This is the second course of the Deep Learning Specialization.
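To make the train/dev/test objective concrete, here is a minimal NumPy sketch of a shuffled split (the function name, array layout, and split fractions are illustrative assumptions, not the course's assignment code). One point from the videos: with very large datasets, dev and test fractions of around 1% each are more sensible than the traditional 60/20/20 split.

```python
import numpy as np

def train_dev_test_split(X, Y, dev_frac=0.01, test_frac=0.01, seed=0):
    """Shuffle examples (columns) and split into train/dev/test sets.

    Assumes the course's convention: X has shape (n_features, m) and
    Y has shape (1, m), with one example per column.
    """
    m = X.shape[1]
    perm = np.random.default_rng(seed).permutation(m)
    X, Y = X[:, perm], Y[:, perm]
    n_dev, n_test = int(m * dev_frac), int(m * test_frac)
    cut = [m - n_dev - n_test, m - n_test]  # boundaries: train | dev | test
    X_train, X_dev, X_test = np.split(X, cut, axis=1)
    Y_train, Y_dev, Y_test = np.split(Y, cut, axis=1)
    return (X_train, Y_train), (X_dev, Y_dev), (X_test, Y_test)
```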
- Video: Train / Dev / Test sets
- Video: Bias / Variance
- Video: Basic Recipe for Machine Learning
- Video: Regularization
- Video: Why regularization reduces overfitting?
- Video: Dropout Regularization (see the NumPy sketch after this week's list)
- Video: Understanding Dropout
- Video: Other regularization methods
- Video: Normalizing inputs
- Video: Vanishing / Exploding gradients
- Video: Weight Initialization for Deep Networks
- Video: Numerical approximation of gradients
- Video: Gradient checking
- Video: Gradient Checking Implementation Notes
- Notebook: Initialization
- Notebook: Regularization
- Notebook: Gradient Checking (a minimal check is sketched below)
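The gradient-checking videos and notebook reduce to one comparison: a two-sided numerical derivative of the cost versus the gradient from backprop. A minimal sketch, assuming parameters and gradients have been flattened into vectors (function and variable names are illustrative, not the notebook's):

```python
import numpy as np

def gradient_check(J, theta, grad, eps=1e-7):
    """Compare the backprop gradient `grad` of cost `J` at `theta`
    against a two-sided numerical approximation.

    Returns the relative difference
        ||grad_approx - grad|| / (||grad_approx|| + ||grad||);
    values around 1e-7 or below suggest backprop is correct.
    """
    grad_approx = np.zeros_like(theta)
    for i in range(theta.size):
        plus, minus = theta.copy(), theta.copy()
        plus[i] += eps
        minus[i] -= eps
        grad_approx[i] = (J(plus) - J(minus)) / (2 * eps)
    return np.linalg.norm(grad_approx - grad) / (
        np.linalg.norm(grad_approx) + np.linalg.norm(grad))
```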
- Video: Yoshua Bengio interview
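Before moving on to the Week 2 optimizers, here are two of the Week 1 "tricks" sketched in NumPy: He initialization for ReLU layers and the inverted-dropout forward pass (layer sizes and `keep_prob` are illustrative assumptions, not the notebook's values):

```python
import numpy as np

def he_init(n_out, n_in, seed=0):
    """He initialization for a ReLU layer: scale weights by sqrt(2 / fan_in)
    to keep activation variance roughly constant across layers."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal((n_out, n_in)) * np.sqrt(2.0 / n_in)

def dropout_forward(A, keep_prob=0.8, training=True):
    """Inverted dropout: during training, zero each unit with probability
    1 - keep_prob and rescale survivors by 1/keep_prob so expected
    activations are unchanged; at test time dropout is simply off."""
    if not training:
        return A, None
    D = np.random.rand(*A.shape) < keep_prob  # boolean dropout mask
    A = (A * D) / keep_prob                   # drop units and rescale
    return A, D                               # D is reused on dA in backprop
```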
- Video: Mini-batch gradient descent
- Video: Understanding mini-batch gradient descent
- Video: Exponentially weighted averages
- Video: Understanding exponentially weighted averages
- Video: Bias correction in exponentially weighted averages
- Video: Gradient descent with momentum
- Video: RMSprop
- Video: Adam optimization algorithm
- Video: Learning rate decay
- Video: The problem of local optima
- Notebook: Optimization (an Adam update step is sketched after this week's list)
- Video: Yuanqing Lin interview
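The Week 2 videos build from exponentially weighted averages up to Adam, which combines the momentum average of gradients with the RMSprop average of squared gradients, plus bias correction of both. A minimal per-parameter update in NumPy, following the course's v/s notation (a sketch, not the notebook's code):

```python
import numpy as np

def adam_step(w, dw, v, s, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a parameter w with gradient dw.

    v : exponentially weighted average of gradients (momentum term)
    s : exponentially weighted average of squared gradients (RMSprop term)
    t : 1-based iteration count, used for bias correction
    """
    v = beta1 * v + (1 - beta1) * dw
    s = beta2 * s + (1 - beta2) * dw ** 2
    v_hat = v / (1 - beta1 ** t)  # bias-corrected averages, as in the
    s_hat = s / (1 - beta2 ** t)  # bias-correction video
    w = w - lr * v_hat / (np.sqrt(s_hat) + eps)
    return w, v, s
```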
- Video: Tuning process
- Video: Using an appropriate scale to pick hyperparameters
- Video: Hyperparameters tuning in practice: Pandas vs. Caviar
- Video: Normalizing activations in a network
- Video: Fitting Batch Norm into a neural network
- Video: Why does Batch Norm work?
- Video: Batch Norm at test time
- Video: Softmax Regression
- Video: Training a softmax classifier
- Video: Deep learning frameworks
- Video: TensorFlow
- Notebook: TensorFlow (a minimal model is sketched below)
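Finally, a minimal sketch of a network in the TensorFlow 2.x Keras API, tying together this week's Batch Norm and softmax with Week 2's Adam (layer sizes and data names are illustrative assumptions; the course notebook targets the TensorFlow version that was current at release):

```python
import tensorflow as tf  # assumes TensorFlow 2.x

model = tf.keras.Sequential([
    tf.keras.layers.Dense(25, activation="relu"),
    tf.keras.layers.BatchNormalization(),             # Batch Norm layer
    tf.keras.layers.Dense(12, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # softmax output
])
model.compile(optimizer="adam",  # Adam, from Week 2
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(X_train, Y_train, epochs=10, batch_size=64)  # mini-batch training
```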