
Deep Learning with Optimization Methods

This project was completed as part of the Honors portion of the Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization course on Coursera.

Credit to DeepLearning.AI and the Coursera platform for providing the course materials and guidance.

Objective

This report works through several optimization methods that can speed up learning and potentially reach a better final value of the cost function. A good optimization algorithm can make the difference between waiting days and waiting only hours for a usable result.

By the end of the assignment, we apply (Stochastic) Gradient Descent, Momentum, RMSProp, and Adam, and use random minibatches to speed up convergence. Together, these techniques let us train deep networks more efficiently and reach better results; a minimal sketch of the key pieces is shown below.
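For a concrete picture of the two ideas named above, here is a minimal NumPy sketch of random minibatch creation and a single Adam parameter update (which combines a Momentum-style first moment with an RMSProp-style second moment). The function names, signatures, and default hyperparameters are illustrative assumptions, not the assignment's exact helper functions.

```python
import numpy as np

def random_mini_batches(X, Y, mini_batch_size=64, seed=0):
    """Shuffle (X, Y) column-wise and partition into minibatches."""
    np.random.seed(seed)
    m = X.shape[1]                                  # number of examples (columns)
    permutation = np.random.permutation(m)
    X_shuffled, Y_shuffled = X[:, permutation], Y[:, permutation]
    mini_batches = []
    for k in range(0, m, mini_batch_size):
        mini_batches.append((X_shuffled[:, k:k + mini_batch_size],
                             Y_shuffled[:, k:k + mini_batch_size]))
    return mini_batches

def adam_update(w, dw, v, s, t, learning_rate=0.001,
                beta1=0.9, beta2=0.999, epsilon=1e-8):
    """One Adam step for a single parameter array w with gradient dw."""
    v = beta1 * v + (1 - beta1) * dw                # Momentum-style first moment
    s = beta2 * s + (1 - beta2) * dw ** 2           # RMSProp-style second moment
    v_corrected = v / (1 - beta1 ** t)              # bias correction (t = step count)
    s_corrected = s / (1 - beta2 ** t)
    w = w - learning_rate * v_corrected / (np.sqrt(s_corrected) + epsilon)
    return w, v, s
```

In a training loop, each epoch would reshuffle the data with `random_mini_batches`, compute gradients on one minibatch at a time, and call `adam_update` for every parameter array, keeping `v`, `s`, and the step counter `t` between calls.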

Results

(Results figure: Deep Learning with Optimization Methods)
