This project was completed as part of the Honors portion of the Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization course on Coursera.
Credit to DeepLearning.AI and the Coursera platform for providing the course materials and guidance.
In this report, we explore optimization methods that can accelerate learning and potentially reach better final values of the cost function. A good optimization algorithm can make the difference between waiting days for a usable model and obtaining one in hours.
By the end of this assignment, we will be able to apply optimization methods such as (stochastic) gradient descent, Momentum, RMSProp, and Adam, and to use random minibatches to speed up convergence and improve the optimization process. These techniques form the core toolkit used throughout the rest of the report.
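As a quick illustration of the kind of update rule covered here, below is a minimal NumPy sketch of a single Adam step, which combines a momentum-style first-moment estimate with an RMSProp-style second-moment estimate. The function name `adam_update`, the default hyperparameters, and the toy quadratic cost in the usage example are illustrative assumptions, not the assignment's actual code.

```python
import numpy as np

def adam_update(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam step for a single parameter array (illustrative sketch)."""
    m = beta1 * m + (1 - beta1) * grad            # first-moment estimate (Momentum-like)
    v = beta2 * v + (1 - beta2) * grad ** 2       # second-moment estimate (RMSProp-like)
    m_hat = m / (1 - beta1 ** t)                  # bias correction for the first moment
    v_hat = v / (1 - beta2 ** t)                  # bias correction for the second moment
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)   # parameter update
    return w, m, v

# Toy usage: minimize the quadratic cost 0.5 * ||w||^2, whose gradient is w itself.
w = np.array([1.0, -2.0])
m = np.zeros_like(w)
v = np.zeros_like(w)
for t in range(1, 201):
    grad = w
    w, m, v = adam_update(w, grad, m, v, t)
print(w)  # should be close to the minimum at [0, 0]
```

In practice the same update is applied to every weight and bias array of the network, with the gradients coming from backpropagation over a random minibatch rather than from a closed-form expression as in this toy example.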