A small Neural Network library for testing different gradient descent optimization algorithms.
This library was written to study first-order gradient descent algorithms for the CS 213 Optimization course.
The aim of the project is to study some of the first-order optimization algorithms used in Neural Networks and compare them in terms of accuracy, loss, and convergence time on a simple feed-forward Neural Network. In particular, the project studies the Nesterov Accelerated Gradient Descent, AdaGrad, RMSprop, and Adam optimization algorithms.
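As an illustration of the kind of update rule being compared, below is a minimal NumPy sketch of a single Adam step. The function name and signature are illustrative only, not this library's actual API:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update of parameters `theta` given gradient `grad`.

    `m` and `v` are running first- and second-moment estimates of the
    gradient, and `t` is the 1-based step count used for bias correction.
    (Hypothetical helper for illustration; not part of this library.)
    """
    m = beta1 * m + (1 - beta1) * grad        # exponential average of gradients
    v = beta2 * v + (1 - beta2) * grad ** 2   # exponential average of squared gradients
    m_hat = m / (1 - beta1 ** t)              # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)              # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

The other optimizers differ mainly in which of these moment estimates they keep: Nesterov uses a look-ahead momentum term, AdaGrad accumulates squared gradients without decay, and RMSprop keeps only the decayed second moment.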
To run the code you need to have the following installed:
- Python 3
- NumPy
- Matplotlib
- scikit-learn
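Python 3 itself has to be installed separately; assuming pip is available, the three packages can then be installed with:

```
pip install numpy matplotlib scikit-learn
```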
Yes, I intend to continue this project by testing other algorithms and methods used in machine learning, adding new features, and fixing and improving the existing algorithms.
Yes! I need as much help, suggestions, advice, and constructive criticism as possible.
Just clone the project, add the features you think are helpful, then contact me so that we can merge them into the existing project. For advice, suggestions, or bug reports, you can open an issue in the issues section.