List of optimizers for minimizing the loss function
- Momentum gradient descent: https://towardsdatascience.com/stochastic-gradient-descent-with-momentum-a84097641a5d
- Nesterov accelerated gradient descent: https://ieeexplore.ieee.org/document/7966082/
- Adagrad: http://akyrillidis.github.io/notes/AdaGrad
- RMSProp: https://www.coursera.org/lecture/deep-neural-network/rmsprop-BhJlm
- Adam: https://machinelearningmastery.com/adam-optimization-algorithm-for-deep-learning/
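As a quick orientation alongside the links above, here is a minimal sketch (assuming PyTorch is installed) showing how each listed optimizer can be instantiated and applied for one gradient step; the learning rate and momentum values are illustrative, not recommendations.

```python
import torch

def make_optimizer(name, params):
    # Map each technique from the list above to its torch.optim class.
    # Hyperparameters below are illustrative examples only.
    if name == "momentum":
        return torch.optim.SGD(params, lr=0.01, momentum=0.9)
    if name == "nesterov":
        return torch.optim.SGD(params, lr=0.01, momentum=0.9, nesterov=True)
    if name == "adagrad":
        return torch.optim.Adagrad(params, lr=0.01)
    if name == "rmsprop":
        return torch.optim.RMSprop(params, lr=0.01)
    if name == "adam":
        return torch.optim.Adam(params, lr=0.01)
    raise ValueError(f"unknown optimizer: {name}")

# One gradient step on the quadratic loss f(w) = sum(w**2), whose
# minimum is at w = 0; every optimizer should reduce the loss from 3.0.
for name in ["momentum", "nesterov", "adagrad", "rmsprop", "adam"]:
    w = torch.ones(3, requires_grad=True)
    opt = make_optimizer(name, [w])
    opt.zero_grad()
    loss = (w ** 2).sum()
    loss.backward()
    opt.step()
    print(name, (w ** 2).sum().item())
```

The update loop is the same for every optimizer (`zero_grad` → `backward` → `step`); only the internal update rule differs, which is why they are interchangeable behind a common interface.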
Contribute: add more related, useful links to this repo. If you like this repository, give it a star.