
Optimizers

A list of optimizers for minimizing the loss function, with links for further reading.

For optimizing the gradient step (momentum-based methods; a short sketch follows this list):

  1. Momentum gradient descent: https://towardsdatascience.com/stochastic-gradient-descent-with-momentum-a84097641a5d
  2. Nesterov accelerated gradient descent: https://ieeexplore.ieee.org/document/7966082/
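
A minimal NumPy sketch of the two update rules, not taken from the linked articles: the gradient function `grad_fn`, the learning rate `lr`, and the momentum coefficient `beta` are illustrative assumptions.

```python
import numpy as np

def momentum_step(w, v, grad_fn, lr=0.01, beta=0.9):
    # Classical momentum: keep an exponentially decaying running
    # average of past gradients (the velocity v) and step along it.
    v = beta * v + lr * grad_fn(w)
    w = w - v
    return w, v

def nesterov_step(w, v, grad_fn, lr=0.01, beta=0.9):
    # Nesterov: same velocity update, but the gradient is evaluated
    # at the "look-ahead" point w - beta * v instead of at w.
    v = beta * v + lr * grad_fn(w - beta * v)
    w = w - v
    return w, v
```

Both start from a zero velocity (`v = np.zeros_like(w)`); the look-ahead evaluation is what gives Nesterov its faster correction when the velocity overshoots.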

For optimizing the learning rate (adaptive per-parameter methods; a short sketch follows this list):

  1. Adagrad: http://akyrillidis.github.io/notes/AdaGrad
  2. RMSprop: https://www.coursera.org/lecture/deep-neural-network/rmsprop-BhJlm
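
A minimal NumPy sketch of the two adaptive rules, again with assumed names (`grad_fn`, `cache`) and illustrative hyperparameters rather than anything prescribed by the linked material:

```python
import numpy as np

def adagrad_step(w, cache, grad_fn, lr=0.01, eps=1e-8):
    # Adagrad: accumulate the sum of squared gradients and scale each
    # coordinate's step by the inverse square root of that sum.
    g = grad_fn(w)
    cache = cache + g ** 2
    w = w - lr * g / (np.sqrt(cache) + eps)
    return w, cache

def rmsprop_step(w, cache, grad_fn, lr=0.001, decay=0.9, eps=1e-8):
    # RMSprop: replace Adagrad's running sum with an exponentially
    # decaying average, so the effective learning rate does not decay to zero.
    g = grad_fn(w)
    cache = decay * cache + (1 - decay) * g ** 2
    w = w - lr * g / (np.sqrt(cache) + eps)
    return w, cache
```

Both start from `cache = np.zeros_like(w)`; the decaying average is what lets RMSprop keep learning long after Adagrad's per-parameter rates have shrunk.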

Combining both techniques to optimize the step direction and the learning rate (a short sketch follows):

Adam - https://machinelearningmastery.com/adam-optimization-algorithm-for-deep-learning/
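
A minimal NumPy sketch of the Adam update, assuming a hypothetical `grad_fn` and the commonly cited default hyperparameters, as a rough illustration rather than a reference implementation:

```python
import numpy as np

def adam_step(w, m, v, t, grad_fn, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    # Adam: momentum-style first moment (m) plus RMSprop-style
    # second moment (v), with bias correction for early steps (t starts at 1).
    g = grad_fn(w)
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```

Both moment buffers start at `np.zeros_like(w)`; the bias correction compensates for their initialization at zero during the first few iterations.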

Contribute - Add more related, useful links to this repo via a pull request. If you like this repository, give it a star.
