# stochastic-gradient-descent

Here are 377 public repositories matching this topic...

Neural network-based character recognition using MATLAB. The algorithm does not rely on external ML modules and is implemented from scratch. A report is included that explains the theory, compares algorithm performance, and covers hyperparameter optimization.

  • Updated Aug 19, 2021
  • MATLAB

Gradient Descent is an optimization technique for fitting machine learning models with differentiable loss functions. It iteratively computes the first-order derivative (the gradient) of the loss function and adjusts the parameters by a small step in the direction that reduces the loss. A minimal sketch of the stochastic variant follows the entry below.

  • Updated Apr 26, 2024
  • Jupyter Notebook
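
To illustrate the idea described in the entry above, here is a minimal sketch of stochastic gradient descent on a least-squares loss. The synthetic data, learning rate, and function names are assumptions for the example, not taken from the repository.

```python
import numpy as np

def sgd_linear_regression(X, y, lr=0.01, epochs=50, seed=0):
    """Stochastic gradient descent for least-squares linear regression.

    For each sample, compute the first-order derivative of the squared
    error with respect to the parameters and step against it.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        for i in rng.permutation(n):      # visit samples in random order
            err = X[i] @ w + b - y[i]     # prediction error on one sample
            w -= lr * err * X[i]          # gradient of 0.5*err**2 w.r.t. w
            b -= lr * err                 # gradient w.r.t. the bias
    return w, b

# Synthetic example: recover w ~ [2, -3], b ~ 0.5
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = X @ np.array([2.0, -3.0]) + 0.5 + 0.01 * rng.normal(size=200)
print(sgd_linear_regression(X, y))
```

Each update uses the gradient from a single sample rather than the full dataset, which is what distinguishes stochastic gradient descent from batch gradient descent.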

A project performing gradient descent and stochastic average gradient descent for matrix completion. The algorithms are tested on some synthetic data before being used on downscaled real X-ray absorption data from a spectromicroscopy experiment. The algorithms' behaviours and outputs are examined in the report.

  • Updated May 30, 2023
  • Jupyter Notebook
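
For context on what such matrix-completion algorithms typically look like, here is a minimal sketch of stochastic gradient descent on a low-rank factorization of a partially observed matrix. The rank, step size, regularization, and synthetic data are assumptions for illustration and are not drawn from the project or its report (which also covers stochastic average gradient descent).

```python
import numpy as np

def sgd_matrix_completion(M, mask, rank=5, lr=0.01, reg=0.1, epochs=100, seed=0):
    """Complete a partially observed matrix M via a low-rank factorization U @ V.T.

    Only entries where `mask` is True are observed; each SGD step updates one
    row of U and one row of V using the error on a single observed entry.
    """
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = 0.1 * rng.normal(size=(m, rank))
    V = 0.1 * rng.normal(size=(n, rank))
    obs = np.argwhere(mask)                   # indices of observed entries
    for _ in range(epochs):
        rng.shuffle(obs)                      # random order over observations
        for i, j in obs:
            err = U[i] @ V[j] - M[i, j]       # residual on one observed entry
            grad_u = err * V[j] + reg * U[i]  # gradient of squared error + L2 term
            grad_v = err * U[i] + reg * V[j]
            U[i] -= lr * grad_u
            V[j] -= lr * grad_v
    return U @ V.T

# Synthetic low-rank matrix with roughly half of the entries observed
rng = np.random.default_rng(1)
A = rng.normal(size=(30, 3)) @ rng.normal(size=(3, 40))
mask = rng.random(A.shape) < 0.5
A_hat = sgd_matrix_completion(A, mask, rank=3)
print(np.abs((A_hat - A)[~mask]).mean())      # error on the unobserved entries
```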
