# Mini-batch and distributed SAGA

Michaël Defferrard, Soroosh Shafiee

This small project explored two approaches to improving the SAGA incremental gradient algorithm:

  1. Take gradients over mini-batches to reduce the memory requirement (sketched below).
  2. Compute gradients in parallel on multiple CPU cores to speed it up.
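
For context, here is a minimal sketch of the mini-batch idea: standard SAGA stores one past gradient per sample, whereas the mini-batch variant stores one per mini-batch, dividing the memory footprint by the batch size. The `grad` callback, step size, and iteration count below are hypothetical placeholders, not the interfaces of the implementations in this repository.

```python
import numpy as np

def minibatch_saga(grad, x0, n_batches, n_iters, step):
    """Sketch of SAGA with per-mini-batch gradient storage.

    grad(x, b) is assumed to return the average gradient of the
    loss over mini-batch b at the point x.
    """
    x = x0.copy()
    # One stored gradient per mini-batch (n/b entries instead of n).
    table = np.array([grad(x, b) for b in range(n_batches)])
    avg = table.mean(axis=0)  # Running average of the stored gradients.
    for _ in range(n_iters):
        b = np.random.randint(n_batches)  # Sample a mini-batch uniformly.
        g = grad(x, b)
        # SAGA step: unbiased gradient estimate with reduced variance.
        x = x - step * (g - table[b] + avg)
        # Keep the running average consistent with the updated table.
        avg = avg + (g - table[b]) / n_batches
        table[b] = g
    return x
```

The distributed variant would compute the `grad(x, b)` calls in parallel across CPU cores; see the report for the actual algorithms and their analysis.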

## Content

See our proposal, report, and presentation for an exposition of the methods and some experimental results.

You'll also find a Python implementation of our mini-batch approach as well as a MATLAB implementation of our distributed approach.