root-master/lbfgs-tr


Implementation of trust-region limited-memory BFGS (L-BFGS-TR) quasi-Newton optimization for deep learning

The example here uses a classification task on the MNIST dataset.

TensorFlow is used to compute the gradients; NumPy and SciPy are used for the matrix computations.
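The core matrix computation in any L-BFGS method is the two-loop recursion, which applies the inverse-Hessian approximation to a gradient using only the stored curvature pairs. A minimal NumPy sketch of that standard textbook routine (not the repo's exact code; the function name and memory layout here are assumptions):

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Two-loop recursion: compute -H_k^{-1} grad from the stored
    curvature pairs (s_i, y_i), oldest first. Textbook sketch, not
    the repository's exact routine."""
    q = grad.copy()
    rhos = [1.0 / np.dot(y, s) for s, y in zip(s_list, y_list)]
    alphas = []
    # First loop: newest pair to oldest
    for (s, y), rho in zip(reversed(list(zip(s_list, y_list))), reversed(rhos)):
        a = rho * np.dot(s, q)
        alphas.append(a)
        q -= a * y
    # Initial Hessian scaling gamma = s^T y / y^T y from the newest pair
    s, y = s_list[-1], y_list[-1]
    r = (np.dot(s, y) / np.dot(y, y)) * q
    # Second loop: oldest pair to newest
    for (s, y), rho, a in zip(zip(s_list, y_list), rhos, reversed(alphas)):
        b = rho * np.dot(y, r)
        r += (a - b) * s
    return -r  # quasi-Newton search direction
```

On a quadratic with Hessian 2I and one exact curvature pair, this returns -grad/2, i.e. the exact Newton step.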

Run the Python program

$ python LBFGS_TR.py -m=10 -minibatch=1000

args:
-m=10             # L-BFGS memory size (number of stored curvature pairs)
-num-batch=4      # number of overlapped samples (refer to the paper)
-minibatch=1000   # minibatch size
-use-whole-data   # use the whole dataset to compute gradients
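What distinguishes the trust-region variant from line-search L-BFGS is the step-acceptance logic: each trial step is judged by the ratio of actual to model-predicted decrease, and the trust radius grows or shrinks accordingly. A generic sketch of that standard update (the function name, thresholds, and radius cap are assumptions, not the repo's exact values):

```python
import numpy as np

def tr_update(f, x, p, g, Bp, delta, eta=0.1):
    """One trust-region accept/reject decision (generic sketch).
    f     : objective function
    x, p  : current point and trial step (||p|| <= delta)
    g     : gradient of f at x
    Bp    : B @ p, where B is the L-BFGS model Hessian
    delta : current trust-region radius
    """
    actual = f(x) - f(x + p)                    # realized decrease
    predicted = -(g @ p + 0.5 * p @ Bp)         # decrease predicted by the model
    rho = actual / predicted
    if rho < 0.25:
        delta *= 0.25                           # poor model fit: shrink the region
    elif rho > 0.75 and np.isclose(np.linalg.norm(p), delta):
        delta = min(2.0 * delta, 10.0)          # good fit at the boundary: grow
    x_new = x + p if rho > eta else x           # accept only if decrease is sufficient
    return x_new, delta
```

For a quadratic objective with an exact Newton step, rho equals 1, so the step is accepted and the radius expands when the step hits the boundary.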

About

Limited Memory BFGS with Trust Region