# freerex

Very adaptive optimizers in TensorFlow, based on this paper.

Three optimizers that provably achieve optimal convergence rates with no prior information about the data.

- `FreeRexDiag` is a coordinate-wise optimizer (probably the best default).
- `FreeRexSphere` uses an L2 update for dimension independence (good for high-dimensional problems).
- `FreeRexLayerWise` is an intermediate between the two and may be computationally faster than `FreeRexSphere`.
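
As a rough sketch, each variant is constructed directly; this assumes the classes can be imported from the `freerex` module in this repository:

```python
# Sketch: constructing each variant (assumes they live in freerex.py).
from freerex import FreeRexDiag, FreeRexSphere, FreeRexLayerWise

opt_diag = FreeRexDiag()        # coordinate-wise updates; good default
opt_sphere = FreeRexSphere()    # L2 (dimension-independent) updates
opt_layer = FreeRexLayerWise()  # per-layer updates; a middle ground
```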

All three are implemented as subclasses of TensorFlow's `tf.train.Optimizer` class, so they should work as drop-in replacements for other optimizers. For example:

```python
optimizer = tf.train.AdamOptimizer(1e-4)
train_step = optimizer.minimize(loss)
```

can be replaced with:

```python
optimizer = FreeRex()  # FreeRex is an alias for FreeRexDiag
train_step = optimizer.minimize(loss)
```
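
For a fuller picture, here is a minimal end-to-end sketch in the TensorFlow 1.x style used above; the toy quadratic loss and session loop are illustrative, and `FreeRex` is assumed to be importable from `freerex.py`:

```python
import tensorflow as tf  # TensorFlow 1.x API assumed
from freerex import FreeRex  # assumed import path

# Toy loss for illustration: minimize the squared norm of x.
x = tf.Variable([5.0, -3.0])
loss = tf.reduce_sum(tf.square(x))

optimizer = FreeRex()  # default k_inv=1.0
train_step = optimizer.minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(100):
        sess.run(train_step)
    print(sess.run(loss))
```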

Each algorithm takes a parameter `k_inv` (e.g. `optimizer = FreeRex(0.5)`). This parameter is analogous to a learning rate but provably requires less tuning. The default is `k_inv=1.0`, which has worked well in my limited experiments.
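
In the context of the sketch above, passing a non-default value looks like this (0.5 is illustrative, not a recommendation):

```python
optimizer = FreeRex(0.5)  # k_inv = 0.5, passed positionally as in the example above
train_step = optimizer.minimize(loss)
```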