keras-callbacks

Contains Keras callbacks for a cyclic learning rate schedule, for recording loss/lr/momentum during training, and for finding a learning rate.

For example usage see example_usage.ipynb (the unrealistically high learning rate there is used to exaggerate the decay).

This module is inspired by the fastai learner and its cyclic learning rate scheduler.

This module has been tested on Keras 2.2.4 with the TensorFlow backend.

Callbacks and functions defined (a combined usage sketch follows this list):

  1. RecorderCallback:
    keras callback to store training losses, learning rate and momentum (if applicable) during training of any model
    parameters:

    alpha: float, smoothing factor; weight of the exponentially weighted average used to smooth the loss
    alpha must be in [0, 1)
    for no smoothing use alpha=0

  2. CyclicLRCallback:
    Warning: This callback is only implemented for the Adam family of optimizers (Adam, Adamax, Nadam), with the parameter beta_1 used as momentum
    keras callback for cyclic learning rate.
    For more details on how it works, see the original paper: https://arxiv.org/abs/1506.01186
    This callback also offers an auto-decay option that decays the learning rate after patience cycles without improvement in the monitored metric/loss; in that case the number of epochs must be a multiple of the number of cycles.

    The learning rate increases linearly from min_lr to max_lr over the first pct_start (a fraction in [0, 1]) of each cycle, then decreases back following a cosine curve over the remaining (1 - pct_start) of the cycle.
    This is repeated for the given number of cycles.
    In a similar manner, momentum is decreased from moms[0] to moms[1].
    parameters:

    max_lr: maximum value of the learning rate; if not provided, it is fetched from the optimizer
    min_lr: minimum value of the learning rate
    cycles: number of CLR cycles to repeat
    auto_decay: set to True to decay the learning rate automatically at the end of a cycle
    patience: number of cycles to wait before decaying the learning rate
    monitor: metric/loss monitored for auto decay
    pct_start: fraction of the cycle spent increasing the learning rate from min_lr to max_lr; the remainder is spent decreasing it
    moms: momentum range to be used
    decay: decay factor applied to max_lr after each cycle, i.e. max_lr becomes max_lr*decay
    it is possible to use decay > 1; no warning will be issued

  3. LRFindCallback:
    keras callback which gradually increases the learning rate from min_lr to max_lr and records the loss at each step
    Stores the learning rates and corresponding losses in the Python lists lr_list and loss_list respectively. Uses the disk to write temporary model weights, so read/write permission and sufficient disk space are required.
    parameters:

    max_lr: float, maximum value of the learning rate to test
    min_lr: float, minimum value of the learning rate to test
    max_epochs: integer, maximum number of epochs to run the test for
    multiplier: float, factor by which the learning rate is increased at each step
    max_loss: float, maximum loss up to which the learning rate keeps being increased

  4. lr_find:
    Function that uses the LRFindCallback defined above to plot a learning rate vs. loss graph
    parameters:

    model: keras model object to test on
    data: numpy arrays (x, y) or a data generator yielding mini-batches in the same form
    max_epochs: maximum number of epochs to run the test for
    steps_per_epoch: number of steps to take per epoch; only used when generator=True is provided
    batch_size: batch size to use in model.fit; not applicable if a generator is used
    alpha: smoothing factor for the loss (use 0 for no smoothing)
    logloss: plots the loss on a logarithmic scale
    clip_loss: clips the loss between the 2.5th and 97.5th percentiles
    max_lr: maximum value of the learning rate to test
    min_lr: minimum value of the learning rate to test
    multiplier: factor by which the learning rate is increased at each step
    max_loss: maximum loss up to which the learning rate keeps being increased
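
Below is a minimal usage sketch that ties these pieces together: it runs lr_find on a toy model, then trains with CyclicLRCallback and RecorderCallback. The import path (`callbacks`), the toy model/data, and the exact keyword arguments are assumptions based on the parameter lists above; adjust them to match the actual module.

```python
# Minimal sketch (Keras 2.2.4, TensorFlow backend). The module name `callbacks`
# and the keyword arguments below are assumptions taken from the parameter
# lists in this README; check the source for the exact signatures.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam

from callbacks import RecorderCallback, CyclicLRCallback, lr_find  # assumed import path

# Toy binary-classification data and model
x = np.random.rand(1024, 16)
y = np.random.randint(0, 2, size=(1024, 1))

model = Sequential([Dense(32, activation='relu', input_shape=(16,)),
                    Dense(1, activation='sigmoid')])
# CyclicLRCallback requires an Adam-family optimizer (see the warning above)
model.compile(optimizer=Adam(lr=1e-3), loss='binary_crossentropy', metrics=['acc'])

# 1. Sweep learning rates and plot loss vs. lr to pick a sensible max_lr
lr_find(model=model, data=(x, y), min_lr=1e-6, max_lr=1.0, batch_size=64, alpha=0.9)

# 2. Train with a cyclic schedule, recording loss/lr/momentum along the way
recorder = RecorderCallback(alpha=0.9)
clr = CyclicLRCallback(max_lr=1e-2, min_lr=1e-5, cycles=4, pct_start=0.3,
                       moms=(0.95, 0.85), auto_decay=True, patience=1,
                       monitor='val_loss', decay=0.5)

# With auto_decay=True, epochs should be a multiple of cycles (here 8 = 2 * 4)
model.fit(x, y, epochs=8, batch_size=64, validation_split=0.2,
          callbacks=[recorder, clr])

# The recorder now holds the loss/lr/momentum history for inspection or plotting
```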

It was a great learning experience <3
