This repository was archived by the owner on Jun 3, 2025. It is now read-only.

Conversation

@markurtz
Member

@markurtz markurtz commented Jun 15, 2021

A new modifier to scale the learning rate from an initial to a final LR based on a given function (currently linear and cosine). Example YAML:

!LearningRateFunctionModifier
    start_epoch: 0.0
    end_epoch: 10.0
    lr_func: linear
    init_lr: 0.1
    final_lr: 0.001

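The interpolation the modifier describes can be sketched as a small standalone function. This is not the actual SparseML implementation, just a minimal illustration of how a linear or cosine ramp from `init_lr` to `final_lr` over `[start_epoch, end_epoch]` might be computed; the function name and signature are hypothetical.

```python
import math

def lr_at_epoch(epoch, start_epoch, end_epoch, init_lr, final_lr, lr_func="linear"):
    # progress through the schedule, clamped to [0, 1] so epochs outside
    # the interval hold the boundary values
    progress = (epoch - start_epoch) / (end_epoch - start_epoch)
    progress = min(max(progress, 0.0), 1.0)

    if lr_func == "linear":
        scale = progress
    elif lr_func == "cosine":
        # half-cosine ramp: 0 at start_epoch, 1 at end_epoch
        scale = (1.0 - math.cos(math.pi * progress)) / 2.0
    else:
        raise ValueError(f"unsupported lr_func: {lr_func}")

    return init_lr + (final_lr - init_lr) * scale
```

With the YAML values above, `lr_at_epoch(0.0, 0.0, 10.0, 0.1, 0.001)` returns 0.1 and `lr_at_epoch(10.0, 0.0, 10.0, 0.1, 0.001)` returns 0.001, with the curve between them set by `lr_func`.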
Additionally, support has been added for targeting specific param groups within an optimizer when changing the learning rate, for both this new modifier and the SetLRModifier.
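In PyTorch, per-param-group targeting ultimately comes down to writing to one entry of `optimizer.param_groups` instead of all of them. A minimal sketch of that mechanism (the group layout and the direct index assignment are illustrative, not the modifiers' actual API):

```python
import torch

model = torch.nn.Linear(10, 2)

# build an optimizer with two param groups so a modifier could
# target one of them (e.g. by index) without touching the other
optimizer = torch.optim.SGD(
    [
        {"params": [model.weight], "lr": 0.1},
        {"params": [model.bias], "lr": 0.1},
    ]
)

# what a group-targeted LR update boils down to: set the lr
# on only the selected param group
optimizer.param_groups[0]["lr"] = 0.01
```

After the assignment, the first group trains at 0.01 while the second stays at 0.1, which is the behavior the per-param-group support enables.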

@markurtz markurtz requested a review from a team June 15, 2021 13:18
@markurtz markurtz self-assigned this Jun 15, 2021
@markurtz markurtz requested review from bfineran, mgoin and natuan and removed request for a team June 15, 2021 13:18
@markurtz markurtz merged commit 985304e into main Jun 15, 2021
@markurtz markurtz deleted the lr-function-modifier branch June 18, 2021 14:06
