Merge #1612
1612: fix AdamW and improve decays docs r=DhairyaLGandhi a=CarloLucibello

There is great disorder under the sky with optimizers. Since, when chaining optimizers as
```
opt = Optimiser(opt1, opt2)
```
the order generally matters (a lot!), we have to be very careful in documenting how to use decays. In fact, we were giving completely wrong directions for `InvDecay` and `ExpDecay`. The correct ordering for standard use is

```julia
Optimiser(WeightDecay(), ADAM())   # equivalent to L2 regularization
Optimiser(ADAM(), InvDecay())   # learning rate scheduling
Optimiser(ADAM(), ExpDecay())   # learning rate scheduling
```
Different orderings should typically be considered bugs in user code.
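
To make this concrete, here is a minimal editor's sketch (not part of the PR) of how the two orderings transform a gradient; it only assumes that `Flux.Optimise.Optimiser` applies its wrapped optimisers' `apply!` in sequence, which it does:

```julia
using Flux

w = rand(Float32, 3)   # a parameter
g = rand(Float32, 3)   # its gradient

# WeightDecay first: the decay term is added to the raw gradient and then
# passes through ADAM's adaptive scaling — plain L2 regularization.
l2_opt = Optimiser(WeightDecay(1f-4), ADAM(1f-3))

# ADAM first: the decay term is added after the adaptive step,
# i.e. decoupled weight decay in the AdamW sense.
decoupled_opt = Optimiser(ADAM(1f-3), WeightDecay(1f-4))

Δ1 = Flux.Optimise.apply!(l2_opt, w, copy(g))
Δ2 = Flux.Optimise.apply!(decoupled_opt, w, copy(g))
# in both cases the update would then be w .-= Δ
```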

This PR fixes the examples and tries to clarify the documentation in this regard.

It also fixes AdamW, which was doing something totally wrong due to the aforementioned confusion
(see https://towardsdatascience.com/why-adamw-matters-736223f31b5d for how AdamW works).
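
For orientation, here is a hedged editor's sketch (not taken from the patch) of one common formulation of the decoupled update, the PyTorch-style form in which the weight-decay term shares the step size with Adam's adaptive direction instead of being fed through Adam's moment estimates; `adam_dir`, `λ`, and `η` below are illustrative names:

```julia
# One AdamW-style parameter update written out explicitly.
# adam_dir stands for Adam's bias-corrected direction m̂ ./ (sqrt.(v̂) .+ ε);
# λ is the weight-decay coefficient and η the step size.
adamw_step(w, adam_dir, λ, η) = w .- η .* (adam_dir .+ λ .* w)

# Contrast: Adam + L2 adds λ .* w to the gradient *before* the adaptive scaling,
# so the decay gets rescaled per coordinate by Adam's second-moment statistics.
```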

Related in model-zoo: FluxML/model-zoo#303 and FluxML/model-zoo#304




Co-authored-by: CarloLucibello <carlo.lucibello@gmail.com>
Co-authored-by: Carlo Lucibello <carlo.lucibello@unibocconi.it>
3 people committed Jun 10, 2021
2 parents 3b7895e + 380ca76 commit 108cbc8
Showing 1 changed file with 31 additions and 11 deletions.
42 changes: 31 additions & 11 deletions src/optimise/optimisers.jl
@@ -491,7 +491,7 @@ opt = ADAMW(0.001, (0.89, 0.995), 0.1)
```
"""
ADAMW(η = 0.001, β = (0.9, 0.999), decay = 0) =
  Optimiser(ADAM(η, β), WeightDecay(decay))
  Optimiser(ADAM(1, β), WeightDecay(decay), Descent(η))

"""
AdaBelief(η = 0.001, β::Tuple = (0.9, 0.999))
@@ -564,9 +564,18 @@ Apply inverse time decay to an optimiser, so that the effective step size at
iteration `n` is `eta / (1 + γ * n)` where `eta` is the initial step size.
The wrapped optimiser's step size is not modified.
See also the [Scheduling Optimisers](@ref) section of the docs
for more general scheduling techniques.
# Examples
`InvDecay` is typically composed with other optimizers
as the last transformation of the gradient:
```julia
Optimiser(InvDecay(..), Opt(..))
# Inverse decay of the learning rate
# with starting value 0.001 and decay coefficient 0.01.
opt = Optimiser(Adam(1f-3), InvDecay(1f-2))
```
"""
mutable struct InvDecay <: AbstractOptimiser
@@ -598,12 +607,16 @@ a minimum of `clip`.
two decay operations.
- `clip`: Minimum value of learning rate.
See also the [Scheduling Optimisers](@ref) section of the docs
for more general scheduling techniques.
# Examples
To apply exponential decay to an optimiser:
```julia
Optimiser(ExpDecay(..), Opt(..))
opt = Optimiser(ExpDecay(), ADAM())
`ExpDecay` is typically composed with other optimizers
as the last transformation of the gradient:
```julia
opt = Optimiser(ADAM(), ExpDecay())
```
"""
mutable struct ExpDecay <: AbstractOptimiser
@@ -614,7 +627,8 @@ mutable struct ExpDecay <: AbstractOptimiser
current::IdDict
end

ExpDecay(opt = 0.001, decay = 0.1, decay_step = 1000, clip = 1e-4) = ExpDecay(opt, decay, decay_step, clip, IdDict())
ExpDecay(opt = 0.001, decay = 0.1, decay_step = 1000, clip = 1e-4) =
ExpDecay(opt, decay, decay_step, clip, IdDict())

function apply!(o::ExpDecay, x, Δ)
η, s, decay = o.eta, o.step, o.decay
@@ -627,12 +641,18 @@ function apply!(o::ExpDecay, x, Δ)
end

"""
WeightDecay(wd = 0)
WeightDecay(λ = 0)
Decay weights by `wd`.
Decay weights by ``λ``.
Typically composed with other optimizers as the first transformation to the gradient,
making it equivalent to adding ``L_2`` regularization
with coefficient ``λ`` to the loss.
# Parameters
- Weight decay (`wd`)
# Examples
```julia
opt = Optimiser(WeightDecay(1f-4), ADAM())
```
"""
mutable struct WeightDecay <: AbstractOptimiser
wd::Real
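As a quick numeric check of the `InvDecay` schedule documented above (editor's sketch, not part of the commit; `eta` and `γ` as in the docstring):

```julia
# Effective step size after n iterations, per the InvDecay docstring:
# eta / (1 + γ * n), where eta is the wrapped optimiser's step size.
eta, γ = 1f-3, 1f-2
effective_step(n) = eta / (1 + γ * n)

effective_step(0)    # 0.001   — unchanged at the start
effective_step(100)  # 0.0005  — halved after 100 iterations
effective_step(900)  # 0.0001  — one tenth after 900 iterations
```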
