Optimizer "Lion" in Symbolic Discovery of Optimization Algorithms #94904
Labels
module: optimizer
Related to torch.optim
needs research
We need to decide whether or not this merits inclusion, based on its adoption in the research world
triaged
This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
🚀 The feature, motivation and pitch
A new optimizer "Lion" was proposed in Symbolic Discovery of Optimization Algorithms
https://arxiv.org/pdf/2302.06675.pdf
It is simple, memory-efficient, and has a faster runtime.
I think it would be a great addition to PyTorch's collection of optimizer algorithms, as it compares directly to AdamW, which is already available.
The optimizer is a single file:
https://github.com/google/automl/blob/master/lion/lion_pytorch.py
Pseudo Code (Lion):

$$
\begin{aligned}
&\textbf{given } \beta_1, \beta_2, \lambda, \eta, f \\
&\textbf{initialize } \theta_0,\ m_0 \leftarrow 0 \\
&\textbf{while } \theta_t \text{ not converged} \textbf{ do} \\
&\quad g_t \leftarrow \nabla_\theta f(\theta_{t-1}) \\
&\quad \textbf{update model parameters} \\
&\quad c_t \leftarrow \beta_1 m_{t-1} + (1-\beta_1)g_t \\
&\quad \theta_t \leftarrow \theta_{t-1} - \eta_t\left(\mathrm{sign}(c_t) + \lambda\theta_{t-1}\right) \\
&\quad \textbf{update EMA of } g_t \\
&\quad m_t \leftarrow \beta_2 m_{t-1} + (1-\beta_2)g_t \\
&\textbf{end while} \\
&\textbf{return } \theta_t
\end{aligned}
$$
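For reference, the update rule above can be sketched in plain Python on scalar parameters (a minimal illustration only; the real implementation in the linked `lion_pytorch.py` operates on tensors inside a `torch.optim.Optimizer` subclass, and `lion_step` here is a hypothetical helper name):

```python
def lion_step(params, grads, momentum, lr=1e-4,
              beta1=0.9, beta2=0.99, weight_decay=0.0):
    """One Lion update over parallel lists of scalar params/grads/momentum.

    Mirrors the pseudo code: c_t = beta1*m + (1-beta1)*g;
    theta -= lr*(sign(c_t) + lambda*theta); m = beta2*m + (1-beta2)*g.
    Returns the updated (params, momentum).
    """
    sign = lambda x: (x > 0) - (x < 0)  # sign(0) = 0
    new_params, new_momentum = [], []
    for p, g, m in zip(params, grads, momentum):
        c = beta1 * m + (1 - beta1) * g             # interpolation used only for the update
        p = p - lr * (sign(c) + weight_decay * p)   # sign update + decoupled weight decay
        m = beta2 * m + (1 - beta2) * g             # EMA of gradients kept as optimizer state
        new_params.append(p)
        new_momentum.append(m)
    return new_params, new_momentum

# Example: one step with lr=0.1, no weight decay.
p, m = lion_step([1.0], [0.5], [0.0], lr=0.1)
# p[0] = 1.0 - 0.1*sign(0.05) = 0.9;  m[0] = 0.01*0.5 = 0.005
```

Note that, unlike AdamW, the parameter update magnitude is just `lr` per coordinate (the sign strips the gradient scale), which is why the paper pairs a smaller learning rate with a larger weight decay, and why only one state buffer (`m`) is needed.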
Alternatives
No response
Additional context
No response
cc @vincentqb @jbschlosser @albanD @janeyx99