diff --git a/README.md b/README.md
index 02ea9a9a..750d388a 100644
--- a/README.md
+++ b/README.md
@@ -110,6 +110,25 @@
 cd docs/
 make html
 ```
+## Benchmarks
+If you feel lost in the crowd of available optimizers for deep learning, there
+exist some extensive benchmarks:
+
+[Benchmarking Neural Network Training Algorithms, Dahl G. et al., 2023](https://arxiv.org/pdf/2306.07179),
+
+[Descending through a Crowded Valley — Benchmarking Deep Learning Optimizers, Schmidt R. et al., 2021](https://proceedings.mlr.press/v139/schmidt21a).
+
+If you are interested in developing your own benchmark for some tasks,
+consider the following framework:
+
+[Benchopt: Reproducible, efficient and collaborative optimization benchmarks, Moreau T. et al., 2022](https://arxiv.org/abs/2206.13424).
+
+Finally, if you are searching for recommendations on tuning optimizers,
+consider taking a look at the
+
+[Deep Learning Tuning Playbook, Godbole V. et al., 2023](https://github.com/google-research/tuning_playbook).
+
+
 ## Citing Optax
 
 This repository is part of the DeepMind JAX Ecosystem, to cite Optax