Jeremy Bernstein · Jiawei Zhao · Markus Meister · Ming‑Yu Liu · Anima Anandkumar · Yisong Yue
- JAX: open the demo notebook in Colab.
- PyTorch: grab `madam.py` and place it in your project directory. Then type:

```python
from madam import Madam
optimizer = Madam(net.parameters(), lr=0.01, p_scale=3.0, g_bound=10.0)
```
To understand what the different hyperparameters do, note that the typical Madam update to a parameter `w` is:

    w ← w × exp(±lr)

The largest possible Madam update to a parameter is:

    w ← w × exp(±lr × g_bound)

And finally, the parameters are clipped to lie within the range ± `init_scale` × `p_scale`.
An initial learning rate of `lr = 0.01` is the recommended default. The algorithm converges to a solution that "jitters" around the true solution, at which point the learning rate should be decayed. We didn't experiment much with `g_bound`, but `g_bound = 10` was a good default. `p_scale` controls the size of the optimisation domain, and it was worth tuning it over the set [1.0, 2.0, 3.0].
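To make the roles of the hyperparameters concrete, here is a rough sketch of a single Madam step on one weight tensor. This is an illustration, not the authors' implementation: in particular, the gradient normalisation below is a simplification of the per-parameter second-moment estimate used in the actual optimizer.

```python
import numpy as np

def madam_step(w, grad, init_scale, lr=0.01, p_scale=3.0, g_bound=10.0):
    """Simplified sketch of one multiplicative (Madam-style) update."""
    # Normalise the gradient so a typical entry has roughly unit size,
    # then bound it so no single step can exceed exp(lr * g_bound).
    g_norm = grad / (np.sqrt(np.mean(grad ** 2)) + 1e-12)
    g_norm = np.clip(g_norm, -g_bound, g_bound)
    # Multiplicative update: scale each weight by exp(-lr * sign(w) * g).
    w = w * np.exp(-lr * np.sign(w) * g_norm)
    # Clip the weights to the optimisation domain ± init_scale * p_scale.
    bound = init_scale * p_scale
    return np.clip(w, -bound, bound)

w = np.array([0.5, -0.5, 2.9])
g = np.array([1.0, 1.0, -1.0])
w_new = madam_step(w, g, init_scale=1.0)
```

Note how `g_bound` caps the largest per-step multiplier while `p_scale` caps the weights themselves, matching the two formulas above.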
This repository was built by Jeremy Bernstein and Jiawei Zhao to accompany the following paper:
Learning compositional functions via multiplicative weight updates.
We're putting this code here so that you can test out our optimisation algorithm in your own applications, and also so that you can attempt to reproduce the experiments in our paper.
If something isn't clear or isn't working, let us know in the Issues section or contact bernstein@caltech.edu.
```
.
├── pytorch/    # PyTorch code to reproduce the experiments in the paper.
├── jax/        # A JAX demo notebook.
├── LICENSE     # The license on our algorithm.
└── README.md   # The very page you're reading now.
```
- Our GAN implementation is based on a codebase by Jiahui Yu.
- Our Transformer and ImageNet code is forked from the PyTorch examples repository.
- Our CIFAR-10 classification code is originally by kuangliu.
- Our JAX demo is based on the Fourier feature networks codebase.
If you find Madam useful, feel free to cite the paper:
```bibtex
@inproceedings{madam,
  title={Learning compositional functions via multiplicative weight updates},
  author={Jeremy Bernstein and Jiawei Zhao and Markus Meister and Ming-Yu Liu and Anima Anandkumar and Yisong Yue},
  booktitle={Neural Information Processing Systems},
  year={2020}
}
```
We are making our algorithm available under a CC BY-NC-SA 4.0 license. The other code we have used is subject to its own license restrictions, as indicated in the subfolders.