MER Strategy + OnlineSupervisedMetaLearning Template #1227
UPDATE interactive logger for online CL strategies
Pull Request Test Coverage Report for Build 3569271524
💛 - Coveralls
Review comment on the line `PLUGIN_CLASS = BaseSGDPlugin`:
Should be SupervisedPlugin
Fixed for all!
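For context, the requested fix amounts to pointing the template's `PLUGIN_CLASS` class attribute at `SupervisedPlugin` instead of `BaseSGDPlugin`. A minimal sketch of how such an attribute can gate plugin compatibility, using stub classes in place of Avalanche's real ones (the stub bodies and the `_check_plugin_compatibility` helper shown here are illustrative assumptions, not Avalanche's actual code):

```python
class BaseSGDPlugin:
    """Stub standing in for avalanche.core.BaseSGDPlugin."""

class SupervisedPlugin(BaseSGDPlugin):
    """Stub standing in for avalanche.core.SupervisedPlugin,
    a subclass of BaseSGDPlugin."""

class OnlineSupervisedMetaLearningTemplate:
    # The review asked for the more specific SupervisedPlugin here,
    # matching the other supervised templates.
    PLUGIN_CLASS = SupervisedPlugin

    def _check_plugin_compatibility(self, plugins):
        # Return the plugins that are NOT instances of PLUGIN_CLASS,
        # i.e. those the template would reject.
        return [p for p in plugins if not isinstance(p, self.PLUGIN_CLASS)]
```

With `PLUGIN_CLASS = SupervisedPlugin`, a bare `BaseSGDPlugin` instance would be flagged as incompatible, while any `SupervisedPlugin` subclass passes the check.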
Regarding reproducibility, we can discuss it on the continual-learning-baselines repo. If we can use the original architecture, that would be better.
This PR adds:

- `OnlineSupervisedMetaLearningTemplate`: added to the list of common templates.
- The MER strategy: the algorithm is adapted from version 1 of the algorithm in the official repository: https://github.com/mattriemer/MER/blob/master/model/meralg1.py
Unfortunately, the original paper only reports results for MNIST and Omniglot; the remaining experiments are RL. In a first run, I tested it with `SimpleMLP` from Avalanche on Permuted MNIST, and the results were slightly higher than those reported in the paper (~0.76 vs. ~0.73), which I guess is due to the higher capacity of the default MLP model in Avalanche. I can also run it with the original architecture, but since the algorithm is quite slow due to the multiple levels of meta-updates in each iteration, I will add those results later.
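The slowness comes from those nested meta-updates: each incoming example triggers several inner SGD passes, a within-batch Reptile step per batch, and one across-batch Reptile step. A minimal pure-Python sketch of that version-1 structure on a scalar toy problem (the `mer_step` name, hyperparameter defaults, and toy loss are illustrative assumptions, not this PR's implementation):

```python
import random

def mer_step(theta, new_xy, memory, grad_fn,
             s=2, k=3, alpha=0.1, beta=0.5, gamma=0.3, rng=None):
    """One MER (meralg1-style) update on a scalar parameter theta:
    s replay batches of k examples each, a within-batch Reptile
    meta-update after every batch, then one across-batch meta-update."""
    rng = rng or random.Random(0)
    theta_before_all = theta                  # weights before all batches
    for _ in range(s):
        theta_before_batch = theta            # weights before this batch
        # Each batch mixes k-1 replayed examples with the current one.
        batch = [rng.choice(memory) for _ in range(k - 1)] + [new_xy]
        for x, y in batch:
            theta -= alpha * grad_fn(theta, x, y)   # per-example SGD step
        # Within-batch Reptile meta-update.
        theta = theta_before_batch + beta * (theta - theta_before_batch)
    # Across-batch Reptile meta-update.
    return theta_before_all + gamma * (theta - theta_before_all)
```

On a toy stream y = 2x with squared-error gradients, repeatedly calling `mer_step` while appending each example to `memory` drives theta toward 2, illustrating why every iteration costs s × k gradient steps plus the two meta-updates.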