
Implement PackNet #1496

Merged: 3 commits into ContinualAI:master on Sep 25, 2023

Conversation

tachyonicClock (Contributor)

  • The PackNet plugin is implemented inline with the PackNet module implementation. Is that okay?

from abc import ABC

from avalanche.training.templates.base_sgd import BaseSGDTemplate


class PackNetModule(ABC):
Collaborator (inline review comment on class PackNetModule):

When you call it Module, do you mean nn.Module? If so, it should inherit from it. Otherwise, I suggest renaming it for clarity.
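For illustration, a minimal sketch of the two options the reviewer is pointing at; the class names and the prune method below are hypothetical, not taken from the PR:

from abc import ABC, abstractmethod

import torch.nn as nn


# Option 1: keep the "Module" name, but make it a real nn.Module subclass.
class PackNetModule(nn.Module, ABC):
    @abstractmethod
    def prune(self, prune_proportion: float) -> None:
        """Prune a proportion of the remaining free weights (hypothetical API)."""


# Option 2: keep it a plain abstract base class, but rename it so it is not
# mistaken for an nn.Module (the new name is only an example).
class PackNetController(ABC):
    @abstractmethod
    def prune(self, prune_proportion: float) -> None:
        """Prune a proportion of the remaining free weights (hypothetical API)."""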

Several additional review threads on avalanche/models/packnet.py were marked as resolved.
@AntonioCarta (Collaborator)

Hi @tachyonicClock, thanks for the PR. To add new strategies, we require a script that reproduces the paper's results on some benchmark, to be added to continual-learning-baselines.

Please remember to add documentation for all the arguments and methods.

> The PackNet plugin is implemented inline with the PackNet module implementation. Is that okay?

That is OK. It is a bit convoluted because the training logic is mixed between the module wrapper and the plugin, but I don't think that's an issue.
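As a rough sketch of that split (not the PR's actual code): the plugin side can drive the PackNet lifecycle from Avalanche's callback hooks and delegate the actual pruning and freezing to the module wrapper. The wrapper methods prune and freeze_pruned below are assumptions for illustration only.

from avalanche.core import SupervisedPlugin


class PackNetPluginSketch(SupervisedPlugin):
    """Illustrative only: drives an assumed PackNet-style model wrapper."""

    def __init__(self, prune_proportion: float = 0.5):
        super().__init__()
        self.prune_proportion = prune_proportion

    def after_training_exp(self, strategy, **kwargs):
        # After training on each experience, prune a fraction of the still-free
        # weights, then freeze the survivors so later tasks cannot overwrite them.
        strategy.model.prune(self.prune_proportion)  # assumed wrapper method
        strategy.model.freeze_pruned()  # assumed wrapper method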

@tachyonicClock (Contributor, Author)

Hi @AntonioCarta, thanks for taking the time to review the code. Regarding the "continual-learning-baselines" project: would it be reasonable to match a survey rather than the original paper? The original paper uses a non-standard multi-dataset setting where each task is an entire dataset.

@AntonioCarta (Collaborator)

Yes, reproducing the results from a survey is also fine.

@AntonioCarta (Collaborator)

Hey @tachyonicClock, let me know when the benchmark is ready and I can do another review of the PR.

@tachyonicClock (Contributor, Author)

Absolutely, I haven't forgotten about it :D. I've just been busy with other things.

@coveralls

Pull Request Test Coverage Report for Build 6244038021

  • 90 of 240 (37.5%) changed or added relevant lines in 2 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage decreased (-0.4%) to 72.504%

Changes missing coverage (covered lines / changed or added lines):
  • avalanche/training/supervised/strategy_wrappers.py: 4 / 7 (57.14%)
  • avalanche/models/packnet.py: 86 / 233 (36.91%)

Totals (coverage status):
  • Change from base Build 6223426282: -0.4%
  • Covered Lines: 17567
  • Relevant Lines: 24229

💛 - Coveralls

@tachyonicClock (Contributor, Author)

I have made a pull request in the continual-learning-baselines project (ContinualAI/continual-learning-baselines#63). My implementation (46% on Split Tiny ImageNet) does not match the performance achieved in [1] (49% on Split Tiny ImageNet). Perfectly matching [1] is challenging because their benchmark used the Continual Hyperparameter Selection Framework rather than publishing static hyper-parameters.

[1] Delange, M., Aljundi, R., Masana, M., Parisot, S., Jia, X., Leonardis, A., Slabaugh, G., & Tuytelaars, T. (2021). A continual learning survey: Defying forgetting in classification tasks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1–1. https://doi.org/10.1109/TPAMI.2021.3057446

@AntonioCarta (Collaborator)

Thanks, I think your result is OK. Hyperparameters chosen with the Continual Hyperparameter Selection framework can be optimized for each timestep, so it makes sense that your results are slightly lower.

I'm going to merge this; please open the PR at continual-learning-baselines for the reproducibility script.

@AntonioCarta merged commit 581df11 into ContinualAI:master on Sep 25, 2023.
18 checks passed
3 participants