
Fix integration with IncrementalClassifier #1447

Merged
merged 2 commits on Jul 5, 2023

Conversation

JuliousHurtado
Contributor

When testing the RAR implementation with a model that uses the IncrementalClassifier class, @AlbinSou found a new error.

The problem arises from an incompatibility between IncrementalClassifier and PyTorch's feature_extraction utilities. Because of this, I changed the mask function in the IncrementalClassifier class to be an in-place operation. This solves the first problem.

A second problem was a KeyError raised by optimizer.state_dict(). Because of this, I simplified the RAR implementation. This can help performance, but it sometimes takes more time to run.
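For readers unfamiliar with the class being discussed: the masking step can be sketched roughly as below. This is an illustrative, simplified stand-in (the class name `MaskedClassifier`, the `active` buffer, and the dimensions are my own), not Avalanche's actual IncrementalClassifier code.

```python
import torch
import torch.nn as nn

class MaskedClassifier(nn.Module):
    """Simplified sketch: a linear head that masks logits of units
    belonging to classes not seen yet (names are illustrative)."""

    def __init__(self, in_features: int, num_classes: int):
        super().__init__()
        self.linear = nn.Linear(in_features, num_classes)
        # Buffer marking which output units are currently active.
        self.register_buffer("active", torch.zeros(num_classes, dtype=torch.bool))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = self.linear(x)
        # Out-of-place masking: inactive units get -inf so that
        # softmax/cross-entropy effectively ignores them.
        return logits.masked_fill(~self.active, float("-inf"))
```

The in-place variant discussed in this PR would call `logits.masked_fill_(...)` instead, mutating the tensor rather than returning a new one.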

@coveralls

coveralls commented Jul 4, 2023

Pull Request Test Coverage Report for Build 5461914572

  • 2 of 3 (66.67%) changed or added relevant lines in 2 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage increased (+0.02%) to 72.392%

Changes Missing Coverage:
  • avalanche/training/plugins/rar.py — 0 of 1 changed/added line covered (0.0%)

Totals Coverage Status (change from base Build 5414134002: +0.02%):
  • Covered Lines: 16388
  • Relevant Lines: 22638

💛 - Coveralls

@AntonioCarta
Collaborator

Is the in-place operation compatible with backpropagation? Can you use the not in-place version of the operation instead?
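The concern here is real PyTorch autograd behavior: an in-place operation on a leaf tensor that requires grad raises a RuntimeError, and even on non-leaf tensors an in-place op can fail at backward time if the overwritten values were needed for the gradient. A minimal illustration:

```python
import torch

mask = torch.tensor([True, False, False])

# In-place masking of a leaf parameter fails immediately under autograd:
w = torch.randn(3, requires_grad=True)
try:
    w.masked_fill_(mask, 0.0)
except RuntimeError as err:
    print("in-place on a leaf failed:", err)

# The out-of-place version builds a proper graph node instead:
out = w.masked_fill(mask, 0.0)
out.sum().backward()
print(w.grad)  # gradient is zero at masked positions, one elsewhere
```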

@JuliousHurtado
Contributor Author

I don't know whether it is compatible with back-propagation, but I don't see why not.
That said, yes, I changed it. I ran some small experiments with both versions and saw no difference in performance.

Something Albin suggested was to change the RAR plugin to something similar to iCaRL, keeping the feature extractor and classifier independent. This would let us remove the name_ext_layer parameter, since we would have a feature extractor that can return the feature vector directly. What do you think?
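For context, the split being suggested can be sketched as follows. This is a hypothetical toy model (the class name, layer sizes, and attribute names are illustrative, not Avalanche's API): with the feature extractor and classifier as separate attributes, a plugin can obtain the feature vector by calling the extractor directly, with no layer-name parameter needed.

```python
import torch
import torch.nn as nn

class SplitModel(nn.Module):
    """Toy model with feature extractor and classifier kept
    as separate attributes (illustrative names)."""

    def __init__(self, feature_dim: int = 32, num_classes: int = 10):
        super().__init__()
        self.feature_extractor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(784, feature_dim),
            nn.ReLU(),
        )
        self.classifier = nn.Linear(feature_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.feature_extractor(x))

model = SplitModel()
# A plugin can get features without knowing any internal layer name:
feats = model.feature_extractor(torch.randn(2, 1, 28, 28))
print(feats.shape)  # torch.Size([2, 32])
```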

@AntonioCarta
Collaborator

Having the FE and classifier in separate attributes is always convenient when possible, but I don't fully understand what the issue with RAR is. If it's working, I think we can leave it as is.

@AntonioCarta AntonioCarta merged commit 200b8c4 into ContinualAI:master Jul 5, 2023
18 checks passed