
CIL Exemplar-Free components #1528

Merged: 4 commits into ContinualAI:master, Nov 17, 2023

Conversation

AlbinSou (Collaborator)

I started this PR, but it's not finished yet. Feel free to discuss the things that should be changed.

For now, here is what I added:

  • FeCAM (NCM-like, but using the Mahalanobis distance; stores per-class covariance matrices) arxiv (see the sketch right after this list)
  • FeCAM update utils (current, memory and oracle)
  • NCM update utils (current, memory and oracle)
  • Cosine Linear first draft
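To make the FeCAM item above concrete, here is a minimal, hypothetical sketch of the idea (per-class feature mean and covariance, classification by Mahalanobis distance). The class and method names are made up for illustration and are not the code added in this PR; NCM is the special case where every covariance is the identity (plain Euclidean distance to the class means).

import torch

class MahalanobisClassifierSketch:
    """Toy FeCAM-style classifier: per-class mean and covariance in feature space."""

    def __init__(self, shrinkage: float = 1e-3):
        self.means = {}      # class id -> (D,) feature mean
        self.inv_covs = {}   # class id -> (D, D) inverse covariance
        self.shrinkage = shrinkage

    def fit_class(self, class_id, feats):
        # feats: (N, D) features of one class, extracted by a frozen backbone
        mean = feats.mean(dim=0)
        centered = feats - mean
        cov = centered.T @ centered / max(feats.shape[0] - 1, 1)
        cov = cov + self.shrinkage * torch.eye(feats.shape[1])  # keep it invertible
        self.means[class_id] = mean
        self.inv_covs[class_id] = torch.linalg.inv(cov)

    def predict(self, feats):
        # Assign each sample to the class with the smallest Mahalanobis distance.
        classes = sorted(self.means)
        dists = []
        for c in classes:
            diff = feats - self.means[c]                       # (N, D)
            dists.append((diff @ self.inv_covs[c] * diff).sum(dim=1))
        dists = torch.stack(dists, dim=1)                      # (N, C)
        return torch.tensor(classes)[dists.argmin(dim=1)]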

Here is what I want to add:

  • Tests for FeCAM
  • Tests for all the update utils
  • Change Cosine Linear so that it can handle a varying number of new classes (for now it only handles the case where every experience has the same number of classes)
  • Test for Cosine Linear
  • FeatureAdapters arxiv (for NCM, for now could not make it work for LinearSVC) + tests
  • ResNet18 adapted for CIFAR and TinyImageNet (without the maxpool layer) -> @AntonioCarta Do we have this already?
  • Maybe some examples of using all this in continual-learning-baselines
  • A lot of regularization-based methods use a higher learning rate and a different scheduling strategy in the first task than in the remaining ones. I have made a plugin inheriting from LRSchedulerPlugin to handle this kind of thing; I could adapt it a bit and propose it (I was thinking of something general, like a plugin that can switch the scheduler based on the task; see the sketch after this list)
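For the last item, a rough sketch of the scheduler-switching idea, assuming Avalanche's SupervisedPlugin callbacks; the class name, learning-rate values, and milestones below are placeholders and not the actual plugin proposed here.

from torch.optim.lr_scheduler import CosineAnnealingLR, MultiStepLR
from avalanche.training.plugins import SupervisedPlugin

class PerTaskSchedulerSketch(SupervisedPlugin):
    """Hypothetical plugin: a different LR and schedule for the first experience."""

    def __init__(self, later_epochs: int):
        super().__init__()
        self.later_epochs = later_epochs
        self.scheduler = None

    def before_training_exp(self, strategy, **kwargs):
        # Rebuild the scheduler (and reset the LR) at the start of every experience.
        first = strategy.experience.current_experience == 0
        for group in strategy.optimizer.param_groups:
            group["lr"] = 0.1 if first else 0.01  # placeholder values
        if first:
            self.scheduler = MultiStepLR(strategy.optimizer, milestones=[30, 60])
        else:
            self.scheduler = CosineAnnealingLR(strategy.optimizer, T_max=self.later_epochs)

    def after_training_epoch(self, strategy, **kwargs):
        self.scheduler.step()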

avalanche/models/cosine_layer.py

class CosineIncrementalClassifier(DynamicModule):
    # WARNING Maybe does not work with initial evaluation
    def __init__(self, in_features, num_classes):
Collaborator
num_classes should be optional in the incremental version? (1 is a good default)

AlbinSou (Collaborator, Author)
About that, I am not sure right now whether this behavior can be implemented with the current code state. In this code, whenever new classes arrive, the classifier switches from using one CosineLinear layer to a SplitCosineLinear one (containing two CosineLinear layers). So, if we initialize with 1 class, this SplitCosineLinear will contain the old layer (with 1 logit) plus a new layer containing the newly seen classes (say 9 new ones for the first task). I think this behavior is a bit weird. But I agree that it would be nice to be consistent with the IncrementalClassifier behavior; I will run some more tests to make sure we can get a version that is both correct and consistent with the current IncrementalClassifier.
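(For context, a minimal sketch of the mechanism described above; the names mirror the discussion but the code is simplified and hypothetical, not the actual avalanche/models/cosine_layer.py.)

import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineLinearSketch(nn.Module):
    """Scores classes by cosine similarity between features and class weight vectors."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)

    def forward(self, x):
        # Normalize both features and weights, then take the dot product.
        return F.linear(F.normalize(x, dim=1), F.normalize(self.weight, dim=1))

class SplitCosineLinearSketch(nn.Module):
    """Keeps the old head and appends a new one; logits are concatenated."""

    def __init__(self, old_head, num_new_classes):
        super().__init__()
        self.fc1 = old_head  # previously learned classes (e.g. a single logit if initialized with 1 class)
        self.fc2 = CosineLinearSketch(old_head.weight.shape[1], num_new_classes)

    def forward(self, x):
        return torch.cat([self.fc1(x), self.fc2(x)], dim=1)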

Collaborator
If I understand correctly, this means that you should not have a num_classes parameter at all. You must always start with 0 units.
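(A minimal sketch of the suggested behavior, starting from 0 units and growing on adaptation; hypothetical code, not Avalanche's IncrementalClassifier.)

import torch.nn as nn

class ZeroStartHeadSketch(nn.Module):
    """Hypothetical head that starts with 0 output units and grows as classes appear."""

    def __init__(self, in_features):
        super().__init__()
        self.in_features = in_features
        self.fc = nn.Linear(in_features, 0, bias=False)  # no units yet

    def adaptation(self, num_total_classes):
        old_weight = self.fc.weight.data            # (old_classes, in_features)
        if num_total_classes <= old_weight.shape[0]:
            return
        new_fc = nn.Linear(self.in_features, num_total_classes, bias=False)
        new_fc.weight.data[: old_weight.shape[0]] = old_weight  # keep old class weights
        self.fc = new_fc

    def forward(self, x):
        return self.fc(x)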

    def forward(self, x):
        return self.fc(x)

    def generate_fc(self, in_dim, out_dim):
Collaborator
Internal method? Use _generate_fc. If it's public, add a docstring.

@AntonioCarta (Collaborator)

Thanks @AlbinSou. Can we split this into multiple PRs? I think FeCAM and CosineLayer are ready once you add tests and docs. We can add the rest later.

I'm not sure about the plugins. The decoupling between the modules and their logic makes it a bit more complex to use them, but I don't have a solution for this. If there is no alternative solution, I would at least add a reference to their implementation in the Module's doc.

AlbinSou marked this pull request as ready for review on November 16, 2023, 14:29
AntonioCarta merged commit 69259e3 into ContinualAI:master on Nov 17, 2023
11 checks passed