
Supervised Contrastive Replay implementation. #1356

Merged
merged 22 commits into ContinualAI:master on May 31, 2023

Conversation

AndreaCossu
Collaborator

This PR adds Supervised Contrastive Replay (SCR). This is a draft implementation, open to discussion. The example is working in the sense that it runs without errors.
I have not used any online learning APIs so far.
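For reference, SCR trains with the supervised contrastive (SupCon) loss over a multiviewed batch rather than with a classifier. The following is a minimal, self-contained sketch of that loss, not the implementation in `avalanche/training/losses.py`; the function name and signature are illustrative.

```python
import torch
import torch.nn.functional as F


def supcon_loss(features: torch.Tensor, labels: torch.Tensor,
                temperature: float = 0.1) -> torch.Tensor:
    """Supervised contrastive loss over an already-multiviewed batch.

    `features` has shape [B, D] (projections of all views stacked
    together), `labels` has shape [B]. Views of the same class act as
    positives for each other.
    """
    device = features.device
    features = F.normalize(features, dim=1)
    sim = features @ features.T / temperature           # [B, B] similarities
    # Subtract the row-wise max for numerical stability.
    sim = sim - sim.max(dim=1, keepdim=True).values.detach()

    # Positive mask: same label, excluding self-pairs (the diagonal).
    labels = labels.view(-1, 1)
    pos_mask = (labels == labels.T).float().to(device)
    self_mask = torch.eye(pos_mask.size(0), device=device)
    pos_mask = pos_mask * (1 - self_mask)

    # Log-probability of each pair, normalizing over all non-self pairs.
    exp_sim = torch.exp(sim) * (1 - self_mask)
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True) + 1e-12)

    # Average log-likelihood over the positives of each anchor.
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(pos_mask * log_prob).sum(dim=1) / pos_count
    return loss.mean()
```

Because the loss is defined over pairs of views, evaluating it requires a multiviewed batch, which is exactly why it cannot be reused as-is at test time (see the caveats below).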

Some caveats:

  1. The accuracy cannot be measured at training time, since during training SCR does not use any classifier (only a custom loss).
  2. The SCR loss cannot be measured at test time, since it requires building a multiviewed batch with augmentations. I switched to cross-entropy at eval time, but users need to read the APIdoc to know that. I also need to check whether this switch may break something, or whether it is better to override self._criterion directly.
  3. Transformations used to build the multiviewed batch are supported only if they operate on (batched) tensors, not on single PIL images. This greatly simplifies the implementation. It should be feasible to remove this constraint, but we would probably need a custom dataset to apply the transformations and augment the mini-batch.

@AndreaCossu AndreaCossu marked this pull request as draft April 28, 2023 15:47
@AlbinSou
Collaborator

> The accuracy cannot be measured at training time, since during training the SCR does not use any classifier (just a custom loss).

  • I don't think this is a problem: we are not really interested in accuracy on the train stream (it's fine that we don't get it for free during training, as happens in supervised learning), but rather on the validation stream. And that can be computed with the nearest-mean classifier on the buffer samples, as described in the paper.
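The nearest-mean classification mentioned here can be sketched as follows: each class is represented by the mean embedding of its buffer samples, and a test embedding is assigned to the class with the closest mean. This is a hypothetical standalone helper, not Avalanche's NCMClassifier API.

```python
import torch


def ncm_predict(feats: torch.Tensor,
                class_means: dict) -> torch.Tensor:
    """Nearest-class-mean prediction.

    `class_means` maps label -> mean feature vector (computed from the
    replay buffer); `feats` has shape [B, D]. Each sample is assigned to
    the class whose mean is nearest in Euclidean distance.
    """
    labels = sorted(class_means)
    means = torch.stack([class_means[c] for c in labels])  # [C, D]
    dists = torch.cdist(feats, means)                      # [B, C]
    idx = dists.argmin(dim=1)
    return torch.tensor([labels[i] for i in idx.tolist()])
```

In SCR the means are typically recomputed from the buffer before evaluation, so the "classifier" needs no trained parameters of its own.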

@AndreaCossu
Collaborator Author

Yes, @AlbinSou, I agree. I meant that if applied blindly, the user may expect to get the training accuracy, and this would raise an error. But we cannot do much more than clarify that in the APIdoc 😄

avalanche/models/dynamic_modules.py (review comment, outdated, resolved)
examples/supervised_contrastive_replay.py (review comment, outdated, resolved)
@AntonioCarta
Collaborator

I agree with Albin, there are many other methods where you cannot compute the accuracy during training. Even better if it fails explicitly. Maybe add a comment in the doc.

@AndreaCossu
Collaborator Author

I already updated the APIdoc. As for the TrainEvalModel, I will convert it to a standard PyTorch Module.
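The TrainEvalModel pattern discussed here is essentially one feature extractor with two heads: the projection head feeds the contrastive loss during training, while a separate classifier (e.g. NCM) is used at eval time. A minimal sketch as a standard PyTorch Module; names and constructor arguments are illustrative, not Avalanche's exact API.

```python
import torch
import torch.nn as nn


class TrainEvalModel(nn.Module):
    """Route forward() through a different head depending on mode.

    In training mode, the projection head is used (its output feeds the
    contrastive loss); in eval mode, the eval head (e.g. a nearest-mean
    classifier) is used instead.
    """

    def __init__(self, feature_extractor: nn.Module,
                 train_head: nn.Module, eval_head: nn.Module):
        super().__init__()
        self.feature_extractor = feature_extractor
        self.train_head = train_head
        self.eval_head = eval_head

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.feature_extractor(x)
        head = self.train_head if self.training else self.eval_head
        return head(feats)
```

Because the switch keys off `self.training`, calling `model.train()` / `model.eval()` is enough to select the right head, with no strategy-side bookkeeping.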

@coveralls

coveralls commented May 2, 2023

Pull Request Test Coverage Report for Build 5134216960

  • 60 of 167 (35.93%) changed or added relevant lines in 9 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage decreased (-0.3%) to 72.358%

Changes missing coverage:

File                                                             Covered Lines   Changed/Added Lines   %
avalanche/models/scr_model.py                                    7               17                    41.18%
avalanche/training/losses.py                                     6               47                    12.77%
avalanche/training/supervised/supervised_contrastive_replay.py   22              78                    28.21%

Totals: change from base Build 5131641881: -0.3%
Covered Lines: 16125
Relevant Lines: 22285

💛 - Coveralls

@ContinualAI-bot
Collaborator

Oh no! It seems there are some PEP8 errors! 😕
Don't worry, you can fix them! 💪
Here's a report about the errors and where you can find them:

avalanche/training/supervised/supervised_contrastive_replay.py:36:31: E251 unexpected spaces around keyword / parameter equals
avalanche/training/supervised/supervised_contrastive_replay.py:36:33: E251 unexpected spaces around keyword / parameter equals
avalanche/training/supervised/supervised_contrastive_replay.py:181:1: W391 blank line at end of file
2       E251 unexpected spaces around keyword / parameter equals
1       W391 blank line at end of file

# Conflicts:
#	avalanche/models/dynamic_modules.py

@AntonioCarta AntonioCarta mentioned this pull request May 5, 2023
@AndreaCossu AndreaCossu marked this pull request as ready for review May 8, 2023 08:12
@AntonioCarta
Collaborator

@AndreaCossu you marked my first comment as fixed, but I still see the old change.

@AndreaCossu
Collaborator Author

I didn't notice there were so many occurrences of `experience = None`. I don't know how that happened, but I reverted it.

@AndreaCossu
Collaborator Author

SCR now works as expected.

@AndreaCossu
Collaborator Author

@AntonioCarta I want to change the NCMClassifier because it is not very easy to use right now, but I will create a new PR for that. This one can be merged if it looks OK to you.

@AntonioCarta
Collaborator

> @AntonioCarta I want to change the NCM Classifier because it is not very easy to use right now, but I will create a new PR for that, this can be merged if it looks ok for you

Of course.

@AntonioCarta AntonioCarta merged commit 4bcf8cc into ContinualAI:master May 31, 2023
18 checks passed