
Fixed lwf bug for alpha and prev classes per task dict #1155

Merged: 5 commits into ContinualAI:master on Oct 19, 2022

Conversation

AndreaCossu
Collaborator

LwF now correctly uses one alpha per experience when alpha is given as a list.
The classes seen so far for each task were tracked in a dictionary whose keys were sometimes strings and sometimes ints, so lookups could silently miss existing entries. The keys are now handled consistently, and per-task class tracking works as intended.
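The two fixes can be sketched roughly as follows. This is a minimal illustration, not the actual Avalanche implementation: the class and method names here (`LwFSketch`, `update_seen_classes`, `_current_alpha`) are hypothetical, and only the two behaviors described above are modeled.

```python
# Hypothetical sketch of the two fixed behaviors (not Avalanche's real code):
# 1) pick one alpha per experience when alpha is a list,
# 2) key the classes-seen-so-far dict consistently by int task id.

from collections import defaultdict


class LwFSketch:
    def __init__(self, alpha=1.0):
        self.alpha = alpha  # scalar, or a list with one value per experience
        self.prev_classes = defaultdict(set)  # int task id -> set of class ids

    def _current_alpha(self, experience_id):
        # If alpha is a list, select the entry for this experience.
        if isinstance(self.alpha, (list, tuple)):
            return self.alpha[experience_id]
        return self.alpha

    def update_seen_classes(self, task_id, classes):
        # Normalize the key to int so str and int task ids cannot
        # land in different buckets.
        self.prev_classes[int(task_id)].update(classes)


lwf = LwFSketch(alpha=[0.5, 1.0, 2.0])
lwf.update_seen_classes("0", [0, 1])  # string task id
lwf.update_seen_classes(0, [2])       # int task id, same bucket after the fix
assert lwf.prev_classes[0] == {0, 1, 2}
assert lwf._current_alpha(1) == 1.0
```

Without the `int(task_id)` normalization, the two calls above would create separate `"0"` and `0` entries, which is exactly the inconsistency this PR removes.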

@ContinualAI-bot
Collaborator

Oh no! It seems there are some PEP8 errors! 😕
Don't worry, you can fix them! 💪
Here's a report about the errors and where you can find them:

avalanche/training/regularization.py:9:1: E302 expected 2 blank lines, found 1
1       E302 expected 2 blank lines, found 1

@coveralls

Pull Request Test Coverage Report for Build 3282140647

  • 14 of 14 (100.0%) changed or added relevant lines in 3 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage increased (+0.004%) to 73.431%

Totals Coverage Status

  • Change from base Build 3242453389: +0.004%
  • Covered Lines: 13114
  • Relevant Lines: 17859

💛 - Coveralls

@AndreaCossu AndreaCossu merged commit 21e1672 into ContinualAI:master Oct 19, 2022