
Proposal: New generic scenarios & ExML scenarios #977

Merged: 15 commits into ContinualAI:master, Apr 14, 2022

Conversation

AntonioCarta (Collaborator)

As I told you some time ago, I made some changes to the scenarios to make them more general.

  • The Experience is designed to be the only object received by CL strategies. Streams and benchmarks are accessible from the experience, but they are hidden from the strategy.
  • Generic scenarios no longer have mandatory datasets and task labels. This makes scenarios like RL easier to support.
  • The old generic scenarios have become "GenericClassificationScenario". This is mostly a renaming of internal stuff, so it shouldn't break anything.
  • The new API is based on generators. By default, it assumes that experiences are generated lazily. This allows for an easier (and possibly more efficient) implementation of long streams. Lists are still supported.
  • Added a method to hide information. Experiences now have three modes: "train", "eval", and "logging". This allows adding any attribute to the experience without worrying that users may misuse it (e.g., task labels available only at train/test/logging time). At the same time, it allows full access to any info during logging and metric computation.
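To make the mode-based attribute hiding concrete, here is a minimal sketch. All names here (`Experience`, `set_mode`, `_visible_in`) are hypothetical illustrations, not the actual Avalanche API: an attribute is readable only in the modes it is registered for, and fully visible during logging.

```python
# Hypothetical sketch of mode-based attribute masking (not the Avalanche API).

class MaskedAttributeError(AttributeError):
    """Raised when an attribute is read in a mode where it is hidden."""


class Experience:
    """Experience whose attributes can be hidden per mode.

    `_visible_in` maps attribute names to the set of modes in which they
    may be read; attributes not listed there are always visible.
    """

    def __init__(self, dataset, task_label):
        # Bypass __getattribute__ machinery while bootstrapping.
        object.__setattr__(self, "_mode", "logging")
        object.__setattr__(self, "_visible_in", {"task_label": {"logging"}})
        self.dataset = dataset
        self.task_label = task_label

    def set_mode(self, mode):
        assert mode in ("train", "eval", "logging")
        object.__setattr__(self, "_mode", mode)

    def __getattribute__(self, name):
        visible_in = object.__getattribute__(self, "_visible_in")
        if name in visible_in:
            mode = object.__getattribute__(self, "_mode")
            if mode not in visible_in[name]:
                raise MaskedAttributeError(
                    f"{name!r} is hidden in {mode!r} mode")
        return object.__getattribute__(self, name)


exp = Experience(dataset=[1, 2, 3], task_label=0)
exp.set_mode("logging")
print(exp.task_label)   # visible during logging → prints 0
exp.set_mode("train")
try:
    exp.task_label      # hidden during training
except MaskedAttributeError as e:
    print(e)
```

Intercepting `__getattribute__` keeps the masking transparent to callers: strategies access attributes normally and simply get an error outside the allowed modes, while loggers and metrics see everything.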

To experiment with the new API, I implemented two scenarios:

  • Online CL, with the attributes that were proposed by @HamedHemati in his last PR. Notice that most attributes are hidden during training and the experiences are lazily generated. Basically, this is a lazy version of the current data-incremental generator. I profiled it, and it is as fast as a naive implementation without Avalanche.
  • Ex-Model Continual Learning. This comes from our last paper (me, @AndreaCossu, @vlomonaco, Davide Bacciu).
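The lazy, generator-based stream described above can be sketched as follows. The function name and signature are hypothetical (not the actual Avalanche generator), assuming only a sliceable dataset; experiences are cut out on demand, so even very long streams are never materialized upfront.

```python
# Hypothetical sketch of a lazily generated online stream (not the
# actual Avalanche data-incremental generator).

def lazy_online_stream(dataset, experience_size):
    """Yield consecutive chunks of `dataset` as lightweight experiences.

    Each chunk is produced only when the strategy asks for it, so the
    stream has O(1) memory overhead regardless of its length.
    """
    for start in range(0, len(dataset), experience_size):
        yield dataset[start:start + experience_size]


stream = lazy_online_stream(list(range(10)), experience_size=4)
print(next(stream))  # → [0, 1, 2, 3]; the remaining chunks are not built yet
```

Because the stream is a plain generator, eager list-based benchmarks remain a special case: wrapping the generator in `list(...)` recovers the old behavior.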

I'm leaving this open for comments, let me know what you think.

vlomonaco (Member) commented Apr 12, 2022

Wow, this is massive! Thanks @AntonioCarta, I think these are quite important changes. It seems this doesn't impact end users much, indeed. Still, the concept of "experience" changes a bit, from a static set of examples that can be processed together to an agnostic data source. We can discuss this in the next meeting and update the docs / tutorials accordingly.

@AntonioCarta AntonioCarta merged commit b2846d1 into ContinualAI:master Apr 14, 2022
@AntonioCarta AntonioCarta deleted the exml_experiences branch April 14, 2022 08:40
@coveralls

Pull Request Test Coverage Report for Build 2166083245

  • 516 of 767 (67.28%) changed or added relevant lines in 77 files are covered.
  • 27 unchanged lines in 12 files lost coverage.
  • Overall coverage remained the same at 70.868%

Changes missing coverage:

| File | Covered Lines | Changed/Added Lines | % |
| --- | --- | --- | --- |
| avalanche/benchmarks/scenarios/generic_definitions.py | 8 | 9 | 88.89% |
| avalanche/evaluation/metric_utils.py | 2 | 3 | 66.67% |
| avalanche/training/plugins/agem.py | 3 | 4 | 75.0% |
| avalanche/training/supervised/strategy_wrappers.py | 1 | 2 | 50.0% |
| avalanche/training/utils.py | 2 | 3 | 66.67% |
| tests/benchmarks/scenarios/test_scenarios_typechecks.py | 39 | 40 | 97.5% |
| avalanche/benchmarks/datasets/penn_fudan/penn_fudan_dataset.py | 5 | 7 | 71.43% |
| avalanche/benchmarks/scenarios/classification_scenario.py | 24 | 26 | 92.31% |
| avalanche/benchmarks/scenarios/generic_scenario.py | 84 | 86 | 97.67% |
| avalanche/benchmarks/utils/data_loader.py | 4 | 6 | 66.67% |
Files with coverage reduction:

| File | New Missed Lines | % |
| --- | --- | --- |
| avalanche/benchmarks/classic/clear.py | 1 | 21.57% |
| avalanche/benchmarks/datasets/penn_fudan/penn_fudan_dataset.py | 1 | 20.48% |
| avalanche/evaluation/metrics/detection_evaluators/lvis_evaluator.py | 1 | 16.67% |
| avalanche/evaluation/metrics/detection.py | 1 | 22.34% |
| avalanche/training/plugins/generative_replay.py | 1 | 27.78% |
| avalanche/training/supervised/lamaml.py | 1 | 18.02% |
| avalanche/benchmarks/utils/adaptive_transform.py | 2 | 76.84% |
| avalanche/evaluation/metrics/detection_evaluators/coco_evaluator.py | 2 | 19.14% |
| tests/training/test_supervised_regression.py | 2 | 77.85% |
| avalanche/training/templates/base.py | 3 | 93.0% |
Totals:

  • Change from base Build 2114407767: 0%
  • Covered Lines: 11954
  • Relevant Lines: 16868

💛 - Coveralls
