
task_labels experience attribute type fix: now it is a list, not a set #1646

Merged
merged 1 commit on May 23, 2024

Conversation

vlomonaco
Member

Hello, I found a bug in the task_incremental_benchmark generator. Even with the MNIST example in the notebook "03_benchmarks.ipynb", exp.task_labels is a set, not a list as the train method expects.

I think the problem is here, where we convert the set into a list only for the task_label attribute:

```python
def with_task_labels(obj):
    """Add `TaskAware` attributes.

    The dataset must already have task labels.
    `obj` must be a scenario, stream, or experience.
    """

    def _add_task_labels(exp):
        tls = exp.dataset.targets_task_labels.uniques
        if len(tls) == 1:
            # tls is a set. we need to convert to list to call __getitem__
            exp.task_label = list(tls)[0]
        exp.task_labels = tls
        return exp

    return _decorate_generic(obj, _add_task_labels)
```

The set is never converted into a list for the task_labels attribute, and in this method we use indexing, which a set does not support:

```python
def phase_and_task(strategy: "SupervisedTemplate") -> Tuple[str, int]:
    """Returns the current phase name and the associated task label.

    The current task label depends on the phase. During the training
    phase, the task label is the one defined in the "train_task_label"
    field. On the contrary, during the eval phase the task label is the
    one defined in the "eval_task_label" field.

    :param strategy: The strategy instance to get the task label from.
    :return: The current phase name as either "Train" or "Task" and the
        associated task label.
    """
    task_labels = getattr(strategy.experience, "task_labels", None)
    if task_labels is not None:
        task = task_labels
        if len(task) > 1:
            task = None  # task labels per patterns
        else:
            task = task[0]
    else:
        task = None
    if strategy.is_eval:
        return EVAL, task
    else:
        return TRAIN, task
```
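As a standalone illustration (not part of the PR itself), here is a minimal repro of the failure mode: indexing a set raises a TypeError, which is exactly what breaks when `task_labels` is left as a set and the code above runs `task[0]`.

```python
# Illustrative repro: a set (like the one produced by `.uniques`)
# does not support indexing.
task_labels = {0}  # a set

try:
    task_labels[0]
except TypeError as err:
    print(f"set indexing fails: {err}")

# Converting to a list first restores indexing:
print(list(task_labels)[0])
```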

Below you can find the small fix, plus an assert I added to the TaskAware tests.
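The merged diff itself is not reproduced on this page. Based on the description above, a minimal sketch of the one-line change (using hypothetical `SimpleNamespace` stand-ins in place of Avalanche's real experience/dataset objects) would look like this:

```python
from types import SimpleNamespace

def _add_task_labels(exp):
    tls = exp.dataset.targets_task_labels.uniques
    if len(tls) == 1:
        exp.task_label = list(tls)[0]
    # fix: convert the set to a list so task_labels supports indexing
    exp.task_labels = list(tls)
    return exp

# Hypothetical stand-ins to exercise the helper:
labels = SimpleNamespace(uniques={3})
exp = SimpleNamespace(dataset=SimpleNamespace(targets_task_labels=labels))
exp = _add_task_labels(exp)

# The kind of type check the PR adds to the TaskAware tests:
assert isinstance(exp.task_labels, list)
print(exp.task_label, exp.task_labels[0])
```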

@vlomonaco added the bug ("Something isn't working") label on May 22, 2024

@coveralls

Pull Request Test Coverage Report for Build 9193158769

Details

  • 2 of 3 (66.67%) changed or added relevant lines in 2 files are covered.
  • 1 unchanged line in 1 file lost coverage.
  • Overall coverage remained the same at 51.803%

Changes Missing Coverage:
  • tests/benchmarks/scenarios/test_task_aware.py — 0 of 1 changed/added lines covered (0.0%)

Files with Coverage Reduction:
  • tests/benchmarks/scenarios/test_task_aware.py — 1 new missed line (34.21%)

Totals:
  • Change from base Build 9014120340: 0.0%
  • Covered Lines: 15082
  • Relevant Lines: 29114

💛 - Coveralls

@AntonioCarta AntonioCarta merged commit 8f0e61f into ContinualAI:master May 23, 2024
10 of 12 checks passed