Multi task mzr #7

Merged: 7 commits merged into piercus:multi-task on Dec 6, 2022

Conversation

marouaneamz

No description provided.

mmcls/evaluation/metrics/multi_task.py (review thread resolved)
tests/test_structures/test_datasample.py (outdated, resolved)
piercus (Owner) commented on Dec 5, 2022:

FAILED tests/test_evaluation/test_metrics/test_multi_task_metrics.py::MultiTaskMetric::test_evaluate - TypeError: list indices must be integers or slices, not str
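
For context, that TypeError is the generic failure raised when a list is indexed with a string key. A minimal, purely illustrative reproduction (not the actual metric code):

```python
# Illustrative only: indexing a list of per-sample results with a task-name
# string raises exactly this TypeError.
results = [{'accuracy': 0.9}, {'accuracy': 0.8}]  # a list, not a dict keyed by task
results['task1']  # TypeError: list indices must be integers or slices, not str
```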

codecov-commenter commented on Dec 5, 2022:

Codecov Report

Merging #7 (3dc8324) into multi-task (404d15d) will decrease coverage by 0.05%.
The diff coverage is 89.90%.

@@              Coverage Diff               @@
##           multi-task       #7      +/-   ##
==============================================
- Coverage       89.04%   88.98%   -0.06%     
==============================================
  Files             151      151              
  Lines           11537    11459      -78     
  Branches         1844     1839       -5     
==============================================
- Hits            10273    10197      -76     
+ Misses            997      995       -2     
  Partials          267      267              
Flag      | Coverage Δ
----------|-----------------------------
unittests | 88.98% <89.90%> (-0.06%) ⬇️

Flags with carried forward coverage won't be shown.

Impacted Files                             | Coverage Δ
-------------------------------------------|------------------------------
mmcls/structures/multi_task_data_sample.py | 75.00% <50.00%> (-15.15%) ⬇️
mmcls/models/utils/data_preprocessor.py    | 94.80% <60.00%> (-2.50%) ⬇️
mmcls/models/classifiers/image.py          | 96.15% <75.00%> (-3.85%) ⬇️
mmcls/evaluation/metrics/single_label.py   | 98.97% <88.23%> (-1.03%) ⬇️
mmcls/datasets/transforms/formatting.py    | 90.08% <91.66%> (-0.99%) ⬇️
mmcls/models/heads/multi_task_head.py      | 96.96% <94.59%> (+0.86%) ⬆️
mmcls/datasets/transforms/__init__.py      | 100.00% <100.00%> (ø)
mmcls/evaluation/metrics/multi_task.py     | 94.59% <100.00%> (+3.21%) ⬆️
mmcls/models/heads/cls_head.py             | 100.00% <100.00%> (ø)


if data_samples is None:
    task_samples = None
else:
    task_samples = [item.get(task_name) for item in data_samples]
marouaneamz (Author) commented:

If the task does not exist in a sample, item.get(task_name) returns None, and it is then impossible to call set_pred on those None entries.
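
A tiny example of the behaviour being described (hypothetical data, with dicts standing in for data samples):

```python
# .get() silently yields None for samples that do not carry the task,
# so a downstream set_pred_* call would hit a None entry.
data_samples = [{'task1': 'gt_a'}, {'task2': 'gt_b'}]
task_samples = [item.get('task1') for item in data_samples]
print(task_samples)  # ['gt_a', None]
```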

marouaneamz (Author) commented:

File "tools/train.py", line 164, in <module>
    main()
  File "tools/train.py", line 160, in main
    runner.train()
  File "/home/marouane/.local/lib/python3.8/site-packages/mmengine/runner/runner.py", line 1659, in train
    model = self.train_loop.run()  # type: ignore
  File "/home/marouane/.local/lib/python3.8/site-packages/mmengine/runner/loops.py", line 94, in run
    self.runner.val_loop.run()
  File "/home/marouane/.local/lib/python3.8/site-packages/mmengine/runner/loops.py", line 343, in run
    self.run_iter(idx, data_batch)
  File "/home/marouane/.local/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/marouane/.local/lib/python3.8/site-packages/mmengine/runner/loops.py", line 363, in run_iter
    outputs = self.runner.model.val_step(data_batch)
  File "/home/marouane/.local/lib/python3.8/site-packages/mmengine/model/base_model/base_model.py", line 133, in val_step
    return self._run_forward(data, mode='predict')  # type: ignore
  File "/home/marouane/.local/lib/python3.8/site-packages/mmengine/model/base_model/base_model.py", line 301, in _run_forward
    results = self(**data, mode=mode)
  File "/home/marouane/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/marouane/dev/mmclassification1.x/mmcls/models/classifiers/image.py", line 119, in forward
    return self.predict(inputs, data_samples)
  File "/home/marouane/dev/mmclassification1.x/mmcls/models/classifiers/image.py", line 244, in predict
    return self.head.predict(feats, data_samples, **kwargs)
  File "/home/marouane/dev/mmclassification1.x/mmcls/models/heads/multi_task_head.py", line 124, in predict
    task_samples = head.predict(feats, task_samples)
  File "/home/marouane/dev/mmclassification1.x/mmcls/models/heads/cls_head.py", line 130, in predict
    predictions = self._get_predictions(cls_score, data_samples)
  File "/home/marouane/dev/mmclassification1.x/mmcls/models/heads/cls_head.py", line 144, in _get_predictions
    data_sample.set_pred_score(score).set_pred_label(label)
AttributeError: 'NoneType' object has no attribute 'set_pred_score'
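
One possible way to avoid handing None entries to a sub-head's predict, sketched only under the assumption above and not the fix that was merged (`split_by_task` is a hypothetical helper name):

```python
def split_by_task(data_samples, task_name):
    """Collect the samples that actually define `task_name`, plus their
    positions, so predictions can be written back to the right items and
    set_pred_* is never called on None."""
    indices, task_samples = [], []
    for i, item in enumerate(data_samples):
        task_sample = item.get(task_name)
        if task_sample is not None:
            indices.append(i)
            task_samples.append(task_sample)
    return indices, task_samples

# Usage sketch: predict only for the samples that have the task, then put the
# results back at their original indices.
# indices, task_samples = split_by_task(data_samples, task_name)
# preds = head.predict(feats, task_samples)
# for i, pred in zip(indices, preds):
#     data_samples[i].set_field(pred, task_name)  # hypothetical write-back
```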

mmcls/models/heads/cls_head.py (outdated, resolved)
Co-authored-by: Colle <piercus@users.noreply.github.com>
piercus merged commit 3fe628e into piercus:multi-task on Dec 6, 2022