Add mrr calculation #886
Conversation
This pull request is now in conflict. @zkid18, could you fix it? 🙏
catalyst/utils/metrics/mrr.py
Outdated
import torch


def mrr(outputs: torch.Tensor, targets: torch.Tensor):
could you please add docs here?
https://github.com/catalyst-team/catalyst/blob/master/docs/api/utils.rst
btw, what do you think about MRRCallback? like https://github.com/catalyst-team/catalyst/blob/master/catalyst/dl/callbacks/metrics/iou.py#L12 for example
Thanks, I'll have a look at callbacks.
I'm still considering the proper design for MRR@K.
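For context, a vectorized MRR@k could be sketched roughly as follows. This is a stand-alone illustration under my own naming, not the PR's final implementation:

```python
import torch


def mrr_sketch(outputs: torch.Tensor, targets: torch.Tensor, k: int = 100) -> torch.Tensor:
    """Reciprocal rank of the first relevant item per row, cut off at k."""
    k = min(outputs.size(1), k)
    # sort predictions descending and reorder targets the same way
    _, order = outputs.sort(descending=True, dim=-1)
    targets_sorted = torch.gather(targets, dim=-1, index=order)[:, :k]
    # 1-based positions after sorting
    positions = torch.arange(1, k + 1, dtype=torch.float32).unsqueeze(0)
    # binary targets / positions == 1/rank at relevant positions, 0 elsewhere;
    # the max over a row is 1/rank of the first relevant item (0 if none in top-k)
    return (targets_sorted / positions).max(dim=-1).values
```

If the top-scored item is relevant, the row's score is 1.0; if the first relevant item is at position 2, it is 0.5, and so on.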
catalyst/utils/metrics/mrr.py
Outdated
mrr (float): the mrr score
"""
outputs = outputs.clone()
targets = targets.clone()
why do you need clone?
I tried to follow the 'shared clones' idiom, which is a common pattern in torch, but it seems it's not really necessary here.
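For illustration, this is the difference `clone` makes: plain assignment shares storage, while `clone` copies it, so only the latter protects the caller's tensor from in-place edits. (A metric that never mutates its inputs needs neither.)

```python
import torch

# plain assignment: both names point at the same storage
t = torch.tensor([1.0, 2.0, 3.0])
alias = t
alias[0] = 99.0
# t is now tensor([99., 2., 3.]) as well

# clone: independent storage, the original stays untouched
t2 = torch.tensor([1.0, 2.0, 3.0])
copy = t2.clone()
copy[0] = 99.0
# t2 is still tensor([1., 2., 3.])
```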
btw, looks like we need to fix codestyle
catalyst/utils/metrics/mrr.py
Outdated
import torch


def mrr(outputs: torch.Tensor,
could you please add this metric to docs?
https://github.com/catalyst-team/catalyst/blob/master/docs/api/utils.rst#metrics
from catalyst.utils import metrics


class MRRCallback(MetricCallback):
could you please add this callback to the docs?
https://github.com/catalyst-team/catalyst/blob/master/docs/api/dl.rst#metrics
As for tests, I think it's better to return to that question once we implement at least one Learning-to-Rank model.
This pull request is now in conflict. @zkid18, could you fix it? 🙏
catalyst/dl/callbacks/metrics/mrr.py
Outdated
output_key (str): output key to use for mrr calculation;
    specifies our ``y_pred``
prefix (str): name to display for mrr when printing
activation (str): A torch.nn activation applied to the outputs.
why do we need activation? the mrr metric doesn't have activation support
@zkid18 could you please add a sanity check test for MRRCallback? like "init and compute on one sample"
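Such a one-sample sanity check might look like the sketch below. It exercises the functional metric directly through a local stand-in (the real test would import catalyst's `mrr` and would need a runner to drive `MRRCallback`):

```python
import torch


def mrr_one_sample(outputs: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """Stand-in metric: reciprocal rank of the first relevant item per row."""
    _, order = outputs.sort(descending=True, dim=-1)
    targets_sorted = torch.gather(targets, dim=-1, index=order)
    positions = torch.arange(1, outputs.size(1) + 1, dtype=torch.float32)
    return (targets_sorted / positions).max(dim=-1).values


def test_mrr_one_sample():
    outputs = torch.tensor([[0.1, 0.9, 0.3]])
    targets = torch.tensor([[0.0, 1.0, 0.0]])
    # the highest-scored item is the relevant one -> score is exactly 1.0
    assert torch.isclose(mrr_one_sample(outputs, targets), torch.tensor([1.0])).all()


test_mrr_one_sample()
```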
catalyst/utils/metrics/__init__.py
Outdated
from catalyst.utils.metrics.auc import auc
from catalyst.utils.metrics.cmc_score import cmc_score_count, cmc_score
from catalyst.utils.metrics.dice import dice, calculate_dice
from catalyst.utils.metrics.f1_score import f1_score
from catalyst.utils.metrics.mrr import mrr
could you please follow the alphabetical order of imports?
num_epochs=2,
verbose=True,
callbacks=[MRRCallback, SchedulerCallback(reduced_metric="loss")]
)
the newline at the end of the py file is missing
catalyst/utils/metrics/mrr.py
Outdated
import torch


def mrr(outputs: torch.Tensor, targets: torch.Tensor, k=100) -> torch.Tensor:
could you please define the types for all args?
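A fully annotated version of that signature could look like this (the body is a stub for illustration only):

```python
import torch


def mrr(
    outputs: torch.Tensor,
    targets: torch.Tensor,
    k: int = 100,
) -> torch.Tensor:
    """Stub showing type annotations on every argument, including k."""
    ...
```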
catalyst/utils/metrics/mrr.py
Outdated
The mrr score for each user.

"""
k = min(outputs.size()[1], k)
Suggested change:
-k = min(outputs.size()[1], k)
+k = min(outputs.size(1), k)
catalyst/utils/metrics/mrr.py
Outdated
def mrr(outputs: torch.Tensor, targets: torch.Tensor, k=100) -> torch.Tensor:
    """
    Calculate the MRR score given model outputs and targets
I'd appreciate it if you could extend the documentation with an explanation of the metric, or add a link where users can read more about this score.
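For reference, the docstring could state the definition: MRR averages the reciprocal rank of the first relevant item over all queries. A tiny worked example of the formula:

```python
# rank (1-based) of the first relevant result for each of three queries
ranks = [1, 3, 2]

# MRR = (1/|Q|) * sum over queries of 1/rank_q
mrr_value = sum(1.0 / r for r in ranks) / len(ranks)
# (1 + 1/3 + 1/2) / 3 ≈ 0.611
```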
catalyst/utils/metrics/mrr.py
Outdated
"""
k = min(outputs.size()[1], k)
_, indices_for_sort = outputs.sort(descending=True, dim=-1)
could we use torch.topk here?
I guess the comment is more relevant to the next part:
true_sorted_by_pred_shrink = true_sorted_by_preds[:, :k]
Anyway, I might be missing the advantage of torch.topk over the proposed approach. We need to sort the targets by the corresponding indices of the outputs. Is there a way in pytorch to sort in that fashion?
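If I understand the suggestion correctly, `torch.topk` can replace the full sort here: it returns the k largest values and their indices directly (sorted by value by default), and the same `gather` then reorders the targets. A small sketch comparing the two, using the PR's variable names:

```python
import torch

outputs = torch.tensor([[0.2, 0.9, 0.5]])
targets = torch.tensor([[0.0, 0.0, 1.0]])
k = 2

# current approach: full sort, gather targets, then slice the first k columns
_, indices_for_sort = outputs.sort(descending=True, dim=-1)
true_sorted_by_preds = torch.gather(targets, dim=-1, index=indices_for_sort)
sorted_then_sliced = true_sorted_by_preds[:, :k]

# topk approach: get only the k highest-scoring indices, then the same gather
_, topk_indices = outputs.topk(k, dim=-1)
gathered_via_topk = torch.gather(targets, dim=-1, index=topk_indices)

# both produce the targets reordered by the k highest predictions
```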
CHANGELOG.md
Outdated
@@ -39,7 +39,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
## [20.08] - 2020-08-09

### Added
- Full metric learning pipeline including training and validation stages ([#876](https://github.com/catalyst-team/catalyst/pull/876))
- MRR metrics calculation ([#886](https://github.com/catalyst-team/catalyst/pull/886))
looks like we need to move it up :)
@zkid18 could you please update your branch and resolve the above questions? thank you!
@Scitator I have fixed the typos, and moved
@Scitator |
docs/api/utils.rst
Outdated
MRR
~~~~~~~~~~~~~~~~~~~~~~
.. automodule:: catalyst.utils.metrics.mrr
looks like there is an error with the docs :)
Pull request has been modified.
Before submitting
- catalyst-make-codestyle && catalyst-check-codestyle (pip install -U catalyst-codestyle)
- make check-docs

Description
Related Issue
Type of Change
PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in Github issues there's a high chance it will not be merged.