Fix matcher recall threshold #2652
Conversation
Job PR-2652-c79bfd5 is done.
```diff
@@ -228,6 +228,7 @@ def compute_hit_rate(features_a, features_b, logit_scale, top_ks=[1, 5, 10]):
         for k in top_ks:
             hit_rate += (preds < k).float().mean()

+    hit_rate /= len(top_ks) * len(logits)
```
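For context, a minimal sketch of what the fixed function plausibly looks like. The signature, the `logits` list of the two direction-wise logit matrices, and the final division are taken from the diff; the rank computation for `preds` is not shown in the diff and is an assumption here, so treat this as an illustration of the normalization rather than the exact upstream implementation:

```python
import torch

def compute_hit_rate(features_a, features_b, logit_scale=1.0, top_ks=[1, 5, 10]):
    # Similarity logits in both retrieval directions (a -> b and b -> a).
    logits_per_a = logit_scale * features_a @ features_b.t()
    logits = [logits_per_a, logits_per_a.t()]
    hit_rate = 0.0
    for l in logits:
        # Assumed rank computation: position of the ground-truth match
        # (the diagonal entry) within each row's descending sort order.
        ranking = l.argsort(dim=-1, descending=True)
        targets = torch.arange(len(l)).unsqueeze(1)
        preds = (ranking == targets).nonzero(as_tuple=True)[1]
        for k in top_ks:
            hit_rate += (preds < k).float().mean()
    # The fix: divide by the number of accumulated terms
    # (len(top_ks) k-values x len(logits) directions), so the result is in [0, 1].
    hit_rate /= len(top_ks) * len(logits)
    return hit_rate
```

Without the final division the returned value is a sum of up to `2 * len(top_ks)` per-k hit rates, which can exceed 1 and therefore never satisfy a stopping threshold expressed as a fraction.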
Consider adding a test case via the hit rate metric implemented in torchmetrics: https://torchmetrics.readthedocs.io/en/stable/retrieval/hit_rate.html. I've implemented the test case as follows and have verified the new implementation against it. Feel free to add it in test_utils.py.
```python
import torch
import numpy.testing as npt
from torchmetrics import RetrievalHitRate


def ref_symmetric_hit_rate(features_a, features_b, logit_scale, top_ks=[1, 5, 10]):
    assert len(features_a) == len(features_b)
    hit_rate = 0
    logits_per_a = (logit_scale * features_a @ features_b.t()).detach().cpu()
    logits_per_b = logits_per_a.t().detach().cpu()
    num_elements = len(features_a)
    for logits in [logits_per_a, logits_per_b]:
        preds = logits.reshape(-1)
        indexes = torch.broadcast_to(
            torch.arange(num_elements).reshape(-1, 1),
            (num_elements, num_elements),
        ).reshape(-1)
        target = torch.eye(num_elements, dtype=bool).reshape(-1)
        for k in top_ks:
            hr_k = RetrievalHitRate(k=k)
            hit_rate += hr_k(preds, target, indexes=indexes)
    return hit_rate / (2 * len(top_ks))


def test_symmetric_hit_rate():
    generator = torch.Generator()
    generator.manual_seed(0)
    for repeat in range(3):
        for top_ks in [[1, 5, 10], [20], [3, 7, 9]]:
            features_a = torch.randn(50, 2, generator=generator)
            features_b = torch.randn(50, 2, generator=generator)
            hit_rate_impl = compute_hit_rate(features_a, features_b, logit_scale=1.0, top_ks=top_ks)
            hit_rate_ref = ref_symmetric_hit_rate(features_a, features_b, logit_scale=1.0, top_ks=top_ks)
            npt.assert_equal(hit_rate_impl.item(), hit_rate_ref.item())
```
Added. Thanks.
Job PR-2652-582d50a is done.
Issue #, if available:
Description of changes:
Normalize the accumulated recall to the range [0, 1] so that the recall-based stopping threshold can be satisfied.
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.