
Evaluation

PyTorchLTR provides several built-in evaluation metrics, including ARP (Average Relevant Position) [joachims2017unbiased] and DCG (Discounted Cumulative Gain) [kalervo2002cumulated]. Furthermore, the library can generate output compatible with pytrec_eval [gysel2018pytreceval].

Example

>>> import torch
>>> from pytorchltr.evaluation import ndcg
>>> scores = torch.tensor([[1.0, 0.0, 1.5], [1.5, 0.2, 0.5]])
>>> relevance = torch.tensor([[0, 1, 0], [0, 1, 1]])
>>> n = torch.tensor([3, 3])
>>> ndcg(scores, relevance, n, k=10)
tensor([0.5000, 0.6934])
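Here scores and relevance each hold a batch of two queries with three documents per query, n records the number of valid documents in each list (allowing padded batches), and k=10 is the rank cutoff, so the call computes NDCG@10 for each query.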

Built-in metrics

- pytorchltr.evaluation.arp

- pytorchltr.evaluation.dcg

- pytorchltr.evaluation.ndcg
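All three metrics follow the same batched calling convention as ndcg in the example above. The sketch below assumes that arp and dcg accept the same (scores, relevance, n) arguments and implement the standard definitions (mean 1-based rank of the relevant documents for ARP; 2^rel - 1 gain with a log2 rank discount for DCG); consult the API reference for the exact signatures:

>>> import torch
>>> from pytorchltr.evaluation import arp, dcg
>>> scores = torch.tensor([[1.0, 0.0, 1.5], [1.5, 0.2, 0.5]])
>>> relevance = torch.tensor([[0, 1, 0], [0, 1, 1]])
>>> n = torch.tensor([3, 3])
>>> arp(scores, relevance, n)  # average rank of relevant docs: expect [3.0, 2.5]
>>> dcg(scores, relevance, n, k=10)  # unnormalized DCG@10: expect [0.5, 1.1309]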

Integration with pytrec_eval

- pytorchltr.evaluation.generate_pytrec_eval
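The sketch below shows how the generated output could be fed to pytrec_eval. The return value of generate_pytrec_eval is an assumption here (it is taken to yield the qrels and run dictionaries that pytrec_eval expects); the pytrec_eval calls themselves are that library's standard API:

>>> import pytrec_eval
>>> from pytorchltr.evaluation import generate_pytrec_eval
>>> # Assumption: converts batched scores/relevance into pytrec_eval-style dicts,
>>> # qrels = {query_id: {doc_id: relevance}} and run = {query_id: {doc_id: score}}.
>>> qrels, run = generate_pytrec_eval(scores, relevance, n)
>>> evaluator = pytrec_eval.RelevanceEvaluator(qrels, {"map", "ndcg"})
>>> evaluator.evaluate(run)  # {query_id: {"map": ..., "ndcg": ...}} per query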

References

Full bibliography entries for the citations above can be found in references.bib.