PyTorchLTR provides several built-in evaluation metrics, including ARP (Joachims et al., 2017) and DCG (Järvelin and Kekäläinen, 2002). Furthermore, the library supports generating output compatible with pytrec_eval (Van Gysel and de Rijke, 2018).
>>> import torch
>>> from pytorchltr.evaluation import ndcg
>>> scores = torch.tensor([[1.0, 0.0, 1.5], [1.5, 0.2, 0.5]])
>>> relevance = torch.tensor([[0, 1, 0], [0, 1, 1]])
>>> n = torch.tensor([3, 3])
>>> ndcg(scores, relevance, n, k=10)
tensor([0.5000, 0.6934])
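To make the doctest values concrete, here is a pure-Python sketch of the nDCG computation. The exact gain and discount functions used by the library are an assumption here; the common 2^rel - 1 gain with a log2(rank + 1) discount reproduces the values above for binary relevance (where 2^rel - 1 equals rel):

```python
import math

def dcg_at_k(ranked_relevance, k):
    """DCG over the top-k items of an already-ranked relevance list,
    using the common 2^rel - 1 gain and log2(rank + 1) discount."""
    return sum((2 ** rel - 1) / math.log2(i + 2)
               for i, rel in enumerate(ranked_relevance[:k]))

def ndcg_at_k(scores, relevance, k):
    """Rank documents by descending score, then normalize DCG by the
    ideal DCG (relevance sorted in descending order)."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    ranked = [relevance[i] for i in order]
    ideal = sorted(relevance, reverse=True)
    return dcg_at_k(ranked, k) / dcg_at_k(ideal, k)

print(round(ndcg_at_k([1.0, 0.0, 1.5], [0, 1, 0], 10), 4))  # 0.5
print(round(ndcg_at_k([1.5, 0.2, 0.5], [0, 1, 1], 10), 4))  # 0.6934
```

The two printed values match the two rows of the `tensor([0.5000, 0.6934])` result above, one per query in the batch.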
pytorchltr.evaluation.arp
pytorchltr.evaluation.dcg
pytorchltr.evaluation.ndcg
pytorchltr.evaluation.generate_pytrec_eval
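The ARP metric (Average Relevant Position) can be sketched in the same spirit. The definition below, a relevance-weighted average of 1-based ranks, follows Joachims et al. (2017); it is an illustration of the metric's definition, not the library's exact implementation:

```python
def arp(scores, relevance):
    """Average Relevant Position: the relevance-weighted mean of the
    1-based ranks after sorting documents by descending score.
    This is a definitional sketch, not pytorchltr's implementation."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    total_relevance = sum(relevance[i] for i in order)
    weighted_ranks = sum((rank + 1) * relevance[i]
                         for rank, i in enumerate(order))
    return weighted_ranks / total_relevance

# Same two queries as the ndcg doctest above:
print(arp([1.0, 0.0, 1.5], [0, 1, 0]))  # 3.0 (only relevant doc ranks 3rd)
print(arp([1.5, 0.2, 0.5], [0, 1, 1]))  # 2.5 (relevant docs rank 2nd and 3rd)
```

Lower ARP is better: a perfect ranker places all relevant documents at the top, minimizing their average rank.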
References

- Thorsten Joachims, Adith Swaminathan, and Tobias Schnabel. 2017. Unbiased Learning-to-Rank with Biased Feedback. In Proceedings of WSDM 2017.
- Kalervo Järvelin and Jaana Kekäläinen. 2002. Cumulated gain-based evaluation of IR techniques. ACM Transactions on Information Systems 20(4).
- Christophe Van Gysel and Maarten de Rijke. 2018. Pytrec_eval: An Extremely Fast Python Interface to trec_eval. In Proceedings of SIGIR 2018.