LLR-Evaluation (llreval)

This is an authorized fork of bsxfan/PYLLR.

Python toolkit for likelihood-ratio calibration of binary classifiers.

The emphasis is on binary classifiers (for example, speaker verifiers) whose output takes the form of a well-calibrated log-likelihood-ratio (LLR). The tools include:

  • PAV and ROCCH score analysis
  • DET curves and EER
  • DCF and minDCF
  • Bayes error-rate plots
  • Cllr
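
As a concrete reference for the last item, the sketch below computes Cllr directly from its textbook definition using only numpy. It is a self-contained illustration of the metric itself, not a call into the llreval API.

import numpy as np

def cllr(tar_llrs, non_llrs):
    """Cost of the log-likelihood-ratio (Cllr), in bits per trial.

    tar_llrs: LLR scores for target trials
    non_llrs: LLR scores for non-target trials

    A perfect system scores 0; a system that always outputs
    LLR = 0 (no information) scores exactly 1.
    """
    tar = np.asarray(tar_llrs, dtype=float)
    non = np.asarray(non_llrs, dtype=float)
    # log(1 + exp(x)) evaluated stably as np.logaddexp(0, x)
    c_tar = np.mean(np.logaddexp(0.0, -tar))  # penalizes low target LLRs
    c_non = np.mean(np.logaddexp(0.0, non))   # penalizes high non-target LLRs
    return (c_tar + c_non) / (2.0 * np.log(2.0))

print(cllr([2.0, 3.5, -0.5], [-4.0, -1.0, 0.2]))  # small worked example

Note that, unlike EER or minDCF, Cllr is sensitive to calibration: shifting or scaling the scores changes it even when their ranking stays the same.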

Most of the algorithms in LLR-Evaluation are Python translations from the older MATLAB BOSARIS Toolkit. Descriptions of the algorithms are available in:

Niko Brümmer and Edward de Villiers, The BOSARIS Toolkit: Theory, Algorithms and Code for Surviving the New DCF, 2013.

Install

Install using pip:

pip install llreval

Usage

import llreval
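
The toolkit operates on two arrays of scores: one for target trials and one for non-target trials. The sketch below only prepares such input, using synthetic scores that are well-calibrated LLRs by construction; it assumes nothing about llreval's module layout, so see the package and its examples directory for the actual evaluation entry points.

import numpy as np

# For two Gaussians with equal variance sigma^2 and means +/- sigma^2/2,
# the score x is its own log-likelihood-ratio:
#   LLR(x) = (mu1 - mu2) * x / sigma^2 + (mu2^2 - mu1^2) / (2 * sigma^2) = x,
# since the offset term vanishes (mu1^2 = mu2^2). Scores drawn this way
# are therefore perfectly calibrated LLRs.
sigma2 = 4.0
rng = np.random.default_rng(42)
tar = rng.normal(+sigma2 / 2, np.sqrt(sigma2), size=5_000)   # target trials
non = rng.normal(-sigma2 / 2, np.sqrt(sigma2), size=50_000)  # non-target trials

# (tar, non) is the standard BOSARIS-style input for PAV/ROCCH analysis,
# DET/EER, DCF/minDCF, Bayes error-rate plots and Cllr.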

Out of a hundred trials, how many errors does your speaker verifier make?

The examples directory contains code that reproduces the plots in our paper:

Niko Brümmer, Luciana Ferrer and Albert Swart, "Out of a hundred trials, how many errors does your speaker verifier make?", 2021, https://arxiv.org/abs/2104.00732.

For instructions, see the readme in the examples directory.
