Probabilities from classification models can have two problems:
Miscalibration: A predicted probability of .9 often doesn't mean a 90% chance of the label being 1 (assuming a dichotomous y). (You can calibrate the probabilities using, e.g., isotonic regression.)
Optimal cut-offs: For multi-class classifiers, we do not know which probability cut-off will maximize accuracy, the F1 score, or any other metric that trades off false positives (FP) against false negatives (FN).
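To illustrate the calibration point above, here is a minimal sketch of isotonic calibration using scikit-learn (this is just one way to calibrate; the toy data below is made up for illustration):

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Toy data: true binary labels and (possibly miscalibrated) predicted probabilities.
y_true = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1])
p_raw = np.array([0.3, 0.4, 0.45, 0.5, 0.55, 0.6, 0.7, 0.8, 0.9, 0.95])

# Fit a monotonic mapping from raw scores to calibrated probabilities.
iso = IsotonicRegression(out_of_bounds="clip")
p_cal = iso.fit_transform(p_raw, y_true)
```

The fitted mapping can then be applied to new predictions with `iso.predict`.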
Here we share a solution for #2. It runs the model's outputs through a brute-force optimizer over candidate cut-offs. We provide a simple wrapper to make it easier to use.
get_probability takes the following arguments:
true_labs (required): NumPy array or Pandas Series in which the true labels are stored.
pred_prob (required): NumPy array or Pandas Series in which the predicted probabilities are stored.
The metric to optimize, e.g., 'accuracy' (passed as in the example below).
A verbose flag (default: False) that shows/hides verbose messages.
The function returns the probability cut-off that optimizes the chosen metric, e.g., the highest F1 score or the lowest FP + FN (maximum accuracy).
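The idea behind the brute-force search can be sketched as follows for the binary case. This is an illustrative re-implementation under our own assumptions (the function name `best_cutoff` and the candidate grid are ours), not the package's actual code:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def best_cutoff(true_labs, pred_prob, metric="accuracy",
                grid=np.linspace(0.01, 0.99, 99)):
    """Evaluate the metric at each candidate cut-off and return the
    cut-off with the highest score. (Sketch, not the package's code.)"""
    score = accuracy_score if metric == "accuracy" else f1_score
    scores = [score(true_labs, (pred_prob >= c).astype(int)) for c in grid]
    return grid[int(np.argmax(scores))]

# Toy usage: any cut-off between .4 and .6 separates the classes perfectly.
y = np.array([0, 0, 1, 1])
p = np.array([0.2, 0.4, 0.6, 0.9])
cut = best_cutoff(y, p, "accuracy")
```

A grid search like this is cheap because only as many cut-offs need to be tried as there are candidate values; for exact optima one could instead evaluate at the observed predicted probabilities.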
To use the function, download the script into your working directory and import it:
```python
import optimal_cut_offs

df = ...
p = optimal_cut_offs.get_probability(df.true_labs, df.pred_prob, 'accuracy')
```
Suriyan Laohaprapanon and Gaurav Sood