A Python library for quickly calculating & displaying machine learning model performance metrics with confidence intervals.
Requirements:

- Python >= 3.6
- numba
- numpy
- scikit-learn
- plotly

Install with pip:

pip install fronni
Functions from the classification module:
classification_report: generates confidence intervals for precision, recall, & F1 metrics for a binary or multi-class classification model, given arrays of predicted & label values. A usage sketch follows the parameter table below.
Parameter | Type | Default |
---|---|---|
label | Numpy array or Pandas series | None |
predicted | Numpy array or Pandas series | None |
n | integer, number of bootstrap iterations | 1,000 |
confidence_level | integer value between 1 & 100 | 95 |
as_dict | Boolean, return nested dictionary if True otherwise Pandas dataframe | False |
sort_by_sample_size | Boolean, return the Pandas dataframe, sorted in descending order of class sample size | False |
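
A minimal sketch of how this could be called, using a scikit-learn toy dataset. The keyword names label, predicted, n, and confidence_level come from the table above; the import path fronni.classification.classification_report is assumed from the module name mentioned in this README.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# assumption: classification_report is exposed from fronni.classification
from fronni.classification import classification_report

# toy binary classification problem
X, y = make_classification(n_samples=2000, n_classes=2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
predicted = model.predict(X_test)

# 1,000 bootstrap iterations at a 95% confidence level,
# returned as a Pandas dataframe (as_dict defaults to False)
report = classification_report(label=y_test, predicted=predicted, n=1000, confidence_level=95)
print(report)
```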
Plots confidence intervals for precision, recall, & F1 metrics for a binary or multi-class classification model, given the report produced by classification_report. A usage sketch follows the parameter table below.
Parameter | Type | Default |
---|---|---|
report | output from classification_report | None |
save_to_filename | string, path of filename image to save like "image.png" | None |
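
Continuing the sketch above (where report holds the output of classification_report), a plotting call might look like the following. The report and save_to_filename parameter names come from the table above; the function name plot_classification_report is an assumption based on this README's description.

```python
# assumption: the plotting helper is exposed as plot_classification_report
# in fronni.classification; "report" is the dataframe from the previous sketch
from fronni.classification import plot_classification_report

# render the confidence intervals with Plotly and save the figure to disk
plot_classification_report(report, save_to_filename="classification_report.png")
```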
From the regression module:
Generates confidence intervals for RMSE, MAE, and R^2 metrics for a regression model, given arrays of predicted & label values. A usage sketch follows the parameter table below.
Parameter | Type | Default |
---|---|---|
label | Numpy array or Pandas series | None |
predicted | Numpy array or Pandas series | None |
n | integer, number of bootstrap iterations | 1,000 |
as_dict | Boolean, return nested dictionary if True otherwise Pandas dataframe | False |
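
A minimal sketch on a scikit-learn toy regression problem. The parameter names label, predicted, and n come from the table above; the import path fronni.regression.regression_report is an assumed name based on the module mentioned in this README.

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# assumption: the report function is exposed as regression_report in fronni.regression
from fronni.regression import regression_report

# toy regression problem
X, y = make_regression(n_samples=2000, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

predicted = LinearRegression().fit(X_train, y_train).predict(X_test)

# 1,000 bootstrap iterations; results returned as a Pandas dataframe by default
report = regression_report(label=y_test, predicted=predicted, n=1000)
print(report)
```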
See the CONTRIBUTING file for how to help out.
fronni is Apache 2.0 licensed, as found in the LICENSE file.