Feature Engineering, Cross Validation, ROC, AUC, Pipeline, Model Tuning, Hyper Parameter Tuning, Grid Search
Updated Oct 31, 2021 - HTML
Evaluating the performance of a classification model is best done by combining quantitative performance metrics (such as accuracy, precision, recall, and AUC) with evaluation graphics (such as the ROC curve). The purpose of this exercise is to calculate a suite of classification model performance metrics using Python functions.
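As a minimal sketch of the kind of metric suite described above, the snippet below trains a simple classifier on a synthetic dataset and computes several common classification metrics, including the area under the ROC curve. It assumes scikit-learn is available; the dataset, model, and parameter choices are illustrative only, not part of the original exercise.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (
    accuracy_score,
    precision_score,
    recall_score,
    f1_score,
    roc_auc_score,
)

# Toy binary-classification data (illustrative only)
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = model.predict(X_test)          # hard class labels
y_prob = model.predict_proba(X_test)[:, 1]  # probability scores for ROC/AUC

metrics = {
    "accuracy": accuracy_score(y_test, y_pred),
    "precision": precision_score(y_test, y_pred),
    "recall": recall_score(y_test, y_pred),
    "f1": f1_score(y_test, y_pred),
    "roc_auc": roc_auc_score(y_test, y_prob),  # area under the ROC curve
}
for name, value in metrics.items():
    print(f"{name}: {value:.3f}")
```

Note that `roc_auc_score` is computed from the predicted probabilities, not the hard labels: the ROC curve is traced by sweeping a decision threshold over the scores, so the AUC is only meaningful on a continuous ranking of the test examples.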