Commit a46c9d3

Update README.md
1 parent 625e1ed commit a46c9d3

File tree

1 file changed: +8 -3 lines changed


README.md

Lines changed: 8 additions & 3 deletions
@@ -1,4 +1,4 @@
-# All-About-Performance-Metrics
+# Evaluation Metrics in Machine Learning
 This repository contains a comprehensive collection of performance metrics for various machine learning tasks, including regression, classification, and clustering. These metrics have been implemented from scratch to provide a reliable and customizable way of evaluating the performance of your machine learning models.

 ## Table of Contents
@@ -25,6 +25,9 @@ The repository currently includes the following performance metrics:
 - Mean Absolute Error (MAE)
 - R-squared (R2) Score
 - Adjusted R-squared (R2) Score
+- Pearson correlation
+- Spearman correlation
+

 ### [Classification Metrics](classification_metrics.ipynb)

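For reference, the two correlation metrics added in this hunk could be implemented from scratch roughly as follows. This is a minimal sketch assuming 1-D NumPy arrays of equal length; the function names and the simple ordinal rank step are illustrative and may differ from the repository's actual notebook implementation.

```python
import numpy as np

def pearson_correlation(x, y):
    """Pearson correlation: covariance of x and y divided by the product of their standard deviations."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))

def spearman_correlation(x, y):
    """Spearman correlation: Pearson correlation computed on the ranks of the values."""
    def ordinal_ranks(a):
        # Simple ordinal ranks 1..n; ties are not averaged, unlike SciPy's default behaviour.
        ranks = np.empty(len(a), dtype=float)
        ranks[np.argsort(a)] = np.arange(1, len(a) + 1)
        return ranks
    return pearson_correlation(ordinal_ranks(np.asarray(x, dtype=float)),
                               ordinal_ranks(np.asarray(y, dtype=float)))

# Example usage:
print(pearson_correlation([1, 2, 3, 4], [1.2, 1.9, 3.1, 4.2]))    # ~0.99
print(spearman_correlation([1, 2, 3, 4], [1.0, 4.0, 9.0, 16.0]))  # 1.0 (monotonic relationship)
```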
@@ -35,7 +38,9 @@ The repository currently includes the following performance metrics:
 - Recall Score
 - Log Loss/ Binary Cross Entropy Loss
 - Area Under the Receiver Operating Characteristic Curve (ROC AUC)
-- Classification report
+- Classification report
+- Average precision
+- Precision-recall curve

 ### [Clustering Metrics](clustering_metrics.ipynb)

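Likewise, the average precision and precision-recall curve metrics added to the classification list could be sketched from scratch as below. This assumes binary labels in {0, 1} and real-valued scores, evaluates one cut-off per ranked example rather than one per distinct threshold, and is an illustration rather than the repository's exact implementation.

```python
import numpy as np

def precision_recall_curve(y_true, y_score):
    """Precision and recall at each cut-off in the ranking by score (highest score first)."""
    y_true = np.asarray(y_true, dtype=float)
    order = np.argsort(-np.asarray(y_score, dtype=float))  # rank examples by descending score
    y_true = y_true[order]
    tp = np.cumsum(y_true)        # true positives among the top-k predictions
    fp = np.cumsum(1.0 - y_true)  # false positives among the top-k predictions
    precision = tp / (tp + fp)
    recall = tp / y_true.sum()
    return precision, recall

def average_precision(y_true, y_score):
    """Average precision: sum over cut-offs of (recall increase) * precision, i.e. sum((R_k - R_{k-1}) * P_k)."""
    precision, recall = precision_recall_curve(y_true, y_score)
    recall_prev = np.concatenate(([0.0], recall[:-1]))
    return float(np.sum((recall - recall_prev) * precision))

# Example usage: a perfect ranking (all positives scored above all negatives) gives AP = 1.0.
print(average_precision([0, 1, 1, 0, 1], [0.1, 0.8, 0.65, 0.4, 0.9]))  # 1.0
```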
@@ -91,4 +96,4 @@ This repository is licensed under the MIT License. See the [LICENSE](LICENSE) fi

 ---

-I hope that this repository and the included performance metrics will be valuable tools in evaluating the effectiveness of your machine learning models. Feel free to explore, experiment, and contribute to further improve the available metrics. If you have any questions or encounter any issues, please don't hesitate to reach out to me. Happy modeling and evaluating!
+I hope that this repository and the included performance metrics will be valuable tools in evaluating the effectiveness of your machine learning models. Feel free to explore, experiment, and contribute to further improve the available metrics. If you have any questions or encounter any issues, please don't hesitate to reach out to me. Happy modeling and evaluating!
