Commit 58d945c: remove double $ latex (#464)

czaloom committed Feb 29, 2024 · 1 parent a91a2cc

Showing 1 changed file: docs/metrics.md (14 additions, 22 deletions)

| Name | Description | Equation |
|:- | :- | :- |
| Precision | The number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives). | $\dfrac{\|TP\|}{\|TP\|+\|FP\|}$ |
| Recall | The number of true positives divided by the total number of actual positives in the class of interest (i.e., the number of true positives plus the number of false negatives). | $\dfrac{\|TP\|}{\|TP\|+\|FN\|}$ |
| F1 | The harmonic mean of precision and recall. | $\dfrac{2 \cdot Precision \cdot Recall}{Precision + Recall}$ |
| Accuracy | The number of correct predictions (true positives and true negatives) divided by the total number of predictions. | $\dfrac{\|TP\|+\|TN\|}{\|TP\|+\|TN\|+\|FP\|+\|FN\|}$ |
| ROC AUC | The area under the Receiver Operating Characteristic (ROC) curve for the predictions generated by a given model. | See [ROCAUC methods](#binary-roc-auc). |
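
As a quick, concrete illustration of the formulas above, the following sketch computes each metric from raw confusion-matrix counts. The counts are invented for the example, and this is plain Python rather than the Valor API.

```python
# Hypothetical confusion-matrix counts for a single class.
tp, fp, tn, fn = 90, 10, 80, 20

precision = tp / (tp + fp)                          # 0.90
recall = tp / (tp + fn)                             # ~0.82
f1 = 2 * precision * recall / (precision + recall)  # ~0.86
accuracy = (tp + tn) / (tp + tn + fp + fn)          # 0.85

print(f"precision={precision:.3f} recall={recall:.3f} f1={f1:.3f} accuracy={accuracy:.3f}")
```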

## Object Detection and Instance Segmentation Metrics[^1]

| Name | Description | Equation |
| :- | :- | :- |
| Average Precision (AP) | The weighted mean of precisions achieved at several different recall thresholds for a single Intersection over Union (IOU), grouped by class. | See [AP methods](#average-precision-ap). |
| AP Averaged Over IOUs | The average of several AP metrics, calculated at various IOUs, grouped by class. | $\dfrac{1}{\text{number of thresholds}} \sum\limits_{iou \in thresholds} AP_{iou}$ |
| Mean Average Precision (mAP) | The mean of several AP scores, calculated over various classes. | $\dfrac{1}{\text{number of classes}} \sum\limits_{c \in classes} AP_{c}$ |
| mAP Averaged Over IOUs | The mean of several averaged AP scores, calculated over various classes. | $\dfrac{1}{\text{number of thresholds}} \sum\limits_{iou \in thresholds} mAP_{iou}$ |

[^1]: When calculating IOUs for object detection metrics, Valor handles the necessary conversion between different types of geometric annotations. For example, if your model prediction is a polygon and your groundtruth is a raster, then the raster will be converted to a polygon prior to calculating the IOU.
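
To make the averaging in the table explicit, here is a minimal sketch that folds hypothetical per-class, per-IOU AP values into the aggregate metrics; the labels, thresholds, and AP numbers are made up for illustration and this is not Valor's implementation.

```python
# Hypothetical AP values keyed by (class label, IOU threshold).
ap = {
    ("cat", 0.50): 0.72, ("cat", 0.75): 0.58,
    ("dog", 0.50): 0.66, ("dog", 0.75): 0.41,
}
classes = sorted({c for c, _ in ap})
ious = sorted({t for _, t in ap})

# AP averaged over IOUs, grouped by class.
ap_averaged_over_ious = {c: sum(ap[(c, t)] for t in ious) / len(ious) for c in classes}

# mAP at each IOU, averaged over classes.
map_at_iou = {t: sum(ap[(c, t)] for c in classes) / len(classes) for t in ious}

# mAP averaged over IOUs.
map_averaged_over_ious = sum(map_at_iou.values()) / len(map_at_iou)
```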

## Semantic Segmentation Metrics

| Name | Description | Equation |
| :- | :- | :- |
| Intersection Over Union (IOU) | The ratio of the area of overlap between the groundtruth and predicted regions to the area of their union, grouped by class. | $\dfrac{area( prediction \cap groundtruth )}{area( prediction \cup groundtruth )}$ |
| Mean IOU | The average of IOUs, calculated over several different classes. | $\dfrac{1}{\text{number of classes}} \sum\limits_{c \in classes} IOU_{c}$ |
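
For semantic segmentation, the same ratio can be computed directly on per-class binary masks. The sketch below uses NumPy with toy masks; the class names and arrays are invented for the example.

```python
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """IOU of two boolean masks of the same shape."""
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(intersection / union) if union else 0.0

# Toy 4x4 masks for two hypothetical classes.
pred = {"road": np.eye(4, dtype=bool), "sky": np.zeros((4, 4), dtype=bool)}
gt = {"road": np.eye(4, dtype=bool), "sky": np.ones((4, 4), dtype=bool)}

ious = {c: mask_iou(pred[c], gt[c]) for c in pred}
mean_iou = sum(ious.values()) / len(ious)
```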

# Appendix: Metric Calculations


We now use the confidence scores, sorted in decreasing order, as our thresholds in order to generate points on a curve.

$Point(score) = (FPR(score), \ TPR(score))$
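
As a rough sketch of that construction (independent of how Valor implements it), the snippet below sweeps the sorted confidence scores as thresholds and records an (FPR, TPR) point at each one. The labels and scores are invented for the example.

```python
import numpy as np

# Hypothetical binary groundtruth labels and model confidence scores.
labels = np.array([1, 1, 0, 1, 0, 0])
scores = np.array([0.95, 0.80, 0.70, 0.60, 0.40, 0.10])

points = []
for threshold in sorted(scores, reverse=True):
    predicted_positive = scores >= threshold
    tp = np.sum(predicted_positive & (labels == 1))
    fp = np.sum(predicted_positive & (labels == 0))
    fn = np.sum(~predicted_positive & (labels == 1))
    tn = np.sum(~predicted_positive & (labels == 0))
    points.append((fp / (fp + tn), tp / (tp + fn)))  # (FPR, TPR)
```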

### Area Under the ROC Curve (ROC AUC)

After calculating the ROC curve, we find the ROC AUC metric by approximating the integral using the trapezoidal rule formula.

$ROC \ AUC = \sum_{i=1}^{|scores|} \frac{ \lVert Point(score_{i-1}) - Point(score_i) \rVert }{2}$

See [Classification: ROC Curve and AUC](https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc) for more information.
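
For illustration only, the sketch below applies the trapezoidal rule in its usual area form (segment width times average height) to a list of (FPR, TPR) points such as the ones built above. Anchoring the curve at (0, 0) and (1, 1) is an assumption of this sketch, not something stated in the formula.

```python
def roc_auc(points):
    """Trapezoidal approximation of the area under a set of (FPR, TPR) points."""
    # Sort by FPR so each trapezoid has non-negative width, and anchor the
    # curve at its (0, 0) and (1, 1) endpoints.
    pts = sorted(set(points) | {(0.0, 0.0), (1.0, 1.0)})
    return sum((x1 - x0) * (y0 + y1) / 2.0 for (x0, y0), (x1, y1) in zip(pts, pts[1:]))

print(roc_auc([(0.0, 0.5), (0.25, 0.75), (0.5, 1.0)]))  # 0.875
```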


Tasks that predict geometries (such as object detection or instance segmentation) use the intersection-over-union (IOU) ratio to calculate precision and recall. IOU is the ratio of the area of the intersection of the two geometries to the area of their union, and is defined in the following equation.

$Intersection \ over \ Union \ (IOU) = \dfrac{Area( prediction \cap groundtruth )}{Area( prediction \cup groundtruth )}$

Using different IOU thresholds, we can determine whether to count a pairing between a prediction and a groundtruth based on their overlap.
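
To make the thresholding step concrete, here is a small sketch that computes the IOU of two axis-aligned bounding boxes and checks it against a 0.5 threshold. The boxes, the threshold value, and the helper name are all made up for illustration.

```python
def box_iou(a, b):
    """IOU of two axis-aligned boxes given as (xmin, ymin, xmax, ymax)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    intersection = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - intersection
    return intersection / union if union else 0.0

prediction = (0.0, 0.0, 10.0, 10.0)
groundtruth = (5.0, 5.0, 15.0, 15.0)

iou = box_iou(prediction, groundtruth)  # 25 / 175 ≈ 0.14
counts_as_match = iou >= 0.5            # False at an IOU threshold of 0.5
```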

Average precision is defined as the area under the precision-recall curve.

We will use a 101-point interpolation of the curve to be consistent with the COCO evaluator. The intent behind interpolation is to reduce the fuzziness that results from ranking pairs.

$AP = \frac{1}{101} \sum\limits_{r\in\{ 0, 0.01, \ldots , 1 \}}\rho_{interp}(r)$

$\rho_{interp}(r) = \underset{\tilde{r}:\tilde{r} \ge r}{\max} \ \rho (\tilde{r})$
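
Putting the two formulas together, here is a rough sketch of the 101-point interpolation over a hypothetical precision-recall curve. The (recall, precision) pairs are invented, and this is an illustration rather than the COCO or Valor implementation.

```python
# Hypothetical operating points on a precision-recall curve, as (recall, precision).
curve = [(0.0, 1.0), (0.2, 0.9), (0.4, 0.8), (0.6, 0.7), (0.8, 0.5), (1.0, 0.3)]

def interpolated_precision(r, curve):
    """Max precision over all operating points with recall >= r (0 if none)."""
    candidates = [p for recall, p in curve if recall >= r]
    return max(candidates) if candidates else 0.0

recall_grid = [i / 100 for i in range(101)]  # {0, 0.01, ..., 1}
ap = sum(interpolated_precision(r, curve) for r in recall_grid) / 101
```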

### References
- [MS COCO Detection Evaluation](https://cocodataset.org/#detection-eval)
