Fix docs metrics formatting (Lightning-AI#5077)
* fix functional f1 fbeta formatting

* Update f_beta.py

* remove line breaks

* Update f_beta.py

add line breaks and pad

* pad line breaks with 2 spaces instead of tab
s-rog committed Dec 12, 2020
1 parent d38e4d1 commit 0de43d1
Showing 2 changed files with 20 additions and 20 deletions.
pytorch_lightning/metrics/classification/f_beta.py (20 changes: 10 additions & 10 deletions)
@@ -52,11 +52,11 @@ class FBeta(Metric):
     Threshold value for binary or multi-label logits. default: 0.5
     average:
-        * `'micro'` computes metric globally
-        * `'macro'` computes metric for each class and uniformly averages them
-        * `'weighted'` computes metric for each class and does a weighted-average,
-          where each class is weighted by their support (accounts for class imbalance)
-        * `None` computes and returns the metric per class
+        - ``'micro'`` computes metric globally
+        - ``'macro'`` computes metric for each class and uniformly averages them
+        - ``'weighted'`` computes metric for each class and does a weighted-average,
+          where each class is weighted by their support (accounts for class imbalance)
+        - ``'none'`` computes and returns the metric per class
     multilabel: If predictions are from multilabel classification.
     compute_on_step:
@@ -185,11 +185,11 @@ class F1(FBeta):
     Threshold value for binary or multi-label logits. default: 0.5
     average:
-        * `'micro'` computes metric globally
-        * `'macro'` computes metric for each class and uniformly averages them
-        * `'weighted'` computes metric for each class and does a weighted-average,
-          where each class is weighted by their support (accounts for class imbalance)
-        * `None` computes and returns the metric per class
+        - ``'micro'`` computes metric globally
+        - ``'macro'`` computes metric for each class and uniformly averages them
+        - ``'weighted'`` computes metric for each class and does a weighted-average,
+          where each class is weighted by their support (accounts for class imbalance)
+        - ``'none'`` computes and returns the metric per class
     multilabel: If predictions are from multilabel classification.
     compute_on_step:
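Both class docstrings in this file document the same `average` options. Below is a minimal usage sketch of what the reformatted list describes; note that the `pytorch_lightning.metrics.classification` import path, the `num_classes` argument, and the integer-label input format are assumptions drawn from the metrics API of that release, not something this commit touches.

```python
import torch
from pytorch_lightning.metrics.classification import F1, FBeta  # assumed import path

# Toy 3-class problem with integer class predictions and integer targets
preds = torch.tensor([0, 1, 2, 0])
target = torch.tensor([0, 1, 2, 2])

# 'macro': compute the score per class, then average the classes uniformly
f1 = F1(num_classes=3, average='macro')
# 'weighted': per-class scores averaged by class support (accounts for class imbalance)
fbeta = FBeta(num_classes=3, beta=0.5, average='weighted')

print(f1(preds, target))     # scalar tensor
print(fbeta(preds, target))  # scalar tensor
```

Per the corrected docstring, passing ``average='none'`` would instead return one score per class rather than a single scalar.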
pytorch_lightning/metrics/functional/f_beta.py (20 changes: 10 additions & 10 deletions)
@@ -83,11 +83,11 @@ def fbeta(
     Threshold value for binary or multi-label logits. default: 0.5
     average:
-        * `'micro'` computes metric globally
-        * `'macro'` computes metric for each class and uniformly averages them
-        * `'weighted'` computes metric for each class and does a weighted-average,
-          where each class is weighted by their support (accounts for class imbalance)
-        * `None` computes and returns the metric per class
+        - ``'micro'`` computes metric globally
+        - ``'macro'`` computes metric for each class and uniformly averages them
+        - ``'weighted'`` computes metric for each class and does a weighted-average,
+          where each class is weighted by their support (accounts for class imbalance)
+        - ``'none'`` computes and returns the metric per class
     multilabel: If predictions are from multilabel classification.
@@ -136,11 +136,11 @@ def f1(
     Threshold value for binary or multi-label logits. default: 0.5
     average:
-        * `'micro'` computes metric globally
-        * `'macro'` computes metric for each class and uniformly averages them
-        * `'weighted'` computes metric for each class and does a weighted-average,
-          where each class is weighted by their support (accounts for class imbalance)
-        * `None` computes and returns the metric per class
+        - ``'micro'`` computes metric globally
+        - ``'macro'`` computes metric for each class and uniformly averages them
+        - ``'weighted'`` computes metric for each class and does a weighted-average,
+          where each class is weighted by their support (accounts for class imbalance)
+        - ``'none'`` computes and returns the metric per class
     multilabel: If predictions are from multilabel classification.
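The functional `fbeta` and `f1` above document the same `average` options. A minimal sketch under the same caveats: the `preds, target, num_classes` call signature is inferred from this docstring, and if the functions are not re-exported from `pytorch_lightning.metrics.functional` in that release, they can be imported from `pytorch_lightning.metrics.functional.f_beta` instead.

```python
import torch
from pytorch_lightning.metrics.functional import f1, fbeta  # assumed re-export

preds = torch.tensor([0, 2, 1, 1])
target = torch.tensor([0, 1, 1, 2])

# 'micro' pools all classes into a single global score
print(f1(preds, target, num_classes=3, average='micro'))
# 'macro' scores each class separately, then averages the classes uniformly
print(fbeta(preds, target, num_classes=3, beta=2.0, average='macro'))
```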
