Remove deprecated functions, and warnings - Text (#773)
* Remove deprecated functions, and warnings
* Update links for docstring
* chlog

Co-authored-by: Daniel Stancl <46073029+stancld@users.noreply.github.com>
Co-authored-by: Jirka Borovec <Borda@users.noreply.github.com>
3 people committed Jan 18, 2022
1 parent b8e67f9 commit 43a2261
Showing 18 changed files with 34 additions and 254 deletions.
34 changes: 19 additions & 15 deletions CHANGELOG.md
@@ -20,6 +20,10 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

### Removed

- Removed deprecated functions, and warnings in Text ([#773](https://github.com/PyTorchLightning/metrics/pull/773))
* `functional.wer`
* `WER`
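
For users hit by this removal, a minimal migration sketch (assuming torchmetrics >= 0.7, where the renamed API already exists; the example strings are illustrative only):

```python
# Migration sketch: the removed names map to the replacements introduced in v0.7.
#   torchmetrics.functional.wer  -> torchmetrics.functional.word_error_rate
#   torchmetrics.WER             -> torchmetrics.WordErrorRate
from torchmetrics.functional import word_error_rate
from torchmetrics import WordErrorRate

preds = ["hello world"]
target = ["hello beautiful world"]

# functional API
print(word_error_rate(preds, target))

# module (class-based) API
metric = WordErrorRate()
metric.update(preds, target)
print(metric.compute())
```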


### Fixed

@@ -58,8 +62,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

- Renamed IoU -> Jaccard Index ([#662](https://github.com/PyTorchLightning/metrics/pull/662))
- Renamed text WER metric ([#714](https://github.com/PyTorchLightning/metrics/pull/714))
* `functional.wer` -> `functional.word_error_rate`
* `WER` -> `WordErrorRate`
- Renamed correlation coefficient classes: ([#710](https://github.com/PyTorchLightning/metrics/pull/710))
* `MatthewsCorrcoef` -> `MatthewsCorrCoef`
* `PearsonCorrcoef` -> `PearsonCorrCoef`
@@ -81,27 +85,27 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
* `SNR` -> `SignalNoiseRatio`
* `SI_SNR` -> `ScaleInvariantSignalNoiseRatio`
- Renamed F-score metrics: ([#731](https://github.com/PyTorchLightning/metrics/pull/731), [#740](https://github.com/PyTorchLightning/metrics/pull/740))
* `functional.f1` -> `functional.f1_score`
* `F1` -> `F1Score`
* `functional.fbeta` -> `functional.fbeta_score`
* `FBeta` -> `FBetaScore`
- Renamed Hinge metric: ([#734](https://github.com/PyTorchLightning/metrics/pull/734))
* `functional.hinge` -> `functional.hinge_loss`
* `Hinge` -> `HingeLoss`
- Renamed image PSNR metrics ([#732](https://github.com/PyTorchLightning/metrics/pull/732))
* `functional.psnr` -> `functional.peak_signal_noise_ratio`
* `PSNR` -> `PeakSignalNoiseRatio`
- Renamed image PIT metric: ([#737](https://github.com/PyTorchLightning/metrics/pull/737))
* `functional.pit` -> `functional.permutation_invariant_training`
* `PIT` -> `PermutationInvariantTraining`
- Renamed image SSIM metric: ([#747](https://github.com/PyTorchLightning/metrics/pull/747))
* `functional.ssim` -> `functional.structural_similarity_index_measure`
* `SSIM` -> `StructuralSimilarityIndexMeasure`
- Renamed detection `MAP` to `MeanAveragePrecision` metric ([#754](https://github.com/PyTorchLightning/metrics/pull/754))
- Renamed Fidelity & LPIPS image metric: ([#752](https://github.com/PyTorchLightning/metrics/pull/752))
* `image.FID` -> `image.FrechetInceptionDistance`
* `image.KID` -> `image.KernelInceptionDistance`
* `image.LPIPS` -> `image.LearnedPerceptualImagePatchSimilarity`
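
A rough before/after sketch for a couple of the renames listed above (the imports and constructor arguments are typical-usage assumptions, not part of this diff):

```python
# Sketch: updating code for some of the v0.7 renames.
# Pre-0.7:  from torchmetrics import F1, PSNR
# 0.7+:
from torchmetrics import F1Score, PeakSignalNoiseRatio
from torchmetrics.functional import f1_score, peak_signal_noise_ratio

f1 = F1Score(num_classes=3)    # was: F1(num_classes=3)
psnr = PeakSignalNoiseRatio()  # was: PSNR()
```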

### Removed

2 changes: 1 addition & 1 deletion docs/source/links.rst
@@ -28,7 +28,7 @@
.. _sklearn averaging methods: https://scikit-learn.org/stable/modules/model_evaluation.html#multiclass-and-multilabel-classification
.. _Cosine Similarity: https://en.wikipedia.org/wiki/Cosine_similarity
.. _spearmans rank correlation coefficient: https://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient
.. _WER: https://en.wikipedia.org/wiki/Word_error_rate
.. _WordErrorRate: https://en.wikipedia.org/wiki/Word_error_rate
.. _FID: https://en.wikipedia.org/wiki/Fr%C3%A9chet_inception_distance
.. _mean-squared-error: https://en.wikipedia.org/wiki/Mean_squared_error
.. _SSIM: https://en.wikipedia.org/wiki/Structural_similarity
6 changes: 3 additions & 3 deletions docs/source/references/functional.rst
@@ -507,10 +507,10 @@ translation_edit_rate [func]
.. autofunction:: torchmetrics.functional.translation_edit_rate
:noindex:

wer [func]
~~~~~~~~~~
word_error_rate [func]
~~~~~~~~~~~~~~~~~~~~~~

.. autofunction:: torchmetrics.functional.wer
.. autofunction:: torchmetrics.functional.word_error_rate
:noindex:

word_information_lost [func]
6 changes: 3 additions & 3 deletions docs/source/references/modules.rst
@@ -678,10 +678,10 @@ TranslationEditRate
.. autoclass:: torchmetrics.TranslationEditRate
:noindex:

WER
~~~
WordErrorRate
~~~~~~~~~~~~~

.. autoclass:: torchmetrics.WER
.. autoclass:: torchmetrics.WordErrorRate
:noindex:

WordInfoLost
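
As a usage note (not part of this diff), the class documented here accumulates state across batches, unlike the functional form; a small sketch under that assumption:

```python
# Sketch: accumulating word error rate over several batches with the renamed class.
from torchmetrics import WordErrorRate

wer_metric = WordErrorRate()
batches = [
    (["this is the prediction"], ["this is the reference"]),
    (["there is an other sample"], ["there is another one"]),
]
for preds, target in batches:
    wer_metric.update(preds, target)   # state accumulates across calls
print(wer_metric.compute())            # aggregate WER over all batches seen
wer_metric.reset()                     # clear state, e.g. between epochs
```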
2 changes: 0 additions & 2 deletions torchmetrics/__init__.py
@@ -90,7 +90,6 @@
RetrievalRPrecision,
)
from torchmetrics.text import ( # noqa: E402
WER,
BLEUScore,
CharErrorRate,
CHRFScore,
@@ -187,7 +186,6 @@
"SumMetric",
"SymmetricMeanAbsolutePercentageError",
"TranslationEditRate",
"WER",
"WordErrorRate",
"CharErrorRate",
"MatchErrorRate",
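
Code that still does `from torchmetrics import WER` will now raise an `ImportError`; a defensive sketch for supporting both old and new releases (an assumption about downstream code, not something this commit adds):

```python
# Sketch: tolerate both pre- and post-removal torchmetrics versions.
try:
    from torchmetrics import WordErrorRate  # name introduced in v0.7
except ImportError:
    # Very old releases only exposed the deprecated alias.
    from torchmetrics import WER as WordErrorRate
```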
3 changes: 1 addition & 2 deletions torchmetrics/functional/__init__.py
@@ -76,7 +76,7 @@
from torchmetrics.functional.text.sacre_bleu import sacre_bleu_score
from torchmetrics.functional.text.squad import squad
from torchmetrics.functional.text.ter import translation_edit_rate
from torchmetrics.functional.text.wer import wer, word_error_rate
from torchmetrics.functional.text.wer import word_error_rate
from torchmetrics.functional.text.wil import word_information_lost
from torchmetrics.functional.text.wip import word_information_preserved
from torchmetrics.utilities.imports import _TRANSFORMERS_AUTO_AVAILABLE
@@ -158,7 +158,6 @@
"stat_scores",
"symmetric_mean_absolute_percentage_error",
"translation_edit_rate",
"wer",
"word_error_rate",
"char_error_rate",
"match_error_rate",
2 changes: 1 addition & 1 deletion torchmetrics/functional/text/__init__.py
@@ -20,7 +20,7 @@
from torchmetrics.functional.text.sacre_bleu import sacre_bleu_score # noqa: F401
from torchmetrics.functional.text.squad import squad # noqa: F401
from torchmetrics.functional.text.ter import translation_edit_rate # noqa: F401
from torchmetrics.functional.text.wer import wer, word_error_rate # noqa: F401
from torchmetrics.functional.text.wer import word_error_rate # noqa: F401
from torchmetrics.functional.text.wil import word_information_lost # noqa: F401
from torchmetrics.functional.text.wip import word_information_preserved # noqa: F401
from torchmetrics.utilities.imports import _NLTK_AVAILABLE, _TRANSFORMERS_AUTO_AVAILABLE
16 changes: 0 additions & 16 deletions torchmetrics/functional/text/bert.py
@@ -19,11 +19,9 @@
from warnings import warn

import torch
from deprecate import deprecated
from torch import Tensor
from torch.utils.data import DataLoader, Dataset

from torchmetrics.utilities import _future_warning
from torchmetrics.utilities.imports import _TQDM_AVAILABLE, _TRANSFORMERS_AUTO_AVAILABLE

if _TRANSFORMERS_AUTO_AVAILABLE:
@@ -457,13 +455,6 @@ def _rescale_metrics_with_baseline(
return all_metrics[..., 0], all_metrics[..., 1], all_metrics[..., 2]


@deprecated(
args_mapping={"predictions": "preds", "references": "target"},
target=True,
deprecated_in="0.7",
remove_in="0.8",
stream=_future_warning,
)
def bert_score(
preds: Union[List[str], Dict[str, Tensor]],
target: Union[List[str], Dict[str, Tensor]],
@@ -549,13 +540,6 @@ def bert_score(
Returns:
Python dictionary containing the keys `precision`, `recall` and `f1` with corresponding values.
.. deprecated:: v0.7
Args:
predictions:
This argument is deprecated in favor of `preds` and will be removed in v0.8.
references:
This argument is deprecated in favor of `target` and will be removed in v0.8.
Raises:
ValueError:
If `len(preds) != len(target)`.
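
With the `@deprecated` argument mapping removed, `bert_score` only accepts the new keyword names; a hedged sketch of a call after this change (requires the `transformers` extra, and downloading the default model):

```python
# Sketch: bert_score with the current keyword names.
# The removed `predictions=` / `references=` keywords must be replaced by `preds=` / `target=`.
from torchmetrics.functional import bert_score

preds = ["hello there", "general kenobi"]
target = ["hello there", "master kenobi"]

score = bert_score(preds=preds, target=target)
# Returns a dict with keys "precision", "recall" and "f1".
print(score["f1"])
```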
23 changes: 0 additions & 23 deletions torchmetrics/functional/text/bleu.py
@@ -18,14 +18,10 @@
# Link: https://pytorch.org/text/_modules/torchtext/data/metrics.html#bleu_score
from collections import Counter
from typing import Callable, Sequence, Tuple, Union
from warnings import warn

import torch
from deprecate import deprecated
from torch import Tensor, tensor

from torchmetrics.utilities import _future_warning


def _count_ngram(ngram_input_list: Sequence[str], n_gram: int) -> Counter:
"""Counting how many times each word appears in a given text with ngram.
@@ -146,13 +142,6 @@ def _bleu_score_compute(
return bleu


@deprecated(
args_mapping={"translate_corpus": "preds", "reference_corpus": "target"},
target=True,
deprecated_in="0.7",
remove_in="0.8",
stream=_future_warning,
)
def bleu_score(
preds: Union[str, Sequence[str]],
target: Sequence[Union[str, Sequence[str]]],
@@ -174,13 +163,6 @@ def bleu_score(
Return:
Tensor with BLEU Score
.. deprecated:: v0.7
Args:
translate_corpus:
This argument is deprecated in favor of `preds` and will be removed in v0.8.
reference_corpus:
This argument is deprecated in favor of `target` and will be removed in v0.8.
Example:
>>> from torchmetrics.functional import bleu_score
>>> preds = ['the cat is on the mat']
@@ -195,11 +177,6 @@ def bleu_score(
[2] Automatic Evaluation of Machine Translation Quality Using Longest Common Subsequence
and Skip-Bigram Statistics by Chin-Yew Lin and Franz Josef Och `Machine Translation Evolution`_
"""
warn(
"Input order of targets and preds were changed to predictions firsts and targets second in v0.7."
" Warning will be removed in v0.8."
)

preds_ = [preds] if isinstance(preds, str) else preds
target_ = [[tgt] if isinstance(tgt, str) else tgt for tgt in target]

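
Since the one-off argument-order warning is gone, it is worth restating the convention it referred to: predictions come first, references second. A small sketch reusing the doctest prediction from the diff above (the reference sentences are illustrative):

```python
# Sketch: bleu_score with the v0.7+ argument order (preds first, target second).
from torchmetrics.functional import bleu_score

preds = ["the cat is on the mat"]
target = [["there is a cat on the mat", "a cat is on the mat"]]  # illustrative references
print(bleu_score(preds, target))  # 0-dim tensor with the corpus-level BLEU score
```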
17 changes: 1 addition & 16 deletions torchmetrics/functional/text/cer.py
@@ -15,11 +15,9 @@
from typing import List, Tuple, Union

import torch
from deprecate import deprecated
from torch import Tensor, tensor

from torchmetrics.functional.text.helper import _edit_distance
from torchmetrics.utilities import _future_warning


def _cer_update(
@@ -61,30 +59,17 @@ def _cer_compute(errors: Tensor, total: Tensor) -> Tensor:
return errors / total


@deprecated(
args_mapping={"predictions": "preds", "references": "target"},
target=True,
deprecated_in="0.7",
remove_in="0.8",
stream=_future_warning,
)
def char_error_rate(preds: Union[str, List[str]], target: Union[str, List[str]]) -> Tensor:
"""character error rate is a common metric of the performance of an automatic speech recognition system. This
value indicates the percentage of characters that were incorrectly predicted. The lower the value, the better the
performance of the ASR system with a CER of 0 being a perfect score.
Args:
preds: Transcription(s) to score as a string or list of strings
target: Reference(s) for each speech input as a string or list of strings
Returns:
Character error rate score
.. deprecated:: v0.7
Args:
predictions:
This argument is deprecated in favor of `preds` and will be removed in v0.8.
references:
This argument is deprecated in favor of `target` and will be removed in v0.8.
Examples:
>>> preds = ["this is the prediction", "there is an other sample"]
>>> target = ["this is the reference", "there is another one"]
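
A quick sketch of the remaining call form for `char_error_rate`, using the doctest strings visible in the diff above (the `predictions=`/`references=` keywords no longer exist after this change):

```python
# Sketch: char_error_rate with the current argument names.
from torchmetrics.functional import char_error_rate

preds = ["this is the prediction", "there is an other sample"]
target = ["this is the reference", "there is another one"]
print(char_error_rate(preds=preds, target=target))  # fraction of characters predicted incorrectly
```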
21 changes: 0 additions & 21 deletions torchmetrics/functional/text/sacre_bleu.py
@@ -40,15 +40,12 @@
import re
from functools import partial
from typing import Sequence
from warnings import warn

import torch
from deprecate import deprecated
from torch import Tensor, tensor
from typing_extensions import Literal

from torchmetrics.functional.text.bleu import _bleu_score_compute, _bleu_score_update
from torchmetrics.utilities import _future_warning
from torchmetrics.utilities.imports import _REGEX_AVAILABLE

AVAILABLE_TOKENIZERS = ("none", "13a", "zh", "intl", "char")
@@ -278,13 +275,6 @@ def _lower(line: str, lowercase: bool) -> str:
return line


@deprecated(
args_mapping={"translate_corpus": "preds", "reference_corpus": "target"},
target=True,
deprecated_in="0.7",
remove_in="0.8",
stream=_future_warning,
)
def sacre_bleu_score(
preds: Sequence[str],
target: Sequence[Sequence[str]],
@@ -314,13 +304,6 @@ def sacre_bleu_score(
Return:
Tensor with BLEU Score
.. deprecated:: v0.7
Args:
translate_corpus:
This argument is deprecated in favor of `preds` and will be removed in v0.8.
reference_corpus:
This argument is deprecated in favor of `target` and will be removed in v0.8.
Example:
>>> from torchmetrics.functional import sacre_bleu_score
>>> preds = ['the cat is on the mat']
@@ -337,10 +320,6 @@ def sacre_bleu_score(
[3] Automatic Evaluation of Machine Translation Quality Using Longest Common Subsequence
and Skip-Bigram Statistics by Chin-Yew Lin and Franz Josef Och `Machine Translation Evolution`_
"""
warn(
"Input order of targets and preds were changed to predictions firsts and targets second in v0.7."
" Warning will be removed in v0.8."
)

if tokenize not in AVAILABLE_TOKENIZERS:
raise ValueError(f"Argument `tokenize` expected to be one of {AVAILABLE_TOKENIZERS} but got {tokenize}.")
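
And a matching sketch for `sacre_bleu_score`; the `tokenize` argument must be one of the `AVAILABLE_TOKENIZERS` shown above, and the reference sentences are illustrative:

```python
# Sketch: sacre_bleu_score after the deprecated argument names were dropped.
from torchmetrics.functional import sacre_bleu_score

preds = ["the cat is on the mat"]
target = [["there is a cat on the mat", "a cat is on the mat"]]
print(sacre_bleu_score(preds, target, tokenize="13a", lowercase=False))
```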
