Wav2Vec2Phoneme

Overview

The Wav2Vec2Phoneme model was proposed in Simple and Effective Zero-shot Cross-lingual Phoneme Recognition (Xu et al., 2021) by Qiantong Xu, Alexei Baevski and Michael Auli.

The abstract from the paper is the following:

Recent progress in self-training, self-supervised pretraining and unsupervised learning enabled well performing speech recognition systems without any labeled data. However, in many cases there is labeled data available for related languages which is not utilized by these methods. This paper extends previous work on zero-shot cross-lingual transfer learning by fine-tuning a multilingually pretrained wav2vec 2.0 model to transcribe unseen languages. This is done by mapping phonemes of the training languages to the target language using articulatory features. Experiments show that this simple method significantly outperforms prior work which introduced task-specific architectures and used only part of a monolingually pretrained model.

Relevant checkpoints can be found under https://huggingface.co/models?other=phoneme-recognition.

This model was contributed by patrickvonplaten.

The original code can be found here.

Usage tips

  • Wav2Vec2Phoneme uses the exact same architecture as Wav2Vec2.
  • Wav2Vec2Phoneme is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
  • The Wav2Vec2Phoneme model was trained using connectionist temporal classification (CTC), so the model output has to be decoded using [Wav2Vec2PhonemeCTCTokenizer] (see the sketch after this list).
  • Wav2Vec2Phoneme can be fine-tuned on multiple languages at once and decode unseen languages in a single forward pass to a sequence of phonemes.
  • By default, the model outputs a sequence of phonemes. In order to transform the phonemes to a sequence of words one should make use of a dictionary and language model.
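
Below is a minimal sketch of the pipeline described above. It assumes the publicly available facebook/wav2vec2-lv-60-espeak-cv-ft phoneme-recognition checkpoint (one of the checkpoints linked above) and uses a dummy LibriSpeech split from 🤗 Datasets for the audio sample; adapt the checkpoint and audio loading to your own setup.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor

# checkpoint name is an assumption: any phoneme-recognition checkpoint from the Hub works
checkpoint = "facebook/wav2vec2-lv-60-espeak-cv-ft"
processor = AutoProcessor.from_pretrained(checkpoint)
model = AutoModelForCTC.from_pretrained(checkpoint)

# load a short dummy speech sample (raw waveform as a float array, sampled at 16 kHz)
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
audio = ds[0]["audio"]

# the processor turns the raw waveform into model inputs
inputs = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt")

# forward pass: the model returns CTC logits over the phoneme vocabulary
with torch.no_grad():
    logits = model(**inputs).logits

# greedy CTC decoding: argmax per frame, then the tokenizer collapses repeats and blanks
predicted_ids = torch.argmax(logits, dim=-1)
phonemes = processor.batch_decode(predicted_ids)
print(phonemes)
```

The decoded output is a phoneme string; as noted above, mapping it to words requires an external dictionary and language model.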

Wav2Vec2Phoneme's architecture is based on the Wav2Vec2 model; for API reference, check out Wav2Vec2's documentation page, except for the tokenizer, which is documented below.

Wav2Vec2PhonemeCTCTokenizer

[[autodoc]] Wav2Vec2PhonemeCTCTokenizer
    - __call__
    - batch_decode
    - decode
    - phonemize
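
As a quick illustration of the tokenizer on its own, the sketch below loads the same (assumed) facebook/wav2vec2-lv-60-espeak-cv-ft checkpoint and converts text to phonemes with phonemize; this requires the phonemizer package with an espeak backend installed.

```python
from transformers import Wav2Vec2PhonemeCTCTokenizer

# checkpoint name is an assumption; phonemize needs the `phonemizer` package (espeak backend)
tokenizer = Wav2Vec2PhonemeCTCTokenizer.from_pretrained("facebook/wav2vec2-lv-60-espeak-cv-ft")

# map raw text to a phoneme string in the requested language
print(tokenizer.phonemize("Hello how are you", phonemizer_lang="en-us"))
```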