
italian_cv v2.0.0

Released by @mmcauliffe on 23 Mar, 01:35

Italian CV acoustic model v2.0.0

Link to documentation on mfa-models


Model details

  • Maintainer: Vox Communis
  • Language: Italian
  • Dialect: N/A
  • Phone set: Epitran
  • Model type: Acoustic model
  • Features: MFCC
  • Architecture: gmm-hmm
  • Model version: v2.0.0
  • Trained date: 02-11-2022
  • Compatible MFA version: v2.0.0
  • License: CC-0
  • Citation:
@misc{Ahn_Chodroff_2022,
	author={Ahn, Emily and Chodroff, Eleanor},
	title={VoxCommunis Corpus},
	address={\url{https://osf.io/t957v}},
	publisher={OSF},
	year={2022},
	month={Jan}
}

Installation

Install from the MFA command line:

mfa models download acoustic italian_cv

Or download from the release page.
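The corresponding pronunciation dictionary can likely be fetched the same way; the italian_cv dictionary name below is an assumption based on the model name and may differ on mfa-models:

mfa models download dictionary italian_cv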

Intended use

This model is intended for forced alignment of Italian transcripts.

This model uses the Epitran phone set for Italian and was trained with the corresponding VoxCommunis pronunciation dictionary. Pronunciations can be added on top of that dictionary, as long as no additional phones are introduced.
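As a minimal sketch, a forced-alignment run with this model might look like the following; the corpus and output paths are placeholders, and italian_cv is assumed to be the name of both the installed dictionary and this acoustic model:

mfa align /path/to/italian_corpus italian_cv italian_cv /path/to/aligned_output

The positional arguments are the corpus directory, the pronunciation dictionary, the acoustic model, and the output directory for the resulting TextGrids.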

Performance Factors

As forced alignment is a relatively well-constrained problem (given accurate transcripts), this model should be applicable to a range of recording conditions and speakers. However, please note that it was trained on read speech in low-noise environments; as your data diverges from that, you may run into alignment issues, need to increase the beam size of MFA, or need to follow the other recommendations in the troubleshooting section below.

Please note as well that MFA does not use state-of-the-art ASR models for forced alignment. You may get better performance (especially on speech-to-text tasks) using other frameworks like Coqui.

Ethical considerations

Deploying any Speech-to-Text model into any production setting has ethical implications. You should consider these implications before use.

Demographic Bias

You should assume every machine learning model has demographic bias unless proven otherwise. For STT models, it is often the case that transcription accuracy is better for men than it is for women. If you are using this model in production, you should acknowledge this as a potential issue.

Surveillance

Speech-to-Text technologies may be misused to invade the privacy of others by recording and mining information from private conversations. This kind of individual privacy is protected by law in many countries. You should not assume consent to record and analyze private speech.

Troubleshooting issues

Machine learning models (like this acoustic model) perform best on data that is similar to the data on which they were trained.

The primary sources of variability in forced alignment will be the applicability of the pronunciation dictionary and how similar the speech, demographics, and recording conditions are to the training data. If you encounter issues in alignment, there are a couple of avenues to improve performance:

  1. Increase the beam size of MFA

    • MFA defaults to a narrow beam to ensure quick alignment and to detect potential issues in your dataset, but depending on your data, you might benefit from boosting the beam to 100 or higher (see the example commands after this list).
  2. Add pronunciations to the pronunciation dictionary

    • This model was trained on a particular dialect/style, and so adding pronunciations more representative of the variety spoken in your dataset will help alignment.
  3. Check the quality of your data

    • MFA includes a validator utility, which aims to detect issues in the dataset.
    • Use MFA's anchor utility to visually inspect your data as MFA sees it and correct issues in transcription or OOV items.
  4. Adapt the model to your data

    • MFA has an adaptation command that adapts parts of the model to your data based on an initial alignment and then runs another alignment with the adapted model.
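As a rough sketch of steps 1, 3, and 4 above (paths are placeholders; exact options may vary across MFA 2.x versions):

# re-align with a wider beam
mfa align /path/to/italian_corpus italian_cv italian_cv /path/to/aligned_output --beam 100

# check the corpus for transcription problems and OOV items
mfa validate /path/to/italian_corpus italian_cv italian_cv

# adapt the model to the corpus, then re-align with the adapted model
mfa adapt /path/to/italian_corpus italian_cv italian_cv /path/to/italian_cv_adapted.zip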

Training data

This model was trained on the following corpora: