EmotiVoice 😊: a Multi-Voice and Prompt-Controlled TTS Engine
AirPlay and AirPlay 2 audio player
PyTorch implementation of convolutional neural network-based text-to-speech synthesis models
Chinese Mandarin text-to-speech (TTS) based on FastSpeech 2, implemented in PyTorch, using WaveGlow as the vocoder, trained on the Biaobei and AISHELL-3 datasets
A non-autoregressive Transformer-based text-to-speech system, supporting a family of SOTA Transformers with supervised and unsupervised duration modeling. This project grows with the research community, aiming to achieve the ultimate TTS.
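The duration modeling named in the two FastSpeech-style entries above boils down to a length-regulation step: each phoneme's encoder state is repeated for as many mel frames as its predicted duration, so the decoder can run non-autoregressively over the full frame sequence. Below is a minimal sketch of that step; all names are illustrative and not taken from these repositories.

```python
# Minimal sketch of FastSpeech-style length regulation (illustrative, not
# any repo's actual API): expand each phoneme's hidden state according to
# its predicted integer duration in mel frames.
import torch

def length_regulate(hidden: torch.Tensor, durations: torch.Tensor) -> torch.Tensor:
    """hidden: [num_phonemes, dim]; durations: [num_phonemes] integer frame counts."""
    # repeat_interleave duplicates row i of `hidden` exactly durations[i] times.
    return torch.repeat_interleave(hidden, durations, dim=0)

# Toy example: 3 phonemes with 4-dim encoder states and durations 2/1/3.
hidden = torch.randn(3, 4)
durations = torch.tensor([2, 1, 3])
frames = length_regulate(hidden, durations)
print(frames.shape)  # torch.Size([6, 4]) -> six mel frames for the decoder
```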
Two-talker speech separation with LSTM/BLSTM using the permutation-invariant training (PIT) method.
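The core of PIT is the objective, not the network: compute the loss under every output/target speaker assignment and keep the cheapest one, so the model is free to emit the speakers in either order. A minimal sketch of that objective for the two-talker case, with illustrative shapes and names:

```python
# Sketch of a permutation-invariant training (PIT) loss for two-talker
# separation: evaluate both output/target assignments per utterance and
# keep the smaller error. Shapes and names are illustrative.
import itertools
import torch
import torch.nn.functional as F

def pit_mse_loss(estimates: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """estimates, targets: [batch, num_speakers, time] (num_speakers == 2 here)."""
    num_speakers = targets.size(1)
    per_perm_losses = []
    for perm in itertools.permutations(range(num_speakers)):
        permuted = targets[:, list(perm), :]
        # Per-utterance MSE under this output/target assignment.
        per_perm_losses.append(
            F.mse_loss(estimates, permuted, reduction="none").mean(dim=(1, 2))
        )
    # For each utterance, keep the assignment with the smallest error.
    return torch.stack(per_perm_losses, dim=0).min(dim=0).values.mean()

est = torch.randn(8, 2, 16000)  # batch of 8 two-speaker estimates
tgt = torch.randn(8, 2, 16000)
print(pit_mse_loss(est, tgt))
```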
A non-autoregressive end-to-end text-to-speech system (text-to-wav), supporting a family of SOTA unsupervised duration modeling approaches. This project grows with the research community, aiming to achieve the ultimate E2E-TTS.
VoxNovel: generate audiobooks, giving each character a different voice actor.
Adaptive and Focusing Neural Layers for Multi-Speaker Separation Problem
PyTorch implementation of Google's "Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions" (Tacotron 2). This implementation supports both single- and multi-speaker TTS, along with several techniques to improve the robustness and efficiency of the model.
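For orientation, the Tacotron 2 data flow (text → mel spectrogram → waveform) can be exercised end to end through torchaudio's pretrained single-speaker LJSpeech bundle. This is torchaudio's bundled pipeline used purely as an illustration of the two-stage chain, not this repository's multi-speaker checkpoints or API:

```python
# Tacotron 2 two-stage chain via torchaudio's pretrained LJSpeech bundle
# (single-speaker, Griffin-Lim vocoder) -- an illustration of the data
# flow, not this repository's own models.
import torch
import torchaudio

bundle = torchaudio.pipelines.TACOTRON2_GRIFFINLIM_CHAR_LJSPEECH
processor = bundle.get_text_processor()  # characters -> token ids
tacotron2 = bundle.get_tacotron2()       # token ids -> mel spectrogram
vocoder = bundle.get_vocoder()           # mel spectrogram -> waveform

with torch.inference_mode():
    tokens, lengths = processor("Hello, multi-speaker world!")
    spec, spec_lengths, _ = tacotron2.infer(tokens, lengths)
    waveforms, _ = vocoder(spec, spec_lengths)

torchaudio.save("out.wav", waveforms[0:1].cpu(), sample_rate=vocoder.sample_rate)
```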
This is the official implementation of our multi-channel multi-speaker multi-spatial neural audio codec architecture.
Multi-Speaker FastSpeech2 applicable to Korean, with detailed documentation of training and synthesis.
An Algorithm for Speaker Recognition in a Multi-Speaker Environment
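Most speaker-recognition systems share one generic step: score a test utterance's embedding against enrolled speaker profiles and pick the best match. A rough sketch of that comparison using cosine similarity follows; the embeddings are assumed to come from some upstream model (x-vectors, d-vectors, etc.), and nothing here is the specific algorithm of the repository above.

```python
# Generic embedding-comparison step for speaker identification: cosine
# similarity between a test embedding and enrolled profiles. Embeddings
# are assumed to come from an upstream speaker model; illustrative only.
import numpy as np

def identify_speaker(test_emb: np.ndarray, enrolled: dict[str, np.ndarray]) -> str:
    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(enrolled, key=lambda name: cosine(test_emb, enrolled[name]))

rng = np.random.default_rng(0)
enrolled = {"alice": rng.normal(size=256), "bob": rng.normal(size=256)}
test = enrolled["bob"] + 0.1 * rng.normal(size=256)  # noisy 'bob' utterance
print(identify_speaker(test, enrolled))  # -> "bob"
```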
Urdu speech recognition using Kaldi ASR, training triphone acoustic GMMs on the PRUS dataset.