For your information: Run VITS models from Coqui with sherpa-onnx (supporting Android, Raspberry Pi, etc) #3194
Replies: 4 comments 10 replies
-
Does this integrate with the Android text-to-speech output API? I mean, can we switch the preferred engine to Coqui?
-
For me, the application skips words when reading: I tested the three French versions for arm64-v8a and they all skipped words.
-
@csukuangfj thanks!
-
@csukuangfj Awesome, I tried the speech recognition model example on iOS. There are also already some methods for getting the VITS TTS model working. Are there examples expected soon of how to use the TTS on iOS?
-
FYI: We now support exporting VITS models from Coqui to ONNX and running them with sherpa-onnx.
sherpa-onnx supports both text-to-speech and speech-to-text; it runs on Linux/macOS/Windows/Android/iOS
and provides APIs for various languages, e.g., C++/C/Python/C#/Kotlin/Swift/Java/Go.
The following Colab notebook shows how to convert VITS models from Coqui to sherpa-onnx:
https://colab.research.google.com/drive/1cI9VzlimS51uAw4uCR-OBeSXRPBc4KoK?usp=sharing
You can also try the exported models in the following Hugging Face Space:
https://huggingface.co/spaces/k2-fsa/text-to-speech
We also have pre-built Android APKs for the VITS English models from Coqui:
https://k2-fsa.github.io/sherpa/onnx/tts/apk.html