Add Silero Speech-To-Text models #153
Merged

Commits (7, all by snakers4):

- `0bfd55a` Add Silero Speech-To-Text models
- `92d6c3a` Move the MD file to root folder
- `1121205` Fix util imports, fix models.yml path issues
- `11a9e15` Add TorchAudio to the test environment
- `4162891` Add `-y` to TorchAudio install
- `4061c7d` Add OmegaConf installation to build
- `78f39ca` Add dependencies installation, add one file example

The new hub page added in this PR (84 lines):

---
layout: hub_detail
background-class: hub-background
body-class: hub
category: researchers
title: Silero Speech-To-Text Models
summary: A set of compact enterprise-grade pre-trained STT Models for multiple languages.
image: silero_logo.jpg
author: Silero AI Team
tags: [audio, scriptable]
github-link: https://github.com/snakers4/silero-models
github-id: snakers4/silero-models
featured_image_1: silero_stt_model.jpg
featured_image_2: silero_imagenet_moment.png
accelerator: cuda-optional
---

```bash
# this assumes that you have a proper version of PyTorch already installed
pip install -q torchaudio omegaconf soundfile
```

```python
import torch
import zipfile
import torchaudio
from glob import glob
# see https://github.com/snakers4/silero-models for utils and more examples

device = torch.device('cpu')  # gpu also works, but our models are fast enough for CPU
model, decoder, utils = torch.hub.load(github='snakers4/silero-models',
                                       model='silero_stt',
                                       device=device, force_reload=True)
(read_batch, split_into_batches,
 read_audio, prepare_model_input) = utils  # see function signatures for details

# download a single file, in any format compatible with TorchAudio (soundfile backend)
torch.hub.download_url_to_file('https://opus-codec.org/static/examples/samples/speech_orig.wav',
                               dst='speech_orig.wav', progress=True)
test_files = glob('speech_orig.wav')

# or run a test on a whole batch of files
# torch.hub.download_url_to_file('http://www.openslr.org/resources/83/midlands_english_female.zip',
#                                dst='midlands_english_female.zip',
#                                progress=True)
# with zipfile.ZipFile('midlands_english_female.zip', 'r') as zip_ref:
#     zip_ref.extractall('midlands_english_female')
# test_files = glob('midlands_english_female/*.wav')

batches = split_into_batches(test_files, batch_size=10)
model_input = prepare_model_input(read_batch(batches[0]),  # avoid shadowing builtin `input`
                                  device=device)

output = model(model_input)
for example in output:
    print(decoder(example.cpu()))
```

### Model Description

Silero Speech-To-Text models provide enterprise-grade STT in a compact form factor for several commonly spoken languages. Unlike conventional ASR models, our models are robust to a variety of dialects, codecs, domains, noises, and lower sampling rates (for simplicity, audio should be resampled to 16 kHz). The models consume normalized audio in the form of samples (i.e. without any pre-processing except for normalization to -1 ... 1) and output frames with token probabilities. We provide a decoder utility for simplicity (we could have included it in the model itself, but scripted modules had problems with storing model artifacts, i.e. labels, during certain export scenarios).

We hope that our efforts with Open-STT and Silero Models will bring the ImageNet moment in speech closer.

### Supported Languages and Formats

As of this page update, the following languages are supported:

- English
- German
- Spanish

To see the always up-to-date language list, please visit our [repo](https://github.com/snakers4/silero-models) and check the [`models.yml`](https://github.com/snakers4/silero-models/blob/master/models.yml) file for all available checkpoints.
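
The checkpoint list in `models.yml` also encodes which languages are available. As an assumption worth verifying against the repo README, the hub entry point appears to accept a `language` keyword for selecting a checkpoint; a sketch:

```python
import torch

# Assumption: `silero_stt` accepts a `language` keyword ('en', 'de', 'es'
# at the time of writing); models.yml is the authoritative list of
# checkpoints. This is a sketch, not a guaranteed API.
def load_stt(language='en', device=torch.device('cpu')):
    return torch.hub.load(github='snakers4/silero-models',
                          model='silero_stt',
                          language=language,
                          device=device)

# model, decoder, utils = load_stt('de')  # would download the German checkpoint
```
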

### Additional Examples and Benchmarks

For additional examples and other model formats, please visit this [link](https://github.com/snakers4/silero-models). For quality and performance benchmarks, please see the [wiki](https://github.com/snakers4/silero-models/wiki). These resources will be updated from time to time.

### References

- [Silero Models](https://github.com/snakers4/silero-models)
- [Alexander Veysov, "Towards an ImageNet Moment for Speech-to-Text", The Gradient, 2020](https://thegradient.pub/towards-an-imagenet-moment-for-speech-to-text/)
- [Alexander Veysov, "A Speech-To-Text Practitioner's Criticisms of Industry and Academia", The Gradient, 2020](https://thegradient.pub/a-speech-to-text-practitioners-criticisms-of-industry-and-academia/)
Would this require omegaconf and torchaudio to be installed? If so, you should add a cell like in https://github.com/pytorch/hub/blob/master/nvidia_deeplearningexamples_waveglow.md#example, which pip-installs these extra packages. That will make the thing instantly run in Google Colab, and is really valuable!
Hi, yeah, it would. That is why I needed to include them in your CI environment above. Actually, in Colab I also needed to install soundfile, as we are using it as a backend for TorchAudio.

Yeah, this totally makes sense. By the way, you can see a more extended Colab version here. I will add this shortly.