An easy way to use transformer models for speech-to-text inference locally. Inference can be run live through a microphone, or on local files. The first run needs an internet connection to download the necessary models.

AnkushMalaker/easy-stt


Using this repo is simple:

  1. Clone the repo: git clone https://github.com/AnkushMalaker/easy-stt.git
  2. Create a conda environment and install the dependencies:
conda create -n easy_stt python=3.8
conda activate easy_stt
cd easy-stt
pip install -r requirements.txt
  3. Run an inference script and select the model. Use python3 src/scripts/infer_live.py -c to run live inference through a connected microphone, or python3 src/scripts/infer.py ./input_file.wav ./output.csv -c to run inference on input_file.wav and save the result in output.csv. You can also pass a directory instead of a file, and inference will run on all files inside it, with results saved in the CSV.
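The file-or-directory behavior described in step 3 can be sketched as follows. This is a minimal illustration, not the repo's actual implementation: the `transcribe` function here is a hypothetical stand-in for the model call inside infer.py, and `infer_to_csv` is an assumed name.

```python
import csv
from pathlib import Path


def transcribe(wav_path: Path) -> str:
    # Hypothetical stand-in for the repo's actual model inference call.
    return f"transcript of {wav_path.name}"


def infer_to_csv(input_path: str, output_csv: str) -> None:
    """Run inference on a single .wav file, or on every .wav file in a
    directory, and write (file, transcription) rows to a CSV."""
    path = Path(input_path)
    # A directory expands to all .wav files inside it; a file is used as-is.
    wav_files = sorted(path.glob("*.wav")) if path.is_dir() else [path]
    with open(output_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["file", "transcription"])
        for wav in wav_files:
            writer.writerow([wav.name, transcribe(wav)])
```

Passing a directory simply widens the list of inputs; each file still gets one row in the output CSV.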

Note: For infer_live.py, the user currently has to identify a suitable audio input device manually. An interface that prompts the user to select the audio device could be built later; for now the process is manual.
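One way to find a suitable device index is to enumerate input-capable devices. The sketch below assumes PyAudio (a common library for microphone capture); the repo may use a different audio backend, and both function names here are made up for illustration.

```python
def input_devices(device_infos):
    """Filter PyAudio-style device-info dicts down to (index, name) pairs
    for devices that can actually capture audio."""
    return [
        (info["index"], info["name"])
        for info in device_infos
        if info.get("maxInputChannels", 0) > 0
    ]


def list_input_devices():
    """Enumerate all audio devices via PyAudio and return the input-capable
    ones as (index, name) pairs."""
    import pyaudio  # imported lazily; requires PortAudio to be installed

    pa = pyaudio.PyAudio()
    try:
        infos = [
            pa.get_device_info_by_index(i) for i in range(pa.get_device_count())
        ]
    finally:
        pa.terminate()
    return input_devices(infos)
```

Printing the result of `list_input_devices()` shows which index corresponds to the connected microphone.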

Results

Audio 1: clip1

Transcription: WERE BASICALLY TRYING TO RETAIN THE FINAL LAYER OF THE MODEL SO THAT IT CAN RECOGNIZE MY VOICE AND ACCENT AND ME BETTER

Audio 2: clip2

Transcription: THESE MODELS ARE TRAINED ON LARGE CORPORA THAT DOES N'T ALWAYS TRANSLATE TO GREAT PERFORMANCE IN SPECIFIC CASES
