speech2text cuts long speeches #146
Hello. Thanks for the question. Most likely what you need is to enable the "Listening Mode: Always On" setting. By default, the "One Sentence" mode is used, which processes audio only up to the first non-speech fragment (i.e. up to the first silence). When "Always On" is enabled, audio is processed continuously, but still in chunks delimited by moments of non-speech. The "Always On" mode runs more smoothly if the speech includes occasional moments of silence.
As for the input audio, there is no specific fixed limit. Speech Note always tries to detect moments of silence in the audio and process the data in chunks. This should mitigate the problem of eating all of the available RAM in your system. I didn't test it on very long audio files, but you should be able to transcribe 30 minutes of live speech without a problem.
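To illustrate the idea, here is a minimal sketch of silence-based chunking, similar in spirit to what is described above: split the audio at stretches of low energy so each chunk can be transcribed independently instead of holding the whole recording in memory. The function name, threshold, and silence length below are illustrative assumptions, not Speech Note's actual implementation.

```python
# Hypothetical sketch: split audio samples into chunks at runs of silence.
# `threshold` is the amplitude below which a sample counts as silent;
# `min_silence` is how many consecutive silent samples end a chunk.

def split_on_silence(samples, threshold=0.01, min_silence=1600):
    """Return a list of chunks, each ending before a long silent run."""
    chunks, current, silent_run = [], [], 0
    for s in samples:
        if abs(s) < threshold:
            silent_run += 1
        else:
            silent_run = 0
        current.append(s)
        if silent_run >= min_silence:
            # Drop the trailing silence and emit the speech part, if any.
            speech = current[:-silent_run]
            if speech:
                chunks.append(speech)
            current, silent_run = [], 0
    # Emit a final chunk only if it actually contains speech.
    if any(abs(s) >= threshold for s in current):
        chunks.append(current)
    return chunks
```

Each chunk can then be fed to the STT model separately, which is why long pauses in speech help the "Always On" mode.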
I recommend trying and testing the Beta version. It has a few unsolved bugs, but it also has significantly improved CPU-only processing speed for WhisperCpp models. What's more, you can try the "OpenVINO" CPU acceleration, which speeds up STT even more with WhisperCpp. To enable "flathub-beta", follow these instructions.
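For reference, enabling the beta channel usually amounts to adding the `flathub-beta` remote and installing the app from it. The app ID `net.mkiol.SpeechNote` is assumed here; check the linked instructions for the exact steps.

```shell
# Add the Flathub beta remote (no-op if it already exists).
flatpak remote-add --if-not-exists flathub-beta \
    https://flathub.org/beta-repo/flathub-beta.flatpakrepo

# Install (or update to) the beta build of Speech Note.
flatpak install flathub-beta net.mkiol.SpeechNote
```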
Please re-open if you need more information.
First of all, thank you for a great piece of software. Everything works fine except for one issue.
I can talk for several minutes, but after processing I eventually get just the first several sentences. So I have to split my speech into short pieces, and it's quite annoying.
Tried Whisper and Faster Whisper models
Not using GPU acceleration
Any advice? Is there any limit on input/output information?