
speech2text cuts long speeches #146

Closed
ice10101 opened this issue Jun 25, 2024 · 2 comments

Comments

@ice10101

First of all, thank you for a great piece of software. Everything works fine except for one issue.

I can talk for several minutes, but after processing I end up with only the first few sentences. So I have to split my speech into short pieces, which is quite annoying.

I tried the Whisper and Faster Whisper models.

Not using GPU acceleration

Any advice? Is there any limit on input/output information?

@mkiol
Owner

mkiol commented Jun 27, 2024

Hello. Thanks for the question.

Most likely what you need is to enable "Listening Mode: Always On" in the settings. By default, the "One Sentence" mode is used, which processes audio only up to the first non-speech fragment (the first silence). When "Always On" is enabled, audio is processed continuously, but still in chunks delimited by moments of non-speech data. The "Always On" mode runs more smoothly if the speech includes occasional moments of silence.


Any advice? Is there any limit on input/output information?

As for the input audio, there is no specific fixed limit. Speech Note always tries to detect moments of silence in the audio and process the data in chunks. This should mitigate the problem of "eating" all of the available RAM in your system. I haven't tested it on very long audio files, but you should be able to transcribe 30 minutes of live speech without a problem.
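To illustrate the idea described above (this is only a minimal sketch of energy-based silence chunking, not Speech Note's actual implementation, which likely uses a proper voice-activity detector), a long stream can be split into chunks whenever a run of low-energy frames is seen:

```python
def split_on_silence(samples, frame_size=160, threshold=0.01, min_silence_frames=5):
    """Split a sequence of float audio samples into chunks at runs of
    low-energy (silent) frames. Illustrative sketch only."""
    chunks, current, silent_run = [], [], 0
    for i in range(0, len(samples), frame_size):
        frame = samples[i:i + frame_size]
        # Mean squared amplitude as a crude energy measure.
        energy = sum(s * s for s in frame) / max(len(frame), 1)
        silent_run = silent_run + 1 if energy < threshold else 0
        current.extend(frame)
        # Close the current chunk once enough consecutive silent frames
        # have accumulated, so each chunk ends at a pause in speech.
        if silent_run >= min_silence_frames and current:
            chunks.append(current)
            current, silent_run = [], 0
    if current:
        chunks.append(current)
    return chunks
```

Each chunk can then be handed to the STT model independently, which keeps memory use bounded regardless of the total recording length.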

Not using GPU acceleration

I recommend trying and testing the Beta version. It has a few unsolved bugs, but it also has significantly improved CPU-only processing speed for WhisperCpp models. What's more, you can try the "OpenVINO" CPU acceleration, which speeds up STT even more with WhisperCpp. To enable "flathub-beta", follow these instructions.

@mkiol
Owner

mkiol commented Aug 3, 2024

Please re-open if you need more information.

@mkiol mkiol closed this as completed Aug 3, 2024