Memory leak, and cancel() should discard the current PartialResult #49
Ok, I will check later when I have time. A bit busy now.
Is it just about the UI? Should …
Assume we speak "Hello" and press a button to call recognizer.cancel(). Afterward, we press the "Recording microphone" button to call recognizer.startListening(), speak "my name is A", and wait for the final result; the output is then "Hello my name is A". It continued from the point of the previous cancel. So my question is: how can we remove "Hello" when calling cancel() in the previous step?
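To make the reported behavior concrete, here is a minimal, self-contained sketch. MockRecognizer is a hypothetical stand-in, not the real vosk API: it only models a partial-result buffer that survives a buggy cancel(), so the next utterance gets appended to the previous one, versus a fixed cancel() that clears the buffer.

```java
// Hypothetical mock (not the real vosk API) illustrating the reported
// behavior: the partial-result buffer survives cancel(), so the next
// utterance is appended to the previous one.
public class CancelSketch {

    static class MockRecognizer {
        private final StringBuilder partial = new StringBuilder();

        // Simulates decoding an utterance into the partial result.
        void accept(String words) {
            if (partial.length() > 0) partial.append(' ');
            partial.append(words);
        }

        // Buggy cancel: stops listening but keeps the accumulated text.
        void cancelKeepingPartial() { /* buffer intentionally left intact */ }

        // Fixed cancel: also discards the accumulated partial result.
        void cancelAndReset() { partial.setLength(0); }

        String result() { return partial.toString(); }
    }

    // Runs the scenario from the comment above: say "Hello", cancel,
    // then say "my name is A" and read the final result.
    static String run(boolean fixedCancel) {
        MockRecognizer r = new MockRecognizer();
        r.accept("Hello");
        if (fixedCancel) r.cancelAndReset(); else r.cancelKeepingPartial();
        r.accept("my name is A");
        return r.result();
    }

    public static void main(String[] args) {
        System.out.println(run(false)); // prints "Hello my name is A"
        System.out.println(run(true));  // prints "my name is A"
    }
}
```

The fix discussed in this thread amounts to the second variant: cancel() must reset the recognizer's accumulated state, not just stop audio capture.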
Ok, I got your problem. Can you verify that this patch works:
Thank you a lot!
Hi Nickolay,
Ok, I pushed the stopListening change, thanks for testing. It doesn't really matter which one you use, since hcl and g are expanded to hclg on the fly. What model are you using when you see 500MB? Is it a default model or a custom one?
What is the maximum memory usage you see with a default model? |
I trained a model with a size of 67MB on disk, about 160MB after loading into RAM. If HCLr and Gr are used, after 1 hour of decoding it can reach 500MB. But this does not happen with the same model using HCLG.
I did not check this
My model configuration:
This should be fixed now with vosk 0.3.8 |
Hi Nickolay,
I am now testing with a model. The sizes of the AM, LM, and i-vector extractor are 19MB, 11MB, and 9MB. There are two issues: