DeepSpeech v0.2 implements a streaming API that executes inference while data arrives. This avoids waiting for the complete data before running inference, and should be faster. A code snippet is available here:
did you by any chance think on how this could work @MainRo ?
I have this working without an HTTP server, just some quick & dirty code taken from a Medium article where chunks of audio are read from a microphone. I don't have much experience with streaming over HTTP, but it sounds like a fun project to look into. If you've given this some thought already and have pointers on what might need changing, I'd love to hear it.
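On the streaming-over-HTTP part, one common approach is chunked transfer encoding: the client sends the request body as a sequence of chunks rather than one buffer, so the server can start reading (and feeding the model) before the upload finishes. A minimal sketch of the client side, assuming a local server and endpoint path that are purely illustrative:

```python
# Sketch of streaming an audio file to a server in chunks. The host, port,
# path, and chunk size below are assumptions for illustration only.
import io


def audio_chunks(stream, chunk_size=4096):
    """Yield successive chunks from a file-like object until it is exhausted."""
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        yield chunk


# With http.client (Python >= 3.6), passing an iterable body without a
# Content-Length header makes the request use chunked Transfer-Encoding:
#
#   conn = http.client.HTTPConnection("localhost", 8080)
#   conn.request("POST", "/stt", body=audio_chunks(open("audio.wav", "rb")))
```

The generator keeps memory usage flat regardless of file size, which is the main point of streaming the upload in the first place.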
AFAIR the Python DeepSpeech API for this was quite straightforward. However, there was more work on the deepspeech-server part so that the HTTP request data is read in several steps.
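The server-side shape of that change might look like the sketch below: instead of buffering the whole request body and then decoding, each chunk is fed to the streaming recognizer as it is read. `StreamingModel` here is a hypothetical stand-in (it just concatenates chunks), not the DeepSpeech API; only the feed/finish structure of the loop is the point.

```python
# Feed request data to a streaming recognizer as it arrives, rather than
# buffering the whole body first. StreamingModel is a stand-in for a real
# streaming speech model; its method names are assumptions.
class StreamingModel:
    """Stand-in streaming model: accumulates audio chunks."""

    def __init__(self):
        self._chunks = []

    def feed_audio_content(self, chunk):
        # A real streaming model would advance partial inference here.
        self._chunks.append(chunk)

    def finish_stream(self):
        # A real streaming model would return the final transcript here.
        return b"".join(self._chunks)


def transcribe_request(body_chunks, model):
    """Feed each chunk of the HTTP request body to the model as it is read."""
    for chunk in body_chunks:
        model.feed_audio_content(chunk)
    return model.finish_stream()
```

The server framework only needs to expose the body as an iterable of chunks for this loop to work; the model call per chunk is where the latency win over buffer-then-decode comes from.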
DeepSpeech v0.2 implements a streaming API that executes inference while data arrives. This avoids waiting for the complete data before running inference, and should be faster. A code snippet is available here:
https://gist.github.com/reuben/80d64de15d1f46d34d28c7e83fc5f57e#file-ds_mic-py