
Use the streaming inference API to improve inference speed #16

Closed
MainRo opened this issue Sep 23, 2018 · 2 comments

@MainRo (Owner) commented Sep 23, 2018

DeepSpeech v0.2 implements a streaming API that executes inference as audio data arrives. This avoids waiting for the complete audio before running inference, and should reduce latency. A code snippet is available here:

https://gist.github.com/reuben/80d64de15d1f46d34d28c7e83fc5f57e#file-ds_mic-py
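For reference, a minimal sketch of that streaming flow, using the API names from later deepspeech releases (0.7+, where `createStream()` returns a Stream object; the 0.2-era API differed slightly). The model and audio file paths are placeholders, and the WAV file stands in for data arriving incrementally:

```python
import wave

import numpy as np
from deepspeech import Model  # deepspeech Python package, 0.7+ style API

model = Model("deepspeech-0.9.3-models.pbmm")      # placeholder model path

stream = model.createStream()                      # open a streaming context
with wave.open("audio.wav", "rb") as wav:          # 16 kHz, 16-bit mono input
    while True:
        frames = wav.readframes(1024)              # feed the file in small chunks,
        if not frames:                             # as if it were arriving live
            break
        stream.feedAudioContent(np.frombuffer(frames, dtype=np.int16))

print(stream.finishStream())                       # flush and get the transcript
```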

@poohsen commented Jun 28, 2021

Did you by any chance think about how this could work, @MainRo?

I have this working without an HTTP server, just some quick-and-dirty code taken from a Medium article where chunks of audio are read from a microphone (roughly the shape sketched below). I don't have much experience with streaming over HTTP, but it sounds like a fun project to look into. If you've already given this some thought and have pointers on what might need changing, I'd love to hear them.
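A minimal sketch of that microphone loop, assuming PyAudio for capture and the 0.7+ deepspeech streaming API; the model path, chunk size, and fixed recording length are placeholders:

```python
import numpy as np
import pyaudio
from deepspeech import Model

model = Model("deepspeech-0.9.3-models.pbmm")  # placeholder model path

pa = pyaudio.PyAudio()
mic = pa.open(format=pyaudio.paInt16, channels=1, rate=16000,
              input=True, frames_per_buffer=1024)

stream = model.createStream()
try:
    # Read ~10 seconds of audio in 1024-frame chunks and feed each
    # chunk to the recognizer as soon as it is captured.
    for _ in range(160):
        chunk = mic.read(1024)
        stream.feedAudioContent(np.frombuffer(chunk, dtype=np.int16))
finally:
    mic.stop_stream()
    mic.close()
    pa.terminate()

print(stream.finishStream())  # final transcript
```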

@MainRo (Owner, Author) commented Nov 12, 2021

AFAIR the Python deepspeech API for this was quite straightforward. However, there was more work on the deepspeech-server side so that the HTTP request data is read in several steps as it arrives, instead of being buffered in full first.
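To make the idea concrete, here is a minimal sketch of reading the request body incrementally, assuming an aiohttp handler (deepspeech-server's actual server stack may differ) and the 0.7+ deepspeech streaming API; the route and model path are placeholders:

```python
import numpy as np
from aiohttp import web
from deepspeech import Model

model = Model("deepspeech-0.9.3-models.pbmm")  # placeholder model path

async def stt_handler(request: web.Request) -> web.Response:
    # Feed the body into the recognizer as it arrives, instead of
    # buffering the whole payload with `await request.read()`.
    stream = model.createStream()
    leftover = b""
    async for chunk in request.content.iter_chunked(4096):
        data = leftover + chunk
        usable = len(data) - (len(data) % 2)  # keep whole 16-bit samples
        stream.feedAudioContent(np.frombuffer(data[:usable], dtype=np.int16))
        leftover = data[usable:]
    return web.json_response({"text": stream.finishStream()})

app = web.Application()
app.add_routes([web.post("/stt", stt_handler)])  # route name is an example

if __name__ == "__main__":
    web.run_app(app)
```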

MainRo closed this as completed Jul 12, 2022