
[Streaming Convnets Online Inference is taking initially around 2 seconds to give the hypothesis text] #1008

Open
vchagari opened this issue Sep 23, 2022 · 0 comments
Labels: question (Further information is requested)

Comments

vchagari commented Sep 23, 2022

Question

I am using the streaming convnets models for online speech recognition on a CPU. The first response from the ASR arrives around 1.5-2.5 seconds after the client starts streaming audio data, but subsequent responses are faster. Could you please share your thoughts on why this happens, and any suggestions for reducing the initial response time?

I am using the models from the following page:
https://github.com/flashlight/wav2letter/tree/main/recipes/streaming_convnets/librispeech
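
For reference, this is roughly how I measure the time to the first hypothesis. It is a minimal sketch: `FakeStreamingASR` and `time_to_first_hypothesis` are placeholders standing in for my client code, not the actual wav2letter API (the streaming_convnets inference pipeline itself is C++).

```python
import time

class FakeStreamingASR:
    """Stand-in for a streaming ASR client; simulates a slow first
    hypothesis followed by fast incremental ones."""
    def __init__(self, warmup_chunks=8):
        self._seen = 0
        self._warmup = warmup_chunks

    def feed(self, chunk):
        self._seen += 1
        # No output until enough audio context has been consumed.
        return "" if self._seen < self._warmup else "partial hypothesis"

def time_to_first_hypothesis(client, chunks, chunk_interval_s=0.1):
    """Seconds from the first chunk sent until the first non-empty
    partial hypothesis comes back."""
    start = time.monotonic()
    for chunk in chunks:
        if client.feed(chunk).strip():
            return time.monotonic() - start
        time.sleep(chunk_interval_s)  # pace chunks like a live stream
    return None  # stream ended with no hypothesis

if __name__ == "__main__":
    chunks = [b"\x00" * 3200] * 50  # 100 ms of 16 kHz 16-bit audio each
    latency = time_to_first_hypothesis(FakeStreamingASR(), chunks)
    print(f"time to first hypothesis: {latency:.2f} s")
```

With the real pipeline, the delay shows up on the very first stream only; later responses on the same stream come back quickly.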

@vchagari added the `question` (Further information is requested) label Sep 23, 2022
@vchagari vchagari changed the title [Streaming Convnets Online Inference is taking initially around 2 seconds to get the hypothesis text] [Streaming Convnets Online Inference is taking initially around 2 seconds to give the hypothesis text] Sep 23, 2022