vchagari changed the title to "Streaming Convnets Online Inference is taking initially around 2 seconds to give the hypothesis text" on Sep 23, 2022
Question
I am using the streaming convnets models for online speech recognition on a CPU. The first response from the ASR arrives around 1.5-2.5 seconds after the client starts streaming audio data, but subsequent responses are much faster. Could you please share your thoughts on why this is happening and your suggestions for reducing the initial response time?
I am using the models from the following page:
https://github.com/flashlight/wav2letter/tree/main/recipes/streaming_convnets/librispeech
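For reference, this is roughly how I measure the time-to-first-hypothesis figure quoted above: I start a timer when the first audio chunk is sent and stop it when the first non-empty partial hypothesis comes back. The sketch below is a minimal, self-contained illustration of that measurement; `StreamingAsrClient`, `sendChunk`, and `pollHypothesis` are placeholder names standing in for whatever wrapper sits around the wav2letter streaming inference code, not actual wav2letter APIs.

```cpp
// Minimal client-side sketch for measuring time to first partial hypothesis.
// StreamingAsrClient and its methods are placeholders (stubbed here so the
// sketch compiles standalone); a real client would talk to the streaming
// convnets inference pipeline.
#include <chrono>
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

struct StreamingAsrClient {
  // Placeholder: push one chunk of 16 kHz mono PCM audio to the recognizer.
  void sendChunk(const std::vector<int16_t>& samples) {
    (void)samples;
    ++chunksSent_;
  }
  // Placeholder: return the latest partial hypothesis, or "" if none yet.
  std::string pollHypothesis() const {
    // Stub behavior so the example runs; a real client returns whatever
    // the streaming decoder has emitted so far.
    return chunksSent_ >= 20 ? "partial hypothesis" : "";
  }

 private:
  int chunksSent_ = 0;
};

int main() {
  StreamingAsrClient client;
  const std::size_t kChunkSamples = 16000 / 10;   // 100 ms chunks at 16 kHz
  std::vector<int16_t> chunk(kChunkSamples, 0);   // stand-in for mic audio

  const auto start = std::chrono::steady_clock::now();
  while (true) {
    client.sendChunk(chunk);
    if (!client.pollHypothesis().empty()) {
      const auto elapsedMs =
          std::chrono::duration_cast<std::chrono::milliseconds>(
              std::chrono::steady_clock::now() - start)
              .count();
      std::cout << "time to first hypothesis: " << elapsedMs << " ms\n";
      break;
    }
  }
  return 0;
}
```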