gpu decoding #2
Sorry for the late response. I was on vacation and then forgot about this message. Personally, I don't see a reason to do realtime decoding on the GPU. If you were decoding offline data, you could possibly get a speedup by processing a file faster than realtime. With online recognition (e.g. from the microphone) you can't really go faster than realtime, so the GPU would be severely underutilized. In fact, you can easily cram more than one conversation onto a single CPU core and it will work fine. Am I missing anything?
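The capacity argument above (more than one realtime conversation per CPU core) can be sketched with a back-of-the-envelope calculation. The real-time factor values below are hypothetical, purely for illustration:

```python
import math

def max_realtime_streams(num_cores: int, rtf: float) -> int:
    """Estimate how many realtime audio streams a CPU can decode concurrently.

    rtf is the real-time factor: CPU-seconds needed to decode one second of
    audio. An rtf below 1.0 means one core can serve more than one stream.
    """
    if rtf <= 0:
        raise ValueError("real-time factor must be positive")
    return math.floor(num_cores / rtf)

# Example: a hypothetical decoder with rtf = 0.5 on a 4-core machine
print(max_realtime_streams(4, 0.5))   # 8 concurrent streams
print(max_realtime_streams(4, 1.25))  # 3: decoding is slower than realtime
```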
My idea is that the GPU has more compute capacity and can decode more audio streams in realtime. Best regards
That's an interesting hypothesis, but I'm not sure we're there yet. The current cudadecoder exists only in the "batched" version. I don't think anyone has tried to make one that would process many realtime streams on a single GPU. As for the port limitation, I think 6000 threads in parallel is plenty for one server, or am I mistaken?
I don't think 6000 concurrent streams are possible; concurrency is usually limited by the number of CPU cores, so 4 cores means 4 concurrent streams. A GPU can increase concurrency, but only a batched version exists for now.
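To illustrate what a "batched" decoder means in this context, here is a minimal, hypothetical sketch (not Kaldi's actual cudadecoder API): pending chunks from many concurrent streams are gathered into one batch so that a single GPU call is amortized across all of them, which is where GPU throughput comes from.

```python
from collections import deque

def batched_decode_step(streams, batch_size, gpu_forward):
    """Collect one pending chunk from up to batch_size streams and run them
    through the GPU in a single batched call.

    streams:     dict mapping stream id -> deque of pending audio chunks
    gpu_forward: callable taking a list of chunks and returning a list of
                 results (stands in for one batched GPU decoding call)
    """
    batch_ids, batch_chunks = [], []
    for sid, pending in streams.items():
        if pending and len(batch_ids) < batch_size:
            batch_ids.append(sid)
            batch_chunks.append(pending.popleft())
    if not batch_chunks:
        return {}
    results = gpu_forward(batch_chunks)  # one call serves many streams
    return dict(zip(batch_ids, results))

# Toy usage: a fake "GPU" that just uppercases each chunk
streams = {1: deque(["hello"]), 2: deque(["world"]), 3: deque([])}
out = batched_decode_step(streams, batch_size=8,
                          gpu_forward=lambda xs: [x.upper() for x in xs])
print(out)  # {1: 'HELLO', 2: 'WORLD'}
```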
Hi
Now that Kaldi supports GPU decoding (kaldi-asr/kaldi#3114), is it possible to configure the TCP port to use it?
What is your opinion?
How can I change the TCP server to use GPU decoding?
Best regards