Text-to-speech (inference) on GPU #577
Unanswered
kormoczi asked this question in General Q&A
Replies: 1 comment
-
I am still testing, but it looks like I have finally found a good config:
-
Hi,
Can anybody tell me which TTS version and which CUDA/cuDNN/Python/torch versions I should use to run text-to-speech inference (no interest in training yet) on GPU?
I have tried `tts-server --use_cuda True` as well as `synthesize.py --use_cuda True`, with different CUDA/torch versions, but I get the following error:
`RuntimeError: CUDA error: no kernel image is available for execution on the device`
Thanks for all the help!
Best regards,
Csaba
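A common cause of this error is a torch build that does not ship kernels for the GPU's compute capability (e.g. an older card against wheels compiled only for newer architectures). A minimal sketch of that check, using a hypothetical helper `arch_supported` (the real values come from `torch.cuda.get_device_capability(0)` and `torch.cuda.get_arch_list()`):

```python
def arch_supported(capability, arch_list):
    """Check whether a torch build's kernel arch list covers a GPU.

    capability: (major, minor) tuple, as returned by
                torch.cuda.get_device_capability(0)
    arch_list:  list like ['sm_70', 'sm_75', ...], as returned by
                torch.cuda.get_arch_list()
    """
    major, minor = capability
    return f"sm_{major}{minor}" in arch_list


# Example: a GTX 1080 (sm_61) against a build compiled only for newer GPUs.
# If this returns False, the "no kernel image" RuntimeError is expected,
# and a torch build (or source compile) targeting that arch is needed.
print(arch_supported((6, 1), ["sm_70", "sm_75", "sm_80"]))
```

If the check fails, installing a torch wheel built for the card's architecture (or building torch from source with the right `TORCH_CUDA_ARCH_LIST`) is the usual fix.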