Question on the speed #214
Comments
You can run inference even on CPU.
I'm not saying it doesn't work... The problem I have is that a text with 40 words takes approx. 6 hours :-) That's why I'm asking. I've now ordered a 3080; maybe that speeds things up.
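One common cause of multi-hour renders is PyTorch silently falling back to CPU when no CUDA device is visible. A minimal sketch (the function name here is illustrative, not part of Tortoise) for checking which device inference will actually use:

```python
# Sketch: detect whether a CUDA-capable GPU is visible to PyTorch.
# Tortoise (like most PyTorch projects) runs on CPU when CUDA is
# unavailable, which can turn a 40-word render into hours.
def pick_device() -> str:
    """Return 'cuda' if a CUDA GPU is visible to PyTorch, else 'cpu'."""
    try:
        import torch
        if torch.cuda.is_available():
            return "cuda"
    except ImportError:
        pass  # PyTorch not installed; CPU-only environment
    return "cpu"

device = pick_device()
print(f"Inference device: {device}")
# A model is then moved with model.to(device) before generation.
```

If this prints `cpu` on a machine that has an NVIDIA card, the usual culprit is a CPU-only PyTorch build rather than the hardware itself.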
Hey, I built something that solves this problem: an at-cost, per-second, open-sourced API on top of Tortoise. More details here: https://twitter.com/vatsal_aggarwal/status/1612536547248836608?s=20 You can use it at https://tts.themetavoice.xyz
Also check out this fork: https://github.com/152334H/tortoise-tts-fast |
Though you'll want to be aware that the fork changes the license away from Apache.
How long did it take to render the Red Riding Hood sample on a single RTX 3090 at each quality setting?
I am thinking about buying an RTX card, but they are a little expensive, and I would like to know whether the 24 GB of VRAM is actually required.
Or can I go with a smaller card like a 1080 Ti or 2080 Ti?
Additionally, did you try using Nebullvm to accelerate the model (for Linux users)?
Or, even better for Windows users, adding support for Microsoft's DirectML so all GPUs can use it? (That would be a speed-up for Windows users with AMD/Intel/NVIDIA GPUs; DirectML supports PyTorch.)
Thanks for the answer... (questions from a poor AMD card user :-) ...)
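For context, DirectML support for PyTorch is shipped as the optional `torch-directml` package on Windows. A hedged sketch of how a DirectML device would be selected, falling back to CPU when the package isn't installed (the function name is illustrative; Tortoise itself does not ship this helper):

```python
# Sketch: pick a DirectML device so AMD/Intel/NVIDIA GPUs on Windows
# can accelerate PyTorch. Requires the optional `torch-directml`
# package (pip install torch-directml); falls back to CPU otherwise.
def pick_dml_device():
    try:
        import torch_directml
        return torch_directml.device()  # default DirectML-capable GPU
    except ImportError:
        return "cpu"  # a plain string also works with .to()

device = pick_dml_device()
# Tensors and models are then moved with .to(device), as with CUDA.
```

Wiring this into Tortoise would still require replacing its hard-coded CUDA device selection, so it is a starting point rather than a drop-in fix.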