
Question on the speed #214

Open

snapo opened this issue Dec 8, 2022 · 5 comments

Comments


snapo commented Dec 8, 2022

How long did it take to render the red riding hood sample on a single RTX 3090 at each quality preset?
I am thinking about buying an RTX card, but they are a little expensive, and I would like to know whether the 24 GB VRAM requirement is a hard one.
Or can I go with a smaller card like a 1080 Ti or 2080 Ti?

Additionally, did you try using Nebullvm to accelerate the model (for Linux users)?
Or, even better for Windows users, adding support for Microsoft's DirectML so all GPUs can use it? (That would be a speed-up for Windows users with AMD/Intel/Nvidia GPUs, and DirectML supports PyTorch.)

Thanks for the answer... (questions from a poor AMD card user :-) ....)
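
For reference, DirectML's PyTorch support is exposed through the separate `torch-directml` package. A minimal sketch of what device selection could look like under that assumption; Tortoise itself does not currently wire this up:

```python
# Hedged sketch: routing PyTorch tensors through DirectML on Windows.
# Assumes the optional `torch-directml` package is installed; Tortoise
# does not support this out of the box.
import torch
import torch_directml

device = torch_directml.device()  # default DirectML adapter (AMD/Intel/Nvidia)

x = torch.randn(4, 4).to(device)
y = torch.randn(4, 4).to(device)
print((x @ y).cpu())  # the matmul runs on the DirectML device
```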

@NikitaKononov

You can run inference even on CPU.
I've tested the model with 6 GB of VRAM and it works (it uses a dynamic batch size depending on the amount of VRAM).
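
A minimal sketch of what VRAM-dependent batch sizing can look like; the helper and thresholds below are illustrative assumptions, not the repository's actual heuristic:

```python
# Hedged sketch: choosing an autoregressive batch size from available VRAM.
# The thresholds are illustrative; Tortoise's real heuristic may differ.
import torch

def pick_batch_size(default: int = 16) -> int:
    if not torch.cuda.is_available():
        return 1  # CPU fallback: keep memory use minimal
    vram_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    if vram_gb < 8:
        return 4   # e.g. a 6 GB card
    if vram_gb < 16:
        return 8   # e.g. a 1080 Ti / 2080 Ti class card
    return default  # 16+ GB cards (e.g. an RTX 3090) can use the full batch

print(f"autoregressive batch size: {pick_batch_size()}")
```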

snapo (Author) commented Dec 16, 2022

I don't say it doesn't work... the problem I have is that a text of 40 words takes approx. 6 hours :-) That's why I am asking. I have now ordered a 3080; maybe that speeds things up.
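
One lever for long render times is the quality preset, which trades fidelity for speed. A minimal sketch using the package's `tts_with_preset` API, assuming `tortoise-tts` is installed and its bundled 'tom' voice is available:

```python
# Hedged sketch: using a faster quality preset to cut render time.
# Assumes `tortoise-tts` is installed and the bundled 'tom' voice exists.
import torchaudio
from tortoise.api import TextToSpeech
from tortoise.utils.audio import load_voice

tts = TextToSpeech()
voice_samples, conditioning_latents = load_voice('tom')

# 'ultra_fast' is the quickest preset; 'standard' and 'high_quality' are slower.
audio = tts.tts_with_preset(
    "A forty-word test sentence goes here.",
    voice_samples=voice_samples,
    conditioning_latents=conditioning_latents,
    preset='ultra_fast',
)
torchaudio.save('out.wav', audio.squeeze(0).cpu(), 24000)
```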

@vatsalaggarwal

Hey, I built something that solves this problem - it's an at-cost, per-second, open-sourced API on top of Tortoise...

More details here: https://twitter.com/vatsal_aggarwal/status/1612536547248836608?s=20

You can use it at https://tts.themetavoice.xyz

@NathanJGaul

Also check out this fork: https://github.com/152334H/tortoise-tts-fast

@cryolite-ai


Though you'll want to be aware that the fork changes the license away from Apache...
