[Feature request] Can we add the batch inference or batch decoding for XTTS #3776
Labels: feature request (feature requests for making TTS better)

Comments
> I face the same problem when running inference with a batch size. Did you solve it?

> @Onkarsus13 Could you implement batched inference successfully?

> Yes Rakshith, I was able to implement it.
I tried batch inference in XTTS: I pad each text sequence to the maximum length in the batch and add an attention mask for the padding. But for the shorter sequences, I get random noise at the end of the generated audio.

It would be helpful to have this feature in Coqui TTS.
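The padding-plus-mask setup described above can be sketched as below. This is a minimal, framework-agnostic illustration, not XTTS's actual API: `pad_batch` and `trim_outputs` are hypothetical helper names, and in a real integration the valid output lengths would come from the model's per-item stop indices rather than being supplied by hand. The trailing-noise symptom usually means the audio generated for the padded region is kept; trimming each output to its valid length is one way to drop it.

```python
def pad_batch(token_seqs, pad_id=0):
    # Pad every token sequence to the batch max and build a boolean
    # attention mask: True marks real tokens, False marks padding that
    # the model should ignore.
    lengths = [len(s) for s in token_seqs]
    max_len = max(lengths)
    padded = [s + [pad_id] * (max_len - len(s)) for s in token_seqs]
    mask = [[True] * n + [False] * (max_len - n) for n in lengths]
    return padded, mask, lengths

def trim_outputs(waveforms, valid_samples):
    # Drop the tail produced for the padded region of each item.
    # `valid_samples` is hypothetical here; in practice it would be
    # derived from where each item in the batch actually stopped.
    return [w[:n] for w, n in zip(waveforms, valid_samples)]
```

Even with a correct attention mask on the text side, the decoder can still emit frames past a short item's natural end, which is consistent with the noise reported above; per-item trimming (or stopping each item at its stop token) is the usual fix.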